title: string
paper_decision: string
review_1: string
rebuttals_1: string
review_2: string
rebuttals_2: string
review_3: string
rebuttals_3: string
review_4: string
rebuttals_4: string
global_rebuttals: string
dataset_source: string
conference_year: int64
review_5: string
rebuttals_5: string
review_6: string
rebuttals_6: string
review_7: string
rebuttals_7: string
review_8: string
rebuttals_8: string
GraphMorph: Tubular Structure Extraction by Morphing Predicted Graphs
Accept (poster)
Summary: The paper presents a technique (called GraphMorph) for extracting tubular patterns as found, for instance, in retinal images (veins) and aerial images (roads). The proposed pipeline is made of several modules: a segmentation network providing centerline probability and features, a graph decoder for detecting nodes and connectivity, and a morph module to get centerline masks. Experimental results demonstrate the efficiency of the proposed pipeline against state-of-the-art methods on several benchmarks. Strengths: The paper is well written and clear. Experimental results against SOTA are convincing. Weaknesses: Some hyperparameters (e.g. thresholds) have to be set. The methodology for setting these parameters is not always clear, e.g. are the experiments for setting $p_{thresh}$ (lines 560-563) done on test sets? Minor remarks: - Fig 2 "Modified Deformbale DETR" -> Deformable - Figure 3 caption: "Inferece" Technical Quality: 3 Clarity: 3 Questions for Authors: Are the experiments for setting $p_{thresh}$ (lines 560-563) done on test sets? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are addressed in the paper (e.g. in appendix I line 586, and in appendix H lines 576-578). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive suggestions! We will correct the typos you raised in the revised version. Please find our reply to your questions below. **Q1:** Some hyperparameters (e.g. thresholds) have to be set. The methodology for setting these parameters is not always clear, e.g. are the experiments for setting $p_{thresh}$ (lines 560-563) done on test sets? **A1:** Thanks for the suggestions. Parameters like $\lambda_{\text{class}}=0.2$, $\lambda_{\text{coord}}=0.5$, $\alpha=0.6$, and $\gamma=2$ in the loss function (line 462) are adopted from established settings in previous works, such as Deformable DETR [1]. We adapted $\alpha=0.75$ specifically for the MassRoad dataset to better handle the sparse nature of road networks. Parameters like the learning rate and weight decay were set in line with those used in PointScatter [2] to ensure a fair comparison. For $p_{thresh}$, we empirically set it to 0.5 for all experiments on the four used datasets. In Table 9 of the Appendix, we compared it to other thresholds on the test set of the DRIVE dataset and found that values close to 0.5 yielded better results. References: [1] Xizhou Zhu, et al. "Deformable DETR: Deformable transformers for end-to-end object detection." arXiv preprint arXiv:2010.04159, 2020. [2] Dong Wang, et al. "PointScatter: Point set representation for tubular structure extraction." European Conference on Computer Vision, pages 366–383. Springer, 2022. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: Thanks for clarifying this. I am happy with the rebuttal answers.
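For context, the $\alpha=0.6$, $\gamma=2$ values discussed in this rebuttal are the usual focal-loss parameters. The sketch below is an assumption, not the paper's actual loss: it implements a standard binary focal loss (in the Deformable DETR lineage) purely to illustrate what $\alpha$ and $\gamma$ control.

```python
import numpy as np

def focal_loss(p, target, alpha=0.6, gamma=2.0):
    """Binary focal loss sketch (not the paper's exact loss).

    alpha weights the positive class; gamma down-weights easy,
    well-classified examples.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(target == 1, p, 1 - p)        # prob of the true class
    w = np.where(target == 1, alpha, 1 - alpha) # class weighting
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

# A confident correct prediction is penalized far less than a confident mistake:
good = focal_loss(np.array([0.9]), np.array([1]))
bad = focal_loss(np.array([0.1]), np.array([1]))
assert bad > good
```

Raising $\alpha$ toward 1 (as the authors did with $\alpha=0.75$ for MassRoad) shifts weight onto the sparse positive class.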
Summary: In this work, the authors tackle curvilinear image segmentation. In particular, they move away from pixel-level prediction, which has limitations when predicting thin structures. They propose GraphMorph, which predicts the locations of the endpoints of each branch and finds the optimal path between them. In this way, they are able to reduce both False Positive and False Negative errors. Their method can be applied to any segmentation backbone. Strengths: - Their method is able to handle both FP and FN errors, whereas most existing methods in the literature tend to tackle only FN errors. - The authors compare against adequate baselines and have improvements over them. Weaknesses: - For the performance mentioned in the tables, the authors should provide standard deviations to understand whether their method is statistically significant or not. The authors should conduct a t-test [1] to confirm this. - The authors should provide the computational complexity of their method with respect to the baselines. This should cover the training and inference time of their method. I suspect the algorithm is slow due to iterating over each edge. **References** [1] Student, 1908. The probable error of a mean. Biometrika, pp. 1–25. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses section Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately mentioned the limitations of their method, specifically the small ROI which can miss spatial context necessary for good performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We will answer your questions below. **Q1:** For the performance mentioned in the tables, the authors should provide standard deviation to understand whether their method is statistically significant or not. The authors should conduct t-test to confirm this. **A1:** Thanks for your suggestion. We have provided error bars in our rebuttal pdf (detailed in Table 1). As can be seen from the results, most improvements of our method over baselines are significant. We will definitely include error bars for all experimental results in the revised version of the paper. Regarding the t-test, we demonstrate the comparison of segmentation performance on the PARSE and ISBI12 datasets based on UNet, and the results are shown in the following table. PARSE is a 3D pulmonary artery dataset, and we show the results of GraphMorph's experiments on it in the rebuttal pdf. | Dataset | Metric | t-Statistic | p-Value | | ------- | --------------- | ----------- | ------- | | PARSE | Dice | 2.0749 | 0.0386 | | | clDice | 3.9496 | <0.0001 | | | $\beta_0$ error | -12.3858 | <0.0001 | | | $\beta_1$ error | -4.1148 | <0.0001 | | | $\chi$ error | -11.1030 | <0.0001 | | ISBI12 | Dice | 2.6297 | 0.0137 | | | clDice | 2.9060 | 0.0071 | | | $\beta_0$ error | -7.1229 | <0.0001 | | | $\beta_1$ error | -2.9271 | 0.0035 | | | $\chi$ error | -7.4095 | <0.0001 | The statistical results from our t-tests clearly demonstrate that GraphMorph significantly improves segmentation performance on both 2D and 3D datasets, as evidenced by the significant p-values across all metrics for the PARSE and ISBI12 datasets. Notably, while improvements in volumetric scores such as Dice and clDice are significant, the most striking results are observed in the topological metrics ($\beta_0$ error, $\beta_1$ error, and $\chi$ error), where our method consistently demonstrates substantial enhancements. 
These results underline the capability of GraphMorph to effectively leverage branch-level features to optimize the extraction of tubular structures' topology. **Q2:** The authors should provide the computational complexity of their method with respect to the baselines. This should cover the training and inference time of their method. I suspect the algorithm is slow due to iterating over each edge. **A2:** We appreciate the request for detailed computational complexity comparisons. Below is a table that outlines the resource utilization during the training process on the DRIVE dataset with a UNet backbone. | Method | Params | FLOPs | Time per iteration/s | GPU Memory | | ------------- | ------ | ----- | -------------------- | ---------- | | SoftDice | 39M | 187G | 0.203 | 5.4 GB | | SoftDice+Ours | 48M | 268G | 0.589 | 11.8 GB | The increase in parameters and FLOPs in our approach primarily stems from the integration of the Graph Decoder featuring a DETR module. This component is crucial for predicting accurate topological structures of the graphs. Advancements in transformer architectures that reduce computational overhead could potentially enhance the efficiency of our model during training. Implementations such as Lite-DETR [1], noted for their efficiency in handling transformers, could serve as alternatives to the current DETR module, potentially reducing training time and computational resource usage. As detailed in Appendix F, the inference time analysis is conducted in Table 5. The results show that the Morph Module is the most time-consuming part of the inference process. However, the iterative execution over each edge using the SkeletonDijkstra algorithm is not the primary contributor to this time consumption. This is because the algorithm operates on much smaller patches. For example, a $384 \times 384$ image is divided into numerous $32 \times 32$ patches.
Due to the limited size, running SkeletonDijkstra on one patch is highly efficient (about 0.01s per patch). The primary time expenditure currently arises from processing these patches sequentially. However, as each patch operates independently, there is significant potential to enhance efficiency by parallelizing the computations across all patches. Recognizing this opportunity, we plan to focus future work on optimizing the Morph Module by implementing parallel processing techniques to accelerate inference. Additionally, your question has inspired us to explore the parallelization of edge operations within each patch, which may further reduce the time cost. We intend to detail these improvements in the revised version of our paper and believe this will substantially elevate the operational efficiency of our model. References: [1] Li, Feng, et al. "Lite DETR: An interleaved multi-scale encoder for efficient DETR." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. As it answers my questions, and the performance on the 3D PARSE dataset is adequate, I will increase my final rating. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you very much for your feedback and for the decision to increase the rating. We are pleased that our rebuttal addressed your concerns and that the performance on the 3D PARSE dataset received your recognition. We will ensure that your suggestions are incorporated in our revised version. --- Reply to Comment 1.1.2: Title: Looking forward to your further feedback! Comment: Dear Reviewer 4DfY, Thank you once again for your efforts in reviewing our paper. We sincerely appreciate your re-evaluation and the decision to increase the final rating. However, we noticed that the final score remains at 5, suggesting there might still be unresolved concerns regarding our work.
As the deadline for revisions is quickly approaching, we would be grateful if you could share any remaining issues or feedback that could help us enhance the quality of our paper and align it more closely with your expectations. We look forward to your valuable feedback! Paper 13902 Authors
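The significance testing discussed in this thread can be reproduced in a few lines. The sketch below uses synthetic per-image Dice scores (the real per-image values are not given in the thread) and assumes a paired test, which is appropriate when the same test images are scored under both methods:

```python
import numpy as np
from scipy import stats

# Hypothetical per-image Dice scores for a baseline and baseline+GraphMorph;
# these numbers are made up purely to demonstrate the test.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.80, scale=0.02, size=30)
with_morph = baseline + rng.normal(loc=0.01, scale=0.005, size=30)

# Paired t-test: each test image contributes one (baseline, improved) pair.
t_stat, p_value = stats.ttest_rel(with_morph, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")
assert p_value < 0.05  # with these synthetic numbers the gain is significant
```

For error metrics such as $\beta_0$ error, where lower is better, the t-statistic would come out negative, matching the sign pattern in the table above.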
Summary: This paper proposes a method to extract tubular structures from images. Specifically, the authors have integrated graph extraction into the segmentation architecture. They have proposed a link prediction part to predict graph connectivity in tandem with the segmentation network. They incorporate the predicted graph into a segmentation branch via a morphing module during inference. Strengths: 1. The method is well-motivated, and the paper is well-written. 2. The link prediction idea is technically sound for computationally efficient connectivity prediction. 3. The morph module, which integrates the predicted graph in the segmentation, is novel. It seems effective, as demonstrated in Table 1 and Figure 6. 4. The proposed method seems to improve segmentation results across different metrics over multiple datasets. Weaknesses: 1. Is the threshold in the SkeletonDijkstra algorithm dataset-dependent? 2. Is the segmentation network trained simultaneously with the graph decoder or one before the other? 3. Are there any metrics computed on the extracted centerlines? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. See weaknesses 2. I was wondering whether the morph module could also be used during training to correct some segmentation errors. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! Please see our answers for your questions below. **Q1:** Is the threshold in the SkeletonDijkstra algorithm dataset-dependent? **A1:** $p_{thresh}$ is a hyper-parameter which is dataset-independent. It is set to 0.5 as the default setting across all experiments. We will add this setting to the "Implementation Details" in the revised version. **Q2:** Is the segmentation network trained simultaneously with the graph decoder or one before the other? **A2:** The segmentation network and the graph decoder are trained simultaneously to optimize both components effectively and enhance overall training efficiency. This joint training approach allows the segmentation network to leverage the branch-level features provided by the graph decoder, leading to more coherent and accurate predictions of tubular structures. **Q3:** Are there any metrics computed on the extracted centerlines? **A3:** In addressing the metrics for evaluating the quality of extracted centerlines, Table 1 in our main paper presents a detailed breakdown of the metrics. These metrics are calculated by comparing the centerline masks predicted by our model with the ground truth masks, detailed below: 1. **Volumetric Scores:** The first three columns of Table 1 focus on volumetric scores which assess different aspects of quality of extracted centerlines: - *Dice* evaluates the overlap between the predicted centerline and the ground truth. A higher Dice score indicates better alignment between the predicted centerline and its corresponding label. - *AUC* and *ACC* provide a comprehensive evaluation of the model’s ability to classify pixels correctly, indicating the model's performance in distinguishing between the centerline of tubular structures and the background. 2. 
**Topological Metrics:** The last three columns of Table 1 introduce topological metrics, which indicate how accurately the topology is represented: - *$\beta_0$ error* measures the error in the number of connected components of the extracted centerline. Predicted broken or redundant vessels will cause this metric to rise. - *$\beta_1$ error* assesses the accuracy in the number of loops or cycles within the centerline, which is effective for evaluating complex vascular topology. - *Euler characteristic error ($\chi$ error)* combines the first two topological aspects, providing a holistic view of the topological accuracy. Metrics like Dice, ACC, $\beta_0$ error, $\beta_1$ error, and $\chi$ error are also utilized in Table 2 and Table 3 of our main paper, which evaluate the performance of our model on the segmentation task. By using shared metrics across different contexts, we establish a consistent framework for evaluating our approach across both tasks. **Q4:** I was wondering whether the morph module could also be used during training to correct some segmentation errors. **A4:** Currently, the morph module is designed to operate during the inference phase and does not propagate gradients, which precludes its direct use in training the segmentation network. However, the patterns of connectivity it identifies have significant potential to refine the training process. Recognizing this potential, modifying the morph module to enhance error correction during training is promising. One feasible approach could involve developing a differentiable version of the morph module that can be integrated into the backpropagation process. Another is to enhance the training process by assigning increased weights to path points identified by the morph module as crucial for connectivity. Such developments could enhance the robustness and accuracy of segmentation networks, and we plan to pursue this direction in our future work.
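This thread turns on how SkeletonDijkstra traces paths over the centerline probability map with $p_{thresh}=0.5$. The details of the authors' algorithm are not given here, so the following is a generic shortest-path sketch over a small probability patch (masking pixels below the threshold and using $-\log p$ step costs so the path prefers high-probability pixels), not the paper's SkeletonDijkstra:

```python
import heapq
import math
import numpy as np

def trace_path(prob, start, end, p_thresh=0.5):
    """Dijkstra sketch between two node pixels on a probability patch.

    Pixels below p_thresh are untraversable; step cost is -log(p), so
    higher-probability centerline pixels are cheaper. Assumes end is
    reachable from start through above-threshold pixels.
    """
    h, w = prob.shape
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), math.inf):
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < h and 0 <= nc < w):
                    continue
                if prob[nr, nc] < p_thresh:
                    continue
                nd = d - math.log(prob[nr, nc])
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

patch = np.full((4, 4), 0.1)
patch[1, :] = 0.9  # a horizontal high-probability centerline
print(trace_path(patch, (1, 0), (1, 3)))  # [(1, 0), (1, 1), (1, 2), (1, 3)]
```

Because each patch is small (e.g. $32 \times 32$), a search like this is cheap per edge, consistent with the timing discussion elsewhere in the reviews.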
Summary: This paper presents a method called GraphMorph for tubular structure extraction, aiming at more topologically accurate predictions. GraphMorph consists of a Graph Decoder and a Morph Module: 1) the Graph Decoder generates/decodes/learns a graph structure from the output of the segmentation network and the segmentation image features; 2) the Morph Module then processes the graph provided by the Graph Decoder and the centerline probability map given by the segmentation network, employing a novel SkeletonDijkstra algorithm to output the final centerline mask. Various popular datasets are used for empirical validation. Strengths: 1, The paper is well written with good clarity. 2, The method proposed is overall unsurprising but logical (though definitely not a breakthrough). Weaknesses: My main questions are more related to the overall technical contributions to this line of work: a) The datasets are pretty small and have been extensively tested in the past. There is an over-fitting concern on these small and widely used datasets, in analogy to the "peek too much" bias. b) All examples are primarily 2D patches. From the qualitative examples shown in the paper, they look more like "toy examples". Does the method work on full 3D CT scans, like extracting lung airway trees? c) The numerical improvements reported in the paper are overall marginal. The authors make claims as follows: "Broader Impacts: The GraphMorph framework significantly enhances medical diagnostics by improving the accuracy of tubular structure extraction, such as blood vessels, which is crucial in AI-assisted medical image analysis. These advancements can lead to more precise diagnostics, potentially reducing unnecessary medical procedures and costs. Furthermore, if the capability for real-time analysis is further enhanced, it could significantly impact emergency medical responses by accelerating decision-making and improving patient outcomes.
Ultimately, our research not only advances technical knowledge in medical image analysis, but also holds potential for significant positive impacts in healthcare efficiency and patient care." These claims are significantly over-claimed. The examples shown here are very much toy examples on small image patches. In real-world healthcare diagnosis, robust 3D tubular structures like vessels of the heart and lung, or airway trees, need to be precisely extracted at a certain depth from 3D CT scans. Topology-constrained tubular structures are parsed in the medical image analysis literature (there are many previous works on vessel extraction and lung airway extraction). The method proposed here does not seem to work directly in such real applications. So the authors need to be very careful about making such claims. Technical Quality: 3 Clarity: 3 Questions for Authors: 1, The SkeletonDijkstra algorithm is proposed by the authors alone; any reference? 2, My main questions are more related to the overall technical contributions to this line of work: a) The datasets are pretty small and have been extensively tested in the past. There is an over-fitting concern on these small and widely used datasets, in analogy to the "peek too much" bias. b) All examples are primarily 2D patches. From the qualitative examples shown in the paper, they look more like "toy examples". Does the method work on full 3D CT scans, like extracting lung airway trees? c) The numerical improvements reported in the paper are overall marginal. 3, The authors make claims as follows: "Broader Impacts: The GraphMorph framework significantly enhances medical diagnostics by improving the accuracy of tubular structure extraction, such as blood vessels, which is crucial in AI-assisted medical image analysis. These advancements can lead to more precise diagnostics, potentially reducing unnecessary medical procedures and costs.
Furthermore, if the capability for real-time analysis is further enhanced, it could significantly impact emergency medical responses by accelerating decision-making and improving patient outcomes. Ultimately, our research not only advances technical knowledge in medical image analysis, but also holds potential for significant positive impacts in healthcare efficiency and patient care." These claims are significantly over-claimed. The examples shown here are very much toy examples on small image patches. In real-world healthcare diagnosis, robust 3D tubular structures like vessels of the heart and lung, or airway trees, need to be precisely extracted at a certain depth from 3D CT scans. Topology-constrained tubular structures are parsed in the medical image analysis literature (there are many previous works on vessel extraction and lung airway extraction). The method proposed here does not seem to work directly in such real applications. So the authors need to be very careful about making such claims. 4, The results here are far from clinically useful. On smaller datasets for evaluation, error bars or std should also be reported. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: as weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We will respond to your concerns below. **Q1:** The SkeletonDijkstra algorithm is proposed by the authors alone; any reference? **A1:** The SkeletonDijkstra algorithm proposed in our paper represents an innovative adaptation of the classic Dijkstra's algorithm, specifically tailored to the unique challenges of tracing centerlines in tubular structures. This adaptation was necessary to handle the specific topology and morphology of tubular structures that are not addressed by traditional shortest-path algorithms. While we have no external references for this particular adaptation, our extensive experiments in the paper confirm its efficacy and accuracy. **Q2:** The datasets are pretty small and have been extensively tested in the past. **A2:** Thanks for the comments. In medical imaging, especially for tubular structure analysis, large-scale public datasets are scarce due to privacy concerns and data collection complexities. The datasets we selected, while smaller relative to those used in broader computer vision tasks, are established benchmarks within the medical imaging community. Despite their frequent use in previous studies, these datasets still present unresolved challenges [1] [2], especially frequent topological errors, which stem from the under-utilisation of branch-level features of tubular structures. Our approach mitigates this problem through the efficient exploitation of branch-level features. To demonstrate the broader applicability of our approach, we have conducted experiments on the Massachusetts Roads dataset with 1171 aerial images in the paper. Moreover, we provided further validations with 3D CT scans in our rebuttal pdf. These results not only demonstrate the method's effectiveness but also its potential in more clinically relevant, real-world settings. **Q3:** All examples are primarily 2D patches.
The effectiveness of the method on 3D data needs to be validated. **A3:** We recognize the need to validate our method, GraphMorph, on 3D datasets for its clinical relevance. Thus, we have extended GraphMorph to use the 3D UNet architecture and tested it on the pulmonary arterial vascular segmentation dataset from the PARSE challenge [3], which includes 100 annotated 3D CT scans. These cases were divided in a 7:1:2 ratio for training, validation, and testing. The preliminary results, as detailed in Table 2 of our rebuttal pdf, show that our method consistently outperforms existing baselines across all metrics, mirroring the success we observed with 2D data. This alignment between 2D and 3D results not only underlines the effectiveness of our method but also its adaptability to the 3D vessel segmentation task. We plan to apply our method to more diverse 3D medical datasets to further validate its effectiveness. **Q4:** The numerical improvements reported in the paper are overall marginal. **A4:** Thanks for the comments. Our method, GraphMorph, is specifically designed to enhance the accuracy of topological features in tubular structure extraction, which is a critical aspect often overlooked in most traditional pixel-level frameworks. The focus of our method on topological accuracy is reflected in the significant improvements in topological metrics, such as the reduction of $\beta_0$ error by 25% to 45% as reported in Table 2 of our paper. While the improvements in volumetric metrics like Dice and clDice are relatively modest (approximately 1%), the advancements in topological integrity are significant. Accurately restoring topology is both challenging and crucial in tubular structure extraction tasks. The greatly reduced topology errors derive from our direct exploitation of branch-level features. This is also illustrated by our experiments on the 3D pulmonary artery dataset PARSE.
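The $\beta_0$ error cited in the reviews counts mismatches in the number of connected components between prediction and ground truth. As a concrete (toy) illustration using scipy's connected-component labelling, and not the paper's evaluation code:

```python
import numpy as np
from scipy import ndimage

def betti0(mask):
    # beta_0 = number of connected components (8-connectivity in 2D)
    structure = np.ones((3, 3), dtype=int)
    _, n = ndimage.label(mask, structure=structure)
    return n

def betti0_error(pred, gt):
    return abs(betti0(pred) - betti0(gt))

# Toy example: a prediction with a broken vessel vs. a connected ground truth.
gt = np.zeros((5, 7), dtype=int)
gt[2, 1:6] = 1               # one connected centerline -> beta_0 = 1
pred = gt.copy()
pred[2, 3] = 0               # a one-pixel gap splits it -> beta_0 = 2
print(betti0_error(pred, gt))  # 1
```

This makes concrete why a single missed pixel can leave the Dice score almost unchanged while raising the $\beta_0$ error, which is the asymmetry the authors' A4 appeals to.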
**Q5:** On smaller datasets for evaluation, error bars or std should also be reported. **A5:** Thanks for the suggestion. We have provided error bars in our rebuttal pdf (detailed in Table 1). As can be seen from the results, most improvements of our method over the baselines are significant. We will definitely include error bars for all experimental results in the revised version of the paper. **Q6:** "Ultimately, our research not only advances technical knowledge in medical image analysis, but also holds potential for significant positive impacts in healthcare efficiency and patient care." These claims are significantly over-claimed. **A6:** We appreciate your feedback on our claims regarding the broader impacts of GraphMorph. Our results on 2D datasets demonstrate the clinical potential of our method for 2D tubular structure extraction. However, we acknowledge that its performance in complex 3D vascular tasks was not fully explored in the initial submission. In our rebuttal pdf, we have included preliminary results from applying GraphMorph to 3D pulmonary artery segmentation, where we observed clear improvements. Moving forward, we plan to extend our evaluations to more diverse 3D datasets to further validate and demonstrate the clinical applicability of our method. We will also ensure that our claims are carefully measured to reflect the current state and future potential of our research accurately in the revised version. References: [1] Hu, Xiaoling, et al. "Topology-preserving deep image segmentation." Advances in Neural Information Processing Systems 32 (2019). [2] Dong Wang, et al. "PointScatter: Point set representation for tubular structure extraction." European Conference on Computer Vision, pages 366–383. Springer, 2022. [3] Luo, et al. "Efficient automatic segmentation for multi-level pulmonary arteries: The PARSE challenge." arXiv preprint arXiv:2304.03708, 2023. --- Rebuttal 2: Title: Looking forward to your re-evaluation.
Comment: Dear Reviewer Ysem, Thank you for the time and effort you've invested in reviewing our paper. We have carefully responded to each of your comments and questions. As the deadline for the author-reviewer discussion is nearing, we would greatly appreciate it if you could kindly take a look at our responses and provide your valuable feedback. We are fully prepared to engage in further discussions should you have any additional concerns or suggestions. Thank you once again; we greatly appreciate your contributions to refining our work and eagerly await your re-evaluation. Paper 13902 Authors --- Rebuttal 3: Title: thanks for rebuttal Comment: This paper provides some clever technical novelty to improve topology-constrained tubular structure extraction. From a paper perspective, this is a reasonably well-rounded paper with an interesting idea and consistent empirical performance improvement. For a NeurIPS paper, this is probably ok. However, the authors should not over-claim on the clinical impact side. From what I can see, this paper has very limited utility in real clinical applications. In the submitted version, it has not been rigorously tested. With the rebuttal, it is evaluated on some quite limited public datasets, which say very little about real clinical application complexity. You would need to at least participate in some well-known medical imaging public challenges and achieve some encouraging results to start with. https://atm22.grand-challenge.org/ --- Rebuttal Comment 3.1: Title: Thanks for your feedback Comment: Thank you for your feedback and the points you've raised. We appreciate the opportunity to address your concerns and clarify aspects of our work. 1. **Dataset and Experimental Validation**: Our paper utilizes several 2D medical datasets that have been commonly employed in previous leading studies [1-3], validating the effectiveness and generalizability of our method.
Additionally, the supplementary experiments conducted on the PARSE dataset [4], which is sourced from the public PARSE challenge (https://parse2022.grand-challenge.org/) and comprises authentic CT scans, further demonstrate the applicability and robustness of GraphMorph on 3D data. 2. **Concerns Regarding Claims**: We appreciate your concerns regarding potential overstatements about the clinical impact of our work. We are in the process of revising our manuscript to better align our claims with the experimental conclusions presented, as detailed in **A6** of our rebuttal. Additionally, we are open to the possibility of participating in further public challenges to provide additional empirical validation for our approach in more settings. We hope that our clarifications and revisions address your concerns satisfactorily. [1] Shit, Suprosanna, et al. "clDice - a novel topology-preserving loss function for tubular structure segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [2] Dong Wang, et al. "PointScatter: Point set representation for tubular structure extraction." European Conference on Computer Vision, pages 366–383. Springer, 2022. [3] Qi, Yaolei, et al. "Dynamic snake convolution based on topological geometric constraints for tubular structure segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [4] Luo, et al. "Efficient automatic segmentation for multi-level pulmonary arteries: The PARSE challenge." arXiv preprint arXiv:2304.03708, 2023.
Rebuttal 1: Rebuttal: Dear Reviewers and ACs, We were encouraged to receive the following positive comments from the reviewers: "The method is well-motivated" (CJeh), "The paper is well written with good clarity" (Ysem), "Their method is able to handle both FP and FN errors." (4DfY), "Experimental results against SOTA are convincing." (S6G1). Your comments are very valuable in improving our paper. Reviewers Ysem and 4DfY both suggested adding error bars to the experimental results. We have updated Table 2 of the main paper and put it in the rebuttal pdf (detailed in Table 1). As can be seen from the results, most improvements of our method over the baselines are significant. We will ensure that error bars for all experiments are provided in the revised version of the paper. Below, we address specific concerns raised by individual reviewers in detail. Pdf: /pdf/1bb6d2f2a447bae1fd13060535a438048fedf4c5.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Optimizing Automatic Differentiation with Deep Reinforcement Learning
Accept (spotlight)
Summary: This paper studies automatic differentiation in a computational graph. The topic is classic, with wide applications in scientific research, e.g., computing gradients of a neural network or Jacobians needed in numerical optimization. Classic numerical methods like standard forward- and reverse-mode differentiation (backpropagation) exist, and the novelty and contribution of this work lie in its proposal of training a reinforcement-learning (RL) agent to explore an optimized order of graph vertex elimination. The paper demonstrates the efficacy of its method by comparing the time cost of the found auto-differentiation policy (measured by multiplication count and wall-clock time) with that of classic and heuristic methods. Strengths: Training an RL agent to rethink classic numerical methods is an interesting research topic. Previous examples include DeepMind's works on matrix multiplication and sorting algorithms. Because these numerical problems are basic and fundamental building blocks in today's scientific research, improvements in their solutions can have high potential and impact. This paper follows this trend and applies RL to another specific example: automatic differentiation. I think the paper picks a good topic and makes a meaningful contribution to the community. The technical method in this paper also looks good to me. Using RL to explore better numerical and computing methods typically requires a careful, RL-friendly (re-)formulation of the original numerical problem, including the design of the state space, the action space, and the reward function. I think choosing to tackle vertex elimination ordering is a smart and reasonable move, which leads to a well-formulated game that captures the key structure in automatic differentiation. Weaknesses: I don't have major concerns with the paper, but I want to mention that there are quite a few typos in the equations in the main paper.
For example, I suspect “i > j” should be replaced with “i < j”, and “sum_i” in line 192 should be “sum_j”. Please double-check all equations in the paper and make sure there are no typos. Technical Quality: 3 Clarity: 3 Questions for Authors: I don’t have any questions for now. The key idea described in this paper is quite straightforward, and the writing in the main paper is generally good. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Admittedly, the proposed RL method does not beat classic methods across the board in the experiments. I share the similar conjecture expressed in the paper: backpropagation is a quite strong baseline for large-input, small-output computational graphs, and it can be quite difficult to beat. This may also indicate that any small improvement on this strong baseline can have a significant impact, as backpropagation is so widely used in practice. On a related note, I appreciate that this paper is upfront about its marginal improvement over baselines in certain cases. This gives readers a balanced and well-positioned view of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
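To make the limitation discussed in this review concrete (backpropagation is a strong baseline for large-input, small-output graphs), here is a small pure-Python sketch, not taken from the paper, that counts dense multiplications for forward vs. reverse accumulation of a chain of Jacobians; the function name and dimensions are illustrative:

```python
def chain_mult_cost(dims, mode):
    """Count scalar multiplications needed to accumulate the Jacobian of a
    chain f = f_n o ... o f_1, where J_k is a dense (dims[k] x dims[k-1])
    matrix, by multiplying the chain from the input side ("forward") or
    from the output side ("reverse").
    A dense (a x b) @ (b x c) product costs a*b*c multiplications."""
    n = len(dims) - 1  # number of elementary Jacobians in the chain
    cost = 0
    if mode == "forward":
        # accumulated Jacobian after step k has shape (dims[k] x dims[0])
        for k in range(2, n + 1):
            cost += dims[k] * dims[k - 1] * dims[0]
    elif mode == "reverse":
        # accumulated Jacobian has shape (dims[n] x dims[k])
        for k in range(n - 1, 0, -1):
            cost += dims[n] * dims[k] * dims[k - 1]
    return cost

# Many inputs, one output (typical loss function): reverse mode wins easily.
dims = [1000, 10, 10, 1]
print(chain_mult_cost(dims, "forward"))  # 110000
print(chain_mult_cost(dims, "reverse"))  # 10100
```

Cross-country elimination, the subject of the paper, searches over orders between these two extremes.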
Rebuttal 1: Rebuttal: We thank the reviewer for pointing out the typos and will definitely fix them for the final submission. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: I remain positive about this work and will maintain my score.
Summary: The authors study the problem of computing the Jacobian of arbitrary computation graphs. The typical approach in most autodiff libraries is to implement the classic backpropagation algorithm, aggregating gradients from the end to the beginning. This is because such approaches are well suited for cases where the number of inputs is significantly larger than the number of outputs, such as neural nets. The paper first describes alternative approaches for computing the gradient from the literature, then describes how Jacobian computation can be treated as a sequential decision-making problem. The state is the current computation graph, as represented by an adjacency matrix, and the actions are choosing vertices of the computation graph to eliminate. Each vertex elimination incurs a cost of updating the partial derivatives of its neighbors, and the terminal state is when all intermediate vertices are removed and only edges between the inputs + outputs remain. The goal is to minimize the cost, which this paper measures in multiplications. The authors try both an AlphaZero-style agent and a PPO agent, and demonstrate their agents can outperform standard methods (backward AD, forward AD, etc.) on a few example functions. The empirical runtime is also analyzed, showing that the AlphaZero agent can lead to faster computation, even though the reward function does not account for all runtime factors (e.g., memory access patterns). Strengths: The paper is written well, and serves as a helpful introduction to how to optimize autodifferentiation. The paper acknowledges that it makes some simplifications to the classes of computation graphs it can be applied to (notably, limits on input dimensionality make the Transformer experiment somewhat unrealistic), but demonstrates it is possible to achieve gains over typical methods using learning-based methods. An analysis on real hardware is performed as well.
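The sequential decision problem summarized above can be illustrated with a toy pure-Python sketch (illustrative only; it assumes scalar edge weights, whereas the paper's reward additionally weighs Jacobian shapes and sparsity):

```python
def eliminate(edges, order):
    """Cross-country vertex elimination on a scalar computational graph.
    edges: set of (src, dst) pairs.  Eliminating vertex v connects every
    predecessor i to every successor k (c_ik += c_ij * c_jk), which costs
    |preds| * |succs| scalar multiplications.  Returns the total cost."""
    edges = set(edges)  # work on a copy
    cost = 0
    for v in order:
        preds = {i for (i, j) in edges if j == v}
        succs = {k for (j, k) in edges if j == v}
        cost += len(preds) * len(succs)
        edges -= {(i, v) for i in preds} | {(v, k) for k in succs}
        edges |= {(i, k) for i in preds for k in succs}  # fill-in edges
    return cost

# Toy graph: inputs 0,1; intermediates 2,3,4; output 5.
g = {(0, 2), (1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (1, 4)}
print(eliminate(g, [2, 3, 4]))  # 8 multiplications (input-to-output order)
print(eliminate(g, [3, 4, 2]))  # 5 multiplications (a cheaper order)
```

Even on this tiny graph the elimination order changes the multiplication count, which is the quantity the RL agent's reward minimizes.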
Weaknesses: By my understanding, both the AlphaZero agent and the PPO agent need to be instantiated separately for each class of functions. This is based on the note in Appendix F that the authors did not succeed in training a joint model for the PPO agent. One of the papers cited as related work is the one on improving matrix multiplication with search algorithms, and one benefit of that work was that although the search was very specific to, for example, 4x4 matrices, it was possible to use the discovered results of that work elsewhere. In comparison, I believe the current approach requires the time spent learning the PPO / AlphaZero agent to be made up by the time saved running the autodifferentiation calls. My suspicion is that the net time ends up being negative, assuming the AlphaZero setup takes longer than PPO, since the PPO setup is a few minutes of training to improve the MLP gradient computation by $$0.30 - 0.29 = 0.01$$ milliseconds in the GPU case, which implies 18-24 million steps to hit a break-even point (3-4 minutes / time-saved-per-step). Still, I overall think there are interesting signs of life here, even if the method itself is not entirely practical yet. Technical Quality: 4 Clarity: 3 Questions for Authors: As part of the graph representation coding, there is a list of 21 different sparsity patterns to describe the intermediate edges of the computation graph. Is this list exhaustive for more general computation graphs, or are there other forms the partial derivative can take if the space of computation graphs is increased? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the helpful input to our work. In particular, we agree with the reviewer that the method is not entirely practical yet, but we intend to improve it so that it becomes applicable to a wider range of problems. Automatic differentiation is a ubiquitous tool that sees application in many scientific areas. Our experiments show that there are benefits to be reaped across fields even on the simplest problems. Thus we are confident that the ramifications of scaling our algorithm and bringing it to practical use are large, with potential benefits for users in all application areas. Furthermore, the reviewer's analysis made us realize that the presentation of our results does not entirely capture the actual runtime improvements achieved with our algorithm. We invite the reviewer to have a look at figure 1 in the accompanying .pdf document, where we demonstrate the scaling of our optimized algorithm for the MLP for increasing batch sizes and layer sizes. For larger MLPs and larger batch sizes, the actual runtime gains are significantly larger than what is presented in table 2 of the main text, thereby providing a stronger argument for the practical use of our method. In the words of the reviewer, with larger batch sizes, the break-even point shifts in favor of our method. We will adjust the main body of our submission accordingly to accommodate this. Finally, to answer the question regarding the 21 different sparsity types and their exhaustiveness: the list of sparsity types is exhaustive for Jacobian tensors of up to 4 dimensions, i.e. $\dfrac{\mathrm{d}T_{ij}}{\mathrm{d}x_{kl}}$. Note that dimension in this case refers to the size of the mathematical objects, i.e. a number is zero-dimensional, a vector is one-dimensional, a matrix is two-dimensional, etc. All computation graphs that only rely on objects that are at most 2-dimensional (max.
4-dim Jacobian) can be analyzed and differentiated with this procedure without any further extensions. If we want support for higher-order Jacobians, e.g. a six-dimensional Jacobian tensor like $\dfrac{\mathrm{d}T_{ijkl}}{\mathrm{d}x_{mn}}$, we need to introduce new sparsity types. This is not too difficult, but for the sake of simplicity we limited ourselves to 4 dimensions, as many interesting applications can already be treated in this case. In the future, we intend to extend the support for higher-dimensional Jacobian tensors to make our approach more generally applicable. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I will keep my score as-is.
Summary: This paper considers the (combinatorially hard) optimization problem related to automatic differentiation algorithms represented via an "elimination order" based on prior work and proposes an RL formulation to solve it approximately. This follows in a similar vein to recent celebrated results on matrix multiplication and sorting. The authors demonstrate that their approach is able to improve, conceptually, on the first-order metric of the number of multiplications as defined in the optimization search, and, more practically, that this can be sufficiently meaningful for runtime as they measure it on representative benchmarks, compared to the default forward/reverse mode implementations. The paper is well written and considers a reasonable variety of computational graphs in the evals. The main overhead in adopting the proposal in practice is primarily in the form of a specialized policy search for each computational graph, which factors in, roughly speaking, at the compilation stage and is not reflected explicitly in the benchmarks. Strengths: - The paper is well motivated and proposes a policy search framework for a well-studied combinatorial optimization problem (vertex elimination for AD). Instead of targeting compilation (at a lower level) or dealing exclusively with optimizing individual matmuls as in prior work, the paper targets a novel alternative by focusing on vertex elimination in AD, which is a ubiquitous part of all training compute graphs. - Any non-trivial improvements on large-scale computational graphs could lead to significant compute savings, and this work proposes several novel small-scale benchmarks to evaluate vertex elimination algorithms, which may be of interest in their own right. Weaknesses: - The primary weakness in terms of the impact appears to be that the policy must be trained from scratch separately for each graph.
This unfortunately means that there is a whole training run necessary per graph, unlike all of the other competing methods (including the minimum Markowitz degree heuristic, which needs no "training", and already appears to do better than fwd/reverse mode in many situations). - The overhead involved in running a separate optimization for every graph that is distinct from the compilation step (mostly under the hood) seems like it could make the proposed method unwieldy for iterative usage in spite of saving some compute. Of course, any compute savings on large training graphs are valuable. However, the transformer baseline comparison indicates that the gains are more prominent in the smaller benchmarks, and in the larger experiments the reverse mode and AlphaGrad perform very close to each other. Technical Quality: 3 Clarity: 3 Questions for Authors: - When considering the number of multiplications, given the definition of a vertex, I assume this refers to the number of matmuls, but the costs of these vary by the dimensions of the Jacobians. Do you take this into account anywhere? If not, what is the hurdle? Making the exposition a bit more explicit on how the definition/implementation of the reward is framed might be helpful. Other comments: - The labels for cross edges in Figure 2(d) appear to be swapped. - Caption for Figure 2 has intermediate variable definition of $v_2=cos(v_1)$, which needs to be $sin(v_1)$ instead? - L128 graph 2a --> Fig 2a - In Definition 1 L138: should this be $c_{ij}=c_{jk}=0$ rather than $c_{ij}=c_{ik}=0$? - L142, $\phi_j \circ \phi_k$ needs to be $\phi_k \circ \phi_j$ instead. - Equation (2) needs to be $W_{ik}$ rather than $W_{ki}$? - L324 refers to Table 2 being of theoretical value -- should this be Table 1? - L341 s/note/not - Figure 1 3rd box: s/executation/execution Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
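For context, the minimum Markowitz degree heuristic mentioned in this review can be sketched as a greedy, training-free baseline (a hypothetical pure-Python illustration with scalar edge weights, not the paper's implementation):

```python
def markowitz_eliminate(edges, intermediates):
    """Greedy minimum-Markowitz-degree baseline: repeatedly eliminate the
    intermediate vertex with the fewest fill-in multiplications,
    i.e. the smallest |preds| * |succs|.  Returns (total cost, order)."""
    edges, todo = set(edges), set(intermediates)
    cost, order = 0, []
    while todo:
        def degree(v):
            p = sum(1 for (i, j) in edges if j == v)
            s = sum(1 for (j, k) in edges if j == v)
            return p * s
        v = min(todo, key=degree)  # greedy choice, no training required
        preds = {i for (i, j) in edges if j == v}
        succs = {k for (j, k) in edges if j == v}
        cost += len(preds) * len(succs)
        edges = (edges - {(i, v) for i in preds} - {(v, k) for k in succs}) \
                | {(i, k) for i in preds for k in succs}
        todo.remove(v)
        order.append(v)
    return cost, order

# Toy graph: inputs 0,1; intermediates 2,3,4; output 5.
g = {(0, 2), (1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (1, 4)}
print(markowitz_eliminate(g, {2, 3, 4}))  # (5, [3, 4, 2])
```

The heuristic is myopic: it picks the locally cheapest vertex at each step, which is exactly where a learned search can still find better global orders.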
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable feedback on our work and appreciate the time the reviewer took to double-check the equations. We will implement the corrections accordingly. We partially agree with the reviewer's assessment that our method requires retraining for every computational graph. While it is true that the results presented in tables 1 and 2 of the main paper were obtained with individual training, we also investigated the prospect of joint training and found that in 7 out of 10 cases our method still outperforms the baseline, with a major improvement of around 10% for the random function $f$. These results make us confident that for a larger dataset and longer training time, our method will eventually generalize out-of-distribution and predict the optimal elimination order without requiring training. We intend to implement this feature in future iterations of our algorithm. Furthermore, we would like to emphasize the point that this paper not only presents a novel RL-based method to optimize AD algorithms but also introduces, for the first time, a Python-based interpreter that actually allows applying vertex elimination to a wide range of problems, including machine learning. In particular, the creation of the baselines using the minimal Markowitz heuristic is only possible due to our implementation in the first place. So while it is true that the minimal Markowitz heuristic can do better than forward-mode and reverse-mode in many cases already, it is only due to our implementation of the interpreter that this could be exploited in practice. We believe that this already makes a contribution in its own right. Furthermore, we were able to show that while the minimal Markowitz heuristic performs very well, it is outperformed by our method on all benchmark tasks, sometimes by a significant margin.
Regarding the second major weakness that the reviewer pointed out, namely that our proposed method is unwieldy for iterative usage, we are not entirely sure if we understand the remark correctly. We interpret it to mean that if the computational graph changes after the compilation stage, retraining is required. First of all, we want to clarify that the computational graph and the respective Jacobian function are independent of the input data, i.e. it is not necessary to retrain every time the input of the Jacobian function changes (this might change when including control flow, which we have not considered yet). Secondly, for many practical use cases, the compute graph does not change after the compilation stage, i.e. the neural network architecture or the computational fluid dynamics equations are static. Therefore, the training phase is only necessary once, prior to the compilation stage, and the computed AD algorithm can be used iteratively after that for any input parameters. We still agree with the reviewer on the fact that the performance of our method on transformers is not satisfactory yet, but we would like to point out that theoretically there is a benefit to be harnessed, as shown by table 1 in the main body of the paper. Furthermore, we deliberately included the negative outcome on the transformer case in the hope of sparking future research endeavours in this direction and pointing out current limitations of our approach. Regarding the questions about how the reward is actually calculated and whether the shape of the Jacobian is taken into account: we thank the reviewer for pointing out that this part of the algorithm might not be described very well in the main body of the text, and we will perform the appropriate changes to make this more explicit.
While we gave an overview of the answer in the general rebuttal, we would like to take the time here to quickly explain in more detail how the reward is calculated: the reward function takes into account the shape/dimension of the Jacobians, i.e. if we have a matrix multiplication of two dense Jacobians of shape 3x3 and 3x2, the resulting reward would be -18. It also takes into account the sparsity type of the Jacobian, e.g. some operations have a naturally diagonal Jacobian. In this case the matrix multiplication becomes a pointwise operation, so that for a diagonal 3x3 Jacobian and a dense 3x2 Jacobian we have a reward of -6 instead. Furthermore, the *graphax* package provided together with this work is a novel interpreter that can take these sparse Jacobians into account, i.e. automatically detect these diagonal Jacobians and replace the matrix multiplication or tensor contraction with a pointwise multiply, which is much faster in many cases. The native JAX AD package is not able to perform this kind of analysis. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the rebuttal clarifications. > ...These results make us confident that for a larger dataset and longer training time, our method will eventually generalize out-of-distribution and predict the optimal elimination order without requiring training. We intend to implement this feature in future iterations of our algorithm. Amortizing the search costs across different graphs by learning to generalize between them would be nice, and the authors point out they have already seen evidence of this. However, even while considering the core results for a fixed graph, I am curious what defines the distribution of the inputs that makes this evaluation distinct from that of a one-off search procedure looking for a single best elimination order not conditioned on any other inputs.
Given the small sizes of the graphs involved, it would be helpful to understand the extent of learning and generalization demonstrated, even if within the same graph. >...Furthermore, we would like to emphasize the point that this paper not only presents a novel RL-based method to optimize AD algorithms but also introduces, for the first time, a Python-based interpreter that actually allows to apply vertex elimination to a wide range of problems, including machine learning.... I agree this is a significant and valuable contribution of the work (and distinct from the policy learning/generalization aspects). I will update my final score to take this into account. --- Reply to Comment 1.1.1: Title: Response Comment: We thank the reviewer for their reconsideration and appreciate the additional questions, since they are indeed quite relevant for our work. > However, even while considering the core results for a fixed graph, I am curious what defines distribution of the inputs that makes this evaluation distinct from that of a one-off search procedure looking for a single best elimination order not conditioned on any other inputs. The idea behind generalization across graphs, or even within one graph, is that our agent recognizes reoccurring connectivity patterns and learns how these should be optimally eliminated, conditioned on the global structure of the graph. The distribution of graphs is in that sense the distribution of all possible ways to connect $N$ vertices in a meaningful way such that they correspond to proper, mathematically well-defined and observable functions. A single computational graph and its different stages during vertex elimination do not sample this space entirely (for a single graph with 100 vertices, which is a typical size for the problems in this work, we have $100! \approx 10^{157}$ possible elimination orders), thereby making it hard to find enough patterns so that the algorithm can generalize out-of-distribution without requiring further training.
Also for a single graph, the patterns tend to be highly correlated, which is detrimental for generalization. There has been a lot of prior work in the AD community that aims to identify these connectivity patterns. (Griewank and Walther, 2008) contains some examples in chapters 9, 10 and 11. They call this *local preaccumulation*. In particular, they show that in many cases, an efficient solution is to first find the best elimination order locally for one or more subgraphs. These motifs can be generalized across tasks. However, in practice these recipes do not work very well because it is hard for a human to identify and generalize them for already reasonably small graphs. Given enough training samples and a reasonable search budget, we believe that our agent learns to identify these locally optimal elimination procedures and generalize them to novel computational graphs. Thus, the difference between the one-off search and the search with the model trained on a multitude of graphs is that the agent has learned more diverse wiring patterns and the corresponding elimination orders it could potentially apply. This is compounded by the fact that RL algorithms are prone to getting stuck in local minima, i.e. unaware of better solutions/different connectivity patterns due to a lack of exploration. This could be partly alleviated with more compute/training time, but sometimes the one-off training on a single graph is not sufficient and the agent actually benefits from the knowledge gained from training on other graphs. This is evident when looking at the random function $f$, which is particularly hard to solve because there are no regularities and a diverse set of solutions exists. In this difficult domain, the joint training increased performance by almost 10%. We believe this is due to the agent being aware of more possible wiring patterns and their optimal elimination orders, which it learned from the other tasks.
> Given the small sizes of the graphs involved, it would be helpful to understand the extent of learning and generalization demonstrated even if within the same graph. This is a good recommendation for a future direction of research. To learn more about learning and generalization in our setting, it might be worth taking a look at the actual elimination orders (see Appendix C) and correlating them with the functions that were executed in the graph. This could provide some insights into common themes the algorithm uses to optimize the AD algorithm. For example, we found that the algorithm quite often performs elimination of vertices with small Markowitz degree first, except for those that are involved in accumulation operations like summations. Those are eliminated last. More insight could possibly be gained by looking at the actual attention maps of the transformer, which could give more information about which parts of the graph actually matter for optimizing the elimination orders. However, a more in-depth understanding of our method from this perspective will require additional tools which we first need to implement. --- Rebuttal 2: Title: Response Comment: We apologize for misunderstanding the question and try to answer the questions to the best of our understanding: in the following, everything is described for a single example, say RoeFlux_1d. We create an initial computational graph representation in the form of the adjacency matrix. This adjacency matrix serves as an initial state for the agent, similar to how for AlphaZero the initial position of the chess or Go board is fixed. In this sense, VertexGame starts from a fixed initial input for every example task, i.e. every row. However, every time a vertex is eliminated, new edges are created and others are deleted, and the current state of the graph, and as such the adjacency matrix, changes.
Thus the computational graph representation evolves with every action, and this evolving computational graph representation (the adjacency matrix) is fed into the agent's neural network model at every step to decide the next action. The selection of the next action is stochastic but becomes more deterministic over time, since the policy, i.e. the probability distribution over actions, becomes more deterministic. Thus we sample the space of possible computational graphs that can arise from the initial graph through vertex elimination, with a focus on exploration early on. Throughout the training on a single task and its initial computational graph, the agent is thus confronted with a distribution over different graph configurations and learns the optimal action to take, similar to how in chess the agent learns to make the next optimal move for different board positions. AlphaZero learns two quantities, the policy and the value function, which are realized as two distinct heads forked from the transformer backbone. The policy learns to select the next action and the value function learns to predict the value of the state, which in our case is the number of multiplications remaining until all remaining vertices are eliminated if following the current policy. A pure search without learning (given that we understand learning as an improvement of policy and value functions) means that those quantities would be random. The performance with random policy and value and their subsequent improvement is shown in the reward curves presented in Appendix E, pages 24 and 25. We clearly see that at initialization, when the policy and value functions are random, the performance of the tree search is usually very bad and far worse than the baseline. However, the performance improves over time, as evidenced by the evolving reward curves that become increasingly better until they beat the baselines.
This is in our opinion clear evidence that the policy and value functions are learned over time and improve the tree search by focusing the search budget on the relevant, most promising parts of the tree. The distribution that the agent is learning on is the distribution over all possible graph configurations and their optimal next actions that can arise from performing vertex elimination on the initial computational graph of a given task example, e.g. RoeFlux_1d. Thus for each row in table 1, we start with a fixed initial input, i.e. the state at the root node in the tree is fixed. But when expanding the tree by sampling and taking actions, we create state-action-reward triplets that are sampled from the distribution over possible graphs that can arise from the initial input graph due to vertex elimination. These samples are then used to train the policy and value functions to improve them, i.e. learn them. If we did not answer the reviewer's question, we are happy to try to clarify further! --- Rebuttal Comment 2.1: Title: Reply Comment: Thank you for the clarification, this was very helpful.
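The reward computation the authors describe earlier in this thread (a dense 3x3 @ 3x2 product costs 18 multiplications, a diagonal 3x3 scaling a dense 3x2 only 6) can be sketched as follows; this is an illustrative pure-Python snippet, not the actual graphax implementation:

```python
def matmul_reward(a_shape, b_shape, a_diagonal=False):
    """Negative multiplication count for accumulating J = A @ B.
    A dense (m x n) @ dense (n x p) product costs m*n*p multiplications;
    if A is diagonal (so m == n), the product reduces to a pointwise
    row scaling costing only n*p multiplications."""
    m, n = a_shape
    n2, p = b_shape
    assert n == n2, "inner dimensions must match"
    if a_diagonal:
        assert m == n, "diagonal Jacobians are square"
        return -(n * p)
    return -(m * n * p)

print(matmul_reward((3, 3), (3, 2)))                   # -18 (dense @ dense)
print(matmul_reward((3, 3), (3, 2), a_diagonal=True))  # -6 (diagonal @ dense)
```

Exploiting such sparsity types is what lets the interpreter replace tensor contractions with cheaper pointwise operations.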
Summary: In this paper, the authors propose a novel method to optimize the number of necessary multiplications for Jacobian computation by leveraging deep reinforcement learning (RL) and a concept called cross-country elimination, while still computing the exact Jacobian. The authors present the search for the optimal elimination order, aimed at minimizing the required number of multiplications, as a single-player game managed by an RL agent. The results show that this approach yields up to a 33% improvement over state-of-the-art methods across various relevant tasks from different domains. Strengths: - Rephrasing the elimination order in AD as a deep reinforcement learning problem sounds somewhat novel. - The paper is well-organized. Weaknesses: - The proposed method only shows an obvious improvement on the RoeFlux_3d and Random function benchmarks. On other benchmarks, especially the deep-learning-related MLP and Transformer cases, the improvements are very marginal (less than 3%). - It seems that the test cases provided are preliminary, i.e., formula value evaluation instructions extracted from the whole graph, rather than the entire computational graph of a function/kernel with control flow or mutable variables. Technical Quality: 2 Clarity: 3 Questions for Authors: - How does the VertexGame ensure that the reduced computational graph gives correct derivatives? Is there a grad-check procedure? - What is the application scope of the proposed method? Does it support programs with complex control flow (e.g., for-loop/while-loop/if-condition and combinations of them)? - Mutable variables are notorious for an AD framework, as the values of different versions of these variables may need to be stored for derivative computations during the backward pass; how does the proposed method eliminate a vertex which is a mutable variable? - What level does the proposed method work on? e.g., the source code or an intermediate representation?
- Is the proposed method closer to Source Code Transformation or Tracing/Operator Overloading in the context of AD? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback on our approach and want to take the time to address some of the feedback more in-depth compared to the general rebuttal. Firstly, we noticed that our presentation of the obtained results is not optimal and invite the reviewer to have a quick look at figure 1 in the attached .pdf document. This figure shows the actual runtime gains obtained with our algorithm for the MLP at different batch sizes and layer sizes. It clearly demonstrates that for larger MLPs, there is a significant runtime gain to be harvested. We will make our exposition more precise to properly appreciate this result. Unfortunately, the transformer benchmark does not reproduce the scaling properties of the MLP, but table 1 of the main body shows that there is potential for improvement. In fact, we deliberately included this result in the hope of sparking future research and outlining the current limitations of our algorithm. Secondly, we partially agree with the sentiment that the results are preliminary. While the current benchmark tasks do not take into account actual kernels or control flow, many interesting practical applications do not require these features, such as the MLP or the RoeFlux borrowed from computational fluid dynamics. However, we appreciate the feedback and intend to include support for control flow in future iterations of the algorithm. **Q1**: How does the VertexGame ensure that the reduced computational graph gives correct derivatives? The vertex elimination algorithm which VertexGame is based on always yields mathematically provably exact gradients up to machine precision. This is the base assumption of our work, and we thank the reviewer for pointing out that we might not have made this assumption clear enough. We will improve our manuscript accordingly. For reference, we point the reviewer to (Evaluating Derivatives, Griewank and Walther, 2008, Chapters 9+10, esp. Corollary 9.2).
An in-depth explanation is available in the main rebuttal. **Q2:** What is the application scope of the proposed method? Does it support programs with complex control flow? Our work is targeted at every field that actively applies automatic differentiation for computing Jacobians. While our method currently does not support control flow primitives such as conditionals and loops, we are eager to implement support in future iterations of our method. One however has to make a distinction between cases where the control flow depends on variables that are to be differentiated versus variables that are not differentiated. The latter case is already implicitly supported by our package, while the former case will require additional work. There exists an extensive body of work that addresses the more complicated case, which we will harvest for future improvements. **Q3**: Mutable variables are notorious for an AD framework, as the values of different versions of these variables may need to be stored for derivative computations during the backward pass; how does the proposed method eliminate a vertex which is a mutable variable? We have considered the problems that mutable variables can cause, but they are circumvented in many cases by the way JAX works. JAX leverages a functional programming paradigm which requires all functions to be pure functions with no side effects, thereby not allowing for mutable variables that break the "self-containedness" of the pure function. Regarding mutable variables that do not break this, JAX guarantees referential transparency, i.e. it is always known at which point the variable has a given value, by unrolling the respective computations and assigning all the intermediate values their own distinct static variables. This effectively creates a trace of the computations which can then be processed by the vertex elimination algorithm. In effect, we pay for being able to treat mutable variables with additional compute and memory consumption.
It is important to note, however, that in certain relevant cases such as control flow and loops, there exist dedicated functions in JAX that avoid this costly rollout, e.g. the *jax.lax.scan* primitive. As already discussed, we intend to implement these cases in future iterations of the algorithm. **Q4**: What level does the proposed method work on? Our framework works at the Jaxpression level of JAX. Jaxpressions are a kind of intermediate representation introduced by JAX to handle many of JAX's features in a simple and elegant manner. In particular, JAX's own forward- and reverse-mode AD implementations are realized using Jaxpressions. If analyzed and transformed in the right way, the Jaxpression is in fact a representation of the function's computational graph. We exploit this property in order to implement vertex elimination, which relies on the computational graph to work properly. **Q5**: Is the proposed method closer to Source Code Transformation or Tracing in the context of AD? Our proposed method is a mix of both source code transformation and tracing, since this is the modus operandi of JAX itself. All major features in JAX are implemented as function transformations and use a tracing feature to keep track of all operations performed. JAX introduces the Jaxpression intermediate representation to store the trace and perform the respective function transformations, including automatic differentiation. Our framework *graphax* is no exception to this. However, we want to emphasize here that to make vertex elimination work in JAX, we had to implement an entirely new interpreter from scratch. This interpreter first extracts the computational graph from the Jaxpression and then proceeds to apply the vertex elimination algorithm to obtain the optimized AD code. 
We consider our package *graphax* a novel contribution in its own right because there exists no prior implementation of vertex elimination in any popular Python-based machine learning/numerical computing framework. --- Rebuttal Comment 1.1: Title: Response to the reply Comment: Thanks for the detailed reply to my concerns and questions. I can see the performance improvements compared to the default JAX reverse mode in Figure 1 of the attached PDF. Though it does not work the same on the transformer benchmark, I agree that it can be future work, as the authors mention. I don't have further questions and would like to improve my score.
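To make the role of the elimination order concrete, here is a deliberately tiny sketch of scalar vertex elimination — not the *graphax* implementation. Every edge is assumed to carry a scalar partial with unit multiplication cost, and we only count the multiplications each order incurs; additions and real edge labels are ignored.

```python
# Toy scalar vertex elimination: count the multiplications an elimination
# order needs on a computational graph given as directed (src, dst) edges.
# Eliminating vertex v multiplies the partials along every pred -> v -> succ
# path (one multiplication per pair) and adds the fill-in edge (pred, succ).
def eliminate(edges, order):
    edges = set(edges)
    muls = 0
    for v in order:
        preds = [s for (s, d) in edges if d == v]
        succs = [d for (s, d) in edges if s == v]
        for p in preds:
            for w in succs:
                muls += 1
                edges.add((p, w))
        edges = {(s, d) for (s, d) in edges if v not in (s, d)}
    return muls

# Graph: input 0, output 3, intermediates 1 and 2, plus a skip edge 0 -> 2.
g = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(eliminate(g, [1, 2]), eliminate(g, [2, 1]))  # 2 vs. 3 multiplications
```

Even on this four-edge graph, eliminating the intermediates in the order `[1, 2]` costs two multiplications while `[2, 1]` costs three — the asymmetry that AlphaGrad's agent exploits at scale.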
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful, detailed reviews and the additional literature suggestions. We first discuss two common themes, then address individual comments. **Retraining is necessary for every function** The best results for most of the graphs were indeed achieved with individual training. However, we also performed an ablation study where we jointly trained on all graphs at once. The results of joint training still outperform the baseline for 7/10 tasks, with even a major improvement of almost 10% for the random function $f$ over individual training (see table 1 in the attached .pdf-document). Thus, we are confident that with more computational graph examples, it is possible to train a model that eventually generalizes out-of-distribution and can predict high-performance, individually tailored elimination orders/AD algorithms for almost arbitrary functions. **AlphaGrad does not outperform reverse-mode on ML workloads** We acknowledge that the theoretical gain for the transformer architecture does not translate into practical runtime improvements. In fact, we chose to include this result to outline the current limitations of our approach. However, we feel it is important to emphasize that our algorithm has, for the first time, found a new automatic differentiation algorithm that is more efficient, i.e. requires fewer multiplications, than reverse-mode/backpropagation. This shows that there is potential to be harvested here, which might spark further research since the transformer architecture is now an integral part of the ML community. Furthermore, we invite the reviewers to have a quick look at the accompanying .pdf-document that exemplifies the scaling of the results obtained for the MLP for different workload sizes. The relevance of MLPs for small-scale ML applications is evident, and we show that for reasonably sized MLPs, we can achieve significant runtime improvements. 
**Response to Reviewer #3** **Q1**: How does the VertexGame ensure that the reduced computational graph gives correct derivatives? Is there a grad check procedure? There exists a series of mathematical proofs that guarantee that vertex elimination always yields the correct derivatives up to machine precision. For more information, we suggest (Evaluating Derivatives, Griewank and Walther, 2008, Chapters 9+10, esp. Corollary 9.2). Computing derivatives of a function can be seen as multiplying a long chain of Jacobians of elemental functions (such as matmuls and $\cos$) with each other. No matter in what order they are multiplied, they will always yield the same exact Jacobian of the function. However, the order in which they are multiplied severely impacts the computational performance, i.e. the number of operations that are needed to arrive at the final result. As an example, take three matrices A, B, C of shapes 3x3, 3x2, 2x1. First multiplying A and B and then C requires $3\cdot 3\cdot 2 + 3 \cdot 2\cdot 1 = 24$ multiplications. Solving B and C first and then A requires $3\cdot 2\cdot 1 + 3\cdot 3\cdot 1 = 15$ multiplications. AlphaGrad tries to find the best possible order of multiplication for the chain of elemental Jacobians. The elemental Jacobians are hardcoded into the algorithm, as in every other automatic differentiation tool such as TensorFlow or PyTorch. Thus, there are no approximations and vertex elimination always gives the exact gradient. **Q2**: What is the proposed application scope? Does it support programs with complex control flow? The application scope of this method includes major areas that actively utilize automatic differentiation for gradient computation, including computational fluid dynamics, robotics and deep learning. While the current method is limited to functions without control flow, it is possible to extend the notion of vertex elimination to these cases. 
However, many interesting applications already arise without the need for control flow, e.g. MLPs or differential kinematics of robots. Nonetheless, we agree that expanding the method to include control flow will significantly widen the scope of AlphaGrad. **Q3**: Mutable variables are notorious for an AD framework, as the values of different versions of these variables may need to be stored for derivative computations during the backward pass. How does the proposed method eliminate a vertex which is a mutable variable? The functional programming paradigm of JAX requires functions to be pure, i.e. to have no side effects outside the scope of the current function. This already precludes many potential problems arising from mutable variables. However, it is still possible to have mutable variables that do not violate the scope of the function. In these cases, while converting the function of interest into a Jaxpression, JAX rolls out these computations and assigns each new value of the mutable variable a new static variable (similar to static single assignment, SSA), thereby paying the price of additional compute and memory to allow for such constructs. The resulting computational graph can then be differentiated with vertex elimination. **Q4**: What level does the proposed method work on? The method works on the Jaxpression level, which is a type of intermediate representation introduced by JAX to handle function transformations. **Q5**: Is the proposed method closer to Source Code Transformation or Tracing/Operator Overloading in the context of AD? The proposed method is a mixture of both approaches, operator overloading and source code transformation, since JAX itself utilizes this hybrid approach. **Response to Reviewer #4** **Q1**: The number of multiplications is dependent on the shape of the Jacobians? Correct. We take this into account when computing the reward and will adjust the description to make this more explicit. 
A full-blown explanation of how the rewards are calculated for (potentially sparse) Jacobians can be found in Appendix C. Pdf: /pdf/8ca4350600411952595abcdfada645a7360da200.pdf
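The matrix-chain example from the answer to Q1 above can be checked in a couple of lines (shapes and counts as stated there):

```python
def matmul_cost(m, k, n):
    # A naive (m x k) @ (k x n) product needs m * k * n scalar multiplications.
    return m * k * n

# A: 3x3, B: 3x2, C: 2x1 -- same chain, two bracketings.
ab_first = matmul_cost(3, 3, 2) + matmul_cost(3, 2, 1)  # (A @ B) @ C
bc_first = matmul_cost(3, 2, 1) + matmul_cost(3, 3, 1)  # A @ (B @ C)
print(ab_first, bc_first)  # 24 15
```

Both bracketings produce the identical product; only the operation count differs, which is exactly the quantity the reward is built on.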
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes to learn an efficient computation of the Jacobian of a program by reinforcement learning. The authors base their learning procedure on cross-country elimination, a classical scheme that progressively transforms a computational graph into a bipartite graph representing the evaluation of the Jacobian. If all operations are simply scalar, a reward can simply be defined by the number of multiplications accumulated in the final edges. The authors go a step further to accommodate vector-valued operations. The representation of actions and rewards can then be fed to a deep reinforcement learning approach a la AlphaZero. Experiments illustrate the performance of the approach in finding more efficient Jacobian evaluation algorithms than standard baselines on several computational graphs stemming from relevant applications. Comparisons in time are also provided to assess the practical benefits of the approach. The authors developed a library enabling direct use of the approach, built on the freely available package JAX. Numerous comparisons are also available in the supplementary material. Strengths: - The proposed approach demonstrates benefits across tasks. - The algorithm is developed in a package that will be released such that the method may quickly be adopted. - The problem can serve as an interesting benchmark for deep reinforcement learning. - The experimental details are thoroughly detailed to ensure reproducibility. Weaknesses: - As stated by the authors, the generalization performance of the approach may still be unclear, which may make the approach rather computationally prohibitive. Technical Quality: 4 Clarity: 4 Questions for Authors: - Are the performances displayed in Table 2 given after some JIT compilation? - To be sure I understand well: currently, to compute the best elimination procedure given some function, one needs to run a full training of a deep reinforcement learning agent and then use the final policy. 
In terms of computational time to find the right elimination strategy, this may be rather prohibitive? (Note that showing that better elimination strategies are possible is great.) Detail: line 288: typo "two functions random functions" Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Limitations are discussed such as the poor generalization when performing joint training and the difficulties to handle the range of rewards. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and the careful study of our manuscript. We will correct the typos immediately. Regarding the two questions posed by the reviewer: The performances in table 2 are indeed given after JIT compilation of our AD algorithm. This was straightforward to achieve because our package *graphax* is compatible with all other JAX transformations, including JIT. The reviewer's assessment is correct in that the results presented in table 2 are obtained by training on a single computational graph. However, we also performed ablation studies on joint training where we optimized the AD algorithms of all benchmark tasks simultaneously. In this case, we still observed an improvement over the baseline in 7 of 10 cases, with a major improvement of almost 10\% for the random function $f$ over individual training. This makes us confident that with more data, i.e. computational graph examples, and more compute we can train a model that generalizes out-of-distribution and predicts optimal AD algorithms without requiring a training stage. --- Rebuttal Comment 1.1: Title: Acknowledging rebuttal Comment: I thank the authors for answering my questions and comments. I maintain my score. - Applying RL methods to such a problem is an impressive feat; it even serves as a new benchmark for such approaches. - Even if the results do not "significantly outperform" baselines, the fact that they can, for example, find better strategies than reverse-mode is quite surprising. - The authors also clearly answered reviewer ErgV's concerns. As the authors acknowledge themselves, it would be great if, for a final version, additional generalization results could be reported. --- Reply to Comment 1.1.1: Title: Response Comment: We thank the reviewer for their response and will try our best to include further results in the final version.
Summary: This paper presents a novel method called AlphaGrad for optimizing computational graphs derived through automatic differentiation (AD) algorithms using deep reinforcement learning. The authors formulate the optimization of a computational graph as a single-player game where an AlphaZero agent aims to minimize the number of required multiplications. The authors demonstrate that AlphaGrad is able to improve over preexisting forward-mode AD methods in domains like computational fluid dynamics, robotics, optimization, and computational finance. Notably, deep-learning-based tasks, which leverage reverse-mode AD, don't show significant improvements with AlphaGrad. Finally, the authors introduce Graphax, an AD interpreter for JAX that enables the optimization and execution of computational graphs discovered from elimination orders such as those proposed by AlphaGrad. Strengths: - Demonstrates a novel approach to optimizing AD by leveraging deep RL to discover optimized elimination orders. - The paper is technically sound and provides good descriptions of these various topics for an audience who may not be very familiar with AD. - Overall, the paper is well-written and the key ideas and results are clearly communicated. - The work has the potential for significant practical impact, as even small improvements in AD efficiency can lead to large runtime and energy savings when applied at scale. Weaknesses: - Currently, the approach has a simple model based on optimizing the number of multiplications as a proxy for runtime. As the authors note, this does not capture the full complexity and ignores factors like memory access patterns. - The baselines for this work seem somewhat limited; there are preexisting AD systems that operate on intermediate representations, such as Enzyme [1] and LAGrad [2], that are able to perform much more sophisticated AD optimizations. 
This seems to be the future of AD and it would have been nice for AlphaGrad to operate on these IRs in a more general fashion. It would be nice if this were mentioned in the paper. - Results don't seem to be as strong for reverse-mode AD, which makes sense. Deep learning is probably the largest user of AD and any gains here could have a significant real-world impact. - For every AD graph an RL algorithm must be run to solve for the elimination orderings. This can be quite computationally expensive, and RL is known to be sensitive to hyperparameters and to take quite a bit of tuning per environment. [1] Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients. William Moses and Valentin Churavy. Neural Information Processing Systems (NeurIPS), 2020. [2] LAGrad: Statically Optimized Differentiable Programming in MLIR. Mai Jacob Peng and Christophe Dubach. International Conference on Compiler Construction (CC), 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: - How challenging would it be to directly optimize for metrics like execution time or memory usage rather than the number of multiplications? This seems like an important direction for making the approach more practical. - Could you provide more details on how the random functions f and g were generated in the text? - Were hyperparameter sweeps performed per graph? Do optimal hyperparameters for the RL agent change depending on the structure of the graph? How might we overcome this limitation? - Why was JAX chosen instead of targeting more powerful IRs like MLIR? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors are forthcoming about the main limitations and highlight important areas for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, in particular for the additional references, which we consider a useful contribution to shape the future of our project. We agree with the reviewer's criticism regarding the use of the theoretical number of multiplications as a proxy for actual runtime improvements. Still, we want to emphasize that this somewhat crude approach already leads to several measurable improvements. Thus, we are confident that when optimizing for actual runtime and memory access patterns, our results will only further improve. Achieving this with AlphaGrad would require an efficient hardware performance model, which we did not have available at the time of writing. We will leave this for future work, as it is a promising research direction in its own right. To answer **Q1**: Implementing new reward mechanisms such as runtime and memory accesses will not be too hard, since it just requires measuring the quantity of interest either directly or using one of the aforementioned hardware models. The biggest challenge lies in doing so efficiently and in a statistically robust way, since runtime and memory consumption measurements are inherently noisy. We believe that by expanding our method to incorporate methods from distributional RL, we can alleviate this issue. Given the additional overhead of implementing the hardware models, reward measurement mechanisms and efficient implementations thereof, we believe that this would be out of the scope of this paper. We partially agree with the statement that our baselines are limited with respect to state-of-the-art methods such as Enzyme and LAGrad. After a careful examination of both papers, we find that they operate on a different optimization level that is complementary to AlphaGrad. To prevent any misunderstandings upfront, we see this as a big advantage rather than a critique. 
LAGrad uses static optimizations to create and then optimize the differentiated code at the MLIR level. However, under the hood LAGrad is currently only able to leverage reverse-mode AD to create the derivative code. We conjecture that for an appropriate choice of function where reverse-mode AD performs worse than other AD algorithms (e.g. a function with few inputs but many outputs, as often found in computer graphics problems such as differentiable rendering), the static optimizations will in many cases not be able to bring reverse-mode to the same performance as other, better suited algorithms, e.g. forward-mode or minimal Markowitz. Thus, while being an amazing feat of engineering, the user is still required to select a specific AD algorithm for their use case from a very limited number of choices. Our method instead aims to find the optimal AD algorithm, of which forward-mode, reverse-mode and minimal Markowitz are only certain instantiations. Thus, AlphaGrad is a tool for algorithm search while LAGrad is a tool for algorithm optimization, even though, depending on the level of granularity, such a distinction sometimes might be fuzzy. Therefore, we do not entirely agree that including comparisons to LAGrad in our benchmarks would be fair to either side, since they operate on different levels. Nonetheless, we believe that combining both approaches offers a novel research direction to further extend their potential, thereby enabling even bigger gains. In particular, the handling of sparse operations seems quite intriguing. Regarding Enzyme, we completely agree with the reviewer that implementing AlphaGrad with an LLVM backend will surely be the more efficient choice. But due to our limited experience with LLVM and the better fit of JAX with other concurrent projects in our research group, we selected JAX and its intermediate representation, called a Jaxpression, as an appropriate backend that enables fast prototyping (this should answer **Q4**). 
Regarding the reviewer's recommendation, we will add LAGrad and Enzyme to the introductory and discussion sections to properly appreciate their contributions and propose new research directions. In the future, we will seriously consider a reimplementation of our method using LLVM/Enzyme. To answer **Q3** regarding the brittleness of RL: we agree that the performance of many RL algorithms can vary massively, which is why we selected PPO and AlphaZero as our primary agents. They are known to generalize across many different tasks with the same choice of hyperparameters. In fact, all experiments in this paper were obtained with the same set of hyperparameters, and we did not find any major improvement by performing a hyperparameter search. The only deviation is the best results for the MLP and transformer cases, where we used 250 Monte-Carlo simulations instead of the default of 50. Thus, we are confident that our training method works well for many other tasks as well, leveraging a plug-and-play style mechanism to improve AD algorithms. Finally, we want to address the reviewer's **Q2** on how the random functions $f$ and $g$ were generated in more detail. To generate the random functions, we implemented another custom JAX interpreter which consists of a repository of elemental operations such as $\cos$ and $\log$, but also matrix multiplications and array reshape operations. We then provide an interface to specify the number of input and output variables as well as the number of intermediate vertices that will be eliminated by the vertex elimination algorithm. The random function generator then randomly selects elemental functions from the repository. The number of functions that are unary, binary or perform accumulation or reshape operations can be controlled by adjusting the respective sampling probabilities. 
The function generator is part of the *alphagrad* package and can be found in the source code under *src/alphagrad/vertexgame/codegeneration/random/random\_codegenerator.py*. The actual implementations of the random functions can be found in the examples directory of the *graphax* package. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions and agreeing to update the manuscript to discuss future work on optimizing over other IRs. I want to update my score to reflect this; I think this work is important and a good first step towards applying RL to optimizing AD.
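For readers who do not want to dig into the source, the sampling scheme described above can be sketched in a few lines of plain Python. This is a hypothetical, scalar-only toy, not the actual generator (which also samples matrix multiplications and reshapes); the operation repository, probabilities, and all names below are illustrative.

```python
import math
import random

# Illustrative operation repository: bounded unary ops plus two binary ops.
UNARY = [math.sin, math.cos, math.tanh]
BINARY = [lambda a, b: a + b, lambda a, b: a * b]

def random_function(n_inputs, n_intermediates, p_binary=0.5, seed=0):
    """Sample a random straight-line scalar program; return it as a callable."""
    rng = random.Random(seed)
    trace = []  # list of (op, operand-index tuple) in SSA-like order
    for i in range(n_intermediates):
        avail = n_inputs + i  # any earlier value may serve as an operand
        if rng.random() < p_binary:
            op, args = rng.choice(BINARY), (rng.randrange(avail), rng.randrange(avail))
        else:
            op, args = rng.choice(UNARY), (rng.randrange(avail),)
        trace.append((op, args))

    def f(*xs):
        vals = list(xs)
        for op, args in trace:
            vals.append(op(*(vals[a] for a in args)))
        return vals[-1]  # last intermediate is the output

    return f

f = random_function(n_inputs=2, n_intermediates=6, seed=0)
print(f(0.3, 1.2))  # deterministic for a fixed seed
```

Adjusting `p_binary` (and, in the real generator, per-operation sampling probabilities) controls the mix of unary, binary, and accumulation/reshape operations.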
NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction
Accept (poster)
Summary: This paper focuses on improving SDF-based volume rendering and proposes a new pipeline to address issues stemming from SDF-to-density conversion and geometric regularization. First, it changes a global scale parameter to local adaptive value, allowing more flexible density values to be converted from SDF. Second, the method proposes a novel loss function to align the maximum probability distance in volume rendering with the zero-level set of the SDF representation. Third, the paper claims that SDF regularization may be too strong to allow flexible topological changes and thus proposes a two-stage training process. The coarse stage operates similarly to a density field without strong constraints, while the refinement stage then encourages enhanced smoothness. Strengths: 1. The proposed improvements are well-motivated and technically sound in general. 2. The paper is well-written and easy to follow overall. It's great to have Figure 3 for illustration. 3. The paper appropriately mentions related work that shares a similar idea and the problems of prior solutions. 4. The authors compare the proposed method on two datasets and demonstrate its superiority over several baseline methods. They also provide qualitative examples for the ablation study. Weaknesses: 1. It would be more convincing to include quantitative results and more qualitative examples for verifying the effectiveness of the proposed components in the ablation study. Currently, the ablation study is only conducted on a single example scene qualitatively. 2. The idea of using an adaptive scale is not quite new, as it is also seen in previous works [1]. It would be better to discuss the differences with existing works sharing a similar idea. Also, it would be better to provide more motivating examples or analysis on why an adaptive scale is important, such as when to use a large scale and when to use a small scale. 3. 
For the two-stage training, the motivation is to make the coarse stage more like a density field without strong constraints. The paper claims that "eliminating or downweighting any geometric constraints often results in an unnatural zero-level set." However, this point is not verified in experiments. It is also strongly recommended to add an ablation variant that does not have the eikonal loss (using estimated gradient) in the coarse stage. 4. It is common to use numerical gradients in calculating the eikonal loss. The benefit of sampling a step size should be analyzed and compared more thoroughly in the ablation studies. 5. While the variance of stochastic gradients is understandable, it is unclear why this ensures stability for large features and flexibility for complex details. More explanation and analysis would be helpful. 6. The proposed pipeline is built upon many techniques from TUVR. It would be better to directly compare with TUVR in the experiments. 7. It would be beneficial to show results on the commonly used DTU dataset as well. 8. The paper needs more careful proofreading and polishing: (a) Line 25: "fails to intricate geometric details" (b) Line 28: "is produced by is by" (c) Equation (11), the symbol 'n' is not explained. (d) Equation (8), the symbol 'd' is not explained. [1] Wang, Zian, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun Gao, Alexander Keller, Sanja Fidler, Thomas Müller, and Zan Gojcic. "Adaptive shells for efficient neural radiance field rendering." arXiv preprint arXiv:2311.10091 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Equation (8), why does it only penalize a single point instead of a region or multiple points (both SDF > 0 and SDF < 0)? 2. Line 273: What does "prior knowledge in terms of SDF" refer to? 3. Why would optimizing the color conditioned on the normal restrict topological change? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, it's mentioned in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful feedback. For more comprehensive ablation study, comparison with TUVR, results on DTU and more explanation of stochastic gradients, please refer to our global response. Apologies for the brevity due to word constraints. ### **Motivation of Local Scale Factor** > Why an adaptive scale is important. We further illustrate the importance of the local scale in Figure 3 from our PDF file related to the global response, using a simple single plane scenario for explanation. This plane has a low-texture region on the left and a rich-texture region on the right. Without special constraints, the rendering weight should converge to a Dirac delta function at the surface in the richly textured region and form a scattered distribution in the weakly textured region. However, under the assumption of a global scale factor, all areas on the plane follow the same distribution which will be derived in the following sections. This means that both richly textured and weakly textured regions share the same density bias, preventing the surface from converging correctly to the richly textured surface with higher certainty. By introducing the local scale factor, the most significant difference is that the distribution of rendering weights along the ray is no longer uniform. The network can adaptively converge in the richly textured regions, and the density bias in these areas is no longer affected by the low-texture regions. In Adaptive shells, their focus is primarily on rendering quality and they did not evaluate surface reconstruction metrics. A straightforward application to surface reconstruction may lead to issues such as increased density bias. We tackle this by implementing special designs to ensure effectiveness, such as gradually scheduling the lower bound of the scale factor to correct relatively small biases and the explicit bias correction. 
### **Design of Explicit Bias Correction** > In Equation (8), why does it only penalize a single point instead of a region or multiple points (both SDF > 0 and SDF < 0)? Although aligning $t^*$ with the SDF zero-crossing is intuitive, our experiments showed that this approach requires special design considerations. We attempted to constrain the SDF values both before and after $t^\*$, which led to very strange optimized surfaces, likely due to overly strong constraints on the SDF. We have shown the visual result in Figure 3 from our PDF file related to the global response. The key to our method's design lies in correcting large relative bias with explicit loss functions, while smaller biases are corrected by gradually scheduling the lower bound of the scale factor. We empirically found that visible surface errors are always caused by $t^\*$ being before the SDF zero-crossing point $t^0$. Therefore, we penalize the SDF value just after $t^\*$ to trend towards negative values, which prevents $t^*$ from being before $t^0$. For the case where $t^\*$ is after $t^0$, we can directly address it by gradually scheduling the lower bound of the scale factor, as the bias in this situation is relatively small. We provide a mathematical explanation under the assumption of a sufficiently small local surface. In the VolSDF framework we use, if the angle between the ray and the plane is $\theta$, and the distance from the ray's origin to the plane is $d$, then the SDF along the ray can be expressed as $f(r(t)) = -t\sin\theta + d$. By setting the partial derivative of the rendering weight with respect to $t$ to zero, we can directly obtain a closed-form solution for $t^\*$: $$ t^*=\frac{s\ln (k) + d}{\sin\theta}, $$ where $k=2\sin\theta$ if $\sin \theta \leq 0.5$, and $k=(2+\sin\theta-\sqrt{\sin^2\theta+4\sin\theta})^{-1}$ if $\sin \theta > 0.5$. In this case, the closed-form solution for $t^0$ can be directly obtained as $t^0 = d/\sin\theta$. 
Then the distance between $t^*$ and $t^0$ is $$ t^{\Delta} = t^* - t^0 = \frac{\ln k}{\sin\theta}s. $$ When $t^\Delta < 0$, $t^\*$ is before $t^0$, and when $t^\Delta > 0$, $t^\*$ is after $t^0$. In our PDF file related to the global response, we visualized the values of $t^\Delta$ under different $\theta$ in Figure 2. We found that when $t^\*$ is before $t^0$ ($\sin\theta$ less than 0.5), the relative bias is significantly greater than when $t^\*$ is after $t^0$. This aligns with our experimental findings, where visible erroneous surfaces are primarily caused by $t^*$ preceding $t^0$. Therefore, we penalize the SDF value just after $t^\*$ to prevent $t^\*$ from being before $t^0$. We will include the complete derivation in the appendix. ### **Impact of Color Conditioning on Normal** > Why would optimizing the color conditioned on the normal restrict topological change? In Figure 6 from our PDF file related to the global response, we experimentally observed that in indoor scenes, when color conditioning on normals is applied, the optimization process becomes very slow, which affects the optimization results. However, this issue is nearly absent in outdoor scenes. We believe this is primarily due to the convergence behavior of the scale factor. Indoor scenes often have many low-texture areas, leading to larger scale factors and a more scattered distribution. Additionally, color conditioned on normals, which is intended for geometry disentanglement as mentioned in IDR and is only reasonable for surface points, results in many details being optimized to incorrect positions before convergence is achieved. However, our use of stochastic normals helps mitigate these erroneous surfaces. For detailed regions, the estimated normals have greater variance, which alleviates the impact of incorrect surfaces. 
This is also demonstrated in our ablation experiments, as shown in the table below: | F-score | D | E | Full | |:-:|:-:|:-:|:-:| | Courthouse (outdoor) | 0.11 | 0.11 | 0.21 | | Meetingroom (indoor) | 0.35 | 0.21 | 0.43 | --- Rebuttal Comment 1.1: Title: Please let us know if your concerns have been addressed Comment: Dear Reviewer EWQD, Thank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address. Thank you for your time.
Summary: The paper improves NeRF-based surface reconstruction in two aspects: an adaptive scale $s$ and a bias correction loss to reduce the bias, which encourages the SDF to become negative after the maximum at $t^*$. Strengths: The position-$\mathbf{r}(t)$-based scale $s$ increases the degrees of freedom, which has the potential to improve accuracy. The bias correction loss encourages the SDF to be negative after the maximum-weight point. Weaknesses: The bias correction looks partial. It doesn't punish negative SDF before $t^*$, so it is incomplete. The experiments are not comprehensive. TUVR is not compared, which also aims to reduce the bias. Technical Quality: 3 Clarity: 3 Questions for Authors: While training $\mathcal{L}_{bias}$, how to evaluate $t^*$? Perhaps it is an iterative procedure to obtain the maximum value, but can you embed the iterations into training? How do you avoid the maximum $t^*$ falling behind the zero iso-surface? Why only use F-score for validation? What about other scores, such as Chamfer distance? NeRF-based training is slow, but the training times are not reported in the paper. Does it become slower due to the bias correction? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation section needs more details about the reconstruction of thin structures, sharp features, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We hope the following response can address your concerns. ### **Design of Explicit Bias Correction** > The bias correction looks partial. It doesn't punish negative SDF before $t^*$, so it is incomplete. As you mentioned, although aligning $t^*$ with the SDF zero-crossing is intuitive, our experiments showed that this approach requires special design considerations. We tried simply constraining $f(t^*)$ to approach zero but found that visible surface defects persisted. We also attempted to constrain the SDF values both before and after $t^*$, which led to very strange optimized surfaces, likely due to overly strong constraints on the SDF. We have shown the visual result in Figure 3 of our PDF file related to the global response. The key to our method's design lies in correcting large relative bias with explicit loss functions, while smaller biases are corrected by gradually scheduling the lower bound of the scale factor. We empirically found that visible surface errors are always caused by $t^*$ lying before the SDF zero-crossing point $t^0$. Therefore, we penalize the SDF value just after $t^*$ so that it trends towards negative values, which prevents $t^*$ from being before $t^0$. For the case where $t^*$ is after $t^0$, we can directly address it by gradually scheduling the lower bound of the scale factor, as the bias in this situation is relatively small. We provided a mathematical explanation under the assumption of a sufficiently small local surface. In the VolSDF framework we use, if the angle between the ray and the plane is $\theta$, and the distance from the ray's origin to the plane is $d$, then the distribution of the SDF along the ray can be expressed as $f(r(t)) = -t\sin\theta + d$. 
By setting the partial derivative of the rendering weight with respect to $t$ to zero, we can directly obtain a closed-form solution for $t^\*$: $$ t^*=\frac{s\ln (k) + d}{\sin\theta}, $$ where $k=2\sin\theta$ if $\sin \theta \leq 0.5$, and $k=(2+\sin\theta-\sqrt{\sin^2\theta+4\sin\theta})^{-1}$ if $\sin \theta > 0.5$. In this case, the closed-form solution for $t^0$ can be directly obtained as $t^0 = d/\sin\theta$. Then the distance between $t^*$ and $t^0$ is $$ t^{\Delta} = t^* - t^0 = \frac{\ln k }{\sin\theta}s. $$ When $t^\Delta < 0$, $t^\*$ is before $t^0$, and when $t^\Delta > 0$, $t^\*$ is after $t^0$. In our PDF file related to the global response, we visualized the values of $t^\Delta$ under different $\theta$ in Figure 2. We found that when $t^\*$ is before $t^0$ ($\sin\theta$ less than 0.5), the relative bias is significantly greater than when $t^\*$ is after $t^0$. This aligns with our experimental findings, where visible erroneous surfaces are primarily caused by $t^*$ preceding $t^0$. Therefore, we penalize the SDF value just after $t^\*$ to prevent $t^\*$ being before $t^0$. We will include the complete derivation in the appendix. ### **Comparison with TUVR** > TUVR is not compared, which also aims to reduce the bias. Since TUVR is not open-source and only reports metrics on the DTU dataset, we reproduced its unbiased SDF-to-density technique and combined it with a hash grid to create the TUVR-Grid method for comparison. Additionally, we compared our results on the DTU dataset with the reported results in the paper that did not use MVS priors (TUVR-MLP). The results are shown in the table below. 
Results on Tanks and Temples dataset: | Methods | TUVR-Grid | Ours | |:---:|:---:|:--:| | F-score | 0.33 | 0.51 | Results on DTU dataset: | Methods | TUVR-Grid | TUVR-MLP | Ours | |:---:|:---:|:--:|:--:| | CD↓ | 0.89 | 0.71 | 0.60 | The performance of TUVR is not ideal on all datasets because its unbiased nature is not fully guaranteed, and it is also somewhat affected by over-regularization. ### **Evaluation of $t^*$** > how to evaluate $t^*$? In lines 197-198 of the paper, we mentioned that we directly use the sampled point with the highest weight as the point where $t^\*$ is located. This is because we perform importance sampling along the ray before rendering, and the sampled points are already concentrated in areas with higher weights. Therefore, there is no need for additional iterations to find $t^*$. Experimentally, this approach does not significantly impact the results. ### **Other Evaluation Metrics** > Why only use F-score for validation? What about other scores, such as Chamfer distance. On the Tanks and Temples benchmark, previous work only reported the F-score as a surface quality metric. We compared our F-score results with the reported F-scores from other methods, although we also evaluated additional metrics using our method. However, these additional metrics cannot be directly compared with other methods. Additionally, we compared the PSNR metric for image reconstruction on the Tanks and Temples dataset and the Chamfer distance metric on the DTU dataset. The results are shown in the table below: | Methods | NeuS | Neuralangelo-22 | Ours-19 | Ours-22 | |:-------:|:----:|:----:|:-------:|:-------:| | PSNR | 24.58 | 27.24 | 26.90 | 27.67 | | Methods | NeuS | TUVR-Grid | TUVR-MLP | Neuralangelo | Ours | |:---:|:---:|:---:|:--:|:--:|:--:| | CD↓ | 0.84 | 0.82 | 0.71 | 0.61 | 0.60 | ### **Impact of Explicit Bias Correction on Training Time** > Does it become slower due to the bias correction? 
After incorporating the explicit bias correction design, the time per iteration increased from approximately 59 ms to 63 ms. Over 300,000 iterations, this results in an additional 20 minutes of training time. This is primarily because we need an additional network inference to compute the SDF slightly beyond $t^*$. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' feedback. After reading the rebuttal and other reviews, I am glad to increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful consideration. I truly appreciate your willingness to increase your score!
Summary: This paper introduces an innovative two-stage framework, NeuRodin, for neural surface reconstruction that significantly improves upon previous SDF-based methods. Locally adaptive parameters for SDF-to-density conversion and a novel explicit bias correction are introduced to enhance the fine reconstruction of SDF surfaces. Strengths: 1. The paper thoroughly elaborates on the shortcomings of current SDF-based approaches and provides insightful and impactful plug-and-play solutions, such as SDF-to-density conversion and explicit bias correction. 2. The paper is clearly written with detailed explanations of the methodology, the challenges addressed, and the solutions proposed. 3. Strong experimental results demonstrate that the proposed framework can generate high-quality surface reconstruction results. Weaknesses: 1. The two-stage optimization process proposed in the paper seems somewhat cumbersome, and finding a balance between over-regularization and fine reconstruction is challenging. This detailed design somewhat weakens the depth of discussion on the essential issues of SDF reconstruction. Easily portable modification modules, such as explicit bias correction and SDF-to-density conversion, could have a more far-reaching impact. 2. The difference between the ideal situation discussed by the authors in Explicit Bias Correction and the SDF distribution in actual training seems to have a similar mechanism to the convergence difficulty of shape adjustment discussed in [1]. Is it possible to avoid the biased situation shown in Figure 3 by introducing depth or other supervision information? 3. In the experimental section, Table 2 does not explain the measurement indicator of the data. Combined with Appendix 5, it can be inferred that the indicator is the percentage F-Score. 4. There is a lack of comparison of image reconstruction results. 
Since color is used in the training process and the baseline method in the paper includes image reconstruction comparisons, why does this paper not provide qualitative or quantitative results of image reconstruction? [1] Yang, Huizong, et al. "Stabilizing the Optimization of Neural Signed Distance Functions and Finer Shape Representation." *Advances in Neural Information Processing Systems* 37 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of the proposed model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. The issues we mentioned regarding SDF-based rendering, such as the bias problem, might indeed be addressed through improved SDF-to-density modeling. While it is possible to tackle these issues directly through mathematical modeling, we believe that our detailed design can be easily implemented within existing frameworks and significantly improves surface quality. This also provides considerable value. ### **Impact of Additional Signal Supervision** > Is it possible to avoid the biased situation shown in Figure 3 by introducing depth or other supervision information? We tested monocular depth and normal supervision on an indoor scene from ScanNet++ to examine the density bias. We found that the density bias no longer caused significant surface errors. This aligns with intuition, as the impact of density bias depends on the convergence of the surface. Any SDF-to-density design ensures that when the scale factor approaches zero, the weights along the ray can approximate a Dirac distribution at the SDF zero-crossing points. We believe that the bias issue is caused by photometric ambiguity, meaning that the surface cannot be accurately determined based on color alone. This is consistent with the outlier distribution in MVS methods. From the MVS perspective, the scattered distribution in low-texture areas of MVS results is similar to the weight distribution along the ray; such points are treated as outliers and removed. However, our goal is dense surface reconstruction, and we need to uniquely determine the surface within this scattered distribution. Therefore, we select the part with the highest weight as the most reliable surface. When we have more reliable supervision signals, such as monocular depth and normals, these outliers decrease because the model can use these additional signals to accurately determine the surface. 
### **Image Reconstruction Comparison** > There is a lack of comparison of image reconstruction results. Since color is used in the training process and the baseline method in the paper includes image reconstruction comparisons, why does this paper not provide qualitative or quantitative results of image reconstruction? While our design is primarily intended for surface reconstruction tasks, we have also compared against NeuS and Neuralangelo using the image quality evaluation methods from Neuralangelo, across different parameter scales. | Methods | NeuS | Neuralangelo-22 | Ours-19 | Ours-22 | |:-------:|:----:|:----:|:-------:|:-------:| | PSNR | 24.58 | 27.24 | 26.90 | 27.67 | As shown, with fewer parameters, our method (Ours-19) achieves image reconstruction quality close to that of Neuralangelo. With a larger number of parameters, our method (Ours-22) surpasses Neuralangelo in terms of image reconstruction quality. ### **Addressing Typographical Errors and Omissions** > In the experimental section, Table 2 does not explain the measurement indicator of the data. Combined with Appendix 5, it can be inferred that the indicator is the percentage F-Score. Thank you for your valuable feedback and for pointing out the typographical errors and missing elements in our manuscript. We sincerely apologize for these mistakes. We will carefully revise the manuscript to correct these issues and ensure that it meets the highest standards of clarity and accuracy. --- Rebuttal Comment 1.1: Comment: After reviewing the authors' responses and supplementary materials, I appreciated the additional experiments and explanations provided during the rebuttal phase. The concerns regarding the supervision of the additional signal were addressed, and the image rendering results demonstrated the effectiveness of the method, prompting me to improve my initial score. 
--- Reply to Comment 1.1.1: Comment: Thank you very much for your prompt reply and for increasing your score!
Summary: This article introduces NeuRodin, a Signed Distance Function (SDF)-to-density-based neural surface reconstruction method. The authors summarize the two main factors, SDF-to-density representation and geometric regularization, that cause low-quality performance in SDF-based methods, and improve them in the pipeline. They argue that the widely used global scale parameter may cause identical density on the same SDF level set. The paper attempts to solve this problem by using a non-linear mapping to obtain this scale parameter and adding a density bias. For the geometric regularization, they identify causes of over-smoothness such as the Eikonal loss and smoothness constraints. Stochastic-step numerical gradient estimation and a two-stage training strategy are employed to address the over-regularization issue. The results outperform several baselines, and ablation studies show the effectiveness of the modules. Strengths: The mentioned factors may theoretically cause the low-quality performance. The pipeline is clear, and the theories are reasonable. The strategies and improvements enhance the performance of the model. The depth map examples shown also improve with the proposed strategies. This could be useful for other depth estimation tasks. Weaknesses: The authors only provide the F-score for the evaluation metrics, except for some single scenes in the appendix. It would be good to provide more, such as PSNR, SSIM, etc. There are some minor mistakes in Table 5. The underline format in the caption is not consistent with the italics format in the table. Is it possible to compute the proposed method under a grid resolution of $2^{22}$ just as Neuralangelo did? This could be a fairer comparison in Table 1. It would be good to discuss the runtime compared with other baselines. Technical Quality: 3 Clarity: 3 Questions for Authors: Is it possible to compute the proposed method under a grid resolution of $2^{22}$ just as Neuralangelo did? This could be a fairer comparison in Table 1. 
It would be good to discuss the runtime compared with other baselines. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors only provide the F-score for the evaluation metrics, except for some single scenes in the appendix. It would be good to provide more, such as PSNR, SSIM, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. Below, we address the concerns raised in the review. ### **Image Reconstruction Comparison** > The authors only provide the F-score for the evaluation metrics, except for some single scenes in the appendix. It would be good to provide more, such as PSNR, SSIM, etc. While our design is primarily intended for surface reconstruction tasks, we have also compared against NeuS and Neuralangelo using the image quality evaluation methods from Neuralangelo, across different parameter scales. | Methods | NeuS | Neuralangelo-22 | Ours-19 | Ours-22 | |:-------:|:----:|:----:|:-------:|:-------:| | PSNR | 24.58 | 27.24 | 26.90 | 27.67 | As shown, with fewer parameters, our method (Ours-19) achieves image reconstruction quality close to that of Neuralangelo. With a larger number of parameters, our method (Ours-22) surpasses Neuralangelo in terms of image reconstruction quality. ### **Hash Grid with a Dictionary Size of $2^{22}$** > Is it possible to compute the proposed method under a grid resolution of $2^{22}$ just as Neuralangelo did? We also tested our method on the Tanks and Temples dataset with a hash dictionary size of $2^{22}$, as shown in the table below. | Methods | Neuralangelo-19 | Neuralangelo-22 | Ours-19 | Ours-22 | |:-------:|:---------------:|:---------------:|:-------:|:-------:| | F-score | 0.43 | 0.50 | 0.51 | 0.50 | We found that increasing the number of parameters did not significantly improve the results but instead dramatically increased the running time (from 18 hours to 48 hours). The redundant parameters (a dictionary size of $2^{22}$ results in over a hundred million parameters) did not enhance surface accuracy but did help fit the training images better (PSNR from 26.90 to 27.67). Neuralangelo addresses some surface defects through over-parameterization. However, we achieve the same effect with significantly fewer parameters, thanks to our specialized design. 
Furthermore, we tested a hash dictionary size of $2^{22}$ in a large-scale scenario from the BlendedMVS dataset. By significantly increasing the number of parameters, the model is able to capture fine-grained details in images, thereby preserving more details in the reconstructed surface. In our PDF file related to the global response, Figure 1 displays the quantitative comparison. ### **Training time** > It would be good to discuss the runtime compared with other baselines. As mentioned in lines 504-506 of the original text, on the Tanks and Temples dataset, Neuralangelo requires nearly 48 GPU hours for optimization, whereas our method only requires approximately 18 GPU hours. This is due to our specialized design for addressing surface defects, which allows us to maintain performance without requiring as many parameters as Neuralangelo. --- Rebuttal Comment 1.1: Title: Please let us know if your concerns have been addressed Comment: Dear Reviewer o1gj, Thank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address. Thank you for your time.
Rebuttal 1: Rebuttal: We express our sincere gratitude to all the reviewers for their valuable insights and constructive feedback on our work. We truly appreciate your dedicated efforts and the time you have devoted to evaluating our work. Here we address some common concerns raised by reviewers. We have also provided a PDF with visual results and charts to assist in illustrating and explaining these issues. ### **Image Reconstruction Comparison** While our design is primarily intended for surface reconstruction tasks, we have also compared against NeuS and Neuralangelo using the image quality evaluation methods from Neuralangelo, across different parameter scales. | Methods | NeuS | Neuralangelo-22 | Ours-19 | Ours-22 | |:-------:|:----:|:----:|:-------:|:-------:| | PSNR | 24.58 | 27.24 | 26.90 | 27.67 | As shown, with fewer parameters, our method (Ours-19) achieves image reconstruction quality close to that of Neuralangelo. With a larger number of parameters, our method (Ours-22) surpasses Neuralangelo in terms of image reconstruction quality. ### **Comparison with TUVR** Since TUVR is not open-source and only reports metrics on the DTU dataset, we reproduced its unbiased SDF-to-density technique and combined it with a hash grid to create the TUVR-Grid method for comparison. Additionally, we compared our results on the DTU dataset with the results reported in the paper without MVS priors (TUVR-MLP). The results are shown in the tables below. Results on Tanks and Temples dataset: | Methods | TUVR-Grid | Ours | |:-:|:-:|:-:| | F-score | 0.33 | 0.51 | Results on DTU dataset: | Methods | TUVR-Grid | TUVR-MLP | Ours | |:-:|:-:|:-:|:-:| | CD↓ | 0.89 | 0.71 | 0.60 | The performance of TUVR is not ideal on all datasets because its unbiased nature is not fully guaranteed, and it is also somewhat affected by over-regularization. 
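For context, the PSNR figures reported above follow the standard definition $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$. The following is a minimal illustrative sketch only, not the exact Neuralangelo evaluation script:

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 0.5 on every pixel gives MSE = 0.25 -> PSNR ~ 6.02 dB.
print(psnr([0.0] * 4, [0.5] * 4))
```

In practice the metric is averaged over all test views, so a gain of under one dB (e.g. 26.90 vs 27.24) already reflects a consistent difference in reconstruction fidelity.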
### **Comparison on DTU Benchmark** Although our design is not specifically tailored for single-object datasets, we validated our method on the DTU benchmark. The results are shown in the table below. We found that, even without special parameter tuning, our method achieves results comparable to Neuralangelo and surpasses other baseline methods such as TUVR. | Methods | NeuS | TUVR-Grid | TUVR-MLP | Neuralangelo | Ours | |:-:|:-:|:-:|:-:|:-:|:-:| | CD↓ | 0.84 | 0.82 | 0.71 | 0.61 | 0.60 | ### **More Ablation Study** We conducted additional ablation experiments on the Tanks and Temples training set, and we included two more cases to verify the role of the Eikonal loss in maintaining a natural zero level set during the first stage, as well as the impact of color conditioning on normals. The five cases are: A: Without the local scale factor B: Changing the stage 1 estimated gradient to the analytical gradient C: Without explicit bias correction D: Without stage 1 Eikonal loss, but with color conditioning on the estimated normal E: Without stage 1 Eikonal loss, but with color conditioning on the analytical normal The quantitative results are as follows: | Case | A | B | C | D | E | Full Model | |:-:|:-:|:-:|:-:|:-:|:-:|:-:| | F-score | 0.49 | 0.42 | 0.49 | 0.42 | 0.40 | 0.51 | The results from cases A, B, and C demonstrate the effectiveness of the techniques we proposed. The results from cases D and E indicate the necessity of applying the Eikonal loss during the first stage; they numerically demonstrate that if this loss is removed, the following issues arise: 1. Noisy surfaces behind the correct surface. 2. Some large-scale areas display challenging-to-fill holes in the subsequent stage. 3. With this poor SDF initialization from stage one, the stage 2 optimization is not robust. 
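The Chamfer distance (CD↓) reported above can be illustrated with a minimal sketch. This uses one common symmetric definition (the sum of mean nearest-neighbor distances in both directions) and is not the exact DTU evaluation protocol, which additionally involves visibility masks and outlier filtering:

```python
import math

def chamfer(points_a, points_b):
    """Symmetric Chamfer distance between two point sets (lists of tuples):
    mean nearest-neighbor distance from A to B plus the same from B to A."""
    def directed(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return directed(points_a, points_b) + directed(points_b, points_a)

a = [(0.0, 0.0), (1.0, 0.0)]
print(chamfer(a, a))                         # identical sets -> 0.0
print(chamfer([(0.0, 0.0)], [(0.0, 1.0)]))   # unit offset -> 2.0 (1.0 each way)
```

Lower values indicate that the reconstructed surface samples lie closer to the ground-truth scan, which is why CD is reported with a downward arrow.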
### **Training time** As mentioned in lines 504-506 of the original text, on the Tanks and Temples dataset, Neuralangelo requires nearly 48 GPU hours for optimization, whereas our method only requires approximately 18 GPU hours. This is due to our specialized design for addressing surface defects, which allows us to maintain performance without requiring as many parameters as Neuralangelo. ### **More Explanation of Stochastic Gradients** As discussed in the main paper, our stochastic gradient estimation introduces some uncertainty into the true normals. For large-scale features, the Eikonal loss with varying step sizes can still be successfully minimized (since SDF near a large-scale plane should satisfy the Eikonal equation for different step sizes). In other words, the variance of stochastic gradients is small for large-scale features, making it easier for the model to minimize the Eikonal loss. However, for fine details, the random step sizes lead to high variance in the estimated normals, which reduces the impact of the Eikonal loss and makes it more challenging for the model to minimize samples with high variance. Pdf: /pdf/de5311334fd9f2455193ab9c381db89e238d5abd.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability
Accept (spotlight)
Summary: This paper aims to provide a unified framework for deriving lower bounds of interactive decision-making problems using an extended technique adapted from Chen et al., 2016 [11], which was established only for non-interactive estimation problems. The paper offers a quite general formulation of the minimax value of interactive decision-making that encompasses both previously known and novel formulations. In particular, the paper demonstrates that such a lower bounding technique can be used to rederive the lower bounds from Foster et al., 2023 [25] and provide tighter lower and upper bounds via a new complexity named "decision coverage." Strengths: I find the general formulation and the technique based on percentiles (instead of the expected value used in prior literature such as [25]) for deriving lower bounds to be quite appealing. Although it appears to me that the lower bounding techniques are essentially "abstracted out" from prior literature, such as [11] and [23, 25], the paper presents it in a quite clean manner. The paper also provides several new results, such as interactive parameter estimation and tighter bounds for interactive reward maximization. I generally enjoyed reading this paper and find it worthy of publishing at NeurIPS. I expect the paper to have a broad impact on the community. Weaknesses: My main complaint about this paper is that it contains many hand-waving arguments. I had to fill in many gaps in the proofs myself to verify and understand the results. I outline some specific comments below (some are minor, while others need additional justifications): 1. I'm not sure if it is appropriate to call Corollary 2 a "Generalized Fano’s inequality", as it is not clear to me that it implies the original Fano's inequality (though they are "qualitatively" equivalent, i.e., both require $I(M,X) \ll \log |\mathcal{M}|$ to arrive at a constant lower bound). 2. 
You claim in line 186 "Fano’s inequality, e.g., in the form of Corollary 2, cannot be used to prove Lemma 3." Is there an argument for why this is true? 3. Can you explain how Theorem 5 is instantiated from Theorem 1? It appears to me that its proof is completely self-contained without invoking any parts of Theorem 1. 4. Can you comment on how the prior technique for deriving Corollary 9 differs from your Fano's inequality-based argument? 5. Can you explain how $T^*(\mathcal{M},\Delta)$ characterizes the regret? Specifically, it does not appear to me that there is a simple way to express the regret in terms of $T^*(\mathcal{M},\Delta)$. Moreover, why does Theorem 4 imply the statement in line 282? 6. The proof of Theorem 10 is too sketchy. It is not clear to me that (12) shares the same structure as Proposition 8. Are you applying the minimax theorem here? 7. Theorem 10 looks quite trivial to me, as it does not depend on any information-theoretical structure of the model class (except Assumption 2). 8. Given point 5, I'm not sure how to interpret your Theorem 12 and how to compare it with [25]. It appears you are claiming this as a main result; can you provide any specific examples that demonstrate the improvement (i.e., explicitly computing the "decision coverage" for some classes)? 9. You claim that "Theorem 12 provides (polynomially) matching lower and upper bounds for learning M." Can't $\max\\{a,b\\}$ be arbitrarily smaller than $a \cdot b$? How should the "matching" be interpreted? 10. It appears to me the upper bound of Theorem 11 essentially converts the dependency on the size of the model class to that of the decision space. Given the similarity of the proof with that in [24], can you comment on whether the results from [24] recover your upper bound (with possible replacement of decision coverage with $\log|\Pi|$)? 11. 
Typos (there are many, but I only include what I still remember): - $D_f(a,b)$ was never properly defined, though one can guess it is for $D_f(\text{Bern}(a),\text{Bern}(b))$. - What is the $\alpha$ in line 873? Technical Quality: 2 Clarity: 3 Questions for Authors: See above. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and for dedicating time to review our paper! > Q1. I'm not sure if it's appropriate to call Corollary 2 "Generalized Fano’s inequality" In our proof of Theorem 1 on page 15, we actually prove Theorem D.1, which holds for a general quantile $\delta\in(0,1)$ rather than $1/2$. Similarly, Corollary 2 can be generalized to any $\delta$: when $I(M;X)+\log 2\leq (1-\delta)\cdot \log \frac{1}{\mu(M \in \mathcal{M} : L(M,x) < \Delta)}$, the Bayes risk $\geq\delta\Delta$. This precisely recovers the original Fano’s inequality when $\mu$ is uniform, $\mathcal{M}=\mathcal{X}, L(M,x)=\mathbb{1}(M\neq x), \Delta=1$. We will restate Corollary 2 for arbitrary $\delta$ and clearly explain the generalization. > Q2. You claim in line 186 "Fano’s inequality, e.g., in the form of Corollary 2, cannot be used to prove Lemma 3." Fano’s method and the “mixture vs mixture” method described in Lemma 3 are conceptually distinct due to the use of different divergences. In particular, Lemma 3 (and the mixture vs mixture method in general) uses the total variation distance, whereas Fano’s inequality uses KL-based mutual information. As a result, unless the distributions under consideration have favorable structure that allows one to bound the KL-based mutual information by the total variation distance, it does not appear to be possible to recover Lemma 3 from Corollary 2. We mention in passing that we are not aware of any results which recover guarantees based on the mixture vs. mixture method from the Fano method in prior work. > Q3. How is Theorem 5 instantiated from Theorem 1? The current proof of Theorem 5 is indeed self-contained (for completeness), but the idea is the same as Theorem 1, and it can indeed be proven as a corollary. To instantiate Theorem 1, we can choose the $f$-divergence to be the squared Hellinger distance, set $\mathbb{Q}=\mathbb{P}^{\overline{M},ALG}$, and choose $\mu$ to be the delta distribution on the model $M$. 
We will clarify this in the revision. > Q4. How does the prior technique for deriving Corollary 9 differ from your Fano's inequality-based argument? Prior work proves the regret lower bound in Corollary 9 using Assouad’s lemma (see e.g. Section 24 of [34]), which explicitly relies on the hypercube structure of the parameter space. By contrast, our proof of Corollary 9 is based on bounding the mutual information under a certain (uniform) prior and then applying Fano’s inequality. Conceptually, these two approaches appear to be fairly distinct (Assouad’s lemma often gives tight results when hypercube structure is available, while the Fano method is somewhat more general but can require more effort to apply, particularly in interactive settings; therefore, it is interesting to see that both lead to the same result here). > Q5. How does $T^\star(\mathcal{M},\Delta)$ characterize the regret? Why does Theorem 4 imply line 282? By the definition of $T^\star(\mathcal{M},\Delta)$, we have $$ \frac{1}{T}\mathrm{Reg}^\star_T=\sup\{ \Delta: T^\star(\mathcal{M},\Delta)\leq T \}. $$ For example, $T^\star(\mathcal{M},\Delta) \asymp \frac{C}{\Delta^2}$ implies that $\mathrm{Reg}^\star_T \asymp \sqrt{CT}$. Further, notice that Theorem 4 implies (under Assumption 3) $$ \mathrm{dec}^c_{\underline{\epsilon}(T)}(\mathcal{M}) \lesssim\frac{1}{T}\mathrm{Reg}^\star_T\lesssim \mathrm{dec}^c_{\bar{\epsilon}(T)}(\mathcal{M}) $$ The statement in line 282 then follows from the definition of $T^\star(\mathcal{M},\Delta)$ and $T^{\rm DEC}(\mathcal{M},\Delta)$. > Q6. The proof of Theorem 10 is too sketchy. Are you applying the minimax theorem here? Thank you for pointing this out. The current presentation of the proof focuses on highlighting the connection to Proposition 8, which is somewhat subtle and requires applying the minimax theorem. We will be sure to include more details in the final revision (there is also a more direct proof, which we are happy to include for completeness). > Q7. Theorem 10 looks quite trivial... 
Indeed, the quantity DC measures a certain inherent complexity of the decision space, and it does not depend on the information-theoretic structure of the model class. However, it does imply non-trivial lower bounds for various model classes: (1) Linear bandits: $\mathsf{DC}\gtrsim d$ (2) Unstructured contextual bandits with context space $\mathcal{C}$: $\mathsf{DC}\gtrsim |\mathcal{C}|\log|\mathcal{A}|$ (3) Tabular RL: $\mathsf{DC}\gtrsim |\mathcal{S}|$. Given that the proof of Theorem 10 only utilizes the structure of the decision space, DC should be regarded as a (non-trivial) complexity measure that is complementary to DEC (which captures the information-theoretic structure of the model class), in the sense that the lower bounds they provide are complementary, and together they also provide an upper bound (Theorem 11). > Q8. How to interpret your Theorem 12? Any specific examples? > Q9. Can't max{a,b} be arbitrarily smaller than a⋅b? How should the "matching" be interpreted? The main contribution of Theorem 12 is that the upper bound is at most the square of the lower bound (as $a\cdot b\leq \max(a,b)^2$). Therefore, for **any** convex model class, the sample complexity is completely determined by DEC and DC up to a square gap. Such a characterization of sample complexity for general decision making is new, and we believe it to be conceptually important. > Q10. Can you comment on whether the results from [24] recover your upper bound (with possible replacement of decision coverage with $\log|\Pi|$)? Our proof of Theorem 11 is essentially an adaptation of [24], with the goal of replacing $\log|\Pi|$ by the decision coverage in the upper bound. For general decision-making problems, $\log|\Pi|$ can be arbitrarily larger than the decision coverage, meaning that our upper bound represents a significant improvement. > Typos Thank you for pointing out these typos. Indeed, $D_f(a,b)$ is the abbreviation of $D_f(Bern(a),Bern(b))$, and $\alpha$ in line 873 is a typo. 
We will clarify these in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I maintain my current rating, favoring the acceptance of the paper.
Summary: This paper develops the notion of Interactive Statistical Decision Making (ISDM), and a generic lower bound (Theorem 1) which can be instantiated to capture the standard Le Cam, Assouad, and Fano methods as well as recent lower bound results in interactive decision making. The authors further use Theorem 1 to derive new sample complexity bounds (Theorems 12, 13) in interactive decision-making contexts (under some regularity conditions on the model class). Strengths: I think it's interesting to try to unify different lower bound approaches in order to gain new insights, so the premise of the paper is very intriguing to me. The submission further uses the general theorem to derive new bounds, based on the new notion of Decision Coverage, that tighten existing results in interactive decision making. Weaknesses: It may be due to my lack of expertise in the interactive decision-making literature, but I'm having trouble understanding and evaluating the new contributions, with intuition missing that I hoped the paper would give. Some of the contributions also seem a bit overclaimed. I hope the following constructive criticisms will help the authors improve the paper. - "Addressing remaining gap" and "complete characterization": the authors say that the new results (Theorems 12/13) "completely characterize" the sample complexity of interactive decision making for convex model classes, but the left and right hand sides differ quadratically (and ignore log factors). Am I misinterpreting the results? They aren't even tight up to constants. - Line 56: I don't quite understand why "unifying two-point vs mixture-vs-mixture methods" is a new contribution. Mixture-vs-mixture is clearly a generalization of singleton two-point methods, so why is the unification sold as a new contribution? - Generally I find the bound in Theorem 1 challenging to interpret, as opposed to two-point or mixture-vs-mixture methods that make intuitive sense. 
I'm struggling to understand the insight gained by the Theorem 1 formulation through unifying two-point methods and Fano with interactive decision making. To me, ISDM reads like a very generic minimax game formulation, and then Theorem 1 tacks on the reference distribution/ghost data $\mathbb{Q}$ in order to encompass existing techniques for interactive decision making. Lemma 3 then removes this extra component by declaring $\mathbb{Q}$ simply as the transcript of the algorithm, so what have we learned about standard statistical estimation through Theorem 1? In other words, my question is, why is Le Cam/Assouad/Fano even part of the paper, instead of focusing only on the new bounds in interactive decision making? What am I missing? - Theorem 12 is a bit too informal, hiding the log factors (especially without specifying log factors in what). It also wasn't actually proven -- I think the calculations using Assumptions 2 and 3 (and applied to Theorem 11) should be explicitly shown in the appendix. - Relatedly, again on the topic of needing more interpretation, I wish the submission explained how DC is better than the $\log |\mathcal{M}|$ factor in Line 282. Misc typos I spotted and other small comments: - First page, should define Perf as cost, so that minimization is the correct direction. - End of Line 123, should the asterisk in $M^\ast$ be removed? - Line 177, $L:\Theta \times \mathcal{A} \rightarrow \mathbb{R}_+$ right? Instead of the domain being $\Theta \times \Theta$? Just a consistency issue with the rest of the lemma. - Line 594, I presume "ISDM" instead of "ASDM"? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see weaknesses above. - Could the authors also comment on how the techniques apply/adapt if we want to show lower bounds on high-probability loss instead of expected loss? Two-point methods are straightforward (and even simpler than Le Cam) to use in a high-probability setting. 
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and constructive criticism! We will work on better presenting the intuition behind our results and revise our statements to make them more precise. > Regarding "addressing remaining gap" and "complete characterization": the lower and upper bounds differ quadratically ...They aren't even tight up to constants. Indeed, by using the term "complete characterization", we want to highlight that DC and DEC together provide a characterization of *polynomial learnability*. Although their lower and upper bounds may not offer a tight characterization of the $T$-round minimax risk, our one-round complexity measures, DC and DEC, elucidate what is necessary and sufficient for the estimation component (DC) and the estimation-to-decision-making component (DEC) in interactive decision making. DC is essential because it appears in both the upper and lower bounds (similar to DEC). In contrast, prior work [23][25] includes a $\log|\mathcal{M}|$ factor only in the upper bound, with no hope of it appearing in a lower bound. This discrepancy was identified as a major open question in previous studies [23][25]. To address this gap, we need to eliminate $\log|\mathcal{M}|$ and identify a quantity (DC) that appears in both upper and lower bounds. > Line 56: I don't quite understand why "unifying two-point vs mixture-vs-mixture methods" is a new contribution. Mixture-vs-mixture is clearly a generalization of singleton two-point methods, so why is the unification sold as a new contribution? Sorry for the confusion. Mixture-vs-mixture is indeed a generalization of the singleton two-point method, and we mean to say that our approach recovers both, rather than unifying them into one concept. Our intention is to convey that our method unifies mixture-vs-mixture (and thus the entire scope of two-point methods) with Fano's and Assouad's methods, where these three previously lacked a unified perspective. 
We will rewrite the sentence as "unify mixture-vs-mixture (and thus the entire scope of two-point methods) as a special case of our general algorithmic lower bound" to convey the right message. > Generally I find the bound in Theorem 1 challenging to interpret, ...why is Le Cam/Assouad/Fano even part of the paper, instead of focusing only on the new bounds in interactive decision making? Our goal is to integrate the Fano and Assouad methods (which provide dimensional insights but are typically challenging to apply in interactive settings and inherently follow a T-round analysis approach) with the DEC framework (which offers one-round complexity measures for interactive settings but previously lacked a dimensional factor/estimation component in the lower bound). Therefore, it is crucial to demonstrate that our framework can recover the non-interactive versions of these methods as a special case, serving as an important sanity check. Moreover, by utilizing the algorithmic Fano method (Proposition 8), we not only introduce the new complexity measure DC (Theorem 12), which includes the dimension in the lower bound, but also recover a tight lower bound for linear bandits (Corollary 9), which would otherwise not be captured by Theorem 12. This also illustrates the advantage of having a method that unifies interactive decision making with Fano's method. > Theorem 12 is a bit too informal, hiding the log factors (especially without specifying log factors in what). It also wasn't actually proven -- I think the calculations using Assumptions 2 and 3 (and applied to Theorem 11) should be explicitly shown in the appendix. Thanks! We will provide a detailed step-by-step explanation in the revised version. The minimax-optimal sample complexity $T^\star(\mathcal{M},\Delta)$ is just a way to better illustrate our minimax regret upper and lower bounds. 
Assumptions 2 and 3 are conditions to establish that the additional terms in the regret upper bound (Theorem 11) and lower bound (Theorem E.1) are of the same order as the dec terms. This ensures that one has  $$ \mathrm{dec}^c_{\underline{\epsilon}(T)}(\mathcal{M}) \lesssim\frac{1}{T}\mathrm{Reg}^\star_T\lesssim \mathrm{dec}^c_{\bar{\epsilon}(T)}(\mathcal{M}) $$ Theorem 12 is then proved by using the definitions of the sample complexities $T^\star(\mathcal{M},\Delta)$ and $T^{\rm DEC}(\mathcal{M},\Delta)$. > Relatedly, again on the topic of needing more interpretation, I wish the submission explained how DC is better than the factor $\log|\mathcal{M}|$ in Line 282. DC is fundamentally different from $\log|\mathcal{M}|$ because it arises in both the upper and the lower bounds, while $\log|\mathcal{M}|$ only appears in the upper bound (and there is no hope of having it appear in a lower bound in general). Comparing Theorem 12 and Line 282, one can observe that DC together with DEC gives a complete characterization of the sample complexity up to a quadratic order. In contrast, $\log|\mathcal{M}|$ is clearly not a good candidate for the estimation component in the lower bound, as it is the coarsest measure in learning theory. Moreover, for examples like convex bandits, the $\log|\mathcal{M}|$ factor is obviously too loose: it is $\exp(d)$ for the complete class of convex functions, while we know that $\mathrm{poly}(d)$ regret is achievable for convex bandits [23][32]. > Could the authors also comment on how the techniques apply/adapt if we want to show lower bounds on high-probability loss instead of expected loss? ... This is a great question! In our proof of Theorem 1, we essentially establish a lower bound for the quantile $\mathbb{P}_{M\sim\mu,\, X\sim\mathbb{P}^{M,\texttt{ALG}}} (L(M, X) \geq \Delta)$ (line 578). This quantile lower bound (Theorem D.1) can be directly used to prove a high-probability lower bound in a straightforward manner (for any error $\delta$). 
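Spelling this out (a sketch of the remark we plan to add, in the notation of the paper), the quantile bound reads
$$ \mathbb{P}_{M\sim\mu,\; X\sim\mathbb{P}^{M,\texttt{ALG}}}\big( L(M,X)\geq \Delta \big) \;\geq\; \delta, $$
and since the left-hand side is an average over $M\sim\mu$, at least one model $M$ in the support of $\mu$ satisfies $\mathbb{P}_{X\sim\mathbb{P}^{M,\texttt{ALG}}}( L(M,X)\geq \Delta )\geq \delta$; that is, no algorithm can guarantee loss below $\Delta$ with probability exceeding $1-\delta$ uniformly over the model class.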
> Misc typos I spotted and other small comments... Many thanks for pointing these out. We will be sure to clarify these issues in the final revision. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have some more comments. **"Complete characterization"**: please rephrase it in the final paper. **Fano/Assouad**: >Our goal is to integrate the Fano and Assouad methods (which provide dimensional insights but are typically challenging to apply in interactive settings and inherently follow a T-round analysis approach) with the DEC framework (which offers one-round complexity measures for interactive settings but previously lacked a dimensional factor/estimation component in the lower bound). Therefore, it is crucial to demonstrate that our framework can recover the non-interactive versions of these methods as a special case, serving as an important sanity check. I actually think this is a much more understandable and convincing message than the current version of the abstract and intro. "Goal is integrating Fano/Assouad, and we do a sanity check" does get around my concern that the ISDM problem and some lemmas looked artificial. **DC vs $\log|\mathcal{M}|$**: I was hoping for more quantitative intuition, actually comparing the two quantities. I understand that the difference is pretty analogous, in the qualitative sense, to something like VC dimension vs $\log |\mathcal{C}|$ in standard non-interactive PAC learning. I think a more quantitative discussion would strengthen the paper. But even absent that, emphasizing the qualitative difference more clearly would be useful. **High probability**: Thanks for confirming my guess. I wasn't sure about the quantile, since I didn't have time to read the proofs thoroughly. I think this would be a good remark to put in the paper. --- Reply to Comment 1.1.1: Comment: Thank you for the comments! 
In the revision, we will add a more detailed discussion regarding the "characterization," the high-probability version of the lower bounds, the integration of Fano's method, and DC vs $\log|\mathcal{M}|$. > DC vs $\log|\mathcal{M}|$: I was hoping for more quantitative intuition, actually comparing the two quantities. Regarding "DC vs $\log |\mathcal{M}|$": Quantitatively, we always have $\mathsf{DC}_\Delta(\mathcal{M})\leq \log|\mathcal{M}|$. Specifically, $p_\Delta^\star=\sup_p\inf_M p(\pi:g^M(\pi)\leq \Delta)$ is lower bounded by $1/|\mathcal{M}|$ when $p$ is chosen as the induced distribution of $\pi_M$ for $M\sim \textup{uniform}(\mathcal{M})$. Thus, $\mathsf{DC}_\Delta(\mathcal{M})= \log(1/p_\Delta^\star) \leq \log|\mathcal{M}|$. From this perspective, their relation is indeed analogous to VC dimension vs $\log|\mathcal{C}|$ for PAC learning. We will discuss this more thoroughly in the revision.
Summary: This paper proposes a unified framework for lower-bound methods in statistical estimation and interactive decision-making. The authors integrate classical lower bound techniques (Fano's inequality, Le Cam's method, Assouad's lemma) with recent minimax lower bounds for interactive decision-making (Decision-Estimation Coefficient). The framework is based on a general algorithmic lower bound method and introduces a novel complexity measure, decision coverage. The paper also has a lot of other results, including the unification of classical and interactive methods, the generalization of Fano's separation condition, and the derivation of new lower bounds for interactive decision-making. Strengths: * The framework developed in the paper is valid for a very general class of problems, unifies classical lower bound techniques (Fano's inequality, Le Cam's method, and Assouad's lemma), and provides a comprehensive framework for statistical estimation and decision making. * Lots of other contributions: incorporating the previous work on DEC into the framework of this paper introduces a new complexity measure called decision coverage, which complements the previous DEC lower bounds. * Except for a few typos, the paper is well-written. * Overall, it is a solid paper, and the contributions are significant. Weaknesses: * I don't find any major technical weakness in this work. The work is good but a bit hard to follow for someone who is a non-expert in this area. Most of the points I highlight below are a bunch of typos that I found and clarifications that I think it's good to include. * Line 123, I think it should be $L(M,X)$ instead of $L(M^*,X)$ * The proof of Corollary 2 is skipped. It involves 3-4 steps, and therefore I don't think it is trivial, especially for those who are not experts in dealing with these inequalities. * There seem to be a lot of (minor) typos in Lemma 3 and its proof. 1. 
Firstly, it's better to clarify whether the sets $\Theta_0, \Theta_1$ are required to be distinct or not in the Lemma statement. 2. ASDM is not defined in the first line of the proof. 3. I think the second equality in Line 602 will contain a factor of $1/2$. Can the authors also clarify whether the next step follows by data processing? 4. In line 603, $d_{3/4}(\cdot,\cdot)$ is not defined. Maybe the authors mean $d_{f, 3/4}(\cdot,\cdot)$. Please also explain why that inequality follows; is it by the choice of $\Delta$, so that Theorem 1 can be applied? 5. There seems to be a typo in the subscripts of the expectation in line 604, third inequality. * I guess in Line 218 Eq. 9, $p_{out}$ is not defined. * I don't think $\pi_{out}$ in Line 226 is defined before (in the statement of Theorem 5). * I would appreciate it if the authors mentioned where the realizability assumption (Assumption 1) is needed and the issues that arise in the agnostic setting, i.e., when realizability does not hold. Also, please provide examples (or references) of well-posed model classes in Assumption 2. * It is not clear to me why there exists $M$ in the model class, i.e., why is the supremum achieved here by some $M$ in the proof of Theorem 5. * Line 329, I don't think $\mathbb{V}$ is defined. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Essentially no broader societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and for dedicating time to review our paper! We respond to the specific questions as follows: > 1. The proof of Corollary 2 is skipped. It involves 3-4 steps, and therefore I don't think it is trivial, especially for those who are not experts in dealing with these inequalities. Thank you for the feedback. We will be sure to include a detailed proof of Corollary 2 for readers less familiar with these sorts of manipulations. > 2. There seem to be a lot of (minor) typos in Lemma 3 and its proof. >> Firstly, it's better to clarify whether the sets $\Theta_0,\Theta_1$ are required to be distinct or not in the Lemma statement. In the most general case, $\Theta_0$ and $\Theta_1$ do not have to be distinct. They could be implied to be distinct if the loss function $L$ satisfies $L(\theta,\theta) = 0$ for all $\theta\in \Theta$. >> ASDM is not defined in the first line of the proof. We apologize for the typo. This line is meant to say "ISDM". >> I think the second equality in Line 602 will contain a factor of 1/2. Can the authors also clarify whether the next step follows by data processing? Yes, it will contain a factor of 1/2, and it is an inequality due to the convexity of TV distance. And yes, the next step follows by the data-processing inequality. >> In line 603, $d_{3/4}(\cdot,\cdot)$ is not defined. Maybe the authors mean $d_{f,3/4}(\cdot,\cdot)$. Please also explain why that inequality follows; is it by the choice of $\Delta$, so that Theorem 1 can be applied? In line 574, we defined $d_{f,\delta}$. In line 603, we mean $d_{|\cdot|, 1/4}$ where $f(x)=|x|$, which yields the TV distance. We then apply Theorem D.1 with $f(x)=|x|$ and $\delta=1/4$. >> There seems to be a typo in the subscripts of the expectation in line 604, third inequality. We apologize for the typo. This should be $\mathbb{E}_{M\sim\mu}\mathbb{E} _{X\sim {\mathbb{P}^{M,\texttt{ALG}}}}[\ell(M,X)]$. > 3. I guess in Line 218 Eq. 9, $p_{out}$ is not defined. 
...I don't think $\pi_{out}$ in Line 226 is defined before (in the statement of theorem 5). The output decision $\pi_{\text{out}}$ is the decision $\hat{\pi}$ in line 104, and the output policy $p_{\text{out}}$ is the distribution over the output decision $\pi_{\text{out}}$. We will make sure to add these explanations. > 4. I would appreciate if the authors mention where the realizability assumption (Assumption 1) is needed and the issues that arise in the agnostic setting, i.e., when realizability does not hold. Also, please provide examples (or references) of well-posed model classes in Assumption 2. Without the realizability assumption, the lower bound still holds since the model can be arbitrary. The realizability assumption is required for the upper bound, e.g., the last inequality of line 835 in Theorem F.2. When the realizability assumption is not satisfied, the algorithm of EXO+ is not known to adapt to misspecification, so no results can be established. Assumption 2 is a relatively mild condition on the model class $\mathcal{M}$. Examples: (1) **Bandits.** Suppose that $\mathcal{M}$ is a class of bandits with Gaussian rewards, i.e. $\Pi=\mathcal{A}$ is the action space, and for each $M\in \mathcal{M}$, $a\in\mathcal{A}$, the observation is generated as $o\sim N(f^M(a),1)$. Then, we can consider the reference model $\overline{M}$ with $f^{\overline{M}}(a)\equiv 0$, and $D_{KL}(M(a)||\overline{M}(a))=\frac{f^M(a)^2}{2}$. Therefore, $C_{KL}$ is bounded as long as the mean reward is uniformly bounded for models in the model class $\mathcal{M}$. (2) **Contextual bandits.** Suppose that $\mathcal{M}$ is a class of contextual bandits with context space $\mathcal{C}$ and Gaussian rewards. Then, similar to (1), we can bound $C_{KL}\leq \log|\mathcal{C}|+O(1)$. (3) **Problem classes with bounded observation $\mathcal{O}$.** For finite $\mathcal{O}$, we can always bound $C_{KL}\leq \log|\mathcal{O}|$. 
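As a quick numerical sanity check of the Gaussian KL identity $D_{KL}(N(\mu,1)\,\|\,N(0,1))=\frac{\mu^2}{2}$ used in example (1), one can integrate the KL divergence directly (an illustrative sketch; the function name is ours):

```python
import numpy as np

def kl_gaussian_numeric(mu, half_width=12.0, n=200001):
    """Numerically integrate D_KL(N(mu, 1) || N(0, 1))."""
    x = np.linspace(-half_width, half_width, n)
    p = np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)  # density of N(mu, 1)
    log_ratio = mu * x - mu ** 2 / 2                     # log p(x) - log q(x) with q = N(0, 1)
    return np.sum(p * log_ratio) * (x[1] - x[0])         # Riemann sum of p * log(p/q)

for mu in (0.5, 1.0, 2.0):
    assert abs(kl_gaussian_numeric(mu) - mu ** 2 / 2) < 1e-6
```

The closed form follows analytically as well, since $\int p(x)\,(\mu x - \mu^2/2)\,dx = \mu\,\mathbb{E}[X] - \mu^2/2 = \mu^2/2$.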
More generally, $C_{KL}$ is bounded as long as the models in $\mathcal{M}$ admit uniformly bounded density functions with respect to a common base model. This is indeed the case for most applications, including many control problems and RL problems. > 5. It is not clear to me why there exists $M$ in the model class, i.e., why is the supremum achieved here by some $M$ in the proof of Theorem 5. We proved in Theorem 5 that $\sup_{M\in\mathcal{M}} \mathbb{P}^{M,\texttt{ALG}}\left( g^M(\pi)\geq \Delta \right)> \delta$. Notice that the supremum is *strictly* larger than $\delta$. Thus, by the definition of the supremum, there exists a model for which this probability exceeds $\delta$. > 6. Line 329, I don't think $\mathbb{V}$ is defined. Thank you for pointing this out. $\mathbb{V}$ is defined as the variance. We will be sure to clarify this in the final revision. --- Rebuttal Comment 1.1: Comment: Thanks for the response. As my questions have been adequately addressed, and I did not identify any major flaws or concerns with the approach, I maintain my current rating, and I believe the paper meets the necessary standards for acceptance.
Summary: This work provides a unified perspective on existing techniques for deriving lower bounds. This viewpoint covers techniques that are useful for traditional statistical estimation (e.g., Fano's inequality, Le Cam's method, and Assouad's approach) as well as the recently proposed approach using decision-estimation coefficients that concerns interactive decision-making. In addition, this work proposes a novel complexity measure called decision coverage. Using this novel measure, this work derives a lower bound and a polynomially matching upper bound for learning convex model classes. Strengths: S1. The paper discusses a unified perspective of techniques to derive lower bounds. It integrates classical techniques (Fano's inequality, Le Cam's method, and Assouad's lemma) with contemporary methods for interactive decision-making, based on the Decision-Estimation Coefficient (DEC). S2. This work introduces a novel complexity measure called decision coverage. This measure facilitates the derivation of new lower bounds specifically tailored for interactive decision-making. Weaknesses: See questions Technical Quality: 4 Clarity: 4 Questions for Authors: Q1. How realistic is Assumption 2? It seems to require that we have a model $\bar{M}$ that is close to all models $M \in \mathcal{M}$. It is unclear why such an assumption should be true with a finite $C_{KL}$. Similarly, how realistic is Assumption 3? Can you give some application examples where Assumptions 2 and 3 hold? Q2. What is $\mathbb{V}$ in line 329? Is it variance? Q3. Why is Assumption 2 not needed in Theorem 13, even though the line above says that this result is a corollary to Theorem 12? Q4. Can the ideas in this work be extended to the RL setup? Q5. Can the ideas in this work be extended to the case of interactive decision-making with multiple agents? I am happy to increase my score based on your response. 
Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The work does not explicitly discuss the limitations of their work; at least, I could not find it anywhere. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and for dedicating time to review our paper! > Q1. How realistic is Assumption 2? It seems to require that we have a model $\bar{M}$ that is close to all models $M \in \mathcal{M}$. It is unclear why such an assumption should be true with a finite $C_{KL}$. Similarly, how realistic is Assumption 3? Can you give some application examples where Assumptions 2 and 3 hold? A1: Assumption 2 is a relatively mild condition on the model class $\mathcal{M}$. Examples: (1) **Bandits.** Suppose that $\mathcal{M}$ is a class of bandits with Gaussian rewards, i.e. $\Pi=\mathcal{A}$ is the action space, and for each $M\in \mathcal{M}$, $a\in\mathcal{A}$, the observation is generated as $o\sim N(f^M(a),1)$. Then, we can consider the reference model $\overline{M}$ with $f^{\overline{M}}(a)\equiv 0$, and $D_{KL}(M(a)||\overline{M}(a))=\frac{f^M(a)^2}{2}$. Therefore, $C_{KL}$ is bounded as long as the mean reward is uniformly bounded for models in the model class $\mathcal{M}$. (2) **Contextual bandits.** Suppose that $\mathcal{M}$ is a class of contextual bandits with context space $\mathcal{C}$ and Gaussian rewards. Then, similar to (1), we can bound $C_{KL}\leq \log|\mathcal{C}|+O(1)$. (3) **Problem classes with bounded observation $\mathcal{O}$.** For finite $\mathcal{O}$, we can always bound $C_{KL}\leq \log|\mathcal{O}|$. More generally, $C_{KL}$ is bounded as long as the models in $\mathcal{M}$ admit uniformly bounded density functions with respect to a common base model. This is indeed the case for most applications, including many control problems and RL problems. Assumption 3 is a growth condition on the DEC. For a broad range of model classes, including bandits, contextual bandits and structured RL problems (see e.g. 
[25] and also [23, 9]), the DEC of $\mathcal{M}$ scales as $C\sqrt{\epsilon}$, where $C$ is a quantity depending on the complexity of the model class $\mathcal{M}$, and hence the DEC (as a function of $\epsilon$) is indeed of moderate growth for such classes. > Q2. What is $\mathbb{V}$ in line 329? Is it variance? A2: Yes, it is the variance. We will clarify this in the revision. > Q3: Why is Assumption 2 not needed in Theorem 13, even though the line above says that this result is a corollary to Theorem 12? A3: For the contextual bandits problem, we can directly bound $C_{KL}$ by $\log|\mathcal{C}|$, which is a logarithmic factor and hence hidden in $\lesssim$. We will clarify this in the revision. > Q4. Can the ideas in this work be extended to the RL setup? A4: Yes, the RL setup is encompassed by the DMSO framework [9], so our results can be applied as-is. It has been shown in [25, 9] that many existing lower bounds for RL problems can be recovered by the DEC framework, and hence the ideas of this work can be directly applied to the RL setup. Further, when applied to the RL setup, our framework can give more powerful results than the commonly used Fano method and DEC method, and hence we expect it to provide a unified perspective for proving lower bounds for RL problems. > Q5. Can the ideas in this work be extended to the case of interactive decision-making with multiple agents? A5: Yes, the problem of equilibrium computation in multi-agent decision making is encompassed by the DMSO framework [21], so our results can be applied as-is. In particular, the lower bounds in [21] can also be recovered by our framework through Theorem 6. --- Rebuttal 2: Title: Please interact with the authors Comment: Dear reviewer, Thanks for your work on this submission. Can you please interact with the authors at this stage? At a minimum this would require acknowledging their rebuttal and saying whether you intend to change your score. Best, Roberto
NeurIPS_2024_submissions_huggingface
2024
Propensity Score Alignment of Unpaired Multimodal Data
Accept (poster)
Summary: This work addresses the problem of aligning unpaired samples from multimodal data, where the task is to find the sample $x_i$ from modality $i$ that is best "related" to sample $x_j$ from modality $j$, when those two samples come from the observation of the same phenomenon according to two different modalities (e.g. sensors). This is an important problem that arises in many endeavors including biology, and that has been somewhat overlooked in the literature on multimodal representation learning, whereby large quantities of paired data (images and their corresponding captions) are available. In a nutshell, the idea proposed in this work is as follows. Provided the following assumptions are valid, that is, there exists a common latent space $Z$, which is entangled with a "perturbation signal" or label $t$, but is independent of modality-specific noise $U_{i,j}$, then it is possible to estimate a common space for matching samples through the "bridge" across modalities offered by the perturbation signal $t$. In practice, this amounts to training a classifier to predict $t$ given one of the available modalities, which allows the creation of a transformed dataset endowed with a Euclidean distance. Such a distance can be used to compute a matching matrix $M$, whereby entry $(i,j)$ represents the likelihood of $x_i$ being matched to $x_j$. The matching matrix $M$ can be obtained efficiently through the lens of optimal transport matching with entropic regularization. It should be noted that $M$ can then be used to construct pseudo-samples for any learning task that requires paired samples $(x_i, x_j)$ by replacing them with $(x_i, \hat{x}_j)$, where the second term is obtained through the application of the matching matrix to a structural equation $f_\theta$, which is a parametric function with learnable parameters. 
Several experiments on synthetic data, and a dataset whereby ground truth pairing is available, as well as experiments where no ground truth is available, indicate that the proposed method, in combination with optimal transport matching, provides an improvement over state-of-the-art matching methods. Strengths: * This work presents a simple method to produce a matching matrix that can be used to pair multimodal data. The algorithm only requires training of two classifiers and the execution of Sinkhorn's algorithm to compute the optimal transport matching. * Experiments show that the proposed method using propensity scores in combination with optimal transport matching outperforms the literature in terms of MSE and other specific metrics. * Experimental results indicate, surprisingly, that the proposed method implicitly enforces invariance to modality-specific information, which endows propensity scores with improved generalization, although this specific phenomenon is only hinted at and requires additional studies. Weaknesses: * The proposed method relies on strong assumptions, which are only superficially discussed. Relaxation of the conditional independence assumption A1 is discussed in appendix A, but only for the case of a small perturbation set $t \in \{ 0,1 \}$, which allows the authors to conclude that, under exact optimal transport, this is equivalent to operating on order-preserving effects of $t$. The validity of the method also relies on A2, which allows computing propensity scores from the data alone, and not the shared latent space $Z$. * The overall perturbation method, as well as the existence of a shared latent representation $Z$, is only vaguely described. I have looked up the key reference [Yang et al, 2021], but the lack of detail is uncomfortable. It is possible to get an intuition of how the method works, but I would prefer such important technical details to be spelled out clearly. * There is no mention of the scalability of the proposed approach. 
Building a matching matrix can be a daunting task in domains where a large amount of unpaired training data is available. In this case, the matching matrix could be either a big square matrix or a very skinny and tall matrix, which could pose computational challenges. It would also be important to discuss the best operating scenario for the proposed method in terms of the number of samples per modality. In very unbalanced scenarios, would the classifiers using one or the other modality really behave similarly, as per Equation (5)? Technical Quality: 3 Clarity: 2 Questions for Authors: * Would it be possible to explain (and eventually add to the appendix) in more detail the setup from [Yang et al., 2021] which is used in this work? More precisely, it would be great to have the definition of the multimodal autoencoder, the existence of a shared latent space, and the details of the perturbations $t$ and their effects on $Z$. This would really help formalize the use of $t$ as a bridge between modalities. * Can you provide a thorough discussion of the scalability of the proposed approach, the impact of imbalance in the size of the multimodal data (e.g., $n_1 \gg n_2$), and the possible application to other domains with more abundant data? * Can you elaborate more on the baselines you compare your method to? Lines 235-239 are not detailed enough to fully appreciate the differences w.r.t. the proposed method. ======= POST REBUTTAL UPDATE ======= Dear authors, thank you for the rebuttal. I have raised my score. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes, the authors acknowledge that their work relies on strong assumptions, but suggest that empirical results indicate their method is robust to failures. Nevertheless, I could not find the discussion of the experimental results that clearly indicates such robustness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **“Strong” Assumptions** The problem that we address in this paper is obviously impossible in general: for example, if I gave you data from two modalities that are completely unrelated to each other (e.g. text from the complete works of Shakespeare, and images of cells under a microscope), there would be no correct solution to the problem because there is no shared information between the two modalities. But not all multimodal matching problems are as pathological as this: if we have shared information and some way of tying them together, then matching is possible. To our knowledge, there are no prior works tackling the matching problem for which any theoretical guarantees exist on the validity of the matching itself. E.g., Ryu et al., [2024] provides theory on obtaining OT solution itself, but does not describe whether the solution corresponds to a valid matching. In this case, the pathological example above may have an OT solution, but it would clearly not be a useful one. Similarly, Yang et al. [2021] gives no theoretical guidance on the limitations of the method. Our assumptions clearly delineate where we know the problem is solvable from settings where we have no theoretical guarantees. We also note that our method empirically outperforms all of the published baselines, so we could have chosen to just present it as a state of the art method with no mention of the inherent limitations of matching in general, but we chose to be upfront about the limitations (which likely apply to all of the baselines). NeurIPS reviewer guidelines state, “Reviewers will be specifically instructed to not penalize honesty concerning limitations,” we ask you to reconsider whether the theory that is explicit about assumptions that are sufficient for the method to work is really a weakness (we think it’s a strength!). **Shared Latent Space** We will include a more formal description of Yang et al., [2021]. 
Roughly, they assume the following data-generating process. Each modality, indexed by $i$, is generated according to $X_i = f_i(Z, N_i)$, where $Z$ is a latent variable shared across all modalities. They fit a multi-modal generative model with a VAE, assuming the latent distribution $P_Z$ is known, by enforcing that all modalities have the same latent distribution. If label information $t$ is available, they add a classification term requiring $Z$ to classify $t$ (they refer to “prior knowledge” of clusters rather than perturbations, but mathematically $t$ plays the same role). We believe our data-generating process (Eq. (1)) is significantly more general in describing multimodal settings. Yang et al. [2021] require a known latent distribution, while we do not even assume knowledge of the space of $Z$, nor of the functional form of $f$ (though additional structure can yield a richer theory, such as our Prop 3.2). In our biology experiments, $Z$ is just an abstract random variable representing the cells, and $U$ an abstract notion of measurement noise. For the VAE, any theoretical guarantees would require correctly specified latent spaces and decoder functional classes (both of which are also untestable). In [Yang et al., 2021], they state > The dimensionality of the latent distribution is a hyperparameter that is tuned to ensure that the autoencoders are able to reconstruct the respective data modalities well. By contrast, our theory about propensity scores is valid with only (A1) and (A2), both of which are easy to interpret scientifically for practitioners (e.g., (A1): interventions do not modify the measurement process, (A2): the measurement contains complete information about the biology).
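For concreteness, the assumed data-generating process $X_i = f_i(Z, N_i)$ can be simulated as follows (the functional forms, dimensions, and noise levels here are purely illustrative assumptions, not the paper's setup): a perturbation $t$ shifts the shared latent $Z$, and each modality applies its own map plus noise that is independent of $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_z = 500, 4

t = rng.integers(0, 3, size=n)                  # perturbation label, shared across modalities
Z = rng.normal(size=(n, d_z)) + t[:, None]      # shared latent state, shifted by the perturbation

# Modality-specific observations X_i = f_i(Z, U_i), with noise independent of t (cf. A1)
A1m = rng.normal(size=(d_z, 10))
A2m = rng.normal(size=(d_z, 20))
X1 = np.tanh(Z @ A1m) + 0.1 * rng.normal(size=(n, 10))
X2 = (Z @ A2m) ** 2 + 0.1 * rng.normal(size=(n, 20))
```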
**Scalability** Entropy-regularized OT is $\mathcal{O}(n^2 / \epsilon)$ (ignoring log factors; see [Pham 2020] for details), where in our setting $n$ corresponds to the number of samples in each domain (bounded by the max number of samples in the unbalanced case you refer to). Given that we only match within a treatment group, this is not unreasonable in the domains that we consider: for example, a well of cells in a cell-paint assay will typically have approximately 1000 cells. Constructing a 1000 x 1000 distance matrix and solving the resulting OT problem is very doable on current hardware. Of course, there may exist settings with even larger numbers of samples which would require more scalable unbalanced OT methods, or alternatively switching to shared nearest neighbors (at the cost of some performance; see our experiments). We emphasize that our contribution is in designing the distance metric that leverages propensity scores (and explaining theoretically when and why it works), rather than any particular matching approach that operates on that distance metric, so our approach can be combined with any scalable method that operates on pairwise distances; we simply found that entropy-regularized OT was most effective empirically. Pham, K., Le, K., Ho, N., Pham, T., & Bui, H. (2020, November). On unbalanced optimal transport: An analysis of the Sinkhorn algorithm. **Elaborate on the baselines** Gromov-Wasserstein OT (SCOT) uses the local geometry within each modality, assumed to lie in a metric space, to build a cross-modality distance by comparing the distances between pairs within modalities. We expect this approach to fail when one of the modalities does not have a simple metric structure, e.g., for images, where Euclidean distance in pixel space usually poorly captures semantic similarity.
scGLUE is also a VAE approach that is tailored to biological applications, enforcing that gene expression and the associated proteins (or, in their case, ATAC-seq locations) have similar embeddings. As such, scGLUE is not applicable outside of genomics data. Importantly, neither of these methods utilizes label information. For this reason, the VAE of [Yang et al., 2021] described above, which can use label information in addition to the VAE objective, is considered our main comparison. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Dear authors, thank you for the thorough rebuttal to my observations and questions. I appreciate the explanation of the assumptions made in this work, and how they compare to the ones made in the state of the art. I also appreciate the discussion of the obtained results, which surpass currently available methods. I will raise my score to reflect these observations. --- Reply to Comment 1.1.1: Comment: Thank you so much!
Summary: The paper proposes a novel matching method for unpaired multimodal (bimodal) data. It aligns unpaired samples across modalities using a propensity score. Based on additional treatment information, the propensity score is learned and used to align multimodal samples. The method is evaluated on three datasets: one imaging dataset and one single-cell dataset (both with available matching ground truth), and one additional single-cell microscopy dataset without available ground truth. Strengths: - Interesting idea: using a propensity score that leverages (often) available labels/treatments to learn a classifier is interesting. - Important problem: A long line of multimodal methods describes how best to leverage paired samples, but little work has been done on aligning unpaired samples. Besides biology, there is (probably) a lot of unpaired multimodal data. - well written and easy to read Weaknesses: - Final purpose of the method: To me, it was not fully clear what the final goal/purpose of the method is. Is it (1) the aligning of unpaired multimodal samples? Or (2) learning from unpaired multimodal samples while aligning them? If (1) is the goal, what can we conclude from the improved results? If (2) is the goal, it would be interesting to see if the improved alignment/matching results in better downstream task performance/learning from multimodal data compared to other works (e.g., Yang et al., 2021) or just unimodal methods without any pairing of data. - Evaluation: It is not straightforward to understand the metrics used (maybe also related to the point above) and relate the performance metrics to the quality of the method (beyond a relative comparison between different alignment methods). Maybe reporting some classification-based performance for alignment (is the assigned sample from the other modality the correct one?) or some downstream task performance (what task can we solve using the trained method?) would help.
- Although the method is advertised as multimodal, no experiments or proofs use more than two modalities. Maybe the term bimodal would be more accurate? Otherwise, an experiment or derivation for more than two modalities would be appreciated. Technical Quality: 2 Clarity: 3 Questions for Authors: - what are the labels/perturbations in the image-based dataset? I did not see the information in section 6 (but maybe I missed it. Apologies in that case) - section 6.2: what is the reason for using the first 200 principal components for the gene expression modality (and not for the protein data as well)? - does assumption A1 hold in the datasets used? Is there a way to check whether the assumption holds? - there are some typos: - line 90: the our instead of the or our - line 168: form instead of from - line 227: against on instead of against or on Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors address some of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Final purpose of the method** We are most interested in (2), “learning from unpaired multimodal samples”. Our cross-modality task (see Table 2) explicitly evaluates this and shows that you do indeed see a significant improvement by learning from matched data (the difference in performance between our approach and Yang et al. shows that one gets a larger improvement from better matching). **Evaluation** We agree that the metrics can be difficult to interpret; we will include more details in the main text for clarification. > Maybe reporting some classification-based performances for alignment (is the assigned sample from the other modality the correct one) We report two metrics that measure how close the estimated matchings are to the correct one, when the ground truth is known (image and CITE-seq data). The trace metric corresponds exactly to a measure of how often the assigned sample from the other modality is the correct one ($Tr(M) = \sum_{i} M_{ii}$, where $M_{ii}$ is the weight given to the true pairing). The FOSCTTM is a standard metric in the matching literature, where a value of 0.3 indicates that, on average, the true match is ranked within the closest 30% of candidate samples. Both of these metrics are standard (which is why we report them), but essentially evaluate your task (1), “aligning unpaired multimodal samples”. For what you called task (2) (learning from unpaired samples), what matters more is (i) how good a proxy the matched sample is for the true match, and (ii) performance on downstream tasks. On synthetic data, we can evaluate (i) by testing how close the latent state of the matched sample is to the ground truth match (MSE in Table 1).
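The two matching-quality metrics described above can be sketched in a few lines of numpy (a hedged illustration; the paper's exact conventions, e.g. the averaging directions for FOSCTTM, may differ):

```python
import numpy as np

def trace_metric(M):
    """Total weight the matching matrix places on true pairs M_ii (1/n for a uniform M)."""
    return np.trace(M)

def foscttm(D):
    """Fraction Of Samples Closer Than the True Match, averaged over both directions.
    D[i, j] is the cross-modal distance; the true match of sample i is sample i."""
    n = D.shape[0]
    diag = np.diag(D)
    frac_rows = (D < diag[:, None]).sum(axis=1) / (n - 1)  # modality 1 -> 2
    frac_cols = (D < diag[None, :]).sum(axis=0) / (n - 1)  # modality 2 -> 1
    return 0.5 * (frac_rows.mean() + frac_cols.mean())
```

A perfect matching gives FOSCTTM = 0 (no sample is closer than the true match), while 0.5 corresponds to random distances.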
This metric is a better proxy for downstream performance because if there are a number of samples that are very similar to the true match, then it is likely that you won’t get the exact match correct, but matching to any of the close samples will still lead to good results (because they’re good proxies for the true match). As discussed above, we also report (ii) in Table 2 (see below for more detail). > or some downstream task performance (what task can we solve using the trained method?) We emphasize that the metrics above may not be indicative of downstream performance, especially when using soft matching techniques such as OT. The $R^2$ metric on the prediction task (Section 5) can be interpreted as the amount of information that is shared between modality 1 and the paired modality 2. E.g., an $R^2$ of 0.233 can be roughly interpreted as the paired modality 1 explaining 23.3% of the variation in modality 2, while even the ground truth modality 1 only explains 22.4% (Table 2), indicating that the matched samples are statistically indistinguishable from the true matches in terms of shared information. **Multimodal vs. bimodal** > advertised as multimodal, no experiments or proofs use more than two modalities. Using the term “multimodal” for settings with two or more modalities is standard in the literature. For example, Manzoor et al. [2024] state, > Multimodal systems utilize **two or more** input modalities ... which could be different from the inputs. [emphasis added] Similarly, most of the multimodal methods discussed in Guo et al. [2019] involve only two modalities, in some cases with no obvious way of extending them beyond just two modalities. That said, multimodal matching could in theory be done by iteratively applying bivariate matching to a base modality, e.g., sampling a tri-modal observation $(x_i^{(1)}, x_{j}^{(2)}, x_{k}^{(3)})$ would involve sampling $x_j^{(2)}$ from $M^{(1,2)}$ and $x_k^{(3)}$ from $M^{(1,3)}$.
One can also think of distance functions involving multi-pairs of propensity scores, e.g., a cost function $c(\pi_i, \pi_j, \pi_k)$, which can be aligned with [multi-marginal optimal transport](https://arxiv.org/pdf/1406.0026), but our main contribution is to introduce the common space defined by the propensity score. Manzoor, M. A., Albarri, S., Xian, Z., Meng, Z., Nakov, P., & Liang, S. (2023). Multimodality representation learning: A survey on evolution, pretraining and its applications. Guo, W., Wang, J., & Wang, S. (2019). Deep multimodal representation learning: A survey. **Questions** > what are the labels/perturbations in the image-based dataset? The perturbations are do-interventions on the latents (location) of the objects in the rendered images; this is briefly described on l253. > section 6.2: reason for using the first 200 principal components? We chose to use principal components for the GEX data due to sparsity (over ~20k dimensions), which is common pre-processing in bioinformatics pipelines. Note this was only used as deterministic pre-processing to train the classifiers and in theory could be avoided with a flexible enough classifier (e.g., a GEX-specific encoder). The protein data is only ~130 dimensions, the measurements are naturally more dense and continuous, and we found that a classifier trained directly on raw measurements performed adequately. > does assumption A1 hold in the datasets used? Is there a way to check whether the assumption holds? Whether or not A1 holds depends in part on the extent to which A2 holds: any part of $Z$ that is not reflected in modality 1 essentially becomes part of $U^{(2)}$ and vice versa. Because CITE-seq only measures surface proteins of the cell, there is likely part of the latent cell state that is not reflected in the proteomics assays, and some of that state is surely affected by the cell type, so we should not expect assumption A1 to hold exactly.
That said, the assumptions remain useful for reasoning about the conditions under which matching is theoretically possible (see also our response to R3) and for guiding data collection. E.g., our theory suggests that we will have better matching performance with an assay that measures all proteins, rather than just surface proteins. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for your rebuttals and replies to my questions. --- Reply to Comment 1.1.1: Comment: Likewise, thank you so much for the time you put into your review! Please let us know if there's anything else you need to know, and if you have any other concerns that are preventing you from raising your score.
Summary: This paper presents a new method for pairing unpaired examples across different modalities using the labels of the examples. The method essentially trains a classifier for predicting labels for examples in each modality, then uses the classifiers' logits across two modalities to calculate a similarity matrix. Empirical results show that the method is promising in aligning data from different modalities. Strengths: - The paper is written clearly. - The problem of aligning unpaired data in multimodal learning is quite important, since in many cases there's limited or no access to paired data for representation learning. This paper proposes a new method for it, paving the way for future work in this area. - The method is relatively straightforward and easy to follow. - Empirical results show that the method performs well compared to the baseline. Weaknesses: The method uses the information in the labels in order to match examples via classifiers. There are two drawbacks with this approach: (1) the quality of matching depends directly on the choice of the classifier and its capacity/complexity. (2) more importantly, in many multimodal representation learning settings the label signal is unavailable. Technical Quality: 3 Clarity: 3 Questions for Authors: Is it possible that the use of labels as the sole signal to match examples from two modalities results in example clustering based on labels? i.e., examples from modality 1 that belong to class t are matched strongly against those from modality 2 that also belong to class t? In that case, is the method still beneficial compared to not doing any matching (i.e., just using data from individual modalities)? Having a discussion on this in the paper would be useful. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There's a short section on limitations of the method, explaining (1) the reliance of the method on labels, and (2) the strong assumptions made in the method.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dependence on a classifier** >There are two drawbacks with this approach: (1) the quality of matching depends directly on the choice of the classifier and its capacity/complexity. It is true that the quality of the matching depends on the choice of classifier and its capacity, but because you can easily validate the classifier’s performance on a validation set, it is relatively easy to choose a good classifier on which to base the matching. Training a classifier is not always trivial, but it is far easier than, e.g., training a variational autoencoder with a shared latent state as in Yang et al. [2021]. > more importantly, in many multimodal representation learning settings the label signal is unavailable. The assumption of perturbation labels is a restriction, but it is extremely common in the biological settings that we study (note that the concurrent work we cited, Ryu et al. [2024], which was posted just before the submission deadline, also studies the same setting motivated by its biological applications). Even when experimental data is not available, it is often possible to rely on weaker class labels. E.g., in our real data experiments on CITE-seq data, we used cell type as a label. **Question about labels** > examples from modality 1 that belong to class t are matched strongly against those from modality 2 that also belong to class t” In all of our experiments, we perform matching within a known class. So all the examples from class $t$ in modality 1 are restricted to only match to examples from modality 2 in class $t$ (and vice versa). This is why random matching still gives fairly decent results on cross-modality prediction tasks (Table 2); we believe that this is roughly equivalent to the “not doing any matching (i.e. just using data from individual modalities)” baseline you refer to (but please let us know if you have something else in mind so that we can update the camera ready).
The additional benefit over random comes from matching within a class. Note that matching results in a significant improvement in $R^2$ and the KL metrics.
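A small sketch of the within-class restriction described above (a hypothetical helper, not the authors' code; one could equivalently run the matcher separately per class): cross-class pairs receive a prohibitive cost, so any matcher operating on the cost matrix can only pair samples that share the label $t$.

```python
import numpy as np

def matching_within_class(cost, t1, t2, big=1e6):
    """Forbid cross-class pairs by inflating their cost before running any matcher.
    cost: (n, m) cross-modal costs; t1, t2: per-sample class labels for each modality."""
    mask = t1[:, None] != t2[None, :]   # True where labels disagree
    return np.where(mask, big, cost)
```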
NeurIPS_2024_submissions_huggingface
2024
Localized Zeroth-Order Prompt Optimization
Accept (spotlight)
Summary: This paper focuses on the prompt optimization task. The authors first propose two insights: (1) Instead of pursuing the global optimum, this paper claims that local optima are usually prevalent and well-performing. (2) The input domain for prompt optimization affects the choice of local optima. Inspired by these two observations, this paper proposes a zeroth-order optimization method that incorporates a Gaussian process derived from the Neural Tangent Kernel (NTK) to search for local optima. This method achieves competitive results on benchmark datasets. Strengths: 1. I like the analysis in Section 3. The two insights are well supported by the provided studies, and the motivation and reasoning for decision-making in this paper are explained in an informative way. 2. Compared to methods that aim for global optimization, incorporating NTK-based Gaussian processes in prompt optimization should theoretically be much faster. 3. The input domain transformation process leads to a dense numerical feature space for prompts, making the optimization problem easier. Weaknesses: 1. This paper does not discuss the following recent prompt optimization methods: [1] Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, and Xifeng Yan. Guiding large language models via directional stimulus prompting. Advances in Neural Information Processing Systems, 36, 2024. [2] Hao Sun, Alihan Hüyük, and Mihaela van der Schaar. Query-dependent prompt evaluation and optimization with offline inverse rl. In The Twelfth International Conference on Learning Representations, 2023. [3] Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric Xing, and Zhiting Hu. Promptagent: Strategic planning with language models enables expert-level prompt optimization. In The Twelfth International Conference on Learning Representations, 2024. 2. For those compared methods, ZOPO did not consistently show advantages in Table 1 and Table 3 in Appendix D2. 3.
This paper emphasizes efficiency. However, in Figure 5, Figure 10, and Appendix D.2, I cannot observe an obvious advantage in efficiency when compared with other methods. In addition, we can observe some results that are contradictory to the paper’s claims. In many scenarios in Figure 10, ZOPO does not show obvious advantages when the query number is small, which contradicts the 'query-efficient prompt' claim. Technical Quality: 2 Clarity: 3 Questions for Authors: I cannot see a consistent advantage of ZOPO in many figures and tables. Can you explain this part? Thanks. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: They listed the limitations and claimed to address them in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer TMXX for taking the time to review our paper and appreciate the reviewer's feedback. We would like to provide the following response to address the concerns and hope it can improve your opinion of our work. --- > [W1] This paper does not discuss the following recent prompt optimization methods... In fact, we have already covered a wide range of representative and recent related works [3, 5, 8, 10, 17, 22, 29, 43, 44] in our main paper for the field of prompt optimization. We thank you for pointing out these additional related works. We will discuss those works in our revised paper. > [W2] For those compared methods, ZOPO did not consistently show advantages in Table 1 and Table 3 in Appendix D2. > [Q1] I cannot see a consistent advantage of ZOPO in many figures and tables. Can you explain this part? Thanks. If the consistent advantages you mentioned mean that the proposed method should achieve the best performance across **the majority of tasks**, then our ZOPO has indeed demonstrated this **consistent advantage**. Actually, our ZOPO has achieved the best performance on the largest number of tasks compared with the other baselines in Table 1, achieving the best result on 14 out of 20 tasks vs. 8 out of 20 for the second-best method INSTINCT [17]. The commonly used performance profile matrix [7] (defined in Eq. 10 of Appendix C.1), shown in Figure 1, also supports this consistent advantage achieved by our ZOPO. However, if you are referring to consistent advantages as achieving the best performance on **every task**, we believe it is nearly impossible for a single algorithm to achieve such "consistent advantages", in light of the no-free-lunch theorems in various fields [R1, R2]. Overall, while ZOPO may not dominate every individual task (no other baseline method does), its superior average performance and higher frequency of achieving top results in our experiments already underscore its effectiveness in practice.
We believe this is sufficient to evidence the clear advantage of ZOPO across a broad spectrum of tasks. **References** [R1] Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. *IEEE Transactions on Evolutionary Computation*, 1(1), 67-82. [R2] Wolpert, D. H. (1996). The lack of a priori distinctions between learning algorithms. *Neural Computation*, 8(7), 1341-1390. > [W3] This paper emphasized efficiency. However, in Figure 5, Figure 10, and Appendix D.2, I can observe an obvious advantage in efficiency when compared with other methods. In addition, we can observe some results that are contradictory to the paper’s claim. In many scenarios in Figure 10, ZOPO does not show obvious advantages when the query number is small, which is contradictory to the 'query-efficient prompt' claim. We acknowledge that our method does not achieve **the highest efficiency in every individual task**. However, it consistently ranks among the top three most efficient methods across a wide range of tasks, while the efficiency of the other methods varies a lot across tasks, as shown in Figure 5 and Figure 10. These results should reasonably evidence that our ZOPO has **generally better query-efficiency**, as claimed in our main paper (refer to line 281). We will add this clarification in our revised manuscript. --- We hope our clarifications have addressed your concerns and increased your opinion of our work. We are happy to provide any further clarification in the discussion period. --- Rebuttal Comment 1.1: Title: We would like to know if you have any further questions that require additional clarification Comment: Dear Reviewer TMXX, Thank you for taking the time to review our paper and for your valuable feedback. We have provided clarifications above to address your concerns. We sincerely hope our clarifications could increase your opinion of our work.
If you have any more questions or need more details, we are happy to answer them promptly within the discussion period. Best, Authors
Summary: The paper titled "Localized Zeroth-Order Prompt Optimization" proposes a novel algorithm called ZOPO (Localized Zeroth-Order Prompt Optimization) aimed at enhancing the efficiency of prompt optimization in large language models (LLMs). The authors argue that local optima, as opposed to global optima, are more prevalent and can be more effectively targeted for prompt optimization. They introduce a combination of Neural Tangent Kernel (NTK) and Gaussian processes within a zeroth-order optimization framework to improve query efficiency and optimization performance. Strengths: 1. The thorough empirical study conducted provides a detailed comparison between local and global optima, highlighting the potential advantages of targeting local optima. 2. The ZOPO algorithm is well-designed, incorporating NTK-based Gaussian processes to enhance the optimization process, which shows promise in improving query efficiency. Weaknesses: The proposed ZOPO algorithm is complex and might be challenging to implement for practitioners who are not deeply versed in NTK and Gaussian processes. This limits the accessibility and practical utility of the proposed method. Technical Quality: 2 Clarity: 2 Questions for Authors: How do the proposed prompt-tuning approaches compare to the fully fine-tuning ZO approaches such as MeZO? It would be better to justify the settings that require prompt optimization. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to Reviewer TQjw for the constructive feedback and for positively recognizing that our empirical study is **thorough** and our proposed algorithm is **well-designed**. We will incorporate the suggested discussion into our revised work. We respond below to their concerns and hope our responses can improve the reviewer's opinion of our work. --- > The proposed ZOPO algorithm is complex and might be challenging to implement for practitioners who are not deeply versed in NTK and Gaussian processes. This limits the accessibility and practical utility of the proposed method. We would like to clarify that our proposed ZOPO algorithm is in fact quite straightforward. Specifically, ZOPO has only two major components: GP-NTK in `learner_diag.py` (52 lines for its core ideas of computing empirical NTK and fitting GP with query history), and the zeroth-order optimization in `optimization.py` (about 3 lines for its core ideas of gradient estimation from derived GP) in the supplementary material we have provided. Moreover, since we have provided the codes for our ZOPO, it becomes less challenging for practitioners without deep expertise in NTK and Gaussian processes to utilize and integrate our method into their interested problems, which we believe will highly benefit the accessibility and practical utility of our ZOPO. > How do the proposed prompt-tuning approaches compare to the fully fine-tuning ZO approaches such as MeZO? It would be better to justify the settings that require prompt optimization. To clarify, the fully fine-tuning ZO approaches (e.g., MeZO) and prompt optimization ZO approaches (i.e., ZOPO) are designed for different contexts/settings. - **MeZO**: it utilizes ZO approach to **reduce the memory footprint** when fine-tuning the model parameters of **white-box** LLMs (e.g., LLaMA) on downstream tasks, as backpropagation typically requires a prohibitively large amount of memory. 
- **ZOPO**: In contrast, this method is tailored for scenarios where the LLMs (e.g., ChatGPT) are treated as **black box** systems, where direct fine-tuning of model parameters is not feasible. Therefore, prompt optimization becomes a better choice for adapting black-box LLMs to downstream tasks by only tweaking the text inputs for these tasks. We will add a detailed discussion comparing MeZO and ZOPO in our revised version. --- We appreciate the reviewer's valuable input and hope our answers can address your concerns and improve your opinion of our work. Thank you! --- Rebuttal Comment 1.1: Title: We would like to know if you have any further questions that require additional clarification Comment: Dear Reviewer TQjw, Thank you for taking the time to review our paper and for your valuable questions. We have provided clarifications above to respond to your questions and we hope we have improved your opinion of our work. If you have any more questions or need more details, we are happy to answer them promptly within the discussion period. Best, Authors
Summary: The paper proposes multiple contributions: 1. Establishes a new visualization technique for the objective landscapes of blackbox functions over prompts. This is done by converting the high dimensional embeddings of strings into 2D (via t-SNE), and visualizing the landscape in 3D. Using this, the paper finds several patterns: * There is a correlation between the smoothness of the landscape and the strength of the prompt generator. * Much of the landscape is filled with local minima. 2. Proposes a new Bayesian Optimization-like algorithm, with the following setup: * The regressor + uncertainty estimator is a NTK-GP using an MLP * The acquisition maximization is a gradient descent in the embedding space, with a projection back into the original prompt space. Strengths: * The proposed visualization method is simple yet surprisingly very insightful. I believe this might become a very important tool for any string-based blackbox optimization to assess the landscape. * The Bayesian optimization-like algorithm makes intuitive sense (bar the weaknesses, see below). This paper is well-written and is straightforward to read. * The conducted experiments are rigorous and comprehensive over numerous tasks with multiple baselines. Ablation studies are also relevant and insightful. Weaknesses: * Section 4.2 is not well-motivated. I understand that the idea is to construct an acquisition function expressing explore/exploit tradeoffs, and the most natural regressor to use is a Gaussian process, leading to the idea of using the NTK kernel. But this may seem overly complicated. For instance, why not use a simpler regressor / uncertainty estimator, like an ensemble of MLPs? * Since the NTK requires computing dot-products of gradients, this makes it tricky to use for larger models (which have much longer gradients as feature vectors). 
* (Small) I would tone down the statement that Bayesian Optimization (or more generally, regressor guided search) will do poorly in local-optima situations simply because it was designed to search for global optima. There are several previous works over traditional black-box optimization showing Bayesian Optimization remains competitive even with multiple local optima. Furthermore, one could argue that the paper is essentially a Bayesian Optimization technique given the gradient ascent over essentially an explore-exploit acquisition. Technical Quality: 3 Clarity: 4 Questions for Authors: Please address the main questions in the weaknesses above. These are more for clarification: 1. How does $h^{-1}$ (i.e. mapping an embedding back into some text) work? L165 mentions storing $(z,v)$ for constructing this inverse mapping - does this mean there are already a lot of candidate prompts pre-generated forming an embedding set $Z$, and for a new $z$, we simply perform randomized rounding or projection to the nearest legitimate prompt in $Z$? 2. Eq 6: What happens if we fully attempt to argmax the acquisition (like regular Bayesian Optimization), rather than just move by a gradient step? I understand that this gradient step may be motivated by the landscape being full of local optima, but there may be missed gains here. Is this explained in L220-L230? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Section 7 discusses some limitations, but it may be worth considering the raised weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are highly encouraged by Reviewer GMgT's positive and constructive feedback! We appreciate that the reviewer positively recognizes that our visualization method is **insightful** and could be a very **important tool** for studying the black-box prompt optimization landscape, our designed algorithm is **intuitive**, the paper is **well-written** and **straightforward**, and our experiments are **rigorous and comprehensive**. We would like to address the comments as follows. --- Firstly, we would like to clarify that our ZOPO is in fact a gradient ascent-like algorithm rather than a Bayesian optimization-like algorithm. More specifically, in standard Bayesian optimization, a Gaussian process will be applied to construct its **acquisition function (i.e., a surrogate function to the original objective function with GP mean and covariance)** to trade off exploitation and exploration for its **global optimization**. In contrast, ZOPO will make use of a **derived** Gaussian process (i.e., Eq. 3) from a standard Gaussian process to **estimate the gradient of the original objective function with the GP mean** (see line 188) as well as to **measure the uncertainty of this gradient estimation with the GP covariance** (which will be later used in our local exploration for more accurate gradient estimation in Sec. 4.3) for a **local optimization**, i.e., the gradient ascent in our Eq. 6. We would like to refer you to [35] for a detailed comparison between zeroth-order optimization with a derived GP and standard Bayesian optimization. > [W1] Section 4.2 is not well-motivated Thank you for pointing out this interesting question. As we have clarified above, the idea in our ZOPO is in fact to leverage a derived Gaussian process to estimate the gradient and then apply gradient ascent to maximize the objective function. 
However, as mentioned in line 189 of our main paper, the underlying objective $\widetilde{F}$ is complex and highly related to transformers, i.e., it is computed based on the inference of transformers. As a result, standard kernels may not have a powerful enough representation ability to approximate this objective function and hence cannot provide a good gradient estimation of the underlying objective function (as supported in Table 11). To leverage the powerful representation ability of deep neural networks for our prompt optimization, we can either train an ensemble of MLPs (the method you mentioned), or apply the empirical NTK to avoid this training process while maintaining a good approximation to the predictions of neural networks and hence preserving the compelling representation ability of neural networks. Of note, the effectiveness of the empirical NTK has in fact been widely evidenced from both theoretical and empirical perspectives [2,15,33,34]. Although the empirical NTK requires computing dot-products of gradients, it is still more computationally efficient (as no training is required) than training an ensemble of MLPs, especially since we typically use a small neural network, e.g., a 2-layer MLP in our implementation, to compute this empirical NTK in practice. In light of the effectiveness and efficiency of NTK, we therefore choose to apply NTK in this paper for our ZOPO. We would like to add these discussions to our revised paper. > [W2] Statement on Bayesian Optimization Thank you for your valuable feedback. We acknowledge that our previous statement about Bayesian Optimization performing poorly in local-optima situations could be overly broad. We will revise this statement in our revised version. Below, we clarify that our ZOPO is different from Bayesian Optimization algorithms. 
While we acknowledge the existence of previous works on local Bayesian Optimization, our method, ZOPO, is fundamentally a gradient ascent-like algorithm, which is inherently more suited for local optimization tasks. Similar to [35], our ZOPO does not construct the acquisition function like Bayesian optimization. Instead, ZOPO will apply the derived GP mean to estimate gradient (line 187-188) and derived GP covariance for more queries to better estimate the gradient (Sec. 4.3), which emphasizes exploitation when updating the next queries rather than the exploration-exploitation trade-off in Bayesian Optimization. Importantly, our empirical results in Sec. 5 show that our local optimization algorithm ZOPO generally outperforms the global optimization (i.e., Bayesian optimization) algorithms, such as InstructZero and INSTINCT, in the context of prompt optimization, which therefore indicates the advantages of local optimization in this specific setting. > [Q1] Implementation of $h^{-1}$ Yes, your understanding is correct. To clarify, the inverse mapping is built on a **finite set**. Specifically, we pre-generate a finite set of **unique** prompt candidates (i.e., $\mathcal{V}=\{v\}$), that is, each text prompt $v$ in this set will be unique. With these unique prompt candidates, the complex embedding model usually will produce the corresponding unique embedding vectors $\mathcal{Z} = \{z\}$. Our mapping $h$ is then defined on these two finite sets $\mathcal{V}$ and $\mathcal{Z}$, i.e., $h:\mathcal{V}\rightarrow\mathcal{Z}$, which therefore leads to the one-to-one mapping. In practice, we only perform the search in the finite set $\mathcal{Z}$ (as shown in Equation 2), and all the gradient-updated points are projected back into $\mathcal{Z}$, which then finds a unique $v \in \mathcal{V}$ in the natural language space (as shown in line 201-203). >[Q2] Eq. 6 We clarify that Eq. 
6 is not the acquisition function in BO and using argmax will require a global modeling like traditional BO which may not be better than our local modeling. Besides, L220-L230 states that we need more queries to improve the gradient estimation. --- Thanks for your insightful suggestions. We will incorporate these discussions above into our revised version. We hope our clarification will address your concerns and improve your opinion of our work. --- Rebuttal Comment 1.1: Title: Thanks for clarification Comment: Thanks for clarifying on the NTK procedure. I now understand that the NTK-GP is used for gradient estimation to perform local gradient updates instead. I recommend simplifying the writing to make this clear. Is it possible to further explain in more detail on how you produced the landscape images or provide code? I actually tried this method on my own data, but was unable to get such smooth landscapes. Did you apply some form of local smoothing before rendering the plot? As for other reviewer scores: It seems that other reviewers gave lower scores, primarily due to raising the issue of more comparisons. I am not bullish on requiring so many comparisons myself (there are probably 10+ different prompt tuning algorithms out there at this point anyways) as long as the method itself makes sense and is clean, so I will keep my current score. --- Reply to Comment 1.1.1: Title: Thank you for your reply and the positive recognition! Comment: Thank you so much for your positive feedback and thoughtful suggestions. We sincerely appreciate your recognition and support. --- We are glad to hear that our clarification on the NTK procedure was helpful. We will simplify the writing to make this aspect clearer in the revision. --- Regarding the smooth landscape visualizations, we are pleased that you found them compelling. Below is a detailed explanation of the process with the code snippet used for generating these plots: 1. 
We initialized the space (based on 300 randomly sampled prompt candidates) for each task and extracted embeddings and their corresponding values.
2. We then performed a t-SNE transformation to reduce the dimensions of the embeddings to two dimensions (X and Y).
3. We used `griddata` from SciPy to **linearly interpolate** the Z values on a grid. This allowed us to create a smoother surface from these scatter points before rendering.

Here is the code snippet for generating the landscape visualizations:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

fig = plt.figure(figsize=(6, 2), dpi=500)
tasks = ['taxonomy_animal', 'cause_and_effect', 'informal_to_formal']
for i, task in enumerate(tasks):
    emb_space = load_init_space(task)
    ax = fig.add_subplot(1, len(tasks), i + 1, projection='3d')
    data = np.array([emb_space[emb]['data'].tolist() for emb in emb_space])
    Z = np.array([np.asarray(emb_space[emb]['function_value']).item() for emb in emb_space])
    # We first perform t-SNE on the embedding representation
    tsne = TSNE(n_components=2)
    transformed_data = tsne.fit_transform(data)
    # Store the 2D t-SNE feature as x and y.
    X, Y = transformed_data[:, 0], transformed_data[:, 1]
    xi = np.linspace(X.min(), X.max(), 60)
    yi = np.linspace(Y.min(), Y.max(), 60)
    xi, yi = np.meshgrid(xi, yi)
    # Linearly interpolate Z values on the grid
    zi = griddata((X, Y), Z, (xi, yi), method='linear')
    my_cmap = plt.get_cmap('YlOrRd')
    surf = ax.plot_surface(xi, yi, zi, cmap=my_cmap, edgecolor='none', antialiased=True,
                           linewidth=0, rstride=1, cstride=1, vmin=0, vmax=1)
    ax.view_init(azim=20)
    plt.setp(ax.get_xticklabels(), visible=False)
    plt.setp(ax.get_yticklabels(), visible=False)
    plt.setp(ax.get_zticklabels(), visible=False)
    ax.set_axis_off()
plt.show()
```
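For readers less familiar with the NTK procedure clarified in [W1] above, the gradient-estimation idea can be sketched in a few lines. This is our own illustrative sketch with assumed shapes and a toy objective, not the ZOPO implementation: the empirical NTK of a small, randomly initialized MLP defines a GP over the objective, and differentiating the GP posterior mean at the current point yields the zeroth-order gradient estimate used for the ascent step.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 16                                   # embedding dim, hidden width (illustrative)
W1 = rng.normal(size=(h, d)) / np.sqrt(d)      # theta_0: random init, never trained
w2 = rng.normal(size=h) / np.sqrt(h)

def param_grad(z):
    """Flattened gradient of f(z; theta) = w2 . tanh(W1 z) w.r.t. theta at theta_0."""
    s = np.tanh(W1 @ z)
    g_w2 = s                                   # df/dw2
    g_W1 = np.outer(w2 * (1.0 - s ** 2), z)    # df/dW1
    return np.concatenate([g_w2, g_W1.ravel()])

def ntk(z1, z2):
    """Empirical NTK: inner product of parameter gradients at initialization."""
    return param_grad(z1) @ param_grad(z2)

def gp_mean(z_star, Z_hist, y_hist, noise=1e-3):
    """Posterior mean of the NTK-induced GP fitted on the query history."""
    K = np.array([[ntk(a, b) for b in Z_hist] for a in Z_hist])
    k_star = np.array([ntk(z_star, b) for b in Z_hist])
    return k_star @ np.linalg.solve(K + noise * np.eye(len(Z_hist)), y_hist)

def grad_estimate(z_star, Z_hist, y_hist, eps=1e-4):
    """Zeroth-order gradient estimate: finite differences of the GP mean."""
    g = np.zeros_like(z_star)
    for j in range(len(z_star)):
        e = np.zeros_like(z_star)
        e[j] = eps
        g[j] = (gp_mean(z_star + e, Z_hist, y_hist)
                - gp_mean(z_star - e, Z_hist, y_hist)) / (2 * eps)
    return g

# Toy history of (embedding, score) queries, then one gradient-ascent step.
Z_hist = rng.normal(size=(20, d))
y_hist = np.sin(Z_hist[:, 0])                  # stand-in for black-box scores
z = rng.normal(size=d)
z_next = z + 0.1 * grad_estimate(z, Z_hist, y_hist)
```

In the actual method the updated point would then be projected back onto the finite embedding set and the corresponding prompt would be queried; here the step is shown in isolation.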
Summary: This paper addresses prompt optimization for a black-box API LLM. This paper empirically investigated the objective function landscape of the prompt optimization problem and derived two insights: (I) local optima are usually prevalent and well-performed, (II) choice of the input domain affects the identification of well-performing local optima. Based on these insights, a novel local prompt optimization algorithm based on NTK-GP is proposed. Empirical comparisons have been conducted to show the performance differences between the proposed and baseline approaches. Strengths: * a novel and efficient prompt optimization algorithm targeted for a black-box API LLM * promising performance over baseline approaches * analysis and visualization of the objective function landscape using t-SNE Weaknesses: * The clarity of the algorithm could be improved (see the question part) * The validity of the derived insights is not sufficiently high (see the question part) Technical Quality: 2 Clarity: 2 Questions for Authors: L 118. “We then investigate the function surface (i.e., accuracy landscape) using two different embeddings for prompt candidates in Fig. 4 (more details in Appx. D.1.2) where the embeddings are mapped into a 2-dimensional domain using the t-SNE for better visualization. “ It is not clear how the local optima in the 2D t-SNE space is related to the local optimality of the objective function in the original space? Because the relation is not clear, I am not sure if the insight derived in this paper is valid or not. It is not clear how Section 4.1 is related to Insight (II). What is the novelty of this part? This question is also related to the next question. L165. “We store (z, v) … for constructing the one-to-one inverse mapping.” Because h is a mapping from a discrete space to a continuous space, there does not exist a bijection theoretically. Therefore, it is not clear what the authors mean by this sentence. 
It is also not clear how this goal is achieved. Please clarify this point. L197. “theta_0 is the initialized parameters …“ Is it trained? If so, how and when? Table 1. I couldn’t find the explanation of ZOPO_{GPT}. What is the difference between ZOPO and ZOPO_{GPT}? The performance of the baselines and the proposed approaches are compared only up to 200 queries. It is interesting to see how it will change if more queries are allowed. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: A limitation has been addressed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer bDUw for recognizing that our algorithm is novel and efficient, and its performance is promising. We would like to address your concerns below and hope our response will improve your opinion of our work. --- > [Q1] Validity of our insight Thank you for your insightful comment. We address your concern below: 1. The t-SNE is well-suited for visualizing the landscape because it generally preserves the local structure of original data [R1], including the relative positions of local optima. 2. To further support this claim, we present an **additional result** in Figure[R] 1 of the rebuttal PDF to show that our derived insight is **indeed valid**. - We first identify the local optima points in the original space by comparing $\widetilde{F}(z)$ of each point $z$ with those of the k-nearest neighbors (k=10) around $z$. - We then apply the same t-SNE transformation to these identified local optima points and highlight them in red with an "x" marker in the 2D space. This visualization demonstrates that the local optima identified in the original space generally correspond well with those "peak values" in the 2D t-SNE space, confirming that our derived insights are indeed valid. We will include this clarification and the updated visualization to enhance our manuscript. [R1] Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. *Journal of machine learning research*, 9(11). --- > [Q2] Section 4.1 and its novelty Recall that Insight (II) emphasizes the importance of both the generation and representation of prompt candidates. Motivated by Insight (II), we are the **first** to integrate both the generation and representation of prompts within a **unified** problem formulation (refer to our Eq. 2), which then leads to a more general domain transformation of prompt optimization in our Section 4.1 (as one of the novel contributions of our paper). 
This is in contrast to previous works such as APE and INSTINCT, which focus solely on either generation or representation, but not both. Our domain transformation instead allows for improved prompt optimization by leveraging not only the remarkable generation ability from any type of LLMs (white/black-box, like ChatGPT) but also the impressive representation ability from existing embedding models (refer to our introduction and Sec. 4.1), which in fact has been widely supported by our empirical results in Sec. 5. For example, as shown in Table 1, our approach, denoted as $\text{ZOPO}_{\text{GPT}}$, achieves promising performance on many complex tasks. We believe this contribution can inspire the field and may benefit future research. > [Q3] One-to-one mapping To clarify, the mapping is in fact built on a **finite set**. Specifically, we pre-generate a finite set of **unique** prompt candidates (i.e., $\mathcal{V}=\\{v\\}$), that is, each text prompt $v$ in this set will be unique. With these unique prompt candidates, the complex embedding model usually will produce the **corresponding** unique embedding vectors $\mathcal{Z} = \\{z\\}$. Our mapping $h$ is then defined on these two finite sets $\mathcal{V}$ and $\mathcal{Z}$, i.e., $h:\mathcal{V}\rightarrow\mathcal{Z}$, which therefore leads to the one-to-one mapping. In practice, we only perform the search in the finite set $\mathcal{Z}$ (as shown in Eq. 2) and all the gradient-updated points are projected back into $\mathcal{Z}$, which then finds a unique $v \in \mathcal{V}$ (as shown in line 201-203). We will include a detailed clarification of this point in our revised manuscript. --- > [Q4] theta_0 No, theta_0 is not trained. 
We use the empirical NTK that is based on a random initialization of network parameters to avoid the training process while maintaining a good approximation to the predictions of neural networks, and hence preserving the compelling representation ability of neural networks for the GP regression in our ZOPO. Of note, the effectiveness of the empirical NTK has been widely evidenced from both theoretical and empirical perspectives [2,15,33,34], and is further supported by the compelling performance achieved by our ZOPO in Sec. 5. > [Q5] ZOPO_{GPT} in Table 1 The explanation of $\text{ZOPO}_{\text{GPT}}$ can be found in the third paragraph of Section 5.1. For your convenience, we summarize it as a more straightforward comparison below: - ZOPO: we use the Vicuna-13B model for both prompt generation and representation (specifically, the last token embedding). This choice was made to ensure a fair comparison against existing baselines such as InstructZero [3] and INSTINCT [17]. - $\text{ZOPO}_{\text{GPT}}$: Here, we utilize GPT-3.5 for prompt generation and SBERT for embedding representation, inspired by our Insight (II). This approach leverages the superior generation ability of GPT-3.5, resulting in significantly higher accuracy on challenging tasks like `second_word_letter` and `sentence_similarity`. This demonstrates that our method is capable of performing numerical optimization on ChatGPT-generated prompts. We hope this clarifies the distinctions between the two variants and highlights the strengths of $\text{ZOPO}_{\text{GPT}}$. > [Q6] More queries We would like to first clarify that our work follows the same query setting as previous baselines [3, 17, 44] (up to 200 queries), primarily for a fair comparison. For your interest in the results with more queries, we additionally performed experiments on 4 GLUE tasks (due to the limited budget and time during this rebuttal), extending the number of queries to 1000. 
The experimental results, presented in Table[R] 1 of the rebuttal PDF, indicate that our proposed method ZOPO continues to achieve better or comparable results even in a query-rich setting, compared with the 165-query results in Table 4. --- With our elaboration and additional results, we hope our response has addressed your concerns and improved your opinion of our work. We are happy to provide more clarifications if needed. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: The things are now clearer and the response is satisfactory. Thanks. --- Reply to Comment 1.1.1: Title: Thank you and we hope our clarification can improve your opinion of our work! Comment: Thank you very much for the prompt reply! We are happy to hear that our response is satisfactory. Do let us know if you have any further questions. We would be glad to address them and sincerely hope that our clarifications can improve your opinion of our work.
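The local-optima identification described in [Q1] above — comparing $\widetilde{F}(z)$ of each point against the values of its k=10 nearest neighbors — can be sketched as follows. This is our own illustrative reconstruction on toy data; `local_optima_mask` is a hypothetical helper name, not from the paper's code.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_optima_mask(Z, F, k=10):
    """Mark z_i as a local optimum if F(z_i) >= F at all of its k nearest neighbors."""
    # k+1 neighbors because each point is returned as its own nearest neighbor.
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    _, idx = nbrs.kneighbors(Z)
    return np.array([F[i] >= F[idx[i, 1:]].max() for i in range(len(Z))])

# Toy example: scores peak at the origin, so points closest to 0 dominate their neighbors.
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 5))          # stand-in prompt embeddings
F = -np.linalg.norm(Z, axis=1)         # stand-in objective values
mask = local_optima_mask(Z, F)
```

The points flagged by such a mask would then be passed through the same t-SNE transformation and overlaid on the 2D landscape, as described in the response.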
Rebuttal 1: Rebuttal: ## **Global Response** We sincerely appreciate the insightful feedback provided by the reviewers, which has significantly contributed to enhancing the quality of our paper. We hope we have addressed all questions raised by the reviewers, providing our clarifications and additional results. In this global response, we have attached an **Author Rebuttal PDF** file with the table and figure as additional results to support our response. Below, we summarize the **strengths** of our paper as highlighted by the reviewers: --- The reviewers have positively accepted several aspects of our work: - The experiments conducted are thorough, rigorous, and comprehensive, demonstrating the promising empirical performance of ZOPO. The ablation studies are relevant and insightful (Reviewer `bDUw`, `GMgT`, `TQjw`). - Our designed algorithm is intuitive, novel, efficient, and well-designed (Reviewer `bDUw`, `GMgT`, `TQjw`, `TMXX`). - Our empirical analysis in Section 3 is thorough and well-supported (Reviewer `TQjw`, `TMXX`). - The visualization method we proposed is insightful and could serve as a crucial tool for studying the black-box prompt optimization landscape (Reviewer `GMgT`). - The proposed input domain transformation simplifies the optimization problem (Reviewer `TMXX`). - The paper is well-written and straightforward, with clear explanations of the motivation and reasoning (Reviewer `GMgT`, `TMXX`). --- We would like to express our gratitude once again to the reviewers for their constructive feedback. We hope that our responses and clarifications have further improved your opinion of our work. Best regards, The Authors Pdf: /pdf/838d8f4522ae61820ef5d7a6614a9f23bf221cd6.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
LESS: Label-Efficient and Single-Stage Referring 3D Segmentation
Accept (poster)
Summary: In this paper, the authors propose a Label-Efficient and Single-Stage referring 3D instance segmentation method, which is trained under the supervision of binary masks. They propose Point-Word Cross-Modal Alignment, Query Mask Predictor and Query Alignment modules to achieve cross-modal alignment. Besides, the area regularization loss and the point-to-point contrastive loss are introduced to eliminate interference caused by multiple objects and backgrounds. Experimental results show that the proposed method outperforms the existing state-of-the-art method on the ScanRefer dataset with only the supervision of binary masks. This is a nice paper dealing with an interesting research problem, presenting insights, and being well-written. Strengths: 1. This paper is the first to investigate the single-stage referring 3D instance segmentation task to bridge the gap between detection and matching, only under the supervision of binary masks. The proposed method can provide valuable insights for further research in multi-modal 3D scene understanding. 2. To align features among different modalities, such as fine-grained alignment between point and word features and coarse-grained alignment between masks and sentences, several alignment modules are well designed, which exhibit a certain level of novelty. 3. Experimental results show that the proposed method outperforms the existing state-of-the-art method on the ScanRefer dataset with only the supervision of binary labels. 4. The paper is well-written and is easy to follow. Weaknesses: 1. Methods TGNN and X-RefSeg are not evaluated with the RoBERTa backbone. 2. The explanation of why more queries and layers, as shown in Tab. 5 and 6, do not bring performance gains is missing. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why do the proposed alignment modules work well? 2. How about the training time and inference time comparison? 
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that you recognize the significance of our work. We respond to your concerns in the following:

## Q1: For the validation of TGNN and X-RefSeg with the RoBERTa backbone

Thanks for your detailed reading and for pointing this out. We reimplement TGNN and X-RefSeg with the same RoBERTa backbone as ours to conduct experiments on the ScanRefer benchmark, since these existing works have not applied the RoBERTa backbone as the language module in their experiments. We then follow their settings to train and validate the performance. As shown in the table below, their new results are 28.83% mIoU for TGNN (RoBERTa) and 30.72% mIoU for X-RefSeg (RoBERTa), which are still inferior to our 33.74% and further validate the superiority of our single-stage paradigm. Besides, we observe that when the loss coefficient is set to 10, our LESS outperforms TGNN and X-RefSeg on all the metrics. More discussion will be added in our paper.

| Method | mIoU | Acc@25 | Acc@50 |
| :--------------------------------: | :-------: | :-------: | :-------: |
| TGNN (GRU) | 26.10 | 35.00 | 29.00 |
| TGNN (BERT) | 27.80 | 37.50 | 31.40 |
| TGNN (RoBERTa) | 28.83 | 39.15 | 32.50 |
| X-RefSeg (GRU) | 29.77 | 39.85 | 33.52 |
| X-RefSeg (BERT) | 29.94 | 40.33 | 33.77 |
| X-RefSeg (RoBERTa) | 30.72 | 41.54 | 34.42 |
| Ours (RoBERTa) $\lambda_{area}=1$ | 33.74 | **53.23** | 29.88 |
| Ours (RoBERTa) $\lambda_{area}=10$ | **35.08** | 52.30 | **34.43** |

## Q2: For why more queries and layers do not bring performance gains

Thanks for pointing this out. With an excessive number of queries, many queries become redundant, leading to overlapping or duplicate proposal masks. This redundancy does not contribute to better segmentation and can instead introduce noise, even degrading performance. 
Besides, increasing the number of layers can lead to overfitting, especially with the limited supervisory signals available in our label-efficient setting, i.e., binary mask supervision.

## Q3: For the reasons behind the good performance of the proposed alignment modules

Thanks for your detailed reading and for pointing this out. The Point-Word Cross-Modal Alignment module aligns fine-grained point and word features. Also, the Query Mask Predictor module and the Sentence Query Alignment module make the coarse-grained alignment between masks and sentences. The coarse-grained sentence-mask and fine-grained word-point alignments are effectively coupled to enhance the capability of capturing multi-modal context, which is helpful for our alignment modules.

## Q4: For the training time and inference time

Thanks for your detailed reading and for pointing this out. As shown in the General Response and in the following table, we evaluate the training and inference time of both TGNN and X-RefSeg. All experiments are conducted on an NVIDIA 4090 GPU and the batch sizes of the three methods are kept the same. For the two-stage training and inference of TGNN and X-RefSeg, we followed the settings in their open-source code. We can find that our LESS consumes less time than both TGNN and X-RefSeg in both training and inference.

| Method | Inference (Whole Dataset) (min) | Inference (Per Scan) (ms) | Training (Stage 1) (h) | Training (Stage 2) (h) | Training (All) (h) |
| :------: | :-----------------------------: | :-----------------------: | :--------------------: | :--------------------: | :----------------: |
| TGNN | 27.98 | 176.57 | 156.02 | 8.53 | 164.55 |
| X-RefSeg | 20.00 | 126.23 | 156.02 | 7.59 | 163.61 |
| Ours | **7.09** | **44.76** | - | - | **40.89** |

Thanks again for these constructive comments; we will incorporate all these experimental results and further discussions into our revised version. If you have any questions, please kindly let us know. --- Rebuttal Comment 1.1: Comment: Thank the authors for the rebuttal. 
First, my comments were clearly addressed. Specifically, the validation of TGNN and X-RefSeg with the RoBERTa backbone and the comparisons of the training time and inference time demonstrate that the proposed method achieves the best performance. Besides, the explanations of the performance gains and the motivation of the proposed alignment modules make the contributions of the proposed method clearer. Second, I read the other reviewers' comments as well as the rebuttal information. For the raised questions, such as the validation on other datasets, ablation studies on the mask selection strategy, explanation of the significant differences between the proposed modules and existing techniques, and concerns about label efficiency, the authors also responded accordingly. In summary, this work is relatively novel: it investigated the single-stage referring 3D instance segmentation task to bridge the gap between detection and matching, only under the supervision of binary masks. The proposed method can provide valuable insights for further research in multi-modal 3D scene understanding. Therefore, I am maintaining my original score. --- Reply to Comment 1.1.1: Title: Response to Reviewer Monv Comment: We are grateful for your comprehensive and encouraging review and acknowledgement. We are pleased that you appreciate the technical contributions of our work, which motivates us to explore more interesting directions in the future. Thank you again for your support!
Summary: The paper proposes LESS, a single-stage, label-efficient pipeline for referring 3D instance segmentation. It introduces fine-grained and coarse-grained cross-modal alignment to improve feature matching and employs additional losses to reduce irrelevant predictions. Experiments are conducted on the ScanRefer dataset. Strengths: 1. The motivation of this paper is meaningful. It proposes a label-efficient method for referring 3D instance segmentation, which can significantly reduce annotation costs. 2. The paper is well-written and easy to follow. 3. The experiments on the ScanRefer dataset show promising results. Weaknesses: 1. The experiments are only conducted on the ScanRefer dataset, lacking validation on additional datasets like Nr3D/Sr3D used in TGNN[1]. 2. In the Query-Sentence Alignment module, the final mask prediction is produced by a weighted sum of the similarity score and mask prediction. The rationale for using a weighted sum is not intuitive; why not select the mask with the highest score? 3. The paper lacks references to related literature. The Query Mask Predictor module is similar to the framework used in Mask3D[2], but Mask3D is not cited. [1]. Text-guided graph neural networks for referring 3d instance segmentation, AAAI 2021. [2]. Mask3d: Mask transformer for 3d semantic instance segmentation, ICRA 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you very much for your detailed and constructive comments. ## Q1: On validation on other datasets Thanks for your detailed reading and for pointing this out. We also conducted experiments on the Nr3D and Sr3D datasets, as shown in the General Response and in the following table. The performance of our LESS surpasses that of TGNN by 3.6% and 1.8% on Nr3D and Sr3D respectively. These results present promising potential for future work and shed new light on label-efficient and single-stage exploration for the R3DIS task.

| | Method | Overall | Easy | Hard | View-dep. | View-indep. |
| :--: | :----: | :------: | :------: | :------: | :-------: | :---------: |
| Nr3D | TGNN | 37.3 | 44.2 | 30.6 | 35.8 | 38.0 |
| | Ours | **40.9** | **47.4** | **35.2** | **39.7** | **41.9** |
| Sr3D | TGNN | 45.0 | 48.5 | 36.9 | 45.8 | 45.0 |
| | Ours | **46.8** | **50.5** | **37.8** | **46.6** | **46.3** |

## Q2: On why we do not simply select the mask with the highest score Thanks for your detailed reading and for pointing this out. Firstly, using a weighted sum enables a soft decision-making process rather than a hard, binary choice. This helps in cases where multiple masks have similar scores, allowing the final prediction to be an aggregation of these masks, which can be more accurate than any single one. Besides, we also conducted an ablation study that selects the mask with the highest score on the ScanRefer dataset, as shown in the following table. It shows that aggregating multiple masks leads to better performance than selecting only the highest-scoring one. This approach helps capture nuances and fine details that might be missed when relying solely on a single mask, providing a more robust and accurate final prediction. 
| | mIoU | Acc@25 | Acc@50 |
| :------: | :-------: | :-------: | :-------: |
| Top-1 | 33.18 | 52.93 | 28.65 |
| Baseline | **33.74** | **53.23** | **29.88** |

## Q3: On citing Mask3D Thanks for reminding us of this. We will add Mask3D [1] to our related literature and extend the discussion of Mask3D. [1]. Mask3d: Mask transformer for 3d semantic instance segmentation, ICRA 2023. Thanks again for these constructive comments; we will incorporate all these experimental results and further discussions into our revised version. If any questions remain, please kindly let us know. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have read all the contents, and most of my concerns have been addressed. However, I have to admit that I'm not an expert in this field, so please give more weight to the review opinions of reviewers with higher confidence. --- Reply to Comment 1.1.1: Title: Response to Reviewer pLC3 Comment: We are grateful for your comprehensive and encouraging review. Thank you.
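For concreteness, the soft weighted-sum aggregation versus top-1 selection discussed in this thread could be sketched as follows. This is an illustrative NumPy sketch, not the authors' released code; the softmax weighting, function names, and tensor shapes are assumptions.

```python
import numpy as np

def aggregate_masks(mask_logits, sim_scores):
    """Soft aggregation: weight every candidate mask by its
    query-sentence similarity instead of keeping only the top-1."""
    w = np.exp(sim_scores - sim_scores.max())
    weights = w / w.sum()                      # softmax over queries
    # (num_queries, num_points) -> (num_points,)
    return (weights[:, None] * mask_logits).sum(axis=0)

def top1_mask(mask_logits, sim_scores):
    """Hard choice: keep only the highest-scoring candidate mask."""
    return mask_logits[np.argmax(sim_scores)]

rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 1000))   # 20 mask queries over 1000 points
scores = rng.normal(size=20)           # query-sentence similarity scores
soft = aggregate_masks(logits, scores)
hard = top1_mask(logits, scores)
```

When several candidates receive similar similarity scores, `soft` blends them, which is the behavior the Top-1 vs. Baseline ablation above compares.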
Summary: This paper proposes a label-efficient single-stage method for referring 3D instance segmentation. Specifically, this method enhances feature extraction by integrating multi-modal features, using only binary labels for supervision. It achieves fine-grained alignment between points and words, distinguishing points or objects with similar features. Strengths: This paper introduces a pioneering single-stage method for R3DIS, which I find very appealing. The structure of the article is clear and the writing is fluent. It achieves state-of-the-art (SOTA) performance on the ScanRefer dataset. Weaknesses: I agree with the authors' point that annotating precise instance-level labels in tasks like open-vocabulary segmentation or 3D visual grounding is time-consuming and labor-intensive. Therefore, it is impressive that this method reduces labeling effort by approximately 10 times while maintaining SOTA performance. However, the authors only used mIoU and Acc as metrics for semantic segmentation in the experiments, without using any 3D instance segmentation metrics (e.g., AP). I hope the authors can explain in detail why they did not use instance segmentation metrics. If no instance segmentation metrics were used, why is this task named 3D Instance Segmentation? I hope the authors can validate their method's robustness across multiple datasets, as using only the ScanRefer dataset provides very limited persuasiveness. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses part. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive comments. We address your questions below. ## Q1: On the metrics for Referring 3D Instance Segmentation Thanks for pointing out this question. For the metrics of the Referring 3D Instance Segmentation task, we consistently follow previous works [1,2] and use Acc@kIoU. Acc@kIoU measures the fraction of test samples with an IoU higher than a specific threshold k, where k ∈ {0.25, 0.5}. Its calculation is quite similar to the AP@x metric used for the common instance segmentation task. Hence, we use Acc@kIoU in our main paper, as previous methods did for the same Referring 3D Instance Segmentation task. We hope this explanation helps clarify your confusion. [1]. Text-guided graph neural networks for referring 3d instance segmentation. [2]. X-RefSeg3D: Enhancing Referring 3D Instance Segmentation via Structured Cross-Modal Graph Neural Networks. ## Q2: On validation on other datasets Thanks for your detailed reading and for pointing this out. We also conducted experiments on the Nr3D and Sr3D datasets, as shown in the General Response and in the following table. The performance of our LESS surpasses that of TGNN by 3.6% and 1.8% on Nr3D and Sr3D respectively. These results present promising potential for future work and shed new light on label-efficient and single-stage exploration for the R3DIS task.

| | Method | Overall | Easy | Hard | View-dep. | View-indep. |
| :--: | :----: | :------: | :------: | :------: | :-------: | :---------: |
| Nr3D | TGNN | 37.3 | 44.2 | 30.6 | 35.8 | 38.0 |
| | Ours | **40.9** | **47.4** | **35.2** | **39.7** | **41.9** |
| Sr3D | TGNN | 45.0 | 48.5 | 36.9 | 45.8 | 45.0 |
| | Ours | **46.8** | **50.5** | **37.8** | **46.6** | **46.3** |

Thanks again for these constructive comments; we will incorporate all these experimental results and further discussions into our revised version. 
If any questions, please kindly let us know. --- Rebuttal Comment 1.1: Title: Additional concerns to label efficiency Comment: Thank you to the authors for addressing my questions one by one. However, I believe the first question was not adequately answered. While I understand that you have adopted the same metrics as in previous methods, I would like a clearer understanding of the deeper reason for not using the AP metric in instance segmentation. My guess is that it is because each prediction involves only one instance, so measuring accuracy based on IoU alone suffices. Additionally, there is an important issue I would like the authors to clarify. A core aspect of the paper is its emphasis on being label-efficient, and Table 1 compares the label effort of two-stage methods with that of the method proposed by the authors. To my knowledge, it takes approximately 25 minutes to semantically label an entire indoor scene on a per-instance basis, and about 2 minutes to label a single instance mask, which aligns with the time mentioned by the authors. However, training a typical instance segmentation network requires annotations for about 800 scenes from ScanNet, which totals approximately 25 * 800 = 20,000 minutes. On the other hand, as mentioned in the section "Dataset and Experiment Settings," training a rendering-based instance segmentation network requires 51,583 queries of 11,046 objects. Without considering the time for matching queries with objects, the total annotation time would be at least 2 * 11,046 = 22,092 minutes, which is more than the total time required for standard instance segmentation annotations. Therefore, based on this analysis, I find it unfair and unreasonable for the authors to conclude that their method is label-efficient by comparing only the annotation time for a single sample without considering the total number of samples required for training. I look forward to a more detailed explanation of the above questions. Thank you very much! 
--- Reply to Comment 1.1.1: Title: Response to Reviewer 6zjS Comment: Thanks for your thoughtful reflections on this question and for pointing out the confusion about the calculation and comparison of label efficiency. Below are our analysis and explanation: ## Q1: Rethinking why the AP metric is not used for instance segmentation - **Why previous works [1,2] call this task "instance segmentation":** As we mentioned in the paper, previous works adopt a two-stage workaround. The first stage uses a 3D instance segmentation network to obtain instance proposals, and the second stage uses a network to match the query with the corresponding instance proposals. Because the first stage is of great importance to this task, previous works simply named the task "instance segmentation"; we followed this naming and will extend the discussion in our revised version to rethink how to name this task more strictly. - **Why the AP metric is not used in the R3DIS task:** Thanks for this detailed reading and question. Each query in referring 3D instance segmentation refers to exactly one target, so the IoU metric is sufficient to measure performance on this task. In addition, the Acc@x metrics are also used to measure the accuracy of the segmentation mask. We will consider adding a more detailed comparison between Acc@x and AP to our revised implementation details. - **Rethinking our LESS:** We will add the necessary in-depth discussion to explain and compare referring 3D instance segmentation with the conventional instance segmentation task. Since our method is single-stage and lacks a first instance-proposal stage, the "instance segmentation" in the title may not strictly align with the common instance segmentation task. 
However, the core contribution of our paper is to develop a simple yet effective single-stage method for this community, shifting from the complicated two-stage pipeline to a single-stage one, which leaves more space for future work. We will add more in-depth discussion to the final version of the paper and consider whether to revise the confusing title. ## Q2: On the labelling time for the whole dataset - **The number of scenes needing annotation:** Previous works [1,2] contain a 3D instance segmentation network and a matching network. Based on their open-source code, we find that training a typical instance segmentation network requires annotations for **1513** scenes (train+val) rather than **800** scenes (the number in ScanRefer). Therefore, under the aforementioned assumption that annotating a single scene takes 25 minutes, the overall annotation time required for training should be 25 * 1,513 = 37,825 minutes, rather than the 20,000 minutes estimated before. - **The number of objects needing annotation:** To make a fairer and more reasonable comparison of label efficiency between our LESS and previous works, we compare the number of objects that require labelling. Following the open-source code of the previous works to process the dataset, we find that the number of annotated objects across the 1513 scenes is about **45,711**, even after excluding walls and floors. Our LESS, in contrast, only needs **11,046** annotated objects, which is 4 times fewer than previous works. Note that the objects referred to by ScanRefer are a subset of the objects in ScanNetv2. - **On the time comparison method used in the paper:** As mentioned above, the objects referred to by ScanRefer are a subset of all objects across all scenes. In the time comparison presented in the paper, the comparison of labeling effort is based on a single sample. 
This is because, in real-world scenarios, the model often needs to be adapted to new data to meet evolving referential requirements. In our method, only a single object or a few objects need to be labelled, rather than the entire scene. When scaling up the dataset for this task, our advantage becomes much more significant. As a result, we compared the labeling effort of a single sample in the paper. Thank you for pointing out this issue; we will discuss it further in our paper to reduce any potential ambiguity. In summary, our method takes less time than previous works under either comparison method, which indicates that our method is more label-efficient. We would also like to extend our gratitude once more for your insightful suggestion. We sincerely hope the responses above address your concerns. If you have any additional questions or suggestions, please let us know. [1]. Text-guided graph neural networks for referring 3d instance segmentation, AAAI 2021 [2]. X-RefSeg3D: Enhancing Referring 3D Instance Segmentation via Structured Cross-Modal Graph Neural Networks, AAAI 2024
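The Acc@kIoU metric discussed in this thread (the fraction of samples whose predicted mask reaches an IoU above k with the ground truth) can be sketched in a few lines. This is a minimal NumPy illustration assuming binary point masks, not code taken from the benchmark.

```python
import numpy as np

def acc_at_k(pred_masks, gt_masks, k):
    """Fraction of samples whose binary-mask IoU exceeds threshold k."""
    hits = 0
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        iou = inter / union if union > 0 else 0.0
        hits += iou > k
    return hits / len(pred_masks)

# Two toy samples over 4 points each.
pred = [np.array([1, 1, 0, 0], bool), np.array([1, 0, 0, 1], bool)]
gt   = [np.array([1, 1, 1, 0], bool), np.array([0, 1, 1, 0], bool)]
# sample 1: IoU = 2/3 ; sample 2: IoU = 0
print(acc_at_k(pred, gt, 0.25))  # 0.5
print(acc_at_k(pred, gt, 0.5))   # 0.5
```

Since each query has exactly one ground-truth target, this per-sample IoU thresholding plays the role that AP plays in conventional instance segmentation.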
Summary: This paper addresses the problem of referring 3D instance segmentation, which segments all points belonging to an object in a 3D point cloud described by a query prompt. Previous methods use a two-stage approach requiring both instance and semantic labels, which is labor-intensive. The authors propose LESS (Label-Efficient and Single-Stage), a new pipeline that only requires binary mask supervision, reducing annotation costs. Key innovations include a Point-Word Cross-Modal Alignment module, a Query Mask Predictor module, a Sentence Query Alignment module, and two new losses for weaker supervision. LESS significantly outperforms existing methods on the ScanRefer dataset. Strengths: 1. The paper is clear and easy to understand. The proposed method is straightforward and well-explained. 2. Strong quantitative results are reported on the ScanRefer dataset. Weaknesses: 1. No substantial novelty in module design or architecture; they have already been proposed in other contexts: + Query Mask Predictor: similar to Mask2Former: M2F3D: Mask2Former for 3D Instance Segmentation + Point-Word Cross-Modal Alignment: similar to cross-attention + Query-Sentence Alignment: cosine CLIP score 2. The paper lacks explanations of why TGNN and X-RefSeg perform worse, and in which cases these methods fail. 3. The area regularization loss brings the most significant gain, but is not explored in detail, i.e. an ablation of the loss coefficient. 4. Running time compared to the two-stage approaches? If it is not significantly reduced, there is no advantage over the two-stage approaches. 5. The results with Acc@0.5 are worse than those of TGNN and X-RefSeg (Table 1) while the results with Acc@0.25 are the opposite, indicating that the proposed method is better at locating coarse positions than at exact matching as in TGNN and X-RefSeg. Comment: TGNN and X-RefSeg use a pretrained 3DIS network; when training on this referring 3DIS task, they only need binary mask supervision. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the loss coefficient affect the results? 2. What is the impact if there are fewer than 20 queries in Table 5? 3. Are there any benchmarks other than ScanRefer? The main paper only shows results on one benchmark, which may not be sufficient. How about Sr3D and Nr3D? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, this paper has a section discussing its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive comments. We address your concerns as follows: ## Q1: On the novelty of the module design The core motivation of our approach is to explore a novel single-stage R3DIS pipeline that fully embraces semantic concepts from the text query and visual cues within a unified paradigm, while leveraging efficient binary supervision. Most existing works use a two-stage pipeline for R3DIS: first an instance proposal network, and then a new network to match these proposals to the text query, which separates the interaction between the text query and visual cues. To investigate a label-efficient and single-stage pipeline for the R3DIS task, we delve into a fusion mechanism that better aligns visual features with textual embeddings in a cross-attention manner. Specifically, we design coarse-grained and fine-grained feature alignment strategies, corresponding to our QMP and SQA, and PWCA, respectively. Besides, the area regularization and point-to-point contrastive learning are effective in eliminating ambiguities among multiple objects and backgrounds, since a single-stage approach may struggle with false positives and false negatives without a well-trained proposal generator network. All of the proposed components lead to our competitive SoTA results and the pioneering single-stage R3DIS method, LESS. Moreover, our method needs only binary mask annotations to train the model, while previous ones need both instance and semantic labels, which makes ours a label-efficient and single-stage approach for this task. ## Q2: On why TGNN and X-RefSeg perform worse and the cases in which they fail Firstly, as mentioned in lines 31-34 of the paper, owing to the limited accuracy of the model and the lack of linguistic guidance, TGNN and X-RefSeg may miss the target in the pre-segmentation stage. 
In this case, it is impossible to provide language-related, high-quality instance proposals for the second stage. Besides, compared to our fine-grained and coarse-grained alignment, their visual-language alignment is weak, usually applying language guidance only at the second stage. We deduce that these are the main reasons why both of them perform worse. Moreover, if targets are severely missed in the first stage or queries are complex, TGNN and X-RefSeg perform even worse. ## Q3: On the loss coefficient of area regularization We conduct further ablations together in Q7. ## Q4: On the running time compared to previous two-stage approaches We also conducted comparisons to address this concern; please kindly refer to the table in our **General Response to all Reviewers**. ## Q5: On why the results with Acc@0.5 are worse than those of TGNN and X-RefSeg while the results with Acc@0.25 are the opposite Since TGNN and X-RefSeg are both two-stage methods, their first-stage 3D instance segmentation networks are first well-trained with both instance and semantic labels, which supports higher-precision instance proposals for the second stage than ours. This benefits the Acc@0.5 metric. However, as shown in Q7, our LESS still surpasses both TGNN and X-RefSeg in Acc@50, which indicates that our model is better not only at locating coarse positions but also at exact matching, thanks to our elaborate alignment strategies for training between language information and visual cues. ## Q6: On the detailed settings of TGNN and X-RefSeg In our main paper, for the R3DIS task, TGNN and X-RefSeg both leverage a two-stage paradigm consisting of a 3D instance segmentation network and an instance matching network. Clearly, TGNN and X-RefSeg rely on instance labels, semantic labels, and binary masks to accomplish the referring 3DIS task, rather than "only binary masks", the efficient setup we use. 
Note that we trained the 3D sparse U-Net architecture from scratch without its pretrained weights, which is different from previous methods. ## Q7: On how the loss coefficients affect the results We further conducted ablation studies of λp2p, λseg and λarea. λp2p has a marginal influence on final performance, and we set it to the default 0.05. Reducing λseg is advantageous for obtaining decent performance. We analyze that reducing λseg amplifies the impact of the area regularization loss, thereby facilitating a stronger discrimination between backgrounds and foregrounds and leading to more localized mask predictions. We still observe improvements when λarea increases, especially for the Acc@50 metric, which indicates our area regularization loss can effectively exclude backgrounds and yield more precise masks. We have uploaded the corresponding results in the **PDF**; please kindly refer to it, due to the rebuttal word limit. ## Q8: On ablation studies of the number of queries We further conducted ablation studies on the number of queries on ScanRefer. As shown in the following table, we ablate the number of queries over {5, 15, 20}; using more queries brings higher accuracy, while fewer queries yield lower accuracy. We suppose that too few queries cannot cover comprehensive feature patterns, while an appropriate number of queries is enough to cover the needed feature patterns.

| Num of queries | mIoU | Acc@25 | Acc@50 |
| :------------: | :-------: | :-------: | :-------: |
| 5 | 32.75 | 51.81 | 28.68 |
| 15 | 33.27 | 52.46 | 28.97 |
| 20 (Baseline) | **33.74** | **53.23** | **29.88** |

## Q9: On validation on other datasets We also conducted experiments on the Nr3D and Sr3D datasets to address this concern; please kindly refer to the table in our **General Response to all Reviewers**. 
**Thanks again for these constructive comments; we will incorporate all these experimental results into our revised version.** --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. However, my concerns about the novelty of your work remain unaddressed, particularly regarding how your proposed modules differ from previous methods. I was looking for a clearer explanation of how your approach offers significant advancements rather than just adopting existing techniques. Therefore, I will maintain my original rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer NB5W Comment: Thank you for your response! We are grateful and pleased that our response addressed most of your concerns. Regarding the novelty of our work, we answer from the following points and discuss the differences from previous works. - The novelty of the Query Mask Predictor and Query-Sentence Alignment. As explained in the paper, QMP provides candidates for the target in the scene. However, different candidates have different importance to the target, and most of them contain unrelated background points. To address this issue, we propose the QSA module to weight each mask and query by computing the relationship between the candidate and the query through cosine similarity. Moreover, **different from CLIP [1]**, which uses cosine similarity to select the mask with the highest score as the final result, our final mask prediction is produced by a weighted sum of the similarity scores and mask predictions. Using a weighted sum enables a soft decision-making process rather than a hard, binary choice. This helps in cases where multiple masks have similar scores, allowing the final prediction to be an aggregation of these masks, which can be more accurate than any single one. As shown in Tab.x, we also conducted related experiments on this, which confirm the advantages of our method. 
**In comparison to Mask3D [2]**, our QMP and SQA generate possible mask proposals guided by the language caption, whereas Mask3D's decoder enumerates all possible proposals in a scene without any guidance.

| | mIoU | Acc@25 | Acc@50 |
| :------: | :-------: | :-------: | :-------: |
| Top-1 | 33.18 | 52.93 | 28.65 |
| Baseline | **33.74** | **53.23** | **29.88** |

- The novelty of the Point-Word Cross-Modal Alignment. PWCA is based on the cross-attention structure, which is a prevalent approach in cross-modal fusion networks. **In comparison to the original cross-attention model [3]**, we removed the positional encoding to address the unordered nature of 3D point clouds. Furthermore, we incorporated an MLP-Tanh gate on top of the original model to regulate the information flow between language and visual features. - The novelty of the module design. In contrast to previous research, which solely aligned word features with proposal features, we introduce a novel one-stage coarse-to-fine point-cloud-language cross-modal alignment paradigm. PWCA, and QMP together with QSA, are designed to align words with points and sentences with masks, respectively. Furthermore, another significant innovation is the introduction of a coarse-to-fine background point filtering strategy, comprising the Area Regularization Loss and the Point-to-Point Contrastive Loss, which enables effective processing of highly complex, large-scale 3D scenes. - Finally, related experiments show that, compared to previous works, our LESS is simple yet effective, achieving not only SoTA performance but also efficient time consumption with only binary mask supervision. 
As mentioned in the conclusion of our paper, we aim to construct a straightforward and universal single-stage baseline to establish a solid foundation for this task, rather than focusing solely on the performance enhancement offered by a single novel model architecture. This sheds new light on future exploration of how to better align visual features with language features (cross-modal) or propose different training strategies. We would like to express our gratitude for your valuable suggestions and for highlighting the existing issues. We hope that our response provides the necessary clarification to address your concerns. If you have any additional questions or suggestions, we would be glad to discuss them further. [1]. Learning Transferable Visual Models From Natural Language Supervision, ICML 2021. [2]. Mask3d: Mask transformer for 3d semantic instance segmentation, ICRA 2023. [3]. Attention is All You Need, NeurIPS 2017. --- Rebuttal 2: Title: Response to Reviewer NB5W Comment: Thank you for your questions and suggestions. A recent work, OpenScene [1], established a straightforward and efficient baseline network in the field of open-vocabulary 3D segmentation using a simple cross-modal distillation method driven primarily by cosine similarity, and it was likewise a pioneering exploration. The objective of our work is to present a straightforward and effective single-stage baseline for the task of 3D referring segmentation. In addition to the single-stage architecture design, our work further contributes a single-stage coarse-to-fine cross-modal alignment network and a coarse-to-fine background point filtering loss function. These were developed with the aim of achieving state-of-the-art performance under efficient binary mask supervision, as demonstrated by the extensive ablations mentioned before. Thank you again for your suggestions. 
We sincerely hope that you will evaluate the contribution of our work based on its overall design, the differences it presents in comparison to previous works [2][3], and its role as a cornerstone single-stage pipeline for this task. [1]. Openscene: 3d scene understanding with open vocabularies, CVPR 2023. [2]. Text-guided graph neural networks for referring 3d instance segmentation, AAAI 2021 [3]. X-RefSeg3D: Enhancing Referring 3D Instance Segmentation via Structured Cross-Modal Graph Neural Networks, AAAI 2024
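The PWCA design described in this reply (cross-attention with the positional encoding removed for unordered point clouds, plus an MLP-Tanh gate on the language context) could look roughly like the single-head NumPy sketch below. The projection matrix `Wg`, the bias `bg`, and the exact gating placement are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(point_feats, word_feats, Wg, bg):
    """Points attend to words (no positional encoding), then an
    MLP-Tanh gate regulates the language->vision information flow."""
    d = point_feats.shape[1]
    attn = softmax(point_feats @ word_feats.T / np.sqrt(d))  # (N, W)
    lang_ctx = attn @ word_feats                             # (N, d)
    gate = np.tanh(lang_ctx @ Wg + bg)                       # values in (-1, 1)
    return point_feats + gate * lang_ctx                     # gated residual fusion

rng = np.random.default_rng(0)
d = 8
pts = rng.normal(size=(50, d))       # 50 point features
words = rng.normal(size=(12, d))     # 12 word features
Wg = rng.normal(size=(d, d)) * 0.1
bg = np.zeros(d)
out = gated_cross_attention(pts, words, Wg, bg)
```

Because the point set is unordered, dropping positional encodings keeps the attention permutation-equivariant over points, which matches the motivation given in the rebuttal.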
Rebuttal 1: Rebuttal: # General Response to all Reviewers: Dear all Reviewers, We would like to express our gratitude and appreciation to all the reviewers for their constructive and thoughtful comments on our submission. In this rebuttal, we address the questions raised by the reviewers and clarify how we will revise our manuscript in response to the offered comments. Our paper proposes a novel single-stage Referring 3D Instance Segmentation method that fully embraces semantic concepts from the text query and visual cues within a unified paradigm, which differs from most existing two-stage works that first require both instance and semantic labels for each object and then use a separate interaction between text and visual cues during training. We are motivated and glad that our proposed method is considered well-explained, appealing, and exhibiting a certain level of novelty (Reviewers NB5W, 6zjS, pLC3 and Monv), clearly written and structured (Reviewers NB5W, 6zjS, pLC3 and Monv), demonstrating competitive SoTA performance on the ScanRefer benchmark (Reviewers NB5W, 6zjS, pLC3 and Monv), and shedding light on meaningful future research enabling label-efficient learning for this community (Reviewers pLC3 and Monv). Here we answer some general questions from the reviewers: ## Q1. On evaluation on more benchmarks, especially Nr3D and Sr3D Thanks for the constructive suggestion to conduct more validation on other benchmarks. We further evaluate our method on Nr3D and Sr3D, as shown in the table below. Following the evaluation setting in TGNN, we observe that the performance of our LESS consistently surpasses that of TGNN by 3.6% and 1.8% on Nr3D and Sr3D, respectively. This further demonstrates the effectiveness of our method, which achieves competitive performance on the additional Nr3D and Sr3D benchmarks and presents promising potential for future work. We will add these new experimental results to our revised manuscript. 
| | Method | Overall | Easy | Hard | View-dep. | View-indep. |
| :--: | :----: | :------: | :------: | :------: | :-------: | :---------: |
| Nr3D | TGNN | 37.3 | 44.2 | 30.6 | 35.8 | 38.0 |
| | Ours | **40.9** | **47.4** | **35.2** | **39.7** | **41.9** |
| Sr3D | TGNN | 45.0 | 48.5 | 36.9 | 45.8 | 45.0 |
| | Ours | **46.8** | **50.5** | **37.8** | **46.6** | **46.3** |

## Q2. On comparisons with previous two-stage approaches, especially TGNN and X-RefSeg Thanks for the constructive suggestions regarding more comprehensive comparisons with two-stage methods. Firstly, we compare the training and inference time of our method with both TGNN and X-RefSeg, shown in the following table. All comparative experiments are conducted on an NVIDIA 4090 GPU, and the batch sizes of the three methods are kept the same. For the two-stage training and inference of TGNN and X-RefSeg, we strictly follow the settings in their open-source code. From the table, we find that our LESS consumes less time than both TGNN and X-RefSeg in terms of training and inference cost, which showcases the promising potential of our single-stage approach, LESS.

| Method | Inference (Whole Dataset) (min) | Inference (Per Scan) (ms) | Training (Stage 1) (h) | Training (Stage 2) (h) | Training (All) (h) |
| :------: | :-----------------------------: | :-----------------------: | :--------------------: | :--------------------: | :----------------: |
| TGNN | 27.98 | 176.57 | 156.02 | 8.53 | 164.55 |
| X-RefSeg | 20.00 | 126.23 | 156.02 | 7.59 | 163.61 |
| Ours | **7.09** | **44.76** | - | - | **40.89** |

Secondly, we reimplement TGNN and X-RefSeg with the same RoBERTa backbone as ours and conduct experiments on the ScanRefer benchmark, since these existing works had not used a RoBERTa backbone as their language module in their experiments. We then follow their settings to train and validate the performance. 
As shown in the table below, the newly obtained results, 28.83% mIoU for TGNN (RoBERTa) and 30.72% mIoU for X-RefSeg (RoBERTa), are still inferior to our 33.74%, which further validates the superiority of our single-stage paradigm.

| Method | mIoU | Acc@25 | Acc@50 |
| :--------------------------------: | :-------: | :-------: | :-------: |
| TGNN (GRU) | 26.10 | 35.00 | 29.00 |
| TGNN (BERT) | 27.80 | 37.50 | 31.40 |
| TGNN (RoBERTa) | 28.83 | 39.15 | 32.50 |
| X-RefSeg (GRU) | 29.77 | 39.85 | 33.52 |
| X-RefSeg (BERT) | 29.94 | 40.33 | 33.77 |
| X-RefSeg (RoBERTa) | 30.72 | 41.54 | **34.42** |
| Ours (RoBERTa) | **33.74** | **53.23** | 29.88 |

Overall, our approach not only incurs much lower training/inference cost but also achieves superior performance when equipping TGNN and X-RefSeg with the same RoBERTa backbone. Thanks for the thoughtful comments from all reviewers; we will take these comparisons into consideration by adding these experimental results in our revised version. Next, we answer the specific questions of each reviewer in the individual replies. The following PDF document contains tables for three ablation experiments for Reviewer NB5W, included here due to space limitations. Pdf: /pdf/fea0c9bc012a80acc0f7b5aaa860ee7752c99347.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Autoregressive Policy Optimization for Constrained Allocation Tasks
Accept (poster)
Summary: This paper studies task allocation under resource constraints and proposes a new constrained RL algorithm based on autoregressive policy optimization with a novel de-biasing mechanism. Extensive simulations are provided demonstrating the improved performance. Strengths: The paper is well-written and the algorithm design is novel. The numerical performance of the proposed algorithm is much better than the benchmark: no constraint violation but enjoys better reward than benchmark algorithms that allow constraint violation. Weaknesses: 1. Is there any theoretical guarantee to explain no constraint violation in the simulation? It is quite surprising to me that the proposed algorithm has no constraint violation since the problem formulation allows constraint violation. Can the authors provide some intuitions on such a good constraint satisfaction behavior in the simulation results? 2. In Section 4.1, does the order of the constraints affect the performance (for example, if we first determine the feasible interval for constraint a_2, then determine a_1, will it cause a difference in the performance)? If there is no difference, what is the reason? If there are differences in performances, how to choose an (near) optimal ordering of the constraints to improve the performance? Technical Quality: 3 Clarity: 3 Questions for Authors: See above. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations explicitly in the second last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and helpful comments. We are happy to address your questions below: - W1: Please see our general response, where we address the theoretical guarantees in the form of a proof. - W2: In theory, it is possible to learn the optimal policy regardless of the order as long as the policy is able to sample the complete action space. In practice, however, this might not be the case when the initialization of the policy does not uniformly sample over the action space, it might not be able to overcome this bias and converge early to a suboptimal policy because large allocations for later entities are not sufficiently explored. Instead of looking for the optimal order, we provide a de-biasing mechanism that mostly eliminates the impact of the order, which can be seen in our ablations (Figure 5b). There, we show that the order does not significantly impact the performance of our approach when using our de-biasing mechanism. We performed an additional experiment to also compare the impact of the order without using our de-biasing mechanism. This figure (which can be found in the PDF attached to our general response) shows that reversing the order without our de-biasing mechanisms does create a significant difference in performance. --- Rebuttal Comment 1.1: Comment: Thanks for your responses and the theoretical discussions! However, I think it is better to resubmit this paper next time with the theoretical results added. So I will keep my score.
Summary: The paper presents Polytope Action Space Policy Optimization (PASPO), a novel RL methodology tailored for strict linear constraint allocation tasks. By autoregressively decomposing the action space into manageable sub-problems, PASPO enhances compliance and efficiency without the need for corrective actions. Additionally, an innovative de-biasing mechanism during early training prevents premature policy convergence, showing significant improvements over existing methods in constraint adherence and task performance. Strengths: The paper is very well-written. The idea of PASPO is novel and interesting. The motivation of the problem is clear and has practical significance. The experimental results are clear and the method performs well. Weaknesses: 1. The algorithm requires substantial computing resources, demanding high-performance computational capabilities to effectively manage its intensive processing needs. This entails advanced processors, ample memory, and significant data storage capacity to efficiently handle and execute the complex calculations involved. I appreciate that the authors also highlighted this in the article. 2. The paper lacks a comprehensive explanation of why the algorithm achieves high performance, omitting essential details on its operational principles and optimization techniques. Additionally, there is insufficient exploration of the algorithm's adaptive features, error handling, and scalability under varying conditions. These aspects are critical for understanding its potential effectiveness across diverse scenarios and applications. 3. Given that the allocation problem is formulated as a clear CMDP problem, I am curious whether the authors could provide theoretical guarantees for their approach. As I am not well-versed in this topic, I will consider the insights of other reviewers on this matter. 
Technical Quality: 3 Clarity: 3 Questions for Authors: none Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and helpful comments. We are happy to address your questions below: - W1: Thank you for appreciating that we discussed this in our paper. Regarding the computational requirements, see also our response to reviewer QtCd question 1. - W2: While several reviewers like the presentation and clarity of our paper, they noted that minor details were missing or unclear, which we address in our responses. We also added a proof to show that our approach guarantees constraint satisfaction (see our general response). Furthermore, our formulation of the problem as a CMDP with cost functions caused some confusion, since our approach does not require cost functions. We did this to show the relation to Safe RL and will clarify this in our paper (see also our response to reviewer 3hEE question 4). In case our response did not cover your concerns, could you please point us to more specific questions, we are happy to hear and address them in the discussion. - W3: Please see our general response, where we address the theoretical guarantees in the form of a proof. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I do not have further questions.
Summary: This work considers constrained allocation tasks, like portfolio optimization or server scheduling, and focuses on how to do policy optimization while respecting the given constraints. In these problems, the core difficulty is that sampling over the polytope is challenging. The work puts forward an approach for sampling points that satisfy the constraints, which can be used in known policy iteration algorithms. The approach is logical: iterate over dimensions, determining the feasible range and then randomly sampling. Careful choice and learning of distribution parameters for sampling leads to good outcomes, removing the bias resulting from fixing earlier parameters first. This is used as the core sampling method within Proximal Policy Optimization training. Strengths: The core sampling approach PASPO is presented in a very clear way, and the sampling approach is sensible and quite simple in a complicated space. The authors compare to prior approaches and are able to show that the PASPO approach is more effective and trains faster than prior work. The approach seems to improve significantly on other RL based approaches to the given problems, though I am not close enough to the area to validate that. Weaknesses: The sampling approach put forward is clear and straightforward. At the same time, the level of novelty involved in the solution is not fully clear - it is a greedy sampling approach with carefully constructed weighting to avoid bias. With that said, if that gets the improved results, then much better to have that come from a simple solution than a much more complicated solution. Technical Quality: 3 Clarity: 4 Questions for Authors: You discuss the high computational cost of your approach - can you discuss it in the context of the other RL approaches compared against in Figure 4? Confidence: 1 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed limitations of the work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and helpful comments. We are happy to address your questions below: - W1: The novelty of our approach lies in utilizing the properties of the polytope to efficiently parameterize a stochastic policy over it in an autoregressive way that can be optimized using standard reinforcement learning algorithms like PPO. Additionally, we introduce a novel de-biasing mechanism. We are happy that the reviewer acknowledges the good results that our approach achieves. - Q1: As discussed in the paper, our approach introduces some additional computational cost, since we are required to solve linear programs in our approach. Our approach does not have particularly demanding hardware requirements and can be trained on regular servers, PCs, or laptops. It is difficult to directly compare the runtimes because we trained on a cluster with various different servers and also ran multiple training runs in parallel, which impacted the duration of training. Therefore, we were hesitant to report exact numbers in the paper. Note that the runtime also depends on the number of constraints and dimensions of the action space and we did not optimize our implementation for runtime. For example, in our synthetic benchmark of dimensionality 7 and 150k environment steps, OptLayer took around 24h to train and our approach also took around 24h. In contrast to the other benchmark approaches, OptLayer can also guarantee constraint satisfaction. Approaches like IPO, CPO, etc, take around 10 minutes but do not guarantee constraint satisfaction at all times. Let us also note that we trained only on CPUs and used scipy to solve the linear programs. Note that works exist that utilize GPUs for solving linear programs (for example see [1]), which could be utilized to drastically speed up training. [1] Spampinato, Daniele G., and Anne C. Elster. "Linear optimization on modern GPUs." 2009 IEEE International Symposium on Parallel & Distributed Processing. 
IEEE, 2009.
Summary: The paper presents a method for a specific setting of constrained RL. The setting deals with constraints on the simplex of the action space. The authors motivate this setting with resource allocation problems. The proposed method uses sequential conditional sampling of actions to impose constraints, and they also propose a de-biasing mechanism to be robust against initialization. Strengths: - The paper is clearly written and easy to understand with references made to figures often - Presents an algorithm for hard-constraint type problems and achieves good empirical performance. - de-biasing mechanism Weaknesses: - The paper is motivated by a requirement of guaranteed constraint satisfaction; however, the result is empirical and there are no theoretical guarantees of constraint satisfaction. - The algorithm is explained in words in different parts of the paper but it misses an overall algorithm or flow of the framework. - If I had to solve the same problem (maximizing an objective while having joint constraints on the action space), I would directly sample actions from the constrained simplex and apply an RL algorithm, e.g., PPO, to it (I think you can do rejection sampling to sample from the constrained simplex). Am I missing something? How would this perform as compared to PASPO? Also, it would be nice if you could explain in detail why you think one would work better than the other. Technical Quality: 3 Clarity: 3 Questions for Authors: - Since the constraint satisfaction is empirical, I would ask the authors to not use the word guarantee for it. - While reading the paper, I had a question of how much de-biasing helps. It's nice that the authors did an ablation study. Which environment is used for the ablation study? And do you expect similar performance for all the environments? - In Figure 5 b) what does reverse allocation order mean? - What is the final algorithm? What is the objective of PPO and do you impose any constraints as you describe in lines 135 and 133? 
or are constraints indirectly applied by restricting the action sampling? - Why do you use the word auto-regressive? - Why do you sample actions sequentially and not jointly all at once? - What are the state and action space dimensions of the environments used for the empirical study? - comment to improve the paper (optional) I understand that resource allocation is an application where the framework directly fits. However, satisfying hard/state/action-wise constraints is also not so trivial with general constrained RL algorithms. A possibility could be that the authors do not directly focus on allocation tasks but rather on a framework for safe RL with a particular type of constraints and later motivate it with the application of resource allocation. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and helpful comments. We are happy to address your questions. We try to be as detailed as possible given the 6k limit. - W1/Q1: Please see our general response regarding the theoretical guarantees. - W2: Thank you for the suggestion. To assist readers in understanding the overall flow of our approach, we will include a detailed algorithm and a visualization of the flow. - W3: Sampling from the constrained simplex is not sufficient for RL. We need to define a stochastic policy over the polytope and update its parameters using a gradient in order to increase or decrease the probability of drawing these samples. Parameterizing such a differentiable distribution via rejection sampling is not straightforward. However, some algorithms use a related concept. They sample from a distribution over a superset, e.g., an unconstrained polytope, but instead of rejecting actions that fall outside the constrained polytope they project these onto the constrained polytope. OptLayer [18] is a prominent instance. However, action projections can lead to a biased policy gradient [15]. In contrast, our approach defines a stochastic policy only over the polytope and does not suffer from this problem. The experimental results also empirically demonstrate that we can outperform projection-based approaches like OptLayer. - Q2: We use the synthetic benchmark without additional allocation constraints (i.e., only the simplex constraint) for our ablation. As in our main experiments, this environment is 7-dimensional. Since this was not sufficiently clear in the paper, we will add more details regarding the setting to the paper. We decided not to use additional allocation constraints because this makes the effect of de-biased initialization most pronounced since each constraint limits the possible allocations (e.g., if one is required to allocate at least 90% to one entity, there is not a lot of choice for the remaining allocations). 
Since the bias can occur regardless of the environment, we expect a similar performance in other environments. - Q3: We will clarify this in the paper. To derive a considerably different allocation order, we reverse the allocation order. That means, instead of allocating to entity $e_1, \ldots, e_n$, we allocate in the order $e_n, \ldots, e_1$, i.e., we allocate the last entity first, the second last ($e_{n-1}$) second, and so on. - Q4: > “do you impose any constraints as you describe in lines 135 and 133? [...]” We apologize for the confusion. The last paragraph of section 3 is somewhat misplaced as it describes how the problem setting can be reformulated to apply methods from Safe RL. Thus, we will move this definition to the experiments. We do not use a cost function, instead we apply the constraints to the action space and our approach ensures that actions always comply with the constraints. > “objective of PPO?” Our approach defines a differentiable stochastic policy function over the constrained action space and can be directly optimized using the standard objective of PPO. - Q5: In our approach, each step depends on the outputs of all previous steps (cf. Figure 2 in the Appendix), similar to an autoregressive model. Hence, we use the word “autoregressive”. - Q6: RL algorithms that are policy gradient based, require sampling actions from the current policy, i.e., a distribution which is parameterized by a neural network. However, directly defining such a distribution over a polytope is extremely difficult. In fact, even efficiently generating uniform samples from a polytope is a difficult problem subject to ongoing research [8]. Therefore, our approach decomposes the problem into dependent sub-problems. Because of this autoregressive dependency, we need to sample an action sequentially. Note that this decomposition and process allows for training using standard RL algorithms. - Q7: We will add further details to the Appendix. 
In portfolio optimization the state space has 27 dimensions and the action space 13. In the computational workload distribution environment the state space has 18 dimensions and the action space 9. In our synthetic env, we have a discrete number of states, which are one-hot encoded. Our synthetic env uses a complex reward surface which is well suited to study the properties of various approaches, especially how well and fast approaches can optimize an arbitrary reward surface. Therefore, we do not want to focus too much on other properties of RL that are mostly orthogonal to our approach, such as delayed rewards, learning complex state representations, etc. As a result, we set the number of states to two and use a 7-dimensional action space. - Q8: Our approach can be applied to all constrained RL tasks, where the action space is a convex polytope, as long as the constraints are explicitly given. However, since many people think of Safe RL in settings where the constraints are not explicitly given and typically only soft satisfaction is required, we decided to focus on allocation tasks. However, as shown in our problem definition, constrained allocation tasks can be defined within the framework of Safe RL and we discuss the relation to Safe RL in our related work.
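The sequential procedure described above (determine the feasible interval for one coordinate via linear programs, sample within it, fix it, and repeat) can be sketched with scipy's LP solver, which the rebuttal mentions using. This is an illustrative sketch only: the uniform per-step draw below stands in for the paper's learned policy distributions, and the function name is our own.

```python
import numpy as np
from scipy.optimize import linprog

def sample_polytope_autoregressive(A, b, n, rng):
    """Draw one point from {x in R^n : x >= 0, sum(x) = 1, A x <= b} by
    fixing one coordinate at a time. Each step solves two LPs to find the
    feasible interval of the next coordinate, conditioned on the coordinates
    already fixed. Assumes the polytope is non-empty."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    fixed = []
    for k in range(n - 1):
        rest = n - k                       # number of still-free coordinates
        A_rest = A[:, k:]                  # inequalities on the free part
        b_rest = b - A[:, :k] @ np.array(fixed)
        A_eq = np.ones((1, rest))          # free coordinates must sum to
        b_eq = [1.0 - sum(fixed)]          # the remaining budget
        c = np.zeros(rest)
        c[0] = 1.0                         # objective: the next coordinate
        lo = linprog(c, A_ub=A_rest, b_ub=b_rest, A_eq=A_eq, b_eq=b_eq,
                     bounds=[(0, None)] * rest)
        hi = linprog(-c, A_ub=A_rest, b_ub=b_rest, A_eq=A_eq, b_eq=b_eq,
                     bounds=[(0, None)] * rest)
        low, high = lo.fun, -hi.fun        # feasible interval for x_k
        fixed.append(rng.uniform(low, max(low, high)))
    fixed.append(1.0 - sum(fixed))         # last coordinate is determined
    return np.array(fixed)
```

Because the projection of a convex polytope onto one coordinate is an interval, any value sampled within `[low, high]` leaves the remaining system feasible, which mirrors the induction argument in the authors' proof sketch.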
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed and constructive feedback. We will integrate many suggestions in the paper and are happy that our approach has been well received. Specifically, we are glad that reviewers find that: - our paper is well-written and clearly presented: - “PASPO is presented in a very clear way” (QtCd) - “The paper is clearly written and easy to understand with references made to figures often” (3hEE) - “The paper is very well-written.” (3WTq) - “The paper is well-written” (2peK) - our solution is effective and the experiments are well-conducted: - “The experiment results are clear and performs well.” (3WTq) - “Presents an algorithm for Hard constraint type of problems and achieves good empirical performance.” (3hEE) - “The authors compare to prior approaches and are able to show that the PASPO approach is more effective and trains faster than prior work.” (QtCd) - “The numerical performance of the proposed algorithm is much better than the benchmark: no constraint violation but enjoys better reward than benchmark algorithms that allow constraint violation.” (2peK) - our paper presents a novel and significant contribution: - “The idea of PASPO is novel and interesting. The motivation of the problem is clear and have practical significance.” (3WTq) - “[...] the algorithm design is novel.” (2peK) Before we address the questions raised in our specific responses to the individual reviewers, we would like to address a point that has come up in multiple reviews: ## Can we provide theoretical guarantees that our approach always ensures constraint satisfaction? **We will add a proof to the paper that shows that our approach always guarantees constraint satisfaction and that our approach is able to generate every possible constraint-compliant action. 
The proof is included in the PDF file attached to this response.** In short, we show that the set of all actions $A$ that can be generated by PASPO is equivalent to the original constrained solution space $P$ of the system of inequalities. We prove via induction that PASPO will always generate a point $a^*$ that satisfies all constraints if $P\neq \emptyset$. The intuition is that when we determine the allocation to a single entity (that is, we fix the value of a single variable in the inequality system), we express this by adding further constraints to the original system of inequalities. Therefore, every solution for the resulting system of inequalities must also be a solution for the original system of inequalities. Furthermore, we show that any point within $P$ can also be generated via PASPO and that $A=P$. A geometric illustration of this process can be found in our paper in Figure 2. In the individual responses, we refer to questions and weaknesses in order of their occurrence (i.e., W1 refers to the first weakness, Q1 refers to the first question). Apart from the proof, the PDF attached below also contains the result of an additional experiment conducted to better answer the question of reviewer 2peK regarding the impact of the allocation order in our approach. Pdf: /pdf/46a5ce96d7fc014ebc89ee86d9bab4992051d5a3.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
An engine not a camera: Measuring performative power of online search
Accept (poster)
Summary: The authors describe performative power, a pre-existing proposed measure of platform market power, and give an approach to measuring performative power using a browser extension. The browser extension perform random modifications to the search results page of results from target search engines, and measure click behavior. The modifications and user clicks are logged, providing sufficient statistics to compute a variant of the performative power metric. They deploy the extension to 85 users, resulting in about 57k clicks, and produce performative power calculations based on these interactions. The authors then provide some discussion of the results. Strengths: I feel there are two main strengths to the paper 1. The authors make the case for measuring performative power using a browser extension 2. The authors created a browser extension to measure performative power. They appear to have been somewhat careful, discussing issues with hiding the DOM until rewriting is complete, discussing some latency numbers that are not bad (around 40ms, certainly enough to impact user behavior, but not enough to be wildly visible). They discuss privacy implications of their storage and logging of user search events, and have taken a reasonable approach with this, logging only intervention ids and click positions. Weaknesses: W1: supporting ongoing work. I didn't see a reference to the source code for the browser extension, or any discussion of it being available for further work -- this seems like a surprising omission, though perhaps I didn't read the right section? W2: on novelty, I have a hard time characterizing exactly where the main contribution of the paper lies. The development of the performative power measure, and the argument for its relevance in regulation, comes from prior work. 
The understanding of the impact of position on click likelihood is also quite heavily studied, so the headline numbers the authors show have reasonable support in the literature already. The authors make a small modification to the definition to incorporate changes to the page beyond the organic results, but this does not seem to be the key point. The observation that PP can be computed from a browser extension seems fairly straightforward, not a centerpiece for the argument of novelty -- such approaches have been taken for modifying search results and measuring interactions in the past. The paper could arguably make the case for taking a straightforward idea (browser extension for performative power) and exploring the many thorny problems in designing and deploying this measurement, but my second primary concern below is that these issues remain largely outside the scope of the paper. Overall, I feel that there isn't a clear case to be made for the dimension of novelty. W3: platform power. The authors paint a picture of developing a causal understanding of the "power" present in the platform to differentially route user attention across resources. My concern is that some interventions are sustainable (ie, the platform could actually implement such a modification) while others are not. As an extreme and somewhat laughable example, consider the intervention that replaced every search result with a link to the CEO's gofundme account. This would change clicks dramatically, resulting in high performative power, but nobody would argue that the platforms could sustainably deploy this intervention. Instead, the goal is to consider power the platform has to alter the *ongoing* distribution of user attention to online resources, so making this measure robust requires significant attention to the issue of reasonable interventions. The related literature the authors cite (cascade, eye-tracking, MLIR) is quite careful in these areas. 
It is not clear that placing a pure navigational result at position 3 instead of position 1 is sustainable, and the issue is not so simple: users habituate to platform behavior, and will respond differently when the platform stops behaving as expected, both in determining which results to consider and hence where to click, and in bigger ways (ie, changing providers). Correspondingly, platforms themselves rely on user feedback, which would be adulterated by these types of interventions, with unclear implications. For a proposal intended to be a "blue print for how performative power can be used to guide digital market investigations," I think it's a significant omission not to consider these issues. Technical Quality: 2 Clarity: 3 Questions for Authors: I'd love to hear the authors' thoughts on the three issues I raised in the weaknesses section above. Here are a few smaller questions also (I'll start numbering at 4, assuming the three points above are questions 1-3) 4. Could you discuss the fact that you consider only a single click, given that the intervention is likely to produce a higher number of clicks than the control (as satisfaction with the top result will be lower)? Likewise, the browser extension has the capability to consider more details of the interaction, such as immediate click-back from an unsatisfactory result page, followed by a click on a following page. I know it's biting off a lot to consider these types of interactions, but perhaps you could discuss a little. 5. Nit: line 112 has a typo in the cite of Chuklin et al 6. Could you clarify the triggering for arrangements a4-a6? Do they trigger only on results that have ads / shopping content? How is that accounted for in the analysis? Sorry, this may already be specified in the paper and I may have missed it. 7. Latency sounds pretty good, but probably worth referencing literature about the impact on search user behavior from additional latency of this magnitude 8. 
Study participants are trusted individuals solicited by personal outreach -- this suggests they will likely trust the researchers, tend to give the browser plugin the "benefit of the doubt," and so forth. Could you discuss the issues of conditioning that result from this choice of users? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: I feel that the limitations I discuss above should be covered in more detail by the authors. For ethics review, the authors indicate that this type of study does not require IRB approval at their institution, so I take this at face value. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful feedback and comments. We hope to provide additional clarification and address your questions in the following. **W1: Source code.** The extension has been published in the Chrome store. The code can be inspected using the developer console in the Chrome browser. We did not include the link for anonymity reasons, but we will add it to the final version. If you wish to inspect the code, we uploaded it to an anonymized repository shared with the AC. **W2: Novelty.** The core contribution of our work is to take a theoretical concept and provide a first demonstration for how it can be instrumentalized in a practical scenario. A priori it is unclear how to relate performative power to micro experiments that are feasible to perform on digital platforms. Our experimental design together with the required theoretical assumptions outline a possible avenue for doing this. By relating tools from computer science with a policy relevant question at the heart of a major antitrust investigation our work provides a promising interface between regulatory questions and the expertise of the NeurIPS community. **W3: Choice of interventions.** Note that the instantiation of the actions set $\mathcal A$ is part of the performative power definition. Different choices are valid, also larger changes, but they lead to different conclusions about power. So it needs to be instantiated carefully in any given context. In our case we care about the effect of actions the platform has implemented in the past (downranking search results and adding visually appealing elements). Our counterfactuals are designed carefully to give us insights into performative power related to these interventions. As you pointed out it is important not to interleave experiments with treatments that have a disruptive impact on user experience and behavior, as we want to measure the effects of our interventions under natural interactions of users with the platform. 
Being exposed to one intervention should not impact the behavior for a subsequent query. We make this requirement explicit in Assumption 1. It is also why we refrained from including larger swaps as an additional treatment group, after feedback from an initial testing phase. All our counterfactuals constitute minimal interventions. As you point out, the largest modification we perform is to move the first search result down by two positions. This is comparable to having two Ads on top of the page and it typically does not move the result out of sight. Based on our own experience and feedback from participants there is no evidence that the extension was noticeable to any of the participants. It is also important to note that modifications are triggered at random. This offers no structure a user could implicitly or explicitly adapt to. The swap 1-3 only happens with probability ⅛ for any user query (including navigational queries), and for 45% of the queries the extension does not swap results. We checked the behavior in the control group by splitting the data collected before and after November 10. The average click rate for the first result differs by 0.8% across these two groups (the base rate is 43%), which is significantly smaller than the sampling uncertainty. We will think of a more rigorous experiment to add to the appendix to demonstrate that the behavior of participants is stable across the duration of the study. **Triggering a4-a6.** Arrangements a4-a6 are each triggered with a fixed probability (p=1/6). If there is no content to hide for a query, a5 solely performs a swap and the other interventions leave the website as is. This ensures that treatment assignment is independent of the potential click outcome, which is important to obtain unbiased estimates. On aggregate the effect of hiding Ads will naturally be larger if Ads are present more often. 
**Latency.** The latency refers to the time for loading the search page after entering the search query. A technical report on Google [1] could not find any effect for delays below 100ms on search behavior. Similarly, [2] find that “up to a point (500ms) added response time delays are not noticeable by the users.” Our delay of 40ms is below any of these thresholds and thus not expected to impact our results. We will include these references. **Considering a single click only.** As you pointed out we are not making a distinction between follow-up searches and new searches. What we measure is the ability of the platform to steer clicks treating all clicks equal. To analyze a different outcome variable, such as a specific type of click, we would have to adjust the experiment accordingly. However, the scenario you mentioned will still meaningfully affect our measurements. If a user returns to click the second result after clicking the first under intervention a1, this will surface as a reduction in the performativity gap. This leads to the interesting observation that performative power is larger in situations where equally relevant results are being ranked. **Participants tend to give the browser plugin the "benefit of the doubt."** There is a bias in the selection of participants, as people who trust us in performing a legitimate scientific experiment and take privacy seriously, are more likely to install the extension. But once they have the extension installed, the experiment is randomized at a query level and we can safely assume that they consume Google search in a natural way. There is no structured change they could respond to. We even deleted the data from the first 4 days after onboarding to be on the safe side and avoid potential biases due to participation awareness. Please clarify if we misunderstood your last questions. But we hope the additional context is helpful. References: [1] J. Brutlag. Speed matters for google web search. Online report by Google. 
2009. [2] Arapakis et al. Impact of response latency on user behavior in web search. SIGIR. 2014. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for clarifying about the chrome extension -- this answers the question completely. Regarding the other questions, I think the authors have made reasonable points in terms of extensibility of the framework for other actions and measures. I think my remaining question centers around weakness #3. Let me lay out the concern here more fully, and the authors can perhaps respond with counter-arguments, point out things I'm missing, etc. The authors' rebuttal discusses that users will not habituate to the intervention because the intervention is randomized and reasonably infrequent. I don't think this is true. If navigational queries result in the correct answer at organic position #1 97% of the time, that's good -- users develop an expectation of the engine's behavior for that query class. Now, let's say that 12% of the time, an intervention moves the #1 result to #3, so the correct answer is now at #1 85% of the time. This is a massive change in the experience: the error rate grew by a factor of 5 from 3% to 15%. A competitor might find the top result 96% of the time. The intervention could be made less often to reduce this gap, but consider what is happening here: we are considering what level of intervention would not change the user's perception of value from the system, but immediately then measure the PP of the system under the assumption that the intervention is applied 100% of the time -- this clearly represents a different user experience, and the platform would resultingly occupy a different position in a new equilibrium. In the language of the original PP paper, performative power is known to go to zero as competition increases towards a state of perfect substitutability. 
The question of how much a search engine can adulterate its experience before it suffers competitive losses is therefore a question about the state of competition. My main concern here is that a careful handling of this question is required: just how much can the platform really modify its results before, over time, users begin to realize that a competitor provides better performance. This question seems quite tricky. So my concern is that the current paper is an empirical study of performative power in search ranking, but leaves out the attempt to tackle what appears to be perhaps the single most critical issue in understanding just how much power a platform has. Note further that the original paper considers this question of the PP of a platform with respect to viewers in a carefully-chosen setting: that of content recommendations of content hosted uniquely at the platform under study. There is no perfect substitute competitor in this setting. Web search, on the other hand, is fundamentally about providing access to a public database of content, so the issue of what competitors do is much more important. I wonder if the authors could engage a little on this and share thoughts. --- Reply to Comment 1.1.1: Title: Additional clarifications Comment: Thank you for providing additional context, now we better understand your main concern. Let us explain why measuring the effect of reranking on clicks, holding the current state of the market fixed, is actually what we care about. The Google Shopping case is about the ability of the search algorithm to distort traffic in the downstream market of comparison shopping services (CSS), a market in which Google is also competing. It is about the effect the search algorithm can have on other online businesses that receive large portions of their incoming traffic through Google search. Consider a specific competitor offering a CSS service. 
Now if an update to the search algorithm consistently down ranks their website by two positions this has a significant impact on the clicks they get and hence their business, without necessarily impacting users’ search experience nor Google’s position in the market of search. In fact Google would most likely aim to avoid updates that negatively impact search user's satisfaction or retention. But it is a fact that Google has significantly down-ranked competitors in their ranking for certain Shopping queries (sometimes even moving them to the second page). Our goal is to establish a plausible lower bound on the effect of such a ranking change. Thus, being careful to measure the effect of reranking without impacting user's browsing behavior is not an omission but part of the design. An important distinction compared to the example of performative power you refer to in the original paper is that here the effect and the conduct are happening in different markets. We hope this clarifies your concern. We are happy to discuss more if it is helpful.
Summary: This paper describes an online experiment seeking to measure how much power online search providers have in terms of impacting what content people consume. In short, the study attempts to measure the causal effect of small ranking rearrangements on click-rate for a population of web users. The authors present experimental data that can be used to estimate the expected impact that operators of search engines, recommender systems, and other ranking technologies might have on viewership of items when they make small changes to their rankings. Strengths: In terms of originality, quality, and clarity: - Originality is high. The authors review (in reasonably terse fashion) a number of studies that have sought to understand the impact of ranking items in search on attention (i.e., clicks and visual attention that items receive). The study is grounded in this past work and notes its major contribution is to begin studying this effect experimentally. - Quality of the study is high overall. See some concerns / questions below, but overall I would consider this to be a very convincing set of results. - Clarity is very high throughout. Experimental details are crisply described. In terms of significance - Reasonable dataset (for this kind of study) with 57,000 "real" queries from 80 participants - Known caveat that getting this data without direct access to search operator datasets is prohibitively expensive, hence why this is novel. Weaknesses: Two weaknesses (with respect to venue fit and potential of the current draft) stood out: First, a minor note: this kind of experimental work might be somewhat unusual at NeurIPS as it doesn't engage with the modelling side of ranking. Personally, I do not think this should be a blocking reason -- I think many in the community would like to see future works that incorporate this kind of experimental approach -- but it felt like an important piece of context to note.
Second, some readers may be concerned with perceived issues with ecological validity. To some extent these are unavoidable in any experimental study like this -- there will always be some set of ecological validity concerns, they just trade off with each other. - While this choice is reasonable, it may impact the kinds of queries used: "The study participants are trusted individuals of different age groups and backgrounds, recruited by reaching out personally or via email." - It might be helpful (esp. given the work may impact policy discussions) to know more about the domain / type of queries, but very reasonable privacy choice to avoid sharing any information about that. - Very minor: It's (by choice of technology companies) unclear if the kinds of perturbations studied here map to the kinds of a/b tests or experiments that are frequently rolled out by those companies. I don't think this is something the paper needs to address explicitly, but is also worth noting. Technical Quality: 4 Clarity: 4 Questions for Authors: A few questions that might be worth addressing in the next version of the paper - Are there major constraints you would expect to face if trying to apply these methods to arbitrary types of other ranking platforms (feeds, matching/marketplaces platforms, etc.). - To what extent, if any, do you expect the specific algorithmic / modelling choices made by platforms to matter here? - Are there domains that might be hard to study, because e.g. the number of items is too high? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations of the methods are reasonably discussed throughout. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback on our work. In the following we first discuss your questions and then provide some thoughts on the additional comments below. **Applying method to other use cases.** This is an interesting point we have not discussed in our work. An important feature of our design is that the complexity of the underlying algorithm does not impact the experimental design. The intervention is implemented at the level of the user interface, rather than the algorithmic system. This means the system does not have to be changed. From a technical viewpoint, changing how results are displayed to users can in principle be applied to any website using a browser extension, independent of how these results are selected by an underlying algorithm. However, one important constraint is that the updates that can be emulated at the display level are limited to the information available on a website. This means, we can swap and hide elements, but not replace them with an alternative that is hidden. Similarly, we can not use any proprietary data the firm might use to determine content allocation. This limits the updates we can emulate. However, for the purpose of lower-bounding performative power, it is sufficient to argue for one feasible update that can be implemented. Evaluating the effect of this update provides a lower bound on the corresponding definition of performative power. Here, the swap of adjacent results usually offers a good proxy for potentially larger updates a firm could implement. **Domains that are harder to study.** In light of the constraint mentioned above the design of counterfactuals is harder for settings where the firm offers the user fewer choices, like in the case of the Amazon Buy Box where a single element is selected to be displayed behind a visually appealing button. In such cases natural updates are harder to argue without additional insights into the algorithm, or the alternative options. 
A large number of items presents the opposite situation, which does not seem to be a problem, but we are not entirely sure we understand the question. One interesting aspect of a large number of items is usually that more of them can be relevant, which leads to a stronger effect of ranking and a larger performativity gap. But this would just impact the effect size, not the implementation. **Ecological validity.** As the reviewer mentioned, concerns related to ecological validity are unavoidable to some extent. What we can offer are additional ablations and robustness checks where we remove individual participants from the evaluation to check the sensitivity of the statistics. Results are shown in the supplementary PDF, but error bars are still very small. **Recording query.** Thank you for supporting this design choice. Search queries are among the most sensitive types of personal data. For our proof of concept it was not necessary to collect this information, which is why we refrained from doing it. However, we agree that when inspecting more nuanced policy-related questions additional information might be necessary to record, and the benefits may outweigh the privacy costs in a specific case. However, the exact question should come first and guide the corresponding decision. From a technical perspective there is nothing preventing us from recording user queries; we have the information readily available. For a future study we might consider storing an embedding of the search query, but for the current data we simply do not have this information at our disposal. **Why we chose NeurIPS.** We agree that this type of experimental work is less typical from the perspective of developing better ranking models. However, from the perspective of causal inference and algorithmic auditing, directly tackling the measurement problem is quite natural.
We also believe that decoupling modeling from empirical measurement is very important for developing investigative tools in the context of AI systems. Modeling behavioral aspects in the interaction of participants with platforms is very complex and this complexity should not prevent us from developing effective monitoring and measurement tools. Additional arguments for why we think our work is interesting for the NeurIPS community and falls under the call for papers can be found in the response to Reviewer 9NWC, but it does not seem that we need to convince you of this. We hope our rebuttal addresses your questions satisfactorily and we will incorporate additional discussion of the points you raised in a future version. References [1] Hardt, Jagadeesan, and Mendler-Dünner. "Performative power." NeurIPS. 2022. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for this response and additional information. I stand by my positive review, and would echo the argument for why this particular paper and kind of paper does fit the current CFP.
Summary: The paper titled "An engine not a camera: Measuring performative power of online search" presents a study on the performative power of online search engines, specifically focusing on how they can influence web traffic through the arrangement of search results. The authors designed and executed an experiment to quantify the ability of search engines to steer user clicks by rearranging search results, a concept known as performative power. The main contributions of the paper are as follows: **Conceptual Framework**: The authors build upon the recent definition of performative power to develop a framework for understanding the influence of digital platforms, particularly search engines, on user behavior. **Experiment Design and Quantitative Findings:** The study involved tens of thousands of clicks collected over a five-month period from more than 80 participants, providing robust quantitative insights into the causal effects of search result arrangements. The paper reports significant quantitative findings, such as the observation that consistently downranking the first search result by one position causes an average reduction in clicks of 42% on Google search, and even more for Bing. Strengths: **Originality:** The paper introduces a novel concept, the performative power of online search engines, which is a significant contribution to the field of digital platform policy and regulation. **Quality:** The analysis is thorough, with robust quantitative findings supported by extensive data collection over a five-month period from a diverse participant base. **Clarity:** The paper is well-structured, with clear definitions and explanations of key concepts such as performative power and the methodology used for the study. **Significance:** The researchers present that search engines are active "engines" that can significantly shape and influence the information landscape, with important sociopolitical implications. Weaknesses: 1.
**Limited Sample Size and Diversity.** The paper could benefit from a larger and more diverse sample to ensure the results are representative of different user behaviors across various demographics. 2. **Lack of case studies**. While the paper provides clear quantitative results, additional qualitative analysis or case studies could help interpret why certain patterns emerge, offering deeper insights into user decision-making processes. 3. **Lack of discussion on interventions**. The researchers provide a compelling framework for measuring this performative power, but the paper does not extensively address potential countermeasures or interventions that could help mitigate the negative aspects of search engines' performative power. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author has already discussed the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. Let us explain why we see the focus on measurement (rather than qualitative modeling) as an important opportunity of our approach, rather than a weakness. **Qualitative insights.** We agree that qualitative insights are highly valuable. However, it is possible to measure and monitor the influence of AI systems on user behavior without the need to understand the complex mechanism behind it. This is an important takeaway of our work in the context of market regulation specifically, as regulators have been struggling with the complexity of digital markets, see e.g., [1]. Our work shows that the complexity to model digital markets should not prevent us from developing effective measurement and monitoring tools. **Countermeasures.** Related to the above comment, it is useful to treat measurement separately from the design of countermeasures. Remedies are often context specific and require the balancing of many competing objectives. This goes far beyond measurement. And there are many use cases where measurement is what you primarily care about. This separation is also common in antitrust investigations for example, where there is no need to suggest an effective remedy to run a case. But as a first step towards designing effective remedies, what we can offer is an investigative tool for researchers, regulators, and potentially also platform owners to measure the effectiveness of remedies, as well as setting transparent evaluation criteria. **Sample size.** We agree that a larger pool of participants would always strengthen the results, and a larger sample size could be useful specifically to reduce uncertainty on some of the query subset evaluations, where the bootstrap confidence sets are not as tight as for the aggregate results. 
We can not add additional data at this point, but we have included additional robustness checks in the supplementary PDF, where we evaluate Jackknife error bars with respect to the impact of excluding a random agent from the study. They can not speak to ecological validity, but they can provide an indication for how stable the effect measures are to the observed collection of agents. References: [1] Final report: Stigler committee on digital platforms. 2019 --- Rebuttal Comment 1.1: Comment: Thanks to the author for the reply, I have no other questions. Given that the paper is quite interesting, I maintain a relatively positive attitude towards the acceptance of the paper.
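The Jackknife robustness check mentioned in the rebuttal above (leave-one-participant-out error bars) can be sketched as follows; the per-participant click rates below are hypothetical.

```python
import statistics

# Hypothetical per-participant click rates on the first result; the actual
# study pools clicks from more than 80 participants.
rates = [0.41, 0.44, 0.43, 0.46, 0.40, 0.45, 0.42, 0.44]

def jackknife(values):
    """Leave-one-out means and the Jackknife standard error of the mean."""
    n = len(values)
    total = sum(values)
    loo = [(total - v) / (n - 1) for v in values]  # drop one participant each time
    mean_loo = sum(loo) / n
    variance = (n - 1) / n * sum((x - mean_loo) ** 2 for x in loo)
    return loo, variance ** 0.5

loo_means, se = jackknife(rates)
```

For the sample mean, the Jackknife standard error coincides with the classical `stdev / sqrt(n)`; the value of the procedure here is flagging individual participants whose removal shifts the estimate noticeably.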
Summary: The authors conduct a user study on the performative power of search engines, i.e., how much search engine providers can affect the information seen by the end user by tweaking the algorithmic ordering of results. In the specific context and assumptions formulated in this paper, this essentially amounts to measuring click-through rate differences across arrangements of search results. The authors ran a live RCT experiment by injecting different arrangements directly on the result page with the help of a browser extension of their design. Results show that there are notable differences in CTR and therefore commercial search engines have a large performative power. Strengths: The paper is very well written, with all methodology and assumptions being laid out clearly. Moreover the experiment design and scale seem sufficient for the task at hand according to my understanding. The results are made more solid and generalizable thanks to the use of different providers. Also this work is more complete than some previous user studies of this kind and the performative power angle is original and relevant. Weaknesses: I have two major concerns: - I am not sure this paper should appear in NeurIPS. It does not contain any neural system, nor does it indicate how to create one from the results. While these results could in general be useful to an ML practitioner working with search engines, this could be said of many other research outcomes from different fields, that typically don't appear in NeurIPS proceedings. - The paper does not relate enough to previous work on position bias and other types of biases in search results. While I believe new, updated user studies are always valuable, I think the authors should compare their results with those obtained by past studies. Technical Quality: 4 Clarity: 4 Questions for Authors: - Do your results contradict or enrich previous estimations of position bias? If so, what explains this?
- Do you think your method based on the extension is more reliable, or complementary to other methods used in past studies, e.g., eye tracking, result randomization, intervention harvesting, ...? - How do your results relate to other types of biases that have been identified, especially trust bias? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Limitations are correctly addressed and very clearly laid out. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. We are glad you enjoyed reading the paper. We hope to clarify your questions by providing additional context related to the comparison points you mentioned. We will incorporate these discussions in the manuscript for a future version. **Other types of biases.** As you pointed out, position bias [6], trust bias [4], presentation bias [3], and other behavioral aspects have been shown to impact the effect of ranking on user clicks. In our work we are not interested in pinpointing any mechanism specifically, rather we are interested in directly quantifying the effect of ranking updates on clicks within the context of a given platform (we call this the performativity gap). The different types of biases will naturally enter such a measurement. We argue that it is not necessary to understand all these complex behavioral mechanisms to measure performative power and monitor algorithmic systems. In that regard a browser extension is very powerful. It serves as a tool to conduct a controlled experiment and measure the effect of specific algorithmic changes *under natural interactions of users with the platform*. As a result it implicitly captures trust, display, and other behavioral aspects specific to the platform under investigation. **Methodological differences.** *Information harvesting*, and *randomization* are methods to explicitly or implicitly exploit interventions to the ranking algorithm to measure their effect on clicks. These methods all get at the fundamental problem of causal inference; the effect of ranking can not be measured from click data alone, due to unobserved feedback and biases. Our work is also a way to gather and exploit interventional data. In contrast to prior work, e.g., the work on data harvesting in [2] where the authors consider interventions to the Arxiv ranking algorithm, the Powermeter design performs interventions at the level of the platform-user interface. 
We emulate updates to the algorithm without actually touching the algorithm underneath. While the observed effect is equivalent to implementing the corresponding change to the algorithm, it allows us to study the Google search algorithm, without controlling it, which is an important methodological novelty. We are not aware of any such experimental design in the literature. Finally, you mentioned *eye-tracking* studies. They are designed to measure the allocation of visual attention, e.g., to support the design of click models, such as cascade models [5]. However, they can not replace click statistics, and are complementary to performing the interventions to study. **Discussion of prior quantitative insights.** While our study pursues a different goal than estimating parameters of a click model, some of our intermediate experimental insights can be compared to prior work. While we are not aware of comparable studies related to interventions a4-a6, intervention a1-a3 constitute pairwise swaps routinely used to infer propensities in click models. The most relevant comparison point is work by researchers out of Google from 2010 [3, Figure 1] who published click statistics on FairPairs (adjacent swaps [7]) on Google search. They report an average gain in click counts of 75% when ranked first instead of second under swap 1-2, we observe a gain of 82%. For swap 2-3 they report 48% gain, whereas we see a gain of 66%. Thus, while qualitatively similar the effect size in our results is larger. This might be attributed to changes to the website design since 2009, for example, including the introduction of featured snippets that lead to a larger spatial separation between results, and may increase performativity. While this is only a hypothesis, it demonstrates how effects can change over time even for a fixed platform. With Powermeter we offer a method to reassess such effects within the context relevant for a specific investigation. 
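The relative click-gain figures quoted above (e.g., an 82% gain under swap 1-2) follow from simple ratios of click counts; a sketch with invented counts:

```python
# FairPairs-style readout for an adjacent swap: how many more clicks the
# same result receives when shown in position 1 rather than position 2.
# The counts below are invented for illustration only.
clicks_when_first = 910
clicks_when_second = 500

def relative_gain(first: int, second: int) -> float:
    """Relative gain in click counts from being ranked first instead of second."""
    return (first - second) / second

gain = relative_gain(clicks_when_first, clicks_when_second)  # 0.82, i.e. an "82% gain"
```

Because the swap is randomized, the same result is observed in both positions across comparable query populations, so the ratio isolates the effect of position rather than relevance.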
Beyond swaps, we can provide insights into different counterfactuals that have not been studied before, such as the combination of downranking results and adding visually appealing elements, which is relevant in a regulatory context. **Why NeurIPS?** We build on a concept of performative power that has its origin in the NeurIPS community [1], and we offer a first account of its practical applicability which we see as an important contribution to the area of “Social and economic aspects of machine learning" referred to in the CFP. Focusing on neural systems in a broader context is not uncommon for NeurIPS, see work related to auditing and regulation. Similarly, citing again from the CFP: "Machine learning is a rapidly evolving field, and so we welcome interdisciplinary submissions that do not fit neatly into existing categories." While we understand that this is subjective, there is an easy case to be made for why our work is in scope, accessible and relevant for the broader NeurIPS audience. In any case, we believe that the AC/SAC can ultimately make an executive decision about this and we would appreciate if this could be treated independent from the assessment of the quality of our work. References [1] Hardt, Jagadeesan, Mendler-Dünner. "Performative power." NeurIPS. 2022. [2] Fang, Agarwal, Joachims. “Intervention Harvesting for Context-Dependent Examination-Bias Estimation”. SIGIR. 2019. [3] Yue, Patel, Roehrig. “Beyond Position Bias: Examining Result Attractiveness As a Source of Presentation Bias in Clickthrough Data.” WWW. 2010. [4] O’Brien, Keane. “Modeling result–list searching in the World Wide Web: The role of relevance topologies and trust bias.” CogSci. 2008. [5] Craswell, Zoeter, Taylor, Ramsey. “An Experimental Comparison of Click Position-bias Models”. WSDM. 2008. [6] Joachims et al. “Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search”. TOIS. 2007. [7] Radlinski Joachims. 
Minimally invasive randomization for collecting unbiased preferences from clickthrough logs. AAAI. 2006. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Regarding the relevance to NeurIPS, I don't really have a strong opinion on that. With your response, I think the AC has enough input to make a decision. Thank you for the discussion of quantitative measurements. This increase in PP/bias since 2010 is an interesting result and should appear somewhere in the paper for a final version. I understand your argument that modeling the behavioral mechanisms causing bias (and PP) is not necessary to monitor search engines, in the sense that the performativity gap is enough to state how much a provider can influence the exposure by reranking its results. The usefulness of modeling the users lies in choosing what measures to take, after observing this result. See many studies on fairness in ranking where the underlying user model matters a lot in the final solution (e.g., [1] vs [2] which use the same technique but a different model). In your example with the Google shopping case, Google could argue that, indeed, the way they place results matters, but that without knowing the user behavior, there is technically no way to prove they are not already doing what's best for users/competition. I can understand how this is out of the scope of the paper, but this should be clearly stated as a limitation of the method then. Finally, regarding the methodological differences with respect to prior work, I disagree with your answer. First, it is not always impossible to recover the causal effect of ranking from observational data alone. This is the goal of the entire field of causal discovery [3] (there are some arguably strong hypotheses required). Second, intervention (not "information") harvesting does precisely that: infer user biases from click data alone.
The work you cite uses interventions to the Arxiv ranking algorithm for *evaluation*; it is not part of their method. Intervention harvesting certainly has advantages and drawbacks compared to your proposal, but control of the ranking algorithm is not one of them, as neither the extension-based protocol nor the intervention harvesting approach require that. Overall, while the core study is solid, I think the paper still needs a bit more polishing to clarify its relation to prior work and which specific problem it allows to solve. [1] https://arxiv.org/pdf/2202.03237 [2] https://arxiv.org/pdf/2205.07647 [3] https://www.sciencedirect.com/science/article/abs/pii/S0888613X22001402 --- Reply to Comment 1.1.1: Title: Comments on related work Comment: We are happy to do more polishing to clarify the relation to prior work. But let us be more explicit that our goal is not to design fairer or more optimal rankings, nor to debias click data. Thus, the methods you mentioned have a different focus and they are not directly related to our work. Instead, we are interested in measuring the performative effect of a very specific ranking update; an update that was documented in the context of the Google Shopping case, and found to be anti-competitive. And we tackle the question of how to design experiments and gather data that allow us to obtain a plausible lower bound on the effect of such a specific update. What makes this challenging is that we are not in the position of the platform, but someone outside aiming to monitor a system. Intervention harvesting as we understand it is a method to build on implicit interventions and extract information from click logs of multiple historic rankers to debias click models, e.g., for better propensity estimation. But debiasing click data is not what we are looking for, as we are not interested in learning a ranking model from data. But we are interested in measuring outcomes for a specific intervention where we do not have logs available. 
Thus, our focus is to emulate such an intervention and gather click data to estimate its effect. This is unrelated to methods that build on top of available logs to design (fair) ranking algorithms, such as [1,2]. In general, an important distinction to most existing literature in ranking is that we take the perspective of a regulator, not the platform controlling and optimizing their own algorithm. This leads to different problem statements and challenges. For example, we aim to establish a plausible use case specific lower bound for the effect of a specific ranking update performed by a particular platform. This requires us to gather strong empirical evidence from data collected on that platform rather than relying on modeling assumptions of how users might respond to such ranking updates, because such assumptions are necessarily inexact and hard to justify as quantitative evidence, even though at the same time they can be useful for learning, or for guiding the design of potential remedies. We hope this better answers your question. If you see a relevant connection we missed, after clarifying the specific problem we aim to solve, please let us know, and we are happy to discuss it.
Rebuttal 1: Rebuttal: We thank all the reviewers for the feedback. The attached PDF contains an additional robustness evaluation to support the author-reviewer discussion. We address questions below, responding individually to all reviewers. Pdf: /pdf/8b1a1f50cf99873bcc04b527349d96274dd00a77.pdf
NeurIPS_2024_submissions_huggingface
2024
Autobidder's Dilemma: Why More Sophisticated Autobidders Lead to Worse Auction Efficiency
Accept (poster)
Summary: The paper provides a fine-grained price of anarchy analysis for autobidders with non-uniform bid-scaling strategies in first price auctions, showing that first price auctions are more efficient when autobidders are less powerful and are more efficient with more balanced slices. Strengths: 1. Concrete theoretical analysis for PoA of first price auctions with non-uniform autobidders. 2. It is interesting to find that the definition of balancedness of auctions, which can be regarded as a kind of power of auctioneers, is highly related to the efficiency of auctions w.r.t. autobidders. Weaknesses: 1. The main conclusions of the paper are indeed unsurprising, although it is good to know with a theoretical proof. This does weaken the contribution of this paper. 2. The framework of the slice-based model is kind of restrictive. It is hard to imagine that bidders in real-world auctions bid uniformly in each slice partitioned by the auctioneer. Technical Quality: 4 Clarity: 3 Questions for Authors: NA Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and detailed comments. **Slice-based model**: first we'd like to note that our model is consistent with prior work on non-uniform bid scaling, e.g., "Non-uniform Bid-scaling and Equilibria for Different Auctions: An Empirical Study" by researchers at Google. In light of this, we believe this model already captures the essence of non-uniform bidding in reality. For example, our messages might inform a platform's decision of merging two channels or (not) splitting a channel into several, whenever practically feasible. On the other hand, another important consideration here is the tradeoff between the practicality of the model and the cleanliness of the messages. In principle, one could certainly consider more general / sophisticated models and try to derive similar results there (e.g., we sketch one such possibility in our response to Reviewer TSHw), but doing so would probably make the messages much more obscure. To this end, we'd like to argue that our approach achieves a reasonable balance on the spectrum. **Conclusions / messages of this paper**: We position our paper as a paper providing a theoretical explanation of the empirical findings from "Non-uniform Bid-scaling and Equilibria for Different Auctions: An Empirical Study". The conclusion might seem unsurprising provided the previously known empirical results but we’d like to argue that it is indeed technically non-trivial to develop such a theoretical explanation. Our result is also more interpretable and reliable (in the sense that there is a proof) than empirical ones regarding the same phenomenon. --- Rebuttal Comment 1.1: Comment: Thanks for your response.
Summary: The paper considers the efficiency of autobidders in first price auctions that use different shading factors on different slices of the items. It claims to show that the improved efficiency of the multi-slice system yields worse social welfare in equilibrium. Strengths: The topic of auto-bidding and efficiency of the results is extremely interesting. Weaknesses: The paper fails to clearly identify the model studied. Based on the rebuttal and the other reviewers' comments, I now understand the model. Like reviewer TSHw, I am not convinced at the reasonableness of this special case but will increase my score a bit. Technical Quality: 2 Clarity: 1 Questions for Authors: Please help me identify the model considered. What is the assumption about other bidders? Do they also use slices? And are they using the same slices? And what is the model of the value of arriving items? Independent Bayesian? Worst case? Confidence: 2 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: No negative impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. In words: all bidders are symmetric and use the same slices, which are given exogenously. Other than slices, we use the standard multi-item model, where essentially all items arrive at once at the very beginning, with the values given exogenously and publicly known (whether the values are public does not affect the behavior of the bidders, since they are responding to the *bids* of other bidders). Each bidder picks a bid multiplier for each slice, so each bidder's strategy is a vector of bid multipliers. We then consider equilibrium strategy profiles in this perfect-information environment, where each bidder's strategy must maximize their own total value subject to the ROI constraint, given all other bidders' strategies. Our measure of efficiency is the price of anarchy, i.e., the welfare under the worst equilibrium, divided by the first-best optimal welfare. We hope the above informal explanation can help clarify the reviewer's questions. --- Rebuttal Comment 1.1: Title: thank you for your clarification Comment: Thank you for your clarification. Unfortunately, I agree with reviewer TSHw that the suggestion that all bidders use the same slices is very unnatural (even if other previous papers used it). Please at least clearly state this and explain why your work is limited to this case. --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement! Please refer to our response to reviewer TSHw regarding the use of the same slice partition.
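The informal model described in this rebuttal can be made concrete with a short script. The sketch below is illustrative only: the value matrix, slice assignment, and multiplier profile are hypothetical and hand-picked, not an equilibrium computed by the paper's analysis. It evaluates a per-slice uniform bid-scaling profile in the multi-item first-price model, checks the ROI constraint, and compares realized welfare to the first-best optimum.

```python
import numpy as np

def evaluate_profile(values, slices, multipliers):
    """Evaluate a per-slice uniform bid-scaling profile in a multi-item
    first-price auction with ROI-constrained bidders.

    values:      (n_bidders, n_items) public item values.
    slices:      length-n_items array mapping each item to a slice index.
    multipliers: (n_bidders, n_slices) bid multipliers, one per slice.
    Returns (welfare, roi_feasible): realized welfare, and whether each
    bidder's total payment stays within the total value of items won.
    """
    n_bidders, n_items = values.shape
    bids = values * multipliers[:, slices]      # bid = multiplier * value
    winners = bids.argmax(axis=0)               # first price: highest bid wins
    welfare = values[winners, np.arange(n_items)].sum()
    payments = np.zeros(n_bidders)
    won_value = np.zeros(n_bidders)
    for item, w in enumerate(winners):
        payments[w] += bids[w, item]            # winner pays own bid
        won_value[w] += values[w, item]
    return welfare, bool(np.all(payments <= won_value + 1e-9))

# Hypothetical 2-bidder, 2-item instance; each item is its own slice,
# i.e. fully non-uniform bid-scaling.
values = np.array([[10.0, 4.0],
                   [ 1.0, 5.0]])
slices = np.array([0, 1])
opt_welfare = values.max(axis=0).sum()          # first-best welfare: 15

# Bidder 0 wins item 0 nearly for free, then uses the resulting ROI slack
# to outbid the rightful winner of item 1.
profile = np.array([[0.11, 1.3],
                    [1.00, 1.0]])
welfare, feasible = evaluate_profile(values, slices, profile)
```

Here realized welfare is 14 against a first-best of 15, with both ROI constraints satisfied: the cheap win on one slice funds overbidding on the other, which is exactly the inefficiency mechanism the PoA bounds quantify.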
Summary: This paper investigates why non-uniform bidding in first-price auctions causes inefficiency while uniform bidding does not. The authors propose a new model that partitions auctions into slices. Bidders are allowed to bid differently across different slices but need to bid uniformly in each slice. They characterize the price of anarchy of the game using the unbalancedness defined in the paper. The characterization shows that even just a partition of 2 slices will lead to inefficiency, and the balance across slices will improve the efficiency of the market. Strengths: The properties of the unbalancedness function and its relations with the PoA is interesting within the studied model. The fine-grained analysis between settings with the greatest bidding power and the least provides a reasonable explanation for the inefficiency caused by non-uniform bidding in first-price auctions. Weaknesses: 1. The model assumption that all bidders have the same auction partitions is not compatible with the empirical results it claims to explain, as bidders may have different slices of auctions. While multi-channel bidding seems a reasonable application of the model, different users may participate in different sets of channels. Moreover, there is no central authority that can adjust the balancedness across channels, which limits practical applications of the insight. 2. The definition of unbalancedness lacks intuition. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can the results extend to cases where bidders have different slice partitions of auctions? 2. Can you provide more intuitions on the definitions of unbalancedness, or how it is derived? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and detailed comments. **Bidders have different slice partitions**: first we'd like to note that our model is consistent with prior work on non-uniform bid scaling, and in particular, our results can be viewed as a theoretical explanation for the recent empirical study "Non-uniform Bid-scaling and Equilibria for Different Auctions: An Empirical Study" from Google Research -- they also consider environments where all bidders share the same partitions. In light of this, we believe this model already captures the essence of non-uniform bidding in reality. On the other hand, we do agree that a more general model with different partitions would be even better. In such a model, one can probably extend our results in the following way: - Refine the definition of (un)balancedness. In particular, the reasonable definition of balancedness of a slice should exclude bidders who don't participate in auctions of this slice. In addition, the definition of unbalancedness should take into consideration the "adjacency" between bidders, where two bidders are adjacent if they are both active in some slice. The resulting definition would probably be based on a "balancedness graph", which degenerates to our definition when the graph is complete. - Then, when establishing the bounds, one would follow the same high-level plan of forcing a worst-case equilibrium, but now under more constraints. More specifically, one would probably need to match slices that "generate" surplus to those that "consume" surplus, subject to the constraint that the surplus of each bidder flowing out of a slice cannot exceed that bidder's market share. One would need to find the worst-case matching that forces the worst welfare. One important consideration here is the tradeoff between the complexity of the model and the cleanliness / readability of the results. 
The above plan for extending our results would probably result in a far more complicated bound that depends on more parameters of the market. Given this, we'd like to argue that our approach achieves a reasonable balance on the spectrum. **Intuitions on unbalancedness**: the definition "naturally" arises in the process of trying to force a worst-case equilibrium. The idea is that low welfare in auctions with autobidders is typically because in some auctions, competition is extremely low, so the winner gets surplus almost for free. These bidders who have high surplus then spend the surplus in other auctions to compete with the "rightful winner", which ultimately hurts the welfare. One observation here is that when the latter phenomenon happens, the surplus "burnt" to beat the rightful winner consists of two parts: the part that one overpays for auctions where they are the rightful winner, and the part one overpays to beat the rightful winner. While the latter amount is always the same as the loss of welfare, the former does not contribute to this loss. So, to cause loss more efficiently, one would like to minimize the former amount. Fixing the total surplus burnt, this is done by burning more surplus on slices that are less balanced -- hence the definition of balancedness. Now, another observation is that the two phenomena can hardly happen simultaneously in one slice, because the first requires a low bid multiplier, and the second requires a high one. So, to force the worst-case welfare, intuitively one would greedily allocate the most unbalanced slices to the second phenomenon and ensure that the rest of the slices generate just enough surplus for burning. This corresponds to an intricate optimization problem over a set of functions, which turns out to be controlled by the balancedness of each slice roughly in the way that the unbalancedness is defined. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. 
I have no further questions about the intuitions on unbalancedness. However, as for the first response, the reason that the previous empirical paper considered the same setting is not convincing to me. A practical scenario that fits in the setting should be better. Also, even if extending to the setting where bidders have different slices can lead to complicated bounds, some robust analysis would help strengthen the results and setting. --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement! See below for our further response. We note that slices can arise not only exogenously because of the existence of multiple channels or platforms, but also endogenously from the implementation of the autobidding algorithm. Since optimal bidding is computationally costly, one may view slice-based uniform bid-scaling as a fine-grained approximation to optimal bidding. That is, the (approximately optimal) bidding algorithm has an internal partition of all auctions into slices (e.g., based on features of the user, such as language, OS, region, etc.), and performs uniform bid-scaling on each slice. The finer this partition is, the closer the algorithm is to optimal bidding. Practically, autobidding algorithms are usually maintained by online platforms. For bidders using the same bidding product on the same platform, the same bidding algorithm is often used and it is natural to assume bidders have the same slices in such scenarios. Under this interpretation, our results suggest that "partially" optimal autobidding algorithms lead to worse auction efficiency. As for robust analysis under relaxed assumptions, as discussed in the original response, a fully general fine-grained bound would be quite cumbersome. Also, if we incorporate robustness by considering the worst case over arbitrary partitions, the bound would degrade to $1/2$. 
However, one can establish the following "in-between" statement: the PoA cannot be better than the one given in our Theorem 1 for any market A that can be divided into a set of submarkets B1 ... Bk, where not all bidders participate in each submarket, but in each submarket all participating bidders share the same slice partitions and participate in all slices. Here, balancedness is defined only on these bidders and our bound applies to each submarket individually. We can develop a PoA bound by establishing a "dominance" relationship between the market A and a weighted aggregation of submarkets B1 ... Bk, in which each slice in market A is no more balanced than the corresponding slice in the weighted sum. The resulting bound would be the sum of bounds for each Bi weighted in the same way. This enables a bound for markets in which all bidders share the same slice partitions but may not participate in all slices (corresponding to multi-channel scenarios). We believe it's also possible to derive "in-between" bounds for cases where bidders have different slice partitions by refining the above approach.
Summary: This paper studies the efficiency, measured by the price of anarchy, of a multi-round single-item first-price auction involving multiple bidders. The auctions are segmented into multiple slices, within each of which all bidders utilize uniform bidding strategies. While the bidding parameters of each bidder must remain consistent within each slice, they can vary across different slices. The authors define the unbalancedness of the multi-round auction based on all the valuations of the bidders. Based on such unbalancedness, they show that the price of anarchy of the multi-round auction is $1 - \frac{1}{2}t$ for $t \in [2/5, 1]$, where $t$ is the upper bound of the unbalancedness. As the unbalancedness increases with more subdivisions of the slices, the authors conclude that more sophisticated auto-bidders lead to less efficient auction outcomes. Strengths: 1. The fine-grained slice-based model is a powerful tool for analyzing the equilibrium outcomes of more sophisticated autobidders compared to simple uniform-bidding strategies. 2. The theoretical results are insightful, characterizing the relationship between the price of anarchy and the sophistication level of autobidders, using unbalancedness as the key intermediary. Weaknesses: There remains a gap between realistic repeated first-price auctions and the fine-grained slice-based modeling. In reality, budget-constrained autobidders dynamically adapt their bidding strategies (such as through pacing) based on auction outcomes. However, in the paper's model, the authors restrict bidding strategies to static uniform bidding within each slice. Despite this, I still find the theoretical results valuable. Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and encouraging comments. **Practicality of (per-slice) uniform bidding**: while autobidders in reality are presumably more sophisticated, there is evidence that the advertising industry finds per-slice uniform bidding a reasonable approximation. In particular, the recent paper "Non-uniform Bid-scaling and Equilibria for Different Auctions: An Empirical Study" from Google Research considers a form of non-uniform bidding strategies essentially identical to per-slice uniform bidding. In their paper, they "define partitions to the queries" where "for each partition d, a non-uniform bid-scaling strategy chooses one bid multiplier", and study "how the simulation results vary with the granularity of the partition". Our results can be viewed as a theoretical explanation of their empirical findings. As such, we'd argue that conceptually, our results have similar practical implications.
NeurIPS_2024_submissions_huggingface
2024
Attack-Aware Noise Calibration for Differential Privacy
Accept (poster)
Summary: The paper proposes calibrating the noise in privacy mechanisms directly to MIA success metrics like advantage or TPR at low FPR instead of calibrating to a given $(\epsilon, \delta)$-bound and converting that to bounds on MIA success. The paper develops algorithms for direct calibration for various MIA success metrics, most importantly TPR at low FPR. The algorithms use a discrete privacy loss random variable that is output by an existing privacy accountant. The paper compares the calibration methods by comparing the noise variances, and finds that calibrating directly significantly reduces the amount of noise required. The paper also does a small experiment showing that the reduction in noise translates to improved accuracy for machine learning. Strengths: The paper is mostly easy to read and understand, given the amount of theory. The case for calibrating directly to membership inference success metrics is made very well, and the algorithm for the calibration seems very practical. Weaknesses: Some parts of the proof of Theorem 3.4, the main theorem of the work, are not fully clear, and could be missing details needed to make them fully correct. I've collected these points in the Questions section. I think the issues can be fixed, and the theorem is likely true, but these should be addressed before the paper is accepted. Figure 5 shows that the $\delta'$ corresponding to the attack TPR/FPR calibration is very large compared to the standard $\delta$. The standard $\delta$ is chosen to be small, since the mechanism that randomly outputs one individual's data is $(0, 1/n)$-DP with $n$ datapoints. I think the paper should discuss how this mechanism behaves with the TPR/FPR calibration. My quick calculation suggests that with the optimal attack for this mechanism, $FNR = (1 - 1/n) \cdot (1 - FPR)$, which would give a large FNR at small FPR, suggesting that the mechanism is very private, even though it obviously isn't.
Requiring the FNR bound to be large to fix this could reduce the apparent advantage that the paper's results suggest, since the paper always considers smaller FNR bounds. Minor points: - Line 97: the domain and range of $\epsilon_\omega$ are the wrong way around. - The neighbourhood relation is inconsistent: Section 2.1 describes substitute neighbourhood, while Section 2.2 describes add/remove neighbourhood. - TPR does not necessarily need to be high for a MIA to be a relevant threat. For example, if TPR = 1% at FPR = 0.01%, the attack can very accurately identify the presence of a small number of individuals, which violates the privacy of those individuals. - The results in Figure 1 could have uncertainty estimates. - Restating the theorems before their proofs and having the proofs in the same order as the main text would make the proofs much easier to read. - Line 649: $\phi$ outputs the probability that the sample came from $Q$. Technical Quality: 4 Clarity: 3 Questions for Authors: Issues with Theorem 3.4: - In the reasoning around lines 673-675, why is it not possible that $\alpha$ just happens to be one of the $k+1$ possible values of Eq. (42)? On line 659, it is said that this reasoning should work for all $\alpha$. This case is considered explicitly later, but only after this line of reasoning is concluded. - Doesn't the conclusion that there can be multiple values of $\gamma$ and $\tau$ that satisfy the constraint on line 695 imply that the choice of the optimal test $\phi_{\tau,\gamma}$ is not unique, though the FPR and FNR are unique? - Why is the test that is found on lines 690-698 optimal? - It is not clear whether Theorem 3.4 claims "for all $\tau$ and $\gamma$" or "for some $\tau$ and $\gamma$". Minor points: - I don't think Algorithm 1, line 2 works with $\alpha = 1$, the strict inequality is never satisfied. Though this should be easy to fix. - Line 164: should this have $\alpha^\star$ instead of $\alpha$?
- Are $P$ and $Q$ the correct way around in Eq. (1)? They seem to be the other way around in Gopi et al. [2021]. - Should the support of $Y$ in line 666 be $y_1, \dotsc, y_{l+1}$ instead of $x_1, \dotsc, x_{l+1}$? - Are the image classification results in Figure 1 comparable between standard and attack risk calibrations, since you had to use RDP due to data augmentations with the standard accountant? - What type of subsampling (Poisson, sampling with/without replacement) and which neighbourhood relation did you use in the DP-SGD experiments? The fixed batch sizes suggest sampling without replacement, in which case the issues that Lebeda et al. (2024) have recently raised with privacy accounting for sampling without replacement might affect your results. Reference: - Lebeda et al. "Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms under Composition" (2024) arXiv:2405.20769 Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The paper discusses limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
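The randomized tests $\phi_{\tau,\gamma}$ discussed in the review and the rebuttal below it have a standard Neyman-Pearson-style construction on a discrete support. The sketch below is a generic illustration of that construction, not the paper's Algorithm 1 or its accountant, and the support and probabilities in the example are hypothetical: given the discrete values the log-likelihood-ratio statistic can take under $P$, it finds a threshold $\tau$ and randomization probability $\gamma$ so that the test attains a target FPR $\alpha$ exactly.

```python
import numpy as np

def calibrate_test(x, p, alpha):
    """Find (tau, gamma) such that the randomized test
        phi(o) = 1 if stat(o) > tau, = gamma if stat(o) == tau, else 0
    has FPR exactly alpha under P. Here x holds the discrete values the
    log-likelihood-ratio statistic can take (sorted ascending) and p the
    corresponding probabilities under P; alpha is assumed in (0, 1]."""
    tail = np.concatenate([np.cumsum(p[::-1])[::-1][1:], [0.0]])  # P(stat > x[t])
    t = int(np.argmax(tail <= alpha))   # smallest threshold with strict tail <= alpha
    gamma = (alpha - tail[t]) / p[t] if p[t] > 0 else 0.0
    return x[t], gamma

def fpr(x, p, tau, gamma):
    """FPR of the randomized threshold test under P."""
    return p[x > tau].sum() + gamma * p[x == tau].sum()

# Hypothetical discrete support and probabilities (not from any accountant).
x = np.array([-1.0, 0.0, 1.0, 2.0])
p = np.array([0.4, 0.3, 0.2, 0.1])
tau, gamma = calibrate_test(x, p, alpha=0.25)   # tau = 1.0, gamma = 0.75
```

When $\alpha$ lands exactly on a value of the survival function, $\gamma$ comes out as 0, matching the rebuttal's point that the same optimal test then admits two parameterizations (strict threshold at $x_t$, or $\gamma = 1$ at $x_{t+1}$).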
Rebuttal 1: Rebuttal: We would like to thank you for such a detailed reading of our work, and especially the proof! We appreciate it. ## Theorem 3.4 > **Q1.** In the reasoning around lines 673-675, why is it not possible that $\alpha$ just happens to be one of the possible values of Eq. (42)? On line 659, it is said that this reasoning should work for all $\alpha$. This case is considered explicitly later, but only after this line of reasoning is concluded. We agree that this line of reasoning appeared too early in the proof. We will move it to the part of the proof where we only consider $\alpha$ that are not one of the possible values of the reverse CDF of $X$ in Eq. (42). > **Q2.** Doesn't the conclusion that there can be multiple ways values of $\gamma$ and $\tau$ that satisfy the constraint on line 695 imply that the choice of the optimal test is not unique, though the FPR and FNR are unique? Not necessarily! What we are saying in line 695 is that the optimal test, which is unique, can have two different _parameterizations:_ $(\tau_1 = x_t, \gamma_1 = 0)$ and $(\tau_2 = x_{t+1}, \gamma_2 = 1)$. That these two parameterizations yield the same test follows by observing that $\phi^*_{\tau_1, \gamma_1}(o)$ outputs 1 if $\log \frac{Q(o)}{P(o)} > x_t$ and 0 otherwise, while $\phi^*_{\tau_2, \gamma_2}(o)$ outputs 1 if $\log \frac{Q(o)}{P(o)} \geq x_{t+1}$ and 0 otherwise. Since the test statistic $\log \frac{Q(o)}{P(o)}$ lives on a discrete grid and cannot take on values between $x_t$ and $x_{t+1}$, it follows that $\phi^*_{\tau_1, \gamma_1}$ and $ \phi^*_{\tau_2, \gamma_2}$ will classify each observation $o$ identically. > **Q3.** Why is the test that is found on lines 690-698 optimal? Our response to Question 2 should address this concern as well. Lines 690-698 do NOT identify two distinct tests and choose one as the optimal without proof. Instead, lines 690-698 identify two different parameterizations for the unique and optimal test. 
One is free to choose either parameterization (or any other!) for implementation. In our implementation, we choose $(\tau_1 = x_t, \gamma_1 = 0)$. We will make this distinction between different parameterizations and different tests clear in the final version of the proof (see next). > **Q4.** It is not clear if Theorem 3.4 claims "for all" or "for some" $\gamma$ and $\tau$. Theorem 3.4 indeed holds **for all** $\tau \in \mathbb{R}$ and $\gamma \in [0,1]$, and we will make this clear by specifying the “for all” quantifier. To clarify this in the proof, in each of the three studied regions of $\alpha$, we will specify (i) the optimal test and (ii) all possible parameterizations $(\tau, \gamma)$ for the optimal test, as described next. In the case when $\alpha = 1$, we have only one possible parameterization for the optimal test: $(\tau = -\infty, \gamma = 0)$. When $\alpha$ is one of the $k+1$ values in Eq. 42, we have the two parameterizations in Question 2 along with a continuum $\{ (\tau, \gamma) \mid \tau \in (x_t, x_{t+1}), \gamma \in [0,1] \}$ if $t<k$, and $\{ (\tau, \gamma) \mid \tau \in (x_k, \infty), \gamma \in [0,1]\}$ if $t = k$. Finally, when $\alpha$ does not fall into any of the remaining categories, we have one unique parameterization $(\tau = x_t, \gamma = \text{Eq (49)})$ for all $t$. This covers all $\tau \in \mathbb{R}, \gamma \in [0,1]$, and shows that the claims of Theorem 3.4 hold "for all" parameter values. ## Catastrophic Failures > Figure 5 shows that the $\delta'$ corresponding to the attack TPR/FPR calibration is very large compared to the standard $\delta$. The standard $\delta$ is chosen to be small, as the mechanism that randomly outputs one individual's data is $(0, 1/n)$-DP with $n$ datapoints. I think the paper should discuss how this mechanism behaves with the TPR/FPR calibration.
This is a great point that we thought about a lot, although a detailed discussion didn't make the final cut in this submission (it will be added to the revised version). The reviewer correctly notes that the standard convention of computing $\varepsilon$ for $\delta \ll \frac{1}{n}$ is based on the _existence_ of an $(\varepsilon, \delta)$-DP mechanism allowing catastrophic failures with probability $\delta$. Given our calibration in Figure 5 allows for $\delta > \frac{1}{n}$, there exists a mechanism achieving our target TPR/FPR with a high probability of catastrophic failure, and the reviewer provided an example of such mechanism. However, we introduce our calibration approach in the context of choosing a noise scale $\omega$ for a _given parameterized mechanism_ $M_\omega$. Most if not all practical mechanisms, including those we investigated (DP-SGD), do not admit catastrophic failures (see, e.g., this [post](https://differentialprivacy.org/flavoursofdelta/)). We argue that the _existence_ of a mechanism achieving our TPR/FPR calibration with a high probability of catastrophic failure is irrelevant when calibrating the noise parameter $\omega$ of a _specific mechanism_ that we know does not admit failures. Consequently, our methods should only be used for mechanisms without catastrophic failures. We will clarify this in the final version by adding the following paragraph in the Concluding Remarks: “Our calibration algorithms are supposed to be used with mechanisms that do not admit catastrophic failures, i.e., those that ensure attack TPR of zero when FPR is also zero, or, equivalently, mechanisms whose trade-off curve is such that $T(M(S), M(S’))(0) = 1$ and $T(M(S), M(S’))(1) = 0$ for all $S \simeq S’$. Although practical mechanisms such as DP-SGD have this property, one can in principle construct mechanisms which do admit catastrophic failures (e.g., a “name-and-shame” mechanism which outputs one of the records in the clear [see, e.g., [Aerni et al. 
2024](https://arxiv.org/abs/2404.17399)]). In such pathological cases, standard calibration should be used to ensure $\delta \ll \frac{1}{n}$.” --- We respond to the minor points in the next comment. --- Rebuttal 2: Title: Additional response regarding the minor points Comment: > The neighbourhood relation is inconsistent: Section 2.1 describes substitute neighbourhood, while Section 2.2 describes add/remove neighbourhood. Thank you for pointing this out! Our analyses are independent of the choice of the neighborhood relation, and our experiments are done with the add-remove relation. We will remove the substitution relation in Section 2.1 paragraph “Setup and notation”, mention that we use the add-remove relation in our experiments in Section 2.1, paragraph “DP-SGD”. Moreover, in Section 2.2 we will emphasize that our results are not tied to the add/remove relation. > TPR does not necessarily need to be high for a MIA to be a relevant threat. For example, if TPR = 1% at FPR = 0.01%, the attack can very accurately identify the presence of a small number of individuals, which violates the privacy of those individuals. This is a great point! Ultimately, defining acceptable thresholds on TPR/FPR, as we mention in the Concluding Remarks, is an open problem. We note that acceptable thresholds can be made in a context and application specific manner. For example, from the computer security literature (see, e.g., [Wang, 2018](https://arxiv.org/abs/1802.05409)), we know that if the prior probability of membership is lower than the attack’s FPR, then the majority of the attack's positive predictions will be incorrect, even if TPR is 100% (Wang 2018, p.2; formally, prior-aware positive predictive value, ppv = P(member | attack predicts 'member') can be low even if TPR = P(attack predicts 'member' | member) is high, as it depends on the base rate P(member) and is more significantly influenced by FPR than TPR). Our framework fits nicely with this literature. 
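The base-rate argument above follows directly from Bayes' rule; a minimal sketch with hypothetical numbers (not figures from the paper or from Wang, 2018):

```python
def ppv(prior, tpr, fpr):
    """Prior-aware positive predictive value of a membership-inference
    attack: P(member | attack predicts 'member'), via Bayes' rule."""
    return (prior * tpr) / (prior * tpr + (1 - prior) * fpr)

# With a membership prior below the attack's FPR, even a perfect-TPR
# attack is wrong on most of its positive predictions:
low_prior_ppv = ppv(prior=1e-4, tpr=1.0, fpr=1e-3)   # ~0.09
```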
If in a particular context we have a guess for the adversary’s prior probability, this can be used to inform acceptable TPR/FPR thresholds. We will expand the short discussion in the Concluding Remarks on this limitation as follows: “We leave open the question on how to choose the target FPR $\alpha^\star$, e.g., whether standard significance levels in sciences such as $\alpha^\star = 0.05$ are compatible with data protection regulation, **as well as what are the acceptable attack TPR levels for a given FPR**. Future work is needed to develop concrete guidance on the choice of target FPR **and TPR** informed by legal and practical constraints.” > The results in Figure 1 could have uncertainty estimates. Thank you for the suggestion. In the new experimental demonstration on private histogram release, described in the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz), we added uncertainty regions for utility measurements done over 100 mechanism runs with different random seeds. > Restating the theorems before their proofs and having the proofs in the same order as the main text would make the proofs much easier to read. We will implement this in the final version! --- > Are $P$ and $Q$ the correct way around in Eq. (1)? They seem to be the other way around in Gopi et al. [2021]. $P$ and $Q$ are the correct way in Gopi et al. We will fix this in the final version. > Are the image classification results in Figure 1 comparable between standard and attack risk calibrations, since you had to use RDP due to data augmentations with the standard accountant? Thank you for raising this point. We indeed agree that this comparison is somewhat unfair. We now reanalyzed the training pipeline from Tramer & Boneh, 2021 using Doroshenko et al. accounting so that the methods are exactly comparable. We provide the updated figure in the PDF attached in the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz).
Indeed, the TPR values obtained with standard calibration and tight accounting are somewhat lower than with RDP accounting, especially in the small $\alpha$ regime, but the general trend remains: attack-aware calibration significantly increases task accuracy at the same risk level. > What type of subsampling and which neighbourhood relation did you use in the DP-SGD experiments? We use the add-remove relationship in our experiments, as it is standard with modern accountants. We use Poisson sampling. The batch size in the experimental details is supposed to mean the _expected batch size._ To avoid confusion, we will clearly specify the neighborhood relation and write in the format “subsampling rate $p = 0.003801$ (corresponding to the expected batch size of 256)”. > Line 97: the domain and range of epsilon_omega are the wrong way around. Line 164: should this have $\alpha^\star$ instead of $\alpha$? Should the support of $Y$ in line 666 be $\\{y_1, …, y_{l+1}\\}$ instead of $\\{x_1, …, x_{l+1}\\}$? Line 649: $\phi$ outputs the probability that the sample came from $Q$. I don't think Algorithm 1, line 2 works with $\alpha = 1$, the strict inequality is never satisfied. Though this should be easy to fix. That’s right, thank you so much for spotting these! We will fix them. --- Rebuttal 3: Comment: Thank you for the comprehensive response. You addressed most of my concerns, most importantly those regarding the proof of Theorem 3.4. Regarding my point about catastrophic failures, it seems that I didn't make my concern quite clear in the initial review. You are correct in saying that many mechanism do not actually allow the catastrophic privacy failure, so it may not be necessary to set a conservative TPR@FPR bound to account for them. However, the same reasoning can be used to argue that we do not need $\delta \ll \frac{1}{n}$. 
As a result, the utility increase of TPR@FPR calibration over $(\epsilon, \delta)$-calibration seems to come from using this argument for one definition, but not the other. Reading the paper again, I noticed that you have alluded to this in the caption of Figure 5, but this point is important enough to discuss in the main text. Currently, the Abstract and Introduction give the impression that the utility benefit is due to some intrinsic difference between $(\epsilon, \delta$) and TPR@FPR bounds, and not simply a consequence of what values are typically considered acceptable for the bounds. On the other hand, I recognise that calibrating to attack risk is useful, and your method for this calibration is much easier to use than first finding the optimal $(\epsilon, \delta)$-bound corresponding to the attack risk, and then calibrating the mechanism. As a result, the paper makes a valuable contribution, and I'm increasing my score accordingly. --- Rebuttal Comment 3.1: Comment: > You are correct in saying that many mechanism do not actually allow the catastrophic privacy failure, so it may not be necessary to set a conservative TPR@FPR bound to account for them … However, the same reasoning can be used to argue that we do not need $\delta \ll \frac{1}{n}$ … Currently, the Abstract and Introduction give the impression that the utility benefit is due to some intrinsic difference between $(\varepsilon, \delta)$ and TPR@FPR bounds, and not simply a consequence of what values are typically considered acceptable for the bounds. Thank you for raising this point. Just to clarify, as long as the TPR@FPR bounds are computed using _the privacy curve $\varepsilon(\delta)$ (using the approaches in Appendix A) / PLRVs $(X, Y)$ (using Algorithm 1)_ and not based on a single $(\varepsilon, \delta)$-point (using Eq. 5), then indeed there is no intrinsic difference. There **is**, however, an intrinsic difference in calibrating to a target TPR@FPR using Algorithm 1 vs. 
calibrating to a target TPR@FPR with _a fixed delta_ (standard calibration), i.e., the difference between the two curves in Figure 4. We appreciate your time and willingness to revise your review, and we are happy to answer any other questions you may have. Your input has been very valuable!
Summary: This paper proposes new methods for calibrating noise in differentially private learning to achieve a given level of operational privacy risk, specifically focusing on the advantage and FNR/FPR of membership inference attacks. The methods reduce the noise scale compared to the standard two-step procedure (first converting to a privacy budget, then converting to a privacy assessment). Strengths: 1. The paper addresses an important practical problem in the privacy regime with significant theoretical contributions. 2. It is well-organized and easy to follow. Weaknesses: 1. I suggest the authors include more downstream tasks and utility metrics to further demonstrate the effectiveness of the theoretical results. 2. High-level intuitions for different variables in Theorem 3.4, such as $\alpha(\tau, \gamma)$ and $\beta(\tau, \gamma)$, would be beneficial. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Will choosing a discrete-valued dominating pair of the mechanism be a sub-optimal choice for advantage calibration? 2. How is the discretized PLRV typically obtained for a general mechanism? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and suggestions! **Weaknesses** > I suggest the authors include more downstream tasks and utility metrics to further demonstrate the effectiveness of the theoretical results. We have added a new experiment on a common use case of DP – private histogram release – which shows that attack-aware noise calibration also enables to significantly reduce the error when privately releasing histograms. See the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz) and the attached PDF for details. > High-level intuitions for different variables in Theorem 3.4, such as $\alpha(\tau, \gamma)$ and $\beta(\tau, \gamma)$, would be beneficial. We will add the following lines after Theorem 3.4: “The proof for Eq. (12) works by using the Neyman-Pearson lemma to explicitly construct the optimal most powerful attack at level $\alpha$. We parameterize the optimal attack in terms of intermediate parameters $\tau$ and $\gamma$, where $\tau$ is the threshold for the Neyman-Pearson test statistic and $\gamma$ is the probability of guessing in case the test statistic exactly equals the threshold.” **Questions** > Will choosing a discrete-valued dominating pair of the mechanism be a sub-optimal choice for advantage calibration? Any dominating pair, whether discrete or continuous, suffices for advantage calibration. Moreover, the results of Doroshenko et al. 2022 imply that a carefully-crafted discrete-valued dominating pair can provide arbitrarily tight privacy bounds to a continuous mechanism. Since we use the algorithm proposed by Doroshenko et al., the answer to this question is no: a discrete-valued dominating pair is not sub-optimal for advantage calibration. > How is the discretized PLRV typically obtained for a general mechanism? In Appendix E, we detail the technique from Doroshenko et al. for constructing PLRVs for a general mechanism given its privacy curve. 
The technique discretizes the privacy profile curve and builds pmfs of the dominating pair $P$, $Q$ using the discretized values. The distributions of the PLRVs can then be computed from the pmfs of $P$, $Q$. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! I don't have further questions.
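As a toy complement to this description (not the Doroshenko et al. construction itself), once a discrete dominating pair $P, Q$ is in hand, privacy quantities follow directly from the pmfs; e.g., the hockey-stick divergence $\delta(\varepsilon) = \sum_o \max(P(o) - e^{\varepsilon} Q(o),\, 0)$, shown here for the standard two-outcome randomized-response pair:

```python
import math

def hockey_stick(p, q, eps):
    """delta(eps) = sum_o max(p(o) - e^eps * q(o), 0) for discrete pmfs p, q."""
    return sum(max(po - math.exp(eps) * qo, 0.0) for po, qo in zip(p, q))

# Dominating pair of eps0-randomized response (two outcomes):
eps0 = 1.0
a = math.exp(eps0) / (1.0 + math.exp(eps0))
P, Q = [a, 1.0 - a], [1.0 - a, a]

print(hockey_stick(P, Q, eps0))  # ~0: the pair is exactly (eps0, 0)-DP
print(hockey_stick(P, Q, 0.0))   # delta(0) = (e - 1)/(e + 1), the max advantage
```

Here $\delta(0)$ is the total variation distance between $P$ and $Q$, which equals the maximal attack advantage.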
Summary: This paper introduces a new method for improving the utility of privacy-preserving machine learning without sacrificing privacy protection. The authors develop efficient algorithms to calculate the trade-off curve between attack FPR and FNR using f-differential privacy (f-DP). They then show how to use this information to fine-tune the amount of noise added, allowing for precise control over privacy risks. This approach offers a more refined way to balance data usefulness and privacy in machine learning, addressing a key challenge in the field. Strengths: The authors show that their direct calibration methods can significantly reduce the required noise scale compared to the standard approach, leading to an increase in utility (e.g., 18 percentage points higher accuracy) for the same level of privacy risk. They also demonstrate that calibrating for attack advantage (attack accuracy) can increase attack power in the low FPR regime, and show that calibrating for a desired FPR and FNR level mitigates this issue. By this method, the noise of DPSGD can be reduced tighter once we are aware of the privacy risk. Weaknesses: The notation in this paper is heavy, could you provide notation tables? Could you provide a more detailed algorithm in the appendix about how to use $\epsilon$ , $\delta$, and q to generate Advantage Calibration’s Xω , Yω. Its quite hard to follow in Section 3’s demonstration. I think authors need an algorithm to demonstrate how they did the experiment from the supplement's code. I am willing to increase my score if the above questions can be answered. Technical Quality: 4 Clarity: 2 Questions for Authors: see in weaknesses Confidence: 2 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: see in weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and suggestions. We address some of points raised directly below, but they are also partially covered in our general response, to which we refer when relevant. > The notation in this paper is heavy, could you provide notation tables? We agree. See the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz) for the notation table. > Could you provide a more detailed algorithm in the appendix about how to use $\epsilon$, $\delta$ , and $q$ to generate Advantage Calibration’s $X_\omega$ , $Y_\omega$. It’s quite hard to follow in Section 3’s demonstration. I think authors need an algorithm to demonstrate how they did the experiment from the supplement's code. In Section 3, following Theorem 3.4, we point to Appendix E for how we construct $X_\omega$, $Y_\omega$ for a general mechanism given its privacy curve. We will clarify in L226 as follows: “Given a method for obtaining valid PLRVs $X_\omega$ , $Y_\omega$ for any $\omega$, **such as the one provided by Doroshenko et al. 2022 (see Appendix E),** …” We want to emphasize to the reviewer that Appendix E is a summary of the algorithm proposed by Doroshenko et al. 2022, which we used in all of our experiments. Moreover, this algorithm uses the entire privacy curve and mechanism-specific functions such as $q$ to construct $X_\omega$, $Y_\omega$, not just a single $(\epsilon, \delta)$ pair. Additionally, we provide a more detailed description of the steps involved in the calibration algorithms in the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz).
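The calibration loop discussed in this rebuttal can be sketched for the special case of a single Gaussian mechanism, where the advantage has a closed form (assuming the mechanism is $\mu$-GDP with $\mu = \Delta/\sigma$, giving $\eta = 2\Phi(\mu/2) - 1$); the paper's algorithm instead evaluates the risk from general PLRVs:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def advantage_gaussian(sigma, sensitivity=1.0):
    # Closed-form maximal attack advantage of the Gaussian mechanism:
    # mu-GDP with mu = sensitivity / sigma, so eta = 2 * Phi(mu / 2) - 1.
    mu = sensitivity / sigma
    return 2.0 * Phi(mu / 2.0) - 1.0

def calibrate_advantage(eta_star, lo=1e-3, hi=1e3, iters=100):
    """Binary search for the smallest sigma with advantage <= eta_star
    (the advantage is decreasing in sigma)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if advantage_gaussian(mid) <= eta_star:
            hi = mid
        else:
            lo = mid
    return hi

sigma = calibrate_advantage(eta_star=0.1)
print(round(sigma, 3))  # smallest noise scale meeting the advantage target
```

The binary search mirrors the generic structure (minimize noise subject to risk ≤ target); only the risk oracle changes between the standard and direct calibration variants.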
Summary: Differential privacy (DP) mitigates privacy risks in machine learning by adding noise during training, balancing privacy and utility. Traditionally, the noise scale is set using a privacy budget parameter ε, which is then translated to attack risk. This two-step method often results in conservative risk assessments and reduced utility. The proposed approach directly calibrates noise to a desired attack risk, bypassing ε, thus decreasing the noise scale and enhancing utility while maintaining privacy. Empirical evidence shows that this method improves model accuracy for the same privacy level, offering a practical way to enhance privacy-preserving machine learning. Strengths: 1. The proposed framework successfully addresses technical challenges, providing compelling insights. 2. The experimental results are robust and strongly support the framework's effectiveness. Weaknesses: 1. The paper includes numerous definitions and symbols, which can be confusing for readers. Creating a table to summarize these terms and explain their meanings would greatly enhance clarity and help readers follow along more easily. 2. In Section 2, the problem statement is not clearly articulated, making it difficult to discern the main goal and the specific problem being addressed. Highlighting the main goal and explicitly defining the problem would make this section more readable and comprehensible. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the experiment section, how about the results of other low FPR regimes? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. In Section 3.1, please include inference for ensuring advantage calibration in the appendix. 2. The two figures at the top of page 7 lack figure names. It appears they should be labeled as Figure 2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and the suggestions! We respond to the comments and questions next. Note that we also address some of them in the general response, and we refer to it when relevant. > The paper includes numerous definitions and symbols, which can be confusing for readers. Creating a table to summarize these terms and explain their meanings would greatly enhance clarity and help readers follow along more easily. See the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz) for the notation table. > In Section 2, the problem statement is not clearly articulated, making it difficult to discern the main goal and the specific problem being addressed. Highlighting the main goal and explicitly defining the problem would make this section more readable and comprehensible. The problem statement appears in the discussion of Section 2.3. We will highlight this more clearly in the final version, and change the title of Section 2.3 to “Our Objective: Attack-Aware Noise Calibration” to clearly signal that the problem statement is there. To be clear, our problem is to solve the calibration problem $\min \omega \text{ s.t. } risk_\omega \leq threshold$, in particular, by providing a method to efficiently compute $risk_\omega$ for standard notions of operational attack risk in DP. **Questions** > In the experiment section, how about the results of other low FPR regimes? See the attached PDF in the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz) for the trade-off curves for all five models in the language modeling experiment. Using these curves, one can glean the behavior in terms of FPR/FNR trade-off for different levels of noise scale/task accuracy and for all $\alpha \in [0, 1]$ as opposed to $\alpha \in \{{0.01, 0.05, 0.1\}}$. We will add these plots in the Appendix in the final version. 
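For intuition on such trade-off curves in the low-FPR regime, a small illustrative sketch (assuming a plain $\mu$-GDP Gaussian mechanism rather than the paper's general PLRV-based curves) evaluates $\beta(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    # Inverse CDF by bisection (Phi is strictly increasing).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tradeoff_gaussian(alpha, mu):
    """FNR lower bound at FPR = alpha for a mu-GDP mechanism:
    beta(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return Phi(Phi_inv(1.0 - alpha) - mu)

# Low-FPR regime for two illustrative noise levels (mu = sensitivity / sigma):
for alpha in (0.001, 0.01, 0.1):
    print(alpha, round(tradeoff_gaussian(alpha, mu=0.5), 3),
          round(tradeoff_gaussian(alpha, mu=2.0), 3))
```

At $\mu = 0$ (infinite noise) the curve collapses to blind guessing, $\beta(\alpha) = 1 - \alpha$; smaller noise (larger $\mu$) pushes the attainable FNR down at every FPR.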
**Limitations** > In Section 3.1, please include reference for ensuring advantage calibration in the appendix. Eq. 13 in Section 3.1 shows (1) the optimization problem corresponding to advantage calibration and (2) how we compute advantage given dominating PLRVs $(X_\omega, Y_\omega)$. Moreover, in Appendix E we explain how to obtain $X_\omega, Y_\omega$ from an arbitrary mechanism given its privacy curve, and in Section 2.3 we explain how we solve an optimization problem in the form of Eq. 13. In the final version, we will consolidate all this information into one algorithm in the Appendix, as shown in the [general response](https://openreview.net/forum?id=hOcsUrOY0D&noteId=4H6Ec2IhLz). > The two figures at the top of page 7 lack figure names. It appears they should be labeled as Figure 2. Thank you for spotting this issue! Indeed, this figure is missing a caption. We will fix this in the final version. --- Rebuttal Comment 1.1: Comment: As the author-reviewer period is coming to an end, please let us know if we have addressed your concerns and if we can clarify anything else.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and feedback. We are glad the reviews found that our framework addresses an important technical problem (pcXQ), provides significant theoretical contributions (pcXQ), and compelling insights (JyJW). We are also glad that the reviews appreciated the practicality of our algorithms (tsqL), the increased utility that our approach enables (Bdkd), the robustness of the experimental results (JyJW), as well as pointed out that the paper is easy to follow (pcXQ, tsqL) and builds a strong case for attack-aware noise calibration (tsqL). We noticed some general trends in the reviewers' suggestions, addressed next. **Notation table.** Reviewers JyJW and Bdkd remarked that the paper is notation heavy, and asked for a table summarizing the different terms and their meanings. We agree and will add the notation table in the final version (see below). **Algorithms for standard and direct advantage calibration.** Reviewers JyJW and Bdkd both asked for a single algorithm detailing how we ensure advantage calibration. We acknowledge that the exact steps for calibration are spread across Sections 2 and 3, and we will consolidate them in the final version. Specifically, we will detail the exact steps as follows: _Standard calibration._ Inputs: privacy parameters $\eta^\star, \delta^\star < \frac{1}{n}$, privacy profile $\varepsilon_\omega(\delta)$. 1. Find $\varepsilon^\star$ which ensures $\eta = \eta^\star$ for a given $\delta^\star$, i.e., solve Eq. 8 for $\varepsilon$ for fixed $\delta = \delta^\star$ and $\eta = \eta^\star$. 2. Find noise parameter $\omega$ that ensures $(\varepsilon^\star, \delta^\star)$-DP using binary search as described in Sec. 2.3: $\min_\omega \text{ s.t. } \varepsilon_\omega(\delta^\star) \leq \varepsilon^\star$. This step is exactly Eq. 2. _Direct advantage calibration (ours)._ Inputs: privacy parameter $\eta^\star$, PLRVs $X_\omega, Y_\omega$.
Find noise parameter $\omega$ that ensures $\eta^\star$ advantage using binary search as described in Sec. 2.3, and using Eq. 11 in Theorem 3.4 to instantiate the $risk_\omega = \eta_\omega = P[Y_\omega > 0] - P[X_\omega > 0]$ function: $\min_\omega \text{ s.t. } P[Y_\omega > 0] - P[X_\omega > 0] \leq \eta^\star.$ This step is exactly Eq. 13. **Additional plots and experiments.** Moreover, we attach a PDF with additional results: 1. Trade-off curves for the language modeling experiments, showing a more complete picture of attainable $(\alpha, \beta)$ values in addition to $\alpha \in \{{0.01, 0.05, 0.1\}}$ in the submission (following a question by JyJW). 2. A version of Figure 1 for the image classification experiments with a tight reanalysis of the method by Tramer and Boneh, 2021, using the Doroshenko et al. accountant instead of the RDP accountant to ensure fair comparisons (following a comment by tsqL). 3. Following a suggestion by pcXQ, we added a new experimental setting in which we use attack-aware noise calibration for releasing a single _differentially private histogram_. This is a simple but common usage of DP, appearing as a building block, e.g., in private query interfaces. To make it concrete, we use the well-known ADULT dataset comprising a small set of US Census data, and simulate the release of the histogram of the 'Education' attribute (with 16 distinct values, e.g., “High school”, “Bachelor’s”, etc.). To measure utility, we use the $L_1$ distance (error) between the original histogram and the released private histogram. The plot shows the increase in utility if we calibrate the noise of the Gaussian mechanism (with post-processing to ensure the counts are positive integers) using the direct calibration algorithm to a given level of FPR $\alpha^\star$ and FNR $\beta^\star$ vs. standard calibration over 100 simulated releases with different random seeds. 
In certain cases, e.g., for $\alpha^\star = 0.1$ and $\beta^\star = 0.75$, our approach decreases the $L_1$ error by 3x, from three erroneous counts on average to one. We address the other comments in individual responses.

---

| Symbol | Description | Reference |
|-|-|-|
| $z \in \mathbb{D}$ | Data record | |
| $S \in 2^{\mathbb{D}}$ | Dataset of records | |
| $S \simeq S'$ | Adjacency relation of neighboring datasets that differ by one record | |
| $M_\omega: 2^{\mathbb{D}} \rightarrow \Theta$ | Privacy-preserving mechanism | |
| $\omega \in \Omega$ | Noise parameter of a given mechanism $M(S)$ | |
| $D_\gamma(M(S) \| M(S')) \geq 0,\ \gamma \geq 0$ | Hockey-stick divergence | Eq. 1 |
| $\varepsilon \in (0, \infty), \delta \in [0, 1]$ | Privacy parameters in differential privacy | Def. 2.1 |
| $\varepsilon_\omega: [0, 1] \rightarrow \mathbb{R}_{\geq 0}$ | Privacy profile curve $\delta \mapsto \varepsilon_\omega(\delta)$ | Def. 2.2 |
| $\phi: \Theta \rightarrow [0, 1]$ | Membership inference hypothesis test | |
| $\alpha_\phi \in [0, 1]$ | False positive rate (FPR) of attack $\phi(\theta)$ | |
| $\beta_\phi \in [0, 1]$ | False negative rate (FNR) of attack $\phi(\theta)$ | |
| $\eta \in [0, 1]$ | Maximal advantage across attacks against mechanism $M(S)$ | Eq. 7 |
| $T(M(S), M(S')): [0, 1] \rightarrow [0, 1]$ | Trade-off curve between FPR and FNR of optimal attacks | Def. 2.3 |
| $f: [0, 1] \rightarrow [0, 1]$ | A lower bound on the trade-off curve for all neighboring datasets | Def. 2.4 |
| $P, Q$ | A dominating pair of distributions for a given mechanism $M(S)$ | Def. 3.1 |
| $X, Y$ | Privacy loss random variables for a given dominating pair $P, Q$ | Def. 3.2 |

Pdf: /pdf/d0c8256c86630dd99d2cfde75c02eaba28402e5a.pdf
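A minimal sketch of the private histogram-release setup described above (hypothetical counts and noise scale; the actual experiment calibrates $\sigma$ with the direct calibration algorithm and uses the ADULT 'Education' attribute):

```python
import random

def release_histogram(counts, sigma, seed=0):
    """Gaussian mechanism plus post-processing: clip at zero and round,
    so the released counts are non-negative integers."""
    rng = random.Random(seed)
    return [max(0, round(c + rng.gauss(0.0, sigma))) for c in counts]

true_counts = [120, 45, 300, 8]   # hypothetical attribute counts
noisy = release_histogram(true_counts, sigma=2.0)
l1_error = sum(abs(t - r) for t, r in zip(true_counts, noisy))
print(noisy, l1_error)
```

Smaller calibrated $\sigma$ directly translates into a smaller expected $L_1$ error, which is the utility gain reported in the rebuttal; clipping and rounding are post-processing and hence do not affect the privacy guarantee.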
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Faster Algorithms for User-Level Private Stochastic Convex Optimization
Accept (poster)
Summary: This paper revisits the user-level private stochastic convex optimization (SCO) problem, where each user can possess multiple data points. The contributions of this paper are in two aspects: 1. They propose a linear-time algorithm that attains the same excess risk as the prior linear-time algorithm, but with a milder assumption on the smoothness parameter $\beta$ and does not require the number of users $n$ to depend on dimension $d$, making the algorithm more applicable in practice. 2. They propose an algorithm that achieves optimal excess risk with improved gradient complexity compared to previous work in both smooth and non-smooth settings. The key insight behind the linear-time algorithm is to remove outlier SGD iterates instead of gradients. To achieve this, they prove a stability argument (Lemma 2.3); the utility then follows from the localization arguments in [FKT20]. The optimal algorithm is inspired by the item-level accelerated phased ERM algorithm of [KLL21], which is then applied to a user-level gradient outlier-removal procedure. The non-smooth case is handled by a standard randomized smoothing technique. Strengths: 1. DP SCO is a fundamental problem in private machine learning. In many applications, each user can hold more than one data point. Studying user-level DP SCO is of great importance. 2. The proposed algorithms improve the previous algorithms in various settings, relaxing the assumptions and providing a better run time. 3. The writing of this paper is clear. Weaknesses: The linear-time algorithm only has a suboptimal excess risk, while the optimal algorithm has a gradient complexity higher than linear. Technical Quality: 4 Clarity: 4 Questions for Authors: In the conclusion, it is stated that whether a linear-time algorithm with optimal risk exists for smooth losses is an open question. What about non-smooth losses?
If one has a linear-time algorithm for smooth losses, does the randomized smoothing give a linear-time algorithm for non-smooth losses? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Successfully discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback and positive assessment of our work. We respond to your comments below. >*In the conclusion, it is stated that whether a linear-time algorithm with optimal risk exists for smooth losses is an open question. What about non-smooth losses?* Good question. The non-smooth case is much harder, so we propose that future work should first focus on fully solving the smooth case. Even in the easier item-level setting, there are no known optimal and linear-time algorithms for the non-smooth case (in general arbitrary parameter regimes). >*If one has a linear-time algorithm for smooth losses, does the randomized smoothing give a linear-time algorithm for non-smooth losses?* Unfortunately, no: Randomized smoothing increases the runtime of the algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
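As an aside for readers unfamiliar with the randomized smoothing mentioned above, a toy sketch (illustrative only, not the paper's construction) of why smoothing increases runtime: the smoothed loss $f_s(x) = \mathbb{E}_u[f(x + s u)]$ is optimized via subgradients averaged over perturbed points, costing $k$ extra subgradient evaluations per step:

```python
import random

def smoothed_grad(subgrad, x, radius, k, seed=0):
    """Estimate a gradient of the smoothed loss f_s(x) = E[f(x + s*u)]
    by averaging subgradients at k perturbed points; these k extra
    (sub)gradient evaluations per step are the runtime overhead."""
    rng = random.Random(seed)
    return sum(subgrad(x + rng.uniform(-radius, radius)) for _ in range(k)) / k

sign = lambda t: (t > 0) - (t < 0)   # a subgradient of the nonsmooth loss |x|
print(smoothed_grad(sign, x=1.0, radius=0.1, k=1000))  # 1.0: far from the kink
print(smoothed_grad(sign, x=0.0, radius=0.1, k=1000))  # ~0 near the kink
```

Away from the kink the smoothed gradient matches the true gradient, while near the kink it interpolates smoothly, at the cost of $k$ evaluations per gradient query.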
Summary: This paper proposes new mechanisms for private SCO with user-level DP. Their approach extends prior work on this topic and reduces the gradient complexity while attaining optimal excess risk. Three algorithms are proposed: (1) a linear-time algorithm that pushes the state of the art (but does not achieve optimal excess risk), (2) an $(mn)^{9/8}$-time algorithm that does achieve optimal excess risk (improving over prior work, which required $(mn)^{3/2}$ gradient evaluations), and (3) an optimal algorithm for non-smooth losses that requires fewer than $(mn)^2$ gradient computations. Extensive theoretical results give expressions for the expected excess risk under various assumptions for each mechanism. Strengths: * The paper improves the state of the art on an important problem, reducing gradient complexity while also relaxing assumptions. * The theoretical results are strong, and clearly stated, presented, and discussed. * The work is well-motivated and the authors make a compelling story. Weaknesses: * One of the main claims is not supported by the evidence given. The work is motivated by "modern ML applications, such as training LLMs..." and it claims that "our algorithm and result is applicable to many practical ML problems." However, looking at Algorithms 1-2 it is not clear that this statement is true. I believe modern LLMs are pretty big (> 1B parameters) and typically saturate GPU memory, and Alg 1 requires storing C different settings of those parameters to compute the concentration scores -- is that feasible? From what I can tell, for reasonable values of n, m, epsilon, and delta, C ~ 500. * The proposed algorithms seem quite complicated. Is that complexity necessary? The paper is missing a description + summarization of simpler approaches and their limitations. E.g., what about plain-old DP-SGD? What happens when you try applying optimal item-level DP mechanisms to the user-level setting (with appropriate modifications)?
What are those modifications? (I can think of a few natural ways to extend them.) It would be nice if your mechanism reduced to a known optimal item-level DP mechanism when m=1. Technical Quality: 2 Clarity: 2 Questions for Authors: * In what ways does your algorithm exploit the user-structure of the problem (the fact that each user holds m items)? * The noise multiplier seems humongous; am I missing something? It's hard to believe this algorithm doesn't diverge --- do you have any empirical evidence (even on synthetic data satisfying all your required assumptions) to confirm / augment your theorem statements? Note I have not checked your proofs at all. Example setting: epsilon = 10, delta = 1e-6, step size = 1, n = 10 million, m = 100, d = 1 billion, L = 1 --> C = 475 --> T = ? >= 100 --> tau = 400,000 --> sigma = 250,000. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Limitations are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback and assessment of our work. First, we would like to kindly remind you that our work focuses on understanding theoretical complexity bounds for a fundamental problem. This is both an important goal in its own right and also lays the foundation for future works focusing on practical aspects of user-level DP SCO. We did not intend to give the impression that our algorithms are ready for large-scale practical deployments yet. For example, we did not optimize for constants in our algorithms (e.g. in $\sigma$ and $C$), since our results are stated in big-O. We respond to your specific comments below. >*Evidence for the claim "our algorithm and result is applicable to many practical ML problems."* Thank you for raising this important point. We stand by this claim, which refers to the applicability of our theoretical result. Theoretically, our Algorithm 1 is applicable whenever $\beta < \sqrt{nmd\varepsilon}$ and $n \gtrsim \log(n/\delta)/\varepsilon$. Typically, $\varepsilon$ is a small constant in practice and $n$ is quite large so that the second condition is easily satisfied. Moreover, $d$ is often large in practice and the smoothness condition holds (e.g. for linear/logistic regression). **For such problems, our algorithm and result is (theoretically) applicable**. We will revise the wording in the final version of our paper to clarify that we are referring to **theoretical applicability** of our result/algorithm to practical ML problems, in contrast to the result/algorithm of [BS23] which does not apply theoretically to these problems. That being said, you raise a valid point about GPU memory and applicability in practice. An important direction for future research is to develop practical implementations of our algorithms that are well suited for large-scale tasks like training LLMs, e.g. by parallelizing computations and optimizing for constants to allow for smaller $C$. 
>*The work is motivated by "modern ML applications, such as training LLMs..."* We only mentioned LLMs in order to motivate the importance of user-level DP. The full quote reads: *“in many modern ML applications, such as training LLMs on users’ data in federated learning, each user contributes a large number of training examples [XZ24]. In such scenarios, the privacy protection that item-level DP provides for each user is insufficiently weak.”* **We believe that this claim is evidenced by [XZ24]**. We did not mean to suggest that our current algorithms are ready for training LLMs in practice yet. We apologize for any confusion. >*algorithms seem quite complicated. Is that complexity necessary?* Great question. We believe that some mechanism for private outlier detection and removal is needed to obtain optimal rates for user-level DP SCO. This necessarily introduces additional complexity compared to the item-level setting, where no outlier removal is needed. That being said, we agree that finding simpler algorithms that achieve the optimal rates is an important problem for future work. We will highlight this in the conclusion of the revision. >*The paper is missing a description + summarization of simpler approaches and their limitations. E.g., what about plain-old DP-SGD? What happens when you try applying optimal item-level DP mechanisms to the user-level setting (with appropriate modifications). What are those modifications (I can think of a few natural ways to extend them)?* Thank you for the nice suggestion! We will happily add such a description in the camera-ready version if our paper is accepted. **Simpler modifications of optimal item-level DP algorithms do not result in optimal user-level DP algorithms.** For example, naively applying group privacy to item-level DP-SGD yields suboptimal excess risk. Simple DP-SGD with outlier removal was essentially the algorithm of [AL24], which our algorithm improves over. 
Moreover, our results cannot be obtained by applying black-box item-level to user-level conversions (e.g. [Bun et al., STOC 2023]). Please let us know if you have any other simple approaches in mind. >*In what ways does your algorithm exploit the user-structure of the problem (the fact that each user holds m items)?* We use the fact that each user holds $m$ items to add less noise than would be possible if each user only held 1 item. This is accomplished through our outlier removal procedures and by arguing that with high probability, users’ gradients (or iterates) concentrate around a ball that shrinks with $m$. >*The noise multiplier seems humongous, am I missing something? It's hard to believe this algorithm doesn't diverge…I have not checked your proofs at all.* First, our proofs show that **our algorithms are guaranteed to converge**. Moreover, the convergence rates and runtimes are optimal/state-of-the-art (up to constant and log factors), as we discuss in the paper. Second, **we do not optimize for constants** since this is a theoretical work concerned with asymptotic complexity bounds. Thus, it is very likely that a smaller noise multiplier can be used. We leave it for future work to implement practical versions of our algorithms with carefully optimized constants. >*Do you have any empirical evidence (even on synthetic data satisfying all your required assumptions) to confirm / augment your theorem statements?* To reiterate, the focus of our paper is on understanding the theoretical complexity bounds for a fundamental problem, not empirical performance. We leave it for future work to implement practical versions of our algorithms and evaluate these algorithms empirically. --- Rebuttal Comment 1.1: Comment: Acknowledging as a paper that is 100% theoretical, I think it is fair to judge the work on its theoretical rather than practical merits. However, it still seems this approach is a bit more complicated than it ought to be. 
For example, taking Alg 1 from [BFTT14](https://arxiv.org/pdf/1908.09970) as a starting point, we could modify it so that in step 5 each batch only contains a single example per user. Accounting-wise, the user-level privacy properties of this algorithm on a dataset with n users are identical to the example-level privacy properties of Alg 1 on a dataset with n examples. As far as I can tell, Thm 3.2 still goes through, with the sqrt(d log(1/delta)) / (epsilon n) term remaining the same, as sigma depends on the number of users only. The second term 1/sqrt(n) I believe would change to 1/sqrt(n m). This bound seems comparable to Thm 2.1. While [BFTT14] was not linear time, I believe a similar modification could be made to an optimal linear-time algorithm like [FKT20](https://arxiv.org/pdf/2005.04763), with some tweaking to hyper-parameters. --- Reply to Comment 1.1.1: Comment: Thank you very much for acknowledging the theoretical nature of our paper and for giving us an opportunity to further clarify the drawbacks of other simple approaches. **The user-level DP modification of DP-SGD that you propose does not achieve the SOTA bounds that our algorithms achieve:** In particular, your *proposed algorithm cannot achieve the optimal excess risk of our Algorithm 3*. Moreover, a one-pass (linear-time) implementation of your proposed algorithm *does not achieve the SOTA excess risk in linear time of our Algorithm 1*: Note that each of the error terms in our Theorems 2.1 and 3.2 decreases with $m$, whereas the private optimization error term in your proposed algorithm would not depend on $m$. *Why the private optimization error term in your proposed algorithm does not decrease with $m$ (in contrast to our algorithms):* The variance of the additive noise in your proposed user-level DP algorithm does not decrease with $m$, since no outlier-detection/removal procedures are used and hence the worst-case user-level sensitivity of gradients in your algorithm is $\Theta(1)$.
By contrast, our algorithms add less noise scaling with $\tau \approx 1/\sqrt{m}$ by incorporating outlier-detection/removal. This is what enables our algorithms to offer superior performance compared to simple approaches that do not involve outlier-removal. By the same reasoning, **applying your simple modification to FKT20 would also not result in algorithms with the SOTA guarantees that our novel algorithms provide**. We sincerely hope that our response clarifies your concerns so that you can increase your score. Please let us know if any other questions remain. --- Rebuttal 2: Comment: Thank you very much for your timely reply. As a general comment, we emphasize that there is a long line of work on the user-level DP-SCO problem, which tries to get optimal rates via complicated algorithms. Reducing user-level DP to item-level DP, similar to your proposal, is trivial, but does not benefit significantly from larger $m$. The challenge of user-level DP SCO is designing algorithms that benefit (in all of the error terms) from large $m$, as our algorithms do. Indeed, the regime $m > n$ is more interesting theoretically for user-level DP SCO. We will discuss this and the other naive baselines (e.g. group privacy) in the final version. We respond to your specific comments below. >*it is better than your utility guarantee for Alg 1 as far as I can tell, assuming n > m...* First, note that the **runtime of your proposed *suboptimal* algorithm** (using BFTT19) **would need to be at least *quadratic* in $n$** in order to achieve the suboptimal risk bound stated in your above comment, even for smooth losses (after optimizing for $T$). This runtime is worse than any of the algorithms we provide in our paper. In particular, **our Algorithm 3 achieves *optimal* excess risk with runtime that is *subquadratic* in $n$**, scaling with $n^{9/8}$. Moreover, **our Algorithm 1 is linear-time**. 
Furthermore, a linear-time implementation of your proposed BFTT19-based algorithm (by simply choosing $T$ to be small) would result in a fairly strict restriction on the smoothness parameter ($\beta \lesssim \sqrt{m}$) in order to achieve inferior excess risk of $\approx 1/\sqrt{m} + \sqrt{d}/n\varepsilon$. Finally, if you try to modify FKT20 with your proposed simple approach, you obtain a suboptimal linear-time algorithm that suffers from a very severe smoothness restriction. All that being said, we agree that simplicity is a virtue and will gladly add discussion of your proposed simple approach to the final version. Thank you very much for your interesting suggestion. >*Under your I.i.d. assumption, the variance of the average of m samples would be 1/m smaller. I believe you should be able to leverage this to add sqrt(m) less noise and recover the rates you get for Alg 3, with a much simpler approach.* Notice that **your proposed approach would fail to satisfy user-level DP** without incorporating some outlier removal step or adding excessive noise (leading to suboptimal risk). This is because *one cannot assume the data is i.i.d. when bounding sensitivity/proving privacy*. That is why we need to use outlier removal in our algorithms. We really appreciate you engaging with us in this productive discussion and believe that implementing your suggestions will strengthen the final version. Please let us know if you have any further questions or comments. --- Rebuttal 3: Comment: * Reducing user-level DP to item-level DP, similar to your proposal, is trivial, but does not benefit significantly from larger m. >* This may be true, although you have not convinced me yet, but either way this discussion is missing in this paper. You need to make a convincing case of this in the paper, and such changes would be large enough to warrant a fresh review in my opinion. * Indeed, the regime is more interesting theoretically for user-level DP SCO. 
>* One of the things I look for based on the reviewer guidelines is "are the claims made supported by evidence." What evidence do you have to support this statement? What does it even mean to be "more interesting theoretically"? * runtime of your proposed **suboptimal** algorithm >* I don't believe you adequately showed this was suboptimal, other than saying (without evidence) that "m > n" is the more interesting regime. At minimum, you should qualify your claims of optimality with the caveat "under some conditions". * would need to be at least **quadratic in n** >* Again, you are making these claims without evidence, and as far as I can tell it is simply not true either. BFTT sets the batch sizes to n sqrt(epsilon / 4T) and runs for T <= n/8 iterations. This means that it requires sqrt(epsilon * n) / m passes over the dataset. If m > n as you purport, this is actually sub-linear, not quadratic. >* Second, BFTT14 is 10 years old -- of course there are more recent papers that achieve the same rates with much less runtime. I did not suggest running BFTT with a small T, but just used that as one of the simplest example algorithms that achieves the optimal rates. I believe the modifications I proposed are compatible with any optimal item-level DP mechanisms in principle, including the linear-time ones. * Finally, if you try to modify FKT20 with your proposed simple approach, you obtain a suboptimal linear-time algorithm that suffers from a very severe smoothness restriction. >* This very well may be true, I am not familiar enough with the prior results to confirm or deny. But given the flaws in reasoning I've identified above, I will not trust the statement **without evidence**. * **your proposed approach would fail to satisfy user-level DP** >* I didn't actually propose a specific mechanism here, and maybe it is the case that the only way to get 1/sqrt(m) rates is with outlier removal, but again, you are making this claim without providing evidence. 
Can you prove there is no other way to do this? In principle you should be able to run any linear-time algorithm for item-level DP, and replace the step that does item-level DP mean estimation of the minibatch gradient with a step that does user-level DP mean estimation of the minibatch gradient. This would lead to a valid algorithm, that leverages and connects to prior work already done on this problem of user-level DP mean estimation. --- Rebuttal 4: Comment: First, **there are known trivial approaches that obtain excess risk $1/\sqrt{nm}+\sqrt{d}/n\epsilon$**: e.g. reducing to item-level DP SCO via group privacy or by averaging each user’s gradients. This bound $1/\sqrt{nm}+\sqrt{d}/n\epsilon$ is the same as the excess risk bound that the reviewer claimed their simple approach obtains. Note that while these results are simple to obtain, **they do not benefit significantly from large $m$** because *the privacy term $\sqrt{d}/n\epsilon$ does not shrink as $m$ increases*. Thus, an important challenge in user-level DP SCO is getting risk bounds that decay with large $m$. Getting such bounds is non-trivial and has been the topic of a long line of work (involving complicated algorithms) that we discuss in the Introduction. By contrast, if $m$ is small, then one can simply apply a trivial approach without suffering too much. **For these reasons, the large $m$ regime is more interesting** theoretically. That being said, we clearly acknowledge that the trivial baseline rate $1/\sqrt{nm}+\sqrt{d}/n\epsilon$ can be better than our linear-time bound in Theorem 2.1 when $n > m$. However, it is not clear to us whether one can obtain the trivial bound both in linear time and with a mild smoothness assumption like the one in our Theorem 2.1. Moreover, we reiterate that **this trivial bound is suboptimal, since it is bigger than the optimal bound given in our Theorem 3.2**. We will add this comparison/discussion in the final revision. 
Thank you again for this valuable suggestion. Finally, we want to emphasize again that **we do not see any way for simpler approaches to obtain any of our main results** (Theorems 2.1, 3.2, or 4.1). We believe that developing simpler optimal algorithms for user-level DP SCO is an interesting open question and invite the reviewer to make progress on this problem. We do note that several of the reviewer’s ideas were already explored in early suboptimal works on user-level DP SCO (e.g. LSA+21 and BS23) and therefore will probably not result in optimal rates. We thank the reviewer once again for their feedback. --- Rebuttal Comment 4.1: Comment: Acknowledging response. I think the paper could be significantly stronger by thinking more carefully about these things and incorporating this discussion into the paper. I am keeping my original rating as is since I think it would be a net positive for the scientific community for this work to be polished a bit more. I am reducing the soundness rating since I think there are some improvements you can make to the scientific methodology that came up in the discussion. However, there are some technical merits to the paper, and if other reviewers are happy to accept it as-is, I would not object to acceptance.
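For readers following this thread, the rate comparison at its center can be summarized as follows. This is a hedged sketch: constants and the $LR$ scaling are suppressed, and the second display quotes the optimal user-level rate as it is typically reported in the user-level DP SCO literature, rather than from this thread; it should be checked against the paper's Theorem 3.2.

```latex
% Trivial baselines (group privacy, or averaging each user's gradients):
% the privacy term does not shrink as m grows.
\[
  \mathbb{E}\, F(\hat{x}) - F(x^\ast)
  \;\lesssim\; \frac{1}{\sqrt{nm}} \;+\; \frac{\sqrt{d}}{n\varepsilon}
\]
% Rates that genuinely exploit the m items per user:
% every error term decays with m.
\[
  \mathbb{E}\, F(\hat{x}) - F(x^\ast)
  \;\lesssim\; \frac{1}{\sqrt{nm}} \;+\; \frac{\sqrt{d}}{n\sqrt{m}\,\varepsilon}
\]
```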
Summary: This paper considers stochastic convex optimization under user-level differential privacy. Algorithm 1 achieves the previous state-of-the-art excess risk of the linear-time user-level DP algorithm with milder assumptions. The algorithm is based on item-level DP-SGD algorithms. The previous paper [AL24] shows that when the data of one user changes, the empirical gradient does not change too much; hence, they remove outlier gradients to ensure privacy. Instead of removing outlier gradients, this paper removes outlier SGD iterates. This technique relies on a novel stability bound on SGD iterates proved in this paper. Algorithm 3 improves the run-time for both $\beta$-smooth and non-smooth loss functions. It applies outlier-gradient removal to random mini-batches and uses previous subroutines to achieve good performance. Strengths: This paper makes concrete improvements over previous work. Algorithm 3 has a faster run-time under milder assumptions while achieving optimal excess risk and user-level DP. Algorithm 1 improves the previous state-of-the-art excess risk of the linear-time user-level DP algorithm. Novel techniques and analysis are put forward which may be of independent interest. Weaknesses: Section 1.1 could contain more information about the intuition behind the novel techniques (e.g., what may be the key reason that outlier-iterate removal is better than outlier-gradient removal?). Contribution 2 of this paper lacks a comparison table to previous results. Technical Quality: 3 Clarity: 3 Questions for Authors: The questions are asked in the 'Weaknesses' part. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback and your positive assessment of our work. We respond to your comments below. >*Section 1.1 may contain more information about the intuition of the novel techniques (e.g., what may be the key reason that outlier-iterate removal is better than outlier-gradient removal?)* Great suggestion. We will elaborate on the intuition behind our techniques in the final version. Since the item-level DP phased SGD algorithm [FKT20] adds noise to the iterates, removing outlier iterates is a natural approach for extending this algorithm to be user-level DP. If we instead attempt to remove outlier gradients, one issue is that large batch sizes lead to a severe restriction on the smoothness parameter. >*Contribution 2 of this paper lacks a comparison table to previous results.* This is a very good suggestion. The only previous linear-time algorithm for user-level DP SCO is due to [BS23]. We compare our result against [BS23] in lines 80-93 and Remark 2.2. We will put this comparison into a second table in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your response! You have answered my questions thoroughly.
Summary: This paper proposes new algorithms for stochastic convex optimization under user-level differential privacy. This paper improves the computational complexity. The first algorithm achieves linear time complexity with a suboptimal risk bound (the risk is SOTA among all linear-time algorithms). The second algorithm achieves optimal risk with suboptimal time complexity (the time complexity is SOTA among all risk-optimal algorithms). Strengths: The authors have addressed an important problem. The method improves over Bassily et al. ICML 2023 and Liu et al. AISTATS 2024 on the time complexity. Moreover, this paper removes the assumption $n \geq \sqrt{d}$. I think that this paper makes a solid and interesting contribution. Weaknesses: 1. I feel that the second algorithm (Section 3 in the paper) is hard to follow. I do not understand the main ideas of the design. 2. Based on my own understanding, it seems that this paper requires $\epsilon<1$, since it divides users into $1/\epsilon$ groups. When $\epsilon>1$, there is only one group. Following the remainder of this paper, I feel that the bound with $\epsilon>1$ may not be optimal. Please correct me if I am wrong. 3. In the item-level case, optimal rates in linear time have been achieved in Feldman et al. Private stochastic convex optimization: optimal rates in linear time. STOC 2020. Although this paper has cited the above paper, it would be better if the authors could provide more explanation of why achieving linear time with optimal risk is hard in the user-level case. In other words, why can the methods in Feldman et al. not be simply extended to the user-level case? My feeling is that using something like a two-stage approach (such as Levy et al. Learning with user level privacy. NeurIPS 2021), the item-level methods can be converted to user-level ones with optimal rate. Minor issue: The statements of the three algorithms are not in the same font; Algorithm 2 differs from Algorithms 1 and 3. 
In general, I think that the paper indeed makes a solid contribution. However, the writing needs to be further improved and details need to be further polished. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Can the authors provide more intuitive explanations of the design of the algorithms, especially the second one? 2. I wonder what the risk bound would be for strongly convex loss functions. The case of strongly convex loss functions has been discussed in Kamath et al. Improved rates for differentially private stochastic convex optimization with heavy-tailed data. ICML 2022. This holds for the item-level case. I wonder if one can derive the user-level counterparts. 3. Can the authors please provide more discussion of why it is hard to achieve linear time with optimal risk? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have addressed limitations well in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful feedback and your positive assessment of our work. We respond to your comments below. >*Main ideas of the second algorithm (Section 3 in the paper)* Our second algorithm is **inspired by the item-level accelerated phased ERM algorithm of [KLL21]**. Their algorithm runs in $\log_2 (n)$ phases. In each phase $i$, a disjoint batch of $n_i$ samples is used to define a regularized empirical loss function, $\hat{F}_i(x)$. The main ideas of the algorithm are: - A noisy DP variation of **accelerated** SGD is used to efficiently find an approximate DP minimizer $x_i$ of $\hat{F}_i(x)$. This approximate minimizer $x_i$ is then used as the initial point in phase $i+1$. - By **stability** of regularized ERM, $x_i$ is also an approximate minimizer of the underlying population loss function $F$. - **Iterative localization**: As $i$ increases and $x_i$ gets closer to $x^* = argmin_{x} F(x)$ , we increase the regularization parameter (geometrically) to prevent $x_i$ from moving too far away from $x_{i-1}$ and hence from $x^*$. We also shrink $n_i$ geometrically. Our algorithm is a **user-level DP variation of the algorithm described above**: Our **Algorithm 2 uses outlier-detection and outlier-removal procedures to get a low-variance noisy *user-level DP* estimate of the gradient** (with sensitivity scaling with $\tau \approx 1/\sqrt{m}$) in each iteration. This noisy user-level DP gradient is then used to take a step of accelerated SGD. Algorithm 3 uses Algorithm 2 as a subroutine to get a **stable**, user-level DP approximate solution of a regularized ERM problem in each phase, and applies **iterative localization**. Please see **Section 1.1 and lines 219-225** for a more detailed description of our algorithmic techniques. >*it seems that this paper requires $\varepsilon <1$, since it divides users into $1/\varepsilon$ groups…I feel that the bound with $\varepsilon >1$ may not be optimal. 
Please correct me if it is wrong.* Good question. In fact, **we do not require $\varepsilon <1$**. **For Algorithm 1, we do not necessarily need $\varepsilon < 1$:** it suffices that $\varepsilon < 50 \log(20 n m e^{\varepsilon}/\delta)$, so that $C \geq 2$. This condition is not practically restrictive. **Algorithm 3 is also optimal for $\varepsilon > 1$:** This algorithm does not divide users into groups that depend on $\varepsilon$. >*More explanations on why achieving linear time with optimal risk is hard for user-level cases…why the methods in Feldman et al. can not be simply extended to user-level case?* Excellent question! Note that even obtaining user-level DP algorithms with *polynomial* runtime was challenging and only recently solved by the work of Asi & Liu (2023). In the item-level DP setting, Feldman et al. give two optimal linear-time algorithms: snowball SGD and phased SGD. It is not at all clear how to extend snowball SGD into an optimal linear-time user-level DP algorithm. Therefore, we aimed to extend their phased SGD algorithm into a user-level DP algorithm with our Algorithm 1. The key challenge in obtaining optimal user-level DP excess risk with this algorithm is controlling the **user-level sensitivity of the iterates of one-pass SGD**. In the item-level case, the sensitivity of the iterates is $O(\eta L)$ [Feldman et al., Lemma 4.3], independent of the number of iterations $T$. However, in the user-level case, the sensitivity is $O(\eta L \sqrt{T})$ (our Lemma 2.3), and we believe this bound is tight. Hence the additive Gaussian noise that is needed to privatize the iterates is too large to obtain the optimal rate in the phased SGD framework. Thus, a fundamentally different algorithmic framework that leverages Lemma 2.3 in a more effective way may be needed to obtain the optimal rate in linear time. 
A second challenge in designing user-level DP variations of phased SGD is **instability of the outlier-detection scheme**: If the initial point $x_{i-1}$ changes by a small amount and we do outlier-detection, then the output can change greatly. Moreover, outlier detection seems to be necessary for any optimal user-level DP algorithm. Thus, another direction for future work could be to develop a new, **more stable outlier detection method**. We will add a discussion of these challenges and potential ways to overcome them in the final version of the paper. >*My feeling is that using something like two-stage approach (such as Levy et al. NeurIPS 2021), the item-level methods can be converted to user-level ones with optimal rate.* First, recall that the approach of Levy et al. does not obtain optimal rates. Moreover, their two-stage approach is similar to our outlier removal procedure. Their approach suffers from similar instability as our outlier removal approach. Thus, we do not see any way for their approach to allow for improvements over our algorithms. Please let us know if you have any ideas for how to leverage Levy et al. that we might be missing. >*What will be the risk bound for strong convex loss functions?* Great question! The risk bounds for $\mu$-strongly convex loss functions will essentially be **the square of the convex risk bound** (but with the scaling factor $LR$ replaced by $L^2/\mu$). This follows from a reduction in [FKT20]. In particular, **Algorithm 3 can easily be converted into an optimal algorithm for strongly convex functions with state-of-the-art runtime**. We will comment on this in the revision. --- Rebuttal Comment 1.1: Title: Further response Comment: Thanks for your reply. Regarding $\epsilon$, in the paper, data are divided into $1/\epsilon$ groups. So if $\epsilon>1$, how to divide it? Do you mean that data are actually divided into $\lceil 1/\epsilon\rceil$ groups? 
If so, my intuitive feeling is that there should be a phase transition at $\epsilon = 1$. Your bound does not have any phase transition, which looks a bit strange for me. Please correct me if I am wrong. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for giving us an opportunity to further clarify this important point. First, to be clear, we do require $\varepsilon$ to be bounded by some constant (e.g. $\varepsilon \leq 100$) in the current algorithm/analysis. Thank you for catching this. We will add this assumption to our statement of Theorem 2.1 in the revision. We believe that this assumption is very reasonable, since the privacy guarantees degrade rapidly as $\varepsilon$ grows. Next, we respond to your specific question: >*in the paper, data are divided into $1/ϵ$ groups. So if ϵ>1, how to divide it?* In fact, we don’t divide the data into $1/\varepsilon$ groups exactly, but rather we divide the data into $C$ groups, where $C$ is defined in line 2 of Algorithm 1. Note that $C \geq 2$ for any $\varepsilon > 0$. The reason that we choose $C$ the way we do is in order to ensure that the Laplace noise added in line 10 of Algorithm 1 is much smaller than $C$ with high probability. This ensures that the outlier-removal procedure succeeds with high probability. --- Rebuttal 2: Comment: >*A natural solution is to use these mean estimation algorithms to estimate the gradients, and then perform updates for $T$ steps….Could you please discuss what will the bound be if we just use these user-level mean estimation algorithms to estimate gradients?* **The suggested approach of using a user-level DP mean estimator to estimate the gradient in each iteration was taken by [BS23]**. The result of [BS23] is discussed in our paper: see e.g. row 1 of Fig. 1, lines 47-53, and remarks 3.3 & 3.4. 
Recall that although the [BS23] approach can obtain optimal excess risk for certain sufficiently smooth functions, it has the following limitations: (i) **the requirement on the number of users is strict: $n \gtrsim d/\epsilon$**; (ii) **a strict restriction on the smoothness parameter (see row 1 of Fig. 1), and it does not work for non-smooth functions**. By contrast, *our results only require $n \gtrsim \log(d)/\epsilon$, our smoothness requirement is much milder than [BS23], and we get optimal excess risk even for non-smooth* functions. *Technical reasons for limitations (i) and (ii)*: Existing near-optimal $(\epsilon, \delta)$-user-level-DP mean estimators require $n \gtrsim d/\epsilon$ (in the two recent papers the reviewer suggests) or $n\gtrsim \sqrt{d}/\epsilon$ [BS23]. However, if we use full batches and run DP-GD for $T$ steps, then advanced composition implies that we need $n \gtrsim\sqrt{T d}/\epsilon$. Here, $T$ needs to depend on $n$ to get optimal excess risk, which leads to severe restrictions on the smoothness parameter. In particular, their approach cannot handle non-smooth functions (assuming $\epsilon < \sqrt{d}$) since $T \sim n^2$ would be needed in their algorithm to get optimal risk for such functions. To address the above two limitations, [AL24] designed a new mean estimation sub-procedure based on incorporating outlier removal with the AboveThreshold procedure. Our algorithms build on the techniques of [AL24], as we discussed in Section 1.1. We will include this more detailed discussion/comparison of the [BS23]-type approach and the technical reasons for its limitations in the final version. >*I do not understand "one can not assume the data is i.i.d when bounding sensitivity". As far as I know, existing research on user-level DP usually assumes that all samples are i.i.d.* We meant that **one cannot *use* the i.i.d. 
assumption in the privacy analysis**, because (user-level) DP is a strong *worst-case, distribution-free* requirement that must hold for all pairs of adjacent databases, not just i.i.d. databases. We thank the reviewer again for giving us an opportunity to further clarify these points. --- Rebuttal Comment 2.1: Comment: Thanks for your further response. **(i) the requirement on the number of users is strict: $n\gtrsim d/\epsilon$** This is inaccurate. The requirement should be $n\gtrsim \sqrt{d}/\epsilon$ instead of $d/\epsilon$. I agree that your method has a much weaker assumption on the smoothness $\beta$. The risk bound does not depend explicitly on $\beta$. In this respect, you have addressed part of my concerns. However, I still agree with the reviewer jGa8. While I agree that this new method is novel and interesting, this paper needs to be further polished before acceptance. In particular, your discussion with reviewer jGa8 about outlier removal is not fully convincing to me. Therefore, I would still like to keep my score. --- Rebuttal 3: Comment: First, we would like to thank the reviewer for replying to our responses in a timely fashion and engaging in this important discussion to clarify the limitations of prior works compared to our work. We respond to your comments below. >*The requirement should be $n \gtrsim \sqrt{d}/\epsilon$ instead of $d/\epsilon$* Apologies for the confusion. We meant to write that $n \gtrsim d/\epsilon$ users are needed if one uses the two concurrent papers that the reviewer mentioned [1] and [2]: This can be seen by inspecting Theorem 1.2 in [1] and the discussion that follows the statement of the theorem, and by inspecting Eq. (3) and Theorems 2&3 in [2]. You are right that BS23 "only" needs $n \gtrsim \sqrt{d}/\epsilon$, which we mentioned in the second paragraph of our above response and in line 49 of our paper. 
Note that this **polynomial dependence on $d$ still leads to a strict requirement on the number of users in comparison to our work, which only requires the number of users to depend logarithmically on $d$**. >*your discussion with reviewer jGa8 about outlier removal is not fully convincing to me* Please let us know if there are any specific points in this discussion that you have doubts or questions about. We would be happy to clarify further why we believe that simpler approaches are unable to obtain our SOTA results (without further innovations), and why/how our outlier-removal technique is a powerful way to overcome the technical barriers faced in prior works (e.g. BS23). >*I agree that your method has a much weaker assumption on the smoothness parameter...and that this new method is novel and interesting* Thank you for acknowledging these important aspects of our work. We reiterate that our significant improvements in **runtime** and in the **number of users** needed (logarithmic instead of polynomial dependence on $d$) are also crucial components of our SOTA results.
NeurIPS_2024_submissions_huggingface
2024
Contrastive losses as generalized models of global epistasis
Accept (poster)
Summary: This paper studies how the contrastive loss improves the estimation of fitness functions over the MSE loss and contributes in the following ways: (1) With noiseless data in Section 3.1, they found that the contrastive loss can estimate not only the correct ranking of fitness but also the exact fitness values; (2) When modulating the sparseness of epistasis vectors in Section 3.3, the MSE performance deteriorates quickly, unlike the contrastive loss; (3) The contrastive loss performs well in most of the setups with the FLIP benchmark data. Strengths: + **Well-motivated and simple approach to use contrastive loss**: Given that the measurement of fitness values is transformed via a monotone nonlinear function, it makes sense to consider ranking fitness values to better learn the underlying fitness function. The approach based on the contrastive loss is thus natural and easy to use. + **The robustness against the nonlinearity of $g$**: The nonlinear model in consideration is in the form of $y = g(f(x))$, where $g$ is a user-specified nonlinearity. Though in Section 3.1 the authors provide the estimation results only with $g = \exp$, they tested the robustness of $g$ in Appendix B.1, where we can see a different $g$ may not affect the estimation results much. Weaknesses: Since I'm not familiar with the field of fitness estimation, my comments are mostly based on the perspective of machine learning researchers who work on contrastive learning and related topics. + **Unclear why we need the nonlinear $g$ in modeling**: The entire paper is based on the nonlinear measurement system $y = g(f(x))$ to incorporate the knowledge of "global" epistasis. However, the necessity of $g$ is unclear because the fitness function $f$ is modeled by a neural network, which is sufficiently capable of capturing the underlying nonlinearity. I'm not sure of the potential benefits of having an additional nonlinearity $g$. 
+ One may argue that $g$ is necessary because this corroborates why the observed fitness values exhibit denseness---as described in the paragraph from l.245, the nonlinearity of $g$ makes fitness values dense even if the values of $f$ are sparse. While it makes sense that the nonlinearity yields the denseness, this only indicates that the nonlinearity is _sufficient_ but not necessary. + **The relationship between the entropy and sparseness might be elusive**: In Section 3.2, the authors explain the trade-off between the sparseness of fitness and epistasis by the fitness-epistasis entropy lower bound in Eq. (4). This explanation might have two concerns. First, this entropy lower bound is derived based on the "local" epistasis model $\mathbf{f} = \mathbf{\Phi}\boldsymbol{\beta}$, but the authors argue the existence of the trade-off for the "global" epistasis, which is a leap in logic. Second, even if we focus on the local epistasis model, fitness $\mathbf{f}$ must be sparse once epistasis $\boldsymbol{\beta}$ is a sparse vector (not in the sense of the entropy but in the sense of the number of nonzero elements). From this observation, the explanation by the entropy sounds a bit unnatural. + (Slightly minor) **Does the evaluation by the correlation coefficients make sense?**: The experimental results are evaluated by the Spearman/Pearson correlation coefficients. While it may look fine at first glance, this evaluation protocol is so advantageous for the contrastive loss that we cannot be sure whether the sparseness/denseness of epistasis is a key factor of the MSE deterioration or not, because the contrastive loss enhances correct ranking and more or less directly optimizes correlation. Technical Quality: 2 Clarity: 2 Questions for Authors: + The difference in the experimental setups of Sections 3.1 and 3.3 is not clear. They should be different to some extent because the former section considers a noise-less setup, and the latter considers an "incomplete" setup. 
In my understanding, their experimental setups are almost identical, including the data-generating processes, whereas the nonlinearity $g$ is modulated in Section 3.3 to see the effect of the sparseness on the estimation quality. And by the "incomplete" observation, the authors probably indicate different sparseness degrees. If this is the case, I feel the wording "incomplete" is a bit misleading. ---- Minor comments: - It is standard to concatenate two people's names with an ndash (--), such as "Bradley--Terry". - At l.82, a space might be missing between "Graph Fourier bases" and the reference numbers. - From l.137, it begins with "Here we propose," but since this section is background, it would be better to separate the authors' approach into another section. - In Figure 2b, why not include the mean and confidence intervals? The current scatter plot makes it a bit hard to grasp the trend. - Section 3.2 seems to be providing background only. Why not move it to the background section? - At l.362, "[...] practical protein engineering settings it **is** also important to consider [...]" Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations of this work in Section 4. I do not see any significant negative societal concerns in this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
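To make the invariance property at the heart of this review concrete, here is a minimal NumPy sketch (an editorial illustration, not the authors' code; `bt_loss` and the toy data are our own names). It shows that the Bradley-Terry loss depends on the labels only through their ordering, so it is unchanged by a strictly monotone $g$ such as $\exp$:

```python
import numpy as np

def bt_loss(scores, labels):
    # Bradley-Terry loss: for every pair where labels[i] > labels[j],
    # penalize -log sigmoid(scores[i] - scores[j]). The labels enter
    # only through pairwise comparisons, i.e. their ordering.
    loss, pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:
                loss += np.log1p(np.exp(-(scores[i] - scores[j])))
                pairs += 1
    return loss / pairs

rng = np.random.default_rng(0)
scores = rng.normal(size=20)  # model predictions
f = rng.normal(size=20)       # latent fitness values
# A strictly monotone transform of the labels (here exp) leaves the BT
# loss unchanged, whereas an MSE against exp(f) would change.
assert np.isclose(bt_loss(scores, f), bt_loss(scores, np.exp(f)))
```

The MSE loss has no such invariance, which is why the shape of $g$ matters for it but not for the BT loss.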
Rebuttal 1: Rebuttal: **Summary**: We thank the reviewer for their thoughtful and helpful comments, especially their recognition that our approach is a simple way to model epistasis without having to make explicit assumptions about the shape of the non-linearity (which is a limitation of current treatments). We address their remaining concerns below and are happy to engage further to resolve questions. **Detailed responses** > Unclear why we need the nonlinear 𝑔 in modeling. Excellent point. The reason is interpretability and performance. The nonlinearity shows up ubiquitously in fitness functions and, when not accounted for, makes true epistatic interactions hard to infer. “Removing” global epistasis by modeling $g$ enables one to infer the true local epistatic interactions by applying the Graph Fourier Transform (GFT) to the fitness function without $g$. These interactions can be important to know because they often represent close physical contacts and thus may reveal insights into the relationships between the protein's fitness and its structure or dynamics. One of our contributions is to highlight that if one were to apply the GFT without removing the effect of the nonlinearity $g$, then one would observe spurious interactions due to the global nonlinearity, making it difficult to determine which interactions are real or spurious. Modeling the fitness function with a generic neural network does not offer this advantage, as is clear from our MSE results. Further, we see performance gains from modeling the nonlinearity $g$. Modeling $g$ is a type of inductive bias added to the model, and our empirical benchmarking results demonstrate that this inductive bias provides performance improvements on a variety of practical tasks. > First, this entropy lower bound is derived based on the "local" epistasis model 𝑓=𝛷𝛽, but the authors argue the existence of the trade-off for the "global" epistasis, which is a leap in logic.
If we understand the reviewer correctly, there may be a slight miscommunication in the interpretation of the equation $f=\Phi \beta$. This equation defines the GFT, which is not a “model” in the regular sense because it is a complete representation of any fitness function (analogous to how the standard Fourier transform is a complete representation of a continuous function). The coefficients $\beta$ in the GFT represent local epistatic interactions; however, the GFT can be applied to any fitness function, even if it has been affected by global epistasis. In other words, $\beta = \Phi^T g(f)$ is a valid transformation that produces a set of coefficients. Many of these coefficients will be spuriously nonzero because they result from the application of the nonlinearity $g$ and are not true local epistatic interactions that would be observed in $f$ itself. So there is no leap in logic: it is mathematically and conceptually valid to apply the GFT to a fitness function affected by global epistasis and to analyze the resulting coefficients (e.g. by calculating their entropy, as is done in the uncertainty principle that we present). > Second, even if we focus on the local epistasis model, fitness 𝑓 must be sparse once epistasis 𝛽 is a sparse vector It is not true that a fitness function $f$ must be sparse if the coefficients $\beta$ are sparse. A simple counterexample to this assertion is the case where the first-order coefficients in $\beta$ are unique and nonzero but all other coefficients are exactly zero. In this case, the fitness function $f$ is nonzero for every sequence (i.e. it is as dense as possible in the sense of sparsity), while the coefficients are very sparse, with only $Lq$ out of $q^L$ possible coefficients being nonzero. > Does the evaluation by the correlation coefficients make sense?: We share the view that there is an inherent advantage in picking a rank-based coefficient to evaluate a contrastive loss training scheme.
However, we didn’t pick this just to serve our purposes; Spearman correlation is the standard metric in this field for a reason (improving sequences). We also intentionally included the top 10% Recall as an additional metric to remedy this concern. As shown in Table 3, the BT loss remains competitive for this metric. > Minor comments: Thank you, we will correct these errors in the camera-ready version of the manuscript. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer NoUX Comment: I appreciate the authors for dedicatedly addressing the review comments. In my initial review comments, I had several misconceptions about the relationship between the GFT and the nonlinear global epistasis model. Thanks to the authors’ clarifications, I corrected my understanding. Now the trade-off between the sparsity of $f$ and $\beta$ presented in Section 3.2 makes sense to me. The application of the nonlinearity also makes some sense to me now. It looks like an interesting application, which is based on the combination of multiple technical devices including the GFT, the BT model, and nonlinear modeling. Although there are several parts where the presentation can be made better (in my minor comments, I mentioned that the background, method, and experiment sections are oftentimes mixed up), I raise my evaluation from 4 -> 5 given the interesting combination. --- Reply to Comment 1.1.1: Title: Thank you Comment: We sincerely thank the reviewer for their consideration. Is there anything more we could do to further improve the referee's view of our paper?
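The sparse-$\beta$/dense-$f$ counterexample given in the rebuttal above can be checked numerically. A sketch under the assumption of a binary alphabet ($q=2$), where the GFT basis $\Phi$ is the Walsh-Hadamard matrix (variable names are ours, not the authors'):

```python
import numpy as np
from functools import reduce

L = 3  # binary sequences of length L, so q = 2 and q**L = 8 genotypes
# Walsh-Hadamard matrix: the Graph Fourier basis for the Boolean hypercube
Phi = reduce(np.kron, [np.array([[1, 1], [1, -1]])] * L)

beta = np.zeros(2**L)
# Only the first-order coefficients (basis indices with a single bit set)
# are nonzero, with distinct powers of two chosen to avoid cancellation.
beta[[1, 2, 4]] = [1.0, 2.0, 4.0]

f = Phi @ beta
assert np.count_nonzero(beta) == L    # epistatic coefficients: sparse
assert np.count_nonzero(f) == 2**L    # fitness values: fully dense
```

Every fitness value is of the form $\pm 1 \pm 2 \pm 4$, so none can vanish: a maximally sparse set of first-order coefficients still yields a maximally dense fitness function.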
Summary: The paper proposes using contrastive loss, specifically the Bradley-Terry (BT) loss, as an alternative to Mean Squared Error (MSE) for training global epistasis models in fitness prediction tasks. Experiments conducted on both complete and incomplete datasets, as well as on FLIP benchmark tasks, show that the BT loss improves performance in fitness prediction. Strengths: - The authors propose the use of the Bradley-Terry loss and demonstrate its effectiveness over the MSE loss for modeling fitness functions. - The explanation provided for the BT loss being invariant to monotonic transformations of observed values aligns with the assumptions in global epistasis models, which could be beneficial for fitness prediction and other biological applications. Weaknesses: - There is a lack of comparison with other standard fitness prediction tasks: For fitness prediction tasks, there are several standard experiments not performed, such as fold classification, Gene Ontology (GO), Enzyme Commission (EC) prediction, fluorescence, localization prediction, and stability prediction. Referencing benchmarks such as TAPE[1], DeepLoc[2], and the dataset in DeepFRI[3] would strengthen the evaluation. - There is a lack of comparison with other standard fitness prediction models: The baseline models are restricted to CNNs. It would be beneficial to include protein language models (e.g., ESM[4], xTrimoPGLM[5]) and potentially structure-based models (e.g., ESM-GearNet[6]), as well as comparisons with other epistasis-aware networks. [1] Rao, R., Bhattacharya, N., Thomas, N., Duan, Y., Chen, P., Canny, J., ... & Song, Y. (2019). Evaluating protein transfer learning with TAPE. *Advances in neural information processing systems*, *32*. [2] Almagro Armenteros, José Juan, et al. "DeepLoc: prediction of protein subcellular localization using deep learning." *Bioinformatics* 33.21 (2017): 3387-3395. [3] Gligorijević, Vladimir, et al.
"Structure-based protein function prediction using graph convolutional networks." *Nature communications* 12.1 (2021): 3168. [4] Lin, Zeming, et al. "Evolutionary-scale prediction of atomic-level protein structure with a language model." *Science* 379.6637 (2023): 1123-1130. [5] Chen, Bo, et al. "xTrimoPGLM: unified 100B-scale pre-trained transformer for deciphering the language of protein." *arXiv preprint arXiv:2401.06199* (2024). [6] Zhang, Zuobai, et al. "A systematic study of joint representation learning on protein sequences and structures." *arXiv preprint arXiv:2303.06275* (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Typically, contrastive loss involves training the model to distinguish between similar and dissimilar pairs of data points. The Bradley-Terry (BT) loss described in the paper appears more like a regression loss. Is this interpretation correct? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The experimental evaluation is limited, as it only includes CNN baselines and FLIP dataset. The impact and novelty of the work could be limited without more thorough experimental results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Summary**: We are deeply confused by the main criticism. Our focus is on supervised regression of fitness functions, and Gene Ontology, Enzyme Commission, and fold prediction are not fitness regression tasks. Rather, they are classification tasks on attributes of sequences mostly unrelated to fitness that would require different modeling strategies to approach (akin to asking a human height regressor to predict what car a person drives or what food they have allergies to). The application of the global epistasis framework to these tasks is inappropriate regardless of any modeling choices. The suggested comparison methods that do transfer to our setting are largely zero-shot methods, not supervised fitness prediction methods. Further, fine-tuned protein language models are included in the FLIP benchmark, so our results can already be compared to these models (and the comparison is favorable). We emphasize that our goal is not just to make a SOTA model, but to enable a flexible, interpretable, and general approach to modeling global epistasis. In situations where global epistasis is less relevant, we don’t expect a performance advantage (though it is pervasive in fitness functions, hence the relevance of our theoretical exposition). **Detailed Response** > For fitness prediction tasks, there are several standard experiments not performed, such as fold classification, Gene Ontology (GO), Enzyme Commission (EC) prediction, fluorescence, localization prediction, and stability prediction. Referencing benchmarks such as TAPE[1], DeepLoc[2], and dataset in DeepFRI[3] would strengthen the evaluation. Most of the evaluations suggested by the reviewer, such as fold classification, GO prediction (as in DeepFRI), EC prediction, and localization prediction (as in DeepLoc), are classification tasks and thus are not relevant to our paper.
Furthermore, the TAPE (Tasks Assessing Protein Embeddings) benchmark is explicitly designed to test embeddings from large protein models; since we do not propose a method that produces embeddings, this is also not relevant to our paper. Additionally, the reviewer suggests that we test on stability prediction, which we have already done; the results can be seen in the Thermostability section of Table 3.1 in our main paper. The only other task suggested by the reviewer to add to our paper is fluorescence prediction. We have now benchmarked the Bradley-Terry loss against the MSE loss in a fluorescence prediction task using the GFP_AEQVI_Sarkisyan_2016 dataset from ProteinGym. These results can be seen in our top-level rebuttal and corroborate our main findings. > It would be beneficial to include protein language models (e.g., ESM[4], xTrimoPGLM[5]) and potentially structure-based models (e.g., ESM-GearNet[6]), as well as comparisons with other epistasis-aware networks We are unsure if the reviewer is referring to fine-tuned language model predictions or zero-shot predictions from pretrained models. Only fine-tuning is relevant to our paper, since our goal is to test a supervised loss function, but the three papers referenced by the reviewer are primarily focused on pre-training or protein structure prediction and thus are not relevant to our scenario. Notably, the FLIP benchmark paper contains results for fine-tuned versions of ESM that can be compared directly to our results. In all but 2 of the AAV and GB1 tasks, our CNN trained with the Bradley-Terry loss outperforms the fine-tuned ESM models. The fine-tuned models outperform non-pretrained models on the thermostability tasks, which is a conclusion that is already mentioned in the FLIP paper. We will highlight this comparison in the camera-ready version of the paper. > Typically, contrastive loss involves training the model to distinguish between similar and dissimilar pairs of data points.
The Bradley-Terry (BT) loss described in the paper appears more like a regression loss. Is this interpretation correct? We thank the reviewer for the opportunity to clarify this. We followed Chan et al. [1] in referring to the Bradley-Terry loss as a contrastive loss, and generally use contrastive loss and ranking loss interchangeably in our paper. Contrastive losses do indeed generally refer to losses applied to embeddings; however, the mathematical forms of these losses are often similar or identical to ranking loss functions for regression. For instance, one of the earliest contrastive loss functions is that proposed by Hadsell et al. [2], which applies the same mathematical loss as the margin ranking loss to dissimilar points. We will include this reference and clarify this in the camera-ready version of the paper. [1] Chan, A. et al. Deep Extrapolation for Attribute-Enhanced Generation. Adv. Neural Inf. Process. Syst. 34, 14084–14096 (2021) [2] Hadsell, R., Chopra, S., & LeCun, Y. Dimensionality Reduction by Learning an Invariant Mapping. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2, 1735–1742 (2006) --- Rebuttal Comment 1.1: Comment: Thank you for your thorough responses. In my initial review, I misunderstood the concept of "fitness prediction". With the authors' clarifications, I've revised my understanding. I now recognize that the paper aims to offer a "flexible, interpretable, and general approach to modeling global epistasis", as stated by the authors. Regarding the second weakness, my concern was specifically about the experiments involving the use of BT loss to fine-tune pretrained models. Do you believe that BT loss could be effectively transferred across different models? If so, further testing of the loss function's application across various models could enhance the robustness of the findings.
The authors also provided additional theoretical proofs and conducted further experiments using two datasets from ProteinGym. In light of these improvements, I am raising my score from 3 to 5. --- Reply to Comment 1.1.1: Title: Thank you Comment: We sincerely thank the reviewer for considering our arguments. >> Regarding the second weakness, my concern was specifically about the experiments involving the use of BT loss to fine-tune pretrained models. Do you believe that BT loss could be effectively transferred across different models? If so, further testing of the loss function's application across various models could enhance the robustness of the findings. We agree with the referee that for fitness prediction tasks this is possible (though it still requires supervised fine-tuning). We are happy to add a fine-tuned model to our camera-ready version of the paper.
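The connection drawn in the exchange above between contrastive losses on embeddings and ranking losses for regression can be made concrete with a small sketch (our own illustration, with hypothetical helper names): both the BT penalty and the margin ranking penalty of Hadsell et al.'s contrastive loss act on a pairwise score difference, smoothly in one case and via a hinge in the other:

```python
import numpy as np

def bt_pair(s_i, s_j):
    # Bradley-Terry penalty for a pair where item i should outrank item j:
    # -log sigmoid(s_i - s_j), a smooth function of the score difference.
    return np.log1p(np.exp(-(s_i - s_j)))

def margin_pair(s_i, s_j, margin=1.0):
    # Margin ranking penalty (the hinge form applied to dissimilar pairs
    # in Hadsell et al.'s contrastive loss): zero once the gap is wide.
    return max(0.0, margin - (s_i - s_j))

# Both penalties depend only on the pairwise score difference.
assert bt_pair(3.0, 0.0) < bt_pair(0.5, 0.0)  # larger gap, smaller penalty
assert margin_pair(3.0, 0.0) == 0.0           # gap beyond the margin
assert margin_pair(0.5, 0.0) == 0.5           # gap inside the margin
```

This is why the paper's usage of "contrastive loss" and "ranking loss" as interchangeable terms is mathematically reasonable even though the former is usually stated for embeddings.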
Summary: The authors study global epistasis models which are used to understand the fitness landscapes of biological sequences. They start with the observation that global epistasis is observed in real-world systems as a monotonic non-linear transform of an underlying fitness function on a genotype/sequence. The fitness function can have varying levels of local epistasis. They then introduce the Bradley-Terry (BT) loss, a "contrastive loss" that penalizes model prediction ordering, as an alternative to MSE as the loss function when training a model to predict f when observables are of the form y = g(f(x)), where g is some unknown monotonic non-linearity and f is the fitness function. In a series of simulations and on real protein fitness data they show that they can recover f better when using the BT loss in place of MSE, and suggest this may be due to the distribution of the label on the domain of the underlying non-linearity. Strengths: The paper is well-written and fairly easy to follow, although it is somewhat outside my area of expertise. The problem is relevant to a lot of work in computational biology and so its significance is high. Figures are nice, experiments are presented and analyzed objectively. Overall a nice paper to read. Weaknesses: Probably the largest weakness is that the authors don't provide a theoretical analysis that explains the difference in results between the loss functions in a concrete way, taking into account the conditions under which we could expect these results to hold. The analogy to compressed sensing is interesting but ultimately relies on the authors to conjecture about the causes. In particular there isn't a conclusive assessment of the effect of noise since many of the results rely on lossless simulation (although the authors do experiment with the FLIP benchmark).
I think the paper could be improved with the addition of some rigorous analysis that links the hypothesis class of $g$ to its learnability under different loss functions. It's also maybe not surprising that the model trained on BT-loss would do better as measured by Spearman correlations, given that both are measures of rank order. Technical Quality: 3 Clarity: 3 Questions for Authors: How do the resultant models (trained on MSE vs BT) compare in terms of MSE on test data? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors cover limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Summary**: We thank the reviewer for thoughtful and constructive feedback. As mainly bioML practitioners, we are bringing this work to NeurIPS partially to spawn discussion on theoretical connections. So far our efforts to create stronger theoretical connections have not yielded results, and the theoretical problem may be hard to make progress on today. However, we think our conjectures and empirical results can generate interest in the broader field to develop upon. We have added one theoretical result to our paper during the review period (see attachment). We’d also like to emphasize that we do test the effects of noise on our results (see Appendix D.2 and E) and that we test additional metrics, namely top 10% recall (key for engineering tasks), in addition to Spearman. **Detailed response** > Probably the largest weakness is that the authors don't provide a theoretical analysis that explains the difference in results between the loss functions in a concrete way, We agree that a more complete theoretical understanding of the reasons why the Bradley-Terry loss is more robust to the density of fitness functions than the MSE loss would be a helpful addition to our work, as we mentioned in the Discussion section. Such a result has been elusive, largely because the minimum of the Bradley-Terry loss is difficult to characterize (it is the stationary state of a time-inhomogeneous Markov Chain [1]). However, we believe that our empirical results provide a compelling case for our interpretation that can motivate future theoretical work. We have bolstered the theoretical justification in another way, however. In particular, we have now shown a sufficient condition for a nonlinearity to reduce the entropy of a fitness function (which will increase the entropy in the epistatic domain if the reduction is sufficiently large due to the fitness-epistasis uncertainty principle).
We have attached a PDF showing a proof of this result (note that the lemma proofs do not fit onto one page, but will be included in the Appendix of the camera-ready version of the paper). This result expands our understanding of the type of nonlinearity that can be expected to cause a decrease in entropy in the fitness domain, and a resulting increase in entropy in the epistatic domain. > In particular there isn't a conclusive assessment of the effect of noise since many of the results rely on lossless simulation (although the authors do experiment with the FLIP benchmark). We provide additional experiments in Appendix D.2 and E that test the Bradley-Terry loss under a variety of noise conditions. These results demonstrate that the BT loss is surprisingly robust to noise, even in the deliberately pathological scenario tested in Appendix E. > It's also maybe not surprising that the model trained on BT-loss would do better as measured by Spearman correlations, given that both are measures of rank order. Spearman correlation is a standard in the field of fitness prediction and is the primary metric reported by benchmarks such as FLIP, so we use it as one of our metrics. However, we agree that there is an inherent advantage in using this metric to evaluate contrastive losses. For that reason, we also report the top 10% recall in our FLIP benchmark results. As can be seen in Table 3, we observe improved performance on this metric using the Bradley-Terry loss. The top 10% recall is not a direct measure of rank order, so we hope this result will remedy this concern of the reviewer. > How do the resultant models (trained on MSE vs BT) compare in terms of MSE on test data? The Bradley-Terry loss will change the scale of predictions compared to the training data, so the MSE for models trained with the Bradley-Terry loss is always higher than models trained with the MSE loss.
For this reason, we report Pearson correlation in our simulation results rather than MSE where appropriate. Spearman correlation is generally preferred over Pearson in the field of protein engineering because the aim is to design new sequences with improved performance (i.e. rank sequences over previously observed sequences), rather than designing for particular fitness values. [1] Maystre, L. & Grossglauser, M. Fast and Accurate Inference of Plackett–Luce Models. Adv. Neural Inf. Process. Syst. 28 (2015) --- Rebuttal Comment 1.1: Comment: The authors have made a good faith effort to address most of my comments. I think the extended analysis and theory is also worth including in the final version. I have adjusted my score.
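The top 10% recall metric discussed in this thread is easy to state precisely. A minimal sketch (our own illustration of the standard metric; `top_frac_recall` is a hypothetical name, not from the paper):

```python
import numpy as np

def top_frac_recall(pred, true, frac=0.10):
    # Fraction of the true top-`frac` items that appear among the
    # predicted top-`frac` items. Unlike Spearman correlation, it is
    # not a direct measure of global rank order.
    k = max(1, int(len(true) * frac))
    top_true = set(np.argsort(true)[-k:])
    top_pred = set(np.argsort(pred)[-k:])
    return len(top_true & top_pred) / k

rng = np.random.default_rng(1)
true = rng.normal(size=100)
assert top_frac_recall(true, true) == 1.0           # perfect predictor
assert top_frac_recall(np.exp(true), true) == 1.0   # monotone rescaling ok
```

Note that, like the BT loss itself, the metric is unaffected by a monotone rescaling of predictions, which is consistent with reporting it alongside Spearman correlation rather than MSE.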
Summary: This paper addresses the problem setting of modeling and extracting the sparse interactions found in global epistasis, and proposes that the Bradley-Terry loss -- a loss used in ranking problems -- is a good alternative to MSE for ranking prediction in such settings, as it does not require assumptions on the form of the non-linearity. The authors use the entropic uncertainty principle from physics and information theory to prove that observed fitness functions cannot always be represented by sparse locally epistatic interactions, due to the presence of global epistasis; since the sparsity of a fitness function determines how many measurements are necessary to estimate the fitness function in MSE settings, this makes the Bradley-Terry loss more performant under globally epistatic settings. This is also shown empirically; in experiments, the authors examine the utility of using BT loss instead of MSE in (1) simulated data where all fitness values are available, (2) simulated data where fitness labels are only partially observed, and (3) real world benchmarks (i.e. FLIP). Strengths: * Modeling global epistasis is important in real-world settings where computational predictions are used to determine mutations in directed evolution campaigns. The usage of the Bradley-Terry loss seems to produce better empirical results on both simulated and real-world data, which is promising. * Good investigation into the theoretical underpinnings to explain why BT outperforms MSE -- I personally think this is a good example for BioML papers, which often do not delve into observed trends, and should be encouraged in a conference such as NeurIPS. * Paper is well written overall, with motivations, references, and limitations clearly denoted.
Weaknesses: * While the method does achieve SOTA on FLIP benchmarks, FLIP datasets are rather limited in how they approximate the real world -- diversity within the GB1 dataset is still rather locally epistatic Minor: * The abbreviation "CS" is used a lot in the later sections of the paper ("CS scaling laws", "CS techniques such as LASSO"); it would be good to expand this to reduce confusion. Technical Quality: 3 Clarity: 4 Questions for Authors: n/a Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Summary**: We thank the reviewer for the positive and constructive feedback, especially their recognition that our work is in an area of bioML that, while not trend-following, would benefit the NeurIPS community in cross-pollinating theoretical and practical advances. Given that we are focused on multi-mutant epistasis, we think that FLIP is the closest benchmark for the real-world application of protein engineering, as it contains 3 distinct prediction tasks with multi-mutants of up to 29 edits. However, we have attempted to address the concern on empirical validation by adding the two most relevant datasets from ProteinGym to our evaluations. **Detailed response** > FLIP datasets are rather limited in how they approximate the real-world We chose to use FLIP because we think it is most relevant for practical protein engineering. For instance, the largest dataset in FLIP is from an AAV design task, which is a protein with therapeutic applications that is actively designed with machine learning in both academia and industry. The AAV data used in FLIP was collected using standard techniques in the field (see [1, 2] for additional examples of datasets collected using these techniques). The FLIP AAV data thus represents a highly practical scenario. > diversity within the GB1 dataset is still rather locally epistatic Indeed, GB1 contains a large amount of local epistasis because the four positions that are mutated are in physical proximity in the native structure of GB1. However, local epistasis does not preclude the possibility of global epistasis (see [3]). Our results indicate that it can still be useful to separate these two sources of epistasis even when a large amount of local epistasis is present. Also, modeling a small number of physically proximal positions is a practical scenario that would occur if one were, for instance, to try to redesign an enzyme active site.
Therefore, the GB1 tasks represent a useful and realistic protein fitness modeling scenario. It’s important to emphasize that there is widespread epistasis in the AAV dataset, with many mutants having 10-30 changes to their protein, and our performance there remains high. > The abbreviation "CS" is used a lot in the later sections of the paper ("CS scaling laws", "CS techniques such as LASSO"); would be good to expand this to reduce confusion. We will fix this, thank you. [1] Zhu, D. et al. Optimal trade-off control in machine learning–based library design, with application to adeno-associated virus (AAV) for gene therapy. Sci. Adv. 10, eadj3786 (2024). [2] Ogden, Pierce J., et al. "Comprehensive AAV capsid fitness landscape reveals a viral gene and enables machine-guided design." Science 366.6469 (2019): 1139-1143. [3] Otwinowski, Jakub, David M. McCandlish, and Joshua B. Plotkin. "Inferring the shape of global epistasis." Proceedings of the National Academy of Sciences 115.32 (2018): E7550-E7558.
Rebuttal 1: Rebuttal: We thank the reviewers for providing thoughtful comments. We are particularly encouraged by reviewer iMMK’s view that this paper would offer a somewhat unconventional, but valuable, perspective to the NeurIPS bioML community. We also recognize that multiple reviewers questioned the sufficiency of our evaluation tasks. In our responses, we defend that FLIP, a recent NeurIPS-accepted benchmark-track paper, is the most relevant choice to test our hypotheses. We would also like to emphasize that demonstrating performance gains on benchmarks is not the only axis along which we make a contribution, as we have also demonstrated that the Bradley-Terry loss is a flexible tool to model global epistasis (and thus recover true local epistatic interactions), and have suggested intriguing theoretical connections. To address common concerns, we have made the following concrete changes to the paper in response to the feedback: * We have benchmarked the Bradley-Terry loss vs. the MSE loss on the two most relevant datasets (due to a high number of multi-mutants) from the ProteinGym benchmark: CAPSD_AAV2S_Sinai_2021 and GFP_AEQVI_Sarkisyan_2016 (which is a fluorescence dataset). We tested uniform random splits of these datasets and used the same protocols as described in the main text.
Test set results for these datasets corroborate our main findings; they are shown in the tables below and will be added to the camera-ready version of the paper:

**Spearman correlation**

| Dataset | Bradley-Terry | MSE |
| -------- | ------- | --------- |
| CAPSD_AAV2S_Sinai_2021 | $\textbf{0.920} \pm \textbf{0.003}$ | $0.912 \pm 0.003$ |
| GFP_AEQVI_Sarkisyan_2016 | $\textbf{0.873} \pm \textbf{0.001}$ | $0.867 \pm 0.002$ |

**Top 10% recall**

| Dataset | Bradley-Terry | MSE |
| -------- | ------- | --------- |
| CAPSD_AAV2S_Sinai_2021 | $0.915 \pm 0.001$ | $0.915 \pm 0.001$ |
| GFP_AEQVI_Sarkisyan_2016 | $\textbf{0.945} \pm \textbf{0.003}$ | $0.938 \pm 0.003$ |

* We have also added additional theoretical support to our paper, by proving a sufficient condition under which a nonlinearity will reduce the entropy of the fitness function in the fitness domain. The proof of this result is attached. In addition, we will make a number of changes to the camera-ready version of the manuscript in response to reviewer feedback, including adding citations and highlighting results in previous papers. Pdf: /pdf/c91f2bb9f74ff0ac3392df1a9b2fd40aa1aa2b40.pdf
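The fitness-epistasis uncertainty bound invoked throughout this discussion can also be checked numerically. A sketch for the binary-alphabet case, where the normalized Walsh-Hadamard matrix serves as the orthonormal GFT basis (a generic entropic uncertainty bound of the kind the paper's Eq. (4) invokes; entropies in bits, so the lower bound is $L$; names are ours):

```python
import numpy as np
from functools import reduce

def entropy_bits(v):
    # Shannon entropy (bits) of the normalized squared magnitudes of v
    p = v**2 / np.sum(v**2)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

L = 6
# Orthonormal Walsh-Hadamard basis for binary sequences of length L;
# every entry has magnitude 1/sqrt(2**L)
Phi = reduce(np.kron, [np.array([[1, 1], [1, -1]])] * L) / np.sqrt(2**L)

rng = np.random.default_rng(2)
f = rng.normal(size=2**L)  # an arbitrary fitness function
beta = Phi.T @ f           # its GFT (epistatic) coefficients

# Entropic uncertainty for an orthonormal basis with flat-magnitude
# entries: the fitness and epistatic entropies cannot both be small.
assert entropy_bits(f) + entropy_bits(beta) >= L
```

This is the quantitative sense in which a nonlinearity that concentrates (reduces the entropy of) the fitness values must spread entropy into the epistatic domain.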
NeurIPS_2024_submissions_huggingface
2024
Summary: This study assumes a global epistasis model, where the observed experimental data represents a monotonic nonlinear transformation of underlying fitness, and explores the problem of learning these underlying fitness functions. Instead of directly learning fitness functions using Mean Squared Error (MSE) loss, the authors propose using ranking-based loss, such as Bradley-Terry (BT) loss, to learn fitness functions indirectly. The experiments in this paper demonstrate that ranking-based loss outperforms MSE loss. Strengths: 1. The basic idea is clear and well-motivated. 2. The ablation experiment demonstrating that BT loss outperforms MSE loss is convincing. Weaknesses: 1. Several previous methods account for high-order interactions, including MSA-based methods like EVE[1] and MSA-Transformer[2], and language model-based methods such as Tranception[3] and esm1v[4]. The authors should include these methods for comparison. 2. ProteinGym[5] is a more comprehensive benchmark dataset widely used in the literature. Please also include an evaluation on this dataset. I will be glad to raise my score if more solid comparisons can be provided. [1] Genome-wide prediction of disease variant effects with a deep protein language model. https://doi.org/10.1038/s41588-023-01465-0 [2] MSA Transformer. https://proceedings.mlr.press/v139/rao21a.html [3] Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. https://proceedings.mlr.press/v162/notin22a [4] Language models enable zero-shot prediction of the effects of mutations on protein function. https://openreview.net/forum?id=uXc42E9ZPFs [5] ProteinGym: Large-Scale Benchmarks for Protein Fitness Prediction and Design. https://papers.nips.cc/paper_files/paper/2023/hash/cac723e5ff29f65e3fcbb0739ae91bee-Abstract-Datasets_and_Benchmarks.html Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Please refer to the Weaknesses section 2.
AlphaMissense[1] should be included in the Related Work section, as it also incorporates a contrastive-like loss in its training. [1] Accurate proteome-wide missense variant effect prediction with AlphaMissense. https://doi.org/10.1126/science.adg7492 Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: I have no concerns about the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
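For concreteness, the MSE-versus-ranking distinction at the centre of this discussion can be sketched as follows (our own minimal example of a pairwise Bradley-Terry loss; the function names and exact formulation are assumptions, not the submission's code):

```python
import numpy as np

def mse_loss(pred, y):
    """Standard regression loss; sensitive to the scale of the labels."""
    return np.mean((pred - y) ** 2)

def bradley_terry_loss(pred, y):
    """Pairwise ranking loss: for every pair with y_i > y_j, penalise
    -log sigmoid(pred_i - pred_j).  It depends on y only through its
    ordering, so any monotone transform of the labels leaves it unchanged."""
    diff = pred[:, None] - pred[None, :]   # pred_i - pred_j for all pairs
    better = y[:, None] > y[None, :]       # pairs where y_i outranks y_j
    return np.mean(np.log1p(np.exp(-diff[better])))  # -log sigmoid(diff)
```

Because `bradley_terry_loss` sees the labels only through their ordering, a monotone (global-epistasis-style) transform such as `y ** 3` on positive labels leaves it unchanged, while the MSE changes.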
Rebuttal 1: Rebuttal: **Summary**: We thank the reviewer for their comments. The reviewer’s ask to compare our argument for a generalized *supervised* loss function with unsupervised models suggests that we have miscommunicated the point of the paper. This paper is aimed at providing a straightforward, flexible way of performing supervised regression of fitness functions while improving biological interpretability. Most of the suggested comparison models are zero-shot/unsupervised models. The most applicable benchmarking datasets for our applications are multi-mutant datasets that have ample measurements with epistatic interactions, a common scenario in protein design. As such, FLIP is the right benchmark (it also does include results for fine-tuned language models such as ESM-1v). ProteinGym is largely made of single mutant assays (for variant effect prediction), which are not a good benchmark for our method. To address the reviewer’s concern on this front, we have added the two most relevant datasets from ProteinGym, and our results hold (see above). We will write down this distinction in the paper’s text to alleviate this miscommunication. **Detailed response** > Several previous methods account for high-order interactions, including MSA-based methods like EVE[1] and MSA-Transformer[2], and language model-based methods such as Tranception[3] and esm1v[4]. We want to emphasize that the primary goal of our paper is to propose a new loss function for supervised regression of fitness functions and demonstrate its usefulness and its connection to theoretical intuition. Our experiments support this conclusion by comparing loss functions with a fixed model architecture. It would not support the conclusions of the paper to compare to the four unsupervised methods, since this would not inform our understanding of supervised loss functions. 
Those methods also do not enable a straightforward, uncorrupted inference of epistatic factors or relax assumptions w.r.t. the shape of the non-linearity (on the contrary, they obfuscate them). Finally, comparisons between unsupervised and supervised models for fitness prediction have already been done in both the FLIP and ProteinGym papers; an additional such comparison in our paper would not add to the field. We also note that fine-tuned ESM-1v models are benchmarked in the FLIP paper, and these results can be compared directly to our results on the FLIP benchmark. In all but 2 of the AAV and GB1 tasks, our CNN trained with the Bradley-Terry loss outperforms the fine-tuned ESM-1v models. The fine-tuned models outperform non-pretrained models on the thermostability tasks, which is a conclusion that is already mentioned in the FLIP paper. We will highlight this comparison in the camera-ready version of the paper. > ProteinGym[5] is a more comprehensive benchmark dataset widely used in the literature. Please also include the evaluation of this dataset. We understand that from a distance ProteinGym seems like an appropriate benchmark. We chose to use FLIP instead because it is more relevant to the protein engineering applications that we are interested in, as discussed below. To address this concern, however, we have pulled the two most relevant datasets from ProteinGym, and show that our results are unaffected. The train/test splits in ProteinGym are primarily geared towards variant effect prediction (at low mutational distance) and less relevant for practical protein engineering than those in FLIP, where mutational distance is high. Many of the train and test sets in ProteinGym contain only single mutations, which is an uncommon situation in practical protein engineering.
Additionally, the curated splits in ProteinGym, where certain positions (“modulo”) or segments (‘contiguous’) are held out from training, do not represent typical protein engineering scenarios and are inappropriate for the argument in this paper. In contrast, the FLIP tasks represent quite common scenarios that we are specifically interested in. For instance, the Low-vs-High tasks attempt to predict high fitness variants from only low fitness variants; this is a common situation in practice since most practical protein engineering attempts to increase fitness. Other examples are the X-vs-rest FLIP tasks, which attempt to predict the fitness of high edit distance variants from only low edit distance variants. This is another situation that commonly occurs in practice, where one wishes to design higher edit distance sequences than have been observed previously (see, e.g., [1]). We do not believe that comparing loss functions on the less relevant ProteinGym tasks would add to the results of the paper. Also, the FLIP datasets span a comprehensive range of types of protein fitness data. The GB1 set contains “deep” data for all possible mutations at 4 positions, while the AAV set contains “wide” data for a region of 27 amino acids, and the Thermostability set contains data for sequences from distinct protein families. It is not clear to us that there are common scenarios missing from this benchmark that would necessitate evaluation on a larger benchmark to support the conclusions of the paper. Finally, there are 283 datasets in ProteinGym; comparing the MSE and Bradley-Terry losses across all of these datasets with 10 replicates would require training over 5,000 models. This is an infeasible task in this short review period. However, we have chosen two ProteinGym datasets that contain multi-mutants to test our conclusions on: CAPSD_AAV2S_Sinai_2021 and GFP_AEQVI_Sarkisyan_2016. These results, which corroborate our main findings, can be seen in our top-level Rebuttal. 
> AlphaMissense[1] should be included in the Related Work section, as it also incorporates a contrastive-like loss in its training. Thank you, we will add this reference to the camera-ready version paper. [1] Bryant, D. H. et al. Deep diversification of an AAV capsid protein by machine learning. Nat Biotechnol 1–6 (2021) --- Rebuttal Comment 1.1: Title: Follow up Comment: We'd like to follow up to see if the reviewer had a chance to reflect on our argument about the difference in zero-shot and supervised prediction tasks, as well as the fact that many of the tasks in ProteinGym would be inappropriate for our method (we added two datasets from proteinGym that are appropriate). Please let us know if we can further clarify. --- Rebuttal Comment 1.2: Comment: Thank you for your response. My concerns have been partially addressed, and I slightly raised my score. Good luck.
Rethinking Optimal Transport in Offline Reinforcement Learning
Accept (poster)
Summary: This paper provided a novel view of offline RL, which is roughly maximizing return while keeping close to the data, as an OT problem. The key contribution is demonstrating that partial OT can effectively address the challenge of stitching—a fundamental issue in offline RL—both theoretically and empirically. Strengths: 1. The formulation of offline RL as OT from the stationary state distribution to the dataset distribution induced by some policy is promising. 2. The formalisation as OT is made convincing by presenting, in a principled way, how partial OT handles the stitching problem (the hyperparameter w controls where the algorithm sits between BC and naive value maximization). 3. The main claim—that partial OT effectively handles stitching—is supported by experimental results, particularly in maze environments known to require stitching. Weaknesses: In your method, stitching is done by dynamic programming, i.e., application of the Bellman operator, and your contribution regarding stitching is introducing partial OT to let it work by alleviating the matching restriction. At a high level, this is what existing regularization methods like TD3+BC (BC regularization) or ReBRAC (KL regularization) are doing. While this is not a weakness, the paper could be strengthened by providing a clearer explanation of the advantages of OT compared to BC or KL regularization, which is partially addressed in the experiment section. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you explain the theoretical/intuitive benefit of OT in offline RL compared with other techniques that keep the policy close to the data, such as behavior cloning or KL regularization? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for a positive assessment of the paper! Your valuable feedback will help us improve the manuscript! Please find below the answers to your questions. **Q1: In your method, stitching is done by dynamic programming, i.e., application of the Bellman operator, and your contribution regarding stitching is introducing partial OT to allow it to work by alleviating the matching restriction. At a high level, this is what existing regularization methods like TD3+BC (BC regularization) or ReBRAC (KL regularization) are doing. While this is not a weakness, the paper could be strengthened by providing a clearer explanation of the advantages of OT compared to BC or KL regularization, which is partially addressed in the experiment section. Could you explain the theoretical/intuitive benefit of OT in offline RL compared with other techniques that keep the policy close to the data, such as behavior cloning or KL regularization?** Compared to BC/KL, OT offers more flexibility while including many building blocks to properly account for the geometry of the problem. For example:

- BC/KL regularization matches the policy to the data distribution at a pointwise level, which can lead to suboptimal policy learning in scenarios where exact matching is not feasible or necessary. In OT, however, different constraints and regularizations can be used; the partial alignment proposed in our paper is a clear example.
- OT methods allow the choice of arbitrary cost functions that take the domain of the problem into account. For example, we proposed a Q-function-based cost function for the RL domain. In comparison, BC/KL do not inherently consider the transformation costs between distributions.

From a more general perspective, considering RL as OT bridges the gap between these two areas, allowing tools from OT to be applied in RL for better efficiency. We will explicitly include this discussion in the final version of the paper.
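The pointwise-matching issue described above can be illustrated with a toy example (our own sketch, not the paper's algorithm): with a bimodal behaviour policy, pointwise l2 matching collapses to the mean of the modes, while a cost-aware partial assignment can keep mass only on the mode preferred by the Q-function.

```python
import numpy as np

data_actions = np.array([-1.0, -1.0, 1.0, 1.0])  # two behaviour modes

def Q(a):
    """Toy action-value: the +1 mode is the good one."""
    return -(a - 1.0) ** 2

bc_action = data_actions.mean()                        # 0.0, between the modes
ot_action = data_actions[np.argmax(Q(data_actions))]   # 1.0, the best-cost mode

assert Q(ot_action) > Q(bc_action)
```

The l2-optimal BC action lands between the modes and has low value, whereas selecting by cost keeps the high-Q mode; partial OT formalises this "keep only the good mass" behaviour.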
**Concluding remarks:** We truly value your reviews. We hope that the clarifications are helpful. If there are remaining issues or questions on your mind, we're more than willing to address them. --- Rebuttal Comment 1.1: Title: Thank you for your detailed response Comment: Thank you for your reply; the claim that by formulating RL as OT we can leverage well-developed tools from OT sounds nice. Looking forward to your future work showing their effective usage in RL. --- Rebuttal 2: Title: Please respond to the authors Comment: Hello reviewer 7ePM: The authors have responded to your comments. I would expect you to respond in kind.
Summary: The authors propose a novel perspective on offline reinforcement learning by formulating it as a partial optimal transport problem. They view the policy $\pi$ as a transport map from the state distribution $d^\beta$ to $\beta(\cdot\mid s)$ and show that the dual form of the partial optimal transport problem can be expressed as a maximin optimization problem. The resulting maximin problem can be easily optimized, unlike other optimal transport problems, due to the absence of 1-Lipschitz constraints. Experimental analyses on various offline RL benchmarks manifest the effectiveness of the proposed algorithm, especially in the antmaze environments. Strengths: The paper introduces an interesting perspective of formulating offline RL as a partial optimal transport problem. Also, the proposed algorithm outperforms existing baselines on antmaze tasks by a huge margin. Weaknesses: 1. The authors formulate offline RL as a partial optimal transport problem between $d^\beta$ and $\beta(\cdot\mid s)$ by regarding the policy $\pi$ as a transport map. Since the problem depends on the choice of $s$ in $\beta(\cdot\mid s)$, the resulting policy $\pi$ will also depend on it. This does not make much sense. 2. The explanation of the environment used in the toy example is unclear. Line 193 states that the final state yields a reward of 1, while other states have a zero reward. Then, what would necessitate the agent to seek the shortest path? The reward seems independent of the agent's action. 3. It is difficult to understand how PPL, PPL-CQL, and PPL-R work. A pseudocode for each variant would be helpful. 4. The paper has much room for improvement in terms of formatting. Excessive use of underlines, an underfull hbox on Line 6 of Algorithm 1, and misaligned $\pm$s and ragged purple boxes in the tables hurt the paper's readability.
Technical Quality: 2 Clarity: 2 Questions for Authors: Have you tried using a pre-trained Q-function learned by IQL or any other in-sample value learning algorithms instead of training the Q-function in parallel? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 4 Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and provide useful feedback. Your questions will help us improve the manuscript. Below are the answers to your questions. Please let us know if any issues remain! **Q1: The authors formulate offline RL as a partial optimal transport problem between $d^\beta$ and $\beta(\cdot|s)$ by regarding the policy $\pi$ as a transport map. Since the problem depends on the choice of $s$ in $\beta(\cdot|s)$, the resulting policy will also depend on it. This does not make much sense.** Please note that our method is completely offline, with no access to the online environment. The distribution over the states provided by the expert policy is everything we have in offline learning. All we have to learn from are the states visited by the expert policy $\beta$. The goal of our method is to extract the best policy using the given distribution of expert states and actions $\beta(\cdot|s)$. The dependence on $s$ in $\beta(\cdot|s)$ is the standard approach in offline RL, not something we invented (lines 16-17, 107-112). Please let us know if this answer addresses your concern. **Q2: The explanation of the environment used in the toy example is unclear. Line 193 states that the final state yields a reward of 1, while other states have a zero reward. Then, what would necessitate the agent to seek the shortest path? The reward seems independent of the agent's action.** Our method works in the most common MDP setting with a discount factor $\gamma < 1$ (lines 94-106), which prioritizes immediate rewards over distant future rewards. By taking the shortest path to a reward, the agent ensures that it receives a higher discounted reward compared to taking a longer path, where the same reward would be less valuable due to discounting. The Q-function trained by Eq. 7 also tends to favor the shortest path. In these experiments, we simply show that BC limits the agent's performance by closely mimicking the provided suboptimal dataset.
In contrast, our method allows the extraction of the best actions according to the Q-function, ignoring the suboptimal actions. **Q3. It is difficult to understand how PPL, PPL-CQL, and PPL-R work. A pseudocode for each variant would be helpful.** All of these methods mainly differ in the way they learn the Q-function (cost), not in the policy extraction. We agree with the reviewer that the description of the variants is an important component that needs to be discussed. Below is the high-level pseudocode for these methods. We will add the complete pseudocode in the appendix and refer to it in the main text. Please note that for ReBRAC the cost $-Q^k(s, a) + \text{BC}$ is used.

```
Input: Dataset D(s, a, r, s')
Initialize: Q, π, f, β, α, w, ℛ
for k in 1...N do
    (s, a, r, s') ← sample a batch of transitions from D
    # ---------- Update the Q-function (cost) ----------
    if One-Step RL then  # pre-train Q by:
        Q^{k+1} ← argmin_Q E_{(s,a,s')~D} [(r(s,a) + γ E_{a'~β(s')}[Q^k(s',a')] - Q^k(s,a))^2]
    else if CQL then
        Q^{k+1} ← argmin_Q E_{s~D, a~π(s)}[Q^k(s,a)] - E_{s~D, a~β(s)}[Q^k(s,a)]
                    + (1/2) E_{(s,a,s')~D} [(r(s,a) + γ E_{a'~π(s')}[Q^k(s',a')] - Q^k(s,a))^2] + ℛ(π)
    else if ReBRAC then
        Q^{k+1} ← argmin_Q E_{(s,a,s')~D} [(r(s,a) + γ E_{a'~π(s')}[Q^k(s',a') - α(π(s') - a')] - Q^k(s,a))^2]
    end if
    # ---------- Update OT ----------
    f^{k+1} ← argmin_f E_{s~D, a~π^k(s)}[f^k(s,a)] + w E_{(s,a)~D}[f^k(s,a)]
    π^{k+1} ← argmin_π E_{s~D, a~π^k(s)}[-Q^k(s,a) - f^k(s,a)]
end for
```

**Q4. The paper has much room for improvement in terms of formatting. Excessive use of underlines, an underfull hbox on Line 6 of Algorithm 1, and misaligned $\pm$s and ragged purple boxes in the tables hurt the paper's readability.** We will incorporate your suggestions to improve the formatting. All misalignments will be corrected and the purple boxes will be replaced with underlining.
In the main text, underlining will be minimized. If you have any other suggestions for improving the presentation of the paper, we would appreciate them. **Q5: Have you tried using a pre-trained Q-function learned by IQL or any other in-sample value learning algorithms instead of training the Q-function in parallel?** No. We do not use IQL because this method does not align with the Optimal Transport framework. If we look at the OT optimization problems (Eq. 1, 4, 12), we can see that the map (policy) outputs are used as inputs for the cost function (Q-function in our case) during optimization. However, the IQL method is a weighted behavior cloning approach and does not use learned policy outputs as inputs for the action-value function. **Concluding remarks:** In conclusion, we truly value your reviews. We hope that the revisions and clarifications will influence and improve your overall opinion. If we've managed to resolve your principal concerns and questions, we'd be thankful for your endorsement through an elevated score of our submission. On the other hand, if there are remaining issues or questions on your mind, we're more than willing to address them. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I noticed some misunderstandings and wanted to clarify a few. **Q1.** If I understood the paper correctly, you are viewing $\pi\colon S\to A$ as a transport map from the state distribution $d(\cdot)$ to the behavior policy $\beta(\cdot\mid s')$ for some $s'\in S$. The transport map will depend on the choice of $s'$. How do you choose this $s'$? **Q2.** Lines 190 and 191 state that the agent has to "navigate through 50 intermediate steps." Doesn't this mean that the episode length is fixed to 51 regardless of the agent's path? **Q5.** The value learning part of IQL can be completely separated from the policy learning part, which uses AWR, as you mentioned.
The IQL Q-function can be trained without weighted behaviour cloning and thus can be used together with the proposed algorithm. --- Rebuttal 2: Comment: **Q1. If I understood the paper correctly, you are viewing $\pi$ as a transport map from the state distribution $d(s)$ to the behavior policy $\beta(\cdot|s')$ for some $s'$. The transport map will depend on the choice of $s'$. How do you choose this $s'$?** Sorry for the misunderstanding. We randomly sample $s$ from the given dataset $\mathcal{D}$; please see the derivations at line 131 and Eq. 9. For the sampled $s$, we choose the conditional distribution of the action space provided by the dataset (expert), $\beta(\cdot|s)$, as the target. We will make this explicit in the final version. **Q2. Lines 190 and 191 state that the agent has to "navigate through 50 intermediate steps." Doesn't this mean that the episode length is fixed to 51 regardless of the agent's path?** Yes, the length of the episode is fixed. This length of 50 corresponds to the number of steps provided by each expert policy in the dataset. **Q5. The value learning part of IQL can be completely separated from the policy learning part, which uses AWR, as you mentioned. The IQL Q-function can be trained without weighted behaviour cloning and thus can be used together with the proposed algorithm.** Thanks for the clarification; this experiment is really interesting. Following your question, we conducted a series of experiments on MuJoCo using the CORL[1] IQL implementation. In these tests, we found that the value function trained using expectile regression from the IQL method is inappropriate for off-policy evaluation. - First, we conducted an experiment *without* using our method. We trained a $Q$-function using the expectile regression loss and attempted to extract a policy through direct optimization: $\min_{\pi}\mathbb{E}_{s \sim \mathcal{D}, a\sim\pi(s)} \big[-Q^{\text{IQL}}(s, a)\big]$. This approach resulted in zero rewards.
- Second, we added a BC objective to avoid distribution shift: $\min_{\pi}\mathbb{E}_{s \sim \mathcal{D}, a\sim\pi(s)} \big[-Q^{\text{IQL}}(s, a)\big]+(a-\beta(s))^2$. The results were the same: $Q$-functions trained via expectile regression dramatically overestimate actions sampled by the learned policy $\pi$. - Finally, we tested our method: $\min_{\pi}\mathbb{E}_{s \sim \mathcal{D}, a\sim\pi(s)} \big[-Q^{\text{IQL}}(s, a)-f(s,a)\big]$ and observed improvements in the scores! Even with the ill-suited cost function, policy optimization with respect to the potential $-f(s,a)$ yielded the highest scores. We also tested the advantage $A(s,a)$ and exponential advantage $\exp A(s,a)$ functions from IQL, but did not observe any improvements. The table below summarizes our analyses.

---

**Table: Averaged normalized scores on MuJoCo tasks. Reported scores are the results of the final 10 evaluations and 3 random seeds.**

| Dataset | $-Q^{IQL}(s, a)$ | $-Q^{IQL}(s, a)$ + BC | $-Q^{IQL}(s, a) - f(s,a)$ (Ours) |
|-------------------------|------------------|-----------------------|---------------------------------|
| HalfCheetah-medium | -2.53 ± 0.1 | -2.54 ± 0.1 | 48.7 ± 0.3 |
| HalfCheetah-medium-expert | -2.53 ± 0.2 | -2.54 ± 0.2 | 39.9 ± 1.4 |
| Hopper-medium | 0.6 ± 0.1 | 0.6 ± 0.1 | 27.9 ± 15.4 |
| Hopper-medium-expert | 0.7 ± 0.1 | 0.6 ± 0.1 | 8.7 ± 4.6 |
| Walker-medium | -0.16 ± 0.1 | -0.16 ± 0.0 | 37.6 ± 3.6 |
| Walker-medium-expert | -0.23 ± 0.2 | -0.21 ± 0.1 | 17.5 ± 14.3 |

---

While the IQL method can be formally decoupled, we see that its two components, expectile-based in-sample value learning and weighted behaviour cloning, are important complements to each other for achieving strong results. Adapting IQL value learning for better off-policy evaluation is a promising direction, but beyond the scope of our contribution.
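The three policy-extraction objectives compared in the experiment above can be written down as a small sketch (our own illustration in NumPy; `loss_q`, `loss_q_bc`, and `loss_q_f` are hypothetical names, not the authors' code):

```python
import numpy as np

def loss_q(Q, s, a_pi):
    """Direct optimisation: E[-Q(s, pi(s))]."""
    return -np.mean(Q(s, a_pi))

def loss_q_bc(Q, s, a_pi, a_data):
    """Q term plus an l2 behaviour-cloning penalty toward dataset actions."""
    return -np.mean(Q(s, a_pi)) + np.mean((a_pi - a_data) ** 2)

def loss_q_f(Q, f, s, a_pi):
    """Potential-regularised extraction: E[-Q(s, pi(s)) - f(s, pi(s))]."""
    return -np.mean(Q(s, a_pi) + f(s, a_pi))
```

With a zero potential `f`, the third objective reduces to the first; the learned potential is what differentiates the OT-based extraction in the table above.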
We believe that our strong performance on most tasks, combined with different types of cost functions, justifies our formulation and makes it valuable to the community. We will include these analyses in the final version of the paper. **Reference:** [1] JAX-CORL: https://github.com/nissymori/JAX-CORL --- Rebuttal Comment 2.1: Comment: Thank you for your response. **Q1.** I think I've failed to deliver my point. A transport map between the state distribution $d$ and the behaviour policy $\beta(\cdot\mid s)$ for some $s$ depends on the choice of $s$. This means the optimal transport framework will produce multiple policies $\pi_s$, one for each $s\in S$. How are you going to combine them into one policy $\pi$? It seems like you are defining $\pi(s)=\pi_s(s)$ for each $s$ (am I correct?), but this does not make much sense because any change on a measure-zero set {$s$} will not affect the result. Therefore, any arbitrary function $\pi$ would be a solution, which is definitely not what we want. **Q2.** If the episode length is fixed to 50, the discounted return will always be $\gamma^{50}$ regardless of the agent's policy. **Q5.** Thank you for running the experiments. It is interesting that using IQL Q-functions as the cost function fails so miserably. --- Reply to Comment 2.1.1: Comment: **Q1.** We learn a single neural network $\pi_\theta$ that approximates all policies $\pi_\theta(s) \leq w(\beta(\cdot|s))$ for each state $s$. But the same is true for any BC method in offline RL when we learn $\pi_\theta(s)=\beta(\cdot|s)$ using $\ell^2$, KL, or any other discrepancy. The neural approximator for the policy $\pi$ is able to generalize and provide a solution for the measure-zero set of $d^\beta(s)$. Did we understand your question correctly? **Q2.** Yes, you are right, this was an oversight on our part. Only this toy experiment was affected. We thank the reviewer for pointing this out.
We fixed this by providing expert trajectories of different lengths in the dataset. This gave us the same visual results. P.S. Although there were no explicit signals with a fixed episode length, the model still learned a near-shortest path. We think this is because the $Q$-function approximation implicitly learned to value actions similar to the final rewarded action more highly. **Q5.** Yes, those were really interesting results. We will make these experiments public along with the rest of the code. --- Rebuttal 3: Comment: **Q1.** Say we are going to use the log-likelihood as our regularizer, which means the objective function would be something like $\mathbb{E}_{s\sim \mathcal{D}}\left[Q(s, \pi(s))-\alpha\log\beta(\pi(s)\mid s)\right]$. For each $s\in S$, the regularizer $\log\beta(\pi(s)\mid s)$ is only affected by the value of $\pi(s)$ and nothing else. If we ignore the expressivity of a neural network, the resulting $\pi$ is straightforward to interpret: $\pi(s)=\text{arg}\max_a Q(s, a)-\alpha\log\beta(a\mid s)$. However, in the case of OT, the value of $T_s(s')$ for all $s'\in S$ matters, where $T_s$ is the transport map between the state distribution $d(\cdot)$ and $\beta(\cdot\mid s)$. I'm concerned that trying to force the same $T$ for all $\beta(\cdot\mid s)$ would cause conflicts between different $s$. **Q2.** Does it mean that an agent's actions in the intermediate 49 steps wouldn't affect the return? --- Rebuttal Comment 3.1: Comment: **Q1.** Thank you for the clarification. Optimal transport does not cause conflicts between different $s$. Our final objective (Eq. 12) for the policy is similar to the BC example that you gave: $\min_\pi\mathbb{E}_{s \sim \mathcal{D}} [-Q(s, \pi(s))-f(s,\pi(s))]$. For each $s$, the potential $f$ is affected by the value of $\pi(s)$ and nothing else. In the optimal transport literature, several methods with a similar objective have been considered for mapping into conditional distributions [1, Eq. 8c][2, Sec. 6.2].
Please note that [1] considered the case when even $T$ is conditional, and then used a single $T$ for all distributions; see Sec. 2.4. **Q2.** Yes, intermediate steps are not rewarded, but the $Q$-function is trained precisely to estimate the value of these intermediate steps. We added trajectories of different lengths to the dataset: 20, 30, 50. Consequently, trajectories that lead to the reward faster have a higher $Q$-value. For a simpler illustration, we also made the reward for each step equal to the negative Euclidean distance between the current state-action pair and the final rewarded state-action pair. This simplifies the learning problem, but gives clear intuition for why the shortest path is optimal. If you find this more relevant, we will include such an experiment in the final version. **Reference:** [1] Nonlinear Filtering with Brenier Optimal Transport Maps: https://openreview.net/pdf/70633c38d3ce64c9b3b29dd7abd18c2f6b6e1dc6.pdf [2] Neural Monge Map estimation and its applications: https://openreview.net/pdf?id=2mZSlQscj3 --- Rebuttal 4: Comment: **Q1.** I'm confused now. Let's consider the simplest case where $\mathcal{D}$ has two elements $(s_1, a_1)$ and $(s_2, a_2)$. Then the state distribution $d(s)$ is $\frac{1}{2}\delta(s=s_1)+\frac{1}{2}\delta(s=s_2)$ and the behavior policies are $\beta(a\mid s_1)=\delta(a=a_1)$ and $\beta(a\mid s_2)=\delta(a=a_2)$. The optimal transport map $T_{s_1}$ from $d$ to $\beta(\cdot\mid s_1)$ should be $T_{s_1}\equiv a_1$ and the map $T_{s_2}$ from $d$ to $\beta(\cdot\mid s_2)$ should be $T_{s_2}\equiv a_2$. If you try to use the same $T$ in both cases, a conflict would occur since $a_1\neq a_2$, wouldn't it? Am I missing something? **Q2.** What is the transition function? From the description in the paper, it looks like there are states $s_1, s_2, \dots, s_T$ uniformly spaced across the x-axis and the agent will transition from $s_i$ to $s_{i+1}$ regardless of the action it takes.
After 50 steps, the agent will arrive at state $s_T$ no matter what and get a reward of 1. What would encourage the agent to follow the shortest path? --- Rebuttal Comment 4.1: Comment: **Q1:** Dear reviewer. As we already noted, we consider the conditional optimal transport setup. This means that we want to simultaneously learn a family of transport plans (indexed by some condition $c$; each plan is between some distributions $p_c, q_c$). Following the standard practices of neural OT, this means that we have to learn a map $T(c, x, z)$, where $c$ is a condition, $x$ is an input point (from $p_c$), and $z$ is random noise that makes it possible to learn stochastic plans. In our case, we need to learn a set of OT plans, where each plan is conditioned on the given state $c=s$. Following our problem, each such plan should be a plan between the distribution $p_c=\delta_s$ (this is our choice) and $q_c=\beta(\cdot |s)$. Hence, we would have to learn a function $T(c,x,z)=T(s,s',z)$ with 3 arguments. However, since the input $s'$ comes from $\delta_s$, it coincides with $s$ with probability 1. Hence, one may merge the arguments to simply get a function of the form $T(s,z)$. In turn, we also remove the random noise component $z$, which is standard practice in neural OT methods unless additional regularization (variance, entropy, etc.) is used [1]. Hence, our final function is $T(s)$. We will add these details to the final version. **Q2:** The transition function is $P(s'|s,a)=1$. As we said in the previous answer, we have redefined the environment to avoid any confusion. Now a reward is given for each state-action pair, $r(s_t,a_t) = -\ell^2((s_t,a_t), (S_T, A_T))$, where $S_T=0$ and $A_T=0.2$ define the target state-action pair. In this scenario, trajectories deviating from the straight line (actions define the y-axis coordinates) from $S_0$ to $S_T$ will have a lower cumulative reward $\sum^{T}_{t=0}\gamma^t r(s_t,a_t)$.
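The shortest-path claim in Q2 above can be checked numerically; here is a minimal sketch (our own, with made-up path coordinates and a discount of 0.99, not the paper's actual experiment):

```python
import numpy as np

GAMMA = 0.99
TARGET = np.array([0.0, 0.2])  # the (S_T, A_T) target pair from the reply

def discounted_return(path, gamma=GAMMA):
    """sum_t gamma^t * r(s_t, a_t) with r = -||(s_t, a_t) - target||^2."""
    rewards = np.array([-np.sum((np.asarray(p) - TARGET) ** 2) for p in path])
    return np.sum(gamma ** np.arange(len(path)) * rewards)

# a straight-line path toward the target vs a detour of the same length
straight = [(1 - t / 4, 0.2) for t in range(5)]
detour   = [(1 - t / 4, 0.2 + 0.5 * (t % 2)) for t in range(5)]

assert discounted_return(straight) > discounted_return(detour)
```

Any deviation from the straight line adds an extra negative per-step reward, so with this reward the straight path dominates for every discount in (0, 1].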
**References** [1] Neural Optimal Transport: https://openreview.net/pdf?id=d8CBRlWNkqH
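The argument-merging step in Q1 of the reply above can be made concrete with a small sketch (our own toy construction; `T_full`, `T_merged`, and the linear parametrisation are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def T_full(c, x, z, theta=0.7):
    """Hypothetical parametric conditional map T(c, x, z): condition c,
    input x, noise z (dropped here: coefficient 0, as in deterministic OT)."""
    return theta * c + (1.0 - theta) * x + 0.0 * z

def T_merged(s, theta=0.7):
    """Merged form T(s): substitute x = c = s (since p_c = delta_s) and drop z."""
    return theta * s + (1.0 - theta) * s

# with inputs drawn from delta_s, the full and merged maps coincide
for s in [-1.0, 0.0, 2.5]:
    assert np.isclose(T_full(s, s, rng.normal()), T_merged(s))
```

Because the input distribution is a point mass at the condition, the second argument carries no extra information, which is exactly why the three-argument map collapses to $T(s)$.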
Summary: The authors address the problem of offline RL. They rethink offline RL using optimal transport. In offline RL, the datasets often consist of several sub-optimal trajectories that need to be stitched together. The authors use partial OT to incorporate stitching and a maxmin formulation of this partial OT. They perform experiments mainly on D4RL. Strengths: (1) While many methods that use OT rely on the Wasserstein distance, which requires optimizing a function constrained to be Lipschitz (often hard), the authors use a maxmin formulation that does not need the function to be Lipschitz. (2) They treat offline RL as an OT problem rather than using OT as a regularizer. Weaknesses: (1) There should have been comparisons to W-BRAC. (2) The results show that PPL^{CQL} produces only marginal improvement over PPL, and likewise PPL^{R} over ReBRAC. (3) From Equation 12, you should not need $\beta$. But in the experiments you constantly talk about being in conjunction with something or training a $\beta$. Maybe this wasn't clear or I misunderstood: why do you need to work in conjunction with CQL, ReBRAC, or one-step RL? Why can't you simply train the maxmin objective in Equation 12? The work is interesting, but I would like the authors to clarify why there is a need for conjunction at all. IQL performs better than CQL, so why would I use PPL^{CQL} and not train IQL directly? I request the authors to clarify the contribution of this paper in this context. Technical Quality: 3 Clarity: 2 Questions for Authors: (1) What is the motivation for using OT here? (2) What is $d^\beta(s)$? Is it the visitation distribution of the behavioral policy $\beta$? Then why do you say that you want to learn a policy that transfers mass from $d^\beta(s)$ to the corresponding distribution given by the behavioral policy? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful questions. We have managed to address and improve the paper based on them. Below, we address each of your concerns: we include the W-BRAC comparison, provide an explanation of the conjunction, clarify the OT motivation, and provide details on the behavior policy visitation. **Q1: There should have been comparisons to W-BRAC.** We will include W-BRAC's results in Table 2 of our paper. Initially, we compared our method with other methods that have already shown significant improvements over W-BRAC, suggesting a clear performance-improvement hierarchy. **Q2: ... Why can't you simply train the maxmin objective in equation 12? ... Why is there a need for conjunction with CQL or ReBRAC or one-step RL at all? IQL performs better than CQL, so why would I use $\text{PPL}^{CQL}$ and not train IQL directly? I request the authors to clarify the contribution of this paper in context to this.** To address the comment regarding the conjunction, we would like to clarify several points: - We don't simply train Eq. (12), because the Q-function trained via Eq. (7) and then used in Eq. (12) can suffer from overestimation bias (lines 107-113). - Consequently, we tested our method in conjunction with various methods that avoid overestimation of the Q-function. $\beta$ is actually necessary for the methods used. Please note that the contribution of our method is policy extraction, not solving the overestimation bias. These overestimation-avoiding methods are used to obtain different types of cost functions in our method, showing that our method can work efficiently with any of them. - We do not use IQL as a backbone because this method does not align with the Optimal Transport framework. If we examine the optimization problems (Eqs. 1, 4, and 12), we can see that the map (policy) outputs are used as inputs to the cost function (the Q-function in our case) during optimization. 
However, the IQL method is a weighted behavior cloning approach, which does not use policy outputs as inputs to the critic function. Instead, only actions from the dataset are weighted by the action-value function. In summary, we clarify our contribution: we proposed a novel optimal-transport-based policy extraction method and provided an analysis of its performance with various RL-based cost functions. We will add this very explicitly to the final version of the paper. **Q3: The results show that $\text{PPL}^{CQL}$ produces some marginal improvement over PPL and $\text{PPL}^{R}$ with ReBRAC.** The goal of these experiments was to show a side-by-side comparison between each basic method and its improved version via our method. In Table 1, we compare $\text{PPL}^{CQL}$ to CQL and $\text{PPL}^{R}$ to ReBRAC, showing that regardless of the backbone used, our method yields an improvement. **Q4: What is the motivation for using OT here?** This motivation follows from the need for a *partial* policy that maps only to the best action distribution when dealing with suboptimal data in offline learning. In OT, *partial* alignment methods have been extensively developed. To integrate OT methods into RL in the simplest way, we cast offline RL as a *max-min* OT problem, while avoiding the limitations of existing OT-in-RL methods. **Q5: What is $d^\beta(s)$? Is it the visitation of the behavioral policy $\beta$? Then why do you say that you want to learn a policy that transfers mass from $d^\beta(s)$ to the corresponding distribution given by the behavioral policy?** Yes, you are right. Indeed, it is the visitation distribution of the behavioral policy. Please note that our method is completely offline. Thus, the probability over the states, $d^\beta(s)$, is the only distribution over the state space that we have. We cannot obtain our own distribution, $d^\pi(s)$, as we cannot interact with the environment. 
This is the standard setting in offline RL, not something we came up with. For each state visited by the behavioral policy, we aim to map it to the most efficient part of the distribution over actions for this state, provided by $\beta(\cdot|s)$. **Concluding remarks:** We truly value your review. Please let us know whether the clarifications above suitably address your concerns. If you find the responses sufficient, we kindly ask that you consider raising your score. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying my doubts. I have increased my score. --- Rebuttal 2: Title: Please respond to authors Comment: Hello reviewer qCSm: The authors have responded to your comments. I would expect you to respond in kind.
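The "partial alignment" idea invoked in A4 can be illustrated with a tiny discrete example. The sketch below is my own generic illustration of partial optimal transport (moving only a fraction of the total mass along the cheapest pairings) solved as a linear program with `scipy.optimize.linprog`; the marginals, costs, and transported fraction are made up, and this is not the paper's max-min formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy partial OT: transport only `mass` of the probability from a to b.
a = np.array([0.5, 0.5])          # source marginal
b = np.array([0.1, 0.1, 0.8])     # target marginal
C = np.array([[1.0, 2.0, 0.1],    # transport costs C[i, j]
              [3.0, 0.2, 5.0]])
mass = 0.6                        # transport only 60% of the total mass
n, m = C.shape

A_ub, b_ub = [], []
for i in range(n):                # row sums of the plan are capped by a_i
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
    A_ub.append(row); b_ub.append(a[i])
for j in range(m):                # column sums are capped by b_j
    col = np.zeros(n * m); col[j::m] = 1
    A_ub.append(col); b_ub.append(b[j])

# min <C, P> s.t. P 1 <= a, P^T 1 <= b, sum(P) = mass, P >= 0
res = linprog(C.ravel(), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, n * m)), b_eq=[mass], bounds=(0, None))
P = res.x.reshape(n, m)
assert res.success and abs(P.sum() - mass) < 1e-8
```

The optimal plan concentrates on the two cheapest cells, which is exactly the "map only to the best part of the distribution" behavior described in the rebuttal.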
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
On the Target-kernel Alignment: a Unified Analysis with Kernel Complexity
Accept (poster)
Summary: This paper provides an in-depth error analysis of kernel ridge regression (KRR) and truncated KRR (TKRR), where one replaces the original kernel $K$ with its finite $r$-dimensional approximation $K^T$, such that their regressors $\hat{f}$ and $\hat{f}_r$ agree on the dataset. It is well known that KRR suffers from the so-called *saturation effect*, where increasing the smoothness/alignment of the target function cannot further improve the generalization performance beyond a certain threshold. TKRR, on the other hand, with a correctly selected hyperparameter $r$, can continuously improve the generalization performance as the target function becomes more and more aligned with the reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ of the kernel $K$. Strengths: This paper offers a very general theoretical result on kernel performance with various losses, in contrast to the square loss considered in much of the previous literature. Additionally, this paper works under realistic assumptions and presents numerical validations to support its claims. The TKRR method presented in [Amini2021] and this paper offers a simple way to bypass the saturation effect of KRR, which might have significant applications in many realistic settings. Reference: Amini, A. A. (2021). Spectrally-truncated kernel ridge regression and its free lunch. Electronic Journal of Statistics, 15, 3743–3761. Weaknesses: No major weakness is spotted in this paper. However, as a new algorithm, I think that there should be more discussion on the computational complexity of TKRR, which is lacking in both [Amini2021] and this paper. Technical Quality: 4 Clarity: 4 Questions for Authors: I have several remarks/questions regarding the algorithmic perspective on TKRR: 1. Let $\mathbf{K} = \mathbf{U}^\top\mathbf{D}\mathbf{U}$ be the diagonalization of the kernel matrix w.r.t. 
inputs $X = \{\mathbf{x}_i\}_{i=1}^n$, and $S_X : \mathcal{H}_K \to \mathbb{R}^n$ be the evaluation operator $f \mapsto (f(\mathbf{x}_1),\dots,f(\mathbf{x}_n))^\top$. By line 239 in the paper (there is a small typo in the definition of $\psi_k$, see below for the correct one), the finite-dimensional RKHS $\tilde{\mathcal{H}}\subset \mathcal{H}$ has the basis $\{\psi_k\}_{k=1}^r$ where $r\leq n$ and $\psi_k := \operatorname{argmin}\{\|\psi\|_K : \psi\in\mathcal{H},\ S_X(\psi) = \mathbf{u}_k\}$ with $\mathbf{u}_k$ the $k$-th column of the matrix $\mathbf{U}$. This definition of $\psi_k$ is simply the kernel interpolant: $\psi_k(x) = \mathbf{u}_k^\top\mathbf{K}^{-1}\mathbf{K}_x$. Hence the computational bottleneck is the inversion of $\mathbf{K}$ with complexity $\mathcal{O}(n^3)$, and thus the total complexity of TKRR should not be much larger than that of KRR. Although it is not difficult to see, I think it is still worth a small section/paragraph to discuss this. 2. The choice of the hyperparameter $r$ is important when the alignment parameter $\gamma$ is at least $1/2$. But how can one determine the optimal choice of $r$ if $\gamma$ is not directly accessible? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: This is a theoretical paper. All limitations are stated clearly in the statements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
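The kernel-interpolant identity in point 1 can be checked numerically. The sketch below is my own illustration (Gaussian kernel, synthetic inputs; nothing here comes from the paper): it builds the truncated basis and verifies $S_X(\psi_k)=\mathbf{u}_k$, using the simplification $\psi_k(x)=\mathbf{u}_k^\top\mathbf{K}^{-1}\mathbf{K}_x=(\mathbf{u}_k^\top\mathbf{K}_x)/d_k$, valid since $\mathbf{K}\mathbf{u}_k = d_k\mathbf{u}_k$.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.2):
    # Pairwise Gaussian kernel between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n, r = 50, 5
X = rng.uniform(size=(n, 1))
K = gaussian_kernel(X, X)

# Eigendecomposition of K, eigenpairs sorted in descending eigenvalue order.
d, U = np.linalg.eigh(K)
d, U = d[::-1], U[:, ::-1]

def psi(x_new):
    # psi_k(x) = u_k^T K^{-1} K_x = (u_k^T K_x) / d_k, since K u_k = d_k u_k.
    K_x = gaussian_kernel(X, np.atleast_2d(x_new))   # shape (n, m)
    return (U[:, :r].T @ K_x) / d[:r, None]          # shape (r, m)

# Sanity check: evaluating psi_k at the training inputs recovers u_k exactly.
Psi_train = psi(X).T                                 # shape (n, r)
assert np.allclose(Psi_train, U[:, :r], atol=1e-6)
```

The single $\mathcal{O}(n^3)$ eigendecomposition dominates the cost, consistent with the complexity discussion in the rebuttal.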
Rebuttal 1: Rebuttal: We appreciate your acknowledgment and positive feedback on this work. Your valuable suggestions and comments are very helpful and have significantly improved this paper. Below are our point-to-point replies. **Weakness 1 \& Question 1 (Part I): More discussion on the computational complexity of TKRR.** **Answer:** Thank you very much for your valuable suggestions on the computational complexity. The total computational complexity of the truncated kernel method (TKM) is composed of three parts. In the first part, spectrally decomposing the kernel matrix $\mathbf{K}$ has a computational complexity of $\mathcal{O}(n^3)$. In the second part, as you point out, the basis $(\psi_k)_{1\le k \le r}$ can simply be calculated as $\psi_k(\mathbf{x})= \mathbf{u}_k^\top\mathbf{K}^{-1}\mathbf{K}_{\mathbf{x}}$, which also has $\mathcal{O}(n^3)$ computational complexity. In the last part, deriving the kernel estimator based on the $r$-dimensional RKHS $\widetilde{\mathcal{H}}$ has a computational complexity of $\mathcal{O}(nr^2)$. To sum up, the overall computational complexity of TKM is $\mathcal{O}(n^3)$, and clearly the truncated method does not impose an additional computational cost compared to the standard kernel method. Following your suggestion, detailed discussions and comparisons of the computational complexities of TKM and the standard kernel method have been added in Section 5 of the revised manuscript. **Question 1 (Part II): There is a small typo in the definition of $\psi_k$.** **Answer:** Thank you very much for pointing out this typo and correcting it. In the revised version, this typo has been fixed, and we have carefully proofread the manuscript again and tried our best to correct all the typos. **Question 2: How can one determine the optimal choice of $r$ if $\gamma$ is not directly accessible?** **Answer:** Thank you very much for your question. 
Indeed, the choice of the hyperparameter $r$ is crucial for the theoretical results of TKM, in that it balances the estimation error and the approximation bias, as discussed after Theorem 4.2, and the optimal theoretical choice of $r$ depends on the alignment parameter $\gamma$ under the setting $\gamma\ge 1/2$. Note that a similar analysis framework is also considered in Amini et al. (2022). In practice, since the underlying parameter $\gamma$ is unknown, some data-driven strategies, such as the cross-validation procedure, can be used to determine a possibly optimal choice of $r$. In our numerical example in Section 6, all tuning parameters are tuned to their best values for both competitors, and in this revision we will also add numerical experiments with the tuning parameters selected in a data-driven fashion; part of the numerical results are provided below. Specifically, we consider kernel quantile regression where the data are independently generated from the model $y=f^*(\mathbf{x})+\sqrt{2}(\varepsilon-\Phi^{-1}(\tau))$ with $f^*(\mathbf{x})=\sin(6\mathbf{x})$, $\mathbf{x}\in\{0, \frac{1}{n}, \dots, \frac{n-1}{n}\}$, and $\varepsilon\sim N(0, 1)$. In this experiment, we use the Laplacian kernel $K(\mathbf{x}, {\mathbf{x}}')=\exp(-\|\mathbf{x}-{\mathbf{x}}'\|_1)$, and the parameters $r$ and $\lambda$ are tuned by $5$-fold cross-validation. It can be observed from the numerical results in the following tables that, with the data-driven choice of $r$, TKM consistently outperforms KM, which further confirms our theoretical finding that TKM can achieve superior performance across various scenarios. In the revised version, some additional numerical experiments will be added in the Appendix, and more detailed discussions on this issue will be provided in Section 6 and the Appendix. 
**Table 1: Averaged MSE for different $n$ ($\tau=0.3$).**

| $n$ | $100$ | $200$ | $300$ | $400$ |
|------|-----------------|-----------------|-----------------|-----------------|
| KM | $0.583 \pm 0.257$ | $0.220 \pm 0.104$ | $0.165 \pm 0.071$ | $0.121 \pm 0.374$ |
| TKM | $0.367 \pm 0.174$ | $0.188 \pm 0.078$ | $0.140 \pm 0.004$ | $0.099 \pm 0.029$ |

**Table 2: Averaged empirical excess risk for different $n$ ($\tau=0.3$).**

| $n$ | $100$ | $200$ | $300$ | $400$ |
|-------|------------------|------------------|------------------|------------------|
| KM | $0.323 \pm 0.039$ | $0.208 \pm 0.040$ | $0.175 \pm 0.036$ | $0.155 \pm 0.059$ |
| TKM | $0.289 \pm 0.066$ | $0.192 \pm 0.060$ | $0.161 \pm 0.021$ | $0.128 \pm 0.018$ |

--- Rebuttal Comment 1.1: Title: Reference Comment: We apologize for any confusion regarding the references cited in our response during the rebuttal phase. Below, we have provided the complete reference information that you may need. [1] Amini, A., Baumgartner, R., & Feng, D. (2022). Target alignment in truncated kernel ridge regression. *In Advances in Neural Information Processing Systems* (pp. 21948–21960). Curran Associates, Inc. volume 35. --- Rebuttal Comment 1.2: Comment: Thank you for your response and your prompt experimental results. I would keep my score and lean to accept this paper. --- Reply to Comment 1.2.1: Title: Appreciation for Reviewer ikTW Comment: We sincerely appreciate your feedback and the positive evaluation of our paper. Thank you once again for the insightful comments during the review, which greatly contributed to the improvement of our work.
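The data-driven selection of $r$ described in the rebuttal can be sketched as follows. This is my own minimal illustration of $k$-fold cross-validation for truncated KRR with squared loss (the actual experiments use the quantile loss; all function names and toy data here are hypothetical). Writing the estimator as $\hat f_r=\sum_{k\le r} c_k\psi_k$ with $\|\psi_k\|_K^2=1/d_k$, the ridge solution in this basis is $c_k = d_k\,\mathbf{u}_k^\top\mathbf{y}/(d_k+\lambda)$.

```python
import numpy as np

def laplacian_kernel(A, B):
    # K(x, x') = exp(-||x - x'||_1), as in the rebuttal's experiment.
    d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)
    return np.exp(-d1)

def fit_predict_tkrr(X_tr, y_tr, X_te, r, lam):
    """Truncated KRR: keep the top-r eigenpairs of the training kernel matrix."""
    K = laplacian_kernel(X_tr, X_tr)
    d, U = np.linalg.eigh(K)
    d, U = d[::-1][:r], U[:, ::-1][:, :r]
    c = d * (U.T @ y_tr) / (d + lam)       # ridge coefficients in the psi-basis
    K_te = laplacian_kernel(X_tr, X_te)    # shape (n_tr, n_te)
    return (c / d) @ (U.T @ K_te)          # f(x) = sum_k c_k u_k^T K_x / d_k

def cv_select_r(X, y, r_grid, lam=1e-3, folds=5, seed=0):
    """Pick r by k-fold cross-validation on mean squared error."""
    idx = np.random.default_rng(seed).permutation(len(y))
    chunks = np.array_split(idx, folds)
    errs = []
    for r in r_grid:
        fold_errs = []
        for f in range(folds):
            te = chunks[f]
            tr = np.concatenate([chunks[g] for g in range(folds) if g != f])
            pred = fit_predict_tkrr(X[tr], y[tr], X[te], r, lam)
            fold_errs.append(np.mean((pred - y[te]) ** 2))
        errs.append(np.mean(fold_errs))
    return r_grid[int(np.argmin(errs))], errs

# Toy data mirroring the rebuttal's target function y = sin(6x) + noise.
rng = np.random.default_rng(1)
X = rng.uniform(size=(100, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(100)
best_r, errs = cv_select_r(X, y, r_grid=[2, 5, 10, 20, 40])
assert best_r in [2, 5, 10, 20, 40] and np.all(np.isfinite(errs))
```

In practice $\lambda$ would be tuned jointly with $r$ over a grid, exactly as in the $5$-fold procedure described above.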
Summary: This paper investigates the impact of alignment between the target function of interest and kernel matrices. To overcome the saturation effect, the TKM is introduced and its learning rate is analyzed. Strengths: This paper is well written and the theorems are solid. The paper analyzes different alignment levels and the learning rates in these cases. TKM decreases the kernel complexity to achieve a trade-off between model complexity and approximation error. Weaknesses: The approximation bias term in Thm 4.2 requires that Assumption 3.4 holds. Could you also provide some results for different decay rates? Technical Quality: 4 Clarity: 4 Questions for Authors: None. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time in reviewing our work and thank you for your question and valuable comments. We have carefully addressed your question below and provided some results for the exponential decay rate. Indeed, Assumption 3.4 is needed if we want to derive the explicit upper bound of the approximation bias term established in Theorem 4.2. Note that our technical analysis can also be modified to cover the exponential decay case where $\widehat{\mu}_j \asymp \exp(-\alpha j)$ and ${\xi_j^*}^2 \asymp \exp(-(2\gamma\alpha + \beta) j)$ with $\alpha, \beta > 0$. Precisely, under the exponential decay setting, the explicit upper bound of the approximation bias term can be derived as $$ \sum_{j=r+1}^n {\xi_j^*}^2 \le C \int_r^\infty \exp(-(2\gamma\alpha + \beta) t)\, dt = \frac{C}{2\gamma\alpha + \beta} \exp(-(2\gamma\alpha + \beta) r). $$ Note that if $r\ge \frac{\log n}{2\gamma\alpha + \beta}$, there holds $\sum_{j=r+1}^n {\xi_j^*}^2 \lesssim \frac{1}{n}$. Consequently, we can also derive the corresponding convergence rates under these scenarios, which suggests that both TKM and KM can attain an optimal rate regardless of $\gamma$, provided $r$ exceeds a certain threshold (to be added in the revised version). We also conduct some numerical experiments to verify this finding. Specifically, the experimental setup is the same as Example 1 in Appendix G of the manuscript, except that we set $f^*(\mathbf{x})=\sin(6\mathbf{x})$ and the Gaussian kernel is used. The experimental result, presented in Figure 2 of the global rebuttal PDF file, shows that TKM initially performs worse than KM for very small values of $r$. However, once $r$ surpasses a threshold, TKM maintains comparable performance to KM. This observation precisely aligns with our theory for the exponential decay scenario. In this revision, more detailed results and discussions of this setting will be added in Section 4.1 and the Appendix. 
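The integral bound in the rebuttal above can be sanity-checked numerically. The sketch below uses toy values for $\gamma$, $\alpha$, $\beta$ chosen by me for illustration (not from the paper), and confirms both the comparison of the tail sum with the integral and the claim that the tail is $\lesssim 1/n$ once $r \ge \log n/(2\gamma\alpha+\beta)$.

```python
import math

gamma, alpha, beta = 0.6, 1.5, 0.5   # toy decay parameters (illustrative only)
c = 2 * gamma * alpha + beta         # exponent 2*gamma*alpha + beta = 2.3
n, r = 10_000, 10                    # here r >= log(n)/c ~= 4.0

# Tail of xi_j^2 ~ exp(-c j) versus its integral upper bound (with C = 1).
tail = sum(math.exp(-c * j) for j in range(r + 1, n + 1))
bound = math.exp(-c * r) / c         # = exp(-c r)/(2*gamma*alpha + beta)

assert tail <= bound                 # the decreasing integrand dominates the tail sum
assert tail <= 1.0 / n               # tail is O(1/n) once r exceeds log(n)/c
```

Both inequalities hold because $e^{-ct}$ is decreasing, so the right-endpoint sum is bounded by the integral from $r$ onward.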
--- Rebuttal Comment 1.1: Comment: Thank you for your reply. My concern is addressed, and I will raise my score. --- Reply to Comment 1.1.1: Title: Appreciation for Reviewer Twpu Comment: Thank you very much for your response and raising the score! We are pleased to hear that your concern has been addressed. Once again, thank you for the positive evaluation of our paper and the constructive comments, which are greatly helpful to us.
Summary: This paper conducts a comprehensive theoretical analysis of the learning rate of kernel-based machine learning methods under a general setting. It establishes upper bounds for standard kernel-based estimators, and demonstrates that standard kernel-based estimators suffer from the saturation effect at high target-kernel alignment levels. It proves that the learning rate of truncated kernel-based estimators, where kernel matrices are spectrally truncated, continues to improve even at high alignment levels, indicating an improvement over standard estimators. It establishes a minimax lower bound for both standard and truncated kernel-based estimators, and demonstrates that the standard kernel-based estimator attains only a suboptimal rate in the strongly-aligned regime, while the truncated estimator is minimax-optimal. The authors conduct various experiments on regression and classification problems to demonstrate the advantages of the truncated estimator and support the established theory. Strengths: 1) The paper reveals that the saturation effect, i.e. the phenomenon that the learning rate of kernel ridge regression (KRR) no longer improves at high target-kernel alignment levels, also occurs in general kernel-based methods other than KRR. This discovery extends the original focus on KRR's learning rate to general kernel-based methods, and may spark more interest and research in this direction. 2) The paper demonstrates theoretically and experimentally that the truncated kernel-based method (TKM) can overcome the saturation effect in a general loss-function setting. This indicates that the saturation effect can be addressed not only in KRR, but also in general kernel-based methods. It may provide insights for future research on tackling the saturation effect and improving the learning rates of kernel methods. 3) The theoretical analysis is comprehensive and rigorous. Sufficient details are provided to help understand and verify the paper. 
Weaknesses: 1) The truncated kernel-based method (TKM) considered in this paper is not a very novel approach. As mentioned in the paper, truncated KRR (Amini, 2021; Amini et al., 2022) also uses a spectrally-truncated kernel matrix as the underlying method; the main difference between TKM and truncated KRR is that TKM considers a more general loss-function family, while truncated KRR specifies a squared loss. In other words, TKM is basically a generalized version of truncated KRR; hence, the methodological contribution of this paper is limited and largely incremental. 2) The work in this paper only considers Lipschitz continuous loss functions. It is a reasonable setting, but there do exist some non-Lipschitz continuous loss functions in practice, e.g. the 0-1 loss. The paper doesn't take such conditions into account; this limits its scope of application. 3) The use of symbols and notations in some formulas is somewhat confusing. For example, in Section 1, the symbol $r$ is used to denote the target-kernel alignment level, but in Sections 3 and 4 the authors denote target-kernel alignment by $\gamma$, and $r$ becomes the dimensionality of the reduced RKHS. It would be better if the authors could make their use of symbols and notations consistent throughout the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) If the loss function is not Lipschitz continuous (e.g. the 0-1 loss), would any of the results presented in the paper change? For instance, would the learning rate bounds for standard kernel-based estimators established in Section 3 remain the same? 2) Are the choices of $r$ and $\lambda$ mentioned in Section 4 the only feasible choices to achieve the optimal learning rate for TKM? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your insightful comments and constructive suggestions are highly valued by us and have greatly contributed to the revision of this work. Below are our point-to-point replies. **Weakness 1: TKM considered in this paper is not a very novel approach.** **Answer:** Thank you very much for your comment. We agree that TKM considered in this paper is not a very novel approach, and we admit that this paper is motivated by the prior work (Amini et al., 2022), where only the squared loss function is considered and an investigation of the problem of optimality is lacking. In sharp contrast, our theoretical results are established for a general loss function by using totally different technical treatments, which cover many commonly used methods in regression and classification problems, and a minimax lower bound is also established for the squared loss function, which confirms the optimality of TKM. We want to emphasize that some significant gaps exist between our work and the prior work (Amini et al., 2022), and we list some of them below. Amini et al. (2022) only considers the squared loss function, where the analytical solution exists, and thus their theoretical analysis heavily relies on the closed form of the solution to establish some critical results, including theoretical bounds similar to Corollaries 3.5 and 4.3 in our paper. In contrast, our work considers a general loss function, and such an explicit solution no longer exists, which requires different technical treatments to derive the theoretical results. Specifically, our theoretical analysis adopts an alternative analytic treatment by utilizing kernel complexity and deriving upper bounds associated with the critical radius. Moreover, we rigorously establish the minimax lower bound when the squared loss is specified, which is unsolved in Amini et al. (2022) and confirms the optimality of the truncated kernel method. 
Extensive numerical studies are also provided in our paper, which further support our theoretical findings and verify the existence of the trade-off between the target-kernel alignment and model complexity. This also highlights our contributions. In this revision, various parts have been revised and more detailed discussions have been added to highlight the contributions of this paper, including Sections 1.1, 4 and 5. **Weakness 2 \& Question 1: Non-Lipschitz continuous loss functions, e.g. the 0-1 loss.** **Answer:** Thank you very much for your comment. It is true that the established results in this paper require the Lipschitz continuity assumption and cannot be directly applied to non-Lipschitz continuous cases. Yet, we want to emphasize that this type of Lipschitz continuity assumption is commonly considered in the machine learning literature (Steinwart & Christmann, 2008; Wei et al., 2017; Li et al., 2019; Farrell et al., 2021) and covers many commonly used loss functions, as illustrated in Table 1 of the manuscript. When the loss function is non-Lipschitz continuous, or even not continuous, extending our established results to the general non-Lipschitz continuous setting is not trivial due to technical difficulties; for instance, the application of Talagrand's concentration inequality requires the Lipschitz continuity of the loss function. However, once the $0$-$1$ loss is considered, one possible route for establishing the theoretical results is to follow a technical treatment similar to that on Page 17 of Lai et al. (2024) with some slight modifications, where a bridge between the excess risk w.r.t. the $0$-$1$ loss and the mean squared error is established; based on the result in Lai et al. (2024), the excess risk only attains a slower rate compared to the rates established in our paper. It is also interesting to point out that optimizing the $0$-$1$ loss is very difficult. 
In the literature, it is common to replace it with a convex surrogate loss function, such as the logistic loss function, which is covered by our paper. We will leave this very interesting problem for future work, but will add more detailed discussions on possible routes for the extension to the non-Lipschitz continuous case in the revised manuscript. **Weakness 3: Confusion of notation and symbols in some formulas.** **Answer:** Thank you very much for pointing out this problem. We apologize for this abuse of notation. In this revision, this problem has been corrected: we consistently use $r$ to denote the dimensionality of the reduced RKHS, and use $\gamma$ to denote the target-kernel alignment level. Moreover, we have proofread this manuscript again and made our best efforts to correct all the typos, confusing notations and symbols. **Question 2: The choices of $r$ and $\lambda$.** **Answer:** Thank you very much for your question. Indeed, the theoretical orders of $r$ and $\lambda$ stated after Theorem 4.2 and in Corollary 4.3 of Section 4 are the only feasible choices to achieve the optimal learning rate for TKM. Specifically, under Assumption 3.4, the optimal choice of $\lambda$ is unique, namely $\lambda\asymp \delta_{n,r}^2 \asymp \big(\frac{(\log \iota^{-1})^2}{n}\big)^{\frac{\max(\gamma,1) \alpha}{2\gamma\alpha+1}}$, to balance the variance and bias, and the unique optimal choice of $r$ is $r\asymp (\frac{n}{(\log \iota^{-1})^2})^{\frac{1}{2\gamma\alpha+1}}\text{I}_{(\gamma> 1)}+n\,\text{I}_{(\frac{1}{2}\le \gamma\le 1)}$, to balance the estimation error and approximation bias. We want to emphasize that in the literature, it is common to require explicit unique orders of the parameters to achieve the fast learning rate (Yang et al., 2017; Wei et al., 2017; Cui et al., 2021; Amini et al., 2022). In practice, the values of $r$ and $\lambda$ can be determined by using some data-driven procedures, such as the cross-validation technique. 
More detailed discussions on the choices of the two parameters will be added in Section 4 of the revised manuscript. --- Rebuttal Comment 1.1: Title: Reference Comment: We apologize for any confusion regarding the references cited in our response during the rebuttal phase. Below, we have provided the complete reference information that you may need. [1] Amini, A., Baumgartner, R., & Feng, D. (2022). Target alignment in truncated kernel ridge regression. *In Advances in Neural Information Processing Systems* (pp. 21948–21960). Curran Associates, Inc. volume 35. [2] Steinwart, I., & Christmann, A. (2008). *Support Vector Machines.* Springer Science & Business Media. [3] Wei, Y., Yang, F., & Wainwright, M. J. (2017). Early stopping for kernel boosting algorithms: A general analysis with localized complexities. *Advances in Neural Information Processing Systems,* 30. [4] Li, Z., Ton, J.-F., Oglic, D., & Sejdinovic, D. (2019). Towards a unified analysis of random Fourier features. *In International Conference on Machine Learning* (pp. 3905–3914). PMLR. [5] Farrell, M. H., Liang, T., & Misra, S. (2021). Deep neural networks for estimation and inference. *Econometrica,* 89, 181–213. [6] Lai, J., Huang, D., Lin, Q. et al. (2024). The optimality of kernel classifiers in Sobolev space. *In The Twelfth International Conference on Learning Representations.* [7] Yang, Y., Pilanci, M., & Wainwright, M. J. (2017). Randomized sketches for kernels: Fast and optimal nonparametric regression. *Annals of Statistics,* 45, 991–1023. 14. [8] Cui, H., Loureiro, B., Krzakala, F., & Zdeborová, L. (2021). Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. *Advances in Neural Information Processing Systems,* 34, 10131–10143.
Summary: In this paper the authors consider both the truncated kernel-based method (TKM) and the standard kernel-based method (KM), and how their performance is affected by the target-kernel alignment (a.k.a. the smoothness of the target function in the RKHS). The authors show that the two methods behave the same in the weakly- and just-aligned regimes, but TKM is able to tackle the saturation effect in the strongly-aligned regime since it can attain the minimax rate. The theoretical analysis of this paper covers a general class of loss functions under a Lipschitz continuity assumption. Strengths: - The paper characterizes the kernel complexity using the statistical dimension and uses empirical process techniques to derive the result - The paper discusses an interesting finding that TKM can overcome the saturation effect - The analysis applies to any loss function satisfying a Lipschitz continuity assumption Weaknesses: - Some of the findings are also covered in this paper [1] [1] Arash A. Amini et al., Target alignment in truncated kernel ridge regression Technical Quality: 3 Clarity: 3 Questions for Authors: - As the paper demonstrates that the spectrally truncated kernel can overcome the saturation effect, can the spectrally transformed kernel [1] overcome it too? Or improve the minimax rate? [1] Runtian Zhai et al., Spectrally Transformed Kernel Regression Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your nice summary and valuable comments on this paper. Below are our point-to-point replies. **Weakness 1: Some of the findings are also covered in Amini et al. (2022).** **Answer:** Thank you very much for your comment. We agree with you that some findings in this paper contain the results provided in Amini et al. (2022), where only the squared loss function is considered. In fact, this work is motivated by Amini et al. (2022), and part of our established results can be regarded as an extension of those in Amini et al. (2022), with a broad range of learning tasks considered in our paper. We want to emphasize that there exist some significant differences between our work and the prior work (Amini et al., 2022) from multiple points of view. Note that Amini et al. (2022) only considers the squared loss function, where the analytical solution exists, and thus their theoretical analysis heavily relies on the closed form of the solution to establish some critical results, including theoretical bounds similar to Corollaries 3.5 and 4.3 in our paper. In contrast, our work considers a general loss function, which covers many commonly used methods in regression and classification problems, and such an explicit solution no longer exists, which requires different technical treatments to derive the theoretical results. Specifically, our theoretical analysis adopts an alternative analytic treatment by utilizing kernel complexity and deriving upper bounds associated with the critical radius. Our results successfully capture the trade-off between the model complexity of the truncated RKHS $\widetilde{\mathcal{H}}$ and the approximation bias, as presented in Theorem 4.2. Moreover, we also establish the minimax lower bound when the squared loss is specified, and thus rigorously confirm the conjecture in Amini et al. (2022) stating that the truncated kernel ridge estimator attains minimax optimality. 
Numerically, extensive experiments are also conducted to confirm our theoretical findings and verify the existence of the trade-off between the target-kernel alignment and model complexity, which also highlights our contributions. In the revised version, more detailed discussions and comparisons on the differences between our work and the previous work (Amini et al., 2022) will be added in various parts, including Sections 1.1 and 5. **Question 1: Can the spectrally transformed kernel (Zhai et al., 2024) overcome it too? Or improve the minimax rate?** **Answer:** Thank you very much for your comment and for bringing this interesting reference to our attention. We notice that the spectrally transformed kernel regression method (SKRR; Zhai et al., 2024) uses a spectral transformation to construct a new kernel, which can leverage the information contained in unlabeled data in an explicit way. We believe that SKRR may have the ability to overcome the saturation effect as well if the transformation function is properly chosen. A possible route for establishing the theoretical results is discussed below. Recall that by Mercer's theorem, the kernel function admits a decomposition $K(\mathbf{x}, \mathbf{x}')= \sum_{j=1}^\infty \mu_j \phi_j(\mathbf{x}) \phi_j(\mathbf{x}')$, where the $\mu_j$'s are the eigenvalues in descending order and the $\phi_j$'s are the corresponding eigenfunctions of the integral operator (Zhai et al., 2024), and we can write the target function $f^*$ as $f^* = \sum_{j=1}^\infty \alpha^*_j \phi_j$. For SKRR, $K(\mathbf{x}, \mathbf{x}')$ is replaced with a new kernel $K'(\mathbf{x}, \mathbf{x}') = \sum_{j=1}^\infty s(\mu_j) \phi_j(\mathbf{x})\phi_j(\mathbf{x}')$, where $s$ is a general transformation function. 
The idea of deriving the upper bound is that we can separately bound the estimation error $\|\widehat{f}_\lambda - f^\sharp\|_\mu$ and the approximation bias $\|f^{\sharp} - f^*\|_\mu$, where $\|\cdot\|_\mu$ denotes the norm on $\mathcal{L}(\mathcal{X}, \mu)$, and $f^\sharp = \sum_{j=1}^\infty s(\alpha^*_j) \phi_j$ denotes an intermediate function belonging to the RKHS induced by $K'$. Following a technical treatment similar to that in our paper, the upper bound on the estimation error can be established. For the approximation bias, we notice that $$ \|f^{\sharp} - f^*\|_\mu^2 = \sum_{j=1}^\infty \big( s(\alpha_j^*) - \alpha_j^* \big)^2. $$ Clearly, the selection of $s(\cdot)$ is crucial, and it is favorable if $s(\cdot)$ is close to the identity function for small $j$ and decays extremely rapidly as $j$ tends to infinity, such as $s(\mu_j)=\mu_j\mathbf{I}_{(j\le r)}$. Then, SKRR with a proper choice of $s(\cdot)$ may achieve conclusions about the upper bound similar to those provided in our paper. Moreover, we think the minimax lower bound cannot be improved since it is independent of the specific learning algorithm. Since the theoretical derivation would be more involved, we leave this promising topic as future work, but will add detailed discussions on the possible route for establishing the theoretical results of SKRR in Appendix A of the revised version. --- Rebuttal Comment 1.1: Title: Reference Comment: We apologize for any confusion regarding the references cited in our response during the rebuttal phase. Below, we have provided the complete reference information that you may need. [1] Amini, A., Baumgartner, R., & Feng, D. (2022). Target alignment in truncated kernel ridge regression. *In Advances in Neural Information Processing Systems* (pp. 21948–21960). Curran Associates, Inc. volume 35. [2] Zhai, R., Pukdee, R., Jin, R., Balcan, M. F., & Ravikumar, P. K. (2024). Spectrally transformed kernel regression. 
*In The Twelfth International Conference on Learning Representations.* --- Rebuttal Comment 1.2: Comment: Thank you for your detailed response! And I will keep my already positive score --- Reply to Comment 1.2.1: Title: Appreciation for Reviewer Pn9U Comment: Thank you for your feedback. We sincerely appreciate your time and effort in reviewing our work and the valuable comments during the review.
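The truncation transformation discussed in this rebuttal thread, $s(\mu_j)=\mu_j\mathbf{I}_{(j\le r)}$, can be illustrated with a minimal numpy sketch. This is not SKRR or the paper's estimator; it only shows, on a finite Gram matrix standing in for Mercer's decomposition, how a spectral transformation of the eigenvalues produces a truncated kernel (the Gaussian kernel, bandwidth, and grid of inputs are illustrative choices):

```python
import numpy as np

def gaussian_gram(x, bandwidth=0.2):
    """Gram matrix K[i, j] = exp(-(x_i - x_j)^2 / (2 * bandwidth^2))."""
    d = x[:, None] - x[None, :]
    return np.exp(-d**2 / (2 * bandwidth**2))

def transformed_gram(K, s):
    """Apply a spectral transformation s to the eigenvalues of K,
    mirroring K'(x, x') = sum_j s(mu_j) phi_j(x) phi_j(x')."""
    mu, U = np.linalg.eigh(K)      # eigh returns ascending eigenvalues
    mu, U = mu[::-1], U[:, ::-1]   # reorder to descending, as in the text
    return U @ np.diag(s(mu)) @ U.T

n, r = 50, 10
x = np.linspace(0.0, 1.0, n)
K = gaussian_gram(x)

# Truncation transformation: keep the top-r eigenvalues, zero the rest.
s_trunc = lambda mu: np.where(np.arange(len(mu)) < r, mu, 0.0)
K_trunc = transformed_gram(K, s_trunc)
```

With the identity transformation `s(mu) = mu`, `transformed_gram` reproduces `K`, while `s_trunc` yields a Gram matrix of rank at most `r`, the finite-sample analogue of restricting to the truncated RKHS.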
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank you for your insightful comments and for the time you have dedicated to thoroughly reviewing our work. Your valuable and constructive feedback has significantly contributed to enhancing the quality of our work. We have carefully considered all comments, concerns, and questions and have provided detailed responses to each review separately. These responses have been meticulously incorporated into the revised paper, mainly covering the following aspects: - Highlighting the contributions of our paper from both theoretical and practical perspectives; - Providing deeper insights into the established results from both theoretical and numerical aspects; discussing the potential future direction, including the extension to non-Lipschitz continuous case; - Conducting additional experiments to further validate our theoretical findings, and part of the numerical results are contained in the attached pdf file; - Correcting all typos and ensuring clarity of the introduced symbols and expressions. Once again, we extend our sincere gratitude for your time, expertise, and contribution to our work. We would be grateful for your reply to ensure that all your concerns have been adequately addressed, and we welcome any additional comments or suggestions you might have. Warm regards, the Authors Pdf: /pdf/ce3b54e72c7b799b3e73974e023c5e9dd509b624.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper investigates the impact of target-kernel alignment to mitigate the saturation effect, where the learning rate of kernel ridge regression plateaus when the smoothness of the target function exceeds certain levels. The kernel complexity function is used to establish the upper bounds for both the standard kernel-based estimator and the truncated estimator. Also, the Fano method is employed to establish minimax lower bound when the squared loss is utilized. Strengths: - The paper is well-written and provides a comprehensive review of related work. - Detailed theoretical results are given for standard and truncated kernel-based methods. - Confirming theoretical results by using numerical simulations. Weaknesses: N/A Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you clarify how the choice of loss function impacts your theoretical results? It appears that in most cases, the squared loss is necessary. Does your analysis apply to SVMs? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - A more thorough experimental analysis can be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time and great efforts in reviewing our paper, and thank you for your constructive comments and suggestions. Below are our point-by-point replies. **Question 1 (part I): Can you clarify how the choice of loss function impacts your theoretical results?** **Answer:** Thank you very much for your question. In fact, the theoretical results in Theorems 3.3 and 4.2 and Corollaries 3.4 and 4.3 are established for a general loss function, belonging to a rich loss function family with Lipschitz continuity and satisfying Assumption 3.1. As discussed after Assumption 3.1 and in Appendix E, many popular loss functions satisfy these requirements under some mild conditions. In the revised version, we will add more detailed discussions on the effect of the choice of loss function at the end of Section 4.1. **Question 1 (part II): It appears that in most cases, the squared loss is necessary.** **Answer:** Thank you very much for your comments. Note that the theoretical upper bounds provided in Sections 3 and 4 are established for a general loss function, whereas the minimax lower bound provided in Section 4.2 is established only for the squared loss. We want to emphasize that it is common in the literature to establish the upper bound for other loss functions and compare it to the lower bound for the squared loss to check optimality (Wei et al., 2017; Lv et al., 2018; Li et al., 2019). Moreover, the theoretical results in Sections 3 and 4 are established using empirical process techniques, in sharp contrast to Amini et al. (2022), where only the squared loss is considered. More detailed discussions on the differences will be added in Section 5 of the revised manuscript. **Question 1 (part III): Does your analysis apply to SVMs?** **Answer:** Thank you very much for your question. Note that from Table 1 of the manuscript, the hinge loss satisfies the Lipschitz continuity requirement with Lipschitz constant 1. 
Moreover, as pointed out by Wainwright (2019) on Page 472, Assumption 3.1 holds for the hinge loss when some conditions on the data distribution and function class are satisfied. Moreover, in this revision, we also added some numerical experiments using the hinge loss to verify our theoretical analysis, and the obtained numerical results, as provided in Figure 1 of the total Rebuttal pdf file, further support the applicability of our analysis to SVMs. In the revised version, we will add more detailed discussions on the applicability of the established results in Appendix E. **Limitations: A more thorough experimental analysis can be helpful.** **Answer:** Thank you very much for your suggestion. In the revised version, more thorough numerical experiments will be added in Section 6 and the Appendix. One added numerical experiment investigates how, once the hinge loss is specified (corresponding to SVMs), RKHSs with varying model complexities affect the numerical performance of KM and TKM. Clearly, this added numerical experiment serves as a complement to that reported in Section 6 of the original submission, where the check loss is considered. Specifically, the experiment setup is the same as that of the original submission, including the kernel choice, the number of repetitions, and the tuning method for $\lambda$ and $r$, except that the underlying true function is set as $f^*(\mathbf{x})=\sin(11\mathbf{x})$ and $\{(\mathbf{x}_i, y_i)\}_{i=1}^{300}$ is independently drawn from $y_i=\text{sign}(f^*(\mathbf{x}_i)+N(0, 4))$ with $\mathbf{x}_i=\frac{i-1}{300}, i=1, \dots, 300$. The obtained numerical results are reported in Figure 1 of the total Rebuttal pdf file. It is thus clear from Figure 1 that the error curves for the hinge loss align with those for the check loss, which further confirms our theoretical findings and also empirically supports that our theoretical analysis applies to SVMs. 
Similarly, we will also add the numerical experiments when the logistic loss function is specified, and the numerical results will be added in Appendix. --- Rebuttal Comment 1.1: Title: Reference Comment: We apologize for any confusion regarding the references cited in our response during the rebuttal phase. Below, we have provided the complete reference information that you may need. [1] Wei, Y., Yang, F., & Wainwright, M. J. (2017). Early stopping for kernel boosting algorithms: A general analysis with localized complexities. *Advances in Neural Information Processing Systems,* 30. [2] Lv, S., Lin, H., Lian, H., & Huang, J. (2018). Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space. *The Annals of Statistics, 46,* 781–813. [3] Li, Z., Ton, J.-F., Oglic, D., & Sejdinovic, D. (2019). Towards a unified analysis of random Fourier features. *In International Conference on Machine Learning* (pp. 3905–3914). PMLR. [4] Amini, A., Baumgartner, R., & Feng, D. (2022). Target alignment in truncated kernel ridge regression. *In Advances in Neural Information Processing Systems* (pp. 21948–21960). Curran Associates, Inc. volume 35. [5] Wainwright, M. J. (2019). *High-dimensional Statistics: A Non-asymptotic Viewpoint* volume 48. Cambridge University Press.
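The hinge-loss experiment described above can be sketched in a few lines of numpy. This is an illustrative reconstruction under assumptions, not the authors' code: it uses the stated data-generating process ($f^*(x)=\sin(11x)$, $x_i=(i-1)/300$, $y_i=\mathrm{sign}(f^*(x_i)+N(0,4))$) and fits a plain Gaussian-kernel machine by subgradient descent on the regularized hinge loss; the bandwidth, regularization, and step size are arbitrary stand-ins for the tuned values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data-generating process from the added experiment:
# f*(x) = sin(11x), x_i = (i-1)/300, y_i = sign(f*(x_i) + N(0, 4)).
n = 300
x = np.arange(n) / n
y = np.sign(np.sin(11 * x) + rng.normal(0.0, 2.0, size=n))  # N(0, 4): std = 2

# Kernel machine f(x_i) = (K alpha)_i with a Gaussian kernel, trained by
# subgradient descent on (1/n) sum_i max(0, 1 - y_i f_i) + lam * alpha^T K alpha.
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
alpha, lam, lr = np.zeros(n), 1e-3, 1e-3

def hinge_objective(alpha):
    f = K @ alpha
    return np.mean(np.maximum(0.0, 1.0 - y * f)) + lam * alpha @ K @ alpha

loss_start = hinge_objective(alpha)
for _ in range(2000):
    f = K @ alpha
    active = (y * f < 1.0).astype(float)            # margin-violating points
    grad = -K @ (y * active) / n + 2 * lam * (K @ alpha)
    alpha -= lr * grad
loss_end = hinge_objective(alpha)
```

The truncated variant (TKM) would additionally project `K` onto its top-$r$ eigenpairs before training, which is where the target-alignment trade-off enters.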
DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment
Accept (poster)
Summary: The paper introduces DFA-GNN, a new framework designed to train Graph Neural Networks (GNNs) using Direct Feedback Alignment (DFA). Traditional methods like backpropagation (BP), though effective, have several limitations, including inefficiency, scalability issues, and a lack of biological plausibility. DFA-GNN aims to address these problems by utilizing a forward training mechanism specifically tailored for the unique challenges posed by graph data. Strengths: The paper is well written and easy to follow. Performance: DFA-GNN achieves better performance compared to other non-BP methods and even standard BP methods. Robustness: The framework exhibits resilience to noise and different types of attacks, ensuring its reliability in various scenarios. Efficiency: By enabling parallel gradient computation, DFA-GNN improves the efficiency of the training process. Weaknesses: The novelty of this paper is somewhat limited as it primarily extends the principles of DFA to GNNs. The core ideas are based on the foundations laid out in the Shalizi et al. (2013) paper. The paper lacks a discussion on the computational complexity of the proposed algorithm. There is a notable variation in the performance improvements observed across different datasets. For instance, DFA-GNN shows a significant accuracy improvement on the Texas and Cornell datasets, while the improvements on other datasets are relatively modest. The paper does not adequately explain the reasons behind this variation in performance. The datasets used for experiments are quite small. Technical Quality: 2 Clarity: 3 Questions for Authors: Why does the proposed method not outperform on the Computer and Chameleon datasets? How does DFA-GNN scale when applied to larger datasets, e.g., Flickr, Reddit, OGBN? What is the time complexity of the proposed method? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the time taken to assess our work and for the valuable feedback. We address each point individually; “W/Q” numbers the weakness or question, followed by our response. $\textbf{\large Response to W1:} $ Thanks for your comments. Our method builds on some ingenious previous studies, especially $\textit{Nøkland A. Direct feedback alignment provides learning in deep neural networks, NeurIPS 2016}$. However, our work is the first to systematically study DFA's applicability to GNNs with extensive ablation studies and to provide update formulas and a convergence proof for DFA in GNNs. The addition of pseudo-error generation is another highlight that is entirely novel for DFA. Our work demonstrates the exciting potential of non-BP training in graph deep learning, offering potential directions for addressing challenging issues such as oversquashing, topology distortion, and label scarcity, which warrant further exploration in the future. $\textbf{\large Response to W2 and Q3:} $ The time complexity of our method depends on the architecture of the GNN models. Given specific GNN layers with a defined forward propagation formula, the computational time cost of our method includes four parts: forward propagation (Eq.1), pseudo-error generation (Eqs.7, 8), node filtering (Eq.9) and direct feedback alignment (Eq.10). The complexity of forward propagation is the same as BP because our method makes no change in this process. Supposing the dimension of node features and hidden units is $d$, for the GCN model formulated in Eq.1, whose forward propagation formula for the $i$-th layer is written as $\textbf{X}^{(i)}=\text{relu}(\textbf{S}\textbf{X}^{(i-1)}\textbf{W}^{(i-1)})$, the complexity is $\mathcal{O}(n^{2}d+nd^{2})$, where $n$ denotes the number of nodes. Since $n\gg d$ holds in most datasets, the complexity can be approximated as $\mathcal{O}(n^{2}d)$. 
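The forward-propagation cost discussed above can be made concrete with a small numpy sketch of one GCN layer. The graph, sizes, and random weights are illustrative only; the point is that `S @ X` costs $\mathcal{O}(n^2 d)$ and the subsequent multiplication by `W` costs $\mathcal{O}(n d^2)$, matching the stated $\mathcal{O}(n^2 d + n d^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 16                        # nodes and feature width (n >> d in practice)

# Symmetric random adjacency with self-loops, normalized as in GCN:
# S = D^{-1/2} (A + I) D^{-1/2}.
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)
A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)               # >= 1 thanks to self-loops
S = A_hat / np.sqrt(np.outer(deg, deg))

X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d)) / np.sqrt(d)

# One GCN layer: X' = relu(S X W).
# S @ X is an (n x n)(n x d) product -> O(n^2 d);
# (S X) @ W is an (n x d)(d x d) product -> O(n d^2).
X_next = np.maximum((S @ X) @ W, 0.0)
```

With a sparse edge-index representation, the `S @ X` term drops to $\mathcal{O}(|\mathcal{E}|d)$, which is the memory/compute saving the authors invoke for large graphs.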
For direct feedback alignment, according to Eq.10, the time complexity for each layer is $\mathcal{O}(n_{f}^{2}c+dn_{f}c+d^{2}c)$ and is further approximated as $\mathcal{O}(n_{f}^{2}c)$ due to $n_{f}\gg d \gg c$, where $n_{f}$ is the number of nodes after filtering, with $n_{f}<n$, and $c$ is the number of categories. In comparison, BP for the GCN model has a complexity of $\mathcal{O}(n^{2}d+nd^{2}+d^{3})$ for each layer, which can be approximated as $\mathcal{O}(n^{2}d)$. Although both methods have quadratic complexity in the number of nodes, our method performs parallel updates for each layer while BP has to run updates layer by layer. The other two components, the pseudo-error generator and the node filter, bring additional time cost to our method compared with BP. According to Eqs.7 and 8, the time complexity of these two parts is $\mathcal{O}(tcn^{2})$ and $\mathcal{O}(cn)$, respectively, where $t$ denotes the iteration epochs for error spreading. Although the two introduced parts result in increased time cost, they are essential for generalizing DFA to GNNs. The ablation study in Tab.2 demonstrates the significant contribution of these two components. Overall, our method has the same quadratic time complexity as BP on GCN, which explains why it consistently incurs five to ten times the per-epoch time of BP, rather than an exponential difference, regardless of dataset size. $\textbf{\large Response to W3 and Q1:} $ Thanks for the suggestion; we will add a related discussion in the revised version. Our method shows a much more significant accuracy improvement on relatively small datasets, such as Texas and Cornell. The main reason is that GNNs trained by BP often suffer from heavy overfitting due to the scarcity of supervision on small datasets, especially for semi-supervised tasks. 
By contrast, the pseudo error generated in our method serves not only for gradient computation when adapting DFA to GNNs but also as a data augmentation strategy, enhancing the robustness of GNNs (shown in Fig.3) and improving the ability to handle overfitting under scarce supervision. The gains from the pseudo-error strategy are less significant on larger datasets (e.g., PubMed, Photo), but it still brings overall positive effects to our method. For Computer and Chameleon, our method does not outperform BP; however, the accuracy gap on Chameleon is quite small (only 0.09\%). The testing accuracy depends on both the training algorithm and the GNN model. Although our method underperforms BP for GCN on Computer, it outperforms BP for SGC, GraphSage, APPNP and ChebyNet on this dataset, as shown in Tab.3. Generally, our non-BP method achieves results comparable to BP and even shows better performance in most experimental trials, which indicates that our non-BP training method has the inspiring potential to become a substitute for BP in the field of graph deep learning. $\textbf{\large Response to W4 and Q2:} $ Our method is well-suited for large datasets. When the graph scale is large, for memory saving we can use edge indices instead of an adjacency matrix to store the graph. For forward propagation (Eq.1), the space requirement for neighbor aggregation can be reduced from $\mathcal{O}(n^{2}+nd)$ to $\mathcal{O}(|\mathcal{E}|+nd)$, where $|\mathcal{E}|$ denotes the number of edges. For direct feedback alignment (Eq.10), as $(\textbf{S}^{\text{T}})^{k}\hat{\textbf{E}}$ is exactly the result of aggregating the errors $k$ times, the space requirement can be reduced to $\mathcal{O}(|\mathcal{E}|+nc)$, without the need to store the $k$-th power of the adjacency matrix. Similarly, a space requirement reduction can also be achieved in the node filtering process. 
As for prediction accuracy, the experimental results on Flickr, Reddit, and ogbn-arxiv (shown in **Tab.X** in the supplementary PDF) demonstrate that our method obtains comparable results to BP (within 1% accuracy) and outperforms other non-BP methods with only a small memory cost (2049 MiB, 11675 MiB, 2277 MiB for the three datasets, respectively). --- Rebuttal Comment 1.1: Title: Response to Author's Rebuttal Comment: I would like to thank the authors for your response. However, I still have concerns about these points: 1. I could not find a table comparing the time efficiency of your method with state-of-the-art methods. 2. I have concerns about the time complexity. Do the additional steps also contribute an O(n²) complexity? How much faster is it than the standard BP method? 3. The novelty of your approach seems limited, as it primarily involves integrating DFA into GNNs. --- Rebuttal 2: Comment: Thank you once again for your valuable feedback on our paper. 1. The time comparison of our method is presented in Table 6 of Appendix A.6 in the original manuscript (for your convenience, we have displayed the table below). Here, we compare the training time per epoch of our method with BP and other state-of-the-art non-BP methods. For the semi-supervised node classification task, all methods are trained for 1000 epochs. During training, we calculate the prediction accuracy on the validation nodes after each epoch and save the model that achieves the highest validation accuracy. This saved model is then used to make predictions on the test nodes. This approach is standard in semi-supervised graph learning. Therefore, the overall training cost is simply one thousand times the numbers shown in Table 6. 2. As noted in our response to W2 and Q3, both BP and the DFA in our method have a complexity of $\mathcal{O}(n^{2})$. In addition, our method introduces an additional complexity of $\mathcal{O}(n^{2})$ due to the pseudo-error generator. 
As a result, the overall complexity for both our method and BP remains the same $\mathcal{O}(n^{2})$. This explains why our method consistently takes five to ten times longer per epoch compared with BP, as shown in Table 6, rather than showing an exponential difference, regardless of dataset size. While BP benefits from decades of research and strong software and hardware support, our method has not yet reached time efficiency comparable to BP. However, our method demonstrates superior time efficiency compared with most non-BP state-of-the-art methods, and the current time cost difference compared with BP is not substantial. Additionally, the parallel update strategy employed by our method for each layer offers considerable potential for parallel computing, which could be further explored to enhance time efficiency. 3. We theoretically derive formulas to integrate the direct feedback alignment mechanism into GNNs, as DFA for fully connected layers is not directly applicable to graph data. Additionally, we provide theoretical proof of the convergence of our proposed method (as shown in Section 4.3 and Appendix A.3). Our work highlights the promising potential of non-BP training in graph deep learning and opens up avenues for tackling some challenges within graphs, which merit further exploration in future research. If our rebuttal has satisfactorily addressed your concerns, we would greatly appreciate it if you could consider reevaluating the score of our paper. Regardless of your decision, we are truly grateful for your guidance and the time you have invested in reviewing our work. Thank you again for your attention and support. Best regards, Authors $\newline$ **Table 6: Average running time per epoch (s). 
For layer-wise training methods like PEPITA, CaFo, FF, and SF, the total time taken by each layer per epoch is reported.**

| Datasets | BP | PEPITA | CaFo+CE | FF+LA | FF+VN | SF | ours |
|----------|---------|---------|---------|---------|---------|---------|---------|
| Cora | 7.56e-3 | 8.73e-3 | 7.61e-1 | 3.14e-1 | 2.83e-1 | 5.49e-2 | 5.66e-2 |
| CiteSeer | 1.06e-2 | 1.11e-2 | 7.68e-1 | 2.59e-1 | 2.61e-1 | 6.88e-2 | 5.68e-2 |
| PubMed | 1.07e-2 | 1.07e-2 | 8.24e-1 | 6.94e-1 | 7.61e-1 | 5.34e-1 | 6.76e-2 |
| Photo | 8.74e-3 | 1.03e-2 | 7.98e-1 | 2.11 | 1.91 | 4.87e-1 | 5.81e-2 |
| Computer | 1.08e-2 | 1.05e-2 | 7.80e-1 | 4.82 | 4.14 | 7.61e-1 | 6.29e-2 |
| Texas | 6.13e-3 | 1.07e-2 | 8.05e-1 | 1.47e-1 | 1.56e-1 | 6.88e-2 | 5.60e-2 |
| Cornell | 5.42e-3 | 1.06e-2 | 7.46e-1 | 1.51e-1 | 1.24e-1 | 3.59e-2 | 5.53e-2 |
| Actor | 9.45e-3 | 1.03e-2 | 7.83e-1 | 6.84e-1 | 6.71e-1 | 2.80e-1 | 5.80e-2 |
| Chameleon | 6.24e-3 | 1.13e-2 | 7.97e-1 | 2.28e-1 | 2.09e-1 | 6.88e-2 | 5.61e-2 |
| Squirrel | 7.77e-3 | 1.20e-2 | 7.78e-1 | 5.82e-1 | 5.05e-1 | 1.21e-1 | 5.79e-2 |

--- Rebuttal Comment 2.1: Title: Response to Author's Rebuttal Comment: According to this statement, although both methods have quadratic complexity, "our method performs parallel updates for each layer while BP has to update layer by layer". Therefore, the proposed method should theoretically be faster than BP. However, this is not evident from the results in Table 6 of the appendix. Could you please clarify this discrepancy? --- Reply to Comment 2.1.1: Comment: Dear Reviewer, Thank you for your valuable feedback on our paper. We have carefully addressed your comments in our response and submitted it for your review. As the deadline for responses is approaching, we wanted to kindly ask whether there are any additional questions or points of clarification you would like us to address. If any questions remain unanswered or our response is unclear, we would appreciate the opportunity to communicate further with you. 
We sincerely appreciate your time and effort in reviewing this manuscript. Best regards, Authors --- Rebuttal 3: Comment: We apologize for the confusing expression. The standalone direct feedback alignment (DFA, $\mathcal{O}(n^{2})$) as formulated in Eqs. 5 and 6 of the manuscript can potentially be faster than backpropagation (BP, $\mathcal{O}(n^{2})$) since DFA directly calculates the parameter updates for each layer from the loss, allowing for improved training efficiency through parallelization. However, to generalize DFA for graph data, we introduced the pseudo-error generator as detailed in Sec. 4.2. Consequently, the main time cost of our method arises not only from DFA but also from the pseudo-error generation (Eqs. 7, 8) and node filtering (Eq. 9). According to Eqs. 7 and 8, the time complexity of these two components is $\mathcal{O}(tcn^{2})$ and $\mathcal{O}(cn)$, respectively, where $t$, $c$, and $n$ denote the iteration epochs for error spreading, the number of categories, and the number of nodes, respectively (In our earlier analysis, we approximated the complexity of these components as $\mathcal{O}(n^{2})$ and $\mathcal{O}(n)$ to highlight that our method maintains the same quadratic complexity with respect to $n$, the most significant factor). When these two components are added, our method consistently incurs a time cost that is 5 to 10 times larger than that of BP even though they have the same order of complexity. Although these additional components increase the time cost, they are crucial for adapting DFA to GNNs and handling graph-structured data. The ablation study in Tab. 2 highlights the substantial contribution of these components. 
Although our method has achieved significant practical time efficiency advantages over existing non-BP methods, we will continue to explore ways to further improve the time efficiency of our training algorithm in future work to bring it closer to that of BP, and we will revise the related sections of the article to reflect these nuances more accurately. Thank you again for your attention and support. Best regards, Authors
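The parallel-update property of DFA discussed in this thread can be sketched with a toy numpy example. This is not the paper's DFA-GNN (graph aggregation, the pseudo-error generator of Eqs. 7-8, and node filtering are all omitted); it only shows the core DFA idea: fixed random feedback matrices `Bs` project the single output error to every hidden layer, so all layer gradients can be formed independently, and hence in parallel, rather than layer by layer as in BP. All shapes and the softmax error are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c, L = 50, 8, 3, 3             # samples, hidden width, classes, layers

# Toy L-layer relu network (graph aggregation omitted for brevity).
Ws = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(L - 1)]
W_out = rng.standard_normal((d, c)) / np.sqrt(d)
Bs = [rng.standard_normal((c, d)) / np.sqrt(c) for _ in range(L - 1)]  # fixed random feedback

X = rng.standard_normal((n, d))
hs, h = [X], X
for W in Ws:
    h = np.maximum(h @ W, 0.0)
    hs.append(h)
logits = h @ W_out

# Output error (softmax minus one-hot targets), a stand-in for E-hat.
Y = np.eye(c)[rng.integers(0, c, n)]
P = np.exp(logits - logits.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
E = P - Y

# DFA: each hidden layer l receives the error through its own fixed random
# matrix B_l, gated by the relu derivative, so every gradient below depends
# only on E and local activations -- all of them can be computed in parallel.
grads = [hs[l].T @ ((E @ Bs[l]) * (hs[l + 1] > 0)) for l in range(L - 1)]
grad_out = hs[-1].T @ E
```

In BP the bracketed term would instead be the transported error `delta @ W.T`, forcing a sequential backward sweep; replacing it with `E @ B_l` is what removes the layer-by-layer dependency.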
Summary: The authors propose to apply the Direct Feedback Alignment (DFA) algorithm for backpropagation-free training to graph neural networks. The DFA algorithm is combined with a pseudo-error generation mechanism to provide additional error signals for missing targets in the setting of semi-supervised node classification. The experimental results with a 3-layer GCN model over 10 commonly adopted benchmarks show the proposed model performing generally better than backpropagation and other training algorithms. The ablation analysis confirms that improvements by the proposed DFA-GNN approach can be ascribed to the pseudo-error generation mechanism. Strengths: **Originality** The application of DFA to GNNs was already introduced in *J. Launay, I. Poli, F. Boniface, and F. Krzakala, “Direct feedback alignment scales to modern deep learning tasks and architectures,” NeurIPS 2020*. However, the addition of pseudo-error generation and the extensive experiments are entirely novel contributions. **Quality** The submission is technically sound and its claims are well supported. **Clarity** The paper is clearly written and well organized. **Significance** The results of this paper can contribute to the research on addressing the training challenges of deep graph neural networks. Weaknesses: 1. The computational cost per epoch of the proposed algorithm is 5 to 10 times larger than backpropagation. 2. No comparison of the overall training costs, including the different number of epochs required for training convergence among the different methods. 3. The ablation indicates that the accuracy improvements are ascribed to the pseudo-error propagation mechanism. As the latter could be applied also to e.g. backpropagation, such experimental evaluation would have provided further interesting insights. 4. 
In the experiments, the model is limited to 3 layers; deeper models would have provided more interesting insights, as depth increases the issues connected with gradient backpropagation. (Long-range graph benchmarks such as those from *V. P. Dwivedi et al., “Long Range Graph Benchmark,” in NeurIPS 2022 Track on Datasets and Benchmarks* could be adopted in this case.) 5. Only node classification tasks are considered, no graph-level or edge-level tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why use pseudo-error generation spreading, which must be done at each gradient update iteration, instead of augmenting training labels via label spreading (plus the masking of eq. (9)), which would only be required once? 2. How would the other training methods such as backpropagation perform with pseudo-error generation spreading? 3. How many epochs are required for training convergence by DFA-GNN? How do they compare with the number of epochs required by the other training algorithms? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations of this submission have not been properly discussed. For example, in lines 278-280, it is stated that the "method has a higher training time consumption compared with BP, primarily due to the additional time needed for generating pseudo errors and filtering masks". However, in the conclusions, it is claimed that *efficiency* is one of the advantages of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful comments and positive assessment of our work. After carefully reviewing your feedback, below we provide answers to the comments you raised. $\textbf{\large Response to W1:} $ The standalone direct feedback alignment formulated in Eqs.5 and 6 could be faster than BP. However, to generalize DFA to graph data we introduce the pseudo-error generator as elaborated in Sec.4.2, and the main time cost comes from the pseudo-error generation (Eqs.7, 8) and node filtering (Eq.9). According to Eqs.7 and 8, the time complexity of the two parts is $\mathcal{O}(tcn^{2})$ and $\mathcal{O}(cn)$, respectively, where $t, c, n$ denote the iteration epochs for error spreading, the number of categories and the number of nodes, respectively. Although the two introduced parts result in increased time cost, they are essential for generalizing DFA to GNNs. The ablation study in Tab.2 demonstrates the significant contribution of these two components. $\textbf{\large Response to W2 and Q3:} $ Thanks for your comments. For our semi-supervised node classification task, we train all the methods for 1000 epochs, calculating the prediction accuracy for validation nodes after each epoch and saving the model that achieves the highest validation accuracy. We use the saved model to make predictions for the test nodes. This is a common experimental setting in semi-supervised graph learning. Thus, the overall training cost is simply one thousand times the numbers shown in Tab.6. To better illustrate the training convergence of our DFA-GNN, we plot the training and validation accuracy of the proposed DFA-GNN and BP over training epochs on three datasets, as shown in **Fig. 4** in the supplementary PDF. In general, our method shows similar convergence as BP. For both BP and our method, the convergence of validation accuracy occurs much earlier than that of training accuracy due to overfitting. 
The epoch at which validation accuracy converges for DFA-GNN is nearly the same as for BP on Cora and CiteSeer (around 100 epochs), while it is 100 epochs later than BP on PubMed (around 200 epochs). Our method achieves better validation accuracy on all these datasets and suffers less from overfitting compared with BP. In terms of training accuracy, our method exhibits a slightly lower converged value and slower convergence than BP. The reason is that our method considers both the errors of labeled nodes and the pseudo errors of unlabeled nodes as supervision signals. However, this does not prevent our method from achieving better validation results. We do not include other non-BP methods in Fig. 4 because these methods (i.e., SF, FF, CaFo, etc.) follow a layer-by-layer update according to local losses. Therefore, the convergence behavior varies across layers, and it is hard to visualize them directly in a global view. $\textbf{\large Response to W3 and Q2:} $ Thanks for your interesting suggestion. Although the pseudo-error generation process is essential for our method, it is also an optional choice for BP. We tried integrating this component into BP, and the experimental results in **Tab.IX** in the supplementary PDF show that this component does not positively enhance BP in general. It only contributes to BP on Cora and CiteSeer while degrading the performance on the other six datasets. By contrast, it benefits DFA remarkably across all datasets and ultimately makes our method outperform BP in most scenarios. The different suitability of pseudo-error generation for BP and DFA mainly stems from DFA being more noise-tolerant than BP, which matters because the generated pseudo errors always contain varying degrees of noise; this has been discussed in some previous studies [a,b]. [a] R. Ohana et al., Photonic Differential Privacy with Direct Feedback Alignment. NeurIPS 2021. [b] J. Lee and D. Kifer. Differentially Private Deep Learning with Direct Feedback Alignment. arXiv 2020. 
$\textbf{\large Response to W4:} $ Thanks for the suggestion. We plot the prediction accuracy as the number of layers increases from 2 to 10, as shown in **Fig. 5** in the supplementary PDF. Both BP and our method show a decrease in accuracy, but our method is less negatively affected and consistently outperforms BP. This superiority derives from the fact that our method adopts direct feedback of errors for the update of each layer, reducing the problem of oversmoothing and other gradient issues caused by backward gradient accumulation. This observation indicates the potential of our method for deep GNN training. $\textbf{\large Response to W5:} $ Thanks for your suggestion; we will explore the potential of DFA-GNN for graph-level and edge-level tasks in future work. $\textbf{\large Response to Q1:} $ The pseudo error is essential for our method because the parameter update formula (Eqs.5 and 6) for each layer requires the error of all nodes. Spreading labels may not be well-suited to our update mechanism. Additionally, since the pseudo error depends on the prediction of each epoch, updating pseudo errors in real time based on the results of forward propagation in each epoch is a more reasonable choice. Augmenting only once in an early epoch may lead to degradation of model performance. $\textbf{\large Response to Limitations:} $ We apologize for the confusing expression. The efficiency advantage of our method is mainly reflected in the comparison with other non-BP algorithms, showing significant time superiority over CaFo, FF, and SF, as presented in Tab.6. As BP has been extensively researched for decades and benefits from robust software and hardware support, our method has not yet achieved time efficiency comparable to that of BP. However, the parallel update strategy for each layer in our method makes it more conducive to parallel computing, which has the potential to further improve time efficiency. 
We will revise the conclusion section of the article to make it more rigorous. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. I leave my score unchanged. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and for acknowledging our efforts. We respect your decision and sincerely appreciate your constructive critique. Your insights have been instrumental in helping us refine our work, and we are grateful for your thorough and thoughtful review.
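The $\mathcal{O}(tcn^{2})$ spreading cost quoted in the rebuttal above comes from $t$ dense $(n \times n) \times (n \times c)$ products. A minimal sketch (hypothetical function and variable names, not the authors' code) of such an iterative error-spreading step, with labeled-node errors clamped at each round:

```python
import numpy as np

def spread_pseudo_errors(S, E, labeled, t):
    """Iteratively spread errors from labeled to unlabeled nodes.

    S       : (n, n) dense propagation matrix
    E       : (n, c) error matrix; unlabeled rows start at zero
    labeled : (n,) bool mask of nodes whose true errors are known
    t       : number of spreading iterations

    Each iteration is one dense (n, n) x (n, c) product, i.e. O(c n^2),
    so the whole loop costs O(t c n^2) -- the complexity quoted above.
    """
    E = E.copy()
    for _ in range(t):
        E_new = S @ E                 # one spreading step
        E_new[labeled] = E[labeled]   # clamp errors of labeled nodes
        E = E_new
    return E

# Tiny example: 3-node path graph, error known only at node 0.
S = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
E = np.array([[1.0, -1.0],
              [0.0,  0.0],
              [0.0,  0.0]])
labeled = np.array([True, False, False])
E_spread = spread_pseudo_errors(S, E, labeled, t=5)
```

After a few iterations the unlabeled rows carry nonzero pseudo errors while the labeled row is unchanged, which is the behavior the rebuttal's complexity analysis refers to.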
Summary: Recently, some studies have been exploring new optimization methods that can replace backpropagation, and one of the popular methods is direct feedback alignment (DFA). This paper adapts DFA to graph neural networks (GNNs). It replaces the gradient with a randomly initialized matrix multiplying the prediction error, so that we do not need to use backpropagation to update the parameters. We can directly update the model parameters based on the prediction error. Besides, for semi-supervised graphs where the label is scarce, we only have prediction errors for a few nodes, so the parameter update is not that effective. To overcome this drawback, the paper assumes that nearby nodes have similar prediction errors, so it uses a smoothing technique to generate pseudo prediction errors for unlabeled nodes. Strengths: (1) The studied problem is very interesting. The non-backpropagation optimization method is a very new field. (2) The method design is good. Though it is an adaptation of direct feedback alignment (DFA) which has been used for Euclidean data, since this area has been rarely studied, I think the adaptation is also a contribution. (3) There are some theories supporting the method. (4) Extensive experiments verify its effectiveness, and it can outperform backpropagation. Weaknesses: (1) In Formula (4), it ignores the activation function when calculating the parameter update. Since the activation function is ReLU, ignoring it might not cause a big problem. However, when we apply other activation functions, or when we use a complex GNN architecture, I am afraid this method might not work as well. (2) From Table 1, we can see that DFA achieves the best performance, while backpropagation is also very good on some datasets. I encourage the authors to add some discussions about when the proposed method is more useful than backpropagation, on what kinds of datasets, and what kinds of GNN models. 
Besides, the authors explained why DFA is better than previous non-backpropagation methods in Lines 254-274, but it would be better if the authors could explain why DFA is better than backpropagation. (3) The authors can consider introducing a little more about the limitations of backpropagation and the non-backpropagation optimization methods in the introduction or related work. Since most readers are not familiar with non-backpropagation optimization methods, it deserves more introduction. (4) The authors can improve some notations and make them clearer. For example, in Formula (5), the multiplication of $B^{(2)}$ and $B^{(1)}$ is written as $B^{(1)}$. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) What is $W$ in Formula (7)? (2) In Section 4.3, the authors point out that W will converge to be similar to B. Since B is randomly initialized, if B is at a high point in the loss landscape, then will the optimization fail? Is the randomness of B important to the performance? (3) I might not have fully understood the intuition behind DFA. I am confused about why DFA performs better than backpropagation. I look forward to intuitive explanations. (4) Table 6 shows that all the non-backpropagation methods require more time than backpropagation. If I have correctly understood, DFA can directly calculate the parameter update for every layer from the loss, so it does not need to calculate the parameter update layer by layer as in backpropagation, so why does it need more time than backpropagation? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations. I encourage the authors to further explain what kinds of datasets and what kinds of GNN models DFA is most suitable for. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and positive assessment of our work. Below, we provide individual responses to your questions. $\textbf{\large Response to W1:} $ We take the GCN with ReLU activation function as an example to elaborate on our method since GCN is one of the most classic GNN models and ReLU is the most popular activation function. Considering another activation function $\sigma(\cdot)$ with a deterministic derivative function $\sigma'(\cdot)$, Eq.4 can be modified as: $\delta\textbf{X}^{(2)}=\textbf{S}^{\text{T}}\textbf{EB}^{(2)}\odot\sigma'(\textbf{H}^{(1)}\textbf{W}^{(1)})$ and $\delta\textbf{X}^{(1)}$ is approximated as $(\textbf{S}^{\text{T}})^{2}\textbf{EB}^{(1)}\odot\sigma'(\textbf{H}^{(0)}\textbf{W}^{(0)})$, which is similar to DFA in MLPs. The symbol $\odot$ denotes element-wise multiplication, and $\sigma'(\textbf{H}^{(k)}\textbf{W}^{(k)})$ is computed during forward propagation. As shown in **Tab.VIII** in the supplementary PDF, our method integrates well with different activation functions (e.g., Sigmoid, Tanh, ELU and LeakyReLU) and delivers consistently excellent performance. Regarding different GNN architectures, our method is portable to complex GNNs. Once $\delta\textbf{X}^{(k)}$ is derived in parallel, the parameters of each GNN layer can be updated by gradient descent. This adaptability ensures that our method remains effective while still eliminating the backpropagation of gradients across different GNN layers. The results in Tab.3 of Sec.5.4 demonstrate the excellent portability of our method. $\textbf{\large Response to W2 and Limitations:} $ Thanks for your suggestion, and we will add more discussion in the revised version. It is observed from Tab.1 that our method has superior overall performance compared to BP. In particular, our method shows a much more significant accuracy improvement on relatively small datasets, such as Texas and Cornell. 
The main reason is that GNNs trained by BP often suffer from heavy overfitting due to the scarcity of supervision on small datasets, especially for semi-supervised tasks. By contrast, the pseudo error generated in our method is not only used for gradient computation when adapting DFA to GNNs, but also serves as a data augmentation strategy, enhancing the robustness of GNNs (shown in Fig.3) and improving the ability to handle overfitting under scarce supervision. The gains derived from the pseudo-error strategy are less significant on larger datasets (e.g., PubMed, Photo), but it still brings overall positive effects to our method. Interestingly, BP does not benefit from the same pseudo-error generation strategy (please see **Tab.IX** in the supplementary PDF, and refer to $\textbf{the response to reviewer buRu}$ for a detailed discussion). Additionally, from Tab.3, it is observed that our method shows greater superiority when the aggregation strategy of the GNN model is isotropic or the receptive field of aggregation is larger (e.g., GCN, SGC and ChebyNet). We attribute this phenomenon to the fact that isotropic and larger-receptive-field GNN layers can better utilize the direct feedback of errors. This is an interesting finding that deserves further exploration. $\textbf{\large Response to W3:} $ Thanks for the suggestion. We will introduce more about the limitations of BP and the non-BP optimization methods in the introduction and related work in the revised version. $\textbf{\large Response to W4:} $ Thanks for your suggestion; we will provide notation for this variable in the revision. $\textbf{\large Response to Q1:} $ Sorry for the typo: the $\textbf{W}$ in Eq.7 should be $\textbf{Z}$, which is the variable to be optimized. $\textbf{\large Response to Q2} $ Sorry for the confusion caused. 
The alignment of $\textbf{W}$ and $\textbf{B}$ means the directions of the two variables tend to converge to within 90 degrees of each other (as shown in Fig.2 (a)) rather than becoming numerically similar. In fact, the initialization of $\textbf{B}$ is not completely random, but follows a normal distribution that satisfies $\mathbb{E}[\textbf{B}\textbf{B}^{T}]\propto \textbf{I}$. Under such constraints, as discussed in Sec. 4.3, $\textbf{W}$ will converge to alignment with $\textbf{B}$ even when starting from a high point. We will add a related description in the revised version for clarity. $\textbf{\large Response to Q3:} $ DFA is a BP-free training method that has not been fully explored yet. Intuitively, DFA updates model parameters via gradient computation that is similar to BP. The difference is that DFA obtains approximate gradients directly from the loss rather than by backpropagating layer by layer. As discussed in Sections 1 and 2, this difference makes DFA more biologically plausible and promising in terms of generalization and parallelism. In previous studies, DFA usually exhibits similar or inferior performance in comparison with BP. The performance improvement of our method mainly derives from the pseudo-error generation strategy, which has been discussed in the response to W2. $\textbf{\large Response to Q4:} $ Yes, DFA can directly calculate the parameter update for every layer from the loss, and it can thus improve training efficiency through parallelization. However, to generalize DFA to graph data we introduce the pseudo-error generator as elaborated in Sec.4.2, and the main time cost comes from the pseudo-error generation (Eqs.7, 8) and node filtering (Eq.9). According to Eqs.7 and 8, the time complexity of the two parts is $\mathcal{O}(tcn^{2})$ and $\mathcal{O}(cn)$ respectively, where $t, c, n$ denote the iteration epochs for error spreading, the number of categories and the number of nodes, respectively. 
--- Rebuttal Comment 1.1: Title: Reply to the rebuttal Comment: Thank you for your reply! Your responses have addressed most of my concerns. The authors conducted a lot of experiments to verify the proposed method from various perspectives. Nonetheless, it seems that this method has some limitations regarding its suitable scenario and training efficiency that are worth further discussion in the paper. It would be better if the authors could include more discussions about the insights and limitations in the paper. I would like to maintain my score. --- Rebuttal 2: Comment: Thank you for your insightful feedback and for recognizing our efforts. We appreciate your suggestion to further discuss the method's limitations, particularly regarding its suitable scenarios and training efficiency. During the rebuttal period, we conducted additional experiments to address these concerns, focusing on the method's suitability for various activation functions, its performance on large-scale graphs, training time, and convergence behavior. These insights have provided a more comprehensive understanding of the method's capabilities and constraints. In the final version of the paper, we will include a dedicated section discussing these aspects, offering a balanced view of the method's strengths and areas for future improvement. We respect your decision and are grateful for your constructive critique. Your feedback has been instrumental in refining our work, and we thank you once again for your thorough and thoughtful review.
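The modified update in the response to W1, $\delta\textbf{X}^{(2)}=\textbf{S}^{\text{T}}\textbf{EB}^{(2)}\odot\sigma'(\textbf{H}^{(1)}\textbf{W}^{(1)})$, can be sketched in a few lines of numpy. This is a hedged illustration (hypothetical shapes, random data, tanh as the example activation), not the authors' implementation: it shows how the hidden layer receives its error signal directly from the output error through a fixed random feedback matrix, with no layer-by-layer backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h, c = 6, 4, 5, 3            # nodes, input dim, hidden dim, classes

# Row-normalized adjacency (with self-loops) of a 6-node path graph.
adj = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
S = adj / adj.sum(axis=1, keepdims=True)

X  = rng.normal(size=(n, d))
W0 = rng.normal(size=(d, h))
W1 = rng.normal(size=(h, c))
B1 = rng.normal(size=(c, h)) / np.sqrt(c)   # fixed random feedback matrix

sigma = np.tanh                             # example non-ReLU activation
def sigma_prime(z):
    return 1.0 - np.tanh(z) ** 2

# Forward pass, caching the hidden pre-activation as in the rebuttal.
Z0 = S @ X @ W0
H1 = sigma(Z0)
Z1 = S @ H1 @ W1
P  = np.exp(Z1) / np.exp(Z1).sum(axis=1, keepdims=True)   # softmax
Y  = np.eye(c)[rng.integers(0, c, size=n)]                # one-hot labels
E  = P - Y                                                # output error

# DFA-style feedback: the hidden layer gets S^T E B (x) sigma'(Z0)
# directly from the output error -- no layer-by-layer backpropagation.
delta_H1 = (S.T @ E @ B1) * sigma_prime(Z0)

# Per-layer updates, computable in parallel once the feedback is formed.
lr = 0.01
grad_W1 = (S @ H1).T @ E
grad_W0 = (S @ X).T @ delta_W0 if False else (S @ X).T @ delta_H1
W1 -= lr * grad_W1
W0 -= lr * grad_W0
```

Because `delta_H1` depends only on the output error and the cached forward quantities, both layer updates can be formed at the same time, which is the parallelism argument made in the response to Q4.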
Rebuttal 1: Rebuttal: Dear ACs, PCs and all reviewers, We would like to express our gratitude to all the reviewers for their valuable comments and feedback on our work. All reviewers recognized the soundness, presentation and quality of our manuscript. We have responded to all the questions posed by each reviewer. In this summary, we aim to provide you with a clear understanding of the changes made during the rebuttal process. $\textbf{\large 1. Inclusion of large-scale graph datasets for experiments}$ In response to the insightful suggestions from reviewer **mUbj**, we discuss the portability of our method to large graph datasets and conduct additional experiments on three new large-scale graph datasets the reviewer suggested. $\textbf{Table X}$ in the supplementary PDF displays the results on **Flickr**, **Reddit** and **ogbn-arxiv** in comparison with seven baseline algorithms. The results demonstrate our method outperforms all the non-BP methods and has comparable performance to BP (within 1\% accuracy), at only a small additional memory cost. $\textbf{\large 2. More extensive ablation studies}$ - As requested by reviewer **buRu**, we plot the training and validation accuracy of the proposed method and BP over training epochs on three datasets to demonstrate the convergence of our method, as shown in $\textbf{Fig.4}$ in the supplementary PDF. The results show our method converges similarly to BP. Additionally, we provide a detailed discussion of the figures to highlight the training effectiveness of our method. 
- As requested by reviewer **k6XA**, to demonstrate that our method is feasible with activation functions other than ReLU, we conduct experiments for our method with four different activation functions (i.e., Sigmoid, Tanh, ELU and LeakyReLU), as shown in $\textbf{Tab.VIII}$ in the supplementary PDF. The results demonstrate our method integrates well with different activation functions and delivers consistently excellent performance. - As requested by reviewer **buRu**, we plot the prediction accuracy as the number of layers increases from 2 to 10, as shown in $\textbf{Fig. 5}$ in the supplementary PDF. Both BP and our method show a decrease in accuracy as the number of layers increases, due to oversmoothing. However, our method is less negatively affected and consistently outperforms BP. This superiority derives from the fact that our method adopts direct feedback of errors for the update of each layer, reducing the problem of oversmoothing and other gradient issues caused by backward gradient accumulation. This observation indicates the potential of our method for deep GNN training. - As requested by reviewer **buRu**, we integrate the proposed pseudo-error spreading component into BP. The experimental results in $\textbf{Tab.IX}$ in the supplementary PDF show that although this component contributes to DFA in our method, it does not positively enhance BP overall. This indicates that DFA is more robust than BP against the noise in generated pseudo errors, which is consistent with the analysis in some previous studies. $\textbf{\large 3. Algorithm complexity analysis}$ In response to the insightful suggestions from reviewer **mUbj**, we provide a detailed analysis of the time complexity of our method. We present the time complexity of each component and demonstrate that our method has the same order of time complexity as BP. 
To apply our method to large datasets, we explain how to reduce the space requirement of graph information aggregation from $\mathcal{O}(n^{2}+nd)$ to $\mathcal{O}(|\mathcal{E}|+nd)$ through algorithm optimization. $\textbf{\large 4. Highlighting the superiority compared with BP and non-BP methods}$ - As requested by reviewer **k6XA**, we provide a discussion about when our method is more useful than BP. We point out that the superiority of our method over BP is more significant when the dataset scale is relatively small, the GNN model is isotropic, or the receptive field of aggregation is large. - As requested by reviewer **k6XA**, we provide an intuitive explanation of why our method outperforms BP. We attribute the superior performance of our method to the data augmentation from our well-designed pseudo-error generator and the robustness of the DFA mechanism to noisy augmentation. - As requested by reviewer **mUbj**, we provide an explanation of why our method shows a notable variation in the performance improvements observed across different datasets. We want to express our gratitude to the reviewers for their valuable suggestions and would greatly appreciate any further questions they may have regarding our paper. If our responses have adequately addressed your questions, we would kindly ask for your reconsideration of the scores. Pdf: /pdf/cae14fb1d8e431100d13c50ec3b38be93f7e9650.pdf
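The space reduction from $\mathcal{O}(n^{2}+nd)$ to $\mathcal{O}(|\mathcal{E}|+nd)$ mentioned in point 3 amounts to storing the propagation matrix as an edge list instead of a dense $n \times n$ array. A toy sketch (hypothetical names; not the authors' implementation) showing that edge-list aggregation matches the dense computation while only storing $|\mathcal{E}|$ entries for the graph:

```python
import numpy as np

def aggregate_dense(S, X):
    """Dense aggregation: storing S costs O(n^2), plus O(n d) for X."""
    return S @ X

def aggregate_sparse(edges, weights, X):
    """Edge-list aggregation: O(|E|) for the graph, plus O(n d) for X."""
    out = np.zeros_like(X)
    for (i, j), w in zip(edges, weights):
        out[i] += w * X[j]   # accumulate neighbor j's features into node i
    return out

# Tiny check that the two forms agree on the same graph.
n, d = 4, 3
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (3, 3)]
weights = [0.5, 0.5, 0.5, 0.5, 1.0]
S = np.zeros((n, n))
for (i, j), w in zip(edges, weights):
    S[i, j] = w
assert np.allclose(aggregate_dense(S, X), aggregate_sparse(edges, weights, X))
```

In practice a sparse-matrix library (CSR storage) would replace the Python loop, but the memory argument is the same: only the nonzero entries of the propagation matrix are kept.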
NeurIPS_2024_submissions_huggingface
2024
Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms
Accept (poster)
Summary: The present work studies the last iterate convergence of learning algorithms in zero sum games. Existing results have shown that although OMWU and OGDA both enjoy the $O(1/T)$ ergodic convergence rate, their best existing last iterate convergence rates exhibit a nontrivial gap. This paper constructs a particular hard example showing that such a gap is due to a fundamental limitation of the OMWU algorithm, rather than the sub-optimality of existing analysis. From this hard example, the key insight is that fast last iterate convergence requires learning algorithms to forget the past. Strengths: This paper is very cool. Last iterate convergence is an important problem for learning in games, and this paper naturally motivates it by the existing theoretical and empirical gap between OMWU and OGDA. To tackle this problem, the paper then presents a particularly nontrivial analysis on the learning dynamics of Optimistic FTRL in games, despite the apparent simplicity of the constructed example. The takeaway message on the importance of forgetting is elegant, which might inspire future works on algorithm design. I also appreciate the graphical illustration of the learning dynamics, which makes the very technical proof intuitively easier to understand. Related works seem to be thoroughly discussed. Frankly speaking, the analysis in this paper is quite involved, so I only have a high level understanding of the proof while struggling with the details. However, as far as I can see, the high level intuition makes sense, and the numerical experiments are quite helpful. Weaknesses: This paper is already strong in my opinion, so I don't have any major complaint. One possible aspect to improve is the presentation. There are certain parts in the analysis that are particularly involved, so it might be better to write less quantitatively in the main paper, and leave those parts to the appendix. For example, Assumption 2 and the discussion right after it. 
I would also appreciate a further simplified proof sketch, specifically, more words and less math. Technical Quality: 4 Clarity: 3 Questions for Authors: Some minor comments: - In line 51-51 it is quoted from (van Erven, 2021) that FTRL is conventionally believed to be better than OMD. From the perspective of online convex optimization, I feel this is actually a bit more nuanced, as the examples I'm aware of (where FTRL beats OMD) concern time varying learning rates, which is different from the constant learning rates considered in this paper. - The wording of Theorem 2 could be improved. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and the constructive comments on the presentation. We will incorporate your suggestions in the revised version of the paper. Specifically, we will update the flow in section 3 and make Assumption 2, the discussion, and the proof sketch more reader-friendly. Below, we answer your questions. Q1: *In line 51-51 it is quoted from (van Erven, 2021) that FTRL is conventionally believed to be better than OMD. From the perspective of online convex optimization, I feel this is actually a bit more nuanced, as the examples I'm aware of (where FTRL beats OMD) concern time varying learning rates, which is different from the constant learning rates considered in this paper.* A: Thank you for raising this point! We will add a remark for this claim in the revised paper. We also conjecture that slow last-iterate convergence rates for OFTRL algorithms persist even for time-varying learning rates. In particular, we conjecture that our hard example works for decreasing step sizes. In the rebuttal PDF, we provide numerical experiment results for the AdaGrad step size, an adaptively decreasing step-size schedule. The results show that the last iterate of the decreasing step-size version exhibits qualitatively similar behavior to its constant step-size counterparts. We conjecture that our results could be extended to the case of decreasing step sizes. However, a time-dependent learning rate that occasionally increases the step size or other periodic learning rate schedules might be able to address the issue of non-forgetfulness. We will discuss time-varying step sizes in the next version of the paper. We leave the question of how time-varying step sizes affect the last-iterate convergence rates of OFTRL algorithms as an interesting and important open question. Q2: *The wording of Theorem 2 could be improved.* A: Thank you for the comment! We actually accidentally omitted the term “there is no function $f$ such that” before “... 
the corresponding learning dynamics” in the statement. We will improve the wording in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions.
Summary: The paper investigates the last-iterate convergence properties of several classes of algorithms employed in zero-sum matrix games. As a core contribution, it is shown that a large class of iteration rules cannot exhibit instance-independent convergence in zero-sum games of dimension 2. The result is further generalized to larger dimensions. Strengths: The paper tackles an important problem and makes a potentially significant contribution to our understanding of learning dynamics in zero-sum matrix games. I think the results are rigorously developed, and while I did not verify the details of all the proofs, they seem to be correct. Weaknesses: The concept of a "forgetful algorithm" could be made more explicit, perhaps as a definition. It is not clear to me what property makes an algorithm forgetful by construction. An explicit definition (if it is possible at all) could greatly clarify the work. As a possible weakness, the stated negative results apply to a particular class of algorithms (OFTRL) and potentially might be sidestepped. The formulation of the main theorems could be clarified or perhaps simplified. For instance, Theorem 2 has a rather complicated statement, which to my understanding just means either the duality gap cannot go to zero (in which case it can be stated as such directly), or the duality gap cannot go to zero uniformly (in which case item 1 should have a statement "over all possible loss matrices"). I think one weakness of the paper is the presentation/organization. I found it difficult to follow the main ideas and the main contribution of the paper. For instance, the generalization to higher dimensions, while interesting, seems to be a minor contribution from a theoretical perspective, while significant space is dedicated to its proof ideas. 
Technical Quality: 3 Clarity: 2 Questions for Authors: Is it possible to extract a "forgetfulness" property for an arbitrary learning algorithm that must imply the algorithm has slow last iterate convergence? Are there significant differences between Theorems 1 and 2 to warrant stating them separately? Upon first read, it was difficult to understand what the main difference was, and Theorem 2 seems to be a corollary of Theorem 1 looking at the proof in the appendix. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging that we made a significant contribution to an important question and for the constructive comments on the presentation. We will incorporate your suggestions and improve the presentation in the next version of the paper. Below, we address your questions. Q1: *Is it possible to extract a "forgetfulness" property for an arbitrary learning algorithm that must imply the algorithm has slow last iterate convergence?* A: Our paper aims to understand the slow last-iterate convergence of OMWU and, more generally, OFTRL-type and certain Optimistic OMD-type algorithms. Our proof and hard game instances build on the intuition that these algorithms lack forgetfulness: they do not forget the past quickly and play actions that performed well in the past, even if their recent performance is bad (see also Lines 70-99 in the Introduction). We are the first to connect the intuitive idea of forgetfulness to last-iterate convergence. Even without a formal definition of “forgetfulness”, we believe that this intuitive connection with last-iterate convergence is a novel contribution. Putting the finger exactly on a formal, precise definition of forgetfulness appears to be challenging. We did try, but found it rather difficult. For example, our first attempt was to define it as any algorithm that is "mean-based", that is, the decision at each time depends on the past gradients solely through their mean. Such algorithms give equal weights to all past observations and are thus intuitively not forgetful. However, we quickly realized that while this covers all FTRL algorithms, it does not even cover their optimistic version (OFTRL, which is our main focus). We hope this convinces the reviewer that finding a formal and general definition for the lack of forgetfulness could be non-trivial. At any rate, the “lack of forgetfulness” is just an intuitive explanation for the phenomenon. 
We view our most important (and surprising) result to be the fact that all known OFTRL variants have these convergence issues. It would be interesting to come up with a formal definition that captures a broader class of algorithms, though that seems difficult and we leave it as future work. Q2: *Are there significant differences between Theorems 1 and 2 to warrant stating them separately? Upon first read, it was difficult to understand what the main difference was, and Theorem 2 seems to be a corollary of Theorem 1 looking at the proof in the appendix.* A: We clarify that Theorem 2 is indeed a corollary of Theorem 1. Thank you for pointing it out; we will make it clear in the next version of the paper. Theorem 1 is a more technical theorem that states the OFTRL learning dynamics have a constant duality gap even after $1/\delta$ steps. Theorem 2 is a more high-level theorem that states OFTRL learning dynamics cannot have a last-iterate convergence rate that solely depends on the dimension and the time. Thus, game-parameter dependence is unavoidable for OFTRL algorithms, which provides a clearer comparison between OFTRL and OGDA since the latter has an $O(\frac{\textnormal{poly}(d_1, d_2)}{\sqrt{T}})$ last-iterate convergence rate. In summary, while Theorem 2 is a direct corollary of Theorem 1, we state Theorem 2 separately to highlight our main contribution and the difference between OFTRL and OGDA. C1: *As a possible weakness, the stated negative results apply to a particular class of algorithms (OFTRL) and potentially might be sidestepped.* A: Although our results only hold for OFTRL, we believe they are still significant given that OFTRL is arguably one of the most studied classes of online learning algorithms and covers many standard algorithms, including the classical OMWU. We also want to emphasize that the existing positive results are even more restrictive: only OGDA and OMWU have proven concrete last-iterate convergence rates. 
Inspired by OMWU’s weaker convergence guarantees compared to OGDA, we demonstrate that this disadvantage is unavoidable and, in fact, applies to a broader class of OFTRL algorithms. --- Rebuttal 2: Comment: I thank the authors for the detailed responses, reading the other reviews and the rebuttal I understand that the main point of the work is analyzing the "OFTRL" class of algorithms. I agree with the other reviewers that the work merits a better score and have raised mine.
Summary: This paper shows a limitation of optimistic multiplicative weights (OMWU) with a fixed learning rate: the last-iterate convergence rate can be arbitrarily large, depending on a game-dependent constant in normal form games. This demonstrates that the current upper bounds on convergence are not loose and that there is a real barrier to this dynamic. Broader Context: Optimistic online learning algorithms play an important role in solving games. The average iterates of OMWU have been shown to converge in zero-sum games at a rate of 1/T, which is better than the non-optimistic counterpart. Another important algorithm in this context is optimistic gradient descent ascent (OGDA), which has been shown to have last-iterate convergence of O(1/sqrt{T}), where the constant hides polynomial dependence on the problem parameters. Last-iterate convergence is a desirable property in the context of games. Since OMWU is a central tool for solving games, understanding its limitations is a fundamental question. This paper reveals an inherent limitation of OMWU (and more generally, for some algorithms within the optimistic FTRL framework), showing that OGDA might sometimes enjoy better last-iterate convergence. The main property responsible for this limitation is the "lack of forgetfulness." Strengths: The paper answers a fundamental question in the "learning in games" community (solving games using online learning dynamics). I also like the presentation of the paper. For example, the authors made an effort to explain the intuition behind the construction of the hard game. Weaknesses: My only criticism is the following: It might be the case that OMWU can have good last-iterate convergence when using changing learning rates. These are sometimes used to avoid such an issue of forgetfulness. 
(But I believe that analyzing a fixed learning rate is a significant contribution on its own.) Technical Quality: 3 Clarity: 4 Questions for Authors: Do you think using time-dependent learning rates might improve the last-iterate convergence of OMWU? If so, it may be worth discussing or noting as a limitation. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See the above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging comments! Q: *Do you think using time-dependent learning rates might improve the last-iterate convergence of OMWU? If so, maybe it's worth discussing it or writing it as a limitation.* A: We conjecture that the slow last-iterate convergence of OMWU persists for time-dependent step sizes (learning rates). In particular, we conjecture that our hard example works for decreasing step sizes. In the rebuttal PDF, we provide numerical experiment results for the AdaGrad step size, an adaptively decreasing step-size schedule. The results show that the last iterate of the decreasing step-size version exhibits qualitatively similar behavior to its constant step-size counterpart. However, a time-dependent learning rate that occasionally increases the step size, or other periodic learning-rate schedules, might be able to address the issue of non-forgetfulness. We will discuss time-varying step sizes in the revised version of the paper. We leave the question of how time-varying step sizes affect the last-iterate convergence rates of OFTRL algorithms as an interesting and important open question. --- Rebuttal Comment 1.1: Comment: Thanks for your great response. I will keep my positive score.
Summary: Through some hard instances of games, the authors study the fundamental differences in convergence between two broad classes of algorithms, OOMD and OFTRL. Particular algorithms of interest include the OGDA and OMWU algorithms, respectively. They show that OFTRL (in particular OMWU) necessarily exhibits slow last-iterate convergence, which leads to an undesirable dependence on the underlying game. Strengths: This paper seems to capture an important property that differentiates algorithms such as OGDA and OMWU, two extremely popular optimistic algorithms known to achieve last-iterate convergence in zero-sum games. This paper mounts further evidence of the "slowness" of OMWU and shows that game-dependent constants appearing in the guarantees are unavoidable! I think this is an important finding for this area. Weaknesses: This is a focused study on showing certain properties of classes of algorithms such as OFTRL and OOMD. In general, it is not clear whether some of the results apply to every regularizer satisfying the assumptions or only to the particular ones of interest, such as the Euclidean, entropic, etc. Beyond a few clarifications which I have asked in the next section, I do not think there are major weaknesses. Technical Quality: 4 Clarity: 3 Questions for Authors: 1) Please clarify whether Assumptions 1 and 2 are satisfied by the specific regularizers such as the entropic, Euclidean, and log regularizers, or by general regularizers that are 1-strongly convex w.r.t. the $\ell_2$ norm. 2) Does the hard example still hold with vanishing step sizes? 3) Is the high-dimensional example applicable only for the specific regularizers mentioned? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review! Below, we address your questions. Q1: *Please clarify whether Assumptions 1 and 2 are satisfied by the specific regularizers such as the entropic, Euclidean, and log regularizers, or by general regularizers that are 1-strongly convex w.r.t. the $\ell_2$ norm.* A: We clarify that we only verify Assumptions 1 and 2 for specific regularizers, not for all regularizers that are 1-strongly convex w.r.t. the $\ell_2$ norm. In the current manuscript, we verify the two assumptions for the negative entropy, Euclidean, and log regularizers. We can also show that the two assumptions hold for the family of (negative) Tsallis entropy regularizers, parameterized by $0 < \beta < 1$: $R(x) = \frac{1 - \sum x[i]^{\beta}}{1-\beta}$. Together with the entropy, Euclidean, and log regularizers, we believe our results cover every commonly used regularizer in the online learning literature. We will add the proof for Tsallis entropy regularizers in the revised version of the paper. Q2: *Does the hard example still hold with vanishing step sizes?* A: This is a great question! Our current proof only works for fixed step sizes. In the rebuttal PDF, we provide numerical results for the AdaGrad step size, an adaptively decreasing step-size schedule. The results show that the last iterate of the decreasing step-size version exhibits qualitatively similar behavior to its constant step-size counterpart. We conjecture that our results could be extended to the case of decreasing step sizes. We will add a discussion, and we leave whether OFTRL with vanishing/variable step sizes could have a fast last-iterate convergence rate as an important future direction. Q3: *Is the high-dimensional example applicable only for the specific regularizers mentioned?* A: The reduction in the high-dimensional setting requires structural properties of the regularizers and might not hold for all 1-strongly convex regularizers. 
As shown in the current paper, the reduction works for negative entropy, Euclidean, and log regularizers. We can also prove that the reduction works for the family of Tsallis entropy regularizers (see Answer to point 1 above). We will add this result in the revised version of the paper. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I thank the authors for the clarifications and the experiments with vanishing step sizes. I will maintain my already positive score.
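(Illustrative aside, ours rather than the authors': the Tsallis entropy family quoted in the rebuttal above is easy to sanity-check numerically. By L'Hôpital's rule, as $\beta \to 1$ the regularizer $R(x) = \frac{1 - \sum_i x[i]^{\beta}}{1-\beta}$ converges to the negative entropy $\sum_i x[i] \log x[i]$. The function names in the sketch below are our own.)

```python
import numpy as np

def tsallis(x, beta):
    # Negative Tsallis entropy regularizer: R(x) = (1 - sum_i x_i^beta) / (1 - beta), 0 < beta < 1.
    return (1.0 - np.sum(x ** beta)) / (1.0 - beta)

def neg_entropy(x):
    # Negative Shannon entropy: sum_i x_i * log(x_i).
    return np.sum(x * np.log(x))

x = np.array([0.2, 0.3, 0.5])
# As beta -> 1, the Tsallis regularizer approaches the negative-entropy regularizer.
for beta in [0.9, 0.99, 0.999]:
    print(beta, tsallis(x, beta) - neg_entropy(x))
```

The gap shrinks linearly in $1-\beta$, consistent with the first-order Taylor expansion of $\sum_i x[i]^{\beta}$ around $\beta = 1$.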
Rebuttal 1: Rebuttal: **Discussion on Dynamic Step Sizes**: Our negative results hold for OFTRL with *fixed* step sizes. We conjecture that the slow last-iterate convergence of OFTRL persists even with *dynamic* (time-varying) step sizes. In particular, we believe our counterexamples still work for OFTRL with decreasing step sizes. This is because decreasing the step size makes the players move even more slowly, and they may be trapped in the wrong direction for a longer time due to the lack of forgetfulness. In the PDF, we include numerical results for OFTRL with an adaptive step size akin to AdaGrad [1], which support our intuition. We observe that the duality gap of the last iterate exhibits qualitatively similar behavior to its fixed step-size counterparts. Investigating the effect of dynamic step sizes on last-iterate convergence rates is an interesting future direction. [1] Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." Journal of Machine Learning Research, 2011. Pdf: /pdf/5f7c9ec35b5fdc4e87ec092b6ccc6601ea196c5a.pdf
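(For readers who want to experiment with the dynamics discussed in this thread, here is an illustrative sketch, ours, not the paper's: OMWU with a fixed step size on a well-conditioned 2x2 zero-sum toy game, tracking the duality gap of the last iterate. The payoff matrix, step size, and starting points are arbitrary choices and deliberately *not* the paper's hard instance.)

```python
import numpy as np

# Toy zero-sum game (matching pennies); the row player maximizes x^T A y.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def duality_gap(A, x, y):
    # Exploitability of (x, y): zero exactly at a Nash equilibrium.
    return (A @ y).max() - (A.T @ x).min()

def omwu(A, eta=0.1, T=5000):
    # One standard form of optimistic MWU with simultaneous updates and fixed step size.
    x = np.array([0.8, 0.2])   # start away from the uniform equilibrium
    y = np.array([0.3, 0.7])
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(T):
        gx, gy = A @ y, A.T @ x
        # Optimistic correction: use 2 * (current payoff vector) - (previous one).
        x = x * np.exp(eta * (2 * gx - gx_prev))
        y = y * np.exp(-eta * (2 * gy - gy_prev))
        x, y = x / x.sum(), y / y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y

x, y = omwu(A)
print("last-iterate duality gap:", duality_gap(A, x, y))
```

On this easy game the last iterate converges; the paper's point is that on carefully constructed games a game-dependent constant can make such last-iterate convergence arbitrarily slow.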
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Identifying Selections for Unsupervised Subtask Discovery
Accept (poster)
Summary: This paper addresses offline subtask discovery from a causal perspective by identifying subgoals as selections, targeting long-horizon tasks and the acquisition of transferable skills. The algorithm design is well-motivated and shows superior performance in offline subtask discovery. Strengths: (a) The causal-graph-based algorithm design is well motivated and theoretically solid. (b) The presentation is elegant and easy to follow. (c) The superiority of the proposed algorithm is shown through both quantitative and qualitative studies. Weaknesses: (a) Kitchen is a challenging benchmark, but each of its tasks consists of a sequence of subtasks. Consequently, subtask discovery from this well-structured offline data is relatively straightforward. It would be beneficial to demonstrate the subtask discovery capability of the proposed algorithm on tasks that lack clearly defined, semantically obvious subtask structures. (b) This work also focuses on multi-task learning, but it is confined to scenarios where the tasks in the set are merely different compositions of the same set of subtasks. More generalized multi-task learning can be a future direction. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see the weaknesses part. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are profoundly thankful for your valuable feedback and the time you have dedicated to our work, which will surely improve the quality of this manuscript. In light of your suggestion, we have incorporated additional discussions, as well as experiments, to demonstrate the generalizability of our method. **Q1**: *“Kitchen is a challenging benchmark, but each of its tasks consists of a sequence of subtasks. Consequently, subtask discovery from this well-structured offline data is relatively straightforward. It would be beneficial to demonstrate the subtask discovery capability of the proposed algorithm in tasks that lack clearly defined, semantically obvious subtask structures.”* **A1**: We completely agree with you that, as you mentioned, Kitchen is a well-structured dataset and is thus easier to study. We take it as a starting point to develop a principled way to tackle subtask discovery problems; more practical issues will surely need to be discussed in the future. In our setting, we deal with multi-task learning problems in which new tasks are arbitrarily composed of learned subtasks. Like others, we do not use the skill structure annotations in either training or testing, so the semantically meaningful "subtasks" used in the paper are simply a way to refer to the learned segmentations. It is not necessary to have them pre-defined, but it is good to have them well-structured for easier evaluation. At the same time, we do not rely on a more restricted subtask structure than other works; we adopt a common setting in the subtask-discovery literature. In a comprehensive review of the datasets considered by subtask-discovery works, we found that, both to evaluate the proposed methods and for ease of interpretation, it is common to use tasks that are naturally composed of semantically meaningful and clear subtasks. 
For example, among the latest works: [1] considers the grid-world setting BabyAI [2], which uses well-structured language (e.g., GoTo, Pickup) to indicate subgoals to accomplish, and the ALFRED [3] environment, which asks the agent to perform household tasks; both involve clearly defined subtasks such as navigating in the kitchen, putting items into the microwave, etc. Another work [4] considers MetaWorld, training on tasks that are composed of a sequence of subtasks such as grasp and move. Overall, we agree with your insight that a semantically clearly defined subtask structure is a straightforward scenario. Meanwhile, this kind of subtask structure is also employed by all related works, and we think that the Kitchen environment is a reasonable choice for this first step of exploring a theoretical framework for subtask discovery; your great suggestion inspires us to see how the algorithm would perform in other environments. [1] Fu, Haotian, et al. "Language-guided Skill Learning with Temporal Variational Inference." In Forty-first International Conference on Machine Learning (ICML 2024). [2] Chevalier-Boisvert, et al. BabyAI: A platform to study the sample efficiency of grounded language learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. [3] Shridhar, M., et al. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 10737–10746. Computer Vision Foundation / IEEE, 2020a. [4] Zheng, Ruijie, et al. "PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control." In Forty-first International Conference on Machine Learning (ICML 2024). **Q2**: *“This work also focuses on multi-task learning, but it is confined to scenarios where the tasks in the set are merely different compositions of the same set of subtasks. 
More generalized multi-task learning can be a future direction.”* **A2**: We sincerely appreciate you bringing this to our attention. Generalization to new scenarios that encompass unseen subtasks, i.e., when no discovered subtasks are reusable, can be challenging and deserves further investigation. In light of your great suggestion, we have made the following efforts, including new experiments in our revision: * *Possible solutions*: We provide some preliminary solutions to the problem that you have insightfully pointed out for more generalized scenarios: for example, considering the ability to discover new subtasks when the target subtask is out-of-distribution with respect to the training set, or learning subtasks at a more granular level such that more primitive actions can serve to facilitate learning new tasks. * *Additional experiments*: We conducted additional experiments in the Kitchen environment and tested the method’s generalizability to a new task composed of one more subtask, i.e., $5$ sequential manipulation subtasks. Such generalization to longer-horizon tasks is challenging and is not taken into account by our MIL baseline [3]. Please refer to the uploaded PDF for the result and the **Global Response** for experimental details. We deal with scenarios where all training and testing tasks share the same set of subtasks, but with different (unseen) compositions, just like other related works [1] [2] [3], and we have validated our method through challenging experiments. We are greatly thankful for your suggestion, as it undoubtedly encourages us to pursue further investigation. [1] Yifeng Zhu, et al. Bottom-up skill discovery from unsegmented demonstrations for long-horizon robot manipulation. IEEE Robotics and Automation Letters, 7(2):4126–4133, 2022. [2] Yiding Jiang, et al. Learning Options via Compression. [3] Jiayu Chen, et al. Multi-task Hierarchical Adversarial Inverse Reinforcement Learning. In Proceedings of the 40th International Conference on Machine Learning, pages 4895–4920. 
PMLR, July 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed feedback. I would keep my rating since not many revisions have been made.
Summary: The paper studies the subtask decomposition problem. The paper proposes a formal definition of subtasks as the outcome of selections. The proposed seq-NMF is introduced to learn the subgoals and extract subtasks, conforming with the proposed theory. The experimental results show strong performance when transferring to new tasks. Strengths: - The paper is well-written and easy to follow. - The paper offers a formal definition of subtasks, addressing the challenge of interpretability that has not been discussed in prior works. - The insight that the segmented subtasks should be consistent with the true generating process is novel and interesting. - The paper validates the proposed theory on unseen long-horizon manipulation tasks, and the results show strong performance of the proposed method. Weaknesses: - The paper only considers state sequences to extract the subgoals and subtasks. In real-world settings, the states are not available. So it is not clear if the proposed theory is also applicable to the real world with various modalities, e.g., visual perception. It would be nice to explore and discuss the possibility. - The generalization of the proposed definition of the subtask is rather specific, and the shift of task distribution is not significant. So the generalization of the proposed method might be less convincing. Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough review and the constructive insights you provided, which will undoubtedly enhance the quality of our manuscript. In response to your comments, we have included several new discussions in the revised manuscript, as well as additional experiments, particularly addressing more generalized multi-task learning scenarios. **Q1**: *“The paper only considers state sequences to extract the subgoals and subtasks. In real-world settings, the states are not available. So it is not clear if the proposed theory is also applicable to the real world with various modalities, e.g., visual perception. It would be nice to explore and discuss the possibility.”* **A1**: The point that you raised is insightful and inspiring, and application to various data modalities will surely be an important aspect. It requires different techniques, and there are certainly many practical issues to consider. In this work, we mainly focus on a theoretical analysis of the understanding of subtasks, and on rigorously validating our interpretation. * *Possible solutions*: When states are not available, it is still possible to extract useful representations from raw input data. For example, the CI test for high-dimensional data can be performed at the low-dimensional embedding level, followed by matrix decomposition and hierarchical imitation learning. In order to obtain embeddings that are beneficial for downstream tasks, we essentially turn to a representation learning problem that learns a compact way to encode the features sufficient for subtask identification and transfer, while mitigating the effect of redundant features. Various techniques can be applied to facilitate this representation extraction process [1] [2]. Intuitively, the proposed method can be adapted to more data modalities. We believe that our results will provide inspiration to the community for exploring more general scenarios, including multi-modality data. 
It is a direction that is worth further investigation. [1] Huang, Biwei, et al. "Adarl: What, where, and how to adapt in transfer reinforcement learning." arXiv preprint arXiv:2107.02729 (2021). [2] Liu, Yuren, et al. "Learning world models with identifiable factorization." Advances in Neural Information Processing Systems 36 (2023): 31831-31864. **Q2**: *“The generalization of the proposed definition of the subtask is rather specific and the shift of task distribution is not significant. So the generalization of the proposed method might be less convincing.”* **A2**: We sincerely thank you for pointing out this possible limitation. In order for a wider generalization, we need to consider more practical issues. Your insight inspires us to provide a more general answer to the problem such that we are able to handle more distribution shift scenarios. In light of your suggestion, we have made the following efforts, including new experiments, in the updated manuscript: * *More experimental results on distribution shift*: We conducted additional experiments by considering a distribution shift problem that involves longer-horizon tasks, and incorporated the results in the uploaded pdf. Specifically, under the Kitchen environment, we keep the same training set of tasks (each task is composed of $4$ sequential manipulation subtasks), and test the method’s generalizability to a new task composed of one more subtask, i.e. $5$ random sequential manipulation subtasks. Such generalization to longer-horizon tasks is not taken into consideration by some of the existing works [3], and our empirical results show that our formulations are able to deal with such a more challenging distribution shift problem. Please refer to the **Global Response** for experiment details. 
* *Rationale for adopting the current generalization cases*: We totally agree with you that our settings do not encompass all kinds of task distribution shifts, but focus on the shift where new tasks are randomly composed of seen subtasks. This assumption on task distribution is also widely used in other subtask/skill discovery literature. For instance, BUDS [1] inquires about its ability to solve new task variants that require different subtask combinations; LOVE [2] tests on the grid-world environment where each subtask corresponds to picking up a specific object; MH-AILR [3] also uses the Kitchen environment, in which the tasks require the sequential completion of four specific subtasks. In short, all works consider the generalization in cases where the training set of subtasks provides full support of the target set of subtasks; our generalization experiment in the Kitchen environment is not more limited than other works. * *Possible solutions*: We provide preliminary solutions to such a problem in more types of task distribution shifts: for example, utilizing RL combined with IL when the target subtask is out-of-distribution of the training set; learning subtasks on a more granular level such that more primitive actions can serve to facilitate the discovery of new subtasks. We believe that our results could serve as a theoretical basis, which may prove helpful for many future explorations by the community. Meanwhile, we take this work as a starting point, and we aim to develop a principled, easily-extensible way to resolve the subtask-discovery problem. We thank you for your great advice, which definitely inspired us to do so. [1] Yifeng Zhu, et al. Bottom-up skill discovery from unsegmented demonstrations for long-horizon robot manipulation. IEEE Robotics and Automation Letters, 7(2):4126–4133, 2022. [2] Yiding Jiang, Evan Zheran Liu, Benjamin Eysenbach, J Zico Kolter, and Chelsea Finn. Learning Options via Compression. 
[3] Jiayu Chen, Dipesh Tamboli, Tian Lan, and Vaneet Aggarwal. Multi-task Hierarchical Adversarial Inverse Reinforcement Learning. In Proceedings of the 40th International Conference on Machine Learning, pages 4895–4920. PMLR, July 2023. --- Rebuttal Comment 1.1: Title: Response for the authors Comment: I appreciate the authors for their efforts in additional experiments and explanations that address my concern. - The authors clarify the gap from real-world setup and provide a potential solution. I am convinced that the paper can be a first step in applying its theoretical framework for subtask discovery and further inspire work toward real-world subtask discovery. - The authors strengthen the verification of generalization by testing the longer horizon of tasks. After reading all the reviews, which I believe have been well addressed by the response from the authors, I am willing to raise my score to strong accept. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your encouragement, and we are also excited to explore more complex scenarios building upon the selection framework. We are glad that the additional clarifications and experiments are helpful. We thank you for the valuable insights you have contributed.
Summary: This paper studies the problem of decomposing expert trajectories (in the context of imitation learning) into sub-trajectories corresponding to subtasks. First, the authors introduce a causal framework to understand and explain what subtasks mean in this context. Subtasks are then defined to be variables that reduce uncertainty in the observed expert actions (or selections in the language of causal analysis). Motivated by this definition, a matrix factorization based task decomposition algorithm is presented. Experiments on multiple environments demonstrate that the algorithm is effective at discovering subtasks from expert trajectories. Strengths: - There are many novel aspects to the paper. The authors provide an argument to view subtasks as selection variables and this insight is used to develop the task decomposition algorithm presented in the paper. The task decomposition algorithm also seems novel and the use of matrix factorization here is very apt. - Experiments presented in the paper indicate that the proposed approach is effective at discovering useful subtasks. The experiments also show that the subtask decomposition algorithm enables learning policies (in the context of imitation learning) that generalize well to new and unseen tasks. In this context of transfer learning, the proposed approach outperforms many existing state-of-the-art methods. - I believe that the paper offers interesting insights relevant to the NeurIPS community and RL researchers working on hierarchical and compositional RL. These insights have the potential to inspire new directions of research. Weaknesses: One primary weakness is that some parts of the paper lack clarity. For instance, the mathematical objective in Equation (1) is not very clear although the overall idea and intuitive definition is clear from the text. Defining the notations early on in the paper would make it much easier to read and understand. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - In the second line of Equation (1), should the "forall" and "exists" terms be switched? Should it read as "for all sub-trajectories, there exists a subtask..."? - The use of convolution in the matrix factorization algorithm suggests that it allows multiple subtasks to be "active" at a given step. Could you provide some intuition behind this? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I don't think there are any major limitations that need to be discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our work and for your insightful comments. Your valuable feedback has significantly enhanced the clarity of our manuscript. **Q1**: *“One primary weakness is that some parts of the paper lack clarity. For instance, the mathematical objective in Equation (1) is not very clear although the overall idea and intuitive definition is clear from the text. Defining the notations early on in the paper would make it much easier to read and understand.”* **A1**: We sincerely thank you for your great feedback and suggestions. Following your suggestion, we have carefully gone through the paper to make sure the notation is reader-friendly. Specifically, in the updated version of the manuscript, we define the notations early, before using them: > “**Problem Formulation and Notations**: Given the above context, we formulate the considered imitation learning problem as follows: we have a distribution of tasks $\mathcal{P}_e(\mathcal{T})$ and a corresponding set of expert trajectories $\mathcal{D}=\\{\tau_n\\}\_{n=1}^N$, and we aim to let the imitator learn a policy that can be transferred to new tasks that follow a different distribution $\mathcal{P}_i(\mathcal{T})$. Each task sampled from $\mathcal{P}\_{\cdot}(\mathcal{T})$ is generated by an MDP and composed of a sequence of options $\{\mathcal{O}_j, \cdots\}$, where $\mathcal{O}_j =\langle \mathcal{I}_j, \pi_j, \beta_j \rangle _j$. We use $\mathbb{\mathcal{O}}=\bigcup\_{j=1}^J \mathcal{O}_j$ to denote all $J$ options, and ${\xi\_{p}=\\{\mathbf{s_t},\mathbf{a_t}, ...\\}\_{t=1}^{\leq L} }$ as a sub-sequence of states and actions $(\mathbf{s_t}, \mathbf{a_t})$ from any trajectory $\tau_n$. Each trajectory can be partitioned into sub-sequences ${\xi\_{p}}$ with maximum length $L$.” **Q2**: *“In the second line of Equation (1), should the "forall" and "exists" terms be switched? 
Should it read as "for all sub-trajectories, there exists a subtask..."?”* **A2**: We highly appreciate your insightful question. We agree with your interpretation, "for all sub-trajectories, there exists a subtask such that the sub-trajectory is sampled from the option, ...", and have adjusted the "for all" and "exists" notation accordingly. You are completely correct, and it is better to present the notation in the way you suggested. **Q3**: *“The use of convolution in the matrix factorization algorithm suggests that it allows multiple subtasks to be "active" at a given step. Could you provide some intuition behind this?”* **A3**: We sincerely thank you for the great question. We fully agree with you that, in practice, it is possible to have undesired duplicate activations (which could be due to optimization errors such as locally optimal solutions), while the intuition is that subtasks should have distinct patterns and be distinctively activated. At the same time, it is worth noting that our formulation already discourages such situations. More specifically, given that our main insight is to view subgoals as binary selections, we enforce the learned coefficient matrix to also be $0/1$. If multiple subtasks were activated at the same time, the sparsity constraint $R_1$ as well as the redundancy constraint $R_\text{sim}$ would penalize those competing factors. Therefore, the goal of our regularization is to discourage the cases that you kindly mentioned; this effect has been empirically verified in our experiments, where we observed no duplicated activations. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and the additional experiments. I think the paper is definitely clearer to read after the revision. All my questions have been addressed by the authors and I remain in support of accepting the paper and have also raised my score to reflect this. 
--- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We sincerely appreciate your detailed review and insightful suggestions. Thank you for checking the response and for your support.
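(To make the convolution discussion in Q3/A3 above concrete, here is an illustrative sketch, ours rather than the paper's implementation, of the reconstruction step used in convolutional (seq-)NMF: each binary activation H[k, t] = 1 stamps a temporal template W_k into the reconstruction starting at time t, so non-overlapping 0/1 activations recover the data exactly. The function name and toy data are made up.)

```python
import numpy as np

def conv_reconstruct(W, H):
    # Convolutional reconstruction: X_hat[:, t] = sum_k sum_l W[k][:, l] * H[k, t - l].
    # W: list of K templates, each of shape (n_features, L); H: binary (K, T) activations.
    K = len(W)
    n, L = W[0].shape
    T = H.shape[1]
    X_hat = np.zeros((n, T))
    for k in range(K):
        for t in np.flatnonzero(H[k]):
            end = min(t + L, T)            # clip templates that run past the end
            X_hat[:, t:end] += W[k][:, : end - t]
    return X_hat

# Two toy subtask "templates" of length 3 over 2 features.
W = [np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]),
     np.array([[0.0, 0.0, 1.0], [2.0, 2.0, 2.0]])]
# Binary, non-overlapping activations: subtask 0 fires at t=0, subtask 1 at t=4.
H = np.zeros((2, 8))
H[0, 0] = 1.0
H[1, 4] = 1.0
X = conv_reconstruct(W, H)
```

With overlapping activations the templates would add up at the shared time steps, which is exactly the situation the sparsity and redundancy regularizers discussed above are meant to penalize.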
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for your dedicated time and insightful comments. We are happy to see that the novelty of this work is well-recognized by all reviewers, which lies in the idea of identifying subtasks as selections, as well as in the soundness of the corresponding algorithms. We added one additional experiment to test the generalization ability of the method on long-horizon tasks, details of which are provided below; the result is presented in the figure of the uploaded pdf. **Additional experiment in Kitchen**: In Sec. 5.3, we tested the performance of the proposed algorithm on a new task in the Kitchen environment [1] that is composed of $4$ subtasks in an unseen combination, while trained on expert demonstrations [2] with a matching horizon ($4$ subtasks). A further question is whether the algorithm is able to generalize to more scenarios. To answer this question, we test on a target task of $5$ subtasks, making it a more challenging generalization scenario. Specifically, we use |bottom burner| $\rightarrow$ |top burner| $\rightarrow$ |slide cabinet| $\rightarrow$ |hinge cabinet|, and |microwave| $\rightarrow$ |bottom burner| $\rightarrow$ |top burner| $\rightarrow$ |hinge cabinet| in the demonstrations for training. For testing, we require the agent to manipulate: |microwave| $\rightarrow$ |bottom burner| $\rightarrow$ |top burner| $\rightarrow$ |slide cabinet| $\rightarrow$ |hinge cabinet| in order. We show the result (accumulated return w.r.t. training steps) in the additional figure. The baseline method H-AIRL [3] has only been evaluated on a 4-subtask horizon, and our algorithm is demonstrated to have better performance than all baselines. This suggests that the subtasks we learned from the demonstrations as selections show better generalization ability even in accomplishing challenging long-horizon tasks, further validating the superiority of our method. 
[1] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020. [2] Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning, October 2019. [3] Jiayu Chen, Dipesh Tamboli, Tian Lan, and Vaneet Aggarwal. Multi-task Hierarchical Adversarial Inverse Reinforcement Learning. In Proceedings of the 40th International Conference on Machine Learning, pages 4895–4920. PMLR, July 2023. Pdf: /pdf/fe288481badc66517e7b1ecce11fc6dbc5437e20.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Challenges with unsupervised LLM knowledge discovery
Reject
Summary: This paper reveals novel pathologies in existing unsupervised methods aimed at discovering latent knowledge from large language model (LLM) activations. Instead of extracting knowledge, these methods tend to identify the most prominent features of the activations. The paper theoretically demonstrates that arbitrary features (not just knowledge) can satisfy the consistency structure of a popular unsupervised knowledge-elicitation method, namely contrast-consistent search. Additionally, the authors conducted a series of experiments showing that current unsupervised methods for discovering latent knowledge are insufficient. While the paper proposes potential future solutions, it does not provide a definitive solution to the problem with existing unsupervised methods. Strengths: Overall, the paper is well-written, and its theoretical and analytical contributions may be useful. I am impressed by the extensive experiments. Weaknesses: More experiments on other LLMs are needed to further validate the claim. It would be better to offer possible solutions to address the problems in existing unsupervised methods. Technical Quality: 3 Clarity: 3 Questions for Authors: More experiments on other LLMs are needed to further validate the claim. It would be better to offer some possible solutions to address the problems in existing unsupervised methods. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No solutions to address the problems in existing unsupervised methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their considerations and for highlighting how our work is useful and contains extensive experiments. Weaknesses: - We already considered three LLMs. We are somewhat limited by access to model internals to carry out this work (e.g. we can’t use APIs) and by licensing restrictions for some open-source models. - Don’t provide solutions to the issue: see top-level comment. In section 6 we provide desiderata that a solution should satisfy. It was beyond the scope of this work to provide a solution, rather to point out the problem through careful experiments and theory. Providing a solution would require another paper (and would be very difficult to do, as each desideratum requires solving a difficult open problem). --- Rebuttal Comment 1.1: Comment: Although I gave an acceptable score, I am not an expert on the content of this paper. I gave a high score considering the good writing, theoretical analysis, and comprehensive experiments. In my opinion, there may be some potential value in the finding or observation that the latent knowledge discovered so far may be the most prominent feature of the activations. On the other hand, I still think that this paper is an empirical analysis without possible solutions. In Section 6, the brief discussion may not solve the problem well because it lacks feasibility details. Besides, the opinions of other reviewers will affect my score.
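The theoretical point under discussion, that arbitrary features attain the optimal CCS loss, is easy to check concretely. The sketch below uses the CCS objective as defined by Burns et al. (a consistency term plus a confidence term); the variable names and the random feature are illustrative, not taken from the paper's code.

```python
import numpy as np

def ccs_loss(p_pos, p_neg):
    # CCS objective (Burns et al.): consistency term + confidence term,
    # averaged over contrast pairs (x+, x-).
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    confidence = np.minimum(p_pos, p_neg) ** 2
    return float((consistency + confidence).mean())

rng = np.random.default_rng(0)
# An arbitrary binary feature over 100 contrast pairs; it need not track truth.
feature = rng.integers(0, 2, size=100).astype(float)
# A probe reporting this feature on x+ and its negation on x- attains the
# global minimum of the loss:
print(ccs_loss(feature, 1.0 - feature))  # 0.0
```

Any binary labeling of the pairs zeroes both terms: consistency because `p_pos + p_neg = 1` by construction, and confidence because one of the two probabilities is always 0.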
Summary: This paper presents a careful study of existing methods for discovering the latent knowledge of large language models (LLMs), especially Contrast-Consistent Search (CCS). The authors prove that CCS might not actually discover the knowledge of LLMs; instead, it could fit any features that satisfy certain conditions. Through a series of experiments, the authors further demonstrate that CCS could be distracted by random words and irrelevant texts such as a character's opinion, and remains sensitive to the choice of prompt. Finally, the authors propose some general principles for future work on unsupervised LLM knowledge discovery. Strengths: Overall, the paper is well written and easy to follow. The authors made interesting observations about existing methods for knowledge discovery in LLMs. The theoretical analysis is well supported by the experiments. Several guiding principles are also proposed for future work. I think this paper would provide good information to the research community about unsupervised knowledge discovery of LLMs. Weaknesses: From my experience with unsupervised learning, I'd argue that the content of this paper *would not be sufficient to refute existing methods for unsupervised knowledge discovery (CCS)*. First of all, CCS is a method built on top of features from pretrained models. It'd definitely be sensitive to the features and thus also sensitive to the prompts, because features change with different prompts (this can also be seen from the PCA visualization). Furthermore, as an unsupervised method, it'd be expected that the method might find multiple valid solutions, where only one of the solutions corresponds to the knowledge we are looking for. Take the experiments from Section 4.2 as an example: the constructed dataset actually has two valid labels, the sentiment of the text and the sentiment of Alice.
Depending on the optimization and the implicit bias of the algorithm, it could totally happen that an unsupervised method finds both valid labelings, or only one of them. I believe this is a common phenomenon shared by existing off-the-shelf unsupervised methods (like k-means) because they're searching for labels without supervision. From this perspective, I'd regard this paper as providing a method to construct "adversarial datasets" for CCS. However, it would not be a problem for CCS in practice. Furthermore, the authors don’t provide solutions to this issue. Also, I believe the mathematical notation in Section 3 could be simplified. Minor issues: typo $c(x_i^+=1), c(x_i^+)=0$ in line 102 Technical Quality: 3 Clarity: 4 Questions for Authors: Does the sensitivity of CCS come from the algorithm design, or from the sensitivity of LLMs? As a baseline, if we add the text "Alice think it's positive" at the end of every sentence, what would be the performance of (calibrated) zero-shot inference and in-context inference? If zero-shot and in-context inference could also be easily distracted by "Alice think it's positive", then I don't think the problem discussed in this paper is a clear shortcoming of CCS. If the dataset admits multiple valid labelings (e.g., texts from a group of people might reflect different genders, ages, or cultures), what do you think a proper unsupervised method should do to uncover the latent knowledge? What would be the performance of CCS on other language models (especially instruction-tuned models)? Would the model still be sensitive to the prompts and easily distracted? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have mentioned that this paper is focused on current methods and might not directly apply to future works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, and for highlighting that our work provides useful information to the community about unsupervised knowledge discovery. Weaknesses: - Prompt sensitivity of CCS and other unsupervised methods (including PCA): this is a key point we were making in our paper; we don’t understand how it is a weakness. If we interpret the reviewer as claiming that this point is a priori obvious, then we would question why others using the method (see table 1) have not noticed this before or made attempts to account for it. - Multiple solutions: we agree that we have successfully shown multiple solutions, and moreover that we can say for sure that one of them (which we artificially introduce and can measure) is a meaningful undesired feature (rather than some arbitrary feature). That this happens in other unsupervised methods is exactly our point: CCS does no better than off-the-shelf unsupervised methods, which is a key counterclaim compared to the CCS paper’s claim of the effectiveness of their approach. - Adversarial datasets are not a problem for CCS in practice: we constructed adversarial datasets so that we could measure their effect. In the wild, there are likely other binary features of datasets that are naturally occurring, and highly salient, and CCS may well discover those as solutions instead. We believe the problem of multiple valid solutions will be a problem in practice. - Don’t provide solutions to the issue: see top-level comment. In section 6 we provide desiderata that a solution should satisfy. It was beyond the scope of this work to provide a solution, rather to point out the problem through careful experiments and theory. That would require another paper. - Typo: thanks, we will fix it in the revision. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you for the rebuttal. There are a few points that I want to mention.
**Multiple solutions**: in Sections 4.1-4.3, the authors manually created multiple valid solutions and showed that CCS, like other unsupervised methods, could discover undesired features. But as I said, CCS is an **unsupervised method**. It's fine for an unsupervised method to discover any possible solution, as long as the solution satisfies the pre-defined metric. *Unless further information is provided for steering the method, it's not possible to expect an unsupervised method to find the "exact desired knowledge" among multiple valid solutions*. Otherwise, how do you know "the sentiment of the sentence" is knowledge, while "Alice's opinion" is not knowledge? Thus I don't think the claim "we show that unsupervised methods detect prominent features that are not knowledge" is convincing. Besides, I'd like to see some experiments on real datasets that have multiple valid solutions. But the question I asked was not answered: - If the dataset admits multiple valid labelings (e.g., texts from a group of people might reflect different genders, ages, or cultures), what do you think a proper unsupervised method should do to uncover the latent knowledge? **Sensitivity to the prompt**: One of the major contributions of this paper is to show that CCS can be sensitive to the prompts. However, I'd expect deeper analysis of the sensitivity. Does it come from the algorithm design, or from the LLM itself? Thus, I asked two questions in the initial review (which also were not answered): - Would (calibrated) zero-shot inference and in-context inference also be affected in this case? - For other language models (especially instruction-tuned models), what would be the performance? **Don't provide solutions**: this limits the contribution of this paper. And the principles provided in Section 6 do not make perfect sense for unsupervised methods. Overall, I still have some concerns about this paper. And some questions I asked were not answered. Thus I will keep my score.
--- Reply to Comment 1.1.1: Comment: **Multiple solutions**: our claim is more that CCS (and other unsupervised approaches) will not necessarily discover the intended feature, which is underspecified by the pre-defined metric. **Real datasets**: it is hard to spot the discovery of unintended features in real datasets because one does not know what feature to measure. That's why we did this in a controlled setup, where we know what to measure, with varying degrees of realism. **algo vs LLM**: this doesn't seem possible to answer cleanly as the methods operate on top of LLM features. We don't think zero-shot/in-context inference bears much relevance to this issue. The promise of CCS is that it could be used e.g. as a lie detector and so it should work regardless of whether the LLM's response is correct/incorrect. **instruction-tuning**: this is similar to the above. An unsupervised knowledge discovery algorithm operating on model internals should not be that sensitive to finetuning details of the model.
Summary: This paper studies the failure modes of the method called "contrast-consistent search (CCS)" in knowledge discovery for language models. In particular, they showed: there is no unique identification of the minimizer of CCS, as there is a class of features achieving the optimal loss; demonstrated experimentally that classic unsupervised methods detect features other than knowledge; discovered features are sensitive to prompt formats. Strengths: This paper points out a popular method's overlooked shortcomings and presents both theoretical and experimental results to support that CCS may not be able to discover the true knowledge feature: 1. the observation that the CCS loss is driven by the XOR operator rather than the feature itself is clever; 2. given the vast space of feasible features, the CCS method is very sensitive to prompts and thus deserves more careful examination if CCS is to be used in practice. Weaknesses: The main weakness of the paper is its lack of novelty and potential impact on the field. The paper is more an analysis of the application of a single method [1] proposed in 2023; given the speed of ML innovation, it is hard to see long-term benefits of this criticism. The general principles proposed in the discussion section (Section 6) are interesting and fit more into the line of proposing desiderata for the field - though in their current state, they require more rigorous work. [1] C. Burns, H. Ye, D. Klein, and J. Steinhardt. Discovering latent knowledge in language models without supervision. In The Eleventh International Conference on Learning Representations, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: Experiments: The experiments lack variability. Sections 4.1-4.3 are all experiments on modifications of prompts to ensure increasingly natural formalization of opinions in prompts. As the authors said in lines 122-123, the learned probe depends on "prompt, optimization algorithm, or model choice".
Some experiments to show the effects of the optimization algorithm on knowledge discovery could be beneficial. Typos: - line 132 incomplete sentence: "Our experiments a structured didactically." - line 46 first point in contribution - imprecise meaning of "arbitrary features satisfy the CCS loss equally well." - maybe more in the line of "arbitrary features achieve optimal CCS loss." Above are some minor suggestions, though my main concern is the potential impact of the paper on the field, given that it only challenges one recent method and lacks a more well-rounded analysis of the general desiderata for the field. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their analysis. The reviewer mentions in the strengths that the CCS method is popular, but suggests in the weaknesses that analysis showing shortcomings of the method is unlikely to have long-term benefit, which seems slightly contradictory. Weaknesses - Focus on CCS: see top-level comment. We consider other methods in our experiments, not just CCS. The reviewer cites the innovation speed of ML broadly, but to our knowledge there has not been another unsupervised knowledge discovery method proposed since CCS’s publication. CCS has been published in a top-tier ML conference, underscoring the importance of our work highlighting its shortcomings. Table 1 in the appendix lists the 20 most-cited papers mentioning CCS and their usage. Our paper has implications for all of these works. These works are important for ML overall, especially regarding honesty and interpretability. - We would be happy to develop the discussion of Section 6 further, highlight it as its own section, and make the desiderata more detailed. This would be a minor addition in the revision. Questions - Experiments lack variability: our experiments varied the prompts, models, and unsupervised methods. We could also vary the optimization algorithms of the unsupervised methods, but we think this is least likely to show an understandable effect because it is not well understood which solution an optimization algorithm will find, and whether that is just due to optimization failure/hyperparameter tuning rather than the discovery of a meaningful undesired feature (what our experiments aim for). It was by design that the experiments in 4.1-4.3 follow naturally from each other, in order to demonstrate the issue as simply as possible (though artificially), before making it gradually more realistic. We believe that a more random assortment of experiments would have shown similar results but made the paper less readable. - Typos: thanks, we will fix them in the revision.
--- Rebuttal Comment 1.1: Comment: Thank you for engaging with the rebuttal. I disagree with the opinion that an analysis of a well-cited paper in a popular field automatically offers an impactful contribution. Unsupervised learning methods are difficult to provide identifiability guarantees for without strong assumptions [1, 2], and the other methods tested in the experiments, e.g. k-means and PCA, are all such unsupervised learning methods, so it is not surprising that they cannot discover true knowledge. As the paper currently stands, proposing no concrete solutions, its potential contribution is to offer understanding about the field and next steps, e.g. is unsupervised learning in general incapable of discovering knowledge; based on the empirical findings, what would the paper suggest the field focus on. I think the desiderata in Section 6 are somewhat in line with doing that, but were not well explored in the main sections and should, in my opinion, be studied more thoroughly. Therefore, I will keep the score. [1] Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In International Conference on Artificial Intelligence and Statistics, pp. 2207–2217. PMLR, 2020a. [2] Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, pp. 4114–4124. PMLR, 2019.
null
null
Rebuttal 1: Rebuttal: A common theme in the reviews was that it challenges only a particular method, CCS. In fact, our experiments involved multiple other unsupervised methods which all suffer the same issue, which is a general issue, and a hard one to solve. Similarly common was that our paper should have aimed to solve the problem. We wanted the scope of our paper to just be to show this general problem is real, rather than to solve it. This would require a completely different kind of work and a separate paper. We were commended for listing desiderata that our analysis suggests are necessary for a solution, each of which we believe represents a significant hurdle, i.e. there will be no easy solution here.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Low-Rank Optimal Transport through Factor Relaxation with Latent Coupling
Accept (poster)
Summary: As the paper clearly outlines in the introduction, Optimal Transportation (OT) is widely used in various fields of machine learning; however, computing OT is computationally expensive (quadratic scaling), even after the Sinkhorn algorithm, employing entropy regularization, significantly alleviated the cost. The purpose of Low-rank Optimal Transport (LOT) is to reduce the complexity by constraining the transportation plan to possess a certain low-rank structure. The key contributions of the paper are (1) a new parametrization of LOT using a Latent Coupling (LC) factorization, (2) the eponymous "Balanced FRLC" algorithm for solving the LOT optimization problem with respect to various objectives, (3) theoretical guarantees for the smoothness of the regularized objectives, as well as upper bounds for the convergence criteria. The algorithm is applied to illustrative toy examples and then to a more realistic example (spatial transcriptomics alignment), and the results indicate improvements over previous LOT. Strengths: The contributions of the paper are significant. The writing is excellent. The LC factorization, adapted to the LOT framework from a previous work, is well-motivated. The Balanced FRLC algorithm appears to be a sound and non-trivial extension of previous approaches for LOT (Sinkhorn, semi-relaxed OT). The algorithm is particularly simple and shown to be effective. The experimental results are clear and illustrative, and the improvements are convincing. Weaknesses: What this reviewer found missing is a discussion of the case where the optimal transport map is itself not low-rank. It is well-known that the optimal transportation plan is not necessarily low-rank (as indicated e.g. by its connection to differential equations with non-regular solutions), so LOT may converge very slowly to the full-rank optimal transportation plan.
In section 4.2, the authors provide a simple illustrative example, but this is when the transportation plan is known to be low-rank - in this case the transportation plan seems to stabilize after some threshold rank - however, what happens when the full-rank plan does not have an inherently low-rank structure? Technical Quality: 4 Clarity: 4 Questions for Authors: - line 103: Would it be good to say what complexity "massively improved" to? - Can the authors comment on how to pick the rank for LOT? (This of course ties in to the point above about the rank of the transportation plan.) - Perhaps more discussion of how different the LC factorization is from, say, the previous factored coupling (the diagonal version $P = Q \text{diag}(1/g) R^T$). This relation seems important when considering "more" LC factorizations: If this LC factorization improves the result while simultaneously allowing a simple optimization algorithm, would it then be natural to discuss further factorizations that are nested this way, with additional LC factorizations for $T$ and so on? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: This work builds on the LOT framework - so perhaps the authors could comment on some scenarios where the LC factorization can potentially yield no better results compared to previous LOT? Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
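For context on the entropy-regularized baseline discussed throughout this thread, a minimal Sinkhorn iteration (the quadratic-cost solver that low-rank methods aim to improve on) can be sketched as follows. The regularization strength `eta`, the iteration count, and the toy data are illustrative choices, not values from the paper.

```python
import numpy as np

def sinkhorn(C, a, b, eta=0.5, n_iter=500):
    """Entropy-regularized OT: min <P, C> - eta * H(P)  s.t.  P 1 = a, P^T 1 = b."""
    K = np.exp(-C / eta)            # Gibbs kernel; each iteration costs O(n*m)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)           # scale to match column marginals
        u = a / (K @ v)             # scale to match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n, m = 8, 6
C = rng.random((n, m))              # pairwise costs
a = np.full(n, 1.0 / n)             # uniform source marginal
b = np.full(m, 1.0 / m)             # uniform target marginal
P = sinkhorn(C, a, b)
assert np.allclose(P.sum(axis=1), a) and np.allclose(P.sum(axis=0), b)
```

The full plan `P` here is an explicit `n x m` matrix; low-rank approaches such as FRLC instead store sub-couplings of rank $r \ll \min(n, m)$, which is the source of the linear space complexity mentioned in the reviews.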
Rebuttal 1: Rebuttal: We thank reviewer 1xw8 for their careful reading of our work, and for their feedback. > What this reviewer found missing is discussion of the case the optimal transport map is itself not low-rank. It is well-known that the optimal transportation plan is not necessarily low-rank (as indicated e.g. by their connection to differential equations with non-regular solutions), so LOT may converge very slowly to the full-rank optimal transportation plan. In section 4.2, the authors provide a simple illustrative example, but this is when the transportation plan is known to be low-rank - in this case the transportation plan seems to stabilize after some threshold rank - however, what happens when the full-rank plan does not have an inherently low-rank structure? This is a great question. We are actively investigating this, but do not have a complete answer during this short rebuttal period. We will include this point in the Conclusion of our main text: _"Another direction for further investigation is to better understand what structure LC factorizations capture when the optimal plan is known to have full rank, e.g. when the Monge map exists as has been explored by works such as [Liu et al. (2021)](https://arxiv.org/abs/2111.06546)."_ > line 103: Would it be good to say the what complexity "massively improved" to? Yes, thank you for the suggestion. For a fixed error tolerance $\varepsilon$ and entropy constant $\eta$, FRLC finds a $\pm \varepsilon D$ approximation for $D$ the diameter of the dataset in $B = \mathrm{poly}(1/(\eta \varepsilon))$ iterations which is a major improvement over the previous cubic solvers. > Can the authors comment on how to pick the rank for the LOT? (This of course ties in to the point above the rank of the transportation plan.) Thank you for your question. The best we can say regarding the rank is that the choice is problem-specific, and is much like choosing $k$ in k-means or choosing the rank $r$ for a compact SVD (or PCA). 
We will add the following sentence to the Conclusion (Section 5) to directly address this limitation: _"Two key limitations of our work are: (1) selecting values of the latent coupling ranks and the hyperparameter $\tau$ controlling the smoothness of the trajectory; and (2) strengthening the convergence criterion. These and other limitations are discussed in Section N of the Appendix."_ > Perhaps more discussion of how different the LC factorization is from, say, the previous factored coupling (the diagonal version $P=Q\mathrm{diag}(1/g)R^\mathrm{T}$). This relation seems important when considering "more" LC factorizations: If this LC factorization improves the result while simultaneously allowing a simple optimization algorithm, would it then be natural to discuss further factorizations that are nested this way, with additional LC factorizations for $T$ and so on? We agree that one strength of the LC factorization is that it can be applied again to decompose a latent coupling, leading to a multi-scale family of optimal transport problems similar to [Gerber et al. (2017)](https://dl.acm.org/doi/abs/10.5555/3122009.3176816). We leave a thorough exploration of this to future work, but will highlight the possibility of nesting LC factorizations in the main text -- thank you. > This work builds on the LOT framework - so perhaps the authors comment on some scenarios where the LC factorization can potentially yield no better results, when compared to previous LOT? FRLC generally has better performance, especially on large and structured datasets such as spatial transcriptomics, but the difference in performance is insignificant for ultra low rank (e.g. $r _ 1=2$) on synthetic datasets. There are also cases where one might correctly assume that transitions are diagonal between clusters -- in these cases, it might be better to use the previous factorization, which implicitly clusters the two datasets with a common set of labels.
We note that we can recover the previous LOT factorization: by taking ${Q}' \gets {Q}\mathrm{diag}(1/{g} _ {Q}){T}$ one may diagonalize the return as $({Q},{R},{T}) \to ({Q}',{R}, \mathrm{diag}({g} _ {R}))$, with no change to the OT cost. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. I will keep the score.
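The diagonalization claim above can be checked numerically. Assuming the LC factorization takes the form $P = Q\,\mathrm{diag}(1/g_Q)\,T\,\mathrm{diag}(1/g_R)\,R^\mathrm{T}$ with inner marginals $g_Q = Q^\mathrm{T}\mathbf{1}$ and $g_R = R^\mathrm{T}\mathbf{1}$, and $T$ a coupling of $(g_Q, g_R)$, the sketch below verifies that the transformed triple leaves the plan (and hence the OT cost) unchanged. The particular $Q$, $R$, $T$ are illustrative, not the output of a solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r1, r2 = 6, 5, 3, 2

# Illustrative sub-couplings with unit total mass.
Q = rng.random((n, r1)); Q /= Q.sum()
R = rng.random((m, r2)); R /= R.sum()
gQ, gR = Q.sum(axis=0), R.sum(axis=0)   # inner marginals
T = np.outer(gQ, gR)                    # independent coupling of (gQ, gR)

P = Q @ np.diag(1 / gQ) @ T @ np.diag(1 / gR) @ R.T

# Diagonalization: Q' <- Q diag(1/gQ) T, latent coupling replaced by diag(gR).
Qp = Q @ np.diag(1 / gQ) @ T
gQp = Qp.sum(axis=0)                    # equals gR, since T^T 1 = gR
Pp = Qp @ np.diag(1 / gQp) @ np.diag(gR) @ np.diag(1 / gR) @ R.T
assert np.allclose(P, Pp)               # same plan, hence same OT cost
```

The key identity is that the new inner marginal of $Q'$ is exactly $g_R$, so the diagonal latent coupling $\mathrm{diag}(g_R)$ cancels against $\mathrm{diag}(1/g_R)$ and the previous LOT factorization is recovered.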
Summary: This work introduces a new low-rank formulation of optimal transport based on the latent coupling decomposition introduced in https://arxiv.org/pdf/2012.11589 . Compared to previous formulations of low-rank OT, this formulation allows easier extensions to unbalanced and Gromov-Wasserstein settings. Strengths: I found that the presented FRLC method is a quite elegant approach to low-rank OT. It decouples optimization into entropy-regularized semi-relaxed OT problems for $Q$ and $R$, and $T$ is updated by solving entropy-regularized OT of size $r$. This decoupling is enabled by the form of the inner coupling, which is no longer assumed to be diagonal. This construction alleviates the need to use the Dykstra machinery as in (Scetbon 2021). With the proposed approach, $Q$ and $R$ are not required to have the same inner marginal. From what I understand, the optimization over inner marginals is handled by the semi-relaxed steps and does not require a separate step as in previous approaches. Another strength of the approach is that it generalizes smoothly to the unbalanced setting. Other strengths include: * I carefully looked at the presented method and the appendix, and the algorithmic approach made sense. * non-asymptotic convergence bounds are derived for the method. * the appendix includes a complete review of the necessary technical background, presented in a clear way and easy to follow. * limitations are clearly stated (section N in appendix). * a new initialization scheme for the LC decomposition is proposed. * there is a gain in interpretability compared to (Scetbon 2021), as the inner coupling captures the coupling between clusters in the data. Weaknesses: * from section 3.1 to 3.2, the new aspects of the approach compared to https://arxiv.org/pdf/2012.11589 should be stated more clearly. The contributions of the paper should be separated more clearly from previous works. I had to read the other paper to better understand each contribution.
* In Algorithm 1, it would be interesting to have the computational complexity of each step (presented in the algorithm directly). Technical Quality: 3 Clarity: 3 Questions for Authors: * How should $r$ be chosen in practical scenarios? * Would it make sense to remove the entropic penalties (at the expense of smoothness) to recover results about hard clustering of the points via the decomposition of the OT coupling? Put differently, what is the effect of entropic smoothing on the interpretability of the coupling in terms of clustering? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer CGKE for their careful reading of our work, and for their feedback. > from section 3.1 to 3.2, the new aspects of the approach compared to [https://arxiv.org/pdf/2012.11589](https://arxiv.org/pdf/2012.11589) should be stated more clearly. The contributions of the paper should be separated more clearly from previous works. I had to read the other paper to better understand each contribution. We agree that we could have been more clear about how our method is distinct from the approach of Lin et al. (2021). We greatly appreciate your comment, and we refer you to the bulleted list in our general response highlighting the differences between our work and that of Lin et al. (2021). We will add a section in our Appendix contrasting the approaches in more detail. > In algorithm 1, it would be interesting to have the computational complexity of each step (presented in the algorithm directly). We thank you for your comment, and will make sure to include a discussion of time-complexity in the appendix and in-line with our algorithm in the main text. We refer you to the time-complexity analysis of FRLC included in our general response above. > How to choose $r$ in practical scenarios ? This is a great question. The best we can say regarding the rank is that the choice is problem-specific, and is much like choosing $k$ in k-means or choosing the rank $r$ for a compact SVD (or PCA). We added the following sentence to the Conclusion (section 5) to directly address this limitation: _"Two key limitations of our work are: (1) selecting values of the latent coupling ranks and the hyperparameter $\tau$ controlling the smoothness of the trajectory; and (2) strengthening the convergence criterion. 
These and other limitations are discussed in Section N of the Appendix."_ > Would it make sense to remove the entropic penalties (at the expense of smoothness) to recover results about hard clustering of the points via the decomposition of the OT coupling? Put differently, what is the effect of entropic smoothing on the interpretability of the coupling in terms of clustering? This is a fantastic question. Scetbon et al. [Scetbon et al. (2022a)](https://proceedings.neurips.cc/paper_files/paper/2022/hash/2d69e771d9f274f7c624198ea74f5b98-Abstract-Conference.html) show in their Proposition 9 that removing the entropic penalties and setting the second marginal to equal the first generalizes the $k$-means objective. [Lin et al. (2021)](https://proceedings.mlr.press/v139/lin21a.html) show that setting the distance between the anchors ($C _ {z}$) to zero also recovers $k$-means. However, because FRLC does not use Scetbon et al.'s factorization, and also does not evaluate a distance $C _ {z}$ between anchors, we do not know if removing the entropic penalties in FRLC would recover a known hard clustering (or co-clustering) method. We will add the following sentence to our Conclusion section discussing future work: _"Another interesting question is whether a relationship exists between the minimization of the Wasserstein cost over LC-factorizations without entropic regularization and calculation of a hard clustering or co-clustering between data points, analogous to Proposition 9, [Scetbon et al. (2022a)](https://proceedings.neurips.cc/paper_files/paper/2022/hash/2d69e771d9f274f7c624198ea74f5b98-Abstract-Conference.html)."_ --- Rebuttal Comment 1.1: Comment: Thank you very much for your answer. I would like to keep my score.
Summary: The paper introduces a novel framework called Factor Relaxation with Latent Coupling (FRLC) which is based on coordinate mirror descent to compute the low-rank LC factorization. The algorithm decouples the optimization into three sub-problems, offering greater flexibility, interpretability, and linear space complexity. FRLC is applicable to various OT objectives (Wasserstein, Gromov-Wasserstein, Fused Gromov-Wasserstein) and marginal constraints (balanced, unbalanced, and semi-relaxed). Theoretical results support its effectiveness, and empirical tests demonstrate superior performance in applications like graph clustering and spatial transcriptomics. Strengths: The introduction of the LC factorization and its integration into the FRLC algorithm represents an innovation in reducing the complexity of optimal transport problems. The FRLC framework is versatile, handling multiple OT objectives and marginal constraints, making it applicable to a wide range of practical problems. The latent coupling provides an interpretable description of the transport plan, which is beneficial for understanding and visualizing the results. Weaknesses: While the empirical results are good, the paper could benefit from more extensive comparisons with additional baseline methods to solidify the claims of superiority. The theoretical results are limited in that, even though $\Delta_k(x_k, x_{k+1})$ is small, it cannot be guaranteed that the iteration points converge to the optimal solution or even a stationary point. Technical Quality: 3 Clarity: 3 Questions for Authors: I suggest that Propositions E.3, E.5, E.6 should be stated in the main text to make the theoretical part easier to follow. Table 1 should be put in the main text to show the efficiency of FRLC in terms of runtime. It is hard to say that the method shows significant improvement because the runtime of LOT is already very short (less than one second). 
I suggest the authors conduct experiments on more challenging and diverse examples. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer pnTm for their careful reading of our work, and for their feedback. > While the empirical results are good, the paper could benefit from more extensive comparisons with additional baseline methods to solidify the claims of superiority. The only OT methods solving for a low-rank plan that we are aware of are Factored Coupling [Forrow et al. (2019)](http://proceedings.mlr.press/v89/forrow19a), LOT ([Scetbon et al. (2021)](http://proceedings.mlr.press/v139/scetbon21a.html), [Scetbon et al. (2022b)](https://proceedings.mlr.press/v162/scetbon22b), [Scetbon et al. (2023)](https://openreview.net/forum?id=d2WsCmoITF)), and Latent OT [Lin et al. (2021)](https://proceedings.mlr.press/v139/lin21a.html). We did not compare to Forrow et al. (2019) because a prior work of Scetbon et al. (2021) established LOT as the current state of the art, and we did not compare to Lin et al. because they do not minimize the primal OT cost directly. However, we have now added a comparison to Lin et al. on the balanced Wasserstein objective cost, and we will add Table 1 in our above general response to our Appendix. Importantly, Lin et al. solve what they term the latent OT problem, which is a different objective than the low-rank Wasserstein objective computed by FRLC. Thus we did not feel it was appropriate to include this comparison with the others in the main text, but will include it as part of a new addition clarifying the differences between FRLC and Lin et al. In Section 4.4 (additional experiments) we refer to Section K of our Appendix where we also evaluated against full-rank OT solvers (from `ott-jax`) on downstream metrics for 12 graph partitioning datasets. > The theoretical results are limited in that, even though $\Delta _ k(x _ k, x _ {k+1})$ is small, it cannot be guaranteed that the iteration points converge to the optimal solution or even a stationary point. We agree. In our convergence results, we adapted the criterion of [Ghadimi et al. 
(2014)](https://link.springer.com/article/10.1007/s10107-014-0846-1) to coordinate-mirror descent to show non-asymptotic stationary convergence, as in other works on low-rank optimal transport [Scetbon et al. (2021)](http://proceedings.mlr.press/v139/scetbon21a.html), [Scetbon et al. (2022b)](https://proceedings.mlr.press/v162/scetbon22b). Such a result is the current baseline for convergence in the literature, but we acknowledge that the convergence criterion and guarantee are limited. A more thorough theoretical analysis with stronger guarantees may be difficult as the objective is non-convex, but would be valuable to pursue in future work. We mentioned this limitation in the Appendix, but in the revision we will add the following sentence to the Conclusion (Section 5) to more clearly indicate the limitations: _"Two key limitations of our work are: (1) selecting values of the latent coupling ranks and the hyperparameter $\tau$ controlling the smoothness of the trajectory; and (2) strengthening the convergence criterion. These and other limitations are discussed in Section N of the Appendix."_ > I suggest that Propositions E.3, E.5, E.6 should be stated in the main text to make the theoretical part easier to follow. Absolutely, we had removed them from the main text due to the page limit, but will include them in the updated version of the manuscript. > Table 1 should be put in the main text to show the efficiency of FRLC in terms of runtime. It is hard to say that the method shows significant improvement because the runtime of LOT is already very short (less than one second). I suggest the authors conduct experiments on more challenging and diverse examples. We will move this table to the main text. We also performed additional comparisons on the large spatial transcriptomics datasets of [Chen et al. 
(2022)](https://www.sciencedirect.com/science/article/pii/S0092867422003993), which we include in Table 2 of our general response above. --- Rebuttal Comment 1.1: Comment: Thanks for your replies. I will increase my score from 5 to 6.
Summary: The paper presents an approach for low-rank optimal transport (OT) leveraging a latent coupling (LC) factorization and solving it with mirror descent. This approach offers a new parameterization of the low-rank OT problem, providing advantages such as decoupling the problem into three OT problems and enhancing flexibility and interpretability. The authors introduce the Factor Relaxation with Latent Coupling (FRLC) algorithm, which utilizes coordinate mirror descent to compute the LC factorization. FRLC accommodates various OT objectives and marginal constraints. The paper includes theoretical results and demonstrates the performance on selected tasks like graph clustering and spatial transcriptomics. Strengths: - Generality: handles different OT objective costs and relaxations of the marginal constraints. - Significance: the proposed method, and improving OT using factorizations, has the potential to address the scalability issue of OT with large datasets, making it highly relevant for applications in machine learning and data science. Weaknesses: **Presentation and logical organization of the ideas** I found it difficult to discern the paper's particular contribution, especially concerning previous art like Latent Optimal Transport (Lin et al., 2021), Forrow et al. (2019), and Scetbon et al. (2021). For example, the main contribution is said to be "compute a minimum cost low-rank transport plan using the LC factorization," where LC factorization is precisely the factorizations proposed by Lin et al. (2021), with the further contribution being the computation of this factorization using mirror-descent. I wondered how much of the description also applies to Lin et al. (2021) and which part is particular to the novel method. Furthermore, according to Lin et al. 
(2021), there is an algorithm for computing low-rank factorized plans based on factorized costs using a projection method, and an algorithm to factorize the costs and select anchor points without needing a similar number of anchor points in the source and target latent spaces. How does your approach compare with that approach? **Comparison and empirical validation** The first empirical comparison analyzes LOT and FRLC, capturing how both methods behave as the factorization rank changes. This is a good way of understanding the effect of the rank in each case. Nevertheless, in the second setup, the comparison with LOT is not present anymore, which begs the question of how LOT performs in the second empirical evaluation. Furthermore, an extensive empirical study with the structured approach of Lin et al. (2021) would help clarify the particular contributions of the type of factorization and factoring algorithm used. I found the paper lacking a more in-depth discussion of the computational complexity, possible implementation difficulties, and overall limitations of the proposal. Technical Quality: 3 Clarity: 3 Questions for Authors: - Is the main contribution here actually the mirror descent formulation of the OT problem with the factorization with a different set of latent source and target points proposed by Lin et al. (2021)? - Furthermore, is the main difference the fact that Lin et al. (2021) use a two-stage process while the novel mirror-descent method solves the OT problem in a more straightforward, unified process? - What are the theoretical complexities of the algorithms, and what practical considerations are relevant when implementing the method? One presentation improvement that would change my evaluation is a summary table locating the differences in the type of factorization, whether the cost or transport is being factorized, the algorithm used for computing the factorization, etc. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I found it necessary to discuss in more depth the limitations and the tradeoffs present in the proposal compared with previous art. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer jx3E for their careful reading of our work, and for their feedback. We have abbreviated your questions due to space limitations for our responses. > [I found ... novel method.] Thank you for this comment indicating an area where we can strengthen our presentation. Our FRLC algorithm differs from [Lin et al. (2021)](https://proceedings.mlr.press/v139/lin21a.html) in several respects. We omit them here due to space, but ask you to refer to the bulleted list of differences in our general response. We will include these points with additional details in our Appendix to clarify the differences between the approaches. > [Furthermore ... the approach?] We hope our general response addresses most of this question. Additionally, we want to emphasize that Lin et al. (2021) explicitly compute anchor points in each iteration of their algorithm, whereas FRLC has no anchor points. In our evaluation in Section 4.2, we computed LC projections from the output of FRLC on each dataset in order to demonstrate the interpretability of the latent coupling. These LC projections may be interpreted as the analogs of the anchor points used in Lin et al. (2021), but importantly the two concepts are not identical. Even if we specialize to using Euclidean distance, we are unaware of a way to write FRLC cost matrices as functions of the anchor-like points obtained from LC projection. > [The first ... empirical evaluation.] First, to clarify, in our experiments section (Section 4), LOT/UL/SRL all refer to the low-rank methods of Scetbon et al. ([Scetbon et al. (2021)](http://proceedings.mlr.press/v139/scetbon21a.html), [Scetbon et al. (2022b)](https://proceedings.mlr.press/v162/scetbon22b), [Scetbon et al. (2023)](https://openreview.net/forum?id=d2WsCmoITF)), and not to the Lin et al. (2021) method which coincidentally is also named LOT. UL and SRL denote the unbalanced and semi-relaxed versions (from Scetbon et al. 
(2023)) of Scetbon's LOT method Scetbon et al. (2021), respectively. We understand how this terminology can cause confusion, and we will clearly distinguish the method names in our revision. To avoid further confusion in this response (as in our paper), we use LOT to reference the work of Scetbon et al., and write "Lin et al." to refer to Lin et al.'s Latent OT. We assume that "second setup" refers to Section 4.3, where we compare against UL and SRL rather than LOT. As noted above, UL and SRL are the unbalanced and semi-relaxed variants of LOT, respectively. We note that we also included a comparison between the balanced versions of FRLC and LOT for this dataset (Figure 8 in our Appendix). Thus, all of our comparisons -- including Section 4.1 (Figure 2) and Section 4.2 (Figure 3) -- are directly head-to-head with LOT (with the exception of Appendix K where we compare against the full-rank solvers of `ott-jax`). > [Furthermore ... algorithm used.] We ran the Lin et al. (2021) method on several synthetic datasets and report the OT-cost (low-rank Wasserstein cost) found by this method and FRLC in Table 1 of our general response. > [I found ... the proposal.] To address the first point, we offer a brief complexity analysis in the general response above. Regarding implementation difficulties and limitations, we included a discussion on limitations at the end of our Appendix, and will add the sentences indicated in our general response to our Conclusion. > [Is the main ... Lin et al. (2021)?] Not quite -- as we noted above the FRLC algorithm does not have latent points (anchor points) at all, but rather optimizes directly over the sub-couplings $Q$, $R$, and $T$. This has several advantages as noted above, including that the steps in our algorithm are themselves OT sub-problems, unlike Forrow et al. (2019), Scetbon et al. (2021), and Lin et al. (2021) which use Dykstra projections. 
Moreover, FRLC is the first algorithm which solves this factorization for general costs $C$ and other objectives (GW, FGW). We refer to the bullet points above in the general response for more discussion on the differences and contributions.

> [Furthermore ... unified process?]

Yes, we agree that FRLC solves the OT problem in a more straightforward and unified process: we only optimize the sub-couplings, while Lin et al. (2021) alternate updates on the anchor points and the sub-couplings. However, this is only one of multiple differences between the methods.

> [What are ... method?]

We give a brief analysis of the time complexity of FRLC in our general response above, and will include this in our main text and Appendix. Regarding practical considerations in implementing FRLC, in practice taking the hyper-parameter $\tau$ to be close to $0$ approaches a theoretically optimal step on the sub-couplings while large $\tau$ regularizes their trajectory. Thus, it is an open question how to choose this hyperparameter, as discussed in the limitations component of our general response, and likewise with the choice of dimensions for the latent coupling.

> [One presentation ... etc.]

This is a great suggestion -- we include a table here, and will add this to our Appendix.

| Method | Factorization | Cost | Variables | Algorithm | Subroutine for sub-couplings |
|-|-|-|-|-|-|
| Factored Coupling (Forrow et al. (2019)) | Diagonal | $k$-Wasserstein barycenter | Anchors \& sub-couplings | Lloyd-type | Dykstra |
| Latent OT (Lin et al. (2021)) | Non-diagonal | Extension of $k$-Wasserstein barycenter | Anchors \& sub-couplings | Lloyd-type | Dykstra |
| LOT (Scetbon et al. (2021)) | Diagonal | Primal OT cost | Sub-couplings | Mirror-descent | Dykstra |
| FRLC (our work) | Non-diagonal | Primal OT cost | Sub-couplings | Coordinate mirror-descent | OT |

**Table:** Comparison of latent and low-rank OT methods.

> [I found ... previous art.] 
We included a discussion on limitations in Appendix N but will now add the sentence given above regarding limitations to the main text. --- Rebuttal Comment 1.1: Comment: Thank you for carefully engaging with the main points of my review. I believe the shared concern with other reviewers regarding the presentation of the idea and how it differs from previous art is fundamental to improving the quality of this paper. I believe the table with the comparisons could be in the main text; those technical differences and how they map to papers and methods could be immediate for those working directly with these problems, but they can easily be lost for the larger community, despite being relevant. I am satisfied with the complexity analysis and extended empirical discussions. --- Reply to Comment 1.1.1: Comment: Dear reviewer jx3E, we thank you for your comments and are glad to see that this response helped clarify a few of your questions. We agree that placing this work in context is essential, and will add the table to the main text (rather than the appendix) following your suggestion. If you have any further questions, we are more than happy to offer any clarifications during the remainder of the discussion period.
Rebuttal 1: Rebuttal: Here we include information for all reviewers. We thank each reviewer for their helpful feedback and questions.

> Regarding time complexity raised by reviewers jx3E, CGKE:

The time complexity of FRLC is $O(BLr^{2}(n+m))$, where $B$ is the number of Sinkhorn iterations, $L$ is the number of mirror descent iterations, $n$ is the number of samples in the first dataset, $m$ is the number of samples in the second dataset, and $r$ is both the rank of the latent coupling and the rank of the distance matrix $C$. The time complexity is thus linear in $n+m$ per iteration. The number $L$ of outer iterations follows from the convergence rate of coordinate mirror descent. The number of iterations required for each projection follows from the convergence of Sinkhorn where, for $\varepsilon$ a fixed error tolerance and $\eta$ the entropy constant, one finds a $\pm \varepsilon D$ approximation for $D$ the diameter of the data in $B = \mathrm{poly}(1/(\eta \varepsilon))$ iterations. Each step of the projection is $O((n+m)r^{2})$ assuming a rank-$r$ factorization of $C$, allowing every matrix multiplication to have one dimension on the order of the constant rank $r$. Therefore, the runtime is $O(B L r^2 (n+m))$.

> Regarding comparisons to Lin et al. (2021) brought up by jx3E, CGKE, we benchmarked FRLC against their method on the Wasserstein objective across four synthetic datasets:

| Dataset | OT-cost (FRLC) | OT-cost (Lin et al.) |
|-|-|-|
| 5th and 10th roots of unity (rank $r _ {1},r _ {2}=5,10$) | 1.174 | 2.124 |
| Two-moons and 8-Gaussians (rank $r=20$) | 2.716 | 4.291 |
| 2D Gaussian mixture (rank $r=20$) | 0.552 | 0.922 |
| 10D Gaussian mixture (rank $r=20$) | 1.038 | 1.298 |

**Table 1:** Comparison against Lin et al. (2021) in primal OT-cost $\langle {C}, {P} \rangle _ {F}$.

The table shows that FRLC achieves a lower cost on all datasets. Importantly, Lin et al. 
solve what they term the latent OT problem, which is a different objective than the low-rank Wasserstein objective minimized by FRLC. Thus we did not feel it was appropriate to include this comparison in the main text, but will include it as part of a new addition to our Appendix to clarify the differences between FRLC and Lin et al.

> We also report our runtime on more challenging data, the large spatial transcriptomics datasets of [Chen et al. (2022)](https://www.sciencedirect.com/science/article/pii/S0092867422003993), as requested by reviewer pnTm:

| Dataset | LOT (seconds) | FRLC (seconds) | OT cost (LOT) | OT cost (FRLC) |
|-|-|-|-|-|
| Mouse embryo (E9.5-10.5) | 2.545 | 1.112 | 0.4396 | 0.385 |
| Mouse embryo (E10.5-11.5) | 4.209 | 1.190 | 0.3714 | 0.344 |
| Mouse embryo (E11.5-12.5) | 8.667 | 1.889 | 0.478 | 0.439 |

**Table 2:** Comparison of methods on MOSTA Stereo-Seq mouse embryo spatial transcriptomics datasets (GPU). This uses default settings with `min_iter=10`, `max_iter=100`, rank $r=50$. LOT denotes the low-rank method of Scetbon et al. (2021), as a point of comparison.

> In response to several reviewer questions about hyperparameter selection and our convergence criterion, we will add the following sentence to our Conclusion section: _"Two key limitations of our work are: (1) selecting values of the latent coupling ranks and the hyperparameter $\tau$ controlling the smoothness of the trajectory; and (2) strengthening the convergence criterion. These and other limitations are discussed in Section N of the Appendix."_

> Regarding questions raised by reviewers jx3E, CGKE on how FRLC differs from the method of Lin et al. (2021), here is a short summary of the differences:

* Lin et al. (2021) optimize two sets of variables: sub-couplings $(Q, R, T)$ and anchor points $(z^x, z^y)$. FRLC only has $(Q,R,T)$ as variables of the optimization.
* Cost matrices used in Lin et al. 
(2021) are built from distances between each dataset and its representative anchor points for $Q, R$, or the distances between the two sets of anchor points for $T$. In contrast, ground costs used in FRLC to update $(Q,R,T)$ are always derived from the distance matrix $C$ in the Wasserstein objective $\langle C, P\rangle _ F$.
* Specifically, the cost matrices used by Lin et al. (2021) are:
$$ [C _ Q^{\mathrm{Lin}}]_{ik} = \Vert x _ i - z _ k^x \Vert _ 2^2, \quad [C _ R^{\mathrm{Lin}}] _ {j\ell} = \Vert y _ j - z _ \ell^y \Vert _ 2^2, \quad [C _ T^{\mathrm{Lin}}] _ {k\ell} = \Vert z _ k^x - z _ \ell^y \Vert _ 2^2, $$
optionally using a Wasserstein distance for the entries of $C_T^{\mathrm{Lin}}$.
* FRLC costs are given in the exponents of the Gibbs kernels written above and below equation (9) in our paper. These are derived directly from the rank-$r$ Wasserstein problem $\min _ {P \in \Pi _ r(a,b)} \langle C, P \rangle _ {F}$ and differ substantially from those of the proxy objective used in Lin et al. (2021).
* The different objectives and variables lead to different algorithms: Lin et al. (2021) alternate updates to the sub-couplings $(Q,R,T)$ using Dykstra, with updates to the latent anchor points $(z^{x}, z^{y})$ using first-order conditions. In contrast, FRLC alternates semi-relaxed OT to update $(Q,R)$ and balanced OT to update $T$.
* Moreover, because FRLC does not require anchor points to define costs, FRLC can handle cost matrices which are not simple functions of distance. For example, if $C _ {ij}$ is the price of transporting good $i$ to warehouse $j$, one may not be able to re-evaluate a price $c(x _ {i}, z _ {k}^x)$ between $x _ i$ and latent anchor $z _ {k}^x$. In such situations, while finding a low-rank plan may make sense (e.g. to approximate an assignment for a massive dataset), an "anchor" may not have a clear definition in the setting of general cost matrices.
* The Lin et al. 
(2021) objective is only a proxy for a Wasserstein-type loss, and Lin et al. do not explore extensions to Gromov-Wasserstein (GW) or Fused GW, to which FRLC readily generalizes.
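To make the latent-coupling discussion above concrete, here is a minimal NumPy sketch (not the authors' implementation; the exact normalization FRLC uses may differ) of one common form of the LC factorization, $P = Q\,\mathrm{diag}(1/g_Q)\,T\,\mathrm{diag}(1/g_R)\,R^\top$ with inner marginals $g_Q = Q^\top \mathbf{1}_n$ and $g_R = R^\top \mathbf{1}_m$, together with an evaluation of the primal cost $\langle C, P\rangle_F$ in $O((n+m)\,r\,r_C)$ time when the cost itself factors as $C = AB^\top$, illustrating the linear-in-$(n+m)$ complexity claimed in the general response:

```python
import numpy as np

def lc_coupling(Q, T, R):
    """Assemble a latent-coupling plan P = Q diag(1/gQ) T diag(1/gR) R^T.

    Assumed (illustrative) form of the LC factorization: gQ and gR are the
    inner marginals of the sub-couplings Q (n x r) and R (m x r).
    """
    gQ = Q.sum(axis=0)  # Q^T 1_n
    gR = R.sum(axis=0)  # R^T 1_m
    return Q @ np.diag(1.0 / gQ) @ T @ np.diag(1.0 / gR) @ R.T

def primal_cost_lowrank(A, B, Q, T, R):
    """<C, P>_F for C = A B^T without materializing C or P.

    Uses trace cyclicity: <C, P> = tr(B A^T P) = tr((A^T Q) M (R^T B)),
    where M = diag(1/gQ) T diag(1/gR), so the cost is O((n+m) r r_C).
    """
    M = (T / Q.sum(axis=0)[:, None]) / R.sum(axis=0)[None, :]
    return np.trace((A.T @ Q) @ M @ (R.T @ B))
```

The point of the second function is that no $n \times m$ matrix is ever formed: every product has one dimension of size $r$ or $r_C$, which is what makes the per-iteration cost linear in $n+m$.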
NeurIPS_2024_submissions_huggingface
2024
Fixed Confidence Best Arm Identification in the Bayesian Setting
Accept (poster)
Summary: This paper considers the best arm identification (BAI) problem in the fixed confidence setting and in the Bayesian setting, where the mean reward of each arm is drawn from a known prior distribution. The authors formulate the problem and discuss the related literature. The authors provide the lower bound of the expected sample complexity of finding the optimal arm, which is proportional to $L(\mathbb{H})/\delta$, where $L(\mathbb{H})$ is defined by the authors to characterize the sample complexity. Then the authors show that the current optimal algorithms in the frequentist setting, the top-two and track-and-stop algorithms, are suboptimal in the Bayesian setting. The authors then propose a new algorithm based on the successive elimination algorithm in the frequentist setting, showing that this algorithm reaches the optimal sample complexity up to a logarithmic factor. The authors provide the upper bound of the algorithm on the expected sample complexity. Finally the authors run two simulations to show that the top-two algorithm is indeed suboptimal compared with Algorithm 1, and also show the superior performance of Algorithm 1 compared to the no-elimination version of it. Strengths: 1. The problem is clearly defined and the paper is nicely presented. 2. It is nice to show that the optimal algorithms (top-two and track-and-stop) are suboptimal in the Bayesian setting. 3. The authors illustrate the theorems and proofs using a simpler 2-arm version which can be better understood. Weaknesses: 1. The assumption on the arms may be stringent, i.e. all arms are from the Gaussian distribution. The current literature considers many more general distributions, such as sub-Gaussian, exponential family, or even non-parametric cases. 2. The confidence interval is not optimized to the tightest confidence interval, which is suggested in the lil-UCB paper from Jamieson: "lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits". 3. 
The lower bound proof (and possibly the upper bound proof) uses standard techniques from the literature. Perhaps the authors could point out the difficulty or challenge in the proofs. 4. The upper bound is not optimal compared to the lower bound, with an extra logarithmic factor. Is it possible to reduce this gap using a tighter confidence bound or by modifying the algorithm, such as modifying top-two or track-and-stop instead of the SE algorithm (which is not optimal in the frequentist setting)? 5. The simulations are limited to just two sets of experiments. The baselines or compared algorithms are just 2/1 in the respective setting. And it might be better to include different $\delta$ values since the authors consider the fixed confidence setting, and to change priors as well. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Is it possible to use tighter confidence intervals to get a better upper bound? 2. Is it trivial or direct to consider similar algorithms when the distribution of the arms is more general, like sub-Gaussian or non-parametric? 3. Is it possible to modify top-two or track-and-stop to make it better in the Bayesian setting? 4. Just out of curiosity, is it possible to modify LUCB or lil-UCB to this setting so that it achieves similar performance, as the authors did with the successive elimination algorithm? The former two algorithms have better sample complexity in the frequentist setting. 5. Can the upper bound be modified to a high probability bound instead of an expectation bound on the sample complexity? Would it be challenging or trivial? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable review and comments. The following are the responses to the questions you raised. > Q1) better upper bound using lil-UCB We assume your question refers to the improvement of the confidence bound from $\sqrt{\log(N_i(t)/\delta)/N_i(t)}$ to $\sqrt{\log(\log(N_i(t))/\delta)/N_i(t)}$. It is challenging to verify all parts of the proof during this short rebuttal period; we conjecture this can improve the dependency on $\log(L(H))$ but would not change the dependency on $\log(1/\delta)$. Since this is the first paper on fixed-confidence Bayesian best arm identification, we consider the current analysis to have enough novelty. > Q2) extending to more general arm distributions, like sub-Gaussian or non-parametric? Many results in our paper, such as Lemma 1 (Volume lemma), Lemmas 11, 13~17, and Theorem 18, hold also in non-Gaussian settings. However, several results such as Lemmas 12 and 17 are tailored to the Gaussian setting and we cannot extend those results easily to other environments. Therefore, our lower and upper bound results need some additional work to extend to other settings. As we mentioned in Section 7, extending the current results to more diverse environments is one of the interesting future research topics. > Q3, Q4) modify other algorithms, such as top-two, track-and-stop, LUCB or lil-UCB to make them better in the Bayesian setting As we have mentioned in our future work section (Section 7), it is another interesting research topic. We chose the Elimination algorithm since it is easier to manage the number of suboptimal arm pulls. Elimination also attains an orderwise optimal ratio in a frequentist setting, so we believe that using TT, T&S, LUCB or lil-UCB won't make an orderwise improvement, especially with respect to $\delta$. > Q5) a high probability bound instead of an expectation bound on the sample complexity? 
In the Bayesian setting in general, there is no natural idea of a high-probability bound, since the bandit instance $\mu$ is itself another random variable. For example, many FC-BAI studies in the frequentist setting propose high-probability bounds on the stopping time in the form of **'$\tau \leq f(\mu)$ with probability $1-\delta$'**. This is reasonable in a frequentist setting since $\mu$ is a fixed instance in their setting. However, in our case, $\mu$ is also a random variable, which means the bold inequality now compares two random variables. We thought that whether such results could be accepted as high-probability bounds might be controversial depending on the reader. Therefore, we presented expectation bounds. In the case of the high-probability bound when the bandit instance $\mu$ is fixed, it is relatively trivial. Our Lemma 17 can be seen as the high-probability bound for each arm when $\mu$ is given, and from this result we computed the overall expectation, which was computationally challenging (about 5 pages of integral computations). > W3) The lower bound proof (and possibly the upper bound proof) uses standard techniques from the literature. Our lower bound proof is novel and we suggest the first framework to prove a Bayesian FC-BAI lower bound. Our proof structure is significantly different from the standard technique for the frequentist FC-BAI lower bound, such as Kaufmann et al. (2016). The main differences come from the different definitions of the error probability (PoE) between the frequentist setting and the Bayesian setting. As mentioned in Lines 178-179, Bayesian $\delta$-correctness is more lenient than frequentist $\delta$-correctness. This means we need to consider more diverse algorithms for the lower bound since the lower bound is for all possible $\delta$-correct algorithms. We want to mention three challenges in our proof. 
- First, in our case, we suggest a novel interpretation of PoE as an optimization problem (Opt0 -> Opt1: given an upper bound on the stopping time, how small can the error probability be?).
- Second, we propose a relaxation (Opt1 -> Opt2), which changes the optimization parameter from algorithms to the positive real functions $\tilde{n}: \mathbb{R}^2 \to [0,\infty)$.
- Lastly, we found out that the main problem happens in the 'instances with small suboptimality gaps', made a novel modification (Opt2 -> Opt3), and used Jensen's inequality in a creative way (Claim 1 in Appendix D) to achieve a computable optimal answer.

There are also other minor novel techniques, such as an extension of [Lemma 1, Kaufmann et al., 2016] (Lemma 11 in our paper) which helped us to transform the little-kl divergence into easier notation. > W5) The simulations are limited to just two sets of experiments. The baselines or compared algorithms are just 2/1 in the respective setting. And it might be better to include different values since the authors consider the fixed confidence setting and change priors as well. We made a brief comparison between Algorithm 1 and TTUCB in a $K=10$ arm environment with different prior means, prior variances and instance variances. (Because of the time constraint, it was not easy to also finish TTTS.) Details of the experiments are in the next comment. After 500 simulations, the expected stopping time of our algorithm was $1.1\times 10^5$, while TTUCB attains $7.1\times 10^5$. Even after giving an advantage to TTUCB, our algorithm shows a much smaller sample complexity than TTUCB. To avoid possible confusion, allow us to clarify our experiment result again. In Table 1, the compared algorithms have 10 times larger expected stopping time than ours, and about 500 times larger maximum stopping time. 
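For readers less familiar with the elimination scheme discussed above (W3/Q1), here is a toy frequentist successive-elimination loop with an anytime confidence radius of the $\sqrt{\log(\cdot)/N}$ type mentioned in Q1. This is only a sketch of the classical skeleton; the paper's Algorithm 1 adds Bayesian-specific machinery (e.g. early stopping on small-gap instances), and the constants in the radius below are illustrative, not the paper's:

```python
import math
import random

def successive_elimination(pull, k, delta, max_rounds=10_000):
    """Classical successive elimination for noisy arms (toy sketch).

    pull(i) returns one noisy reward of arm i. All active arms are pulled
    once per round; an arm is dropped when its upper confidence bound falls
    below the largest lower confidence bound among active arms.
    """
    active = set(range(k))
    sums = [0.0] * k
    n = 0  # number of pulls so far of each active arm
    while len(active) > 1 and n < max_rounds:
        n += 1
        for i in active:
            sums[i] += pull(i)
        # anytime radius of the sqrt(log(.)/N) type (illustrative constants)
        rad = math.sqrt(2.0 * math.log(4.0 * k * n * n / delta) / n)
        means = {i: sums[i] / n for i in active}
        best_lcb = max(means[i] - rad for i in active)
        active = {i for i in active if means[i] + rad >= best_lcb}
    return max(active, key=lambda i: sums[i])
```

The Bayesian point made in this thread is about the prior average of the stopping time of such a loop: under a Gaussian prior, instances with tiny gaps (where elimination is slow) are rare, which is why the expected sample complexity can beat the frequentist worst case.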
--- Rebuttal Comment 1.1: Title: Details on experiment (W5) Comment: **Experiment Setting**
- Number of arms $k=10$
- Prior means: we sample 10 random numbers before the experiment starts and set them as the prior means. Here is the list of prior means: [-0.053 0.528 -0.332 -0.368 -0.273 0.909 0.418 -1.17 0.873 -0.405]
- Prior variance: we sample 10 random numbers before the experiment starts and set them as the prior standard deviations. Here is the list of prior stds: [0.604 1.477 1.163 0.988 0.560 0.513 0.997 1.332 0.828 0.833]
- Instance variance: we sample 10 random numbers before the experiment starts and set them as the instance standard deviations. Here is the list of instance stds: [1.498, 1.262, 1.485, 0.963, 1.375, 0.969, 1.357, 1.238, 1.088, 0.699]
- Number of experiments: 500
- We stop additional sampling of TTUCB when its number of samples exceeds $10^8$ because of the time constraint. This means we gave TTUCB an 'advantage' in expected stopping time (since it makes TTUCB stop earlier than it should).
--- Rebuttal Comment 1.2: Comment: Thanks to the authors for addressing all my questions and concerns. The explanations are pretty convincing and I have bumped my score. Good luck.
Summary: This paper considers a best arm identification problem in the Bayesian multi-armed bandit setting. The arms' distributions are generated according to the unknown prior, and the probability of error is averaged over this prior distribution. The paper makes two key contributions: first, a lower bound characterizing the fundamental hardness of BAI in the Bayesian setting (where the $L(H)$ parameter is identified as playing a key role), and second, an achievability part that proposes a successive elimination-based algorithm (with early stopping in case the arm instance has very close best arms). The algorithm achieves within a logarithmic factor of the lower bound. Strengths: The paper brings out an interesting twist on BAI, where the Bayesian setting is statistically easier on "an average" compared to the frequentist setting. The paper explains the intuition clearly throughout, and is executed competently. Weaknesses: There is no glaring weakness. It appears to be obvious in hindsight that "very close" instances do not occur with substantial probability, when we have a Gaussian prior, making the hard frequentist instances statistically irrelevant in the Bayesian setting. This makes the BAI problem easier in terms of the expected stopping time. If the prior model creates close instances with non-vanishing probability, I suppose we'd be back to the frequentist performance. Technical Quality: 3 Clarity: 3 Questions for Authors: The definition of $\mathcal H_\mu$ at the beginning of page 4 may not be precise...is it an event or a $\sigma$-algebra? In the former case, with continuous Gaussian priors, it is not clear if the event has non-zero probability, which makes all the conditional probabilities technically dubious. Please clarify. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are acknowledged in the conclusions reasonably. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of the paper and the insightful comments. In the following, we address the question raised. > The definition of ${\mathcal{H}}_{\mu}$ at the beginning of page 4 may not be precise...is it an event or a $\sigma$-algebra? In the former case, with continuous Gaussian priors, it is not clear if the event has a non-zero probability, which makes all the conditional probabilities technically dubious. Please clarify. We are using the law of total expectation $\mathbb{E} [X] = \mathbb{E} [\mathbb{E} [X|\sigma(Y)]] = \mathbb{E} [g(Y)]$ where $\sigma(Y)$ is the sigma algebra generated by the random variable $Y$ and $g(y) = \mathbb{E} [X|Y=y]$. This argument holds even when the event $\{Y=y\}$ has zero probability. See, for example:
1. Example 4.1.6 in 'Durrett, Rick. Probability: theory and examples. Vol. 49. Cambridge University Press, 2019'
2. Example 4 in https://www.stat.cmu.edu/~arinaldo/Teaching/36752/S18/Notes/lec_notes_6.pdf
3. https://math.stackexchange.com/questions/1332879/conditional-probability-combining-discrete-and-continous-random-variables

In particular, in our case $Y$ is $\mu$ and we are computing $\mathbb{E}\_{\mu_0 \sim \mathcal{H}} [\mathbb{P}(J\neq i^*(\mu)|\mu=\mu_0)]$. In this sense, $\mathcal{H}_{\mu_0} = \{\mu = \mu_0\}$ is an event. In the Bayesian setting, $\mu$ is a random variable, and we wanted to express the probability of error (PoE) in the form of the law of total expectation, to emphasize that our probability of error is the marginalized form of the PoE. We used the notation $\mathcal{H}\_{\mu}$ for simplicity. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: I appreciate your technical clarification. I keep my score.
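The tower-rule identity invoked in this rebuttal can be checked numerically. The following is a minimal Monte Carlo sketch with an illustrative choice of distributions (a continuous $Y$, so each event $\{Y=y\}$ has probability zero, and a Bernoulli $X$ whose conditional mean plays the role of $g$); it is not the paper's bandit setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Y is continuous, so {Y = y} has probability zero for every y.
# Conditionally on Y = y, let X ~ Bernoulli(sigmoid(y)), so g(y) = sigmoid(y).
# The tower rule E[X] = E[g(Y)] still holds despite the zero-probability events.
Y = rng.normal(size=1_000_000)
p = 1.0 / (1.0 + np.exp(-Y))                     # g(Y) = E[X | Y]
X = (rng.uniform(size=Y.size) < p).astype(float)

lhs = X.mean()    # direct Monte Carlo estimate of E[X]
rhs = p.mean()    # E[g(Y)], marginalizing the conditional mean over Y
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```

By symmetry of the standard normal, both estimates are close to 0.5 here; the point is only that marginalizing the conditional mean recovers the unconditional one.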
Summary: In this paper, the focus is on the fixed-confidence best-arm identification problem within a Bayesian setting. The objective is to determine the arm with the largest mean with a certain confidence, given a known prior distribution. Existing work in FC-BAI is mostly in the frequentist setting. This paper shows that the popular algorithms used in the frequentist setting are not optimal in this setting. Additionally, a lower bound for this setting is derived, using a sampling complexity constant and the confidence level. Lastly, a successive elimination method that matches the order of the derived lower bound along with experiments comparing the performance of popular frequentist algorithms and this algorithm are presented. Strengths: - The authors show that popular frequentist algorithms are suboptimal in the Bayesian setting. - The paper presents a novel algorithm, which is significant because, according to related work, it is uncommon to have a stopping time in Bayesian optimization. - In the successive elimination method, the indifference zone is not a parameter. - The derived lower bound is novel. Weaknesses: 1. 47: Example 1 lacks proof or reference for the number of samples. 2. The authors do not discuss how they derived the constant L(H) and its significance. Since this constant represents sampling complexity and is not a traditional term, it is harder to compare the complexity to existing work. 3. 171: In Chapter 3, the authors claim there is a known lower bound for the expected stopping time without providing any reference. 4. 196: The derivation of the inequalities in Section 3.1 is not clear. 5. 242-243: In Chapter 5, there is no explanation for why this particular confidence width was considered. 6. In Chapter 6, there is no discussion on why Algorithm 1 shows higher error percentages than the other two algorithms. 7. 313: there are two dots instead of one. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - 267-268: In Remark 3, can the authors explain if in the general case the result matches with the lower bound? - 309-313: The authors state that the expected stopping time of the track-and-stop method is at least half of the TTTS and TTUCB methods, which according to Table 1, is on average at least 300. This number is less than that for Algorithm 1, so can the authors explain why they didn’t compare? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The research was done only for the Gaussian distribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of the paper and the insightful comments. We will revise the typos and references in our final version. In the following, we address the main questions raised. > W2) The authors do not discuss how they derived the constant L(H) and its significance. Since this constant represents sampling complexity and is not a traditional term, it is harder to compare the complexity to existing work. From Section 3, we find that measuring the probability when $\mu$ achieves a small suboptimality gap is crucial. The constant $L(H)$ represents the ratio between the gap $\Delta$ and the probability that $\mu$ achieves a suboptimality gap smaller than $\Delta$, as outlined in Lemma 1. We will include a discussion on this in our final version. > Q1) 267-268: In Remark 3, can the authors explain if in the general case, the result matches with the lower bound? Our lower bound holds only when $\delta$ is sufficiently small, whereas the result in Remark 3 (upper bound) is the case of moderately large $\delta$, so we cannot say the result matches with the lower bound. > Q2) 309-313: The authors state that the expected stopping time of the track-and-stop method is at least half of the TTTS and TTUCB methods, which according to Table 1, is on average at least 300. This number is less than that for Algorithm 1, so can the authors explain why they didn’t compare? To avoid possible confusion, allow us to clarify our experiment result again. In Table 1 of the paper we submitted, our algorithm has a stopping time of $10^4$, TTUCB has a stopping time of $1.5\times 10^5$, and TTTS has an even higher stopping time. Even if the stopping time of Track-and-Stop is half that of TTUCB, it would be $7.6\times 10^4$, which is still over 7 times larger than our algorithm. > W4) 196: The derivation of the inequalities in Section 3.1 is not clear. We will add more explanation on Section 3.1 in our final version. 
Corollary 3 implies that for any fixed mean vector $\mu \in \mathbb{R}^2$, the stopping time is lower bounded by $\log (1/\delta)/(\mu_1 -\mu_2)^2$. For the following sequence of inequalities (equation in lines 196-197), $\mathbb{E}\_{\mu \sim H} [\tau] \geq \mathbb{E}\_{\mu \sim H} [\tau \cdot \mathbb{1}[|\mu_1 -\mu_2| \leq \epsilon]] \geq \frac{\log \delta^{-1}}{\epsilon^2} \mathbb{P}\_{\mu \sim H} [|\mu_1- \mu_2| \leq \epsilon] = \Omega(\frac{\log \delta^{-1}}{\epsilon})$

1) The first inequality holds since $\tau$ is a positive random variable.
2) The second inequality holds by the following:
- the law of total expectation, $\mathbb{E}\_{\mu \sim H} [\tau \cdot \mathbb{1}[|\mu_1 -\mu_2| \leq \epsilon]] = \mathbb{E}\_{\mu \sim H} [\mathbb{E}[\tau |\mu] \cdot \mathbb{1}[|\mu_1 -\mu_2| \leq \epsilon]]$
- and Corollary 3, $\mathbb{E}\_{\mu \sim H} [\mathbb{E}[\tau|\mu] \cdot \mathbb{1}[|\mu_1 -\mu_2| \leq \epsilon]] \geq \mathbb{E}\_{\mu \sim H} [\frac{\log \delta^{-1}}{(\mu_1 - \mu_2)^2} \cdot \mathbb{1}[|\mu_1 -\mu_2| \leq \epsilon]]\geq \frac{\log \delta^{-1}}{\epsilon^2} \mathbb{P}\_{\mu \sim H} [|\mu_1- \mu_2| \leq \epsilon]$

3) Finally, by Lemma 1, we can replace $\mathbb{P}(\cdots)$ with $L(H)\cdot \epsilon$, which yields the RHS.

> W5) 242-243: In Chapter 5, there is no explanation for why this particular confidence width was considered.

The width is chosen to ensure $LCB(i,t) \leq \mu_i \leq UCB(i,t)$ for all $t\in \mathbb{N}$ and $i\in[k]$ with high probability. For more details, please check Lemma 15 (Appendix E). In our final version, we will add more explanation about the proof in Section 5.

> W6) In Chapter 6, there is no discussion on why Algorithm 1 shows higher error percentages than the other two algorithms.

When our algorithm finds the suboptimality gap is too narrow, instead of trying to identify the best arm, it stops sampling to avoid excessive sample complexity.
On the other hand, the other two algorithms always keep sampling until they identify the best arm; this difference causes the gap in error rates. However, as we mentioned in Section 3, an algorithm in the Bayesian setting should set an indifference condition, or it will attain an excessive scale of sample complexity, even diverging to infinity in expectation. --- Rebuttal Comment 1.1: Comment: Thank you for providing detailed explanations in response to the points I raised in my review. I appreciate the effort and clarity with which you addressed my concerns. After reviewing your responses, I understand and acknowledge the explanations provided. While your rebuttal has clarified several aspects of your work, I believe that a score of 6 is an appropriate reflection of the significance of your research. At this point, I do not plan on changing my score. I wish you the best of luck with your submission.
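The stopping behavior described in this rebuttal — eliminate clearly suboptimal arms via confidence bounds, but stop early once every surviving arm is provably within a small gap of the best — can be sketched as a toy loop. The confidence widths, the indifference rule, and all constants below are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

def eliminate(mu, sigma, delta=0.01, eps=1e-2, seed=0):
    """Toy elimination loop with an indifference (early-stopping) condition.

    mu, sigma: true means and known std devs of the sampled bandit instance.
    eps: indifference threshold -- once every surviving arm is provably
    within eps of the best, stop instead of paying more samples.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    k = len(mu)
    counts, sums = np.zeros(k, dtype=int), np.zeros(k)
    active = np.ones(k, dtype=bool)
    while True:
        idx = np.flatnonzero(active)
        sums[idx] += rng.normal(mu[idx], sigma[idx])   # sample surviving arms
        counts[idx] += 1
        means = sums[idx] / counts[idx]
        width = sigma[idx] * np.sqrt(2.0 * np.log(1.0 / delta) / counts[idx])
        ucb, lcb = means + width, means - width
        keep = ucb >= lcb.max()              # drop clearly suboptimal arms
        active[:] = False
        active[idx[keep]] = True
        best = int(idx[np.argmax(means)])
        # stop when one arm remains, or when all survivors are within eps
        if keep.sum() == 1 or ucb[keep].max() - lcb.max() <= eps:
            return best, int(counts.sum())
```

On instances with a wide gap the loop stops by elimination; on near-tied instances the `eps` rule caps the sample complexity, at the price of occasionally not certifying the exact best arm — mirroring the trade-off explained above.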
Summary: The paper studies the problem of FC-BAI; the goal is to find the arm with the largest mean with a given probability of correctness. It analyzes FC-BAI for a Gaussian bandit model where the arms' reward means are sampled from a known prior and the reward variances are known. It proves that other (frequentist) FC-BAI algorithms can fail in this setting. Then it proves a lower bound for the sample complexity of this problem. Finally, an elimination algorithm is introduced which is near-optimal. The main difference between this algorithm and previous algorithms is its less conservative stopping criterion: it stops if the remaining arms have a small enough optimality gap. Strengths: *Soundness*: the paper clearly defines the problem, introduces a lower bound for the problem, theoretically proves the sub-optimality of previous algorithms, and conducts simulation studies along with an ablation experiment. *Presentation*: The paper provides all the proof sketches and pseudo-codes required for reproducibility of the results. It efficiently uses its notation and avoids overloading symbols. Weaknesses: *Contribution*: The paper considers a small subset of bandit models: Gaussian rewards with known prior and variance. This limits the applicability of the algorithm and theoretical results. As also mentioned in the discussion section, it would be great if they could generalize the algorithm to deal with a misspecified or under-specified prior (see [1]), or unknown variance. [1] Azizi, J., Kveton, B., Ghavamzadeh, M. and Katariya, S. 2023. Meta-Learning for Simple Regret Minimization. Proceedings of the AAAI Conference on Artificial Intelligence. 37, 6 (Jun. 2023), 6709-6717. DOI:https://doi.org/10.1609/aaai.v37i6.25823. Technical Quality: 4 Clarity: 4 Questions for Authors: 1.
In Thm 4, the constants in the lower bound seem very small ($\frac{1}{16 e^4} \approx \frac{1}{2^8}$), so it could entail a trivial lower bound in most cases, especially since it is a lower bound on the sample complexity (a natural number). What is the actual value of this lower bound in some of your example simulations? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the encouraging evaluation and valuable comments. The following is our response to the question you raised. >In Thm 4, the constants in the lower bound seem very small ($\frac{1}{16e^4}\approx 2^{-8}$), so it could entail a trivial lower bound in most cases, especially since it is the lower bound to the sample complexity (natural number). What is the actual value of this lower bound in some of your example simulations? We agree that our lower bound tends to be much smaller than the actual values. For example, when $\delta=0.01$, for the instance in our Example 1 ($\mu=(1,0.9,0.1)$), the sample complexity lower bound is 0.0468, smaller than 1. However, the main objective of our lower bound is to check the order of the sample complexity. Through this analysis, we found that $L(H)$ is a crucial quantity in designing the algorithm, and we were able to propose our Algorithm 1, which is optimal up to a logarithmic factor.
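The role of $L(H)$ discussed in these rebuttals — that $\mathbb{P}_{\mu\sim H}(|\mu_1-\mu_2|\le\epsilon)$ scales linearly in $\epsilon$, with $L(H)$ capturing the slope up to constants — can be checked numerically. Below is a minimal Monte Carlo sketch with an illustrative independent $N(0,1)$ prior on two arms (not the paper's exact setup or constants).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arms with i.i.d. N(0, 1) prior means (illustrative choice): the gap
# mu_1 - mu_2 is N(0, 2), so for small eps,
#   P(|mu_1 - mu_2| <= eps) ~ (2 / sqrt(4*pi)) * eps  ~ 0.564 * eps,
# i.e. linear in eps -- the scaling behind the L(H) constant of Lemma 1.
mu = rng.normal(size=(2_000_000, 2))
gap = np.abs(mu[:, 0] - mu[:, 1])
ratios = [(gap <= eps).mean() / eps for eps in (0.2, 0.1, 0.05)]
print(ratios)   # roughly constant across eps, near the density value 0.564
```

The near-constant ratio across shrinking `eps` is exactly why "very close" instances have vanishing prior mass, which is what makes the Bayesian problem easier than the worst-case frequentist one.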
NeurIPS_2024_submissions_huggingface
2024
Transformers are Minimax Optimal Nonparametric In-Context Learners
Accept (poster)
Summary: This paper analyzed in-context learning of a transformer consisting of a DNN and a linear attention layer pretrained on nonparametric regression tasks. The authors derived a general bound on the generalization error consisting of the approximation error, the in-context generalization error, and the pretraining generalization error. They also showed that the ICL prediction given by the pretrained TF is minimax optimal when the nonparametric regression functions are from the Besov space or the piecewise $\gamma$-smooth class. Strengths: The paper is well-written and the theoretical results are solid. The authors made reasonable assumptions on the regression tasks and the TF function class that do not significantly simplify the problem. From a technical perspective, this work leverages the approximation ability of DNNs and the ICL ability of single-layer linear attention to derive near-optimal generalization error bounds. A few generalizations are also considered, e.g., the anisotropic Besov space and the piecewise $\gamma$-smooth class. Weaknesses: The paper considered a model different from the standard TF models in practice, in the sense that a trainable DNN-based feature map is applied to the tokens before the attention layer. The proofs in this work rely heavily on this feature map, as it is used to approximate the basis functions of the Besov space. Lack of simulation results: the work showed that the empirical risk minimizer is minimax-optimal but didn't analyze the training dynamics, so it would be good to have empirical evidence showing that pretraining indeed finds an empirical risk minimizer. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. The functions in the Besov space are well approximated by linear combinations of a finite number of basis functions in the space. Suppose now the feature map $\phi$ can well-approximate the basis functions (Assumption 3) and is fixed to be $\phi^*$, and only the linear weight in the attention layer is trainable.
I wonder how much difference there is between this simplified setting and previous works on ICL for linear regression. 2. I wonder if it is possible to theoretically analyze the training dynamics in the setting where the feature map is given and fixed and only the linear attention is trainable. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: As the authors already mentioned, this work only analyzed the generalization error of the ERM but not the training dynamics. The transformer model in this work is limited to a single layer of self-attention and does not include the softmax activation or the MLP layer. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
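The simplified architecture discussed in this review — a DNN feature map feeding a single linear attention layer that aggregates the prompt — can be sketched as a forward pass. The feature map, dimensions, initialization, and the exact attention parameterization below are illustrative assumptions for concreteness, not the paper's precise model.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dnn_features(x, W1, W2):
    """Hypothetical 2-hidden-layer MLP feature map phi: R^d -> R^N."""
    return relu(relu(x @ W1) @ W2)

def linear_attention_icl(X, y, x_q, params):
    """DNN features + one linear attention layer (a common parameterization):
    prediction = phi(x_q)^T Gamma (1/n) sum_i phi(x_i) y_i."""
    W1, W2, Gamma = params
    Phi = dnn_features(X, W1, W2)        # (n, N): features of the n prompt points
    h = Phi.T @ y / len(y)               # (N,):   linear summary of the prompt
    return dnn_features(x_q, W1, W2) @ Gamma @ h

rng = np.random.default_rng(0)
d, N, n = 2, 8, 16                       # input dim, feature dim, prompt length
params = (rng.normal(size=(d, N)) / np.sqrt(d),
          rng.normal(size=(N, N)) / np.sqrt(N),
          np.eye(N))                     # Gamma init; trained during pretraining
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
pred = linear_attention_icl(X, y, rng.normal(size=d), params)
print(float(pred))
```

In the reviewer's simplified setting, `W1, W2` would be frozen at some $\phi^*$ and only `Gamma` trained, which reduces the model to a linear map of the prompt summary.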
Rebuttal 1: Rebuttal: Thank you for your detailed review and positive assessment of our theoretical contributions! Here are our responses to the comments. **Weakness 1 & Limitations.** While the reviewer mentioned in the Limitations section that our model does not include the MLP layer, we find it illuminating to consider our setup as a deep Transformer with all attention layers (except the last one) and layernorm/output layers removed, so that the MLP layers have combined into a feedforward DNN. Since this simplified model already achieves optimal ICL, we can expect a full Transformer to achieve the same. Moreover, our techniques can be partially applied to settings with multiple attention layers or nonlinear (e.g. softmax) attention; please see **Item 1** of the global response where we address this in depth. **Weakness 2.** Motivated by the reviewers' comments, we have conducted new numerical experiments justifying our simplified model setup (Weakness 1) and the assumption of empirical risk minimization (Weakness 2). Please see **Item 3** of the global response for details. **Question 1.** Indeed, if we completely remove the DNN and set both the features and basis to be simply the coordinate mappings $x_1,\cdots,x_d$, this reduces to the class of linear maps in prior works, e.g. Zhang et al. (2023). In a nutshell, our contribution is extending this to infinite-dimensional nonparametric task classes via learnable representations, enabling fine-grained sample complexity analysis and guaranteeing optimality of ICL with deep architectures. **Question 2.** If the features are fixed, the dynamics are arguably less interesting. Since the attention layer output is linear in the attention matrix $\Gamma$, the pretraining loss function is always convex (ignoring clipping). Hence gradient descent is easily shown to converge exponentially fast to a minimizer regardless of the number of samples, random initialization, etc. 
which can also be written down explicitly by setting the matrix derivative of the loss in line 145 to zero and solving for $\Gamma$. The linear regression setup of the previous work by Zhang is not quite as simple, in that they also include a scalar multiplier $\sigma$ representing the value matrix and study the joint dynamics. In that paper, convergence is proved by showing a PL inequality under restrictive assumptions on initialization. Rough calculations show that a similar result can be obtained in the fixed-feature setting as well, justifying the fixed value matrix assumption in our model. Some other works also indicate that the attention mechanism may possess structures favorable for optimization; see our discussion in Section 1.1. Nevertheless, the main novelty of our work comes from incorporating the representational capability of the MLP layers, for which a dynamical analysis is unfortunately still out of reach. --- Rebuttal 2: Comment: Thank you again for your time and effort in reviewing our paper! As the discussion period is ending soon, we are following up to see if our response was satisfactory in addressing the reviewer's questions. If not, we would be happy to discuss any remaining concerns. --- Rebuttal Comment 2.1: Comment: Thanks for the response. I will maintain my positive score.
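The convexity argument in the answer to Question 2 — with a fixed feature map, the pretraining loss is quadratic in the attention matrix $\Gamma$, so plain gradient descent converges — can be illustrated with a minimal numpy sketch. The feature map, dimensions, and step size below are assumptions chosen for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, T = 3, 20, 200                       # feature dim, prompt length, tasks

def phi(x):                                # fixed (hypothetical) feature map
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

# Pretraining data: task t has y = <beta_t, phi(x)> + noise, beta_t ~ prior.
X = rng.uniform(-1, 1, size=(T, n + 1))    # last column is the query point
beta = rng.normal(size=(T, d))
F = phi(X)                                  # (T, n+1, d) features
Y = np.einsum('tnd,td->tn', F, beta) + 0.1 * rng.normal(size=(T, n + 1))

h = np.einsum('tnd,tn->td', F[:, :n], Y[:, :n]) / n   # prompt summaries
a = F[:, n]                                            # query features
G = np.zeros((d, d))                                   # attention matrix Gamma
losses = []
for _ in range(2000):
    r = np.einsum('td,de,te->t', a, G, h) - Y[:, n]    # residuals per task
    losses.append(float(np.mean(r ** 2)))
    # gradient of the quadratic loss: (2/T) sum_t r_t a_t h_t^T
    G -= 0.02 * (2.0 / T) * np.einsum('t,td,te->de', r, a, h)
# the objective is quadratic (convex) in G, so GD decreases the loss steadily
```

Since the prediction is linear in `G`, the loss surface is a convex quadratic regardless of the data, which is why no initialization conditions are needed here, in contrast to the joint dynamics studied by Zhang et al.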
Summary: This paper explores the ICL capabilities of transformers from a statistical learning theory perspective. It focuses on transformers with a deep neural network and a linear attention layer, pretrained on nonparametric regression tasks from function spaces like the Besov space and the piecewise gamma-smooth class. The authors demonstrate that sufficiently trained transformers can achieve or even surpass the minimax optimal estimation risk by encoding relevant basis representations during pretraining. The paper also establishes results that explain the pretraining and in-context generalization gaps, which is essential for understanding ICL. Strengths: 1. The topic of this work is both interesting and important, addressing key aspects of ICL in transformers. 2. The results developed, especially Theorem 4.5, are very encouraging and could help the community better understand LLMs and their ICL capacity. 3. The analysis provided is rigorous and sound, offering new tools and methodologies for theoretical analysis in the ICL literature. Weaknesses: 1. The writing and presentation of the paper need improvement. For instance, in Section 4, the flow feels disjointed, with several results appearing to be stitched together without a clear, organized connection. 2. Although the ICL capabilities of transformers are powerful and of great interest, the simplified transformer model presented in this paper is not very realistic. While it is common to use simplified models for analyzing LLMs, the paper would greatly benefit from numerical experiments demonstrating the claimed theoretical findings. Technical Quality: 3 Clarity: 2 Questions for Authors: While reading this paper, I feel it provides new insights into understanding ICL. However, I cannot find concrete evidence of how this theoretical framework can offer supportive insights for practitioners using ICL. The simplified model without numerical experiments cannot indicate how real LLMs behave.
Could the authors provide more detailed explanations on how this work connects to the mechanisms of ICL and LLMs in practice? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and helpful suggestions, which have helped us greatly to improve our paper! **Response to Weakness 1.** Unfortunately, Section 5 was completed at the last minute, which resulted in the flow of the paper being rather disjointed. Together with improved lower bounds, we restate our take-home message (besides in-context optimality in $n$) as follows. We believe this novel understanding will be helpful to both theoreticians and practitioners. * (In the Besov space setting) The obtained upper bound $n^{-\frac{2\alpha}{2\alpha+d}}$ when $T\gtrsim nN$, which is minimax optimal in $n$, has also been shown to be jointly optimal in $n,T$. Hence **ICL is provably optimal in the large $T$ regime.** * (In the coarser basis setting) We obtained an improved lower bound $n^{-\frac{2\alpha}{2\alpha+d}} + (nT)^{-\frac{2\tau}{2\tau+d}}$ using the method in Appendix E.3. If $T=O(1)$, this gives the slower $\tau$ rate, while if $T$ is larger this gives the faster $\alpha$ rate. This aligns with Corollary 4.9 (which only gave an upper bound), hence **ICL is provably suboptimal in the small $T$ regime.** When combined, these results are stronger than optimality in only $n$. (Since the Transformer requires $n$ in-context samples plus $n\times T$ pretraining samples, one could argue that standard minimax rates do not apply. However, the above lower bounds do apply rigorously to any meta-learning algorithm in the $n\times T$ samples.) They also align with experimental observations of the task diversity threshold (Raventos 2023), rigorously supporting the importance of varied tasks over increasing prompt length. **Response to Weakness 2 & Question 1.** Motivated by your comments on practical relevance, we have conducted new numerical experiments justifying our simplified model setup and the assumption of empirical risk minimization. Figures can be found in the attached PDF of the global response.
We implement and compare the following toy models: * Linear Transformer: the model studied in our paper, where a DNN (2 hidden layers) feeds into a simplified linear attention layer * Softmax Transformer: the same model as (a) but with softmax attention * Full Transformer: an ordinary transformer with 2 stacked encoder layers Architectures are compared in Figure 1. The number of feedforward layers, width of hidden layers, learning rate etc. are set to equal for a fair comparison. * Figure 2 shows training (solid lines) and test loss curves (dashed lines) during pretraining. All 3 architectures exhibit similar behavior and converge to near zero training loss, justifying the use of our simplified model and also supporting the assumption that the empirical risk can be minimized. * Moreover, Figure 3 shows the converged losses over a wide range of $N,n,T$ values (median over 5 runs). We verify that increasing $N,n$ leads to decreasing train and test error, corresponding to the approximation error of Theorem 3.1. We also observe that increasing $T$ tends to improve the pretraining generalization gap up to a threshold, confirming our theoretical analyses. Again, this behavior is consistent across the 3 architectures. * Although the dimensions and architectures are relatively small due to limited time and compute, we plan to scale up our experiments and add them to the final version of the paper. Please also see **Item 1** of the global response where we discuss how our theoretical results may be extended to more complex transformer architectures. We also humbly ask the reviewer to consider raising their score if their concerns were addressed. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. My questions are partly addressed and I'll maintain my rating. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal! 
As your criticisms were generally on the lack of numerical experiments, if there are any other types of experiments which you believe will further benefit our paper, please inform us and we will strive to implement them to gain more insight into our analyses.
Summary: This work shows that ICL can perform non-parametric regression at an optimal rate. Section 3 gives an upper bound on ICL error in terms of a metric entropy of the representation class. Section 4 instantiates the bound for DNN representations and shows it to be optimal. Section 4.3 explores ways to reduce the dependence on dimension. Section 5 gives lower bounds for any ICL (not just transformers). There is a sufficiently expressive representation class $\mathcal{F}$ (here we take DNNs) from which we can draw a $\phi^*$ such that linear attention on top of this solves regression well. In that sense, this paper generalizes the idea from linear attention for linear regression that the attention inverts the data covariance. Strengths: The paper contains a general analysis of ICL error for regression problems. The comparison to non-parametric rates is interesting and possibly the right way to extend the so-far mostly linear analysis in the literature to problems beyond linear regression. Weaknesses: This is a general analysis of a very specific simplification of the transformer. The setting is that there are trainable layers before the attention layer, and then exactly one linear attention layer. Is there any hope to extend this analysis to more attention layers / non-linear attention / constant $N$? When $N$ is constant, the error is constant, which seems vacuous considering everything is bounded. Is that an artifact of the analysis? Is it possible to have a setting where we remove the DNN and force the features themselves to satisfy Assumption 3? Would that reduce to the linear regression setting of prior work? If so, it seems very important to justify how we can find a $\phi^*$ satisfying Assumption 3 for DNNs. Technical Quality: 3 Clarity: 2 Questions for Authors: A matter of notation: in Assumption 1, why aren’t the basis functions the first N functions, rather than the functions from $\underline{N}$ to $\overline{N}$?
Also, are the functions past $\overline{N}$ not spanned by the basis? Remark 2.1 offers some explanation. I think it would be helpful to write the main paper in the setting where the $\psi$'s span the space and leave the generalization for the appendix. What role does the sparsity $S$ play, and why is it set to be $O(1)$? Isn't it unrealistic that $N$ should scale in $n$ for a real transformer (one of the key selling points of transformers is that the parameters don't scale in the context length)? Do the authors think this is due to the analysis or is it fundamental for attention? It would be helpful to instantiate a target function class in the main paper, specify what $\alpha$ is, etc. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and insightful questions! Our responses are as follows. * **It would be helpful to instantiate a target function class in the main paper, specify what $\alpha$ is, etc.** (We answer this question first to help in understanding our paper.) **Throughout Section 4, we consider concrete task classes including the Besov space, anisotropic Besov space, and piecewise $\gamma$-smooth function class.** Our main result for DNNs (Theorem 4.5) is stated when $\mathcal{F}^\circ$ is the unit ball in Besov space $B_{p,q}^\alpha(\mathcal{X})$, which is a very wide function class generalizing Holder and Sobolev spaces; please see Section 4.1 for details. Here $\alpha$ is the smoothness and $p,q$ are additional parameters controlling spatial inhomogeneity. These relate to the decay rate $s$ of Assumption 2 by $s=\alpha/d$; the high-level Assumptions 1-3 all follow from Assumption 4 in this setting. We also show the curse of dimensionality can be avoided when $\mathcal{F}^\circ$ is extended to the *anisotropic* Besov space in Section 4.3. Furthermore, for the multilayer Transformer setting (Section 4.5) the inputs $x$ are sequences and $\mathcal{F}^\circ$ is the piecewise $\gamma$-smooth class, which is a more realistic model for text by allowing the position of important tokens to depend on input. * **Is there any hope to extend this analysis to more attention layers/non-linear attention/constant $N$?** Please see Item 1 of the global response where we address this question in depth. * **When $N$ is constant, the error is constant, which seems vacuous.** While our bounds (e.g. Theorem 4.5) hold for all $N$, the risk dependency is not an artifact but a natural result of how our task class is set up. After pretraining our transformer has learned $N$ feature maps $\phi_1,\cdots,\phi_N$ and its output $f_\Theta(\mathbf{X},\mathbf{y},\cdot)$ is always contained in the span of these $N$ functions. 
However, the true test task $F_\beta^\circ$ is randomly chosen from an infinite-dimensional space, and so the $L^2$ regression error is fundamentally lower bounded by its **Kolmogorov width** -- the minimum approximation error by an $N$-dimensional linear subspace -- which scales as $N^{-\alpha/d}$ for the Besov space. Hence this error is unavoidable for *any* fixed-basis approximation scheme; an implication of our analysis is that Transformers attain this optimal rate (the first term in Theorem 4.5). Besides the approximation error, the pretraining and in-context generalization errors decrease as $O(1/T)$ and $O(1/n)$, which is still a useful result if one is concerned with improving generalization. * **Is it possible to have a setting where we removed the DNN?** Indeed, if we completely remove the DNN and set both the features and basis to be simply the coordinate mappings $x_1,\cdots,x_d$, this reduces to the class of linear maps in prior works. In a nutshell, our contribution is extending this to infinite-dimensional nonparametric task classes via learnable representations, enabling fine-grained sample complexity analysis and guaranteeing optimality of ICL with deep architectures. * **It seems very important to justify how we can find a $\phi$ satisfying Assumption 3 for DNNs.** That is exactly what Appendix C.1.1 (Verification of Assumptions) and Lemma 4.4 are for! We show that the abstract Assumptions 1,2,3 are all replaced by the single concrete Assumption 4 when the target class is set to the Besov space. For the multilayer Transformer setting (Section 4.5), Assumption 3 instead follows from Theorem D.2. We will make this point clearer in the paper. * **Why aren’t the basis functions the first N functions?** In fact, this is to address a **unique technical difficulty** that we newly uncover: the necessity of the inverse covariance matrix $\Psi_N^{-1}$ to approximate the attention matrix. 
Wavelet systems of Besov-type spaces are fundamentally co-dependent and form a multi-resolution scheme: there are $O(2^k)$ independent basis elements at each resolution $k\in\mathbb{N}$, which can always be refined as a linear combination of higher resolution wavelets. This makes $\Psi_N$ singular and affects various decay rates, so we cannot simply take the first $N$ elements. This was not a problem in prior attention-only works ($\text{Var}(x)$ was simply assumed to be positive definite) nor in any existing minimax optimality works (which don't have a product structure). Nonetheless, we were still able to prove optimality by decomposing all wavelets to the finest resolution -- numbered from $\underline{N}$ to $\overline{N}$ -- via the refinement analysis in Appendix C.3, hence the additional steps to account for the aggregated coefficients $\bar{\beta}$. While this additional notation may cause confusion upon first reading, we have left it in since specializing to the wavelet system to obtain optimality is an important part of our work. * **What role does the sparsity $S$ play, and why is it set to be $O(1)$?** The sparsity bound $S=O(1)$ is merely to clarify the minimum number of essential parameters and can be completely removed. Recall that a fully connected network has $S\sim LW^2$. The dominating term from the DNN class entropy (line 675) is $SLN\log W$, so that letting $S=O(\log N)$ (fully connected depth $L$ network) would only incur an additional log factor to yield $N(\log N)^2$. Since the $N^2\log N$ term from the attention matrix entropy dominates this, the overall bound remains unchanged. (continued in comment) --- Rebuttal 2: Comment: (continued from rebuttal) * **Isn't it unrealistic that $N$ should scale in $n$ for a real transformer?** In practice we agree that the architecture should not depend on $n$, especially if prompts of any length are allowed. 
However since we assume **all** prompts (during both pretraining and ICL) are of fixed length $n$, it is reasonable to choose a more powerful architecture for larger $n$. Strictly speaking, $N$ does not need to scale in $n$ since our bounds hold for all $N,n,T$. But the bound itself is natural: as we mentioned, approximation error must decrease in $N$ due to the infinite dimensionality of the target class, and generalization error should of course decrease in $n,T$. For example if $N$ is considered fixed, the bound is interpreted as an excess risk of $O(1/n+1/T)$. However we want to obtain the *overall* sample complexity rate in $n$, which necessitates $N$ to scale in $n$. The rate itself is less important than the fact that it is the *best rate attainable by any estimator*, and the optimality result should be viewed more as guaranteeing tightness of the derived bounds (further justified by Section 5) than a prescription of how $N,T$ should scale. We also mention that this approach is standard in minimax analyses. For example, rate-optimal scaling for ordinary supervised regression with DNNs (Suzuki, 2019) also requires the width to scale as $n^{\frac{1}{2s+1}}$ even though the smoothness of the target is unknown. * **Rebuttal End** Finally, we have also obtained **improved lower bounds** which reinforce the message of our paper, and newly conducted **numerical experiments** justifying our assumptions and results. Please see Item 2 and Item 3 of the global response for details. We also humbly ask the reviewer to consider raising their score if some of their concerns were addressed. --- Rebuttal 3: Comment: Thank you again for your time and effort in reviewing our paper! As the discussion period is ending soon, we are following up to see if our response addressed the reviewer's questions. If so, we hope that they would be willing to increase their score. If not, we would be happy to discuss any remaining concerns. 
--- Rebuttal 4: Comment: Thank you for your response. I will raise my score to a 5.
Summary: This paper studies in-context nonparametric learning using transformers. In the setting used in the main result of the paper, the transformer is trained on a dataset consisting of multiple sequences/tasks. For each task, the target function $F_\beta$ is drawn from the span of a certain countable set of functions - in the setting used for the main result of the paper, the target function is drawn from a Besov space spanned by a B-spline wavelet basis. The task then consists of multiple in-context pairs $(x_k, y_k)$, where $y_k$ is $F_\beta(x_k) + \xi$, where $\xi$ is random noise. Certain technical assumptions are made on the basis functions $\psi_j$ and the coefficients $\beta_j$: the $\psi_j$ are assumed to have a property that is similar to linear independence/orthonormality, while the $\beta_j$ are assumed to decay at a certain rate. The model in this paper's setting consists of a feature map $\phi$ applied to all of the $x_k$, followed by a linear attention layer applied to the $\phi(x_k)$ and the $y_k$. The feature map $\phi$ is assumed to be expressive enough to approximate the $\psi_j$. In the main result of this paper, $\phi$ is chosen from the class of deep neural networks (DNNs) with a logarithmic number of layers (logarithmic in the size of the feature map), and $O(1)$ width and $O(1)$ entries per layer. The class of feature maps $\phi$ is denoted as $\mathcal{F}_N$, and the overall model class is denoted as $\mathcal{T}_N$. The main result of the paper, Theorem 4.5, is that with a sufficient number of tasks during pretraining, in-context learning with transformers will achieve the optimal minimax risk. This is shown as follows. Theorem 3.1 first gives a bound on the expected test loss in terms of the covering number of the model class $\mathcal{T}_N$, and the minimum test loss achievable by some member of $\mathcal{T}_N$. Next, the paper gives a particular construction for a set of parameters which achieves low test loss. 
The weight matrix for the linear attention layer is chosen similarly to the optimal weight matrix given in Zhang, et al. (2023), and the nonlinear feature map outputs a subset of the basis functions. Additional results are also given. This paper shows how the curse of dimensionality can be avoided when the target function is drawn from an anisotropic Besov space. Also, under certain assumptions, even if the target function is drawn from a Besov space with smoothness $\tau < \alpha$, it is possible to achieve the same minimax rate as in the case where the target function is drawn from a Besov space with smoothness $\alpha$. Finally, the paper gives similar results in the setting where the tokens are themselves sequences of unbounded length, and gives minimax lower bounds that match the upper bounds. Strengths: - This work seems to be the first to study in-context learning using transformers in Besov spaces (prior work studied settings such as regression with DNNs). - The paper is very well-written. Weaknesses: I think the main weakness is that the techniques used to show Theorem 4.5 are somewhat standard. - Theorem 3.1, which shows that the empirical risk minimizer achieves good test loss, seems to be using relatively standard arguments from previous work, based on covering numbers. - The construction used to show that low approximation error can be achieved also seems somewhat straightforward. The weight matrix for the attention layer is chosen similarly to Zhang, et al. (2023), and the feature map outputs a subset of the basis vectors. Technical Quality: 4 Clarity: 4 Questions for Authors: - In Lemma 4.4, how important is the assumption that the width/sparsity of the deep neural network is $O(1)$? Would the guarantee obtained in Theorem 4.5 change if the deep neural network is allowed to have more nonzero entries? - Could the setting in this paper be considered similar to the linear regression setting studied by previous works such as Zhang, et al. 
(2023), with the difference being that the linear regression weight vector is replaced by the vector $\beta$ of coefficients for the basis functions? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review of our work! We hope our responses can help clarify some important points. **Weakness 1.** Theorem 3.1 is indeed a standard result; as we have indicated, it is a straightforward adaptation of Lemma 4 of Schmidt-Hieber (2020). While it is only the first step of the proof of the main results, we included the statement to explicitly show how the in-context and pretraining generalization gaps arise from different elements of the risk: the former manifests when evaluating the approximation error w.r.t. $n$, while the latter is the usual entropy term. **Weakness 2.** The attention matrix construction can be seen as an extension of Zhang et al. (2023); however, we urge the reviewer to take the following points into consideration. * The optimal construction is completely determined for a linear attention layer since the L2 linear regression loss is convex; even the construction in Zhang et al. (2023) is stated to be a generalization of Oswald et al. (2023). So while we agree the construction is not surprising, it is an important step allowing us to reduce to the analysis of the DNN module which facilitates the main risk analysis. Conversely, Zhang and Oswald do not consider the MLP at all and only study the attention matrix. * The given form (line 174) is only an example construction to upper bound the empirical loss. Unlike Zhang, we do **not** claim the ERM or optimization dynamics must result in such a simplified form, which may not be true in a deep Transformer setting. * The idea of approximating the target function with a well-chosen basis is of course fundamental to all functional analysis and is not unique to our approach. Indeed, our goal is to **establish a strong connection between ICL and ordinary supervised regression** by isolating the pretraining generalization gap. We dive deeper into this idea by pointing out a new limitation of ICL stemming from non-adaptive function approximation (Remark 4.6). 
* Moreover, a **unique technical difficulty** that we newly uncover and address is the necessity of the inverse covariance matrix $\Psi_N^{-1}$ to approximate the attention matrix. Wavelet systems of Besov-type spaces are fundamentally co-dependent and form a multi-resolution scheme which makes $\Psi_N$ singular and affects various decay rates. This was not a problem in Zhang's work ($\Lambda$ was simply assumed to be positive definite) nor any existing minimax optimality works (which don't have a product structure). Nonetheless, we were still able to prove optimality by decomposing all splines to the finest resolution and performing the analysis strictly at that scale, hence the additional steps to account for the aggregated coefficients $\bar{\beta}$. * We have added experiments justifying the use of our simplified transformer model; please see Item 3 of the global response. We also implore the reviewer to judge our result not based on the perceived difficulty level of the proof, but its implications towards understanding the effectiveness of in-context learning. **Question 1.** The assumption can be substantially relaxed; the stated form is mostly to clarify the minimum number of essential parameters. Recall that a fully connected network has $S\sim LW^2$. The dominating term from the DNN entropy (line 675) is $SLN\log W$, so that letting $S=O(\log N)$ (fully connected depth $L$ network) would only incur an additional log factor to yield $N(\log N)^2$. Since the $N^2\log N$ term from the attention matrix entropy dominates this, the overall bound remains unchanged. The same holds even if width is relaxed to $W=O(N)$, and furthermore even if $S=O(N)$ in this case. However if $W=O(N)$ *and* the network is fully connected, that is $S=O(N^2\log N)$, the pretraining gap will worsen to around $N^3/T$. This reflects the fact that wavelets are 'easier' to approximate with DNNs than Besov functions are with wavelets. 
**Question 2.** That is indeed the setup we were aiming for; however, please consider our response to Weakness 2 as well as the following points. * Linear attention has been studied in many other papers including Zhang, Oswald, Mahankali, and Ahn's works (cited in the paper), all yielding similar 'optimal matrix' constructions. As mentioned before, we do not claim originality in this regard. However, our focus is on reduction to the **representational capability of the MLP layers**. Except for constructive works such as Bai et al. (2023), the NN+attention setup had only been studied very recently by Kim and Suzuki (2024), and they only considered a shallow MLP. * We also find it illuminating to consider our setup as a deep Transformer with all but the last attention layer removed. Since this simplified model is already in-context sample optimal, we can expect a full Transformer to be the same. Moreover, we indicate how to extend our results to nonlinear attention or multiple attention layers in Item 1 of the global response. We have also obtained **improved lower bounds** which reinforce the message of our paper, and newly conducted **numerical experiments** justifying our assumptions and results. Please see Item 2 and Item 3 of the global response for details. We also humbly ask the reviewer to consider raising their score if some of their concerns were addressed. --- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our paper! As the discussion period is ending soon, we are following up to see if our response addressed the reviewer's questions. If so, we hope that they would be willing to increase their score. If not, we would be happy to discuss any remaining concerns. --- Rebuttal Comment 1.2: Comment: Thank you for the detailed reply. 
I am not familiar with the literature on wavelet systems, but since the covariance matrix may not be invertible, this suggests that the analysis in this work is more than a direct extension of Zhang, et al. The lower bound in the coarser basis setting is also an interesting takeaway. I will increase my score to 6. Could you please clarify the following: - For your main setting, you mention in the global response that your upper bound is jointly optimal in $n, T$. In what sense is it optimal in $T$? - How do you select $N$ in Theorem 4.5 and Corollary 4.9? Would it be possible to optimize the bound in Corollary 4.9 further by selecting a different $N$? --- Reply to Comment 1.2.1: Comment: We are very grateful to the reviewer for re-evaluating our contributions and raising their score! Here are some further clarifications: * Wavelet systems form a hierarchy ordered by resolution, and wavelets with lower resolution can always be written as a combination of those with higher resolution (forming the basis of *multiresolution analysis* theory). Hence the covariance matrix is indeed never invertible, necessitating our new 'aggregation' techniques. * ICL can be viewed as a meta-learner which takes as input not just $n$ samples from the current task but also $(n+1)T$ samples from related tasks. Now also consider $T$ as a variable. It could be the case that a well-designed meta-estimator (possibly not constrained to be fixed at inference time) can achieve a rate faster than $n^{-\frac{2\alpha}{2\alpha+d}}$ by taking all these samples as input, as the ordinary minimax rate only applies to estimators which only learn from $n$ ground-truth samples. Our lower bound shows that this is impossible: *any* estimator that takes the $n$ ground-truth \& $(n+1)T$ related samples as input is still lower bounded by $n^{-\frac{2\alpha}{2\alpha+d}}$. Since our upper bound matches this when $T$ is sufficiently large, we conclude that ICL is **optimal in both** $n,T$. 
Conversely, the new lower bound $n^{-\frac{2\alpha}{2\alpha+d}} + (nT)^{-\frac{2\tau}{2\tau+d}}$ in the coarser basis setting allows us to conclude that ICL (or any other meta-learner) is **suboptimal in both** $n,T$ when $T<n^{\frac{(\alpha-\tau)d}{\tau(2\alpha+d)}}$, supplementing our upper bound in Corollary 4.9. * The selection of $N$ is standard rate analysis, except that we also have to consider the effect of $T$ through the pretraining gap. Let's look at Theorem 4.5. First assume $T\gtrsim nN$ so that the 3rd term can be ignored compared to the 2nd term. Then we want to choose $N$ such that $N^{-2\alpha/d}+\frac{N\log N}{n}$ is minimized, which can be found by balancing $N^{-2\alpha/d}\asymp\frac{N}{n}$ and thus $N\asymp n^\frac{d}{2\alpha+d}$ (ignoring log terms). However if $T<nN$ then the 3rd term dominates and $N^{-2\alpha/d}+\frac{N^2\log N}{T}$ is minimized when $N\asymp T^\frac{d}{2\alpha+2d}$, so the overall risk is now bounded suboptimally as $T^{-\frac{\alpha}{\alpha+d}}$. * Corollary 4.9 is similarly tuned and the bound cannot be optimized further as we also proved it is optimal in both $n,T$ when $T$ is large. Again there is some suboptimal scaling when $T$ is small, but -- as we indicate in our global response -- for this regime it is better to look at the information-theoretic lower bound which gives a rigorous proof of suboptimality.
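The balancing computation in the last two bullets can be checked numerically. Below is a minimal sketch (our own illustration, not from the paper), using arbitrary values $\alpha=2$, $d=1$; `risk` keeps only the two dominant terms of the bound in the large-$T$ regime, and the empirical minimizer of the bound is compared against the balanced choice $N\asymp n^{d/(2\alpha+d)}$:

```python
import numpy as np

# Illustrative smoothness and dimension (our choice, not from the paper).
ALPHA, D = 2.0, 1.0

def risk(N, n):
    # Two dominant terms when T is large:
    # approximation error N^(-2*alpha/d) plus in-context term N*log(N)/n.
    return N ** (-2 * ALPHA / D) + N * np.log(N) / n

n = 10**6
grid = np.arange(2, 2001)
N_star = int(grid[np.argmin(risk(grid, n))])  # empirical minimizer over the grid
N_bal = round(n ** (D / (2 * ALPHA + D)))     # balance N^(-2a/d) ~ N/n  =>  N ~ n^(d/(2a+d))
```

Up to the log factor, the grid minimizer lands at the balanced scale, matching the rate analysis sketched in the comment above.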
Rebuttal 1: Rebuttal: ## 1. Extending to Multiple/Nonlinear Attention As reviewers have noted, our setup builds on previous works which establish ICL of a single attention layer as linear regression in order to extend to complexity analysis of nonparametric regression. ICL of multiple/nonlinear attention layers can be fundamentally different, possibly exhibiting complex behavior such as induction heads or data clustering which are not yet well understood. Nonetheless, our techniques may be partially applied to these settings as follows. **Multiple attention layers:** We consider a setup with multiple attention layers in Section 4.5, where the preceding Transformer module performs dynamical feature extraction and the final attention layer performs regression based on the learned representations. This aligns with the empirical observation by Guo et al. (2024) that lower layers of Transformers learn representations of the input and upper layers perform linear ICL. We also find it illuminating to consider our setup as a deep Transformer with all but the last attention layer removed, so that the feedforward layers combine into a deep MLP. Since this simplified model already achieves sample optimal ICL, we can expect a full Transformer to achieve the same. **Nonlinear attention:** The simplest approach is to linearize the kernel, for example by taking a small rescaling factor $\gamma$ and approximating $e^{\gamma x^\top\Gamma x'}\approx 1+\gamma x^\top\Gamma x'$. Ignoring the normalizing factor, this shows that nonlinear attention has at least as much expressivity as linear attention. Moreover, linear attention has been empirically shown to capture many other prominent aspects of softmax attention (Ahn et al. 2024). However ideally we want to characterize the full expressive capability of nonlinear attention. To this end, we can take the following kernel-based approach. 
By introducing an additional kernel transformation $K(x,x')$ such as the RBF kernel $e^{x^\top\Gamma x'}e^{-\lVert x\rVert_\Gamma^2/2}e^{-\lVert x'\rVert_\Gamma^2/2}$ for (rescaled) softmax attention, we can use Mercer's theorem to decompose $K(x,x') = \sum_{j=0}^\infty \lambda_j e_j(x)e_j(x')^\top$ and approximate $K$ using the top few eigenfunctions $e_j$. For example for the RBF kernel, $e_j$ is an appropriately scaled Gaussian density times the $j$th Hermite polynomial. Then the main problem is to evaluate the approximation capability of the maps $e_j\circ\phi$ composed with the DNN module. This can be done assuming some compatibility conditions between the kernel eigenfunctions and target class basis functions. (This is particularly straightforward if e.g. the kernel is such that $\lambda_j$ is learnable, so that the terms can be considered separately in $j$.) ## 2. Take-home Message Together with improved lower bounds, we restate our take-home message (besides in-context optimality in $n$) as follows. We believe this novel understanding will be helpful to both theoreticians and practitioners. * (In the Besov space setting) The obtained upper bound $n^{-\frac{2\alpha}{2\alpha+d}}$ when $T\gtrsim nN$ which is minimax optimal in $n$, has also been shown to be jointly optimal in $n,T$. Hence **ICL is provably optimal in the large $T$ regime.** * (In the coarser basis setting) We obtained an improved lower bound $n^{-\frac{2\alpha}{2\alpha+d}} + (nT)^{-\frac{2\tau}{2\tau+d}}$ using the method in Appendix E.3. If $T=O(1)$, this gives the slower $\tau$ rate, while if $T$ is larger this gives the faster $\alpha$ rate. This aligns with Corollary 4.9 (which only gave an upper bound), hence **ICL is provably suboptimal in the small $T$ regime.** When combined, these results are stronger than optimality in only $n$. 
They also align with experimental observations of task diversity threshold (Raventos 2023), rigorously supporting the importance of varied tasks over increasing prompt length. ## 3. Numerical Experiments Following comments by reviewers on the lack of experiments connecting our analysis to practical transformers, we have conducted new simulations justifying our simplified model setup and the assumption of empirical risk minimization. Figures can be found in the attached PDF. We implement and compare the following toy models: * Linear transformer: the model studied in our paper, where a DNN (2 hidden layers) feeds into a simplified linear attention layer * Softmax transformer: the same model as (a) but with softmax attention * Full transformer: an ordinary transformer with 2 stacked encoder layers Architectures are compared in Figure 1. The number of feedforward layers, width of hidden layers, learning rate etc. are set to equal for a fair comparison. * Figure 2 shows training (solid lines) and test loss curves (dashed lines) during pretraining. All 3 architectures exhibit similar behavior and converge to near zero training loss, **justifying the use of our simplified model** and also supporting the assumption that the **empirical risk can be minimized**. * Moreover, Figure 3 shows the converged losses over a wide range of $N,n,T$ values (median over 5 runs). We verify that increasing $N,n$ leads to decreasing train and test error, corresponding to the approximation error of Theorem 3.1. We also observe that increasing $T$ tends to improve the pretraining generalization gap up to a threshold, confirming our theoretical analyses. Again, this behavior is consistent across the 3 architectures. * Although the dimensions and architectures are relatively small due to limited time and compute, we plan to scale up our experiments and add them to the final version of the paper. Pdf: /pdf/94ff07841f2a2481d5c45fad27e236bed8720eca.pdf
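The first-order linearization invoked in Item 1, $e^{\gamma x^\top\Gamma x'}\approx 1+\gamma x^\top\Gamma x'$, can be sanity-checked numerically. A toy sketch (our own illustration, taking $\Gamma=I$ so the exponent is a plain dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
x, xp = rng.standard_normal(8), rng.standard_normal(8)

def rel_err(gamma):
    # Relative error of the first-order (linear-attention-like)
    # approximation of the rescaled exponential kernel.
    exact = np.exp(gamma * (x @ xp))
    approx = 1 + gamma * (x @ xp)
    return abs(exact - approx) / exact

# The error vanishes quadratically as the rescaling factor gamma shrinks.
errs = [rel_err(g) for g in (0.1, 0.01, 0.001)]
```

Since $e^t \ge 1+t$ for all $t$, the approximation error is nonnegative and shrinks like $t^2/2$, which is what makes the small-$\gamma$ linearization of softmax-style attention reasonable.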
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DALD: Improving Logits-based Detector without Logits from Black-box LLMs
Accept (poster)
Summary: This paper proposes a framework named Distribution-Aligned LLMs Detection (DALD) to improve the performance of surrogate models in detecting LLM-generated text from both closed-source and open-source models. The method enhances detection performance by aligning the distribution of the surrogate model to better match the distribution of the target model. The paper verifies the effectiveness of DALD on various advanced closed-source and open-source models through extensive experiments and demonstrates its "plug-and-play" enhancement capability in a zero-shot detection framework. Strengths: - It is gratifying that this paper reflects on existing logits-based methods and proposes the DALD framework, which improves detection performance by aligning the distribution of the surrogate model, achieving superior performance. - The proposed method uses small-scale datasets for fine-tuning, which is cost-effective and highly adaptable, allowing it to quickly adapt to rapid iterations and updates of models, demonstrating good usability. - Ablation experiments are comprehensive. The framework also shows good performance in scenarios such as Non-English Detection and Adversarial Attack. Weaknesses: - The method's interpretability is still insufficient. I am not entirely sure, but one point that confuses me is why training DALD on datasets from multiple source models performs better than training on a dataset from a single source model (see Table 2). Since DALD essentially aligns the logits distribution of the surrogate model with that of the source model, this is a crucial starting point of the paper. However, training DALD on datasets from multiple source models theoretically and intuitively would not align distributions better than a single source model, which is contradictory. Similarly, in the Claude-3 setting, DALD trained on a GPT-4 dataset even outperforms DALD trained on a dataset from the source model Claude-3, which is perplexing. 
- There is a lack of detail about the datasets used for fine-tuning, such as their sources, topics, or other attributes, which could help readers understand and reproduce the method. In fact, I am very curious about the impact of the fine-tuning dataset on DALD, aside from the sample size. Technical Quality: 2 Clarity: 3 Questions for Authors: - Please refer to the weaknesses, especially Weakness 1. - An additional question: I noticed that DetectGPT and Fast-DetectGPT are described as black-box methods. Is this because they use surrogate models? In fact, all logits-based methods (including Likelihood and Rank) can use surrogate models, depending on whether the scenario is white-box or black-box. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: - The testing dataset is insufficient; all datasets contain only 150 human-written samples, and the experimental results obtained may actually be biased, which is indeed my concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your detailed review. Please check our response to your concerns. **Response to Weakness 1**: First, regarding the concern about the better performance of the model trained on multi-source data, a reasonable explanation could be that the optimization space of logits is large. Therefore, aligning the logits of one model does not necessarily affect the alignment of others from the LLM-detection point of view. This suggests that continuing to learn from corpora generated by multiple models simultaneously has no negative impact on the overall performance. Moreover, by collecting from multiple sources, the training data is augmented, which, given homogeneous models, is similar in spirit to the widely used data augmentation for improving generalization. Augmentations induce extra stochasticity during training, which effectively flattens the loss landscape [3]. The surrogate model may also benefit from the complementary information from different sources, which can potentially improve the alignment of distributions and eventually enhance the model's generalization performance. We will add discussions in the revision. This indicates that training DALD on datasets from multiple source models is indeed effective and not contradictory. Besides, regarding the point you raised about the perplexing outperformance of DALD trained on a GPT-4 dataset compared to a dataset from the source model Claude-3, we would like to point out that their performance is close. Generally, a closer logits distribution helps produce better detection results. However, there is a certain degree of randomness involved, as we are not comparing the direct values of logits, but rather the scoring function in Fast-DetectGPT, which uses logits to calculate the score as the detection metric. In most cases, our results align with our assumptions. **Response to Weakness 2**: We apologize for the lack of clarity regarding our fine-tuning datasets. 
As we discussed in the main text, the only requirement on the fine-tuning dataset is that the texts are generated by the same target model. More concretely, in our experiments, we utilize the corpus generated by ChatGPT and GPT-4 from WildChat [1]. As for Claude-3, the data is from Hugging Face [2]. For the open-source models, we randomly select 5000 prompts from WildChat and generate the output from the open-source models to obtain fine-tuning data. We will share our code with the public for reproduction. **Response to Question 2**: DetectGPT and Fast-DetectGPT can be applied in both white-box and black-box settings. In the white-box setting, the output logits of the target model can be accessed. However, they are unavailable in the black-box setting, so methods like DetectGPT and Fast-DetectGPT utilize a surrogate model to compute the output logits. Our method focuses on the black-box setting, since the black-box scenario is more practical but challenging due to the closed-source trend of current models. Therefore, we report and compare the results of DetectGPT and Fast-DetectGPT in the black-box setting. **Response to Limitation**: Thank you for your feedback on the size of the test dataset. First of all, we follow previous works such as DNA-GPT and Fast-DetectGPT and utilize the same amount of data for a fair evaluation. Moreover, we are happy to provide the evaluation results of our method on different test data sizes, as shown below:

| Num. of Samples | ChatGPT: PubMed | GPT-4: PubMed | GPT-4: XSum | GPT-4: Writing |
|---|---|---|---|---|
| 150 | 0.9853 | 0.9785 | 0.9954 | 0.9980 |
| 300 | 0.9842 | 0.9821 | 0.9924 | 0.9980 |
| 500 | 0.9806 | 0.9828 | 0.9929 | 0.9974 |

It is observed that there is little difference in performance as the number of samples increases, indicating the robustness of our method to test data size. **References**: [1] Zhao, Wenting, et al. "WildChat: 1M ChatGPT Interaction Logs in the Wild." arXiv preprint arXiv:2405.01470 (2024). 
[2] lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT [3] Geiping, Jonas, et al. "How much data are augmentations worth? An investigation into scaling laws, invariance, and implicit regularization." arXiv preprint arXiv:2210.06441 (2022). --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer bnRm Comment: I would like to thank the authors for their responses. However, my concerns about Weakness 1 are not well addressed. Specifically, regarding the observation that "DALD trained on the GPT-4 dataset performs better than the source model Claude3's dataset," are the logits distributions of GPT-4 and Claude3 truly similar? This contradicts my experience, and I still have doubts about the consistency between the assumptions and experimental results. --- Reply to Comment 1.1.1: Title: Response to Reviewer bnRm Comment: Dear Reviewer bnRm, Thank you very much for your reply. We really appreciate your suggestions on our work. Regarding your concern about our assumption and experimental results, as we discussed in the main text, the training data generated by GPT-4 is from WildChat[1], which is a high-quality dataset. However, WildChat does not include data generated by Claude3. To reduce cost, we did not call the Claude3 API to generate training data. Instead, the Claude3 data was collected from publicly shared outputs on Hugging Face without a quality guarantee[2]. Therefore, the performance of the model trained on data generated by Claude3 is relatively worse than that trained on data generated by GPT-4. For a fair comparison, we sampled 5k prompts from WildChat and called the Claude3 API to generate a training dataset.
Then, we fine-tuned the surrogate model on the generated dataset and provide the results here (we will also include this in the next version):

| | PubMed | Xsum | Writing |
|---------------------|--------|--------|---------|
| DALD(GPT-4 data) | 0.9875 | 0.9993 | 0.9977 |
| DALD(Claude-3 data) | 0.9942 | 0.9994 | 0.9993 |

The results show that the model trained on data generated by Claude3 outperforms the one trained on data from GPT-4, demonstrating the correctness of our assumption and the consistency of our experimental results. [1] Zhao, Wenting, et al. "Wildchat: 1m chatGPT interaction logs in the wild." arXiv preprint arXiv:2405.01470 (2024). [2] lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT
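The Fast-DetectGPT scoring function referenced in the rebuttal thread above can be sketched as follows. This is a minimal, illustrative numpy version of a sampling-discrepancy-style score computed from surrogate logits, not the authors' implementation; the function name and the synthetic inputs are our own.

```python
import numpy as np

def sampling_discrepancy(logits, token_ids):
    """Score a passage using a surrogate model's next-token logits.

    logits:    (T, V) array of next-token logits at each position
    token_ids: (T,) observed tokens; a higher score reads as more machine-like
    (Illustrative simplification of Fast-DetectGPT's analytic score.)
    """
    logits = np.asarray(logits, dtype=np.float64)
    m = logits.max(axis=-1, keepdims=True)
    # numerically stable log-softmax over the vocabulary
    log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))
    probs = np.exp(log_probs)
    # log-likelihood of the observed tokens under the surrogate
    ll = log_probs[np.arange(len(token_ids)), token_ids].sum()
    # expected log-likelihood and its variance under the surrogate's own samples
    mu = (probs * log_probs).sum(axis=-1).sum()
    var = ((probs * log_probs ** 2).sum(axis=-1)
           - (probs * log_probs).sum(axis=-1) ** 2).sum()
    return (ll - mu) / np.sqrt(var)
```

Intuitively, text whose tokens are about as likely as the surrogate's own samples scores high; the better the surrogate's logits align with the target model's, the more faithful this score is on target-generated text, which is the motivation for DALD's alignment step.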
Summary: This paper proposes a method to improve black-box detection of machine-generated text, tackling the problem of performance degradation when a surrogate model's output is poorly aligned with the closed-source target LLM. By LoRA fine-tuning the surrogate model on text generated by the target model, the authors align the surrogate model's output distribution to the target model, thus improving detection performance. The authors show that this alignment can be used to outperform/improve existing logit detection methods such as DetectGPT, DNA-GPT and Fast-DetectGPT; is effective across a range of surrogate models; and requires only a small amount of fine-tuning (and can thus keep surrogate models up-to-date when newer models are released). Strengths: - **Consistent performance improvements**: Ablation study (Table 4) suggests that DALD leads to consistent and significant improvements across target models and datasets, for all three logit-based methods - **Surprisingly effective**: it's quite surprising to me that a relatively low amount of data/fine-tuning is so effective at aligning the probability distributions of target and surrogate models. I wonder how effective DALD is for more niche tasks/text genres, in cases where there may be less overlap between the training corpus and the evaluation text? (the authors briefly touch on this when discussing PubMed results) - **Application to non-English & practical settings**: The authors demonstrate that DALD can also help tackle the issue of non-English detection bias by showing performance improvements for German. They also show that their method is robust to machine-generated text that has undergone adversarial attacks. Weaknesses: - **Generalizability may be overstated** (Table 2): The authors cite the superior performance of their one-for-all surrogate model as evidence that their method could be extended to detect texts of unknown source models.
However, the improvements seem very slight: I agree with their interpretation that current closed-source models tend to have a similar distribution, but this homogeneity might not necessarily be true for future versions of these models and/or other future closed-source models. - **Effectiveness across surrogate models** (Table 3): DALD doesn't significantly improve the detection performance of all surrogate models. For example, its effect on GPT-Neo 2.7B, though positive, seems fairly minor. Does this indicate a limitation of their DALD method (e.g. that it is only effective when the performance gap/parameter size/pre-training data difference between surrogate and target is relatively small)? - **Limitations and Future Work section** is extremely brief and does not adequately address limitations Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses Section. (and Strengths bullet point 2) Also, minor nitpicks / typos: Typos: - Line 187: is freezing - Line 215: OpanAI - Line 244: pertraining - Line 574: We Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations section was extremely brief and only discussed extending their evaluation of multilingual performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
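The LoRA fine-tuning used for alignment, as described in the review summary above, can be sketched at the matrix level. This is a minimal illustrative numpy version rather than the paper's code; the dimensions, rank, and scaling factor are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 16, 4, 8.0          # hidden size, LoRA rank, scaling (illustrative)

W = rng.normal(size=(d, d))              # frozen pretrained weight of the surrogate
A = rng.normal(scale=0.01, size=(r, d))  # trainable low-rank factor
B = np.zeros((d, r))                     # zero-initialized so the adapter starts as a no-op

def lora_forward(x, W, A, B, alpha, r):
    # frozen path plus a scaled low-rank update; only A and B receive gradients,
    # equivalent to using the effective weight W + (alpha / r) * B @ A
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)
```

Because only A and B (2·r·d parameters) are updated instead of the full d×d weight, aligning the surrogate to a newly released target model is cheap, which matches the reviewer's point that a small amount of fine-tuning suffices to keep surrogate models up-to-date.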
Rebuttal 1: Rebuttal: Thank you very much for the constructive and detailed reviews. We provide detailed responses to your concerns. **Response to the question about different domains**: We appreciate your interest in the effectiveness of DALD on different domains. It's worth noting that our training data is entirely distinct from our test dataset, so there is no overlap between the two. The training data is a corpus drawn from the publicly shared outputs of leading models, while the testing data consists of specific datasets such as PubMed prompted through the target models. Additionally, we've conducted experiments on other domain-specific datasets, namely RAID[1], covering poetry, news, books, etc. Results are shown below:

| | Poetry | News | Abstract | Books | Recipes | Reddit |
|---|---|---|---|---|---|---|
| **Fast-DetectGPT** | 0.8553 | 0.9116 | 0.8600 | 0.9123 | 0.9116 | 0.9134 |
| **DALD** | 0.9709 | 0.9567 | 0.9876 | 0.9675 | 0.9998 | 0.9862 |

We select 1000 human texts and 1000 GPT-4-generated texts for each domain in the RAID dataset and compare the evaluation results with Fast-DetectGPT. The results demonstrate that DALD performs admirably in diverse domains. **Response to Weakness 1**: Thank you for raising this point, as it provides a deep insight into our method. We acknowledge empirically that current closed-source models exhibit homogeneity, as evidenced by the slight improvement in the multi-source setting. Therefore, we cannot guarantee that the one-for-all surrogate model will perform optimally when encountering new models without this homogeneity. However, it is important to note that the generalizability of our approach also lies in its capacity for continuous learning. By collecting small amounts of data from new models and continually fine-tuning the surrogate model, we can adapt to newly published models at minimal cost. This ongoing effort represents a worthwhile alternative to retraining the entire model from scratch. **Response to Weakness 2**: We appreciate this careful review.
The modest performance improvement with GPT-Neo-2.7B as the surrogate model only occurs on Claude3 detection. As shown in the table below, upon further testing on GPT-4, we observed a substantial improvement in its performance. We believe this might be attributed to the relatively small parameter count of GPT-Neo-2.7B, which may not be sufficient to generalize well across all datasets. We intend to present a comprehensive overview of GPT-Neo-2.7B in the next version.

| | Pubmed | Xsum | Writing |
|---|---|---|---|
| **Fast-DetectGPT** | 0.8179 | 0.8179 | 0.9521 |
| **DALD (GPT-Neo-2.7B)** | 0.9020 | 0.9732 | 0.9800 |

**Response to Weakness 3**: We acknowledge that the Limitations and Future Work section is brief, as we had to condense it in the current version due to space constraints. However, we appreciate your feedback and will include a more comprehensive discussion in the next version of the paper. Regarding future work, we plan to explore the influence function to improve data efficiency for fine-tuning. Additionally, future work can utilize our pipeline for analyzing the homogeneity of different models and versions, such as investigating whether there is more variability in non-transformer architectures or in black-box model alignment from different companies. These are crucial areas that we will focus on exploring further. **Response to Typos**: Thank you for this careful review and for pointing out the typos in our paper. We appreciate your attention to detail and will revise them accordingly in the next version of the manuscript. Your feedback is invaluable, and we are grateful for your thorough assessment of our work! **Reference**: [1] Dugan, Liam, et al. "RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors." arXiv preprint arXiv:2405.07940 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for your response.
I appreciate the additional results you've provided on domain-specific datasets as well as for GPT-Neo-2.7B on GPT-4 (Weakness 2). --- Reply to Comment 1.1.1: Title: Response to Reviewer 3RMB Comment: Dear Reviewer 3RMB, Thank you very much for your comments. We will revise our manuscript accordingly in the next version. Best, Author
Summary: This paper introduces Distribution-Aligned LLMs Detection (DALD), a novel framework for detecting AI-generated text from large language models (LLMs). DALD addresses limitations of traditional detection methods, particularly when dealing with black-box or unknown LLMs. It aligns surrogate models with unknown target LLM distributions, enhancing detection capability across various models without requiring access to their internal logits. The approach leverages corpus samples from publicly available outputs of advanced models to fine-tune surrogate models, achieving state-of-the-art performance on both closed-source and open-source models. DALD can be integrated into existing zero-shot detection frameworks and demonstrates robustness against revised text attacks and non-English texts, offering a versatile solution for the evolving challenge of distinguishing AI-generated content from human-written text. Strengths: 1. The paper proposes a highly effective text detection framework. 2. The paper utilizes the fact that models can generate text under black-box settings to train a surrogate model, achieving efficient prediction in black-box scenarios. 3. The framework presented in the paper is plug-and-play and can be integrated into logit-based detection methods such as DetectGPT and Fast-DetectGPT. Weaknesses: 1. Since DALD is a training-based method, it may be unfair to compare it with zero-shot methods (DetectGPT, DNA-GPT, and Fast-DetectGPT). Comparing it with other SOTA training-based methods such as Ghostbuster[1] is necessary. 2. Only PubMed is tested on ChatGPT, while all three datasets are tested on GPT-4 and Claude. 3. For reliability, it is important to see how well the detector performs at low FPR regimes [2]. 4. Expanding your evaluation to include recent open-source models such as Llama 3 and Mistral/Mixtral would enhance the breadth and relevance of your study. 5. The method is simple and lacks technical contribution.
[1] Vivek Verma, Eve Fleisig, et al. "Ghostbuster: Detecting text ghostwritten by large language models" (2023). [2] Sadasivan, Vinu Sankar, et al. "Can ai-generated text be reliably detected?" arXiv preprint arXiv:2303.11156 (2023). Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Could you compare it with other SOTA training-based methods such as Ghostbuster[1]? 2. Why did you test only PubMed on ChatGPT while testing all three datasets on GPT-4 and Claude? 3. Could you provide the ROC curves for your detector? For reliability, it is important to see how well the detector performs at low FPR regimes [2]. 4. Why not conduct experiments on open-source LLMs like Llama 3 and Mistral/Mixtral? [1] Vivek Verma, Eve Fleisig, et al. "Ghostbuster: Detecting text ghostwritten by large language models" (2023). [2] Sadasivan, Vinu Sankar, et al. "Can ai-generated text be reliably detected?" arXiv preprint arXiv:2303.11156 (2023). Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your detailed and careful reviews. We give our responses to your questions below. **Response to Weakness 1**: We appreciate the suggestion of comparing with Ghostbuster and have added experiments doing so. We utilize the official code of Ghostbuster and evaluate it on the same datasets as in our experiments. The results are:

| | GPT-3.5 Pubmed | GPT-4 Pubmed | GPT-4 Xsum | GPT-4 Writing | Claude3 Pubmed | Claude3 Xsum | Claude3 Writing |
|---|---|---|---|---|---|---|---|
| **GhostBuster** | 0.8108 | 0.7269 | 0.8384 | 0.9614 | 0.7722 | 0.9460 | 0.9675 |
| **DALD** | 0.9853 | 0.9785 | 0.9954 | 0.9980 | 0.9630 | 0.9867 | 0.9981 |

DALD achieves the best performance on all datasets compared with Ghostbuster. We will add the results of Ghostbuster and more discussion to the next version. In addition, we would like to point out that we have already compared our method to training-based methods like RoBERTa (we gently invite the reviewer to check Table 1), demonstrating its effectiveness in comparison to existing approaches. Concerning the fairness of the comparison, we would like to clarify that our method differs significantly from traditional training-based methods: we only utilize a small amount of data (around 2k samples) to fine-tune the model (less than 10 minutes on a single A6000), whereas training-based methods typically require much more training data (250K for RoBERTa). Therefore, we believe that to demonstrate the uniqueness and advantages of our method, it is essential (and has been done) to compare it with both zero-shot and training-based methods. **Response to Weakness 2**: We appreciate your careful review. Due to space constraints in the paper, we omitted the results of the two other datasets on ChatGPT and prioritized PubMed, the most recent dataset available.
We add the results here (these will also be included in the next version):

| | ChatGPT-Pubmed | ChatGPT-Xsum | ChatGPT-Writing |
|---|---|---|---|
| **DNA-GPT** | 0.7788 | 0.9673 | 0.9829 |
| **Fast-DetectGPT** | 0.9309 | 0.9994 | 0.9967 |
| **GhostBuster** | 0.8108 | 0.9832 | 0.9983 |
| **DALD** | 0.9853 | 1.0000 | 1.0000 |

It is observed that Xsum and Writing on ChatGPT can already be well detected by Fast-DetectGPT. Furthermore, our method further improves the performance on top of Fast-DetectGPT. **Response to Weakness 3**: Thank you for this suggestion. We have added the ROC curves to the PDF page; we would like to invite you to check Figure 1 there. We compare the curves with DNA-GPT and Fast-DetectGPT. Our method achieves the best performance at low FPR on all datasets. **Response to Weakness 4**: We appreciate the suggestion to incorporate recent open-source models such as Llama 3 and Mistral in our assessment. It is worth noting that the results for Llama 3-8B are already documented in Table 6. Additionally, we present further findings on open-source models, namely Mistral-7B and the new Llama 3.1-8B:

| | Llama3.1 Pubmed | Llama3.1 Xsum | Llama3.1 Writing | Mistral Pubmed | Mistral Xsum | Mistral Writing |
|---|---|---|---|---|---|---|
| **Fast-DetectGPT** | 0.8668 | 0.9914 | 0.9958 | 0.6880 | 0.7931 | 0.9211 |
| **DALD** | 0.9059 | 1.0000 | 0.9998 | 0.7733 | 0.8822 | 0.9573 |

The results show that our method achieves the best performance on all open-source models, demonstrating its effectiveness. **Response to Weakness 5**: While our method may be easy to implement, it is essential to consider the practical implications and benefits it offers. We want to highlight that our work's most significant contribution is the observation that lightweight distribution alignment (less than 10 minutes on a single A6000) between the surrogate model and the target model can significantly improve detection accuracy and effectively address the issue of model updates.
The empirical results demonstrate the strong sample efficiency of the proposed approach, which is also supported by theoretical analysis. Furthermore, the empirical results reveal an interesting observation: the approach exhibits relatively strong generalization capabilities and can be used to enhance all existing logit-based detectors. --- Rebuttal Comment 1.1: Comment: I'm glad for your comprehensive clarification. I'm curious about what you mentioned regarding space constraints in Weakness 2, as I recall that the appendix isn't page-limited. Besides, I agree with Reviewer bnRm's Weakness 1 about training on different domains. I will keep my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer yZjY Comment: Dear Reviewer yZjY, Thank you very much for your advice, which greatly enhances the solidity of our work. We will add all related experimental results to the main text in the next version. Regarding your concern about our assumption and experimental results, as we discussed in the main text, the training data generated by GPT-4 is from WildChat[1], which is a high-quality dataset. However, WildChat does not include data generated by Claude3. To reduce cost, we did not call the Claude3 API to generate training data. Instead, the Claude3 data was collected from publicly shared outputs on Hugging Face without a quality guarantee[2]. Therefore, the performance of the model trained on data generated by Claude3 is relatively worse than that trained on data generated by GPT-4. For a fair comparison, we sampled 5k prompts from WildChat and called the Claude3 API to generate a training dataset.
Then, we fine-tuned the surrogate model on the generated dataset and provide the results here (we will also include this in the next version):

| | PubMed | Xsum | Writing |
|---------------------|--------|--------|---------|
| DALD(GPT-4 data) | 0.9875 | 0.9993 | 0.9977 |
| DALD(Claude-3 data) | 0.9942 | 0.9994 | 0.9993 |

The results show that the model trained on data generated by Claude3 outperforms the one trained on data from GPT-4, demonstrating the correctness of our assumption and the consistency of our experimental results. [1] Zhao, Wenting, et al. "Wildchat: 1m chatGPT interaction logs in the wild." arXiv preprint arXiv:2405.01470 (2024). [2] lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT
Summary: The paper addresses the challenge of detecting machine-generated text from black-box LLMs without access to their logits. Traditional methods using surrogate models suffer from performance degradation due to misalignment with target model distributions, particularly as new models are introduced. The proposed Distribution-Aligned LLMs Detection (DALD) framework aligns surrogate model distributions with unknown target LLMs. DALD enhances detection capabilities, achieving state-of-the-art performance in black-box settings, and offers a plug-and-play enhancement to existing zero-shot detection frameworks. Extensive experiments validate DALD's high detection precision against revised text attacks and non-English texts. Strengths: 1. The detection of machine-generated text is an important problem. 2. The paper includes theoretical analysis of the effectiveness of fine-tuning. Weaknesses: 1. The idea lacks novelty; collecting data from models of the same version as the target model and then aligning the surrogate model is not novel. 2. The proposed method appears to rely heavily on alignment data collection and specific model versions. Does this mean the surrogate model needs continuous fine-tuning when the detection task varies? If so, what is the generalizability? 3. The paper is missing ablation studies on how much fine-tuning data is required when the detection task varies across categories. If the data requirement is large, it could pose a problem for the detection task. 4. The paper is missing important baselines such as RADAR [1] and could benefit from evaluation on more models like Gemini and additional tasks, including coding tasks. ## Reference [1]. Radar: Robust ai-text detection via adversarial learning. NeurIPS 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your detailed reviews. Here are our responses to your questions: **Response to Weakness 1**: The detection of black-box commercial LLMs is a critical topic. The rapid updates of these models present a significant and ongoing challenge, as they lead to decreased detection efficiency in existing logit-based zero-shot detectors. Traditional methods require the manual and careful selection of different surrogate models. Our work's most significant contribution is the observation that lightweight distribution alignment (less than 10 minutes on a single A6000) between the surrogate model and the target model can significantly improve detection accuracy and effectively address the issue of model updates. The empirical results demonstrate the strong sample efficiency of the proposed approach, which is also supported by theoretical analysis. Furthermore, the empirical results reveal an interesting observation: the approach exhibits relatively strong generalization capabilities and can be used to enhance all existing logit-based detectors. **Response to Weakness 2**: There are two aspects to the generalizability of our method. On the one hand, a model trained on single-source data can perform effectively on different target models, indicating a one-for-all ability. We gently invite the reviewer to check the results in the 3rd line of Table 2. Our model is trained on only 5K texts generated by GPT-4, yet it demonstrates large performance improvements on ChatGPT, GPT-4, and Claude3. Additionally, we further tested its performance on the newly published models GPT-4o and GPT-4o-mini and found that the model also performs strongly on them. These results highlight the generalizability of our approach.
| Target model | Method | PubMed | Xsum | Writing |
|---|---|---|---|---|
| GPT-4 | Baseline | 0.7995 | 0.7072 | 0.9299 |
| GPT-4 | DALD | 0.9785 | 0.9954 | 0.9980 |
| Claude3 | Baseline | 0.8877 | 0.9143 | 0.9248 |
| Claude3 | DALD | 0.9875 | 0.9993 | 0.9977 |
| GPT-4o | Baseline | 0.8163 | 0.6595 | 0.9505 |
| GPT-4o | DALD | 0.9877 | 0.9965 | 0.9994 |
| GPT-4o-Mini | Baseline | 0.7994 | 0.5877 | 0.9150 |
| GPT-4o-Mini | DALD | 0.9857 | 0.9976 | 0.9992 |

On the other hand, Reviewer bnRm also highlighted that the model has the capability of continuous learning. We encourage the reviewer to refer to the 5th line of Table 2, where we present the results of our model trained on 5K texts from multiple sources. Its exceptional performance suggests that if a new closed-source model is released, we can gather a small corpus generated by that model and use it to continue training the surrogate model. Thus, our model can rapidly and continuously adapt to new models with minimal fine-tuning costs. **Response to Weakness 3**: First, our paper contains in-depth experiments investigating the data size required for fine-tuning. We invite the reviewer to refer to Figure 4, which depicts the performance at various data sizes. It shows that about 2k training samples are enough to yield significant improvements, indicating the minimal amount of data needed. Moreover, we performed a theoretical analysis of the amount of training data (see the Appendix and the Method section). One of our key contributions is demonstrating that only a minimal tuning cost is required. **Response to Weakness 4**: Thank you for your suggestions about adding more baseline models and different tasks.
As suggested, we add comparisons with RADAR:

| | GPT-3.5 Pubmed | GPT-4 Pubmed | GPT-4 Xsum | GPT-4 Writing | Claude3 Pubmed | Claude3 Xsum | Claude3 Writing |
|---|---|---|---|---|---|---|---|
| **RADAR** | 0.8953 | 0.8818 | 0.9926 | 0.8496 | 0.8295 | 0.9900 | 0.8780 |
| **DALD** | 0.9853 | 0.9785 | 0.9954 | 0.9980 | 0.9630 | 0.9867 | 0.9981 |

We utilize the official codebase of RADAR, evaluated on the same test dataset as our DALD. As shown in the table, DALD achieves better performance on all datasets. We will add the comparison and discussion to our draft accordingly. We also include a performance comparison on a coding task. Following [2], we use the APPS[1] dataset as the coding task. We sample 150 coding tasks from APPS and generate the coding results by calling the GPT-4 API. Our method is trained only on the GPT-4-generated corpus, as before. The results are as follows:

| | GPT4-APPS |
|---|:---:|
| **RADAR** | 0.5067 |
| **Fast-DetectGPT** | 0.6836 |
| **DALD** | 0.9078 |

Our method obtains significant improvement on the coding task. Finally, we conduct experiments on Gemini:

| | Gemini-PubMed |
|---|:---:|
| **RADAR** | 0.8496 |
| **Fast-DetectGPT** | 0.9352 |
| **DALD** | 0.9902 |

Since Gemini has protection mechanisms (recitation), we only conducted the experiments on the PubMed dataset to evaluate the performance. Our method remains the best compared with other methods, demonstrating its effectiveness. **Reference**: [1] Hendrycks, Dan, et al. "Measuring coding challenge competence with apps." arXiv preprint arXiv:2105.09938 (2021). [2] Yang, Xianjun, et al. "Zero-shot detection of machine-generated codes." arXiv preprint arXiv:2310.05103 (2023). --- Rebuttal Comment 1.1: Title: Reply to Reviewer iaeF Comment: Dear Reviewer iaeF, This is a kind reminder. We wanted to kindly follow up on the rebuttal discussion.
We highly value your insights and would greatly appreciate your response at your earliest convenience. Please let us know if there is any additional information you require or if there are any concerns we can address. Thank you for your time and consideration. Authors
Rebuttal 1: Rebuttal: **PDF Pages**: This PDF page includes the ROC curve comparison with DNA-GPT and Fast-DetectGPT. We would like to invite all reviewers to check the figure on that page. Thanks a lot. Pdf: /pdf/725ff24e4c1b8b9a91534d4955a834ff3d79eaac.pdf
NeurIPS_2024_submissions_huggingface
2024
Understanding Multi-Granularity for Open-Vocabulary Part Segmentation
Accept (poster)
Summary: This paper proposed PartCLIPSeg for open-vocabulary part segmentation. The framework leverages generalized parts and object-level contexts for generalization to fine-grained parts, integrating competitive part relationships and attention control techniques. Extensive experiments on three datasets demonstrate the superior performance of PartCLIPSeg. Strengths: 1) The paper is well-organized, well-written, and easy to understand. 2) The topic of open-vocabulary part segmentation is meaningful and underexplored. 3) Significant improvement compared to the baseline methods. Weaknesses: 1) Figure 3 (overall architecture) could be refined to more clearly emphasize the contribution. 2) The analysis and explanation of the experiments are not sufficient. Although the method achieves a significant improvement on unseen categories, the results for seen categories in the Pred-All setting of the three datasets show different trends. Specifically, the results for the seen categories show a decrease compared to the baseline in Tables 1 and 3 (there is an annotation mistake (41.70)), and a significant improvement in Table 2. Please discuss this result. 3) Ablations of loss functions. We notice that adding some loss functions may in some cases lead to suboptimal results and show different trends on different datasets. For example, the second column of the Pred-All setting in Table 5. Please discuss this difference. Technical Quality: 3 Clarity: 3 Questions for Authors: How is the threshold γ in Equation 7 determined? How does this threshold affect the results? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The unseen categories of the three datasets in the experiments may have been seen during CLIP pre-training, and we want to explore how well PartCLIPSeg generalizes to "truly" unseen categories that were not seen in pre-training or are from other fields Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and valuable suggestions. ## **Weakness 1**: Fig. 3 Refinement We refined Fig. 3 to make it clearer and to effectively emphasize our contribution. Specifically, we made modifications to distinguish between the modules that are trained, fine-tuned, and frozen. Additionally, we delineated the data and operational components and added dotted lines to expand the explanation, highlighting our contribution more clearly. ## **Weakness 2**: Discussion of Experimental Results Thank you for pointing out the annotation mistake. We have corrected it. PartCLIPSeg enhances generalization ability by disentangling object-specific part guidance into object-level context and generalized parts. This improves performance on unseen classes compared to previous methods. However, this focus on generalization may cause some performance degradation on seen classes. Previous approaches often overfit to seen classes, resulting in high performance on seen classes but low performance on unseen classes. Our approach, on the other hand, leverages generalized information extracted from supervised seen classes, which can sometimes reduce performance on seen classes. Different trends depending on the dataset: In ADE20K-Part-234, which contains many diverse and small objects (as shown in Fig. R5), the use of object-level priors enhances generalization performance, because directly predicting parts smaller than the small objects is much more challenging. However, in the Pred-All setup of Pascal-Part-116 and PartImageNet, a noisy object-level prior can lead to sub-optimal outcomes compared to directly using only object-specific part information. Nonetheless, this framework allows us to leverage advanced methods for acquiring the object-level prior, as demonstrated by gains in the Oracle-Obj setting.
Specifically, modern approaches like SAM [C1], BEiT [C2], and EVA [C3], which excel at object-level prediction, can support the validity of our approach. Since these models improve the accuracy of the object-level prior, they can mitigate the performance degradation on the seen classes. Future research on OVPS can explore methods to maintain performance on seen categories while improving performance on unseen categories. [C1] Kirillov, Alexander, et al. "Segment anything." ICCV 2023. [C2] Bao, Hangbo, et al. "Beit: Bert pre-training of image transformers." ICLR 2022. [C3] Fang, Yuxin, et al. "Eva: Exploring the limits of masked visual representation learning at scale." CVPR 2023. ## **Question 1**: Threshold γ The threshold was set empirically, and we have included evaluation results for other values (0.1, 0.2, …, 0.5) in the table below. They show that our method is robust to the choice of threshold. We will add more detailed information about γ in the main text and supplementary materials to provide further clarity. (Pascal-Part-116 / Oracle-Obj / mIoU)

|threshold (γ)|seen|unseen|harmonic mean|
|:-:|:-:|:-:|:-:|
|.1|47.34|32.24|38.35|
|.2|47.45|32.20|38.37|
|.3|50.02|31.67|38.79|
|.4|51.10|31.18|38.73|
|.5|48.71|31.16|38.01|

## **Limitation**: Generalization to Truly Unseen Categories PartCLIPSeg leverages the results of CLIP's pretraining, which, as you noted, presents a limitation in generalizing to categories not seen during CLIP's pretraining. This is a common limitation shared by methods across various open-vocabulary tasks, not just part segmentation. We agree that further research is needed to address the challenge of "truly" unseen categories. Potential future work could explore the use of one-shot training and visual correspondence methods. One-shot training offers the potential to extend the model's capabilities to new categories with minimal training examples.
By incorporating visual correspondence, there is a possibility to enhance the model's adaptability and improve its performance in recognizing and segmenting unseen categories. This approach could mitigate the limitations of pretraining and enable more flexible and effective handling of truly unseen scenarios. --- Rebuttal Comment 1.1: Comment: We apologize for the oversight in our latest rebuttal. The response to Weakness 3 was accidentally omitted. We regret any confusion this may have caused. The response to Weakness 3 is provided below. ---- ## **Weakness 3**: Loss Functions Ablations Discussion We provide additional explanations to fill in details missing from our initial explanation. The separation loss reduces the overlap between parts, while the enhancement loss strengthens the underrepresented parts. These two losses are not independent; their true effectiveness is achieved when they are used together. Without the enhancement loss, the separation loss can lead to smaller parts being diminished when adjacent to larger parts. As illustrated in Fig. R2, applying only the separation loss results in missing small parts: minimizing the intersection may cause larger parts, such as the sheep's torso, head, and muzzle, to overshadow smaller parts, like the sheep's neck. Therefore, the enhancement loss is essential to ensure that the small parts, in the bottom row of Fig. R2, are accurately segmented and not overwhelmed by larger neighboring parts. We hope this additional explanation clarifies the role of the separation and enhancement losses. --- Rebuttal Comment 1.2: Comment: Hi dear reviewer, Please read through rebuttal, and see if the rebuttal has addressed your concerns or you have further comments. Thanks, AC
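As an aside on reproducing the tables above: the harmonic-mean columns (e.g., in the threshold-γ sweep) are the standard harmonic mean of the seen and unseen mIoU values. A minimal sketch (the function name is ours, not from the paper):

```python
def harmonic_mean(seen: float, unseen: float) -> float:
    """Harmonic mean of seen/unseen mIoU, as reported in the rebuttal tables."""
    if seen + unseen == 0:
        return 0.0
    return 2 * seen * unseen / (seen + unseen)

# gamma = 0.1 row: seen 47.34, unseen 32.24 -> ~38.36
# (the table reports 38.35; the small gap is upstream rounding)
```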
Summary: This paper proposes PartCLIPSeg, which builds upon the CLIPSeg model and extends it to handle the unique challenges of part segmentation. This paper introduces several components to address the challenges in OVPS. 1. Generalized parts with object-level contexts: This approach combines generalized part information with object-level context to mitigate the lack of generalization in fine-grained parts. 2. Attention control: This method introduces competitive part relationships and attention control techniques to minimize overlap between predicted parts and enhance activation for underrepresented parts. This mechanism is a new addition to the field. The paper conducts experiments on OVPS’s baseline datasets. The result shows that PartCLIPSeg outperforms existing SOTA methods in most settings. The Ablation primarily focuses on the impact of the proposed attention control loss, which lacks a comprehensive analysis of other key components of this paper. Strengths: - Introducing generalized parts with object-level contexts: The paper proposes an approach that combines generalized part information with object-level context to enhance the model’s ability to generalize to unseen object-part combinations. - Developing an attention control mechanism: The paper introduces an attention control mechanism which helps to address the issues of ambiguous part boundaries and missing small or infrequent parts. - Promising result on baseline benchmark: The paper demonstrates state-of-the-art performance in most settings of OVPS. Weaknesses: - Insufficient ablation studies: The ablation studies primarily focus on the proposed attention control loss, which is not sufficient to prove the effectiveness of the proposed method. 1) The impact of combining object-level and part-level pseudo-label is not explored. 2) The paper does not provide sufficient evidence that the proposed losses work as intended, beyond their impact on the final metric. 
Further discussion is needed to validate whether the losses effectively address the challenges they were designed to tackle. - Limited comparison with other OVPS methods: The paper compares PartCLIPSeg primarily with CLIPSeg and CATSeg, which are designed for open-vocabulary segmentation and not specifically for OVPS. The comparison with other OVPS methods is lacking. - Limited discussion on computational complexity: Since the paper introduces a new attention control mechanism, a detailed analysis of the computational requirements and speed trade off could be helpful for a more comprehensive understanding of the proposed method. Inadequate qualitative analysis: Insufficient exploration and insights of the model’s performance on challenging and failure cases. Technical Quality: 2 Clarity: 3 Questions for Authors: See Weakness. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. ## **Weakness 1.1**: Impact of Object-Level and Part-Level Labels We conducted additional experiments to verify the impact of object-level and part-level pseudo-labels. We varied $\lambda_1$ and $\lambda_2$ in Equation 5, setting each to 0 and 1 in turn, to assess the influence of each guidance on the overall model performance. Table R2 shows that reducing the weight of the object-level loss ($\lambda_1$) improved performance in the Oracle-Obj setting (with provided object-level masks). On the other hand, increasing the weight of the object-level loss enhanced performance in the Pred-All setting. This indicates the advantage of focusing on part information prediction when object-level masks are available (Oracle-Obj setting). ## **Weakness 1.2**: Validation of Addressing Challenges We initially integrated the solutions to the three challenges into the main body of the paper. However, we realized that this approach compromised clarity. Based on the valuable feedback, we have included additional qualitative assessments to demonstrate how each issue has been resolved in Fig. R1 of the attached PDF. Also, we conducted additional quantitative studies on boundary IoU [C1] to validate the accuracy of the boundaries as shown in Table R3. Overall (in addition to Fig. R1) * (a) Lack of generalization: * Fig. 5, row 3 shows improvements in detecting areas that models like CLIPSeg struggled with in the Pred-All setting, indicating enhanced generalization performance. * (b) Ambiguous boundaries: * Fig. 5, row 1 illustrates how competitive part segmentation helps mitigate ambiguous boundaries. * Additional metric mean boundary IoU (in Table R3) verified the superiority of PartCLIPSeg. * (c) Missing underrepresented parts: * Fig. 5, row 2, and Fig. 4 highlight improvements in the segmentation of underrepresented parts. * We also have detailed this in Section 4.3 through an ablation study. * Fig. 
R4 demonstrates that our method helps identify small part classes. [C1] Cheng, Bowen, et al. "Boundary IoU: Improving object-centric image segmentation evaluation." CVPR 2021. ## **Weakness 2**: Comparison with Other OVPS Methods We have now included a performance comparison with VLPart (ICCV 2023) [C1]. VLPart is a pioneering study in OVPS that uses a Mask R-CNN-based approach leveraging the DINO feature for correspondences. Notably, VLPart focuses on instance segmentation with masks. In contrast, our research compares semantic segmentation methods like ZSSeg, CATSeg, and CLIPSeg, following the recent OVPS research [C2]. This distinction also led to the omission of a direct comparison with VLPart in the OpenReview of OV-PARTS (NeurIPS 2023) [C2]. Based on the feedback, we recognize that a performance comparison would enhance the overall understanding and context. Therefore, we conducted experiments under the same conditions to compare VLPart with our method. We adapted the mask outputs of VLPart, which uses mAP-based evaluation, by converting them to the semantic segmentation label with the highest pixel-wise confidence. It's important to note that VLPart's original experiments on Pascal-Part only included dog and bus as unseen classes, while our current experimental setup on Pascal-Part-116 (following OV-PARTS) includes bird, car, dog, sheep, and motorbike. Therefore, we re-trained VLPart on Pascal-Part-116 to ensure a fair comparison with our method. As shown in Table R1, our results show that PartCLIPSeg outperforms VLPart in both the Pred-All setting and the Oracle-Obj setting. This is because VLPart relies on the nearest seen class when predicting unseen classes, whereas our model understands disentangled object-level and part-level contexts and utilizes attention control. 
We believe leveraging the nearest seen class can be disadvantageous when the number of unseen classes increases, especially if these unseen classes are unrelated or distant from the seen classes. The qualitative results can be found in Fig. R3. [C1] Sun, Peize, et al. "Going denser with open-vocabulary part segmentation.", ICCV 2023. [C2] Wei, Meng, et al. "OV-PARTS: Towards open-vocabulary part segmentation.", NeurIPS 2023. ## **Weakness 3**: Computational Requirements **[Computational complexity]** For PartCLIPSeg, although the parameters increase due to the computations related to attention control, there is an advantage in not having to maintain weights for each object-specific part due to the use of generalized parts. Detailed information regarding parameters and memory is provided in Table R4. **[Challenging and failure cases]** PartCLIPSeg uses ViT-based CLIP, and since the proposed attention modulation is dependent on the resolution of the attention map, it has limitations on achieving optimal performance for overlapping small parts. ## **Limitation** The section related to societal impact was added to the supplementary materials (Ln 30 - 47). Although this is still in its early stages and immediate applications are limited, we aim to present a foundation for future applications in areas such as autonomous robot guidance, image editing, and fine-grained understanding. --- Rebuttal Comment 1.1: Comment: Hi dear reviewer, Please read through rebuttal, and see if the rebuttal has addressed your concerns or you have further comments. Thanks, AC
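The pixel-wise conversion described in the VLPart comparison above (assigning each pixel the class of the highest-confidence instance mask covering it) could be sketched as follows. This is our own illustrative implementation under stated assumptions (the function name, the background label 0, and the input layout are ours, not VLPart's actual code):

```python
import numpy as np

def instances_to_semantic(masks, scores, labels, shape):
    """Convert instance masks to a semantic label map.

    masks:  list of [H, W] boolean arrays
    scores: per-mask confidence, one float per mask
    labels: per-mask class id (0 is reserved for background here)
    """
    sem = np.zeros(shape, dtype=np.int64)     # 0 = background
    best = np.zeros(shape, dtype=np.float64)  # best confidence seen per pixel
    for mask, score, label in zip(masks, scores, labels):
        take = mask & (score > best)  # overwrite only lower-confidence pixels
        sem[take] = label
        best[take] = score
    return sem
```

Where masks overlap, the higher-scoring instance wins, which matches the "highest pixel-wise confidence" rule stated in the rebuttal.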
Summary: This paper identifies three key problems of current open-vocabulary part segmentation, namely, lack of generalization, ambiguous boundaries, and missing underrepresented parts. The paper then proceeds to propose solutions to fix these problems. Experimental results show that the proposed method outperforms previous state-of-the-art methods. Strengths: This paper identified some key issues in current open-vocabulary part segmentation and proposed effective solutions respectively. The paper was easy to follow, with good figures to demonstrate key concepts. The experiments also demonstrate that the methods can lead to nontrivial gains. Weaknesses: 1. **Limited discussion of previous open-vocabulary part segmentation work and their relationship to this work.** For example, I quickly searched and read the VLPart paper [1], which seems to parse objects first, then parts, thus avoiding the object-level misclassification issue (or the "lack of generalization" as named by the authors). Indeed, the VLPart seems very natural to me. So I am actually surprised to find that there is such an issue (mixing objects and parts during classification) as indicated by this paper. In general, the related work section provides very limited introduction of the previous open-vocabulary part segmentation methods, esp. how they are related to the proposed approach. There are also **no descriptions of the compared baselines in the experiment section**. This makes it hard for readers to fairly assess the contribution of this paper. 2. **Limited ablation studies to understand the proposed approach.** Some key questions are unanswered: how much does each proposed component (object-level context, attention control) contribute to the performance gain? In what way or what types of data? It might also be helpful to compare the Pred-Obj performance because that helps disentangle the localization and classification ability of the approaches. 
Current comparison mixes many components together and it's difficult to gain insights about how the gain is achieved. 3. **Lack of addressing directly the identified key problems**. The paper starts with three key issues and then proposes different ways to fix them. However, in the experiments, these problems are no longer directly addressed (except Table 6 for small parts). It would be great if the experiments can clearly demonstrate how the three problems are mitigated by the proposed approaches. 4. **Lack of discussion on the training details in the paper**. For readers who are not familiar with CLIPSeg, it is impossible to tell how the model is trained and what datasets are used for training. There is also **a lack of descriptions of the compared baselines in the paper**. Without this information, it's hard to understand the performance gains. Minor issues: 1. I found it confusing to call it "lack of generalization" when the dog's parts are misclassified as sheep's or cat's. Isn't it just misclassification, or object-level misclassification? 2. Texts on the figures are way too small, esp. in Figure 2 and Figure 4. 3. The "supplementary material" for this paper should be put as an appendix to the main paper file, instead of as a separate file. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What is "incomplete guidance from knowledge-based, multi-granularity characteristics of parts"? It is unclear from the texts. It seems like an important motivation for the design of the attention control. Some more analysis or discussion would help here. 2. In Table 1 and Table 3, why is the performance of PartCLIPSeg on "seen" classes so low? From the model design, I don't see a reason for this. Do the authors have an explanation? 3. How to understand the "failures" in Table 6 where PartCLIPSeg cannot improve the performance from the CLIPSeg (such as "bird's eye") or PartCLIPSeg also has very low performances (such as "sheep's eye", "cow's neck")? 
And are all these parts indeed *small*? For example, "cow's leg" doesn't seem to be small to me. How do the authors define what are small parts? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have addressed the limitation and potential societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. ## **Weakness 1**: Discussion of Previous OVPS We have now included a performance comparison with VLPart (ICCV 2023). VLPart is a pioneering study in Open-vocabulary Part Segmentation (OVPS). VLPart focuses on instance segmentation with masks, whereas our research compares semantic segmentation methods like ZSSeg, CATSeg, and CLIPSeg. This distinction also led to the omission of a direct comparison in the OpenReview of OV-PARTS (NeurIPS 2023). However, based on the feedback, we recognize that a performance comparison would enhance the overall delivery of context. Therefore, we conducted experiments under the same conditions to compare VLPart with our method. Our results show that PartCLIPSeg outperforms VLPart in both the Pred-All and the Oracle-Obj settings. VLPart is based on a Mask R-CNN model; it may encounter errors at the object-level classification (in the Pred-All setting). Even if it detects parts within an object, the overall performance can be compromised if the object-level detection itself is not accurate. We have included the qualitative and quantitative evaluation results in Table R1 and Fig. R3. (Due to space limitations with other questions, we kindly ask you to refer to the response to Reviewer LDit's Weakness 2 for additional information.) ## **Weakness 2**: Ablation Studies Additional ablation studies related to object-level context and attention control were conducted. The experiments include various object-level and part-level guidance with and without attention control. This allowed us to analyze the influence of object-level guidance, part-level guidance, and attention loss through their respective coefficients. Detailed information is provided in Table R2. (Due to space limitations with other questions, we kindly ask you to refer to the response to Reviewer LDit's Weakness 1.1 for additional information.) 
## **Weakness 3**: Addressing of Key Problems Based on the valuable feedback, we have included additional qualitative assessments to demonstrate how each issue has been resolved in Fig. R1. Also, we conducted additional quantitative studies on boundary IoU (Intersection over Union) as in Table R3. (Due to space limitations with other questions, we respectfully refer you to the response to Reviewer LDit’s Weakness 1.2 for detailed information.) ## **Weakness 4**: Training Details and Baseline Descriptions We add more explanations regarding our training details and CLIPSeg, as well as descriptions of the baseline methodologies: ZSSeg+ and CATSeg in the main text. CLIPSeg: Extends a frozen CLIP model with a transformer-based decoder. It operates with both text and image prompts, using skip connections from the CLIP encoder to the decoder and modulating the decoder's inputs using FiLM conditioning. Through this, CLIPSeg inherits CLIP’s zero-shot ability and can be fine-tuned to different downstream tasks. Our method utilizes CLIPSeg pre-trained on the PhraseCut dataset together with a frozen CLIP. We fine-tuned the model separately for each experiment using the training set of each dataset. ## **Minor issues** Regarding incorrect object-level prediction, it can indeed be seen as misclassification. However, we refer to it as a “lack of generalization” because it indicates an insufficient disentanglement of object-level and part-level information. Moreover, the misclassification in the unseen class suggests that the model does not fully generalize the holistic understanding and contextual information from the seen classes to the unseen classes. We will clarify this in the revised version. Thank you. We made the corrections for minor issues 2 and 3. 
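Boundary IoU, used for the Table R3 comparison above, restricts IoU to pixels near each mask's contour. A simplified single-class sketch of the idea (our own NumPy illustration; the official metric of Cheng et al. parameterizes the boundary width relative to the image diagonal, which we replace here with a fixed erosion depth `d`):

```python
import numpy as np

def boundary_region(mask: np.ndarray, d: int = 1) -> np.ndarray:
    """Mask pixels within d erosion steps of the contour (4-connectivity)."""
    eroded = mask.copy()
    for _ in range(d):
        padded = np.pad(eroded, 1, constant_values=False)
        # a pixel survives erosion only if all 4 neighbors are also True
        eroded = (padded[1:-1, 1:-1]
                  & padded[:-2, 1:-1] & padded[2:, 1:-1]
                  & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~eroded

def boundary_iou(gt: np.ndarray, pred: np.ndarray, d: int = 1) -> float:
    """IoU computed only over the boundary regions of the two masks."""
    gt_b, pred_b = boundary_region(gt, d), boundary_region(pred, d)
    union = (gt_b | pred_b).sum()
    return float((gt_b & pred_b).sum() / union) if union else 1.0
```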
## **Question 1**: Characteristics of Parts Our intention was to convey that, unlike object-level segments, parts may not have clear, well-defined boundaries; their extents are often set by human convention or knowledge, making perfect guidance challenging. For instance, the "head" may include the neck or only the face. We highlight that supervision can be inherently ambiguous since the boundaries of parts are subjective and competitive, based on agreed terminology. This challenge motivates the design of our attention control mechanism. Instead of relying solely on ambiguous supervision, we use common knowledge that parts should not overlap significantly. Our approach focuses on reducing the overlap between parts while simultaneously enhancing underrepresented parts. We have further clarified this in the main text. ## **Question 2**: Performance on "Seen" Classes PartCLIPSeg is designed to improve generalization by focusing both on object-level context and generalized parts, which significantly enhances performance on unseen classes. However, the emphasis on generalization can sometimes lead to sub-optimal performance in seen classes. (Due to space limitations with other questions, we respectfully refer you to the response to Reviewer Vaxb's Weakness 2 for detailed information.) ## **Question 3**: Understanding Failures in Small Parts We define a small part as any part that occupies less than 20% of the object mask. We have added this definition for clarity. Our model, along with other OVPS methods, faces challenges in accurately predicting small parts. Specifically, for unseen classes such as "bird's eye," the proposed model still exhibits lower prediction accuracy due to the limited resolution of the attention map. We have included detailed information on the proportion of object-specific parts, like the “cow’s leg”, relative to the object-level mask in Figure R4. This provides additional context on the size of these parts in relation to the entire object. 
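The 20% definition of a small part given above can be stated directly in code; this is a trivial sketch and the function name is ours, not from the paper:

```python
import numpy as np

def is_small_part(part_mask: np.ndarray, object_mask: np.ndarray,
                  ratio: float = 0.2) -> bool:
    """A part is 'small' if it covers less than `ratio` of its object mask."""
    object_area = object_mask.sum()
    return bool(object_area > 0 and part_mask.sum() / object_area < ratio)
```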
--- Rebuttal 2: Title: Thanks for the response and a few more comments Comment: I first want to thank the detailed responses from the authors, esp. the added experiments, they are very helpful to understand the method. And I still have a few more comments: 1. Related to W1: "VLPart focuses on instance segmentation with masks, whereas our research compares semantic segmentation methods". Does it make sense to comparing semantic segmentation for this work in the first place? As far as I understand, this paper is targeting the part segmentation problem, then it definitely should treat it as at least an instance segmentation problem, or more appropriately, a panoptic segmentation problem. Especially given prior works VLPart already works on instance segmentation problem, I cannot understand the choice from the authors. I don't want to jump to negative guesses, so maybe the authors can give some more explanation on the chosen baselines and the reasoning behind it. 2. Related to W3: it's great to have more qualitative examples and additional quantitative studies on boundary IoU. But I still think it doesn't demonstrate clearly enough how the proposed method solves the key problems and why the method is able to. It's probably more about the paper presentation. At this point, it might even be helpful to rethink what's the key problems that the method has resolved and how to present them better. Then additional quantitative results would be useful to support the reasoning. 3. Final comments: I appreciate the authors adding many details like related works and training details. However, these should definitely have been present in the paper in the first place. These made me feel the paper was a bit hurried. To be frank, I am also a bit worried about whether a full comparison with other works is complete. While I did mention VLPart, that's because I only searched this paper. As a non-expert in this subfield, I am not sure whether there are many more or not. 
The lack of many details made me less confident of the overall paper. I will maintain my rating for now. --- Rebuttal Comment 2.1: Comment: Thank you for providing the opportunity to clarify several points. ## **1. Related to W1** * VLPart (ICCV 2023) approaches OVPS through instance segmentation, while OV-PARTS (NeurIPS 2023) utilizes semantic segmentation. We followed the OV-PARTS protocol, which is the more recent and higher-performing method, due to the following reasons. * We believe that the lack of text-image alignment is a significant issue in the current OVPS landscape. * Addressing semantic segmentation first allows us to focus on the core alignment issues within OVPS. * OV-PARTS has demonstrated the most notable performance among existing OVPS methodologies, and we have used it as a baseline for our task. * In response to the reviewer’s suggestion that instance segmentation may be more suitable for part segmentation, we believe that further research is needed, particularly in open-vocabulary settings. * While it may be logical for part segmentation to evolve into instance or panoptic segmentation in the long term, it is critical first to develop models that can extend to various categories under semantic segmentation, especially in zero-shot environments like open-vocabulary. For this reason, semantic segmentation is actively researched in object-level open-vocabulary domains (e.g. OVSS), as seen in recent works like CAT-Seg (CVPR 2024), PnP-OVSS (CVPR 2024), and FC-CLIP (NeurIPS 2023). * **Methods based on instance segmentation have not yet demonstrated effective performance in fine-grained recognition tasks like part segmentation compared to semantic segmentation**. * As shown in Table 4 of VLPart, despite being tested in a simpler experimental setup (with only two novel classes, dog and bus), VLPart exhibits low performance, with a mAP of 4.2 and mAP_50 of 11.0 on novel classes. 
Similar trends were also confirmed under our experimental conditions, as shown in Table R1. * These results highlight the limitations of instance-based segmentation, which do not outweigh its purported advantages. --- Rebuttal Comment 2.2: Comment: ## **2. Related to W3** In this study, we identified three key challenges associated with existing OVPS methodologies (VLPart, OV-PARTS) and proposed an enhanced OVPS approach to address these issues: (a) Lack of generalization, (b) Ambiguous boundaries, and (c) Missing underrepresented parts. In the following, we clarify how the proposed approach addresses the key challenges. ##### **(a) Lack of generalization:** To address generalization issues, we develop generalized part guidance and object guidance. (Section 3.2: L117-176) Generalized part guidance is designed to learn the part class across (object-specific part) base classes in the training set so as to improve performance on novel classes in testing. Object guidance is used to resolve the ambiguity of the same part category name belonging to a different object class by using object-level supervision. Our results confirm that misclassification has reduced: qualitatively in Fig. 5 (row 3), Fig. R1 (a), Section 3.2.2 (suppl.), and quantitatively in Tables 1, 2, 3 (especially under the Pred-All setup), Table 4 (cross dataset), and Table R2 based on the m-IoU metric. An ablation study in Table R2 clearly justifies the roles of both guidances. ##### **(b) Ambiguous boundaries:** To overcome the ambiguity in part-level supervision, we leverage the common understanding that these parts should be distinct and non-overlapping. To achieve this, we introduce a constraint $\mathcal{L}\_{sep}$, which ensures that attention activations from different object-specific parts remain disjoint within an object. (Section 3.3.1: L194-203). It is supported by qualitative results in Fig. 5 (row 1) where ambiguous boundaries are mitigated, Fig. 
R1(b), and Section 3.2.2 (suppl.), where non-fully covered part predictions occur in other baselines, particularly in the Pred-All setting. Quantitative results are presented in Table R3, based on the boundary IoU metric. An ablation study in Tables A5 and A6 further demonstrates the effectiveness of our proposed loss function in mitigating ambiguous boundaries. ##### **(c) Missing underrepresented parts:** To ensure that underrepresented parts, such as small or less frequent ones, are not ignored, we propose $ \mathcal{L}\_{enh} $. This is achieved by maximizing the attention activation values for these parts, allowing them to be more effectively identified. (Section 3.3.2: L204-214) Our method is supported by qualitative results shown in Fig. 4, Fig. 5 (row 2), Fig. R1 (c), and Section 3.2.1 (suppl.) where small part classes such as bird’s tail and beak are correctly segmented. Quantitative results are provided in Table 6, Fig. R4, Table A1, and Table A2, using the metrics m-IoU and recall. Ablation studies on m-IoU in Section 4.3, Table 5, and Recall in Table A3, and Table A4 further highlight the effectiveness of our proposed method. 
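The two attention-control objectives described above can be sketched on a stack of per-part attention maps. This is our own minimal NumPy illustration of the stated goals (disjoint activations for $\mathcal{L}_{sep}$, a guaranteed strong activation per part for $\mathcal{L}_{enh}$), not the paper's actual implementation:

```python
import numpy as np

def separation_loss(attn: np.ndarray) -> float:
    """attn: [P, H, W] per-part attention maps with values in [0, 1].
    Penalizes pairwise overlap so part activations stay disjoint."""
    P = attn.shape[0]
    overlap = 0.0
    for i in range(P):
        for j in range(i + 1, P):
            overlap += np.minimum(attn[i], attn[j]).mean()
    pairs = P * (P - 1) / 2
    return float(overlap / pairs) if pairs else 0.0

def enhancement_loss(attn: np.ndarray) -> float:
    """Pushes each part's peak activation toward 1 so no part vanishes."""
    peaks = attn.reshape(attn.shape[0], -1).max(axis=1)
    return float((1.0 - peaks).mean())
```

Used together, the first term discourages two parts from claiming the same pixels, while the second keeps the separation term from simply suppressing small parts to zero, matching the interaction discussed in the Weakness 3 response above.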
**(Dataset / Evaluation Setting / Metric)** **[Table A1] (Pascal-Part-116 / Oracle-Obj / Recall)** |model|seen|unseen|harmonic Recall| |:-:|:-:|:-:|:-:| |ZSSeg+|65.47|32.13|43.10| |CAT-Seg|56.00|43.20|48.77| |CLIPSeg|55.71|43.35|48.76| |PartCLIPSeg|58.46|47.93|52.67| **[Table A2] (ADE20K-Part-234 / Oracle-Obj / Recall)** |model|seen|unseen|harmonic Recall| |:-:|:-:|:-:|:-:| |ZSSeg+|55.78|40.71|47.07| |CAT-Seg|43.48|39.87|41.60| |CLIPSeg|49.59|48.11|48.84| |PartCLIPSeg|53.31|51.52|52.40| **[Table A3] (Pascal-Part-116 / Oracle-Obj / Recall)** |model|seen|unseen|harmonic Recall| |:-:|:-:|:-:|:-:| |w/o $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $ |58.97|46.47|51.98| |w/ $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $|58.46|47.93|52.67| **[Table A4] (ADE20K-Part-234 / Oracle-Obj / Recall)** |model|seen|unseen|harmonic Recall| |:-:|:-:|:-:|:-:| |w/o $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $ |51.64|50.99|51.31| |w/ $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $|53.31|51.52|52.40| **[Table A5] (Pascal-Part-116 / Oracle-Obj / B-IoU)** |model|seen|unseen|harmonic B-IoU| |:-:|:-:|:-:|:-:| |w/o $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $ |36.24|37.87|37.04| |w/ $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $|36.15|39.07|37.55| **[Table A6] (ADE20K-Part-234 / Oracle-Obj / B-IoU)** |model|seen|unseen|harmonic B-IoU| |:-:|:-:|:-:|:-:| |w/o $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $ |24.99|22.41|23.63| |w/ $ \mathcal{L}\_{sep} + \mathcal{L}\_{enh} $|25.67|22.46|23.96| --- Rebuttal 3: Comment: ## **3. Final comments** Thank you for your feedback. We understand your concerns. However, we had a thorough understanding of the existing research (including VLPart) and excluded VLPart from the experimental comparison only due to differences in experimental protocols. We also emphasize that our experiments on semantic segmentation protocols are thorough and more competitive in terms of performance. Besides, we even suggest a new evaluation criterion, the Pred-All setting. 
* We focused on the semantic segmentation task of OVPS for the reasons mentioned earlier in answer 1. * We adopted the state-of-the-art OVPS baseline from OV-PARTS and conducted a thorough comparison with all comparable methodologies in this task. * Although VLPart was mentioned in the introduction and related work, a detailed comparison was initially omitted because VLPart adheres to the instance segmentation protocol. However, this has been strengthened thanks to the reviews (Table R1). * Baselines such as ZSSeg+, CATSeg, and CLIPSeg (introduced in OV-PARTS) along with their training details were previously introduced in Section 2.2 of the supplementary materials under Implementation Details (L77-L98). However, based on the feedback, we will revise the related work to make these descriptions more prominent and ensure that they are not overlooked. * We were aware of existing OVPS studies, including VLPart, from the initial submission. OVPS is an emerging and underexplored research area with limited research focused on improving the direct alignment between images and part-level text. * In the Part Segmentation section of the introduction and related work, we briefly described all the latest OVPS-related methods we could find at the time of submission, including VLPart [35] (L33-34, L84-86), OV-PARTS [40] (L34-37), and OPS [30] (L83-84). * **To the best of our knowledge, OV-PARTS achieved the highest performance in OVPS and there have been no follow-up studies** specifically addressing the part-level semantic segmentation task in an OV setting without relying on additional mask proposal models, aside from our work. 
[30] OPS: Towards Open-World Segmentation of Parts (CVPR 2023) [35] VLPart: Going Denser with Open-Vocabulary Part Segmentation (ICCV 2023) [40] OV-PARTS: Towards Open-Vocabulary Part Segmentation (NeurIPS 2023 D&B) --- Rebuttal Comment 3.1: Title: Thanks for the authors' responses Comment: I want to thank the authors for their detailed responses and clarifications. They help clarify some of my misunderstandings. I re-read the paper and the other comments. Based on my final understanding, I decided to keep my current rating as borderline acceptance. --- Reply to Comment 3.1.1: Comment: We sincerely appreciate your thoughtful review and valuable comments. We are pleased to have resolved the questions raised and appreciate your positive decision. Thank you again for your time and insight.
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely appreciate all reviewers for their thorough and insightful feedback. The valuable reviews have significantly enhanced the overall delivery of the proposed method. We are particularly thankful that all reviewers (Vaxb, LDit, KLx2) found that our paper shows promising results and non-trivial improvement. We appreciate the reviewers for highlighting that our paper is well-organized and easy to follow (Vaxb, KLx2). Additionally, we are grateful for noting that the task of OVPS is meaningful and underexplored (Vaxb). Finally, we are thankful for acknowledging our identification of key issues in OVPS and our effective approach to addressing these issues (LDit, KLx2). Through this review, we were able to answer the following questions. We kindly ask you to refer to the individual responses and the attached PDF. The progress made based on the review can be summarized as follows. * Additional Baseline: VLPart (ICCV 2023) * Additional experiments related to the OVPS baseline were conducted. * Qualitative and quantitative results of VLPart can be found in Table R1, Fig. R2. * Qualitative and Quantitative Results of Initial Challenges * New qualitative results have been added in Fig. R1. * We conducted extra quantitative experiments of boundary IoU for Challenge B (Ambiguous boundaries), and results have been added in Table R3. * Additional Ablation Studies and Analysis * Further ablation studies have been conducted for different hyperparameters of $\lambda_1$, $\lambda_2$, attention control ($ {\mathcal{L}\_{sep}} $, $ {\mathcal{L}\_{enh}} $), and $ \gamma $ as in Table R2. * New qualitative explanations of attention losses are added in Fig. R2. * The analysis of the datasets in the aspect of object-specific part size has been added in Fig. R4 and the overall object size in Fig. R5. * Detailed computation resource information has been added in Table R4. 
We refined the paper for clarity and hope our responses address the reviewers' concerns. We are happy to answer any further questions. We sincerely thank you again for the valuable feedback. Pdf: /pdf/e0a2195e56360c11f82cfc377752bd037310fae5.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling
Accept (poster)
Summary: The paper analyzes the intricate relationship between optimal learning rate and batch size scaling for adaptive optimizers, such as Sign SGD and Adam. Building on prior analysis for SGD by McCandlish (2018), this work reveals a non-monotonic relationship between optimal learning rates and batch size. The optimal learning rate initially increases, reaches a peak, and then decreases, eventually saturating to a non-zero value, referred to as the surge phenomenon. These theoretical predictions are validated through experiments in both image classification and NLP tasks. Strengths: * The paper provides new insights into the relationship between optimal learning rate and batch size for Adam optimizers. The drop in the optimal learning rate after the peak is new, to the best of my knowledge, and is relevant given that Adam is the default optimizer choice. * Prior results suggesting a square root scaling for Adam are reproduced, further supporting the findings. Weaknesses: * At times, the paper assumes that the reader is well-versed with the prior work of McCandlish (2018). For instance, lines 129-131. It would be helpful to reiterate prior results to motivate the analysis. * The empirical results are not very convincing. If we only consider the experiments (for instance Figure 4), without any reference to theoretical results, the surge phenomenon does not seem appreciable. A finer learning rate search has to be performed to demonstrate the surge phenomenon clearly. In Figure 4(b), the optimal learning rate is oscillating around two points. It is unclear if this is due to the surge phenomenon or just random fluctuations. I would request the authors to help me understand their empirical results better. * The theoretical results are derived for sign SGD, while it's known that Adam parameters beta1 and beta2 are crucial hyperparameters. It's unclear why the theoretical results can be generalized to Adam. 
Technical Quality: 3 Clarity: 3 Questions for Authors: * What is the practical implication of the decrease in the optimal learning rate after the peak? * Given that the peak shifts through training, can the authors propose guidelines for scaling the learning rate and batch size? * What is the intuition behind the training speed and data efficiency relationship being the same as the SGD case? * Why the ResNet model (Figure 3) is trained with random 10k samples at every epoch? This should affect the overall results. Also, why this experiment is performed with sign SGD only? * If the results of Figure 2(b) and 2(c) are combined, then it should predict (1) a drop in the optimal learning rate, (2) saturation, (3) increase, and finally saturation. How does this result align with the main result (Figure 1)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: * The theory is built on the quadratic approximation of the loss function. In the last few years, it has been established that modern neural networks are typically trained at large learning rates (Edge of Stability, see Refs. [1-2]), which cannot be captured using quadratic approximations of the loss [3]. * Gaussian distribution for gradients is assumed for the theoretical analysis, whereas it is known that the gradient distribution is not Gaussian, and this is precisely why Adam performs better than SGD in language modeling. It is unclear whether the results hold for such settings. [1] Gradient descent on neural networks typically occurs at the edge of stability Cohen et al. (2021) arXiv:2103.00065 [2] Adaptive gradient methods at the edge of stability Cohen et al. (2022) arXiv:2207.14484 [3] Self-stabilization: The implicit bias of gradient descent at the edge of stability Damian et al. (2022) arXiv:2209.15594 [4] Linear attention is (maybe) all you need (to understand transformer optimization) Ahn et al. 
(2022) arXiv:2310.01082 Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and for your insightful reviews. Below, we address your comments in detail. > W1. A1. Due to the paper page limit, we were unable to delve deeply into references [1-2]. We acknowledge this limitation and will provide a more comprehensive analysis in a future edition. Regarding the analysis of training speed and data efficiency in lines 129-131, we provide a detailed exposition in A6. In summary, Eq. 20 plays a pivotal role in both [1] and [2], particularly in its modification of the scaling law in [2]. If we demonstrate that Eq. 20 holds in the context of Adam, the modified scaling law conclusion in [2] naturally follows. > W2. A2. Following your suggestion, we conducted a finer search and presented the results in Fig. 2 of the attached file. The outcomes from these extended experiments provided clearer results that align with our theoretical analyses. > W3. A3. In Appendix A of our paper, we explain why the theoretical results for sign SGD can be generalized to Adam. In both the original experiments in our paper (Sections 3.1 and 3.3) and our supplementary Experiment-1, we set both hyperparameters beta1 and beta2 to non-zero values and still observed the proposed surge phenomenon in the experimental results. More details and results can be found in the global rebuttal and the appendix. > Q1. A4. Our motivation for investigating the scaling law of learning rates with respect to batch sizes is to gain deeper insights into the training dynamics of deep learning models. This understanding can aid in fine-tuning hyperparameters, enhancing convergence speed, and avoiding exhaustive grid searches. Leveraging the prior knowledge that the optimal learning rate decreases after reaching its peak, researchers and engineers can effectively adjust the learning rate to achieve more efficient training. > Q2. A5. 
Given that the peak shifts during training, a straightforward approach is to periodically adjust hyperparameters to redetermine the scaling law, including the hyperparameters $\epsilon_{max}$ and $\mathcal B_{noise}$. Our observation may serve as inspiration for developing adaptive learning rate or batch size methods in the future, which can effectively harness this knowledge. > Q3. A6. Section 2.3 of [1] demonstrates that integrating its Eq. 2.7 during the optimization process results in Eq. 2.11 (Eq. 20 in our paper). This implies that if the delta loss after each optimization step follows $\frac{\Delta L_{max}}{1 + \frac{\mathcal{B}}{B}}$, similar conclusions to Eq. 2.11 can be derived. Consequently, proving Eq. 18 ensures the validity of Eq. 20. This suggests that Adam and SGD only affect the value of $\Delta L_{max}$ and do not otherwise interfere with the relation between the delta loss and the batch size. > Q4. A7. Considering the substantial training costs associated with grid search in our experiments, we utilize random 10k samples to mitigate these expenses. This approach accelerates model convergence while yielding consistent conclusions. Notably, when attempting to use the original dataset, we encountered the need for larger models to fit the data, resulting in significantly longer training times, far beyond what we can afford. Beyond the considerations of training costs, selecting a distinct value for the hyperparameter $\beta_{i}$ does not compromise the validity of our conclusions. We provide a comprehensive discussion on the relationship between sign SGD and Adam in Appendix A. Furthermore, our paper and supplementary file include experiments that explore outcomes related to the Adam scenario. More details and results can be found in our paper (Sections 3.1 and 3.3) and our supplementary Experiment-1. > Q5. A8. Thank you for your observation. In Fig. 
2, we deliberately chose scenarios with higher loss to emphasize the decreasing and subsequently increasing parts of the learning-rate curve. In these cases, the batch size corresponding to the peak of the curve (the red dashed lines) is relatively small, so the grid search does not fully capture the rising range of the curve. However, our other experiments, documented in both the main paper and the supplementary file, reveal curves that align more closely with the trend depicted in Fig. 1. We include these scenarios to gain a clearer understanding of the underlying dynamics. > L1. A9. Thank you for your insightful suggestions. Our experiments, including the supplementary ones, demonstrate that the conclusions drawn from the second-order expansion approximation of the loss function effectively predict the surge phenomenon observed in most mainstream scenarios. In our final paper revision, we will thoroughly review and discuss the referenced articles. Additionally, we recognize the potential of exploring higher-order approximations with respect to scaling laws as promising future work. > L2. A10. As observed in Fig. 2(a) of our paper and Fig. 4 in our supplementary file, the distribution of gradients along the dataset direction approximately adheres to a Gaussian distribution. Furthermore, as demonstrated in Equation 11, the equation indeed possesses a functional extremum as long as the distribution satisfies $f(B) \propto B$. This implies that, under broader conditions, both the batch size and learning rate still exhibit a surge during the scaling process. We appreciate the suggestions provided in L1-2 and plan to explore this issue more thoroughly in future work, incorporating the ideas discussed therein. Thanks again for appreciating our work and providing constructive suggestions. We hope you will consider raising your score. Please let us know if you have further questions. --- [1] An Empirical Model of Large-Batch Training, 2018, McCandlish et al. 
(reference [25] in our paper) [2] Scaling Laws for Neural Language Models, 2020, Kaplan et al. (reference [26] in our paper) --- Rebuttal Comment 1.1: Comment: I thank the authors for their extensive rebuttal. I have gone through the rebuttal plots and I still don't find the empirical results regarding the surge phenomena convincing. For instance, Figure 2 of the rebuttal does not look similar to the claimed curve from Figure 1. I would request the authors to further help me understand the difference. --- Reply to Comment 1.1.1: Comment: Thank you for your time and feedback. The surge phenomenon refers to the behavior where the optimal learning rate increases with an increase in batch size, then decreases, and finally increases slowly. This is illustrated by the orange solid line in Figure 1 of the original paper: rising from 0 to 0.4e8, falling from 0.4e8 to 1.5e8, and rising again from 1.5e8 to 4e8. Supplementary Material Figure 2 provides a finer grid search in the batch size range of 80-120 and learning rate range of 0.002-0.004, expanding Figure 4(b) from the original paper. This highlights that the observed phenomenon of decreasing and then increasing within the batch size range of 80-120 in Figure 4(b) is not due to randomness. From a broader perspective in Figure 4(b), as the batch size increases, the optimal learning rate first increases from 0 to 65.8, then decreases from 65.8 to 105, and finally slowly increases from 105 to 120, displaying a similar trend to Figure 1 in the original paper. Although the specific **batch size** ranges for each phase may vary with different models and datasets, the general trend described by the surge phenomenon is consistent across these scenarios.
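For intuition, the rise-then-fall portion of this trend can be reproduced numerically from the paper's scaling-law form for the optimal learning rate. The sketch below is illustrative only: the $\epsilon_{max}$ and $\mathcal B_{noise}$ values are made up rather than taken from any experiment, and the final slow re-increase (which involves the $\mathcal E_i(B)$ correction) is not modeled here.

```python
import math

def predicted_opt_lr(batch, eps_max, b_noise):
    """eps_opt = eps_max / (0.5 * (sqrt(B_noise/B) + sqrt(B/B_noise))).
    Rises for B < B_noise, peaks at B = B_noise, then falls; the final
    slow re-increase of the surge curve is not captured by this form."""
    r = math.sqrt(b_noise / batch)
    return eps_max / (0.5 * (r + 1.0 / r))

# Illustrative values only -- NOT taken from the paper's experiments.
eps_max, b_noise = 5e-4, 1e7
for b in (1e5, 1e6, 1e7, 1e8, 1e9):
    print(f"B={b:.0e}  predicted eps_opt={predicted_opt_lr(b, eps_max, b_noise):.3e}")
```

The printed values peak exactly at $B = \mathcal B_{noise}$ and fall off symmetrically in $\log B$, matching the increase-then-decrease phases described above.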
Summary: The paper presents a heuristic analysis of the scaling of the optimal learning rate with batch size for Adam-style optimizers in the framework of [1]. The analysis is accompanied by experiments to support the predictions. Notably, the authors demonstrate that the optimal learning rate can decrease with batch size in a certain range, a phenomenon that had not been identified previously and which is not present for SGD. The analysis also recovers the square-root scaling rule in the small batch size regime identified in other work. [1] McCandlish, S., Kaplan, J., Amodei, D., & Team, O. D. (2018). An empirical model of large-batch training. arXiv preprint arXiv:1812.06162. Strengths: The paper tackles a practically important problem using a mix of heuristic theory and experiments. The prior literature on linear scaling rules with large batch training applies to SGD but not to Adam-style optimizers, which are the dominant optimizers for transformers. The surge phenomenon is interesting and novel. Weaknesses: The presentation is not great. A lot is assumed from [1], but it would make reading easier to make things more self-contained. Equations like Eq. 22 should be better explained, and plots like Fig. 3 are hard to parse. The process for making the Fig. 3 plot is unclear. It is unclear what is going on in Figure 1 between the solid and dashed Adam curves. The takeaways and consequences for a practitioner are unclear. Minor: the LaTeX parentheses look sloppy. [1] McCandlish, S., Kaplan, J., Amodei, D., & Team, O. D. (2018). An empirical model of large-batch training. arXiv preprint arXiv:1812.06162. Technical Quality: 3 Clarity: 1 Questions for Authors: My questions are related to the perceived weaknesses 1. In Fig. 1 how is the solid Adam curve generated? Shouldn't the curve eventually asymptote? It doesn't seem like that from the plot. 2. How is Fig. 3 generated? The linear fits look very strange. 3. 
Do the results suggest using any modification of the square root scaling rule in practice? If so it would be helpful to have a comparison to understand potential benefit. 4. Is there any characterization of the **intermediate** (i.e. large but not infinite batch) behavior of the scaling? It would be helpful to have an analog of Figure 1 for the empirical data. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for your valuable insights. We address your comments as follows. > W1. The presentation is not great. A lot is assumed from [1], but it would make reading easier to make things more self-contained. A1. Due to page constraints, we currently include only essential assumptions and conclusions from related work. In the final version of our paper, we aim to make the contents more self-contained and enhance our presentation. > W1. It is unclear what is going on in Figure 1 between the solid and dashed Adam curves. > Q1. In Fig. 1 how is the solid Adam curve generated? Shouldn't the curve eventually asymptote? It doesn't seem like that from the plot. A2. The solid line of Fig. 1 in our paper is a schematic diagram that we created based on Eq. 13 and 17, while the dashed line of Fig. 1 is an illustration of Eq. 13. According to Eq. 11 and 13, there exists an extreme value of the learning rate, resulting in a tendency to increase first and then decrease. As $B$ continues to increase, $\mathcal E_{i}(B)$ grows slowly according to Eq. 10 and 17 and finally approaches a value asymptotically, causing the solid line to gradually rise after the downtrend. For better visualization, we only showed part of the figure without plotting the final asymptote. Since the specific values of the downward and upward inflection points are related to the current loss and are hard to determine, the schematic diagram is presented here for illustration purposes only. Furthermore, in supplementary Experiment-3, we provide additional evidence to support our conclusions by visualizing experiment results that accord with the trend shown in Fig. 1. More details can be found in the global rebuttal. We will provide more detailed explanations in the final version of our paper. > W1. Equations like Eq. 22 should be better explained and plots like in Fig 3 are hard to parse. The process for making the Fig. 3 plot is unclear. 
> Q2. How is Fig. 3 generated? The linear fits look very strange. A3. Eq. 22 is derived from Eq. 20-21, and the detailed derivation is in Appendix G. We will add more explanations in the final version of our paper. In Fig. 3, the left part is the grid search results of batch sizes and learning rates, while the right part is the fitted curve of Eq. 22 using the S and E data from corresponding (batch size, best learning rate) pairs in the left part. We will improve the presentation of the experiments. > W1. The takeaways and consequences for a practitioner are unclear. > Q3. Do the results suggest using any modification of the square root scaling rule in practice? If so it would be helpful to have a comparison to understand potential benefit. A4. In our experiments (see Fig. 2-4 in our paper and also the figures in the attached file), our fitted curve outperforms the square root scaling (corresponding to the alpha=0.5 curve). We have discussed the benefits of this approach in Section 3.3 and global rebuttal. Notably, while square root scaling may perform well in small batch size scenarios, it is essential to recognize that in large batch size scenarios, the optimal learning rate may no longer scale linearly and could decrease. For practitioners, we recommend following the papers [1-2] to approximate $\mathcal B_{noise}=B_{crit}$ using scaling law $B_{crit} = \frac{B^*}{L^{\frac{1}{\alpha}}}$, then use one simple search for a pair of (batch size, optimal learning rate) to determine the last hyper-parameter $\epsilon_{max}$. Subsequently, the conclusion of our paper can be applied to avoid costly grid-search. We will clarify the process in the final version of our paper. > Q4. Is there any characterization of the intermediate (i.e. large but not infinite batch) behavior of the scaling? A5. 
In our supplementary Experiment-3, we validated our theory by referring to publicly available experiment results from another paper, which included data on scaling in the intermediate transition phase. We have also visualized these results to further support our findings. More details and results can be found in the global rebuttal and the appendix file. > W1. Minor: the LaTeX parentheses look sloppy. A6. We will improve the readability of our paper. Thanks again for appreciating our work and for your constructive suggestions. We hope you will consider raising your score. Please let us know if you have further questions. --- [1] An Empirical Model of Large-Batch Training, 2018, McCandlish et al. (reference [25] in our paper) [2] Scaling Laws for Neural Language Models, 2020, Kaplan et al. (reference [26] in our paper) --- Rebuttal Comment 1.1: Comment: Thank you for the responses and additional results. I will increase my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ASLs, We would like to express our gratitude for your valuable feedback and for increasing the score of our paper. Your perceptive remarks and recommendations have significantly enhanced the quality of our work, and we sincerely value the time and effort you dedicated to reviewing our submission.
Summary: The paper gives an optimal choice of learning rate and batch size for neural networks. Different from previous results on SGD-style optimizers, the authors give such solutions for Adam-style optimizers. Strengths: 1. Batch size and learning rate largely affect performance, and selecting good values is costly. It is important to understand the optimal batch size and learning rate for Adam-style optimizers. 2. The experimental results match the theorem proposed by the authors. Weaknesses: 1. It seems that Lemma 1 can only apply to quadratic problems. In the appendix, the relation is approximately equal. But in Lemma 1, it becomes "equal" without any further assumptions. 2. It is unclear how to select the optimal batch size or learning rate based on the theorem, because either S, E or $\mu,\sigma$ is hard to estimate for a large network. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why does the theorem apply to general network optimization? 2. How to make the result of the theorem usable in practice? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the time and effort you have taken in reviewing our paper and for your thoughtful feedback. Here, we respond to your comments. > W1. It seems that Lemma 1 can only apply to quadratic problems. In the appendix, the relation is approximately equal. But in Lemma 1, it becomes "equal" without any further assumptions. > Q1. Why does the theorem apply to general network optimization? A1. We would like to clarify that Lemma 1 can be applied to **ALL** gradient descent-based machine learning processes. In this context, the quadratic term arises from the Taylor expansion of the loss with updated parameters, **NOT** from quadratic problems. General network optimizations all have the form of gradient descent-based learning. Here we use the "equal" sign for simplicity. A similar analysis and simplification were also proposed in OpenAI's paper [1], Eq. 2.4-2.7. We will improve the clarity and the notation, and add the reference to Lemma 1 in the final version of our paper. > W2. It is unclear how to select the optimal batch size or learning rate based on the theorem because either S, E or &mu;, &sigma; is hard to estimate for a large network. > Q2. How to make the result of the theorem usable in practice? A2. Our work extends the conclusions on SGD optimizers in OpenAI's papers [1-2], focusing on Adam-style optimizers. Though we used grid search in our experiments to validate our conclusions, in practice the scaling law contains only two unknown hyper-parameters, $\epsilon_{max}$ and $\mathcal B_{noise}$. Following papers [1-2], $\mathcal B_{noise}=B_{crit}$ can be efficiently approximated using the scaling law $B_{crit} = \frac{B^*}{L^{\frac{1}{\alpha}}}$. We only need one simple search for a pair of (batch size, optimal learning rate) to determine the last hyper-parameter $\epsilon_{max}$. Therefore, the costly grid search can be avoided. We will improve the presentation of our paper and add this clarification. 
Thanks again for appreciating our work and for your constructive suggestions. We hope you will consider raising your score. Please let us know if you have further questions. --- [1] An Empirical Model of Large-Batch Training, 2018, McCandlish et al. (reference [25] in our paper) [2] Scaling Laws for Neural Language Models, 2020, Kaplan et al. (reference [26] in our paper) --- Rebuttal Comment 1.1: Comment: Can you provide some experimental results showing that the procedure in A2 is feasible? The current results are based on grid search over all hyperparameters. --- Reply to Comment 1.1.1: Comment: We appreciate your questions. To demonstrate the feasibility of A2, we follow the procedure outlined in A2 to determine the optimal learning rates relative to batch sizes, using the MoE experiment as an example (the data used below are from Fig. 1 in the supplementary file). ---

**Step 1: Approximate $\epsilon_{opt}$ using a small amount of data**

To demonstrate the generalizability of A2, we try 3 starting points to derive the scaling laws separately. For each of the 3 batch sizes (3145728, 4718592, 6291456), we search 5 (learning rate, training loss) pairs to fit $\epsilon_{opt}$, where the training loss is normalized as $normalized(L) = \frac{L - L_{min}}{L_{max} - L_{min}}$.

| | BS-1 | BS-2 | BS-3 |
|:---:|:---:|:---:|:---:|
| Token BS | 3145728 | 4718592 | 6291456 |
| LR (1e-4) | $normalized(L)$ | | |
| 1.0 | 1.0000 | 1.0000 | 1.0000 |
| 2.0 | 0.4837 | 0.5134 | 0.5558 |
| 3.0 | 0.1786 | 0.2144 | 0.2269 |
| 4.0 | 0.0000 | 0.0000 | 0.0269 |
| 5.0 | 0.0828 | 0.0186 | 0.0000 |
| A | 0.0946 | 0.0782 | 0.0688 |
| -B | 0.7995 | 0.7168 | 0.6658 |
| $\epsilon_{opt}=-\frac{B}{2A}$ | **4.2257** | **4.5831** | **4.8387** |

---

**Step 2: Approximate $\epsilon_{max}$ using a pair of (B, $\epsilon_{opt}$)**

Following the referenced paper [2] and using its scaling law $B_{crit}=\frac{B^*}{L^{1/\alpha}}$, we can get $B_{crit} \approx 10^7$. 
We then use one pair of (B, $\epsilon_{opt}$) to approximate $\epsilon_{max}$; the results are shown below. The last column is the actual result from grid search for comparison.

| | BS-1 | BS-2 | BS-3 | AdEx 1 |
|:---:|:---:|:---:|:---:|:---:|
| Token BS | 3145728 | 4718592 | 6291456 | |
| $\epsilon_{opt}$ (1e-4) | 4.2257 | 4.5831 | 4.8387 | |
| $\epsilon_{max}$ (1e-4) | **4.9374** | **4.9001** | **4.9628** | 4.9363 |

---

**Step 3: Use the fitted values to predict the best learning rates $\epsilon_{opt}$ for different batch sizes**

Using ($B_{crit}$, $\epsilon_{max}$) from Step 2, we can predict the optimal learning rate $\epsilon_{opt}$ for different batch sizes with the formula $\epsilon_{opt}=\frac{\epsilon_{max}}{\frac{1}{2}(\sqrt{\frac{\mathcal B_{noise}}{B}} + \sqrt{\frac{B}{\mathcal B_{noise}}})}$ from our paper. The predicted $\epsilon_{opt}$ values (in units of 1e-4) and the actual values from grid search are shown below. The relationship between the optimal learning rate and batch size derived from each of these 3 starting batch sizes aligns with the actual results from the grid search.

| Token BS | BS-1 | BS-2 | BS-3 | AdEx 1 |
|:---:|:---:|:---:|:---:|:---:|
| 196608 | 1.3654 | 1.3551 | 1.3724 | 1.3651 |
| 294912 | 1.6561 | 1.6436 | 1.6646 | 1.6558 |
| 786432 | 2.5799 | 2.5604 | 2.5931 | 2.5793 |
| 1572864 | 3.3982 | 3.3724 | 3.4156 | 3.3974 |
| 3145728 | 4.2257 | 4.1937 | 4.2474 | 4.2247 |
| 4718592 | 4.6180 | 4.5831 | 4.6417 | 4.6170 |
| 6291456 | 4.8140 | 4.7776 | 4.8387 | 4.8129 |
| 9437184 | 4.9361 | 4.8988 | 4.9614 | 4.9350 |
| 12582912 | 4.9018 | 4.8647 | 4.9269 | 4.9007 |

Therefore, by starting with one batch size and its corresponding five learning rate results, we can approximate the scaling law between optimal learning rates and batch sizes. 
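The three-step procedure above can be sketched in plain Python. This is a hypothetical re-implementation, not the authors' code: the quadratic fit uses hand-rolled normal equations, the inputs are the BS-1 column of the Step 1 table, and $B_{crit}$ is rounded to $10^7$, so the recovered values match the tabulated ones only approximately.

```python
import math

def fit_quadratic(xs, ys):
    """Least-squares fit of y = A*x^2 + B*x + C via the 3x3 normal equations."""
    pw = [2, 1, 0]
    m = [[sum(x ** (pi + pj) for x in xs) for pj in pw] for pi in pw]
    v = [sum(y * x ** pi for x, y in zip(xs, ys)) for pi in pw]
    aug = [row + [rhs] for row, rhs in zip(m, v)]
    for col in range(3):  # Gaussian elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, 3):
            f = aug[r][col] / aug[col][col]
            for c in range(col, 4):
                aug[r][c] -= f * aug[col][c]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back-substitution
        sol[r] = (aug[r][3] - sum(aug[r][c] * sol[c] for c in range(r + 1, 3))) / aug[r][r]
    return sol  # [A, B, C]

def surge_factor(batch, b_noise):
    """0.5 * (sqrt(B_noise/B) + sqrt(B/B_noise)); equals 1 at B = B_noise."""
    r = math.sqrt(b_noise / batch)
    return 0.5 * (r + 1.0 / r)

# Step 1: fit eps_opt from the (LR, normalized loss) pairs of the BS-1 column.
lrs = [1.0, 2.0, 3.0, 4.0, 5.0]                      # in units of 1e-4
losses = [1.0000, 0.4837, 0.1786, 0.0000, 0.0828]
A, B, _ = fit_quadratic(lrs, losses)
eps_opt = -B / (2 * A)                               # vertex of the fitted parabola

# Step 2: back out eps_max from one (batch size, eps_opt) pair, with B_crit ~ 1e7.
b_noise = 1e7
eps_max = eps_opt * surge_factor(3145728, b_noise)

# Step 3: predict the optimal LR for other batch sizes.
pred = {b: eps_max / surge_factor(b, b_noise) for b in (196608, 3145728, 9437184)}
```

With these inputs, the fit recovers $A \approx 0.0946$ and $\epsilon_{opt} \approx 4.225$, close to the tabulated 4.2257; the small remaining gaps come from rounding $B_{crit}$ and the 4-decimal losses.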
--- Rebuttal 2: Comment: Dear Reviewer JT4e, We would like to express our sincere gratitude for your valuable feedback and for increasing the score of our paper. Your insightful comments and suggestions have greatly improved the quality of our work. We will ensure to incorporate these suggestions into the final manuscript, and we truly appreciate the time and effort you have put into reviewing our submission.
Summary: This work provides a scaling law between learning rate and batch size for Adam. Namely, this work finds that the optimal learning rate increases and then decreases as the batch size becomes larger; and, the peak of this curve corresponds to the trade-off between training speed and data efficiency. Strengths: - The paper is well motivated. While prior works studying the relationship between batch size and learning rate have focused on SGD, this work focuses on Adam (which is more popular / widely used). - The paper is written clearly and is well organized. I appreciate the “summary” notes included by the authors - The paper includes empirical evidence for CV and NLP tasks to support theoretical claims. Weaknesses: - I think this paper could benefit from experiments with more popular architectures for the NLP tasks (e.g. maybe it would be useful to include some experiments on tasks with llama or mistral models). - I also think it would be useful to have experiments with more datasets. Recent work shows that the data itself matters. E.g., for fine tuning LLMs, many factors wrt the data (e.g., data quality, variable sequence lengths, deduplicated data) can affect training. It would be interesting to see if the surge phenomena is agnostic to these factors or not. Technical Quality: 3 Clarity: 3 Questions for Authors: For NLP related tasks, the number of tokens in a batch might vary. Would the surge phenomena still apply for learning rate vs. num tokens? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: It would be interesting to see results on a larger variety of modern and widely used datasets and architectures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and constructive suggestions. We address your comments in the following paragraphs. > W1. I think this paper could benefit from experiments with more popular architectures for the NLP tasks (e.g. maybe it would be useful to include some experiments on tasks with llama or mistral models). A1. We provided additional experimental analyses on both sparse MoE and dense Transformer models for NLP tasks. The details can be found in the global rebuttal. For the sparse MoE structure, we experimented on the fine-grained MoE model with shared experts[1], which is a popular model and has a similar structure to the well-known Mistral-MoE[2]. As for the dense model, we validated our proposed conclusions by referring to another paper's experiments[3]. Please refer to the global rebuttal for the configurations, results, and analyses of these experiments. > W2. I also think it would be useful to have experiments with more datasets. Recent work shows that the data itself matters. E.g., for fine tuning LLMs, many factors wrt the data (e.g., data quality, variable sequence lengths, deduplicated data) can affect training. It would be interesting to see if the surge phenomena is agnostic to these factors or not. A2. In the global rebuttal and our attached files, we incorporated additional datasets to enhance the robustness of our findings. Specifically, the Experiment-1 utilized the RedPajama-v2 dataset, and Experiment-3 used a combination of Deepseek's in-house data and OpenWebText2. Please refer to global rebuttal for datasets' details. We acknowledge that data-related factors may affect training. However, due to their complexity and abundance, we leave the impact of these factors on surge phenomena as future work, which could lead to a separate, valuable research endeavor. > Q1. For NLP related tasks, the number of tokens in a batch might vary. Would the surge phenomena still apply for learning rate vs. num tokens? A3. 
In our paper and previous analysis[4], "batch size" is actually "token batch size", which considers the total number of tokens in a batch. In our experiments, we use the packing strategy to splice the data in each batch, so the number of tokens in all batches can be considered equal. We will address the misuse of this terminology in the final version. Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions. --- [1] DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models, 2024, Dai et al. [2] Mixtral of Experts, 2024, Jiang et al. [3] DeepSeek LLM Scaling Open-Source Language Models with Longtermism, 2024, Bi et al. [4] An Empirical Model of Large-Batch Training, 2018, McCandlish et al. (reference [25] in our paper) --- Rebuttal Comment 1.1: Comment: Thank you for your response. I acknowledge that I have read the author's response. I will keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ytaJ, We sincerely appreciate your valuable comments and the time and effort you invested in reviewing our paper. We will ensure to incorporate these suggestions into the final manuscript. We believe that these changes have significantly improved our paper. Once again, we express our sincere gratitude for your valuable contribution to our work.
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for your reviews and constructive suggestions. We have incorporated additional experimental analyses to strengthen our conclusions. **Experimental Analyses** We performed experimental analyses on both sparse MoE and dense structures. For the sparse MoE structure, in Experiment-1, we experimented on a fine-grained MoE model with shared experts [1], which has a similar structure to Mistral-MoE [2], a representative of the sparse models. As for the dense models, in Experiment-2 and Experiment-3, we validated our proposed conclusions with a more fine-grained experiment on DistilGPT-2, as well as with an experiment from another paper [3], which shows results aligned with our theorems. Our specific experimental configurations and results follow. ---

**Experiment - 1**

We performed a grid search on a 500M parameter MoE model with 16 experts using the RedPajama-v2 dataset.

| Key | Value | Key | Value |
|---|---|---|---|
| `VOCAB_SIZE` | 32000 | `Adam Beta1` | 0.9 |
| `N_LAYERS` | 12 | `Adam Beta2` | 0.999 |
| `MAX_SEQ_LENGTH` | 4096 | `Weight decay` | 0.00001 |
| `D_MODEL` | 768 | `MAX_GRAD_NORM` | 1 |
| `N_HEADS` | 12 | `AUX_LOSS_ALPHA` | 0.1 |
| `HEAD_DIM` | 64 | `WSD warmup` | 2000 |
| `EXPERTS_TOTAL_DIM` | 12288 | `WSD decay ratio` | 0.1 |
| `TOP_K_ROUTER` | 4 | `Dataset` | RedPajama-v2 |
| `N_EXPERTS` | 16 | | |

In the experiments, we aimed to find the best learning rate for each token batch size (token bs, i.e., counting the number of tokens in a batch). For simplicity, we fitted the final training loss at different learning rates using a quadratic function of the form $f(x) = Ax^2+Bx+C$, which served as a good approximation. Subsequently, we utilized the extremum $-\frac{B}{2A}$ of the quadratic function to estimate the optimal learning rate. We have presented the fitted curve in the attached file Fig. 1. 
Our method aligns well with the observed data points. Other methods, including linear scaling and square root scaling, cannot align with the best learning rates. The results reaffirm our original conclusions: the optimal learning rate first increases and then decreases as batch size grows. This further validates our scaling law between learning rate and batch size for Adam. --- **Experiment - 2** We conducted a detailed grid search on DistilGPT-2 using the ELI5 category dataset. The results are plotted as Fig. 2 in the attached file. --- **Experiment - 3** Additionally, we note that a publicly available paper[3] provides grid search results that align with our theoretical predictions. We have plotted the fitted curves in Fig. 3 of the attached file. For better visualization, we removed the data points with large loss values in the experimental results. Notably, the $B_{crit}$ value is approximately 0.1M for (a) and 1.5M for (b), estimated using the Scaling Law $B_{crit} = \frac{B^*}{L^{\frac{1}{\alpha}}}$ , matching the peak position of the optimal learning rate observed in their experiments. This observed trend also conforms to the pattern depicted by the orange solid line in Fig. 1 of our paper, further validating our theoretical results. --- [1] DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models, 2024, Dai et al. [2] Mixtral of Experts, 2024, Jiang et al. [3] DeepSeek LLM Scaling Open-Source Language Models with Longtermism, 2024, Bi et al. Pdf: /pdf/959bb2cb0099ea241fc7b934947244b809fa6d66.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
An Equivalence Between Static and Dynamic Regret Minimization
Accept (poster)
Summary: This paper demonstrates that dynamic regret minimization can be reduced to static regret minimization by embedding the comparator sequence into a higher-dimensional space, allowing the use of static regret algorithms for dynamic settings. It establishes a trade-off between the penalties associated with loss variance and comparator sequence variability, proving that achieving regret bounds purely based on the squared path-length without additional penalties is infeasible. Additionally, the paper introduces a new measure of variability based on the locally-smoothed squared path-length, providing a more practical approach to managing dynamic regret. Strengths: - The paper establishes a novel equivalence between static and dynamic regret minimization. This theoretical contribution provides a unified framework for analyzing and designing algorithms for dynamic regret minimization, leveraging the well-established techniques for static regret. - By establishing the trade-off between penalties due to the variance of the losses and the variability of the comparator sequence, the paper provides deep insights into the inherent limitations and trade-offs in dynamic regret minimization. This understanding is crucial for developing more effective algorithms and sets a foundation for future research in the area. - The introduction of the locally-smoothed squared path-length reduces variance penalties, balances adaptability by smoothing out noise, and facilitates the decoupling of regret terms, leading to more practical and robust dynamic regret bounds. Weaknesses: - My major concern is that the main title could be somewhat overclaimed, as the proposed reduction is restricted to OLO (or reduces OCO to OLO). Though I admit that analysis in non-convex problems (with bandit feedback) could be extremely hard, dynamic regret and static regret are still discussed in these problems. So maybe ''DR is equal to SR in OCO'' would be more suitable.
- The approach involves embedding the comparator sequence into a higher-dimensional space, which could significantly increase computational complexity. Related concerns should be discussed, particularly in terms of implementing the proposed methods in large-scale or real-time applications. - The paper does not provide intuitive examples or case studies to illustrate how to obtain dynamic regret in practical problems. I believe a simple example on existing problems (for example, non-stationary OCO) would allow us to compare the dynamic regret obtained by this reduction with existing studies, making it easier to understand. - I'm not fully convinced by purely theoretical results. Maybe adding some simple evaluations to demonstrate that the dynamic regret is indeed the same as the static regret under the proposed reduction could provide empirical evidence. - The reasons for choosing this locally-smoothed square path-length are not clear. If I get it right, path-lengths sharing similar local smoothness would all fulfill the requirements. Meanwhile, this locally-smoothed square path-length is too detailed, such that it may lose empirical intuition or physical meaning to some extent. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. Does the high-dimensional embedding reduction also hold for adaptive regret (to static regret)? 2. Can we directly design a unified algorithm (best-of-both-worlds) without knowing the types of regret based on this reduction? Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: \ Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > My major concern is that the main title could be somewhat overclaimed, as > the proposed reduction is restricted to OLO We actually believe that our title is carefully worded to avoid overclaiming: we do indeed present *an* equivalence between static and dynamic regret --- it is one which holds in a particular context, and our abstract and introduction state very clearly the scope of this work. Note also that, as discussed in [this comment](https://openreview.net/forum?id=hD8Et4uZ1o&noteId=CJxEDImAbf), the equivalence can be applied more generally for OCO. That said, we do not think this is a big issue either way: we can add "for Linear Losses" to the title if it solves the major concern of the reviewer. > The approach involves embedding the comparator sequence into a > higher-dimensional space, which could significantly increase > computational complexity. Related concerns should be discussed, > particularly in terms of implementing the proposed methods in > large-scale or real-time applications. Computational complexity is indeed one of the main considerations when choosing the dual norm pair $(\\|\cdot\\|,\\|\cdot\\|_{*})$ (i.e., choosing the matrix $M$ in the context of Theorem 2), and we do discuss it on page 4, line 136. Moreover, the computational complexity of the algorithm achieving the smoothed square path-length bound (characterized in Proposition 3) is discussed on page 8. We can add some additional discussion regarding the related implementation concerns. Note that these concerns are shared by all other dynamic regret algorithms achieving a non-trivial dependence on the comparator variability, as our algorithm can be implemented with the *same* computational $O(d\log T)$ per-round complexity as prior works, as discussed on page 8. > Does the high-dimensional embedding reduction also hold for adaptive > regret (to static regret)?
No, adaptive regret involves making guarantees over all sub-intervals simultaneously, while the proposed reduction explicitly considers the total loss over the entire interval $[1,T]$. These are quite separate notions of performance and there is no known general equivalence between them. > Can we directly design a unified algorithm (best-of-both-worlds) > without knowing the types of regret based on this reduction? If by types of regret you are referring to static vs. dynamic regret, the answer is *yes*: in fact, our guarantee in Theorem 2 captures static regret as a special case. If by types of regret you are referring to adapting to various different measures of variability (e.g., Theorem 2 with different choices of $M$), *the answer is also yes*: see the last part of our [global response here](https://openreview.net/forum?id=hD8Et4uZ1o&noteId=2OutnXLWT0) for details. > The reasons for choosing this locally-smoothed square path-length are > not clear. If I get it right, path-lengths sharing similar > local smoothness would all fulfill the requirements. Meanwhile, this > locally-smoothed square path-length is too detailed, such that it may > lose empirical intuition or physical meaning to some extent. Due to space constraints, please see our [global response here](https://openreview.net/forum?id=hD8Et4uZ1o&noteId=2OutnXLWT0) for a detailed discussion of these concerns. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed explanations, all my concerns are properly addressed.
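As a concrete illustration of the reduction discussed in this thread, here is a toy numerical sanity check (it reflects our reading of the Proposition 1 lifting, and is not code accompanying the paper): for linear losses, the dynamic regret against a sequence $(u_1, \dots, u_T)$ coincides with a single inner product in the extended space $\mathbb{R}^{dT}$ once plays, gradients, and comparators are stacked.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 3
g = rng.normal(size=(T, d))   # linear-loss gradients g_t
w = rng.normal(size=(T, d))   # learner's plays w_t (produced by any strategy)
u = rng.normal(size=(T, d))   # time-varying comparator sequence u_t

# Dynamic regret: sum_t <g_t, w_t - u_t>.
dyn = sum(float(g[t] @ (w[t] - u[t])) for t in range(T))

# Static regret in the extended space: one inner product against the
# stacked comparator u~ = (u_1, ..., u_T) in R^{dT}.
lifted = float(g.reshape(-1) @ (w.reshape(-1) - u.reshape(-1)))

assert abs(dyn - lifted) < 1e-10
```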
Summary: The paper addresses the problem of minimizing dynamic regret. The main goal is to provide a unified perspective on the problems of minimizing both dynamic and static regret. The first key contribution, presented in Proposition 1, is an interesting and straightforward observation demonstrating a general reduction from dynamic to static regret, showing the equivalence between these two notions. In Theorem 1, the paper establishes a lower bound, indicating that squared path-length incurs a linear penalty, making adaptation to squared path-length impossible. Furthermore, using a parameter-free algorithm in Algorithm 2, the paper achieves an upper bound that matches the previously derived lower bound for any weighted norm $\|\cdot\|_{M}$. In Section 4.1, the authors discuss the suitable choice for $M$. By setting $M=(HH^{\top}) ^{−1}$, where $H$ is the unnormalized Haar basis matrix, they derive in Proposition 3 an upper bound for the regret that fully decouples comparators and gradient terms. Strengths: The message of the paper is clear, and the theoretical results are solid, consisting of a collection of small steps and observations that lead to important and surprising conclusions. The paper is extremely well-written and a pleasure to read. Weaknesses: I do not see any particular weaknesses in the paper. Minor point: There are typos in lines 164 to 167. $V$ should be $V_T$ and $\epsilon$ should be $\epsilon_T$. Technical Quality: 3 Clarity: 3 Questions for Authors: I would like to know which measures of variability the authors expect to obtain with different choices of $M$ that yield reasonable and significant regret bounds. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review; we are glad you enjoyed reading our paper! > I would like to know which measures of variability the authors expect > to obtain with different choices of $M$ that yield reasonable and > significant regret bounds. So far the literature mostly revolves around the (unsquared) path-length as the measure of variability. One of the exciting things about this work is that it opens the door for many unexplored directions in terms of the variability measure, and provides a convenient framework for studying new notions of variability and their trade-offs. In this work, we focused on achieving results related to squared path-length, but it may be possible to achieve results scaling with more general distance metrics, for instance by leveraging the notion of group norms. The variance-variability coupling guarantees in Section 5 are also a very open and almost entirely unexplored direction that we would like to investigate further. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: I would like to thank the authors for the rebuttal. For now, I will maintain my current score. I plan to discuss the paper with the other reviewers and look forward to the author's discussions with them as well. I will update my score accordingly.
Summary: This paper studied dynamic regret minimisation in convex optimisation. First, the authors proved an interesting equivalence between dynamic regret and static regret on an extended decision space. Based on this observation, they showed a lower bound implying that the hoped-for scaling of the optimal dynamic regret with the squared path-length is impossible. The authors then provided an algorithm whose regret scales with a new notion of path-length, which matches the novel lower bound. Strengths: The main contribution of this paper is demonstrating the equivalence between dynamic regret and static regret in an extended space, which is quite interesting. This allows the authors to present a new lower bound and prove a new upper bound for dynamic regret. Although I am not very familiar with the literature on dynamic regret, I believe the result is significant if it is correct, especially in comparison to previous results stated in this paper. Weaknesses: I found the writing of this paper quite difficult to follow, especially in the main sections (3 and 4). The comparison between the lower bound and upper bound could be better explained. My question: The lower bound is $\tilde \Omega(\max_M G\sqrt{||\tilde{u}||_M \mathrm{Tr}(M) })$, and the upper bound essentially is $\tilde O(\min_M G\sqrt{||\tilde{u}||_M \mathrm{Tr}(M) })$. However, without strong duality on regret, there still seems to be a gap between the lower bound and upper bound. Correct me if I misunderstood, but I cannot find the reason why the lower bound and upper bound are tight as claimed in the paper. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Authors did discuss the limitation and potential future works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > My question: The lower bound is > $\tilde\Omega (\max\_{M}G\\|\tilde u\\|\_{M^{-1}}\sqrt{\mathrm{Tr}(M)})$, > and the upper bound essentially is > $\tilde O(\min\_{M}G\\|\tilde u\\|\_{M^{-1}}\sqrt{\mathrm{Tr}(M)})$. There seems to be a misunderstanding of the quantification of $M$ in the lower and upper bounds. The lower bound holds for *any* $M$ satisfying the conditions, not just for the worst-case $M$. Similarly our upper bound holds for any valid choice of $M$ rather than just the best one. So the upper and lower bounds do indeed match. To clarify the ideas, it might be useful to compare it, for example, with online mirror descent with $p$-norms: there exist upper and lower bounds that both hold for any $p$, not just with the best/worst ones. --- Rebuttal Comment 1.1: Comment: Many thanks for your clarification.
Summary: This paper presents a reduction from the dynamic regret minimization for Online Convex Optimization (OCO) to a static regret minimization problem over an extended domain. Using this reduction, the authors establish a lower bound that highlights the trade-off between the variation of the comparator and the variance of the gradient norm. This general lower bound indicates that a squared path-length bound is impossible. Furthermore, the authors provide a general upper bound for dynamic regret based on techniques from parameter-free online learning. By specifying $M$, the paper demonstrates that a new type of squared dynamic regret bound is achievable. Strengths: - This paper provides a simple yet effective reduction from the dynamic regret minimization problem to the static regret minimization problem. Subsequently, the techniques developed for comparator-adaptive methods can be applied to the dynamic regret minimization problem. - The paper offers a new analysis of the lower bound for dynamic regret minimization, revealing the intrinsic trade-off between the variability of the comparator sequence and the gradient norm. - A new type of squared dynamic regret bound is introduced in this paper, based on this novel perspective. Weaknesses: - One of my main concerns about the paper is that the results appear to be somewhat overstated. The abstract claims that "we show that dynamic regret minimization is equivalent to static regret minimization in an extended decision space." This claim is somewhat misleading to me, as the proposed reduction only applies to the online linear optimization problem (or OCO with linearized surrogate loss). It is unclear if the proposed reduction holds for a broader range of dynamic regret minimization problems, such as those involving exp-concave or strongly convex loss functions. 
If not, it would be more appropriate to explicitly mention this limitation in the title and abstract and to provide a more detailed discussion on these limitations in the main paper. - The high-level idea of this paper is quite similar to that of Zhang et al., 2023. Both approaches convert the dynamic regret minimization problem into a static regret minimization problem in another domain, using comparator-adaptive online learning methods to minimize extended static regret. While the authors discuss this in the related work section, it might be beneficial to explicitly state that previous work has already somewhat demonstrated equivalence between the dynamic regret minimization and static regret minimization problems. - As mentioned earlier, a similar reduction was presented in Section 2.2 of Zhang et al., 2023. It would be helpful if the authors provided a more detailed comparison of the two reductions. In the related work section, the paper states, "We take a similar but slightly more general approach." A more detailed explanation of how Proposition 1 generalizes previous work would be beneficial. Specifically, it would be useful to clarify whether the proposition can be derived by selecting a specific dictionary in Zhang et al., 2023. A more detailed discussion in Section 2 of this paper would be advantageous. Overall, I find the paper interesting, but I am concerned about the overstated contributions and the insufficient comparison with previous work. I would be happy to raise my score if these issues are properly addressed. Reference: Zhiyu Zhang, Ashok Cutkosky, and Ioannis Ch. Paschalidis. Unconstrained dynamic regret via sparse coding, 2023. ===post-rebuttal=== My primary concern was the significance of the proposed framework compared with Zhang et al. (2023) and the overstatement regarding the generality of the framework. 
After several discussions with the authors, I am convinced that the paper offers a more flexible and general framework for handling the dynamic regret minimization problem beyond the work of Zhang et al. (2023). To reflect this, I have updated my score to 6. Nevertheless, I still encourage the authors to clarify the limitations and provide a more detailed discussion on the contributions of the proposed framework in the revision. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weakness part. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: This paper is theoretical in nature, and I do not identify any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The abstract claims that \"we show that dynamic regret minimization is > equivalent to static regret minimization in an extended decision > space.\" This claim is somewhat misleading to me, as the proposed > reduction only applies to the online linear optimization problem (or > OCO with linearized surrogate loss). It is unclear if the proposed > reduction holds for a broader range of dynamic regret minimization > problems, such as those involving exp-concave or strongly convex loss > functions. The proposed reduction can be applied to any convex losses, including strongly convex and exp-concave losses, by bounding $\sum_{t=1}^{T}\ell_{t}(w_{t})-\ell_{t}(u_{t})\le \sum_{t=1}^{T}\langle g_{t},w_{t}-u_{t}\rangle= R_{T}^{\text{Seq}}(\tilde u)$ via convexity. We can also be more precise, obtaining an **equality for convex losses**, by observing that $$\begin{aligned} \sum_{t=1}^{T}\ell_{t}(w_{t})-\ell_{t}(u_{t})=\sum_{t=1}^{T} \langle g_{t},w_{t}-u_{t}\rangle-D_{\ell_{t}}(u_{t}\|w_{t})\end{aligned}$$ by definition of Bregman divergences. Since this is an equality, any possible improvements that one might hope to leverage in settings with curvature are reflected in the terms $-D_{\ell_{t}}(u_{t}\|w_{t})$, while our reduction can be applied to the first term, $\sum_{t=1}^{T}\langle{g_{t},w_{t}-u_{t}}\rangle=R_{T}^{\text{Seq}}(\tilde u)$. As is usually the case, one should design their algorithm in such a way that these curvature terms $-D_{\ell_{t}}(u_{t}\|w_{t})$ are properly leveraged in the end. For instance, for strongly-convex losses in the static regret setting, Algorithm 6 of Cutkosky & Orabona (2018) provides a reduction for OLO which makes a guarantee that implies logarithmic regret when the curvature terms satisfy $-D_{\ell_{t}}(u\|w_{t})\le -\frac{\alpha}{2}\\|u-w_{t}\\|^{2}$ (that is, when the losses are $\alpha$-strongly convex). 
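As a side note, the per-round equality above is purely algebraic and holds for any differentiable loss; a quick numerical check with our own toy example $\ell(x) = \|x\|^2$:

```python
import numpy as np

def loss(x):            # l(x) = ||x||^2, a simple differentiable convex loss
    return float(x @ x)

def grad(x):            # its gradient: 2x
    return 2 * x

def bregman(u, w):      # D_l(u || w) = l(u) - l(w) - <grad l(w), u - w>
    return loss(u) - loss(w) - float(grad(w) @ (u - w))

rng = np.random.default_rng(1)
w, u = rng.normal(size=4), rng.normal(size=4)

# l(w) - l(u) = <g, w - u> - D_l(u || w), with g = grad l(w).
lhs = loss(w) - loss(u)
rhs = float(grad(w) @ (w - u)) - bregman(u, w)
assert abs(lhs - rhs) < 1e-10
```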
A similar reduction could potentially be applied here and would be an interesting direction for future investigation. > A similar reduction was presented in Section 2.2 of Zhang et al., > 2023. It would be helpful if the authors provided a more detailed > comparison of the two reductions. In the related work section, the > paper states, "We take a similar but slightly more general > approach." A more detailed explanation of how Proposition 1 > generalizes previous work would be beneficial. Specifically, it would > be useful to clarify whether the proposition can be derived by > selecting a specific dictionary in Zhang et al., 2023. A more detailed > discussion in Section 2 of this paper would be advantageous. > > [...] it might be beneficial to explicitly state that previous work > has already somewhat demonstrated equivalence between the dynamic > regret minimization and static regret minimization problems. The key difference is that we provide a general equivalence between static and dynamic regret that *any strategy* can be plugged into, while Zhang et al. (2023) provide a reduction which prescribes a particular strategy which allows the user to leverage static regret minimizers for dynamic regret. That is, our Proposition 1 holds regardless of what strategy you end up choosing to set $w_{t}$, and simply shows that writing the dynamic regret in a different way leads to static regret in a different decision space. Because there is no restriction on how $w_{t}$ is set, our proposition actually captures the approach of Zhang et al. (2023) as a special case. Note that this also demonstrates that there is no choice of dictionary in the framework of Zhang et al. (2023) from which Proposition 1 can be derived --- Proposition 1 is strictly more general. In contrast, the approach proposed by Zhang et al.
(2023) requires that the designer commit to choosing $w_{t}=\mathcal{H}\_{t}x\_{t}$ for some matrix $\mathcal{H}\_{t}$ and where the coordinates of $x_{t}$ are separate 1-dimensional learning algorithms. Because their approach prescribes a strategy that the user must design around, their approach *does not* imply an equivalence between dynamic and static regret, but rather provides a reduction that allows one to leverage a particular class of static regret algorithms to design dynamic regret algorithms. This potentially limits the types of guarantees that can be achieved using the approach; for instance, the algorithm achieving the smoothed path-length result in Section 4.1 can not be represented in their framework, since it does not use separate coordinate-wise 1-dimensional updates. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. However, I remain concerned about the generality of the proposed reduction as claimed by the authors. Regarding the first point, it is still unclear to me how the proposed reduction can be applied to minimize the dynamic regret bound for exp-concave or strongly convex functions. While one could indeed introduce an additional Bregman divergence term, the subsequent analysis required to achieve improved dynamic regret for exp-concave functions (e.g. the rate in [1]) is far from straightforward to me. In this regard, the proposed reduction does not necessarily simplify the analysis for minimizing dynamic regret but rather shifts it to an analysis focused on the comparator-adaptive static regret bound, a perspective first introduced by Zhang et al., 2023. As for the comparison with Zhang et al., 2023, I appreciate the authors' feedback on the generality of the proposed reduction. However, the significance of this generality remains unclear. Both in the main paper and the feedback, the analysis of the squared path-length bound is presented as a prominent example demonstrating the advantage of the proposed framework. 
As other reviewers have pointed out, the advantage and significance of the proposed squared path-length are still not entirely evident. [1] Baby and Wang. Optimal Dynamic Regret in Exp-Concave Online Learning. COLT 2021. --- Rebuttal 2: Comment: > it is still unclear to me how the proposed reduction can be applied to minimize the dynamic regret bound for exp-concave or strongly convex functions. We are genuinely puzzled by this comment and we kindly invite the reviewer to reconsider it. We showed above that our equivalence *can* be extended to other losses as well *just because the reviewer wondered ``if the proposed reduction holds for a broader range of dynamic regret minimization problems''*, but this was never a claim in our paper! It seems very unfair to be penalized for something that is completely orthogonal to the contribution of our paper. It should be very clear that we focus on linear losses and linearized convex losses, but we are happy to emphasize it more in the text, to avoid any misunderstanding. That said, focusing on linear losses is actually very common in this literature, even in the recent paper by Zhang et al. (2023), which also applies to linear losses and has no straightforward way to achieve the optimal bound for exp-concave losses. > Both in the main paper and the feedback, the analysis of the squared path-length bound is presented as a prominent example demonstrating the advantage of the proposed framework. We respectfully but firmly disagree with this mischaracterization of our results: as we just agreed with Reviewer YDMs, the averaged squared path length is just an example, not the main contribution. Instead, as we wrote above, our main contribution is about *a framework that allows to reason on the trade-offs of measures of dynamic regret, both in terms of upper and lower bounds.* We also agreed with Reviewer YDMs that our text did not clearly convey this message: We apologize for it and we will improve it. 
Clearly, none of these trade-offs are present in the beautiful work of Zhang et al. (2023), that instead focuses on orthogonal aspects. This is evident from the fact that their framework does not enable one to prove lower bounds. --- Rebuttal Comment 2.1: Comment: I agree that applying the work to exp-concave losses and strongly convex functions should not be a critical point, but it does reveal that the paper shares the same limitations as Zhang et al. (2023), rather than being a truly "general framework." Some aspects of the presentation are confusing to me. For instance, the abstract claims, "we show that dynamic regret minimization is equivalent to static regret minimization in an extended decision space." without any statement on the condition. Similarly, the response states, "The key difference is that we provide a *general equivalence* between static and dynamic regret that *any strategy* can be plugged into." (I believe it is still necessary to use comparator-adaptive algorithms to achieve meaningful dynamic regret.) Furthermore, line 30 asserts, "Note that OCO problems can *always* be reduced to OLO via the well-known inequality," and lines 121-123 claim, "Proposition 1 is a regret equivalence—we *lose nothing* by taking this perspective, yet it allows us to immediately apply *all the usual techniques* and approaches from the static regret setting." In my view, a more concise contribution would be to present a general framework to reduce minimizing dynamic regret to obtain comparator-adaptive static regret bound for OLO case. I strongly recommend that the authors clarify this point, particularly in the abstract, introduction, and reduction sections. --- Reply to Comment 2.1.1: Comment: First of all, we thank the reviewer for reconsidering their criticism about exp-concave losses and strongly convex functions, we appreciate the intellectual honesty. > the paper shares the same limitations as Zhang et al. 
(2023), rather than being a truly "general framework." We might be nitpicking on the definition of ``general framework''. Does our framework recover any optimal rate for any kind of convex loss used in the dynamic regret literature? Clearly not, but it was never our aim. However, we think we can agree that we are strictly more general than Zhang et al. (2023), for the reasons we listed above, and this generality is non-trivial (for instance, it allows us to prove lower bounds that the framework of Zhang et al. (2023) could not). Also, our paper and Zhang et al. (2023) focus on different aspects of the dynamic regret problem, so we do not really see why these two approaches ought to be considered as being in conflict; they both make unique and novel contributions to the literature. > Some aspects of the presentation are confusing to me. While in our eyes our scope and limitations were clear, it is possible we overlooked this aspect, and we appreciate the feedback to make the message of our paper sharper. As already discussed with Reviewer YDMs and bfmX, we are very happy to change the text to emphasize that we focus on linear and linearized losses in all the relevant parts of the paper, and clearly specify that we do not explicitly leverage the curvature of the losses, as well as specify that our main contribution is about a way to analyze trade-offs in measures of dynamic regret. Future work might explore the aspect of curvature, possibly following the approach we sketched above. > I believe it is still necessary to use comparator-adaptive algorithms to achieve meaningful dynamic regret. This is not necessarily the case; the use of comparator-adaptive algorithms is just a convenient way to achieve adaptivity to an unknown comparator norm (and thus to the variability of the comparator sequence via our reduction), but it is not necessarily the only way to achieve meaningful adaptivity to the comparator sequence.
For instance, the MetaGrad algorithm of van Erven & Koolen (2016) achieves a guarantee which adapts to an unknown comparator in the static bounded domain setting, and it is even stronger than the usual comparator-adaptive algorithms which are leveraged by this work and by Zhang et al. (2023). It may be possible to apply such an algorithm in our framework to get new results which would not otherwise be possible using the framework of Zhang et al. (2023). Of course, this would entail additional computational considerations, but solving all these details is beyond the scope of this rebuttal and would be an interesting direction for future work. --- Rebuttal 3: Comment: I think there may be some misunderstanding from the authors regarding my previous response. My intention was never to criticise the paper for not achieving improved dynamic regret for exp-concave functions. Rather, it was taken as an example to illustrate that some parts of the paper are overstated, as evidenced by the claim I referenced earlier. Nevertheless, I appreciate the authors' efforts in making the revisions. > The use of comparator-adaptive algorithms is just a convenient way to achieve adaptivity to an unknown comparator norm I understand that one can use other methods to achieve adaptivity to the comparator sequence, but it is clear that not just any strategy can be applied. Achieving meaningful dynamic results still requires specific techniques to minimize the converted static regret. While the range of algorithms that can be used might extend beyond those in Zhang et al., 2023, the earlier statement, "The key difference is that we provide a general equivalence between static and dynamic regret that any strategy can be plugged into," along with the claim, "yet it allows us to immediately apply all the usual techniques and approaches from the static regret setting," is overstated to me.
--- Rebuttal Comment 3.1: Comment: Thank you again for taking the time to honestly engage in the discussion with us. We understand your perspective and will incorporate all the insights gained from our discussion here to improve the clarity of the main text.

> I understand that one can use other methods to achieve adaptivity to the comparator sequence, but it is clear that not just any strategy can be applied. Achieving meaningful dynamic results still requires specific techniques to minimize the converted static regret.

As a brief final note, we would like to point out that the two referenced statements do not contradict each other. While it is true that only certain choices will lead to meaningful guarantees, this is true of *any* general framework and is not unique to ours. For instance, we hope we can agree that FTRL and mirror descent can be considered ``general frameworks'' for static regret minimization, yet not all sequences of convex regularizers lead to meaningful guarantees. This does not make these frameworks less valuable, and indeed many of the interesting questions in online learning revolve around *which choices* of regularizer lead to meaningful guarantees under which assumptions. This is analogous to the choice of strategy to be applied in our framework.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and for taking the time to carefully read our paper. Below, we address some of their common questions. A common point of contention in the reviews was the significance of the average squared path-length dependence guarantee achieved in Proposition 3. We provide additional discussion about these concerns below, but we would first like to stress that **the power of our approach lies in its generality**. Indeed, our lower bound shows more generally that *there exists a fundamental trade-off between the comparator variability penalty and the gradient variance penalty*, so there is no one measure of variability that is strictly better across all problem instances. This is similar to the trade-off that exists in terms of dual norms in online mirror descent. We are not aware of any other paper on dynamic regret pointing out these fundamental trade-offs, and this result is due to our simple equivalence with static regret. Our general result in *Theorem 2 provides a framework for achieving any of these trade-offs*, simply by changing the choice of weighting matrix $M$. In Proposition 3, we provided as an example one instance of the more general result because we believe it is a reasonable compromise to the squared path-length, but we stress that our result is much more general than this one example. Achieving more favorable trade-offs via cleverly crafted weighting matrices $M$ is an important direction for future work. Moreover, as we show below, given multiple different choices of $M$ we can always easily ensure the best of their guarantees.

**Motivation of averaged squared path-length/intuition.** It is important to understand that we proved that it is impossible to adapt to the squared path-length $P_{T}^{\\|\cdot\\|^{2}}$ without making the regret guarantee vacuous. More fundamentally, the lower bound says that this is the wrong quantity to consider, so we must look for some achievable compromise.
The average squared path-length is one such reasonable compromise for the following reasons:

1\) It is never worse (up to polylog factors) than the usual path-length $P_{T}^{\\|\cdot\\|}$;

2\) The algorithm which achieves adaptivity to the averaged squared path-length can be implemented in the same $O(d\log T)$ per-round complexity enjoyed by prior works;

3\) It shares a similar physical meaning to the squared path-length.

It should be clear why points 1 and 2 are important, so let us now focus on point 3. The averaged squared path length is nothing other than the sum of the squared path length at different time scales, see line 232. In this view, the averaged squared path length includes the squared path length, but it inflates it just enough to avoid the vacuous result given by our lower bound (i.e., Proposition 2). Now, one might wonder by how much this is inflated, and it turns out that in the worst case this is only a $\log T$ factor more than the squared path length at the worst scale, see the first inequality in line 239. So, ignoring the log factor, we are essentially substituting the squared path length at time scale 1 with the maximum over $\tau$ of the squared path length at time scale $\tau$. Thus, the original squared path length measures the drift only at high frequency, while the averaged path length measures the drift at all frequencies, giving a more holistic measure of drift. Moreover, this measure can be very close (or even equal!) to the squared path-length in some instances. Indeed, consider a comparator sequence which alternates between $a$ and $b$: on each of the timescales $\tau=2^{i}$, the averages on each of the intervals of length $\tau$ will be $\bar u_{i}=(a+b)/2$. Hence the average squared path-length at timescale $\tau$ is $0$ for any $\tau$ except for $\tau=1$, so the two measures are *exactly* equivalent here.
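The alternating-comparator example can be reproduced in a few lines. This is an illustrative sketch: the `avg_sq_path_length` helper below is our rendering of the timescale-averaged quantity, and the paper's exact definition on line 232 may differ in normalization.

```python
def avg_sq_path_length(u, scales):
    """Sum over timescales of the squared path length of the
    block-averaged comparator sequence (illustrative sketch)."""
    total = 0.0
    for tau in scales:
        # average the comparator over consecutive blocks of length tau
        blocks = [sum(u[i:i + tau]) / tau for i in range(0, len(u), tau)]
        total += sum((blocks[t] - blocks[t - 1]) ** 2
                     for t in range(1, len(blocks)))
    return total

# comparator alternating between a=1 and b=0: every block of even
# length averages to 1/2, so only the tau=1 scale contributes
u = [1.0, 0.0] * 8
sq_path_length = sum((u[t] - u[t - 1]) ** 2 for t in range(1, len(u)))
assert abs(avg_sq_path_length(u, [1, 2, 4, 8]) - sq_path_length) < 1e-9
```

On this instance the two measures coincide exactly, matching the discussion above.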
However, it is impossible to compare favorably with the squared path-length across *all* possible problem instances due to the fundamental trade-off discussed above. Nevertheless, we show below that one can easily combine the guarantees achieved by different choices of $M$. Finally, we would also like to point out that Proposition 3 is the *first squared path-length bound of any kind* that holds for arbitrary comparator sequences, so we believe the result is of interest to the community despite the fairly reasonable compromises discussed above.

**Best-of-Both-Worlds Guarantees.** As discussed above, our lower bound demonstrates a trade-off such that no $M$ can lead to uniformly better guarantees across all problem instances. Luckily, given any two instances of the algorithm in Theorem 2, call them $A$ and $B$, with different choices of the matrix $M$, we can always achieve the better of their two guarantees using the standard iterate-adding argument of Cutkosky (2018). The idea is simple: observe that for any choice of $M$, Theorem 2 provides an algorithm guaranteeing $R_{T}(0,\ldots,0)\le O(\epsilon)$. Hence, letting $w_{t}^{A}$ and $w_{t}^{B}$ denote the outputs of algorithms $A$ and $B$ on round $t$, suppose we play $w_{t}=w_{t}^{A}+w_{t}^{B}$. Then our dynamic regret is $$\begin{aligned} R_{T}(\vec{u})= \sum_{t=1}^{T}\langle g_{t},w_{t}^{A}+w_{t}^{B}-u_{t}\rangle = R_{T}^{A}(\vec{u})+R_{T}^{B}(0,\ldots,0)\le O(R_{T}^{A}(\vec{u})+\epsilon),\end{aligned}$$ and similarly, we have $R_{T}(\vec{u})\le O(\epsilon + R_{T}^{B}(\vec{u}))$. Since both of these hold, overall we have $R_{T}(\vec{u})\le O(\epsilon + \min\{R_{T}^{A}(\vec{u}),R_{T}^{B}(\vec{u})\})$. The argument immediately extends to combining the guarantees of $N$ algorithms at the cost of only an $O(\epsilon N)$ penalty.
Moreover, the parameter $\epsilon$ in Theorem 2 appears only in $O(\epsilon)$ and $\log(1/\epsilon)$ terms, so we can safely scale it by $1/N$ so that combining $N$ algorithms incurs only an additional $\log N$ penalty.
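The iterate-adding identity above is pure algebra and can be checked numerically. In this sketch, arbitrary one-dimensional sequences stand in for the gradients and for the outputs of algorithms $A$ and $B$; the identity holds regardless of how those iterates were produced.

```python
import random

random.seed(0)
T = 50
g  = [random.uniform(-1, 1) for _ in range(T)]   # linear loss gradients
wA = [random.uniform(-1, 1) for _ in range(T)]   # iterates of algorithm A
wB = [random.uniform(-1, 1) for _ in range(T)]   # iterates of algorithm B
u  = [random.uniform(-1, 1) for _ in range(T)]   # arbitrary comparator sequence

def regret(w, comp):
    """Dynamic regret of the iterates w against the comparator sequence comp."""
    return sum(gt * (wt - ut) for gt, wt, ut in zip(g, w, comp))

# playing the sum of the two algorithms' iterates splits the regret exactly:
# R_T(u) for the combined player = R_T^A(u) + R_T^B(0, ..., 0)
combined = [a + b for a, b in zip(wA, wB)]
assert abs(regret(combined, u) - (regret(wA, u) + regret(wB, [0.0] * T))) < 1e-9
```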
NeurIPS_2024_submissions_huggingface
2024
Summary: The main contribution of the paper is showing that, by `lifting' an online linear optimization (OLO) problem to a higher dimensional space, the dynamic regret in the original setting is equal to the $\textit{static}$ regret achieved in the higher dimensional setting. While simple, this clean reduction allows easier and more elegant reasoning about how to develop and analyze dynamic regret algorithms since static regret algorithms and analyses are so well studied and understood. They then employ this reduction to prove the existence of a trade-off between comparator variability penalty in terms of a $M$ matrix norm and $G^2 Tr(M^{-1})$, where $G$ is the Lipschitz parameter of the losses. By choosing $M$ in such a way that the comparator variability term becomes the squared path-length, they are able to show that adapting a dynamic regret rate to the squared path-length implies a supra-linear (i.e., vacuous) guarantee. They complement the above general trade-off by providing an algorithm that is able to match the lower bounds along the frontier implied by different specifications of $M$. They instantiate this with a particular matrix setting and attain a corresponding positive result which says it is possible to instead adapt to a modified definition of squared path-length (more specifically, a locally smoothed counterpart). Strengths: While simple, their reduction is quite elegant and insightful. This conceptual contribution is also nicely complemented by the authors employing it to achieve a new lower-bound (that can be matched via existing methods). The presentation is clear and intuitive. Weaknesses: Some weaknesses of the paper are that the focus on matrix norms in Section 3 may be a bit limited in scope and that the statement of Theorem 1 is quite convoluted. 
More broadly, having additional implications of the equivalence would be desirable (e.g., having a more thorough and complete version of Section 5 and/or a more comprehensive list of observations in a similar vein). Technical Quality: 3 Clarity: 3 Questions for Authors: - Do you believe there are any implications of your reduction that are particularly useful from a practical/deployment perspective, or do you see the contribution as being a strictly conceptual one? - Do you have any additional intuition for the locally-smoothed path-length quantity? For example, how smoothed would one generally expect this quantity to be? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
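For concreteness, the lifting summarized above can be checked numerically. The block-sparse construction below is one natural reading of the reduction (not necessarily the paper's exact one): the round-$t$ lifted loss is supported on the $t$-th block of $\mathbb{R}^{dT}$, and the dynamic comparator becomes the single fixed vector $(u_1,\ldots,u_T)$.

```python
import random

random.seed(1)
d, T = 3, 20
g = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(T)]  # gradients
w = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(T)]  # iterates
u = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(T)]  # comparators

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# dynamic regret in the original d-dimensional problem
dynamic = sum(dot(g[t], [wi - ui for wi, ui in zip(w[t], u[t])]) for t in range(T))

def lift(x, t):
    """Embed x into the t-th d-dimensional block of R^{dT}."""
    return [0.0] * (d * t) + x + [0.0] * (d * (T - t - 1))

# static regret in the lifted space against the fixed comparator (u_1, ..., u_T)
u_tilde = [xi for ut in u for xi in ut]
static = sum(dot(lift(g[t], t), [wi - ui for wi, ui in zip(lift(w[t], t), u_tilde)])
             for t in range(T))
assert abs(dynamic - static) < 1e-9  # dynamic regret == lifted static regret
```

The equality is exact because the lifted loss is zero outside block $t$, so only the round-$t$ components of the iterate and comparator contribute.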
Rebuttal 1: Rebuttal:

> Do you believe there are any implications of your reduction that are particularly useful from a practical/deployment perspective, or do you see the contribution as being a strictly conceptual one?

We believe that the most important implication is that the reduction allows us to use any present and, more importantly, future result from static online learning to solve dynamic regret problems. For example, the algorithm that we show in Section 4.1 has the same computational complexity as previous ones (such as Zhang et al. (2018), Jacobsen & Cutkosky (2022), and Zhang et al. (2023), to name a few), so it has the same applicability as these works. Practically speaking, in many real-world problems the "solution" to the problem (represented, e.g., by the ideal model parameters) can change over time, due to unpredictable changes in the environment or underlying data-generating process. These changes to the solution are captured by the time-varying comparator in dynamic regret. A major benefit of these comparator-adaptive dynamic regret guarantees is that the algorithm *automatically adapts* to this changing comparator sequence, without requiring *any* prior knowledge whatsoever about this sequence or how it is changing. We believe this is an exceptionally powerful property which is indeed useful from a practical/deployment perspective.

> Do you have any additional intuition for the locally-smoothed path-length quantity? For example, how smoothed would one generally expect this quantity to be?

Due to space constraints, please see our [global response here](https://openreview.net/forum?id=hD8Et4uZ1o&noteId=2OutnXLWT0) for a detailed discussion of these concerns. --- Rebuttal Comment 1.1: Title: Acknowledgement of response Comment: Thank you for clarifying the practical implications! I maintain my positive score (7).
Summary: This paper investigates the problem of dynamic regret minimization in unbounded domains. The authors propose a novel lossless reduction from dynamic regret minimization to static regret minimization by treating the changing comparators in dynamic regret as a fixed one in another decision space with higher dimensions. The key in the reduced static regret minimization is to choose an appropriate matrix norm which is used to transform the static regret bound in lifted domains into a meaningful dynamic regret in the original decision space. Denoting by $M$ the norm-related matrix, this work proposes a novel lower bound regarding the variability penalty $\\|\tilde{u}\\|_M$ and a variance penalty $G^2 \text{Tr} (M^{-1})$. Consequently, the authors proved that a dynamic regret depending on the squared path length is impossible. By contrast, by choosing a specific matrix norm defined by Haar basis matrices, it suffices to achieve a dynamic regret bound with the squared path length defined on some averaged comparators. Strengths: The dynamic-to-static regret reduction is pretty impressive. Although it is a little conceptual and less practical, it provides us with a novel way for dynamic regret minimization. The lower bound in Theorem 1 is also meaningful by depicting the influences of the variability penalty $\\|\tilde{u}\\|_M$ and the variance penalty $G^2 \text{Tr} (M^{-1})$, providing guidance for consequent algorithm design. Weaknesses: I have some major questions about the results regarding the squared path length and I hope that the authors can answer the questions to resolve my concerns. 1. I believe that the lower bound in Theorem 1 is correct, which lower-bounds the dynamic regret by $\\|\tilde{u}\\|_M$ and $G^2 \text{Tr} (M^{-1})$. However, I am doubtful about Proposition 2 when translating this lower bound to a dynamic regret measured by the squared path length.
Proposition 2 only shows that choosing a specific $M$ does not suffice to achieve a valid squared-path-length dynamic regret. However, it does not rule out the possibility of other choices of $M$. Or, is there some one-to-one mapping between the choice of $M$ and the squared path length? In my point of view, discussions of this issue are missing and I hope that the authors could revise the paper accordingly in the revised version. 2. As for upper bounds, the obtained bound (in Section 4) is less favorable than I originally expected. The squared path length is defined on some averaged comparators, which are averaged on the intervals induced by the Haar OLR algorithm. The relationship between the current averaged squared length and the desired squared length is not discussed. At the top of Page 8, the authors show that their result can recover the previous dynamic regret defined with $P_T$ (non-squared path length). However, on Page 2, the authors stated that $P_T$ based dynamic regret is not favorable enough with an overly pessimistic diameter quantity. As a result, implying a $P_T$ based dynamic regret is not convincing enough, at least for me, to validate the significance of the obtained result. 3. The obtained regret also suffers from additional $\log T$ terms, which is not favorable enough in the full information setup. Do previous works with similar setups also suffer from the same issue? 4. The dynamic-to-static reduction is novel. However, when designing algorithms, the computational complexity is not discussed at all, which is pretty important because the dimension is related to the time horizon $T$ in the reduced problem. I suggest that the authors could add discussions about computational complexity in the revised version. Overall, my main concern is that despite the novelty of the dynamic-to-static reduction, to what extent this reduction is significant is not fully discussed. I believe that this reduction is of interest to the community.
However, the currently obtained results do not suffice to validate its significance. Technical Quality: 3 Clarity: 3 Questions for Authors: A minor typo maybe: In Line 24, a missing inverse operator in the matrix $S$? (i.e., $[H_n H_n^\top]^{-1}$ but not $H_n H_n^\top$?) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations and, if applicable, potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> The authors stated that $P_T$ based dynamic regret is not favorable enough with an overly pessimistic diameter quantity. The relationship between the current averaged squared length and the desired squared length is not discussed.

Given that the smoothed version of the squared path-length is a new notion of variability, it is natural to show that it is always at least as good as the standard notion of variability. However, it should be clear this is only a sanity check, and this notion can often be much better since it does not incur the pessimistic diameter penalty. Please see [our global response](https://openreview.net/forum?id=hD8Et4uZ1o&noteId=2OutnXLWT0) for a detailed discussion.

> The obtained regret also suffers from additional terms, which is not favorable enough in the full information setup. Do previous works with similar setups also suffer from the same issue?

All prior works which achieve dynamic regret guarantees in unbounded domains do indeed incur additional logarithmic penalties, even in the full-information setting. See Jacobsen & Cutkosky (2022, 2023) and Zhang et al. (2023). Similar to the static regret setting, such logarithmic penalties appear to be the price paid for achieving adaptivity to an arbitrary comparator sequence. Such penalties are known to be **necessary** in online learning with unbounded domains; see the lower bound in Orabona (2013), recently proved in the stochastic setting as well by Carmon & Hinder (2024). Works which assume a bounded domain, such as Zhang et al. (2018), can avoid this additional term because they are leveraging prior knowledge about the comparator sequence, i.e., that $\\|u_{t}-w_{1}\\|\le D$ for all $t$. In this work, we focus on achieving novel guarantees in the unbounded setting, though our reduction in Proposition 1 is applicable in the bounded setting as well.
> the computational complexity of the reduction is not discussed

The computational complexity is related to the choice of $M$, and is one of the main considerations when choosing $M$. This is discussed on page 4, lines 136-138. In fact, one of the key properties of the $M$ we choose in Proposition 3 is that it supports sparse updates, requiring $O(d\log T)$ per-round computation to update the algorithm, matching the computational complexity enjoyed by prior works in this setting. This is discussed on page 8, starting on line 243.

> Proposition 2 only shows that choosing a specific $M$ does not suffice

This is a very good point, thanks for raising it. The choice of $M$ uniquely exposes the squared path length up to a constant offset term, but it is easy to see that the offset term does not make any difference for our claim. To see why, consider the 1-dimensional setting and note that for any positive definite $M$ there is a unique $\Sigma$ such that $M=\Sigma^{\top}\Sigma$. Hence, without loss of generality we can focus on $\Sigma$ satisfying $$ \\|\tilde u\\|\_{M}^{2}=\langle\tilde u,M\tilde u\rangle=\langle \Sigma\tilde u, \Sigma\tilde u\rangle = \langle v,\tilde u\rangle^{2} + \sum_{t=2}^{T}\\|u_{t}-u_{t-1}\\|^{2}, $$ for some $v\in\mathbb{R}^{T}$ and such that $M=\Sigma^{\top}\Sigma$ is positive definite. Note that the constant offset term is unavoidable: it is what captures the static regret guarantee in the case where $u_{1}=\ldots=u_{T}=u$. Proposition 2 considers $v=(0,\ldots,0,1)$ to get $\\|\tilde u\\|^{2}\_{M}=\\|u_{T}\\|^{2}+\sum\_{t=2}^{T}\\|u\_{t}-u\_{t-1}\\|^{2}$, though below we will show that any vector $v$ would still lead to $Tr(M^{-1})=\Omega(T^{2})$.
It is clear that the only way to construct expressions of the form above is via matrices $\Sigma$ satisfying $$\Sigma\tilde u=c\begin{pmatrix}u\_{1}-u\_{2}\\\\ u\_{2}-u\_{3}\\\\\vdots\\\\u\_{T-1}-u\_{T}\\\\ \langle v,\tilde u\rangle\end{pmatrix},$$ where $c\in\{-1,1\}$ and the order of the rows indices of the vector can be permuted without loss of generality. In particular, the only matrices that can produce these expressions (again noting that the rows can be permuted without loss of generality) are of the form $$ \Sigma = c\begin{pmatrix}1&-1&0&0&\dots&0&0\\\\ 0&1&-1&0&\dots&0&0\\\\ 0&0&1&-1&\dots&0&0\\\\ \vdots& & &\ddots&&&\\\\ 0&0&0&0&\dots&1&-1\\\\ \hline\\\\ v\_{1}&v\_{2}&v\_{3}&v\_{4}&\ldots&v\_{T-1}&v\_{T} \end{pmatrix} =: c\begin{pmatrix} \Delta\\\\ \hline\\\\ v^{\top} \end{pmatrix}, $$ so $M=\Delta^\top\Delta + v v^\top$. Adding a rank-1 update to $\Delta^{\top}\Delta$ will increase its eigenvalues $\lambda_{i}=\lambda_i(\Delta^{\top}\Delta)$ by some $a_{i}\ge 0$, such that $\sum_{i}a_{i}=\\|v\\|^2$. Moreover, there is a unique zero eigenvalue of $\Delta^{\top}\Delta$, corresponding to the eigenvector $(1,\ldots,1)/T$. Therefore, $Tr(M^{-1})=1/a_{1}+\sum_{i>1}\frac{1}{\lambda_{i}+a_{i}}$. This implies that the worst-case choice of $a_{1},\dots,a_{T}$ is given by a convex constrained optimization problem which yields $a_{i}=0$ for $i>1$ and $a_{1}=\\|v\\|^2$ via the KKT conditions. Hence, the worst-case choice of $v$ is the one which increases only the eigenvalue corresponding to the uniform eigenvector, so the worst case choice of $v$ is proportional to the uniform eigenvector and $Tr(M^{-1})\ge \Omega(\frac{1}{\\|v\\|^2}+\sum_{i>1}\frac{1}{\lambda_{i}})\ge \Omega(\sum_{i>1}\frac{1}{\lambda_{i}(M_{0})})$, where $M_{0}$ is the matrix from Proposition 2, obtained by setting $v=(0,\dots,0,1)$. 
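As a quick numerical sanity check of the $\Omega(T^{2})$ claim (a sketch, not part of the formal proof): $\Delta^{\top}\Delta$ is the Laplacian of a path graph on $T$ nodes, whose eigenvalues have the closed form $\lambda_{k}=4\sin^{2}(\pi k/(2T))$, $k=0,\ldots,T-1$, and the sum of reciprocals of the nonzero eigenvalues is exactly $(T^{2}-1)/6$:

```python
import math

def inv_trace_nonzero(T):
    """Sum of 1/lambda_k over the nonzero eigenvalues of Delta^T Delta,
    the path-graph Laplacian on T nodes: lambda_k = 4 sin^2(pi k / (2T))."""
    return sum(1.0 / (4.0 * math.sin(math.pi * k / (2 * T)) ** 2)
               for k in range(1, T))

# matches the closed form (T^2 - 1) / 6, i.e. Omega(T^2) growth
assert abs(inv_trace_nonzero(8) - (8 ** 2 - 1) / 6) < 1e-9
assert 3.5 < inv_trace_nonzero(64) / inv_trace_nonzero(32) < 4.5
```

So even after dropping the single zero (or rank-1-perturbed) eigenvalue, the remaining trace still grows like $T^{2}$, consistent with the argument above.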
Notably, from the proof of Proposition 2 it can be seen that even if we dropped the largest eigenvalue of $M_{0}^{-1}$, we would still have $Tr(M^{-1}_0)\ge \Omega(T^{2})$. Hence, any of the matrices which produce the squared path-length dependence must incur the same $\Omega(T^{2})$ variance penalty up to constant factors. We will expand this reasoning into a more formal proof in the paper and we can provide additional details if needed, but the above claims can be easily verified numerically. --- Rebuttal Comment 1.1: Title: Comments to Rebuttal Comment: Thanks to the authors for the detailed reply. I still have some follow-up questions about the authors' rebuttal, and hope that the authors could resolve my concerns. 1. I have read the global response about the motivation of the averaged squared path length. The authors said, "It turns out that in the worst case, this is only a $\log T$ factor more than the square path length at the worst scale; see first inequality in line 239". Could the authors provide some further explanations or derivations on this statement? Does it mean that the averaged path length can be upper-bounded by the desired squared path length only by a $\log T$ factor overhead? 2. I do not agree with the first motivation for the squared path length, which shows that it is never worse (up to polylog factors) than the usual path length $P_T^{\\|\cdot\\|}$. When transforming the averaged path length into $P_T^{\\|\cdot\\|}$, there is an additional term of $\bar{D}$ (see the equation in line 239). However, as the authors have said in the introduction part, this diameter quantity is overly pessimistic and thus cannot be used to validate the significance of the obtained results. 3. Being able to aggregate multiple algorithms, as stated at the end of the global response, is an advantage. 
However, given that we cannot identify a good enough choice of $M$ here (please refer to the next point below), how good the aggregated guarantee will be cannot be explicitly illuminated. I acknowledge that this is an advantage of the obtained result. However, this advantage does not seem to bring explicit improvements compared with the current bound. 4. For my first weakness in the original review, the authors responded that "Hence, without loss of generality we can focus on $\Sigma$ satisfying...". I do not understand why this choice of $M$ is "without loss of generality" since this choice seems to rule out a large number of possible choices of $M$. Could the authors provide more explanations on this point? --- Reply to Comment 1.1.1: Comment: Thanks for engaging with us, we are happy to discuss these points further. However, before the specific points, we would like to stress once again that the averaged squared path length is only an example of the generality of the framework and not our main contribution. Focusing too much on it might hide the fact that the main message of our paper is *a framework that allows one to reason about the trade-offs of measures of dynamic regret, both in terms of upper and lower bounds*. As an illustrative analogy, the seminal work of Nemirovski & Yudin (1983) introduces the mirror descent framework and its general guarantee, and then proceeds to show specific examples such as $p$-norms. The example is not the main contribution; the framework and the existence of a trade-off are the key contributions upon which a large literature of work has been developed by the community.

> "this is only a $\log$ factor more than the square path length at the worst scale" [...] Could the authors provide some further explanations or derivations on this statement?

From its definition, the average squared path length is the sum of the squared path length at *all possible time scales*.
Given that there are $O(\log T)$ time scales (line 230), in the *worst case* the averaged path length can be at most $O(\log T)$ times the path length at the worst scale (i.e., $\sum_{\tau=1}^{N}\bar{P}(\tau,\vec{u})\le N\max_{\tau}\bar{P}(\tau,\vec{u})$ where $\bar{P}(\tau,\vec{u})$ denotes the averaged square path-length at timescale $\tau$), as detailed in the global response. Note that the standard squared path length corresponds to scale $\tau=1$.

> I do not agree with the first motivation for the squared path length, which shows that it is never worse (up to polylog factors) than the usual path length

We are not sure how this could be controversial: any new measure should be at least as good (up to polylog factors) as the measure used in prior works. However, this is not a proof of superiority, as the reviewer seems to imply; this is a sanity check showing that *in the worst case*, the new variability penalty is at least as good as the one in prior works. However, there are cases where it can be significantly better: in our global response, we provide an example in which the averaged square path-length is significantly smaller precisely thanks to the diameter: the averaged squared path length is equal to the squared path length and it is $T (a-b)^2$, while the $\bar{D} P_T^{\\|\cdot\\|}$ dependence, where $\bar{D}$ is the diameter term, is $O(T |a-b| \max(b,a)),$ which can be arbitrarily worse.

> Being able to aggregate multiple algorithms...

Our results show there are trade-offs and the best choice of $M$ will be data-dependent, so trying to quantify gains is hopeless in general. Instead, we can easily aggregate algorithms and compete with the best matrix among a number of different ones. Is it important? We very much believe so!
Once again, we would like to point out the similarity with $p$-norms: Theorem 2 in Cutkosky (2019) used the same idea to obtain a static regret with respect to all $p$-norms: no explicit gain is shown there either, but the importance of his result should be clear. We show how to do this for *different measures of dynamic regret*, and we are not aware of anything similar in this literature.

> I do not understand why this choice of $M$ is \"without loss of generality\" since this choice seems to rule out a large number of possible choices of $M$. Could the authors provide more explanations on this point?

We are not completely sure we understand your question. In case you are wondering how it is possible to decompose $M$ into a product of a matrix and its transpose, this is always true because $M$ is symmetric and positive definite. Note that only the matrix $M$ matters and not its representation through $\Sigma$. In case you are wondering why we restrict our attention only to the matrices $M$ that result in the term $\langle v, \tilde u\rangle^{2} + \sum_{t=2}^{T}\\|u_{t}-u_{t-1}\\|^{2}$ and not to some other expression: indeed, we proved that adding other terms to the above two can kill the $\Omega(T^2)$ factor, and this is exactly the case of the averaged squared path length that we propose. However, any such addition would result in a measure that is not the squared path length anymore. Hence, our claim is correct: the squared path length cannot be achieved without incurring the stated penalty. Finally, in case you are wondering if it is possible to find two different matrices that result in $\langle v, \tilde u\rangle^{2} + \sum_{t=2}^{T}\\|u_{t}-u_{t-1}\\|^{2}$, this is possible only up to permutations of the rows of $\Sigma$, as we explained in the global response. Please let us know if you meant something else.
Continuous Partitioning for Graph-Based Semi-Supervised Learning
Accept (poster)
Summary: In this submission, the authors propose a framework for graph-based semi-supervised learning based on continuous nonconvex quadratic programming. Strengths: This paper studies the graph-based semi-supervised learning problem, which has attracted much attention. In this submission, the authors propose a framework for graph-based semi-supervised learning based on continuous nonconvex quadratic programming. Experiments show that the proposed CutSSL framework significantly surpasses the current state-of-the-art on k-nearest neighbor graphs and large real-world graph benchmarks. Weaknesses: There are many problems in this article. The biggest problem is that the motivation and contribution are unclear. At the beginning of the abstract, the authors argue that Laplace learning algorithms for graph-based semi-supervised learning have been shown to suffer from degeneracy at low label rates and in imbalanced class regimes. Then, they propose CutSSL, a framework for graph-based semi-supervised learning based on continuous nonconvex quadratic programming. Although experiments validate the superiority of the proposed CutSSL against competing methods across a variety of label rates, class imbalance, and label imbalance regimes (by the way, what is the difference between class imbalance and label imbalance?), I cannot find the relationship between the motivation and the technical design of CutSSL. Moreover, the meaning of the term "Continuous Partitioning" in the title is also unclear. For the proposed CutSSL, according to the summarized contributions at the end of Section 1, the main contribution of CutSSL is the mathematical framework and its ADMM-based optimization algorithm. These contributions have little to do with the aforementioned motivation of this paper. From a technical point of view, the main contribution of CutSSL is Eq.(9), where S=sD. Then, another contribution is how to optimize this formulation, which is presented in Section 3.2.
Note that in Eq.(10), there are only two matrix parameters to be optimized (i.e., X and T), but it is hard to find anywhere in this paper how $s$ and $B$ are determined. Similar problems exist in many places. Ref. [16] is the main reference of this paper, but the paper should not rely too heavily on its notation and background. Besides, there is a recent review named ''Graph-Based Semi-Supervised Learning: A Comprehensive Review'' published in TNNLS that is not mentioned in this paper, which suggests that a literature review should be conducted more carefully. Technical Quality: 2 Clarity: 1 Questions for Authors: Please clarify the motivation and contribution problems mentioned above. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Some limitations are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. We address the reviewer's comments below and will edit the final version of the paper accordingly. ***Motivation of CutSSL*** In contrast to Laplace learning, which is degenerate in the low-label-rate regime, we start with a framework that is nondegenerate even without any labels (min cut, stated in eq (4)) and derive a nonconvex continuous relaxation that incorporates label information. To show the relaxation is exact (eq (9)), we prove two statements (Propositions 3.1 and 3.2). First (Proposition 3.1), we show that at integer solutions (valid cuts), the relaxation coincides with the combinatorial problem, i.e., the min cut with respect to the relaxation is the min cut with respect to the combinatorial problem. Next (Proposition 3.2), we show that (1.) local minima of the relaxation are integer-valued and (2.) the number of local minima can be controlled (and is via our framework) to prevent a badly behaved optimization problem with many spurious local minima. Thus the global minimum of the relaxation is exactly the min cut. To provide additional motivation for CutSSL, we provide the experiment below. We show that in the absence of labels, purely unsupervised graph cut-based methods (e.g. Spectral Clustering (SC)) yield a graph cut / partition such that the resulting clusters are well-aligned with the labels. To demonstrate this, we calculate a partition using both SC and a graph cut solution to equation 8 ('cut'), i.e., this is CutSSL in the absence of labels. To evaluate the alignment of the clusters yielded by the cut with the labels, we solve a linear sum assignment problem using the true labels to find the best permutation of cluster assignments. This assignment of clusters to classes shows that the graph cuts are relatively well-aligned with the class labels, in contrast to Laplace learning (LP) which degenerates in the low label rate regime (i.e.
just a single label per-class). Thus, cut-based techniques work since they are derived from a fundamentally non-degenerate framework, and our CutSSL augments the multi-way graph partition with labeled nodes.

| Method | MNIST | F-MNIST | CIFAR10 |
| -------- | ------- | ------- | ------- |
| Laplace/LP | 16.1 (6.2) | 18.4 (7.3) | 10.4 (1.3) |
| CutSSL | 97.5 (0.1) | 64.3 (4.6) | 44.7 (5.9) |
| Cut | 95.9 (0.0) | 62.7 (0.0) | 41.1 (0.0) |
| SC | 92.6 (0.0) | 60.5 (0.0) | 35.3 (0.0) |

***Clarification of the term Continuous Partitioning*** Thank you for pointing out this issue. To be precise, we design a continuous relaxation of the combinatorial partitioning problem with label constraints. This term is typically used in the numerical optimization and partitioning community [1, 2], but is perhaps not as well-known in a machine learning context. We will clarify this term in the final version. [1] Pardalos, Continuous Approaches to Discrete Optimization Problems, Nonlinear Optimization and Applications, 1996. [2] Discrete Optimization via Continuous Relaxation, Symposium on Bridging Continuous and Discrete Optimization. ***Clarification of the class and label imbalance and notation*** To clarify, an imbalance in the class distribution refers to an imbalance in the number of samples for each class. A label imbalance refers to an imbalance in the number of samples that are labeled for each class. Note that the expression for the variable $B$ is given in equation (3). ***Choice of $s$*** Thanks for pointing this out. This is an important question to address and we will provide additional clarification in the final version of our paper. We highlight a result that appears in the main text and several experiments presented in the appendix of our paper. 
In particular, the choice of $s$ is governed by Proposition 3.2, which provides an upper bound on the choice of $s$ that guarantees recovery of integer-valued solutions (for intuition, see Figure 1). A trivial lower bound is given by one convex relaxation of the problem ($s = 0$). Given the upper and lower bounds, we construct a homotopy path, a sequence of solutions $X(s)$ that vary in $s$. In practice, we choose a linear or geometric sequence of $s$ values between $0$ and the upper bound. Often, the upper bound need not be reached to recover a solution that is very close to integer (see Figure 6.b in the appendix). Likewise, we demonstrate that in practice one can take a very coarse sequence of $s$. This result is presented and discussed in Figure 6 in appendix C.5. In fact, to reproduce the results in the main text, only a single nonzero data-dependent value of $s$ is used (i.e., the homotopy path consists of constructing solutions for $s = 0$ and $s = 0.1$). ***Additional references*** Thanks for notifying us of this missing review. We have added it to our paper and it will appear in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for your response; some of my concerns are clarified. I am open to hearing discussions from other reviewers. --- Reply to Comment 1.1.1: Comment: Thanks for investing time and for providing valuable feedback! We appreciate the suggestions and your help with improving the quality of our paper. With two days left in the discussion period, we would be happy to run additional experiments, address any remaining issues, or clarify sources of confusion, if there are any. If our responses have adequately addressed the concerns, we kindly ask the reviewer to consider raising our score.
Summary: The paper considers the task of semi-supervised learning under cardinality constraints and proposes a framework based on a reformulation into a non-convex constrained quadratic program. The authors provide sufficient conditions for the exact recovery of integer solutions. Moreover, they present an algorithm called CutSSL, based on the ADMM method, to optimize the non-convex objective, which is shown to converge to a KKT point of their objective. The paper also shows the connection of their framework to Laplace learning, which provides an explanation for the efficacy of the commonly used mean-shift heuristic. The method is evaluated on three image datasets, citation networks, and other large-scale networks. The proposed approach consistently achieves superior accuracy compared to the competing methods, while staying within the same order of magnitude in runtime as the fastest method. Strengths: The paper proposes a new graph-based framework as well as an iterative algorithm which is widely applicable. In the experiments, the approach is shown to consistently outperform state-of-the-art methods on a wide range of datasets, including image datasets and large-scale networks. Weaknesses: It was not clear to me how exactly the sequence of $s$ values is chosen. In the experimental section it says that $s=0.1$. I assume this is a typo, since it would mean that $s$ is kept fixed after the first step. What is meant here: the maximal $s$ value (denoted $s_T$ in the appendix), or the difference between $s$ values (denoted $d$ in the appendix)? Also, is the sequence of $s$ values chosen in advance, or is there some criterion to stop increasing $s$? Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss limitations of the approach, regarding the dependence on the choice of underlying graph structure, in the appendix. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments. Below we comment on the choice of the hyperparameter $s$. ***Choice of $s$*** Thanks for pointing this out. This is an important question to address and we will provide additional clarification in the final version of our paper. We highlight a result that appears in the main text and several experiments presented in the appendix of our paper. In particular, the choice of $s$ is governed by Proposition 3.2, which provides an upper bound on the choice of $s_T$ that guarantees recovery of integer-valued solutions (for intuition, see Figure 1). A trivial lower bound is given by one convex relaxation of the problem ($s_0 = 0$). Given the upper and lower bounds, we construct a homotopy path, a sequence of solutions $X(s_t)$ that vary in $s_t$. In practice, we choose a linear or geometric sequence of $s$ values between $0$ and the upper bound (e.g., determined by the $d$ parameter). Often, the upper bound need not be reached to recover a solution that is very close to integer (see Figure 6.b in the appendix). Likewise, we demonstrate that in practice one can take a very coarse sequence of $s$. This result is presented and discussed in Figure 6 in appendix C.5. In fact, to reproduce the results in the main text, only a single nonzero data-dependent value of $s$ is used (i.e., the homotopy path used for the main experiments consists of constructing a pair of solutions for $s = 0$ and $s = 0.1$). Generally, convergence can be measured according to proximity to an integer solution. E.g., if $X(s_t)$ is the CutSSL solution with respect to $s_t$, let $Proj_B(X(s_t))$ be the projection onto the set of feasible binary matrices (those binary matrices with one "1" in each row and with column sum $m$). Then convergence has been reached if $||X(s_t) - Proj_B(X(s_t))||_F$ is sufficiently small. 
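The stopping criterion above is easy to prototype. A minimal sketch, with one stated simplification: the exact projection onto binary matrices with prescribed column sums $m$ is itself a transportation problem, so this proxy drops the column-sum constraint and snaps each row of $X$ to the indicator of its largest entry (the helper name is ours):

```python
import numpy as np

def near_integer_residual(X):
    """Frobenius distance from X to a row-wise one-hot matrix.

    Simplified proxy for ||X(s_t) - Proj_B(X(s_t))||_F from the rebuttal:
    the column-sum constraint of the feasible set B is ignored, and each
    row is projected to the indicator of its largest entry.
    """
    P = np.zeros_like(X)
    P[np.arange(X.shape[0]), X.argmax(axis=1)] = 1.0
    return float(np.linalg.norm(X - P))
```

The homotopy over $s_t$ could then be stopped once this residual falls below a tolerance.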
--- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: In the rebuttal, the authors provide additional details regarding the choice of the parameter s, as requested in my review. Thank you for your response.
Summary: The paper proposes an approach for graph-based semi-supervised learning based on a cardinality-constrained extension of the classical Laplace learning approach due to Zhu et al. Known theoretical limitations of standard Laplace learning in the low-label-rate regime motivate the approach. Theoretical properties of the approach are studied, including a sufficient condition for avoiding the need to round solutions, and a practical optimization technique is provided for the cardinality-constrained Laplacian objective. Experiments demonstrate an empirical advantage in the low-label-rate regime without deterioration in standard regimes, and gains in the class-imbalance regime. A connection is established to the mean-shift heuristic. Strengths: - The paper studies the fundamental problem of semi-supervised learning in the important low-label-rate and class-imbalance regimes, which have important practical consequences (excess unlabeled data does not become a disadvantage; fairness). - The overall approach is a principled extension of Laplace learning to cardinality constraints, with optimization techniques for the resulting non-convex objective. - The experiments indicate practical usefulness of the proposed approach. Weaknesses: - There is limited theoretical evidence for the claims that the proposed cardinality-constrained graph partitioning approach resolves the issues of low label rate and class imbalance. - $s$ seems to be an important hyperparameter in the approach but is set to a constant value 0.1 without adequate justification. - The connection to mean-shifted Laplace learning seems somewhat unclear. First, it is not clear if the solution to (17) is close to CutSSL. Second, the guarantees seem to be weak in the low-label-rate regime (the bound connecting the two methods diverges in the regime of interest). 
- I find it surprising that the label/class imbalance experiments are relegated to the appendix despite being a repeated claimed contribution; there also does not seem to be theoretical evidence for better performance in this regime. --- Post-rebuttal: Some of the above concerns (the role of $s$ and the connection to mean-shifted Laplace learning) have been addressed in the rebuttal and I have increased my score accordingly. Technical Quality: 2 Clarity: 2 Questions for Authors: - How can the cardinality constraints for different classes be known/estimated in practice? - Line 28: "thresholding can further exacerbate the aforementioned degeneracy." Is there a reference or theoretical justification for this? Further elaboration on how thresholding exacerbates degeneracy would be helpful. - $0 − 1$ minimizer, solution, assignment $\rightarrow$ $0$-$1$ - Lines 39-40: it would be good to summarize what the connection is. - What is the difference (if any) between formulations (8) and (9)? - Is the (smallest) $s$ satisfying Proposition 3.2 easy to compute practically? - Line 195: "The convergence of the standard two-block ADMM for convex and nonconvex problems has been thoroughly established in the literature". What is the relevant convergence rate for the current non-convex problem? - Line 232: "Our analysis of this problem provides new evidence to explain the empirical performance of this heuristic, while also revealing that it is suboptimal in some sense." Can you elaborate on this? Proposition 4.1 seems to imply that solutions of the mean-shift heuristic are close to cardinality-constrained Laplace learning (17). - Reference [8] is incomplete. 
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors note that a rigorous theoretical analysis of exact cut-based methods is missing and note more limitations in a dedicated section in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our work, for their positive and constructive comments, and for appreciating the significance of our work. We address the reviewer's questions and suggestions below. ***Choice of $s$*** We refer to the response to all reviewers. In summary, Proposition 3.2 provides an upper bound on the choice of $s$ that guarantees recovery of integer-valued solutions. A lower bound is given by one convex relaxation ($s = 0$). Given the upper and lower bounds, we construct a homotopy path, a sequence of solutions $X(s)$ that vary in $s$. Empirically, the upper bound need not be reached to recover a solution that is very close to integer, and in practice one can take a very coarse sequence (Figure 6 in appendix C.5). ***Connection to mean-shift Laplace learning and Proposition 4.1*** This is an interesting and important question. We agree that a more careful analysis needs to be conducted to derive conditions on the parameters of the problem for the solutions to (17) and CutSSL with $s=0$ to be close (we “show” this experimentally; see the paragraph below (17)). To summarize, we derive a connection between a special case of our framework and Laplace learning (Proposition 4.1). This connection bridges our method with the existing literature on Laplace learning. In particular, note that mean-shift Laplace learning is a reasonable approximation of CutSSL only in the case where $s \approx 0$. When $s$ is sufficiently large, the approximation does not hold: the indefiniteness of the quadratic yields an unbounded problem. We will clarify this in the final version of the paper. Regarding the bound, we believe the fact that the approximation does not hold is itself of interest. The purpose of the bound is to explore an existing way to address the degeneracy of Laplace learning: mean-shifted Laplace learning. Our perspective of this algorithm as a heuristic for solving (17) results in (1.) 
a characterization of the degeneracy as a rank-1 perturbation of the solution to (17) and (2.) a simple improvement on the mean-shift heuristic. Note that the degenerate term (equation 32 in appendix B) grows as the amount of labeled data diminishes, resulting in a larger gap between (17) and mean-shift Laplace learning. ***Label/class imbalance performance*** We agree that investigating guarantees for label- and class-imbalance performance is an important future direction. The main barrier to developing these theoretical results is nonconvexity and a dependence on the graph, the underlying labeling of the vertices, and the distribution of provided labels. In the future we plan to explore equivalent reparameterizations of the problem, e.g., a convex objective over a nonconvex set. We agree with the reviewer's comment on the placement of the experiments for label/class imbalance and we will move these results to the main text in the final version of the paper. ***Additional questions*** We thank the reviewer for carefully reading our work. We appreciate the detailed questions and will add additional clarification to the final version of the paper. _Estimation of the cardinality constraint:_ In many situations, a prior may be obtained on the class distribution (e.g., using the provided labels). However, in general these may not be known. In this case, a typical assumption is that the classes are balanced (i.e., a uniform cardinality constraint). In the case that one does not have an exact estimate of the prior, it would be interesting to investigate a variation of our problem which relaxes the cardinality constraint to upper and lower bounds: i.e., $m - \epsilon \leq X^\top 1_n \leq m + \epsilon$. _Thresholding Laplace learning:_ In our paper we provide an alternative characterization of the degeneracy in Prop. 4.1, which was originally rigorously studied in (Calder et al., Poisson Learning: Graph Based Semi-Supervised Learning At Very Low Label Rates, 2020). 
This degeneracy results in predictions that are constant at low label rates. We should clarify that there are several ad-hoc heuristics that can be applied to map continuous-valued predictions to discrete predictions, although thresholding is almost always the default choice. Although it is possible that alternative heuristics might perform better in the degenerate regime, we show specifically that thresholding is a poor choice (see Figure 4 in the appendix) as it tends to make constant predictions. _Connection to spectral graph theory:_ Thanks for pointing this out. This is an important question to address. At a high level, both spectral partitioning methods and our method rely on two different continuous relaxations of the combinatorial graph partitioning problem (4). In fact, the set of valid cuts can be characterized exactly as the intersection of two sets: the feasible set of (8) (the one we optimize over) and the set of orthogonal matrices. Informally, problem (5) is related to finding the binary matrix satisfying the cardinality constraint whose projection onto the orthogonal complement of $\mathbf{1}$ is closest to the eigenspace of the smallest nonzero eigenvalue of the graph Laplacian. _(8) and (9):_ They are equivalent by Proposition 3.1. _Smallest choice of $s$:_ Given the graph Laplacian $L = D - W$, the smallest $s$ satisfying the conditions of Prop. 3.1 is characterized by $(s-1)(D_{ii}+D_{jj}) \geq 2w_{ij}$. One can compute the smallest such $s$ analytically via the expression $1 + \max_{ij} 2w_{ij} / (D_{ii} + D_{jj})$. This involves a simple element-wise division and scan, taking $O(|E|)$ time, linear in the number of edges of the graph / nonzero entries of $W$. _Convergence rates for ADMM:_ We clarify that while global convergence is thoroughly established, the convergence rate of ADMM for nonconvex problems is an ongoing research area. 
Under certain convexity and smoothness assumptions, ADMM converges linearly, which we expect to hold for our problem. --- Rebuttal 2: Comment: Thanks for investing time and for providing valuable feedback! We appreciate the suggestions and questions and your help with improving the quality of our paper. To follow up on the choice of $s=0.1$, we compute the smallest values of $s$ that guarantee integer solutions on MNIST, FashionMNIST, and CIFAR10 (using the equation provided in the rebuttal). We will provide a more detailed experiment in the final version of the paper to explore the effect of the labeling.

| MNIST | FashionMNIST | CIFAR10 |
| ----------- | ----------- | ----------- |
| $s = 0.6$ | $s = 0.7$ | $s = 1.1$ |

It can be seen that these values are approximately on the order of $0.1$ (the value that we adopt in the main table in the paper) and agree with the results provided in the appendix (Figures 6b and 7c). Note that we also evaluate larger values of $s$ (i.e., $s=0.5, 1.0$, etc.) in the appendix. In practice, we note that (1) generally a choice of $s$ smaller than the bound yields solutions close enough to integer, and (2) the effect of the concave term (i.e., larger $s$ inducing more bad local minima) and the computational overhead of running CutSSL on larger sequences of $s$ need to be considered. This tradeoff is explored in Figure 6b in the appendix, and these results (notably the robustness of CutSSL to coarse sequences of $s$ and to $s$ smaller than the bound) motivated our choice of $s=0.1$ for the results presented in the main text. We will clarify this in the final version of the paper. Thank you again for the high-quality review. We would be happy to address any remaining issues or clarify sources of confusion.
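The closed-form bound $1 + \max_{ij} 2w_{ij} / (D_{ii} + D_{jj})$ quoted in the rebuttal is cheap to evaluate. A minimal sketch over a symmetric nonnegative weight matrix (the helper name is ours; the bound is taken directly from the rebuttal's expression):

```python
import numpy as np
import scipy.sparse as sp

def smallest_integer_s(W):
    """Evaluate 1 + max_{ij} 2 w_ij / (D_ii + D_jj) over nonzero entries of W.

    Runs in O(|E|): one degree sum plus an element-wise scan of the
    nonzero entries, as described in the rebuttal.
    """
    W = sp.coo_matrix(W)
    d = np.asarray(W.sum(axis=1)).ravel()  # degrees D_ii
    ratios = 2.0 * W.data / (d[W.row] + d[W.col])
    return 1.0 + float(ratios.max())
```

On the unweighted two-node graph (a single edge of weight 1), the bound evaluates to $1 + 2/(1+1) = 2$.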
Summary: The paper introduces CutSSL, a novel framework for graph-based semi-supervised learning (SSL). The authors address the limitations of Laplace learning algorithms, particularly their poor performance at low label rates and in imbalanced class scenarios. CutSSL leverages continuous nonconvex quadratic programming to achieve integer solutions, inspired by a cardinality-constrained minimum-cut graph partitioning problem. The framework utilizes ADMM for robust and efficient problem-solving. Strengths: Originality: The paper introduces a fresh approach to graph-based SSL by framing it as a continuous nonconvex quadratic programming problem, a novel perspective compared to traditional Laplace learning methods. Quality: The theoretical formulation is robust, and the use of ADMM for solving the quadratic programs is well-justified, offering solid convergence guarantees. Clarity: The paper is well-structured and clearly explains the motivation, methodology, and contributions. The connection between their method and existing heuristics like mean-shifted Laplace learning is particularly insightful. Significance: By addressing the degeneracy of Laplace learning at low label rates and in imbalanced settings, the paper tackles a significant challenge in SSL. The performance improvements on various benchmark datasets highlight the practical relevance of their approach. Weaknesses: Theoretical analysis: The paper would be stronger if more theoretical analysis were provided on why the proposed method works when the label rate is extremely low or unbalanced; the theoretical motivation behind the proposed method is quite unclear. Experimental Validation: While the results are promising, the experimental setup could benefit from a more extensive comparison with a broader set of baseline methods. Including more recent graph-based SSL algorithms, especially GNN-based methods, would strengthen the empirical validation. 
Scalability: Although the authors claim that their method scales well to large graphs, a detailed analysis of actual runtime comparisons with other methods would be beneficial. Parameter Sensitivity: The paper does not discuss the sensitivity of CutSSL to its hyperparameters. An analysis of how different parameter settings affect performance would help in understanding the robustness of the method. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Can you provide some theoretical analysis of why the proposed method works when the label rate is low or unbalanced? 2. How do recent GNN-based methods perform when compared with your proposed method? 3. How long does the proposed method take on the ogbn datasets used in the experiments? 4. How sensitive is CutSSL to its hyperparameters? Could you provide an analysis or guidelines on setting these parameters? 5. How does CutSSL perform on graph construction methods beyond the kNN graphs tested in the paper, still using the MNIST dataset instead of the real-world graph datasets? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have addressed some limitations of prior methods, particularly the degeneracy of Laplace learning at low label rates and in imbalanced class scenarios. However, the paper could benefit from a more detailed discussion of the following: Running Time Overhead: The continuous nonconvex quadratic programming approach may introduce computational overhead. A comparison of actual computational costs with other methods would be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and constructive comments and for appreciating the significance and originality of our work. We address the reviewer's questions and suggestions below. ***Theoretical analysis in the low-label-rate regime*** We agree that this is an interesting and important direction. To be clear, in this paper we make three formal mathematical statements. In contrast to Laplace learning, which is degenerate in the low-label-rate regime, we start from a framework that is nondegenerate even without any labels, though combinatorial (the min cut problem, stated in eq (4)), and derive a nonconvex continuous relaxation that incorporates label information. To show the relaxation is exact (eq (9)), we prove two statements (Propositions 3.1 and 3.2). First (Proposition 3.1), we show that at integer solutions (valid cuts), the relaxation coincides with the combinatorial problem, i.e., the min cut with respect to the relaxation is the min cut with respect to the combinatorial problem. Next (Proposition 3.2), we show that (1) local minima of the relaxation are integer-valued and (2) the number of local minima can be controlled (and is, via our framework) to prevent a badly behaved optimization problem with many spurious local minima. Thus the global minimum of the relaxation is exactly the min cut. However, a precise statement about the quality of the solutions recovered by CutSSL is beyond the scope of this work. Such a statement would need to take into account the topology of the network, the true labeling of all the vertices, and the distribution of the provided labels, and additionally address the underlying nonconvexity of the problem. In future work we plan to explore equivalent reparameterizations of the problem, e.g., a convex objective over a nonconvex set. ***GNN-based methods*** In Tables 2 and 8 we included a comparison to a recent GNN-based method designed for low-label-rate semi-supervised learning (GraphHop, Xie et al. 2023). 
We also compare to the SOTA method Poisson MBO (Calder et al. 2020) and show we outperform both methods, with a ~30% improvement of CutSSL compared to GraphHop. We will include comparisons to additional GNN-based methods in the final version. ***Runtime analysis*** We highlight additional results that appear in the main text and appendix of our paper. In particular, in Table 2 of Section 5 of the main text and Table 8 in Appendix C4 we provide the average runtime over 100 trials on the large-scale OGB-Arxiv and OGBN-Products benchmarks. While a marginal increase in the computational cost of CutSSL is apparent when compared to classic Laplace and Poisson learning methods, we emphasize that the cost is small and is vastly lower than that of PoissonMBO and the GNN-based method GraphHop. Each iteration of ADMM is very cheap due to the well-conditioned nature of the subproblems (see Section 3.3 for more details). ***Choice of hyperparameter / sensitivity*** We highlight a result that appears in the main text and several experiments presented in the appendix of our paper. In particular, the choice of $s$ is governed by Proposition 3.2, which provides an upper bound on the choice of $s$ that guarantees recovery of integer-valued solutions (for intuition, see Figure 1). A trivial lower bound is given by one convex relaxation of the problem ($s = 0$). Given the upper and lower bounds, we construct a homotopy path, a sequence of solutions $X(s)$ that vary in $s$. In practice, we choose a linear or geometric sequence of $s$ values between $0$ and the upper bound. Often, the upper bound need not be reached to recover a solution that is very close to integer (see Figure 6.b in the appendix). Likewise, we demonstrate that in practice one can take a very coarse sequence of $s$. This result is presented and discussed in Figure 6 in appendix C.5. In fact, to reproduce the results in the main text, only a single nonzero data-dependent value of $s$ is used (i.e. 
the homotopy path consists of constructing solutions for $s = 0$ and $s = 0.1$). ***Beyond KNN graphs*** Thanks for this suggestion. This is an interesting direction for future work. We note that in Table 10 in appendix C.5 we provide an ablation over various choices of $k$. As mentioned in the appendix, the relative performance remains consistent (i.e., CutSSL continues to outperform the other methods). One very interesting observation is that for CutSSL, accuracy tends to improve with the value of $k$, although it is possible that accuracy could deteriorate for larger values of $k$. In the table below we evaluate the effect of the mechanism used to construct the graph on MNIST. First, we evaluate two variants of weighting the edges of the KNN-based method adopted in the main text. Singular refers to weights of the form $w_{ij} = 1/||x_i - x_j||$, where $x_i$ and $x_j$ are the features of the $i$-th and $j$-th data points. Uniform refers to $w_{ij} = 1$, i.e., an unweighted graph. In the third column, we evaluate a more sophisticated method for graph construction based on Non-Negative Kernel regression (NNK) [1]. As opposed to the KNN-based methods, which only consider the distance (or similarity) between the query and the data, the NNK-based method takes into account the relative position of the neighbors themselves. Note that CutSSL outperforms Poisson-MBO for the different graph weighting methods and the NNK construction. We thank the reviewer for this suggestion and we will add this experiment to the supplement in the final version of the paper. 
| Method | KNN (Gaussian) | KNN (Singular) | KNN (Uniform) | NNK |
| -------- | ------- | ------- | ------- | ------- |
| Mean shift Laplace | 91.0 (4.7) | 89.7 (2.8) | 89.8 (2.3) | 91.7 (1.1) |
| PoissonMBO | 96.5 (2.6) | 94.2 (2.9) | 95.7 (1.6) | 97.3 (1.4) |
| CutSSL | 97.5 (0.1) | 95.8 (2.6) | 96.4 (2.0) | 98.4 (1.5) |

[1] Shekkizhar and Ortega, Neighborhood and Graph Constructions using Non-Negative Kernel Regression, ICASSP, 2020
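The three kNN edge-weighting variants compared in the table can be sketched as follows. This is a minimal illustration, not the authors' code; in particular, the Gaussian bandwidth (distance to the $k$-th neighbor) is our assumption, since the rebuttal does not specify it:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_weights(X, k=10, scheme="gaussian"):
    """Return (neighbor indices, edge weights) for a kNN graph on rows of X.

    scheme: "gaussian"  w_ij = exp(-(d_ij / sigma_i)^2), sigma_i taken as
                        the distance to the k-th neighbor (our assumption);
            "singular"  w_ij = 1 / ||x_i - x_j||;
            "uniform"   w_ij = 1 (unweighted graph).
    """
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)
    dist, idx = dist[:, 1:], idx[:, 1:]  # drop each point's self-match
    if scheme == "gaussian":
        sigma = dist[:, -1:]             # per-point bandwidth (assumption)
        w = np.exp(-((dist / sigma) ** 2))
    elif scheme == "singular":
        w = 1.0 / np.maximum(dist, 1e-12)
    else:  # "uniform"
        w = np.ones_like(dist)
    return idx, w
```

The returned index/weight pairs can then be scattered into a sparse symmetric $W$ for any of the graph-based methods above.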
Rebuttal 1: Rebuttal: We thank all reviewers for carefully reading our work and for their constructive suggestions and comments. We have added individual responses and several additional experiments suggested by the reviewers, and will incorporate these in the final version of the paper. Below, we summarize the main contributions of the paper, expand on the choice of the $s$ parameter, and introduce two new experiments based on discussions with the reviewers. ***Summary of contributions*** In contrast to Laplace learning, which is degenerate in the low-label-rate regime, we consider a framework that is nondegenerate even without any labels (the minimum cut problem stated in eq (4)) and derive a nonconvex continuous relaxation that incorporates label information. To show the relaxation is exact (eq (9)), we prove two statements (Propositions 3.1 and 3.2). First (Proposition 3.1), we show that at integer solutions (valid cuts), the relaxation coincides with the combinatorial problem, i.e., the min cut with respect to the relaxation is the min cut with respect to the combinatorial problem. Next (Proposition 3.2), we show that (1) local minima of the relaxation are integer-valued and (2) the number of local minima can be controlled (and is, via our framework) to prevent a badly behaved optimization problem with many spurious local minima. Thus the global minimum of the relaxation is exactly the min cut. Finally, we derive a connection between a special case of our framework and Laplace learning (Proposition 4.1). This connection bridges our method with the existing literature on Laplace learning. We consider guarantees about the generalization capability of our method an important future direction, but out of the scope of this work. ***Choice of $s$*** Several reviewers had questions regarding the choice of the $s$ parameter first introduced in equation 7, section 2.3. 
We highlight a result that appears in the main text and several experiments presented in the appendix of our paper. In particular, the choice of $s$ is governed by Proposition 3.2, which provides an upper bound on the choice of $s$ that guarantees recovery of integer-valued solutions (for intuition, see Figure 1). A trivial lower bound is given by one convex relaxation of the problem ($s = 0$). Given the upper and lower bounds, we construct a homotopy path, a sequence of solutions $X(s)$ that vary in $s$. In practice, we choose a linear or geometric sequence of $s$ values between $0$ and the upper bound. Often, the upper bound need not be reached to recover a solution that is very close to integer (see Figure 6.b in the appendix). Likewise, we demonstrate that in practice one can take a very coarse sequence of $s$. This result is presented and discussed in Figure 6 in appendix C.5. In fact, to reproduce the results in the main text, only a single nonzero data-dependent value of $s$ is used (i.e., the homotopy path consists of constructing solutions for $s = 0$ and $s = 0.1$). ***Additional experiments*** In the table below we evaluate the effect of the mechanism used to construct the graph on MNIST. First, we evaluate two variants of weighting the edges of the KNN-based method adopted in the main text. Singular refers to weights of the form $w_{ij} = 1/||x_i - x_j||$, where $x_i$ and $x_j$ are the features of the $i$-th and $j$-th data points. Uniform refers to $w_{ij} = 1$, i.e., an unweighted graph. In the third column, we evaluate a more sophisticated method for graph construction based on Non-Negative Kernel regression (NNK) [1]. As opposed to the KNN-based methods, which only consider the distance (or similarity) between the query and the data, the NNK-based method takes into account the relative position of the neighbors themselves. Note that CutSSL outperforms Poisson-MBO for the different graph weighting methods and the NNK construction. 
We thank the reviewers for this suggestion and we will add this experiment to the supplement in the final version of the paper. | Method | KNN (Gaussian) | KNN (Singular) | KNN (Uniform) | NNK | | -------- | ------- | ------- | ------- | ------- | | Mean shift Laplace | 91.0 (4.7) | 89.7 (2.8) | 89.8 (2.3) | 91.7 (1.1) | | PoissonMBO | 96.5 (2.6) | 94.2 (2.9) | 95.7 (1.6) | 97.3 (1.4) | | CutSSL | 97.5 (0.1) | 95.8 (2.6) | 96.4 (2.0) | 98.4 (1.5) | In the next experiment, we provide additional motivation for CutSSL. We show that in the absence of labels, purely unsupervised graph-cut-based methods (e.g. Spectral Clustering (SC)) yield a graph cut / partition such that the resulting clusters are well-aligned with the labels. To demonstrate this, we calculate a partition using both SC and a graph cut solution to equation 8 (‘cut’), i.e., this is CutSSL in the absence of labels. To evaluate the alignment of the clusters yielded by the cut with the labels, we solve a linear sum assignment problem using the true labels to find the best permutation of cluster assignments. This assignment of clusters to classes shows that the graph cuts are relatively well-aligned with the class labels, in contrast to Laplace learning (LP), which degenerates in the low-label-rate regime (i.e. just a single label per class). Thus, cut-based techniques work since they are derived from a fundamentally non-degenerate framework, and our CutSSL augments the multi-way graph partition with labeled nodes. | Method | MNIST | F-MNIST | CIFAR10 | | -------- | ------- | ------- | ------- | | Laplace/LP | 16.1 (6.2) | 18.4 (7.3) | 10.4 (1.3) | | CutSSL | 97.5 (0.1) | 64.3 (4.6) | 44.7 (5.9) | | Cut | 95.9 (0.0) | 62.7 (0.0) | 41.1 (0.0) | | SC | 92.6 (0.0) | 60.5 (0.0) | 35.3 (0.0) | [1] Shekkizhar and Ortega, Neighborhood and Graph Constructions using Non-Negative Kernel Regression, ICASSP, 2020
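As an editor's illustrative sketch of the cluster-to-label alignment step described above (the rebuttal solves a linear sum assignment; here, a brute-force search over permutations, which is equivalent for a small number of classes; all function names and the toy data are hypothetical, not the authors' code):

```python
from itertools import permutations

def best_permutation_accuracy(cluster_ids, true_labels, n_classes):
    """Align unsupervised cluster ids with class labels by searching over
    all cluster-to-class permutations (brute force; fine for a handful of
    classes). Returns accuracy under the best assignment."""
    # confusion[c][k] = number of points in cluster c with true label k
    confusion = [[0] * n_classes for _ in range(n_classes)]
    for c, k in zip(cluster_ids, true_labels):
        confusion[c][k] += 1
    best = 0
    for perm in permutations(range(n_classes)):
        # perm[c] is the class assigned to cluster c
        hits = sum(confusion[c][perm[c]] for c in range(n_classes))
        best = max(best, hits)
    return best / len(true_labels)

# Toy check: clusters are a relabeling of the classes plus one error.
labels   = [0, 0, 1, 1, 2, 2]
clusters = [2, 2, 0, 0, 1, 0]   # cluster 2 -> class 0, 0 -> 1, 1 -> 2
acc = best_permutation_accuracy(clusters, labels, 3)  # 5 of 6 correct
```

At scale one would instead run `scipy.optimize.linear_sum_assignment` (the Hungarian algorithm) on the negated confusion matrix, which computes the same optimal assignment in polynomial time.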
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Towards Scalable and Stable Parallelization of Nonlinear RNNs
Accept (poster)
Summary: This paper aims to address the challenge of parallelizing the evaluation of nonlinear Recurrent Neural Networks (RNNs). Key contributions include the introduction of quasi-Newton approximations and trust region-based methods to reduce computational complexity and improve numerical stability, respectively. These enhancements enable reductions in memory usage and computational time, while maintaining the accuracy of the model's evaluations. The techniques presented are empirically validated, demonstrating their efficacy and potential application across various domains where RNNs are employed. Strengths: - The integration of quasi-Newton methods provides a fresh perspective on overcoming the challenges in parallelizing the evaluation of nonlinear RNNs. The theoretical insight extends the scalability of RNNs soundly. - By employing quasi-Newton approximations, the paper effectively reduces the computational complexity from cubic to linear in the state dimension. This reduction is critical for deploying RNNs in resource-constrained environments. - The introduction of trust regions and their integration with Kalman smoothing techniques stabilizes the Newton iterations. This is a notable advancement over traditional methods, which often suffer from numerical instabilities due to undamped Newton updates. - The methods are not only theoretically sound but are also empirically validated to demonstrate their practical effectiveness. This includes significant reductions in memory usage and computational time, crucial metrics for real-world applications. Weaknesses: - The paper's empirical validation, though robust, is limited to specific architectures and scenarios. Expanding the testing to a wider range of nonlinear RNN types and more diverse datasets could provide a more comprehensive understanding of the methods’ applicability and limitations. 
- The implementation of quasi-Newton and trust-region methods might be complex and could require significant modifications to existing neural network training pipelines. This complexity could hinder widespread adoption without additional tools or simplified frameworks. Technical Quality: 3 Clarity: 3 Questions for Authors: - As I am not very familiar with this topic, I am wondering which equation will be sped up using parallel computing, and how the method scales when parallelism is increased or decreased. - More experimental details are required, such as how the parallelism is implemented and what kind of parallel computing is applied in the experiments. - Will the word "parallel" be proper for this type of method? Initially, I thought the parallelism was w.r.t. the sequence length from the title. However, it seems that parallelism is applied for the quasi-Newton method, which to me is equal to speeding up the computation in optimization with parallel computing. How is it related to parallelizing RNNs in terms of the sequence length? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have not adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we thank the reviewer for taking the time to review our submission and for their positive comments! We were particularly pleased by the comments on the foundation, clarity and presentation of our work. > Weaknesses We discuss these in the global review (“empirical validation” in Section 2 and “implementation” in Section 1.1). Broadly, regarding “empirical validation” we appreciated your perspective to bring our robust experimental validation to other datasets, and so took the suggestion of Reviewer VSf9 and demonstrated our methods on the chaotic Lorenz-96 system. Regarding “implementation,” we note that our method can be dropped into any stateful architecture. > As I am… This is a very interesting point that we probably didn’t discuss in enough detail in the paper. The key insight of DEER was remapping the matrix inversion (a prohibitively expensive operation) required by the Newton iteration into a parallel scan. This allows the “inverse” to be solved efficiently while exploiting parallelism. However, the parallel scan incurs $O(TD^3)$ work, and hence does not scale well (and is prone to instabilities due to the unboundedness of the individual Jacobians). Our insight was to introduce quasi-approximations into this, to reduce this complexity to $O(TD)$, while proving this retains global convergence (c.f. Proposition 1). (This suggested the “resetting” heuristic for combating instabilities; and also led us towards the connection with trust region stabilisation.) The parallel scan, given sufficiently many processors, scales as $O(\log T)$. As we show in Figure 6, we see this speedup at low model sizes and sequence lengths. Once the processors are saturated, we see a linear increase in the runtime (since the amount of work we do is linear), but it is making much more effective use of the GPU, resulting in a constant factor speedup over sequential application at larger model sizes/sequence lengths. 
We have added an expanded discussion of these points, thank you for highlighting them. > More experimental details… The JAX library includes an implementation of parallel scan which we use off-the-shelf. As a result, the implementation (with regard to parallelism) is very straightforward. We included code in the original submission and are also preparing a full public code release. We have also added more tabulated hyperparameters, configs, runtimes, etc.; and added a paragraph explaining parallel scans in detail and how we leverage them. > Will the word… We believe “parallel” is the correct word. Crucially, we pose the inherently sequential operation as an optimization over the length of the sequence, where each optimization step is parallelizable (sub-linear parallel application time in the sequence length, $O(\log T)$, to be exact). We have reinforced this point in the introduction. If we have misunderstood the reviewer’s query, then please let us know! > The authors have… We refer the reviewer to the general response in regard to the discussion of limitations. We have added much more explicit discussion of the limitations in a single place to correct this shortfall. If the reviewer has additional points they would like us to include, then please let us know, we are more than happy to include them! We again thank the reviewer for taking the time to review our submission and for raising some really interesting points of confusion that we had not foreseen. Fixing these has definitely made the paper better. Please let us know if you have any further questions! — Submission 13253 Authors --- Rebuttal Comment 1.1: Comment: Thank you for your helpful clarification and I would like to upgrade my score. I suggest the authors add these discussions in the revised version. --- Reply to Comment 1.1.1: Comment: Thank you for your helpful comments. We agree that they have made the paper better! Thank you as well for upgrading your score and for advocating for acceptance of this paper!
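As a minimal sketch of why the linearized recurrence admits a parallel scan (this is the standard affine-map composition trick; the scalar case below stands in for the diagonal Jacobians of the quasi methods, and `combine` / `sequential_recurrence` are hypothetical names, not the authors' code):

```python
def combine(e1, e2):
    """Associative combine for affine maps: applying (a1*x + b1) then
    (a2*x + b2) gives a2*a1*x + (a2*b1 + b2)."""
    a1, b1 = e1
    a2, b2 = e2
    return (a2 * a1, a2 * b1 + b2)

def sequential_recurrence(a, b, x0):
    """Reference sequential evaluation of x_t = a_t * x_{t-1} + b_t."""
    xs, x = [], x0
    for at, bt in zip(a, b):
        x = at * x + bt
        xs.append(x)
    return xs

a = [0.5, -1.2, 0.3, 2.0]
b = [1.0, 0.4, -0.7, 0.1]
x0 = 0.25

elems = list(zip(a, b))
# Two different bracketings give the same composite map (associativity),
# which is exactly what lets a parallel scan reorder the work.
left  = combine(combine(combine(elems[0], elems[1]), elems[2]), elems[3])
right = combine(elems[0], combine(elems[1], combine(elems[2], elems[3])))
# Applying the composite map to x0 matches the last sequential state.
```

In JAX one would pass a vectorized `combine` to `jax.lax.associative_scan`, which evaluates the same composition tree in $O(\log T)$ depth given sufficiently many processors.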
Summary: In this paper, the authors propose to improve DEER, a previous method that evaluates a non-linear RNN in parallel by viewing it as a fixed point problem. Specifically, instead of using Newton's method to solve the fixed point problem, the authors leverage quasi-Newton methods to approximate the procedure. This alleviates the intractability of DEER. Moreover, they stabilize DEER by connecting it to the Kalman smoothing technique. Empirical results show that the improved methods demonstrate superior performance in both speed and memory usage. Strengths 1. The paper is well written and easy to follow. 2. The proposed methods are very well justified. 3. The proposed technique is reasonably designed. Weaknesses 1. The experiments are thin. 2. Although the proposed methods are novel, the adopted techniques are already well-known. Overall, I recommend a weak acceptance of this paper. Strengths: 1. The paper is well written and easy to follow. 2. The proposed methods are very well justified. 3. The proposed technique is reasonably designed. Weaknesses: 1. The experiments are thin. In this paper, the authors evaluate the quality of different methods on only one dataset with a fixed model. It is hard to predict how the proposed methods will behave on other datasets with various RNN architectures. 2. Although the proposed methods are novel, the adopted techniques are already well-known. For example, quasi-Newton methods and the Levenberg-Marquardt update are well-studied before. This weakens the novelty of this paper. Technical Quality: 3 Clarity: 4 Questions for Authors: See the weaknesses. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we thank the reviewer for taking the time to review our submission and for their positive comments! We were particularly pleased by the comments on the foundation, clarity and presentation of our work. The central reservation of the reviewer is that our experimental results are “thin”. In regard to this, we refer the reviewer to the general response and the supplemental review PDF, where we have included additional experiments. The reviewer also comments that many of the components we discuss exist in the literature. We ultimately agree with this statement; but argue that the real merit in our paper is assembling these components and exploring (theoretically and empirically) their relative strengths and weaknesses, and stratagems for operationalizing these components in the context of deep learning and non-linear systems. As such, we believe the ties to existing work highlight the timeliness and utility of our paper – as we clearly build directly and pragmatically on existing literature that is of general interest to many in the community. We again thank the reviewer for their evaluation of our paper, their kind words, and for pointing to avenues to improve the paper. Please let us know if you have any further questions! — Submission 13253 Authors --- Rebuttal Comment 1.1: Comment: Thank you for your response. I maintain my stance to accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you for advocating to accept this paper!
Summary: This paper presents improvements over the approach of Lim et al. from last year for the parallelization of nonlinear recurrences in the sequence length. The paper jumps right into the problem after a quick introduction, and presents a few improvements on DEER: quasi-DEER (an approximate method), IKE (stable but expensive), and quasi-IKE (moderate stability, moderate memory and work). The authors corroborate their improvements with some theoretical results and experiments. Strengths: The paper flows very well and jumps right into the math: the authors have lots to show and I must say I was impressed by the quality and flow of the paper. While I have some questions and think this line of work does not end with this paper, I believe this contribution is worth publishing. I think the authors correctly show their improvements, and I particularly liked the quasi-DEER approach. I also really like how honest the paper is about limitations and how it clearly provides a gateway to future research. Weaknesses: I will put here weaknesses and questions at the same time: 1) Approximation quality: One missing plot is the approximation error: even in a toy setting and with random input, it is interesting to understand how well the methods can approximate the nonlinear RNN and at which cost. You have something similar in Figure 3, but this is not enough: I would be very curious to see, e.g., the evolution of the hidden state norm. 2) Speed: My experience with non-diagonal RNNs in JAX using parallel scan is that associative scans on dense recurrences are slower than linear processing - in contrast to the diagonal setting where I usually observe something like a factor 10. You do comment a bit on this, but I am curious about how much it costs, in your framework, to do a parallel scan on dense matrices. This is of course a lower bound on your compute, right? I am not a hardware person, so this counts more as a curiosity, which I think should be better developed in the paper. 
3) What is this doing?: I think that, philosophically maybe, approximations sometimes work not because they are close to the object of interest, but for some other reason. This was, for instance, the case of S4 - where the ZOH discretization introduces some structure which ends up being the reason for its success. So I ask: can you present an example where you provide a precise computation of what your recurrence is doing? I think this would be super interesting, and perhaps connect your method to some other literature on SSMs. Technical Quality: 4 Clarity: 4 Questions for Authors: above. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Well discussed by authors Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper, for their positive feedback, and for providing some very interesting discussion points! > Approximation quality: This is a great observation. We have added Review PDF Figure 1 probing the computational budget required to achieve different levels of accuracy in the AR GRU experiment. There are two major findings here: (a) the speedups are fairly robust across elements *within* a dataset (i.e. once tuned the speedups are fairly low variance); and (b) quasi-methods took more steps, but were faster as each optimization step is faster. We also find that IKE methods converged faster for the AR GRU. However, we note this relationship is reversed in the additional Lorenz-96 experiments (proposed by reviewer VSf9; see global review), where IKE and DEER converge in a similar number of steps, and quasi methods require notably more. This reinforces a point made in the discussion of the original submission that different methods may be faster or more stable in different settings, but that the memory savings and ultimate convergence are consistent. We have expanded discussion and guidance in the paper. > Speed: Yes, dense matrices are much slower to process, and this matches our experience (the parallel scan on a dense linear recurrence requires $O(D^3)$ work which saturates the GPU). Table 2 in the paper shows the average time per step and average number of steps required to converge. The key take-away is that individual steps in DEER/IKE are (approximately) a factor of between 3.5 and 30 times slower _per step_ than their quasi variants (which roughly tallies with your experience). However, they take (on average in Figure 4) a factor of between 2 and 10 fewer iterations. We have added some additional discussion of this tradeoff. > What is this doing? This is a super interesting question. We have two notable observations about (re-)interpreting the methods we present. 
Below we include a condensed proof of how IKE can be re-interpreted as stabilising the parallel scan by attenuating the eigenvalues used in the scan, and hence stabilising their cumulative product. We will add the full proof to the main paper. To your question on quasi: we don’t have a particularly satisfactory answer. Very early in our investigations when exploring quasi approximations, we noticed that the off-diagonal elements were often much smaller in magnitude than the diagonal elements (which makes intuitive sense for many recurrent models). This gave us some confidence to explore quasi approximations further, but we were unable to derive any deeper insight, and so validated it empirically. It is a really interesting avenue of future work to look into this, or other families of approximations and their relative merits and drawbacks. **Summary**: We again thank the reviewer for their positive feedback, and for raising some really unique discussion points. Please let us know if you have any further questions! — Submission 13253 Authors — **Condensed additional proof that IKE attenuates the largest eigenvalues in the linear recurrence**: (We include a full proof in the revised paper) Let $\{J_t\}$ be the Jacobians used in the linear recurrence relations and $\{b_t\}$ be the offsets. Then the prediction step of the Kalman filter (IKE) is the same as DEER. However, after applying the update step in IKE (which imposes the trust region), we obtain a second linear recurrence relation where the linear operator is given by $\Gamma_t J_t$. It can be shown that $\Gamma_t$ is a symmetric positive definite matrix with eigenvalues bounded above by $\frac{1}{1 + \lambda}$. Thus, making use of the Spectral Theorem, it follows that the norms of the eigenvalues of $\Gamma_t J_t$ are bounded above by the max of the norms of the eigenvalues of $J_t$, scaled by $\frac{1}{1 + \lambda}$. 
Note that larger $\lambda$ corresponds to more regularization/a smaller trust region, and therefore results in smaller effective eigenvalues in the scan. We recover DEER exactly if $\lambda = 0$. Thus, while large eigenvalues in $J_t$ are the cause of the instability of DEER when evaluating unstable dynamical systems, IKE directly attenuates these large eigenvalues, explaining why the intermediate iterations using IKE remain stable. --- Rebuttal 2: Title: Thanks! Comment: Thanks for the rebuttal. Very interesting - especially your view on what the method is doing to eigenvalues in practice. Keeping my accept score. Good luck! --- Rebuttal Comment 2.1: Comment: Thank you so much for advocating for the acceptance of this paper!
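A quick numerical sanity check of a (slightly weaker, norm-based) version of the attenuation claim: if $\Gamma$ is symmetric positive definite with eigenvalues at most $1/(1+\lambda)$, then every eigenvalue of $\Gamma J$ has magnitude at most $\|J\|_2/(1+\lambda)$ by submultiplicativity of the spectral norm. The construction of $\Gamma$ below is purely illustrative, not the actual Kalman update:

```python
import numpy as np

rng = np.random.default_rng(0)
D, lam = 4, 0.5

# Build an illustrative symmetric PD Gamma with eigenvalues in (0, 1/(1+lam)].
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))   # random orthogonal basis
g = rng.uniform(0.1, 1.0, size=D) / (1.0 + lam)    # eigenvalues <= 1/(1+lam)
Gamma = Q @ np.diag(g) @ Q.T

# An "unstable" Jacobian: eigenvalues can have magnitude greater than one.
J = 2.0 * rng.standard_normal((D, D))

# |eig(Gamma @ J)| <= ||Gamma @ J||_2 <= ||Gamma||_2 * ||J||_2 <= ||J||_2 / (1+lam)
eig_mags = np.abs(np.linalg.eigvals(Gamma @ J))
bound = np.linalg.norm(J, 2) / (1.0 + lam)
```

Larger `lam` shrinks `bound`, mirroring the statement above that a tighter trust region yields smaller effective eigenvalues in the scan.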
Summary: This work extends the parallel RNN (DEER) method to improve its efficiency and stability. The authors make two main modifications: 1. Replace the full Jacobian matrix inverse with its diagonal, allowing for linear-time inversion. 2. Introduce damping via a Kalman filter to enhance the stability of Newton methods. Strengths: - Solid contributions to improving the DEER method - Potential for increased efficiency and stability in parallel RNN implementations Weaknesses: Overall, the paper has solid contributions, but there are several weaknesses: 1. The potential drawbacks of these modifications are not fully explored - Efficiency gains may come at the cost of slower convergence - Damping could potentially compromise accuracy 2. Limited experimental scope - The experiments may not reveal issues that could arise in harder, larger-scale cases - For example: does the model work on chaotic systems such as Lorenz-96? 3. It will be interesting to compare with popular models like Transformers and xLSTM Technical Quality: 3 Clarity: 3 Questions for Authors: Harder examples: how does the model work on chaotic systems such as Lorenz-96, or on weather forecasting? More benchmarks: how does the model compare to Transformers and xLSTM? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are not sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and providing some helpful feedback! > Limited experimental scope We point the reviewer to the global response for additional experiments we have added: - In Review PDF Figure 1 we study the sensitivity to the $\lambda$ parameter (related to Reviewer S8Ch’s feedback), showing that $\lambda$ can be effectively set with a 1-d hyperparameter sweep, and that actualized speed-ups for a selected $\lambda$ are robust for a given data modality. - We also add additional experiments exploring the wallclock time/iteration/accuracy tradeoff of the methods (re: your comment “The potential drawbacks…”). In Review PDF Figure 1, we show for the AR GRU that while full methods take fewer steps, quasi methods converge faster in wallclock time across a broad range of desired accuracy levels. - We really liked your suggestion of using our algorithms to perform inference in a Lorenz-96 system. We include results for this experiment in Review PDF Figure 2. We see that DEER and IKE methods converge in a comparable number of steps (this makes sense as DEER is a special case of IKE for $\lambda \to 0$). DEER is faster (in terms of wallclock time) because of the extra work done per IKE iteration. However, we do see that IKE has stabilised convergence, whereas DEER relies heavily on the resetting heuristic. Interestingly, we see that quasi is slower by all metrics, suggesting that the chaotic dynamics may require the more accurate updates. Quasi methods still consume notably lower memory, however, and so may be preferable in certain circumstances. > It will be interesting to… The comparison to alternative architectures, such as the xLSTM and Transformers, is a potential point of comparison. However, we stress that this paper is about accelerating and operationalizing the parallel evaluation of sequential, non-linear models. Papers developing or benchmarking natively parallelizable architectures (e.g. 
Transformers) often state that sequential architectures such as (e.g.) GRUs cannot be parallelized, and they are often therefore eliminated from consideration or score poorly on speed tests. However, DEER raised the point that this is not entirely accurate, and that alternative black-box methods may be within reach for efficiently evaluating these oft-maligned architectures. Our work focuses on extending and operationalizing DEER, without changing the architecture or its asymptotic performance. We have clarified this in the main text. > The limitations are… As mentioned in the global review, we have further highlighted and discussed the limitations in separate paragraphs in the conclusion (while also retaining the “inline” discussion to retain flow). If the reviewer has further limitations we have not addressed, then please let us know, we are more than happy to include them! We again thank the reviewer for their evaluation, and for raising many points that have improved the paper. Please let us know if you have any further questions! — Submission 13253 Authors --- Rebuttal Comment 1.1: Comment: Thank you for including the Lorenz experiment. However, I’m a bit confused about how the task is defined. Typically, the task involves predicting the next state auto-regressively given the history, and the iteration is the time stepping. Since the Lorenz system is ergodic, the state should not converge to a stationary state. However, it seems the iteration is something different here. Could the authors provide more details about the experiment? - Specifically, what are the input and output of the model? - If the model is not auto-regressive, how does it model the time evolution? - What error metric is being used? --- Reply to Comment 1.1.1: Comment: Thank you for asking follow-up questions. We were very tight on characters so we couldn’t include much information initially! 
We can certainly add some more clarity: > If the model is not auto-regressive, how does it model the time evolution? We tackle the parallel evaluation of the classic non-linear 5-dimensional Lorenz-96 system, with F=8, which results in chaotic dynamics. The setup is the same for (q-)DEER and (q-)IKE. We directly use the Lorenz-96 dynamics as our nonlinear dynamics function $f$, i.e. the architecture/time evolution _is_ the Lorenz-96 ODE system. The state is the five-dimensional Lorenz system state (instead of using an RNN approximator with its own hidden state predicting the next timestep of the true system). We chose this in part to explore the parallel evaluation of other architectures; but also to allow us to focus on the parallel evaluation of the Lorenz-96 system itself – as opposed to the capability of a function approximator to approximate the system. > Specifically, what are the input and output of the model? The input is therefore the initial condition of the ODE; and the outputs are the $T \times 5$ subsequent system states (plotted in Review PDF Figure 2, bottom row). > What error metric is being used? Our solvers [(q-)DEER and (q-)IKE] are optimizing the merit function defined in eq. (7), which is the sum of squares of the one-step prediction error (defined in eq. (1)). However, this loss expresses the pairwise consistency of consecutive inferred states under the specified dynamics, as opposed to the more conventional notion of “accuracy”: the difference between the inferred sequence and the true sequence. We therefore plot this as the mean absolute deviation (MAD) of the time series at Newton iteration $(i)$ against the true state sequence. “Iteration” then refers to the number of Newton iterations, i.e. the number of updates applied to the entire state sequence. We hope this clears up any points of confusion. Please let us know if you have any further questions! -- Submission 13253 Authors.
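A sketch of the setup as described, under an assumed RK4 discretization with a hypothetical step size (`dt=0.01` is our choice, not stated in the rebuttal): the discrete map `f` is the dynamics, a sequential rollout is the ground truth, and the merit of eq. (7) vanishes exactly on that rollout:

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Lorenz-96 vector field: dx_i = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def f(x, dt=0.01):
    """One RK4 step: this discrete map plays the role of the dynamics f in eq. (1)."""
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(x0, T):
    """Sequential ground-truth evaluation from the initial condition."""
    xs, x = [], x0
    for _ in range(T):
        x = f(x)
        xs.append(x)
    return np.stack(xs)

def merit(xs, x0):
    """Sum of squared one-step residuals (eq. (7)); zero iff xs is a valid rollout."""
    prev = np.vstack([x0[None], xs[:-1]])
    return np.sum((xs - np.array([f(p) for p in prev])) ** 2)

# Perturb the x = F equilibrium slightly to trigger the chaotic dynamics.
x0 = 8.0 * np.ones(5) + 0.01 * np.eye(5)[0]
true_xs = rollout(x0, 50)
```

The MAD metric described above would then be `np.mean(np.abs(est_xs - true_xs))` for an intermediate iterate `est_xs`, measuring distance to the true sequence rather than pairwise consistency.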
Rebuttal 1: Rebuttal: Firstly, we thank all six reviewers for their positive feedback and insightful comments. We present methods for parallelizing the evaluation of non-linear RNNs, building on a recent method, DEER, from Lim et al. We first proved the global convergence of DEER. We then ameliorate DEER’s two major weaknesses: cubic computational complexity and instability. We achieve this by introducing quasi-approximations and trust regions (IKE), respectively. We evaluate all methods and find that quasi and IKE variants match or outperform DEER across a range of metrics, including accuracy, wall clock time, memory and iterations. We first comment on the positives highlighted by the reviewers: - **Well Received**: We feel all reviewers understood and commended the intention and merits of our submission. - **Foundations and Connections**: Many reviewers noted how well-founded our approach is and how it dovetails nicely with recent work. - **Quality of Presentation**: We were especially pleased by the consistent praise for the quality of our submission’s presentation. There were some critical themes shared in some reviews. We comment on these here: 1. **Discussion of Limitations**: Despite several comments praising our discussion of limitations, some reviewers commented we did not adequately address the limitations of the methods we present. In a bid to create better flow, we attempted to “inline” the limitations, addressing them as they arose. However, we clearly missed the mark here. We have added an additional paragraph to the discussion explicitly repeating those limitations (we do note that we did discuss these limitations in the original submission, but appreciate the reviewers’ emphasis on highlighting limitations): a. Quasi methods lose the quadratic convergence properties of Newton (but we show empirically that convergence is *faster in terms of wallclock time* as a result of the efficient approximation). b. 
Although motivated by Proposition 1, the quasi approximation can be significant. c. IKE stabilises evaluation, but, like DEER, has cubic complexity in the state dimension (quasi-IKE then combats this). d. The heuristic of resetting to zeros when unstable is motivated by Proposition 1, but does slow convergence in (q-)DEER methods. e. (q-)IKE adds an additional hyperparameter ($\lambda$). f. Extended discussion on when each technique is likely to yield the best results. g. Architectural/implementation limitations (see more below). 1.1 **On Architectural/implementation limitations**: (q-)DEER and (q-)IKE can be applied to _any_ stateful architecture with state in $\mathbb{R}^D$ with _minimal-to-no_ modifications to the overall pipeline. All methods effectively define a different `forward` method for the model with identical interfaces. Sequential/(q-)DEER/(q-)IKE can be switched between simply by changing a flag in the model class. (q-)DEER and (q-)IKE are then implemented in a model-agnostic manner. The main architectural/implementation limitation, at least at the time of writing, is that Torch does not have an implementation of parallel scan. This is a major reason we wrote our code in JAX. However, with the explosion of parallel-scan-powered methods, it is our understanding that the Torch team is actively developing a parallel scan implementation. 2. **Experiments**: Several reviewers commented on the experimental evaluation – with some excellent suggestions of ways to strengthen them. Off the back of these suggestions, we have included some further experimental evaluation: - **Review PDF Figure 1**: In response to the comments of S8CH, VSf9 and 5uzK, we explore $\lambda$ in (quasi)-IKE for the AR-GRU (Left Column). We sweep over $\lambda$ for 15 different input sequences, and plot the median and quartiles of the cost to convergence in terms of Newton iterates and runtime. 
We see a bathtub curve (large $\lambda$ takes needlessly small steps, slowing progress; small $\lambda$ results in many resets, slowing convergence). Crucially, we see there is little variance across individual sequences. These results show that there is a well-behaved $\lambda$ dependence that can be optimised on a validation set with a simple 1-d grid search. Furthermore, in response to the comments of S8CH, VSf9 and 5uzK, we chart the approximation error vs cost for the AR GRU (Center and Right Column). We see that the approximation error reduces in fewer Newton steps with full DEER as opposed to quasi-DEER, but, crucially, the wallclock time (the more important of the two metrics!) is notably lower across all accuracies for quasi-DEER. This indicates that our more efficient – but approximate – quasi-DEER is broadly preferable to the more expensive – but exact – DEER updates. Furthermore, the stabilised IKE and quasi-IKE are better still. We also show the steps/time to convergence for a range of accuracy thresholds, and see that our methods outperform DEER across the full range of thresholds and metrics. - **Review PDF Figure 2**: In response to the comments from VSf9, KXRw and 4jAU, we include an extra experiment applying (q-)DEER and (q-)IKE to approximate the chaotic Lorenz-96 system. We demonstrate that all the parallelized methods converge to the correct trace, but that (q-)IKE is dramatically more stable at intermediate Newton iterations prior to convergence. > Explicit pointer to full proof of Prop1 in Appendix A.1 We noticed that we forgot to include a pointer in the main text to the full proof of Prop1, which can be found in Appendix A.1 of the original submission. We now include this explicit pointer. **Summary**: We hope that the expanded and clarified discussion of limitations and these additional experiments allay the reviewers' concerns, and help further elucidate the relative benefits of the methods we introduce. 
Thank you again, and please do not hesitate to reply if there are further clarifications we can make! — Submission 13253 Authors Pdf: /pdf/e09e432f0287064441cda01d924a3edc608b25dd.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a stable and scalable method to parallelize evaluation of an RNN across the sequence length. Furthermore, the paper presents a sketch of when the more recently proposed DEER algorithm converges. Alongside these contributions, the paper presents experiments showing the efficacy of the proposed method. Strengths: + Identifies a shortcoming with the DEER algorithm + Presents a computationally efficient and scalable alternative to DEER Weaknesses: - The result proving the convergence of DEER isn't quite rigorous/general -- it relies on the Jacobians being finite, which is quite a strong assumption that isn't true in practice, particularly in situations when the eigenvalues are > 1. The paper sweeps this under the rug through re-initialization, which suggests it isn't quite a general statement. Technical Quality: 2 Clarity: 3 Questions for Authors: - In general, getting to exactly zero error is impractical (floating point errors among other reasons), so how can the theorem be generalized to consider the number of iterations to achieve $\epsilon$ error? As a side consequence, can the authors clarify when DEER can achieve the quadratic convergence rate of Newton's method? How sensitive is DEER to errors blowing up when propagating information across the sequence length? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting some interesting clarification points. > The result proving… This is a really interesting line of thought that we will clarify in the paper. There are lots of points that your comment touches on, so we will try and answer them sequentially: **Re: Generality of Prop 1 with regard to eigenvalues**: We state in Prop. 1 that we assume the Jacobians must be *finite.* Given this assumption, the proof by induction in Appendix A.1 is rigorous and general for real numbers. Thanks to your feedback, we will include an explicit pointer to the full proof. The finiteness of the Jacobians is a core requirement for DEER. Thus, Prop. 1 (a statement about the DEER algorithm) is applicable anywhere that DEER is applicable. We also argue that the finiteness of the Jacobians is a weak assumption, as it is a requirement for training models with backpropagation and is satisfied by many commonly used architectures. However, it is absolutely true that the eigenvalues can be greater than one, and this leads to the *product* of the Jacobians computed by the scan blowing up in magnitude. In fact, the sensitivity of DEER to blowing up is directly linked to the eigenvalues of the Jacobians. It is this explosion that causes overflow on finite-precision architectures. The resetting heuristic combats this, and is motivated by a result derived as part of Proposition 1: that the first $t$ steps have zero residual after $t$ Newton steps (see ~Line 111 or 447), i.e., that we can restart the optimization with the correct $t$-length initialization. We can view this heuristic as allowing the Newton steps to resume when overflow is encountered, as opposed to failing or falling back to sequential evaluation. **IKE directly combats eigenvalues greater than one**: A related point is that IKE was motivated to prevent overflow by limiting the step size in the latent state using a Kalman filter (KF). 
We have since made a connection from the Kalman filter back to the original scan: the trust region/KF can be interpreted as attenuating the eigenvalues by a factor of at least $\frac{1}{\lambda+1}$, stabilising the scan _without_ the use of the heuristic (see **Proof that IKE attenuates the largest eigenvalues in linear recurrence** in response to 5uzK). IKE combats exploding cumulative products without the use of the heuristic. **Re: discussion of the re-initialization heuristic**: we respectfully disagree that we “sweep it under the rug”. We discuss the heuristic in several places, and explicitly highlight the heuristic in Figure 4. The heuristic is a necessary intervention — not originally discussed by Lim et al. — to combat instabilities of undamped Newton in floating-point representations. We see the heuristic as a (albeit small) novel contribution, and a practical corollary of Prop. 1. We have expanded discussion of the limitations and opportunities posed by the reinitialization heuristic. > In general… **Re: connections back to floating point**: Our analysis was a theoretical exploration to develop methods we would eventually test empirically, and so we didn’t make explicit theoretical connections to floating point representations. Floating point precision can introduce instabilities into Newton through overflow and limited numerical precision in functions with large gradients. The finite precision also sets a floor on the fidelity of the convergence. These limitations are further motivation for stabilisation techniques, such as IKE, to combat the instabilities inherent to Newton’s method when executed in finite-precision. **Re: Convergence rates of Newton’s Method**: We spent a lot of time thinking about this. Newton’s method only converges within a suitable basin [Nesterov, 2018, Lectures on Convex Optimization, §1.2.4, p37] — but establishing best practices for initialization is an open problem. 
For instance, Yap provides a bound on the norm of the basin [Yap, Fundamental Problems in Algorithmic Algebra, 1993, Lecture IV, §10, p. 174]. However, this definition requires bounding the derivative of the objective function, which is harder than the original problem. Nesterov derives a basin for quadratic convergence around the true solution [Theorem 1.2.5; Nesterov, 2018, Lectures on Convex Optimization, §1.2.4, p39], but does not provide information on how to locate this basin a priori. Indeed, Nesterov defaults to taking standard gradient steps early in optimization until you assume you are in the basin, and _then_ using Newton steps [Nesterov, 2018, Lectures on Convex Optimization, §1.2.4, p39]. We experimented with this initially, but found it to be worse (by any metric we conceived). Quasi-methods do not, in general, enjoy quadratic convergence (though showing the global convergence of quasi-DEER was a major motivation behind Proposition 1), but they are often observed to converge in practice. We have not been able to prove results about the convergence rates of the quasi methods we propose. However, our experiments show that their rates are acceptable in practice. Thus, studying quasi-DEER from a theoretical perspective is an interesting open problem. As a result of your points, we have included in the conclusion a list of open theoretical problems inspired by your questions. We have also added a brief survey on the convergence properties of Newton variants. > Limitations: Yes Is the reviewer saying there are limitations that we do not discuss? Or that we do satisfactorily discuss limitations? If the former, please see the Global Response for information on how we have updated the paper. Thank you again for providing some really interesting and unique insights. The extra discussion has definitely improved the paper. Please let us know if you have any further questions!
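[Editorial sketch] The attenuation claim in the rebuttal above ("attenuating the eigenvalues by a factor of at least $\frac{1}{\lambda+1}$") can be illustrated numerically. The toy below is not the paper's IKE implementation; it only compares the growth of the cumulative Jacobian products in a scalar linearized scan, with and without the assumed $1/(1+\lambda)$ damping factor. All names (`a`, `lam`, `T`) are illustrative.

```python
import numpy as np

# Undamped DEER propagates products of Jacobians along the scan; if a
# (scalar) Jacobian a exceeds 1 in magnitude, the product a**T blows up,
# which is the overflow failure mode discussed in the rebuttal. A damping
# factor of 1/(1 + lam) per step (the assumed KF/trust-region attenuation)
# keeps the product bounded once lam > |a| - 1.

def cumulative_factors(a, T, lam=0.0):
    """Growth |a / (1 + lam)|**t of the linearized scan, for t = 1..T."""
    g = abs(a) / (1.0 + lam)
    return g ** np.arange(1, T + 1)

a, T = 1.5, 50
undamped = cumulative_factors(a, T)          # 1.5**50 ~ 6e8: overflow territory
damped = cumulative_factors(a, T, lam=1.0)   # 0.75**50 ~ 6e-7: stable
print(undamped[-1] > 1e8, damped[-1] < 1e-6)  # True True
```

This also matches the bathtub behaviour reported in Review PDF Figure 1: too large a $\lambda$ over-damps every step, too small a $\lambda$ leaves the products explosive.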
Summary: This paper addresses the problem of parallel computation in RNNs, to tap into the full potential of the highly efficient parallel machines (GPUs) available today: a naive inference of a recurrent model would apply each layer to the current hidden state, thereby requiring $T$ sequential computations for a sequence of length $T$. The approach taken by this paper, which is introduced in the framework of Newton's method for root finding, proposes finding the solution to a chain of equations that, if satisfied, verifies that the found activations are equal to those of sequential inference. Given sequence states $s_1,\dots , s_T$, the general approach (based on the earlier DEER method) is to define a sequence of $T$ residuals $$r = [s_1-f(s_0),\dots, s_T-f(s_{T-1})]$$ which capture the discrepancies between successive activations and the update function applied at that step. The update rule is based on the gradient of these residuals with respect to the layer activations $$\Delta s_t^{(i+1)} = \left[\frac{df}{ds_{t-1}} (s_{t-1}^{(i)}) \right] \Delta s_{t-1}^{(i+1)} - r_t(s^{(i)})$$. Since at each step we can compute activations in parallel, we can execute this update for all $t=1,\dots, T$ in parallel. Proposition 1 states that at most $T$ updates will be needed to arrive at the correct solution. This is already introduced in DEER, but this paper goes on to address two main problems with DEER. The linear recurrent update suffers from poor scalability, due to the $D\times D$ size of the full Jacobians impacting both memory and time complexity, and from numerical instability, due to singular values larger than one in the Jacobian. I had a difficult time delineating the contributions, but I believe the following to be the main contributions: 1. Quasi-DEER: using a simplified form for the Jacobian where only the diagonal terms of the Jacobian are considered, leading to linear memory and time complexity per layer. 
However, this variant might still suffer from numerical instabilities. 2. Iterative Kalman Evaluation (IKE), where the updates to the activations are "damped" by adding an L2 regularization term to the update rules, which can effectively prevent instabilities. Strengths: - The paper considers an important problem, i.e., parallel computation of RNNs, which is of high practical importance. - The key contributions of the paper in quasi-DEER and (quasi-)IKE are novel and address issues of scalability and numerical instability. - To the best of my knowledge, the proposed contributions are novel. Weaknesses: I prefer to package most of my perceived weaknesses in the questions section, as they are mostly minor. Here are my main concerns: * As the authors list lambda for IKE as a hyperparameter, it warrants a more comprehensive discussion on its choice and impact on the numerical results. Namely, an ablation study that shows how sensitive the choice of lambda is. Also, whether it is task- and data-dependent, or fairly robust across these. * Contributions: The manuscript would benefit from a clearer delineation of its contributions. Since the paper draws heavily from previous work that introduced DEER, it was a bit challenging for me at the end to delineate what in the paper was novel and the current paper’s contribution. While IKE and quasi-IKE and the propositions were clearly novel, other points were not as clear to me. Technical Quality: 3 Clarity: 3 Questions for Authors: * Proposition 1 seems to suggest that convergence stops after T steps, but that is not any speedup over the trivial sequential computation of T steps of the RNN, so it is of little value it seems. Or am I missing something? Is this T-step iterative method still better than a simple sequential RNN computation? Perhaps that’s just theory; if the practical results are better, they had better be stated here? 
* From seeing Figure 2, it seems like DEER & quasi-DEER are faster; in light of my previous questions, can the authors elaborate on why that is the case? Is this because the actual number of steps required to stop is much less than T, or is it some hardware-related efficiency of the (quasi-)DEER implementation, vs the naive implementation? * In my initial reading, I confused Newton's method for root finding with Newton's method for optimisation, which would involve the Hessian of the loss for finding the minima of a function, while the formula involving the Jacobian is for finding the “root” of a function. I'm not sure if other readers would have a similar confusion, but perhaps a small note would help. * I was missing a bit of the high-level intuition behind the residual sequence and how it leads to the same sequential solution. For example, I wasn't sure if/how the equations would lead to a unique solution all the time or not, and why that is so. I think I would have understood it better with such an explanation. * To me it seems that the NN layer updates must be deterministic for this approach to work. Is this impression correct? * Also, perhaps it is beneficial for readers if the authors elaborate on what types of sequential RNN architectures would be compatible with their approach for parallel computation? * “Provided all of $\{df/ds_{t-1}\}_{t=2}^T$ are finite, then the Jacobian defined in eq. (3) is invertible and all of the eigenvalues are equal to one.” I don't understand why this is the case? Why are the eigenvalues equal to one? Matrix J has off-diagonals, so it’s not quite clear why J inverse is so trivial to compute? * I find the following poorly stated/supported as of my reading so far: * 137 As a consequence of our Proposition 1, replacing the Jacobians $\{df/ds_{t-1}\}_{t=2}^T$ with an arbitrary matrix will still result in global convergence of the resulting DEER-like algorithm in at most T iterations. * Replacing J with any **arbitrary matrix**? really? Why? 
* For a similar reason: A straightforward way to reduce the computational cost is to replace $\{df/ds_{t-1}\}_{t=2}^T$ with $\{\mathrm{diag}(df/ds_{t-1})\}_{t=2}^T$; that is, to take the diagonal entries of the Jacobians of the dynamics functions. * Why does this work? Is this a theoretical result, or an empirically valid approach? * Question on Figure 6: is the later linear growth of the wall clock because of the limited parallel compute on the GPU? * Section 6.1: why is convergence of a random AR stable, while convergence of a trained AR model is more unstable? Can you elaborate? Is this setup leading to the large top eigenvalue problem mentioned earlier? * Figure 4: I’m not sure I understand the top-right panel for some traces: * Why does the DEER (red) trace go down at such a late iteration? I thought it was unstable and would never converge. * Also, the quasi-DEER (orange) trace goes up and then down; what does that mean? * Why does IKE go down faster at first, but then quasi-IKE converges faster? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I am a little concerned about the way that limitations are currently addressed. - In responses to the questions, the authors have listed Section 6.3 as a limitation, but I do not see where this section is exposing their method's limitations. - As also mentioned in the questions, I would appreciate if the authors make some comments on architectural limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough and positive review! We refer the reviewer to the global response for discussion of an ablation study on $\lambda$ (r.e. “As authors list”) and limitations (r.e. “In responses to” and “As also mentioned”). We now respond to your other comments: > Contributions: The manuscript… We will explicitly list our contributions in the introduction. Specifically we: - Show DEER is globally convergent - Introduce quasi-approximations to improve efficiency - Introduce trust regions (IKE) to stabilise DEER - Empirically validate the methods, and provide guidance on how to select and tune them - Have since explicitly shown there is a unique global solution to DEER (see “I was missing…” response) and provide a reinterpretation of how trust regions stabilise the original scan (see 5uzK response). > The proposition 1… **and** From seeing the… You are absolutely correct. Proposition 1 tackles the theoretical global convergence of the algorithm, showing we can expect convergence. The upper bound is impractical as it requires $T$ iterations; but this is a _worst case_ convergence and we often require much fewer than $T$ iterations (e.g. Figure 4a). Each Newton step requires more work than a sequential step; but fewer steps are required and the work is parallelizable. Proposition 1 was also important in developing the resetting heuristic for (q-)DEER. We have added clarification on this. > In my initial… Thank you for raising this. While DEER was inspired by the root finding perspective, IKE was inspired by the optimization perspective (see Section 4.2). We have further clarified the relationship between perspectives and their implications. > I was missing… We add a proof of the uniqueness of the solution, and global convergence to this solution, as we did not make this connection clear enough. To summarise: For a deterministic forward function and fixed inputs, there is a fixed sequence of latents and outputs. 
This is therefore the **only** sequence with zero residual (i.e. there is a unique sequence $s_{1:T}^*$ generated by the deterministic dynamics). Furthermore, DEER cannot get stuck at any point that is not this sequence. We prove this in Proposition 1. Another way to see this however is that each update step (eq. (5)) is equal to $J^{-1} r$. But, we have established (see “Provided all of” below) that $J$ is always invertible and so has trivial nullspace. Furthermore, the residual $r$ can only be zero at the unique solution $s_{1:T}^*$. Thus $J^{-1} r$ is nonzero everywhere except at the true solution, where it is zero. Thus, DEER cannot get stuck en route to finding the true and unique solution. > To me it seems… This is an interesting point: DEER and IKE *can* handle stochastic models – because stochastic models become deterministic when the stochasticity is fixed i.e. has been reparameterized as in Figure 4. In fact, DEER and IKE have the same requirements as backpropagation, and so can be applied to any stateful recurrent model that admits backpropagation – a huge and practical class of models. > Provided all of… The eigenvalues of a lower-triangular matrix are equal to its main diagonal. The Jacobian in eq. (3) is the derivative of eq. (1). Each term in the vector is of the form $s_t - f(s_{t-1})$. Therefore, the Jacobian with respect to $s_t$ is the identity matrix; hence the leading diagonal is the identity; and all its eigenvalues equal one and the Jacobian is invertible (no zero eigenvalues). Efficiently solving this inverse is not tractable (see Line 92). However, we avoid computing the inverse by exploiting the structure in $J$ to instead pose eq. (5) as the recursion in eq. (6). This recursion is linear and can be solved in parallel with a scan. 
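[Editorial sketch] The structural argument in the answer above — identity on the main block diagonal, hence all eigenvalues one, hence invertibility, and the inverse-free recursion of eq. (6) — can be checked on a tiny toy system. The shapes and the random sub-diagonal blocks below are illustrative stand-ins, not the paper's code.

```python
import numpy as np

# For residuals r_t = s_t - f(s_{t-1}), the full Jacobian J is block
# lower-bidiagonal: identity blocks on the diagonal, -df/ds_{t-1} on the
# sub-block diagonal. A triangular matrix's eigenvalues are its diagonal
# entries, so every eigenvalue of J is one and J is invertible.
D, T = 2, 4
rng = np.random.default_rng(0)
A = rng.normal(size=(T, D, D))              # stand-ins for df/ds_{t-1}

J = np.eye(T * D)
for t in range(1, T):                       # sub-block diagonal: -df/ds_{t-1}
    J[t*D:(t+1)*D, (t-1)*D:t*D] = -A[t]

# Lower-triangular with unit diagonal => all eigenvalues equal one.
print(np.allclose(np.triu(J, 1), 0.0), np.allclose(np.diag(J), 1.0))

# The Newton system J @ delta = -r never needs an explicit inverse: reading
# off the block rows gives the forward recursion delta_t = A_t delta_{t-1} - r_t
# (eq. (6) in the paper's notation), which a parallel scan can evaluate.
r = rng.normal(size=(T, D))
delta = np.zeros((T, D))
delta[0] = -r[0]
for t in range(1, T):                       # sequential stand-in for the scan
    delta[t] = A[t] @ delta[t-1] - r[t]
print(np.allclose(delta.reshape(-1), np.linalg.solve(J, -r.reshape(-1))))
```

The second check confirms that the $O(T)$ recursion produces exactly the solution of the full linear system, which is why the dense $O((TD)^3)$ solve in Line 92's sense is never needed.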
> I find the… We will make this discussion more clear in the paper: The elements in the sub-block-diagonal of $J$ can be replaced with arbitrary values – but the main block diagonal must remain as the identity and all other entries must be zero. Retaining convergence under modifications to the sub-block-diagonal portion is a corollary of Proposition 1, and can be seen from eq. (6): If all the states up to and including position $t-1$ at the $(i)$th Newton iteration are correct, then the update in eq. (6) at Newton iteration $(i+1)$ for position $t$ will use $\Delta s_{t-1}^{(i+1)} = 0$ (no update is required at position $t-1$), and so the update to $s_{t}^{(i+1)}$ no longer depends on the Jacobian. We will explicitly state this, as it motivates q-DEER and is, as you point out, a surprising result. We exploit this to develop q-DEER, retaining only the diagonal of the Jacobians. This reduces the parallel scan from $O(D^3)$ to $O(D)$ making each iteration faster (while still admitting global convergence as above), but needs more Newton iterations to converge due to approximate updates. We find that this trade-off often yields a faster wallclock time (see Review PDF Figure 1). Explicitly, the global convergence of q-DEER is a theoretical result (a corollary of Proposition 1), but the fast runtime of q-DEER in practice is an empirical result. > Question on Figure 6… Yes, that's a great point! We will more heavily emphasise this. > Section 6.1: why… Yes! The trained model does have larger eigenvalues, both at the true latent sequence, and at the sequences visited during the Newton iteration. > Figure 4: I’m… We use “unstable” to mirror Newton’s method parlance. Proposition 1 shows that $t \leq (i)$ have zero error, and so the instability applies only to $t>(i)$. Unstabilized methods can be arbitrarily bad in this regime until global convergence, but *may* converge before then. (q-)IKE removes the instabilities, leading to faster overall convergence. 
We have added clarification on this around Figure 4. Thank you for your thorough analysis, and for raising many points that have improved the paper. Please let us know if you have any further questions! --- Rebuttal 2: Comment: I would like to thank the authors for clarifying the questions I asked, and addressing my concerns. I believe several of the responses should be added to the main manuscript to make it more readable. > The upper bound is ... Each Newton step requires more work than a sequential step; but fewer steps are required and the work is parallelizable. I can see that the theory is worst-case and in practice "fewer steps but more work" makes some sense on a very high level. But I was hoping the authors could provide some intuition as to why there are fewer steps than $T$ needed in practice? Given that I already had a high score, I will keep my overall score as is but increase the contribution score (from 2 to 3). --- Rebuttal Comment 2.1: Comment: Thank you for reading our rebuttal, for your high score, and for advocating for acceptance of this paper. We are glad that our clarifications were helpful. We agree that your commentary has made the paper better! > The upper bound is... The intuition for why far fewer steps than $T$ are often needed in practice comes from the fact that DEER is equivalent to Newton's method. As we discuss in the rebuttal to reviewer jU7g, Newton's method provably enjoys quadratic (very fast) convergence in a basin near the true solution. Moreover, as exhibited by the widespread usage of Newton's method across many domains, Newton's method can exhibit fast convergence in practice. However, a major motivation for this paper is that globally, Newton's method can be unstable and converge slowly. This instability is a major motivation for our development of IKE. Another intuition is available from the scan (dynamical system) perspective. 
At each "Newton iteration," DEER linearizes the nonlinear dynamics of the RNN it is evaluating. To the extent that linear approximations are a very powerful tool across a wide variety of domains (think Taylor expansions), this linear approximation can be a decent approximation, and so lead to rapid convergence. For example, if we were dealing with linear RNNs, DEER would converge in one Newton iteration. In this paper, we are instead dealing with nonlinear RNNs, so more Newton iterations are required. Please let us know if there is anything else we can clarify! --- Rebuttal 3: Comment: This is very good to hear. Yes, we will certainly flesh out these intuitions in any camera ready version, and agree that they will make the paper even better.
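[Editorial sketch] The one-iteration claim for linear RNNs made in the rebuttal above is easy to verify numerically: for linear dynamics the linearization is exact, so a single pass of the recursion $\Delta s_t = A\,\Delta s_{t-1} - r_t$ lands on the true trajectory from any initialization. Toy sizes; all names are illustrative, not the paper's code.

```python
import numpy as np

# Linear RNN: s_t = A s_{t-1} + u_t. Run the sequential ground truth, then a
# single DEER step from an arbitrary guess, and check they coincide.
D, T = 3, 8
rng = np.random.default_rng(2)
A = 0.8 * rng.normal(size=(D, D))
u = rng.normal(size=(T, D))
s0 = np.zeros(D)

true = []
prev = s0
for t in range(T):                          # sequential evaluation
    prev = A @ prev + u[t]
    true.append(prev)
true = np.stack(true)

s = rng.normal(size=(T, D))                 # arbitrary initial guess
prev = np.vstack([s0, s[:-1]])
r = s - (prev @ A.T + u)                    # residuals r_t = s_t - f(s_{t-1})
delta = np.zeros_like(s)
delta[0] = -r[0]
for t in range(1, T):                       # one linear-recurrence sweep
    delta[t] = A @ delta[t-1] - r[t]
s = s + delta                               # one DEER (Newton) step

print(np.allclose(s, true))                 # True: converged in one iteration
```

With nonlinear dynamics the linearization is only approximate, which is why several iterations are needed — but, per the intuition above, typically far fewer than $T$ when the linear model is a decent local approximation.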
Detecting Bugs with Substantial Monetary Consequences by LLM and Rule-based Reasoning
Accept (poster)
Summary: This study is motivated by accounting bugs in a real-world setting. The authors focused primarily on the Flash Loan attack in smart contracts, which cost $50 million due to eight accounting bugs in DefiLlama. The proposed method uses a Large Language Model (LLM), GPT3.5-Turbo, to trace a smart contract in the order of the control flow graph (CFG) and generate statements that include possible errors or LLM hallucinations. A follow-up step involves iterative self-reflection of the GPT via reasoning trace. Strengths: - Existing vulnerability detection tools cannot detect accounting vulnerabilities in smart contracts - Leverages LLMs to understand the business logic of smart contracts - Analysis of reducing false positives, which are a major limitation of static analysis tools - New bugs were investigated (see Section 4.1). Weaknesses: - The proposed approach's generalizability is not clearly proved. The evaluation on new smart contract bugs is rather minor, with only six true positive bugs. - The proposed technique uses GPT3.5-Turbo rather than an LLM trained on code, such as Code Llama [4] or CodeBERT [5]. **Minor Notes** - Reducing false positive alarms is a desirable property for security static analysis tools; otherwise, developers will be discouraged from using the tool because of the frequent false positive alarms. The authors could consider the paper [2] to reduce false positive rates of a static analysis tool. - The authors did not explain how to scale their approach. The authors may potentially scale their technique by taking into account the paper [3]. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. How were the 119 rules derived? Open coding is a common method for extracting rules from software artifacts, followed by inter-rater agreement. See a sample paper [1]. I am happy to raise my score if the authors address my questions and the points mentioned in **Weaknesses**. References: 1. Rahman, A., Parnin, C. and Williams, L., 2019, May. 
The seven sins: Security smells in infrastructure as code scripts. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE) (pp. 164-175). IEEE. 2. Reis, S., Abreu, R., d'Amorim, M. and Fortunato, D., 2022, October. Leveraging practitioners’ feedback to improve a security linter. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (pp. 1-12). 3. Rahman, A., Farhana, E., Parnin, C. and Williams, L., 2020, June. Gang of eight: A defect taxonomy for infrastructure as code scripts. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (pp. 752-764). 4. https://ai.meta.com/blog/code-llama-large-language-model-coding/ 5. Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Qin, B., Liu, T., Jiang, D. and Zhou, M., 2020, November. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1536-1547). Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: This paper mentions limitations of the proposed approach in Appendix I. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 2yGP ### Answers to Questions of Reviewer 2yGP **Q1. How were the rules derived?** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We are smart contract auditors with years of experience. We have studied many business models (and their variants). These rules are invariants (i.e., properties held across different business models) we manually summarized. We really appreciate the pointer to the inspiring “seven sins” paper [1]. As mentioned in the limitations of our submission (lines 613-615), our rules are not comprehensive. A large empirical study involving more domain experts would help us complete the rule set. In addition, our rules could be extended to capture smells, as according to the paper, smells are quite problematic. **Generalization to unknown bugs** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please see C2 of the global response. **Using CodeLlama and CodeBert** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We tried CodeLlama-7B-instruct-hf and 13B-instruct-hf during rebuttal. CodeLlama-7B failed to produce formatted outputs in many cases such that our pipeline could not parse its results properly. Similar problems were observed in existing works [2,3]. 13B performs better in this matter. However, it does not seem to understand the nuances of various financial meanings even with the few-shot examples. As such, it produced much worse annotation results than our default setting (41.9% vs 75.6%). This indicates the level of intelligence of the underlying LLM is quite important. Please also see C1 in the global response, in which GPT4 has better performance than our default setting (using GPT3.5), namely, 86.6% annotation accuracy (versus 75.6%), fewer FPs, and a much smaller number of iterations before termination (2 vs 12), indicating that it has much fewer hallucinations. However, it also comes with a much higher cost. 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CodeBERT generates embeddings for both code and natural languages and does not operate like an LLM that can answer our questions. As such, it cannot be used in our pipeline. Exploring integration of code-language models and LLMs is part of our future work. **Reducing FPs** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Thanks for your pointer to the ASE’22 paper [4]. Although our tool does not produce that many FPs, reducing them is always highly desirable. We will open-source the tool upon publication and plan to solicit feedback from the community through a large-scale study like the one in the paper. The authors have strong connections with both the blockchain industry and the auditing community. These connections will be instrumental for the study. **How to scale our approach** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Thank you for the pointer to the “Gang of Eight” paper [5]. The scale of the study in the paper and the impact are exemplary. We aim to achieve similar ramifications. Our work mainly followed the taxonomy proposed in [6], which studied 500 or so critical/exploitable real-world smart contract bugs in 2021-2022. We can clearly see the benefits of having a larger study and a more thorough taxonomy than [6]. From our experience, accounting bugs continue to be a very important bug category, causing substantial monetary loss, and the current tool support is still insufficient. Moreover, we observe that recent smart contracts have increasingly complex business models, which demand a more fine-grained study. We plan to conduct a larger-scale study focusing on accounting bugs in the past five years and hopefully come up with a fine-grained taxonomy and a comprehensive set of invariants (similar to the rules in our submission). Similar to [1,4,5], we will leverage efforts/feedback from the community. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We will cite [1,4,5] and include the above discussion in the paper. 
### References > [1] Rahman, A., et al. “The seven sins: Security smells in infrastructure as code scripts.” ICSE’19. > > [2] Beurer-Kellner, Luca, et al. "Prompt Sketching for Large Language Models." ICML’24. > > [3] Beurer-Kellner, Luca, et al. "Prompting is programming: A query language for large language models." PLDI’23. > > [4] Reis, S., et al. “Leveraging practitioners’ feedback to improve a security linter.” ASE’22. > > [5] Rahman, A., et al. “Gang of eight: A defect taxonomy for infrastructure as code scripts.” ICSE’20. > > [6] Zhang, Zhuo, et al. "Demystifying exploitable bugs in smart contracts." ICSE’23. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 2yGP, thank you very much for your insightful review. We wonder if you have had a chance to look into our response. Although we think we have addressed your questions and concerns, we are worried that there may still be places of confusion. We would be very grateful for any feedback you may be able to provide such that we could address your further questions before the interactive period ends. Sincerely, The authors --- Rebuttal Comment 1.2: Comment: Dear Reviewer 2yGP, Thank you for your time and positive review. Since the interactive period is coming to an end, we would be very grateful if you could provide feedback on our response. We believe that we have addressed your question regarding the derivation of rules (question one), generalization to zero-day bugs (weakness one), using CodeLlama and CodeBERT (weakness two), reducing false positives (minor note one), and scaling the technique (minor note two). We would be more than happy to answer any further questions you may have before the period closes. ==================================================================== In case you have further questions regarding the quality of the rules, we proactively provide the following explanation. Our rules are comprehensive for common business logic. 
Note that these are annotation propagation and checking rules; problematic rules will lead to failures in type checking. The fact that our system can detect real accounting bugs and rarely has false positives (when typing thousands of functions) demonstrates the quality of our rules. That said, our rules do not model certain rare business behaviors such as options in future contracts. That is why we said our rules are not comprehensive in the limitation discussion of the submission. An analogy is that we are building some type system for Python programs. We design some rules. These rules allow typing common program behaviors, for example, those involving integers and strings. However, we lack rules to type rare behaviors of complex numerical values. The rules are comprehensive and correct for typing the common behaviors, because if they were problematic, many correct Python programs (operating on integers and strings) would be flagged as having type errors, and buggy Python programs would escape the type checking. We will open-source our system and the rules upon publication. Sincerely, The authors --- Rebuttal 2: Comment: Thank you for reviewing our paper. We hope that our responses have addressed your concerns. If there are any remaining issues or questions, we would greatly appreciate your feedback so we can make further improvements. --- Rebuttal 3: Comment: I would like to thank the authors for their time and addressing my comments. I have also read other reviewers' feedback and the authors' global rebuttal, and I have increased my score. As this paper addresses an important yet underexplored real-world problem, e.g., automatically detecting accounting bugs, I will be happy to see this paper accepted. I would request the authors to incorporate all reviewers' feedback in the camera-ready version if the paper is accepted. --- Rebuttal Comment 3.1: Comment: Dear Reviewer 2yGP, It seems our messages may have crossed. 
We greatly appreciate your thoughtful feedback and are pleased that you found our work valuable. We will incorporate all reviewers' feedback in the camera-ready version if the paper is accepted.
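As a toy illustration of the annotation-propagation-and-checking idea explained above (this is not the paper's actual rule set), one can think of each variable carrying a financial-type label, with arithmetic rules that either propagate the label or flag a violation; the label names `raw_balance` and `norm_balance` below are hypothetical:

```python
# Toy sketch (not the system's actual 119 rules) of annotation propagation and
# checking: each variable carries a financial-type label, addition propagates
# the label when both operands agree, and mixing incompatible labels (e.g. a
# normalized balance with an unnormalized one) is flagged as a violation.

COMPATIBLE_ADD = {("raw_balance", "raw_balance"), ("norm_balance", "norm_balance")}

def check_add(lhs_label, rhs_label):
    """Return the propagated label of lhs + rhs, or raise on a rule violation."""
    if (lhs_label, rhs_label) in COMPATIBLE_ADD:
        return lhs_label
    raise TypeError(f"accounting rule violation: {lhs_label} + {rhs_label}")
```

Under rules of this shape, a well-typed contract passes silently while a mixed-label addition fails type checking, which is why problematic rules would surface as spurious failures on correct programs.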
Summary: The work proposes a system to detect accounting bugs in smart contracts by combining large language models (LLMs) and rule-based reasoning. The system first annotates the source code with LLMs to identify the parameters and global variables that are relevant to accounting bugs. Then, the system uses rule-based reasoning to detect accounting bugs by checking the annotated variables against a set of accounting rules. The system is evaluated on 23 real-world projects, and further on 5 recent projects from this year, showing that the system is able to detect accounting bugs effectively. Strengths: * The system combines LLMs and rule-based reasoning to build an effective accounting bug detection system, overcoming the limitations of LLMs. * The system is able to detect accounting bugs effectively and is further demonstrated by a study of 5 recent projects released this year. * The technique is fast and cost-effective to run without developers manually annotating the source code, making it potentially useful for a wide range of adoption. Weaknesses: * The system is not compared with other bug detection systems, and the narrow focus on accounting bugs may limit the applicability of the system. For example, there are about 9 projects out of 23 that are considered "not in the scope" for the system, and even with human annotation, the system cannot detect the bugs. It is not clear how the limitations of the system compare to other bug detection systems. * The paper would benefit from more analysis on how the LLM is able to perform the annotation. For example, how sensitive is it to the syntax of the code, and is a model less capable than GPT-3.5 enough to do the annotation? Technical Quality: 3 Clarity: 3 Questions for Authors: * It is mentioned that the system only runs on those source code files with accounting bugs; however, in a real-world scenario, the vulnerability is often not known in the first place. 
What are the false positive rates of the system given a whole project's source code files, assuming the vulnerability location or existence is unknown? * What portion of the code is included when prompting the model to do annotations of parameters and global variables? Are the relevant code snippets or function implementations included in the prompt? * How does this work compare with other automatic or semi-automatic bug detection systems? The results table only shows the performance of the proposed system comparing the LLM annotation with human annotation and with/without reflection, and does not compare to other baseline systems. * How sensitive is the LLM variable annotation to variable names, function names, or comments in the code? Would removing or altering the natural language, such as renaming the variables, affect the annotation accuracy? * From Table 1, it seems that for some projects, the system didn't annotate correctly for some variables, and still performed the same as the human annotation in terms of TP and FP. Could you provide more insights on why this is the case? What is essential for the system to detect accounting bugs effectively if the annotation is not perfect? * The reflection process of fixing the annotation may iterate several times. How is it determined when to stop? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are properly addressed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer uXwt ### Answers to Questions of Reviewer uXwt **Q1. False positive (FP) rate when bugs are unknown** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please see C2 in the global response. **Q2. Code snippets included during prompting** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We prompt functions individually. We parse the output of each variable (including global variables). Our rules will propagate such information across functions (e.g., parameter passing from a caller to a callee and uses of the same global variable). In other words, we do not have to prompt a large code body (e.g., including relevant functions) because of the use of rules. **Q3. Other baselines** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please see C1 in the global response. **Q4. Sensitivity to variable names** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We conducted an experiment in which we leveraged the Solidity compiler to rewrite variable names (to something as plain as v1, v2), without changing program semantics. We kept the function names. We then reran our pipeline with the modified code. Our results showed that the annotation accuracy degraded from 75.6% to 31.7%. The true positives (TPs) degraded from 19 to 14, and the FPs increased from 9 to 21. It is not surprising because without proper variable names, it is extremely difficult even for humans to decide the financial meaning of an operation, e.g., a simple addition. Please also see Q5 for an explanation of why substantial annotation degradation may not yield a proportional loss in TPs. **Q5. Incorrect annotation may not affect TP or FP, and what is essential for the system to detect accounting bugs effectively if the annotation is not perfect?** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;For TPs, many incorrect annotations do not happen in the variables that are involved in the bugs. 
For FPs, the same operations (e.g., simple additions) are allowed for multiple financial types such that even though a variable is mis-annotated, the system may not flag its operation as an error. We will elaborate. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;To detect an accounting bug, the annotations for the variables involved in the business operation should be correct. **Q6. Termination of reflection** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;It terminates when the LLM does not propose any new annotations for variables. Note that there are finite types of annotations and hence termination is guaranteed. **Can a model less powerful than GPT3.5 work?** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We tried CodeLlama-7B-instruct-hf and 13B-instruct-hf during rebuttal. CodeLlama-7B failed to produce formatted outputs in many cases such that our pipeline could not parse its results properly. Similar problems were observed in existing works [1,2]. 13B performs better in this matter. However, it does not seem to understand the nuances of various financial meanings even with the few-shot examples. As such, it produced much worse annotation results than our default setting (41.9% vs 75.6%). This indicates the level of intelligence of the underlying LLM is quite important. Please also see C1 in the global response, in which GPT4 has better performance than our default setting (using GPT3.5), namely, 86.6% annotation accuracy (versus 75.6%), fewer FPs, and a much smaller number of iterations before termination (2 vs 12), indicating that it has much fewer hallucinations. However, it also comes with a much higher cost. ### References > [1] Beurer-Kellner, Luca, et al. "Prompt Sketching for Large Language Models." ICML’24. > > [2] Beurer-Kellner, Luca, et al. "Prompting is programming: A query language for large language models.", PLDI’23. --- Rebuttal Comment 1.1: Comment: Dear Reviewer uXwt, thank you very much for your insightful review. 
We wonder if you have had a chance to look into our response. Although we think we have addressed your questions and concerns, we are worried that there may still be places of confusion. We would be very grateful for any feedback you may be able to provide such that we could address your further questions before the interactive period ends. Sincerely, The authors --- Reply to Comment 1.1.1: Comment: Dear Reviewer uXwt, Thank you for your time reviewing our paper. Since the interactive period is coming to an end, we would be very grateful if you could provide feedback on our response. We believe that we have addressed your six questions regarding false positives and unknown bugs (Q1), code snippets used in prompting (Q2), comparison with baselines (Q3), result sensitivity to variable names (Q4), Table 1 clarification (Q5), and termination of reflection (Q6). We have conducted new experiments and included the results in our response. In addition, we have responded to the two weaknesses: comparison with baselines (w1) and using weaker models (w2). Regarding the criticism of narrow focus (part of w1), we have the following further explanation. DeFi projects are the most important type of smart contracts. Their overall market value has reached $103.63B [https://defillama.com/categories]. These projects are susceptible to accounting bugs. Every accounting bug is directly tied to monetary losses. At the time of writing our paper, accounting bugs had caused $50M in damages in 2024, accounting for 25% of the total loss from smart contract exploits. Since our paper submission, there has been another exploit of an accounting bug that led to the loss of 6.8 million US dollars. Therefore, we consider there is a pressing need to automatically detect such bugs. In addition, real-world accounting bugs are very difficult to detect as they are essentially functional bugs that are contract specific. Thank you for your time in advance! 
Sincerely, The authors --- Rebuttal 2: Comment: Thank you for reviewing our paper. We hope that our responses have addressed your concerns. If there are any remaining issues or questions, we would greatly appreciate your feedback so we can make further improvements.
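The termination argument in the Q6 answer above (reflection stops once the LLM proposes no new annotations, and the set of annotation labels is finite) can be sketched as a simple fixpoint loop. This is an illustrative sketch, not the paper's implementation; `query_llm_for_annotations` and `max_iters` are stand-ins introduced here for illustration:

```python
# Hypothetical sketch of the reflection loop: iterate until the LLM proposes
# no annotation it has not already committed to. `query_llm_for_annotations`
# is a placeholder for the actual LLM call; `max_iters` is a safety bound,
# mirroring the finite-annotation-vocabulary termination argument.

def reflect_until_fixpoint(functions, query_llm_for_annotations, max_iters=20):
    annotations = {}  # variable name -> financial-type label
    for _ in range(max_iters):
        proposed = query_llm_for_annotations(functions, annotations)
        # Keep only proposals that change the current annotation set.
        new = {v: t for v, t in proposed.items() if annotations.get(v) != t}
        if not new:  # no new annotations: fixpoint reached, reflection stops
            break
        annotations.update(new)
    return annotations
```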
Summary: The paper introduces ABAUDITOR, a hybrid system that combines LLMs and rule-based reasoning to detect accounting bugs in smart contracts. It leverages the semantic understanding capabilities of LLMs and rule-based logic for validating operations. Strengths: - The proposed method is computationally inexpensive, facilitating the use of the model on large source code. - The combination of logical rule-based reasoning with LLMs reduces the model's tendency to produce inaccurate outputs. Weaknesses: - The primary weakness is that the only baseline used is human annotation; for example, it's not clear how the method compares to those in [1] and [2]. Therefore, the significance of the results remains uncertain. [1] Zhang, Zhuo, et al. "Demystifying exploitable bugs in smart contracts." 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2023. [2] Sun, Yuqiang, et al. "Gptscan: Detecting logic vulnerabilities in smart contracts by combining gpt with program analysis." Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 2024. Technical Quality: 2 Clarity: 3 Questions for Authors: - How many times were the experiments run? - Please clarify the origin of the 119 rules. - The manual evaluation process is not sufficiently explained; for example, the number of people involved and how conflicts were handled are not detailed. Please provide more information on these aspects. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: - In addition to the concerns above, the authors need to justify how scalable the technique is across various programming languages, such as dynamically typed languages like python. - Please clarify if these 119 rules address all accounting bugs or just a selection of them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer F88y ### Answers to Questions of Reviewer F88y **Q1. How many times were the experiments run?** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Three due to the cost involved. We will clarify. **Q2. Origin of the rules** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We are smart contract auditors with years of experience. We have studied many business models (and their variants). These rules are invariants (i.e., properties held across different business models) we manually summarized. We are currently exploring using LLMs to extract a more comprehensive set of such properties. We will elaborate in the paper. **Q3. Manual evaluation process** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Three people were involved in the manual process. Two were annotating the variables independently, and the third served as a judge when inconsistencies occurred. **Comparison with [1] and [2]** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We cited [1] and the arxiv version of [2] in our submission (lines 472-474 and 454-456). [1] is an empirical study of exploitable bugs in real-world smart contracts and the effectiveness of existing tools on these bugs. It inspired us to work on accounting bugs as the authors found that such bugs are difficult, critical (as they may cause substantial financial loss), and beyond existing tools. [2] used prompt engineering (with Chain-of-Thought) to detect ten bug patterns. Most of them are not related to accounting bugs. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please also see C1 in the global response. **Scaling to other programming languages** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Solidity is the most popular programming language for smart contracts, and our implementation currently only supports that. The rule checker is implemented inside Slither, a Solidity analysis engine that generates intermediate representations (IR) of smart contracts and provides a rich set of APIs to manipulate the IRs. 
That said, it is possible to implement the rule checker at the source level (using just a parser), which would allow easy extension to other languages. **Comprehensiveness of the 119 rules** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Although these rules cover common business models, as stated in our limitations (lines 612-621), they are not comprehensive. For example, we currently do not model options in future contracts. We leave it for our future work. ### References >[1] Zhang, Zhuo, et al. "Demystifying exploitable bugs in smart contracts." ICSE’23. > >[2] Sun, Yuqiang, et al. "Gptscan: Detecting logic vulnerabilities in smart contracts by combining gpt with program analysis." ICSE’24. > --- Rebuttal 2: Comment: Thank you for reviewing our paper. We hope that our responses have addressed your concerns. If there are any remaining issues or questions, we would greatly appreciate your feedback so we can make further improvements. --- Rebuttal 3: Comment: Based on the responses, the comprehensiveness and clarity of the rules being considered remain unclear. Additionally, the experiments presented in the rebuttal show that ChatGPT-4 significantly outperforms the proposed model, albeit with higher computational costs. This raises concerns about the effectiveness of your model in detecting bugs, especially given its lower performance compared to other techniques, including [1]. Thank you to the authors for their effort in the rebuttal. However, due to concerns about the rules considered and the model's performance relative to other approaches, my score remains unchanged. --- Rebuttal 4: Comment: We are grateful for your continuous support and response, which discloses a few places that we did not sufficiently clarify. 1. **Lower performance compared to [1]**: [1] is an empirical study. It does not propose any technique. One of its important conclusions motivating our work is that accounting bugs are beyond existing automated tools. 
To confirm this, we double-checked their project repo. There were only benchmark programs without any bug detection tools. Our tool is the first fully automated technique that can effectively detect accounting bugs. 2. **ChatGPT-4**: The reviewer may have some misunderstanding of our GPT-4 results. The reported results are acquired by replacing GPT-3.5 in our pipeline with GPT-4, not by directly prompting. Our technique is independent of the underlying LLMs. A more powerful LLM yielding better results is indeed a plus for our technique. Directly prompting GPT-4 (with few-shot examples) does not detect accounting bugs. In our experiment, GPT-4 was used to derive the annotations, which were further used in rule-based propagation and checking. 3. **Comprehensiveness and clarity of the rules**: our rules are comprehensive for common business logics. Note that these are annotation propagation and checking rules; problematic rules will lead to failures in type checking. The fact that our system can detect real accounting bugs and rarely has false positives (when typing thousands of functions) demonstrates the quality of our rules. That said, our rules do not model certain rare business behaviors such as options in future contracts. That is why we said our rules are not comprehensive. An analogy is that we are building some type system for Python programs. We design some rules. These rules allow typing common program behaviors, for example, those involving integers and strings. However, we lack rules to type rare behaviors of complex numerical values. The rules are comprehensive and correct for typing the common behaviors, because if they were problematic, many correct Python programs (operating on integers and strings) would be flagged as having type errors, and buggy Python programs would escape the type checking. We will open-source our system and the rules upon publication. 4. 
**Lower performance than other techniques**: our tool has the state-of-the-art performance in automatically detecting accounting bugs. We have compared with recent automatic tools and tools that require intensive manual annotations. The results show that our tool substantially outperforms in terms of effectiveness and automation. We would be more than happy to empirically compare with any other tools that the reviewer may point us to before the discussion period ends. Thank you again for your time. Please let us know if there are places that we can further clarify. --- Rebuttal Comment 4.1: Comment: Dear Reviewer F88y, Thank you again for your earlier response. We further clarified the concerns mentioned in your response. Since the interactive period is coming to an end, we would be very grateful if you could further comment on our latest response. Sincerely, The authors
Summary: This paper proposes a system to detect accounting error vulnerabilities in smart contracts. The key idea is a hybrid approach that combines LLMs and rule-based reasoning. In particular, it prompts LLMs to annotate the financial meaning of variables in smart contracts, and employs rule-based reasoning to propagate the information throughout a contract’s logic and to validate potential vulnerabilities. To mitigate hallucinations, the system employs a feedback loop that provides reasoning traces of detected vulnerabilities to the LLM for iterative self-reflection. The system is applied to 34 smart contract programs written in Solidity which are known to contain 21 accounting bugs out of a total of 40 bugs. It is able to find 19 of the 21 bugs for a recall of 90.5%. It also achieves 75.6% accuracy in financial meaning annotations compared to humans. The reasoning trace-based hallucination mitigation reduces the false positive rate by 54.5%. Strengths: 1. The overall methodology is very elegant: it combines the strengths of LLMs and logic reasoning, using LLMs to overcome two key weaknesses of rule-based reasoning: annotation inference (pre-reasoning) and trace validation (post-reasoning). 2. The overall empirical results are very strong with both high recall and high precision on a suite of real-world smart contracts. 3. The paper is well-organized and easy to follow (with the exception of the flow being interrupted by frequent references to the appendices). Weaknesses: Major: 1. It is well known that correctness is critical in the domain of smart contracts; as such, I was expecting a more comprehensive approach in terms of a) coverage of the kinds of bugs detected, and b) expressiveness of the reasoning engine. The proposed system is very limited in both these aspects: it targets the relatively narrow class of "accounting" bugs -- which cover only 21 out of 40 in the authors' benchmark suite, and the checking is performed using very simple accounting rules. 
The system is really a lightweight type qualifier checking engine that is more like a linter than a verifier. 2. There aren't any realistic baselines in the evaluation, namely, existing static analysis and fuzzing techniques. 3. The system uses prompting only as opposed to fine-tuning; the ~ 75% accuracy of annotations indicates that there is significant room to improve, and in any case it would have been nice to see the benefits of fine-tuning. 4. The system has not been applied to find any previously unknown bugs. The abstract talks about discovering 4 bugs in 5 recent projects, which made me think initially that these were new bugs; but it is later revealed that they were known bugs. Minor: - The presentation flow is interrupted by constant references to material in the appendices. - Abstract: "Finally, we apply the automated technique on 5 recent projects and discover 4 bugs". It is a bit misleading to say "discover" since these are known bugs. - line 3 of Fig 1: Currencey -> Currency - line 137: totalPairSupply -> totalShareSupply - I found the Example in Sec 2 to be simplistic; perhaps due to the fact that the approach itself does relatively shallow reasoning. - line 177: fianncial Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the Major points listed under Weaknesses, especially 2 and 3. I don't really expect much to change in terms of 1 even though I view it as the single most significant weakness of the work. However, you could possibly comment on whether your simple rule-based reasoning could be replaced with a more sophisticated reasoning engine, such as an SMT solver. And while 4 would be nice to address, it is understandable if you did not have the time to apply the system to discover new bugs. 
Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes, limitations are discussed (although in the Appendix), namely: 1) insufficient financial meaning coverage, 2) inability to detect all hallucinations, and 3) inability to handle all accounting bugs. However, I don't view these as the most significant limitations (unless by "insufficient financial meaning coverage" the authors are referring to the fact that they target only accounting bugs). Likewise, 2 and 3 aren't significant since the approach has very high precision and recall. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer zmms ### Answers to Questions of Reviewer zmms **Q1. Replacing rule-based reasoning with SMT solving, and expressiveness of the reasoning technique** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;It is possible to enhance our system with an SMT solver. In lines 346-348 of our submission, we mentioned that the most common reason for FPs is the lack of path feasibility reasoning. This can be addressed by using a solver. From our experience, the difficulty of handling accounting bugs may not lie in using inference rules or the much more expressive SMT solving. Instead, it lies in deriving a set of rules/invariants that generalize across a wide spectrum of business models. Complex reasoning rules often run the risk of being too specific to a subset of business models. That said, we do think more expressive reasoning methods such as Symbolic Finite Automata (SFA), which can model symbolic transitions between states, could be quite useful in modeling stateful behaviors of business models. It belongs to our future work. **Q2. Baselines** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please see C1 in the global response. **Q3. Benefits of finetuning** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please see C1 in the global response. **Q4. Unknown bugs** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please see C2 in the global response. **Covering more bug types** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Accounting bugs are quite different from other bugs as they are closely coupled with application-specific business logics. Other bugs such as reentrancy and frontrunning have precise definitions and general oracles. To some extent, accounting bugs are analogous to the traditional functional bugs, whereas other smart contract bugs are analogous to buffer-overflow and divide-by-zero bugs. In traditional bug detection, functional bugs are a lot more difficult to deal with than others. 
In our work, we use type rules, which essentially denote the invariants that all business models should respect, to construct a general detector. Although there may be future designs different from ours, we believe detectors for accounting bugs would likely be different from those for other bug types. Moreover, just like functional bugs, accounting bugs are quite diverse and can be further classified to many sub-categories such as normalization errors, unit errors, interest errors, etc. **Example in Sec-2 is too simplistic** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In general, accounting bugs are much more complex than that in Sec-2. We chose it for readability. The average reasoning trace for a real bug is 20 lines, where the trace for that example is only 4 lines. **Minor changes** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Thanks for pointing them out. We will change all of them accordingly. --- Rebuttal Comment 1.1: Comment: Dear Reviewer zmms, thank you very much for your insightful review. We wonder if you have a chance to look into our response. Although we think we have addressed your questions and concerns, we are worried that there may still be places of confusion. We would be very grateful for any feedback you may be able to provide such that we could address your further questions before the interactive period ends. Sincerely, The authors --- Rebuttal 2: Comment: Thank you for reviewing our paper. We hope that our responses have addressed your concerns. If there are any remaining issues or questions, we would greatly appreciate your feedback so we can make further improvements.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and insightful comments. ## Common Concerns **C1. Baselines** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;According to [1] published in 2023, accounting bugs are beyond existing tools. That is why we did not compare our tool with others. Following the reviewers’ suggestions, we found two recent tools and performed an empirical comparison during rebuttal. First, we compared ours with GPTScan [2], a recent LLM-based linter for smart contract bugs. It supports ten common bug patterns, including interest-related accounting bugs. The results are shown in Table 1. The first column presents the bug types for which it finds at least one instance. The second column lists the number of applications with such bugs, and the third the number of instances found. Only the first row (wrong-order-interest) belongs to accounting bugs, and the two reports in that row are false positives upon inspection. In other words, GPTScan could not find any of the bugs ours found. **Table 1 GPTScan Results**: | Bug Type | Included Projects | Total Instances | |-------------|----------------------|---------------------| | Wrong Order Interest | 2 | 2 | | Flashloan Price | 6 | 4 | | First Deposit | 3 | 3 | | Approval Not Revoked | 1 | 1| We also compared our tool with ItyFuzz [3], a SOTA public smart contract fuzzer. Due to the limited time we have, we ran it for 1-4 hours for each project, corresponding to hundreds of millions of executions per project. We observed that the coverage reached 32% on average. However, the fuzzer did not report any bugs. In addition, we compared our tool with fine-tuned GPT3.5 and fine-tuned GPT4 mini. We used 50 fine-tuning examples covering all the supported financial types and those without financial meanings. We then evaluated our system with different settings, namely, with and without fine-tuning, with and without few-shot examples in prompting. The results are shown in Table 2. 
Due to the high cost of fine-tuning/using GPT4, we only include one setting for it. Observe that finetuned-fewshot (row 5) improved the accuracy of annotations from 75.6%(62/82 in row 2) to 78%(64/82); finetuned-no-fewshot (row 4) performed worse than our default setting (row 2); and no-finetuned-no-fewshot (row 3) has a lot more false positives (31 vs. 10) and more iterations. GPT4 has the best performance with annotation accuracy of 86.6% (71/82). However, its fine-tuning and inference costs are much higher than GPT3.5. Note that the annotation accuracy changes lead to changes of downstream bug finding. However, the influence may not be proportional because the financial types involved in the bugs are not evenly distributed. That is, the incorrect annotations lie in variables unrelated to the bugs. **Table 2 Results using Finetuned GPTs** | Model | True Pos. | False Pos. | Iters. | Correct Annotations| |-------------|----------------------|---------------------|-----------------|------------| | **Baseline** (_Manual_) | 19| 7| N/A| 82/82| | **ABAuditor** (_Gpt3.5 w/ few-shot_) | 19 | 10 | 12| 62/82| | _Gpt3.5 no few-shot_ | 17 | 31 | 14 | 32/82 | | _Fine-tuned GPT3.5 no few-shot_ | 17 | 16 | 9| 39/82 | | _Fine-tuned Gpt3.5 w/ few-shot_ | 19 | 9 | 7 | 64/82| | _Fine-tuned Gpt4 mini w/ few-shot_ | 19 | 9 | 2 | 71/82 | These results support our unique contributions. We will include them in the paper. **C2. Finding unknown bugs** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;During rebuttal, we further scanned 8721 additional functions in the five recent projects used in our submission (lines 355-359). In the submission, we did not scan them as they are not in the files that contain the known bugs. The tool generated 3 reports. However, manual inspection showed that they are not real bugs. On the positive side, our tool does not generate many false warnings in such a large-scale scanning. 
In addition, we scanned 75 functions in 3 new projects, Munchables [4], Basin [5], and TraitForge [6], from the very recent Code4rena audit competitions whose results are still unknown. We chose them because they are business related. Due to time constraints, we only scanned the files that define the core business logic. We found 2 zero-day (unknown) accounting bugs. The first one is in function _farmPlots() of file LandManager.sol in project Munchables. It adds a variable of a normalized balance type to another variable of an unnormalized type, which is problematic. The second is in function calcReserve() of file Stable2.sol in project Basin. To validate that they are real, we generated exploits (or POCs) that caused monetary loss. We have submitted the bug reports, which can be found in the one-page supplementary material. In addition to the two bugs, the scanning yielded 4 false positives. To some extent, these results illustrate the level of automation and the effectiveness of our tool.

### References

>[1] Zhang, Zhuo, et al. "Demystifying exploitable bugs in smart contracts." ICSE’23.
>
>[2] Sun, Yuqiang, et al. "Gptscan: Detecting logic vulnerabilities in smart contracts by combining gpt with program analysis." ICSE’24.
>
>[3] Shou, Chaofan, Shangyin Tan, and Koushik Sen. "Ityfuzz: Snapshot-based fuzzer for smart contract." ISSTA’23.
>
>[4] https://code4rena.com/audits/2024-07-munchables#top
>
>[5] https://code4rena.com/audits/2024-07-basin#top
>
>[6] https://code4rena.com/audits/2024-07-traitforge#top

Pdf: /pdf/25f9f778422ede79e9344965f72a892a0d2e4a29.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
decoupleQ: Towards 2-bit Post-Training Uniform Quantization via decoupling Parameters into Integer and Floating Points
Reject
Summary: The paper introduces decoupleQ, a novel method that decouples model parameters into integer and floating-point parts. This approach transforms quantization into a constrained mathematical optimization problem, avoiding the limitations of traditional heuristic quantization methods. DecoupleQ achieves a significant improvement over existing methods on LLMs, especially at extremely low bit-widths (2-bit), and also releases the W2A16 CUDA kernel. Strengths: 1. DecoupleQ eliminates the need for ad-hoc techniques to handle outliers and sensitive channels, focusing solely on optimizing model accuracy under extreme low-bit quantization. 2. DecoupleQ achieves a notable advancement over existing methods on LLMs, particularly at extremely low bit-widths. And the W2A16 CUDA kernel has been released. 3. The decoupleQ approach can be readily extended to supervised fine-tuning (SFT) to enhance model accuracy, or adapted for downstream sub-tasks. Weaknesses: 1. Please correct me if I am wrong. It seems that decoupleQ combines several existing approaches. Specifically, it uses AdaRound to get the integer part in ResNets and GPTQ to get the integer part in LLMs. Additionally, it integrates PTQ and QAT by applying PTQ to the integer part while using supervised training for the floating-point part. 2. Regarding your point from lines 58-61, I believe GPTQ clearly outlines how to calculate the scale and zero point in their code. Moreover, GPTQ can be seen as a constrained optimization problem, where the constraints align with yours: each integer weight is confined within [$\alpha$, $\beta$], which is a default constraint in GPTQ. 3. Further experiments on LLMs are essential. For example, evaluating decoupleQ's performance in multi-task settings and within the Llama 3 family would provide valuable insights. 4. Could you provide more ablation studies on the second stage, such as experiments without training norm layers? 5. There is a typo in line 125. 
The first letter of 'decoupleQ' should be capitalized. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading of our paper and your generally positive comments. And thank you for your accurate summary and for highlighting our strengths. We will try to respond to your questions in as much detail as possible, and we would be grateful if you could point out any omissions.

# Weakness 1:

The core value of this paper lies in transforming a quantization problem into a mathematical optimization problem (refer to formula (6) in the original paper), and this quantization method is general, whether for ResNets or LLMs. From an algorithmic perspective, our process is as follows:

1. First, we focus on a linear (or conv) layer in the model. We aim to minimize the difference between pre- and post-quantization within a linear layer (refer to formula (5)). Then, substituting (4) into (5) and taking into account per-channel quantization, we get the objective function (7). Adding the constraints on $w$, we get the optimization problem (6). This is a constrained mathematical optimization problem. After the problem is solved, we get the solutions of the integer part and the floating-point part. That is, we get the integer part and the floating-point part by solving the optimization problem (6).
2. Second, since minimizing the quantization error within the linear layer does not mean minimizing the model quantization error, we performed an optimization at the block level (2). In this stage, we freeze the integer part and only train the floating-point part.
3. The above two steps both fall within the scope of PTQ, and we found that with only these two steps, we can get reasonable model accuracy (refer to Table 3). In practice, if we want to further improve the model accuracy, we can fine-tune the floating-point part of the whole model on a labelled dataset. This process is similar to QAT, but it is very lightweight training, because we have fixed the integer part, which occupies the vast majority of the parameters.
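As an illustrative sketch only (this is our reading of the step above, not the authors' code): once the integer part $w$ of one weight column is fixed, finding the floating-point part $(s,z)$ that minimizes the layer output error reduces to ordinary least squares.

```python
import numpy as np

# Hypothetical sketch: with a fixed integer assignment w for one weight
# column, solve min_{s,z} ||X(w*s + z) - X w0||^2, which is linear in (s, z).
rng = np.random.default_rng(0)
n, d = 64, 8
X = rng.standard_normal((n, d))          # calibration activations
w0 = rng.standard_normal(d)              # original full-precision column
w = np.round(2 * w0)                     # some fixed integer assignment

# Design matrix: the layer output is linear in the two unknowns (s, z).
A = np.column_stack([X @ w, X @ np.ones(d)])
(s, z), *_ = np.linalg.lstsq(A, X @ w0, rcond=None)

residual = np.linalg.norm(X @ (w * s + z) - X @ w0)
```

By construction, the least-squares $(s,z)$ can do no worse than the naive choice $s=1$, $z=0$.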
# Weakness 2:

1. I'm sorry we were not clear here. We originally wanted to express that the core contribution of GPTQ is how to efficiently update the remaining elements, without specifically studying how to obtain a better $(s,z)$. In the code of GPTQ, they use minmax or an MSE search to get $(s,z)$, while in decoupleQ, we solve the optimization problem (6) to get $(s,z)$. Thank you very much for pointing out the problem with this sentence. We will delete it in the revision to avoid misunderstanding.
2. What we mean by "unconstrained" in GPTQ is that "the optimization problem formulated for updating the remaining elements is unconstrained". For simplicity, suppose there are three elements in the weight $w=[w_1, w_2, w_3]$, and suppose $scale=1$ and $zero=0$. In GPTQ, they first fake-quantize $w_1$ to be $\widetilde{w_1}$; here $\widetilde{w_1}$ is constrained within the interval $[\alpha, \beta]$. They then update $w_2$ and $w_3$ to be $w_2' = w_2+\Delta_2$ and $w_3' = w_3+\Delta_3$. However, $w_2'$ and $w_3'$ are **not constrained** within $[\alpha, \beta]$; that is, the updates to the remaining elements are unconstrained. In decoupleQ, we proposed two levels of approximation when updating the remaining elements, (10) and (11), as a quantization-time versus model-accuracy trade-off. (10) is constrained while (11) is unconstrained.

# Weakness 3:

Thanks very much for your suggestions. The lack of rich public experiments was indeed a shortcoming of ours. We will make our code public on GitHub (regardless of whether the paper is finally accepted or rejected), and continue to add richer experimental results. We hope that reviewers can pay more attention to our innovation in theory and scheme. We also believe that the novelty of a method may outweigh the number of experiments, especially at NeurIPS. In addition, the work is tangible and can be applied in industry. We have launched 2-bit quantized large speech models in multiple consecutive releases of our company's core products.
After the reviewing period is completed, the identity of our company and the products launched will be made public. Also, we have released the W2A16 CUDA kernel used in our products, which is currently under review for merging into the NVIDIA TensorRT repo. We believe that our work will make certain contributions to the industry and the open-source community.

# Weakness 4:

We have not yet tried fixing the norm layers and only training $s$ and $z$ in the SFT stage. In all of our private experiments, we only fix the integer part $\widehat{W}$ and train all the floating-point parts, including $s$, $z$ and the norm layers. Thank you very much for your suggestion. We have also heard that the training of the norm layer has a certain impact on the accuracy of the model [1]. We plan to do some experiments, and if the results are significantly different, we will update them on GitHub.

&nbsp;

[1] Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning

# Weakness 5:

We sincerely thank you for looking at our paper in such detail. We lowercase the "d" on purpose. "decouple" is lowercase and "Q" is uppercase; isn't that interesting? Just like "iPhone" or "eBay".

&nbsp;

Thank you again for your valuable time and comments. We are very happy to discuss these issues with you. If you have any questions, we will reply to you as soon as possible.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. The authors have addressed some of my concerns. However, conducting experiments without training norm layers is crucial, as other methods typically do not involve training norm layers. Additionally, I agree with the other reviewers: they address a portion of the optimization problem using GPTQ and another portion similarly to BRECQ. The novelty is limited, resembling an aggregation of existing techniques rather than a novel contribution.

---

Rebuttal 2: Comment: Thank you very much for your reply. We are sorry that we did not explain it clearly in our first reply.
It is a pleasure to continue discussing with you. We have urgently conducted some ablation studies, and more experimental results are on the way. In this experiment, we chose to freeze or not freeze the LayerNorm in Llama when training $(s,z)$ in the block-wise minimization; other settings are consistent with those in the original paper. Each element in the table represents (PPL of wiki-2, PPL of C4).

| | 1-7B | 2-7B | 1-13B | 1-30B | 1-65B |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Train LN | (9.49, 11.41) | (9.74, 11.83) | (7.86, 9.86) | (6.37, 8.51) | (5.59, 7.63) |
| Freeze LN | (9.69, 11.54) | (9.84, 11.97) | (7.90, 9.92) | (6.37, 8.52) | (5.48, 7.64) |

When LN is frozen, PPL generally increases slightly. This may be due to the reduction in learnable parameters. Nevertheless, our results are still significantly better than the previous results shown in Table 3. Thanks for your comments; we will add these results in the final version.

&nbsp;

Now, please give us some time to compare decoupleQ with GPTQ and BRECQ in detail. Both decoupleQ and GPTQ contain a step that minimizes the loss between pre- and post-quantization within a linear layer. But minimizing the loss between pre- and post-quantization within a linear layer is a very simple and easy idea [1,2,3,4,5]. The core contribution of these papers is not that they use layer minimization, but how they achieve it. For example, AWQ [1] achieves this goal via per-channel scaling that protects the salient weights; GPTQ [5] achieves it by updating the remaining elements while quantizing the previous elements; and AdaRound [2] by training the rounding up or down. In decoupleQ, we achieve this goal by constrained mathematical optimization (6). We believe that formula (6) abstracts the essence of the problem, transforming a model quantization problem into a mathematical optimization problem, and it no longer needs to focus on some of the minutiae unique to quantization.
$w$, $s$, and $z$ are now totally independent of the original weight $W_0$ in the optimization process, as long as the final output error is minimized. This is the essential difference between decoupleQ and previous methods. When solving formula (6), we provide two approximate solutions, (10) and (11). Both are common convex optimization problems. We used the idea from GPTQ to solve (11), but this is not the only solution, because we can choose to solve (10) to obtain a more accurate result.

&nbsp;

The comparison with BRECQ is similar. Block minimization is used in works [3,6,7]. In BRECQ, they train the rounding for GEMM weights, whereas in decoupleQ, we train $(s,z)$ and the norm layers while keeping the GEMM weights frozen (because they have already been quantized to integers at this step). This is very lightweight training compared with BRECQ, because GEMM weights occupy most of the model parameters, while $(s,z)$ occupies only a very small part.

&nbsp;

In addition, if we only consider the first stage, i.e., minimizing the loss between pre- and post-quantization within the linear layer, without considering the second stage, our results still outperform the others (especially GPTQ, the fairest comparison), as shown in Table 4 and Table 3.

&nbsp;

[1] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration \
[2] Up or Down? Adaptive Rounding for Post-Training Quantization \
[3] AffineQuant: Affine Transformation Quantization for Large Language Models \
[4] QuIP: 2-Bit Quantization of Large Language Models With Guarantees \
[5] GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers \
[6] OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models \
[7] BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction

&nbsp;

Thank you again for your valuable time and high-quality comments.
We sincerely look forward to further discussions with you.
Summary: This paper proposes a linear and uniform quantization method, decoupleQ, which abandons the traditional heuristic quantization paradigm, decouples the model parameters into integer and floating-point parts, and then transforms the quantization problem into a constrained optimization problem. Experiments show decoupleQ achieves accuracy comparable to fp16/bf16 in the 2-bit weight quantization setting. Strengths: Experiments show decoupleQ achieves accuracy comparable to fp16/bf16 in the 2-bit weight quantization setting. Weaknesses: 1. Experiments are based on W2A16; lower activation bit-widths (<=8 bit) should be experimented with. 2. The novelty is limited. The core idea of decoupleQ is similar to normalization (Batch-Norm or Layer-Norm). The learnable floating-point part of decoupleQ equals learnable normalization parameters. 3. More existing quantization methods should be compared, such as NWQ [1] and PD-Quant [2]. [1] Leveraging Inter-Layer Dependency for Post-Training Quantization [2] PD-Quant: Post-Training Quantization based on Prediction Difference Metric. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What's the gain for lower activation bit-widths (<=8 bit) with decoupleQ? 2. Experimental comparison with more existing quantization methods. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing our paper; we respond to your concerns as follows:

# Weakness 1:

As we state in line 83, ***we focus on weight-only quantization***. In the era of large language models, weight-only quantization has important industrial value because, during inference under latency constraints, the batch size for decoding is very small, so that only a few tokens are decoded at a time, which makes the inference process IO-bound on most GPUs. In this situation, quantizing only the weights can reduce IO overhead and thus speed up inference. There have been many weight-only quantization works in recent years, such as GPTQ [1] and AWQ [2].

# Weakness 2:

decoupleQ is a quantization method, while normalization is a technique for effective model training. These are two completely different things.

# Weakness 3:

In our paper, we focus on weight-only quantization, while NWQ and PD-Quant focus on weight-activation quantization and do not report their results in weight-only settings. Nevertheless, we find these two papers creative and will cite them in the revision.

Questions 1-2: please refer to the responses to the weaknesses.

&nbsp;

[1] GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers \
[2] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

---

Rebuttal 2: Comment: Thank the authors for the rebuttal. The rebuttal does not resolve my concern. The novelty is limited: decoupleQ is the same as Batch-Normalization or Group-Normalization if you make the BN or GN parameters learnable and merge them into the normal quantization process. The effect, from my point of view, is the same. Further, comparison with current CNN quantization methods is not fully provided. Thus I keep my score and reject this paper: it does not reach the bar of NeurIPS, lacking proof of novelty and sufficient experimental comparison. 
--- Rebuttal 3: Title: Clarification on the novelty Comment: Thank you for your reply. The core of our work is quantization. To this end, we transform a quantization problem into a mathematical optimization problem. In decoupleQ, the quantization process includes not only the solution of the floating-point part $(s,z)$, but also the solution of the integer part $\widehat{W}$. However, Batch-Normalization or Group-Normalization is a technique for effective training. In some model structures, their parameters can indeed be merged into $(s,z)$, but this is not the case for all model structures, especially in LLMs. If conventional quantization cannot obtain a good solution for $\widehat{W}$, then no matter how we train the normalization parameters, it will be difficult to obtain high model accuracy. decoupleQ solves the integer part $\widehat{W}$ and the floating-point part $(s,z)$ together, rather than considering only the floating-point part. &nbsp; Thank you again for your reply. If we have not resolved your concerns, we sincerely hope that you will raise them in time, and we will reply to you as soon as possible.
Summary: The paper presents decoupleQ, a post-training quantization method that improves the accuracy of quantized models, particularly at very low bit-widths (2-bit). It achieves this by separating model parameters into integer and floating-point components and formulating the quantization process as a constrained optimization problem. This approach eliminates the need for traditional quantization techniques like outlier handling and focuses on optimizing the core objective. Strengths: 1. The paper introduces a fresh perspective on quantization by abandoning traditional methods and reframing it as a constrained optimization problem. 2. decoupleQ demonstrates impressive results in 2-bit quantization, achieving accuracy comparable to higher precision formats like fp16/bf16 in large speech models. 3. The quantization process is linear and uniform, making it easier to implement in hardware compared to non-uniform methods. Weaknesses: 1. The paper's writing lacks cohesion and clarity regarding its ultimate goal. The paper also has several spelling mistakes. 2. The authors claim to separate the model parameters into integers and floating-point components. However, as far as I understand, this practice is not a novel contribution but rather a common approach in quantization. 3. They address a portion of the optimization problem using GPTQ and another portion similar to BRECQ. 4. The authors acknowledge that their solution may not be optimal. 5. The quantization process in decoupleQ can be more time-consuming than other methods. Technical Quality: 2 Clarity: 1 Questions for Authors: 1. The paper mentions achieving state-of-the-art accuracy in Llama-1/2. It would be helpful to see a more detailed comparison with other state-of-the-art quantization methods on this specific model. 2. The authors could elaborate on the potential impact of the weak correlation between PPL (perplexity) and loss in LLMs. 
How might this affect the practical application of decoupleQ for LLM quantization? 3. The paper briefly mentions the risk of overfitting. Providing more insights into mitigation strategies for this risk would be beneficial, especially when dealing with underdetermined matrices. 4. Given the longer runtime compared to other methods, it would be interesting to see a more comprehensive analysis of the trade-off between quantization time and model accuracy. 5. The authors could consider adding a section on future work, outlining potential directions for further research and improvement of the decoupleQ method. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Yes, however, it will be beneficial to divide the current Discussion section into separate Conclusion and Limitations sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for taking the time to review our work, give an accurate summary, and outline our strengths. We will explain your concerns in detail. Due to page limitations, we put the responses to the weaknesses in the "Official Comment" box.

# Question 1:

GPTQ, AWQ, and OmniQuant are three of the most influential recent works on quantization, and they all reported their results on Llama-1/2. We would be very grateful if you could suggest additional papers to compare with.

# Question 2:

In lines 283-290, we find that the block reconstruction loss decreases monotonically as the number of iterations $K$ increases, and that the model's best PPL occurs at $K = 1$, after which it fluctuates within a range as $K$ continues to increase. The loss we are talking about here is the loss between the pre- and post-quantization of the block. In the field of PTQ, we cannot directly optimize the PPL of the model, only some proxy. For example, GPTQ tries to minimize the loss of the linear layer, and BRECQ tries to minimize the loss of the block. In this situation, the weak correlation (if any) between block loss and PPL is a general problem, not one specific to decoupleQ. However, we cannot deny the value of PTQ, because broadly, reducing the loss does lower the PPL of the model.

# Question 3:

When $H$ is an underdetermined matrix (even when the size of the calibration dataset is large), a very effective solution is to enhance the diagonal of the matrix, i.e., $H \leftarrow H + \lambda I$. This is not just a mathematical trick, but has a strong physical meaning. 
Our initial optimization goal (Eq. (5) in the original paper) admits the following transformation:
$$||X\widetilde{W}-XW_0||_2^2 = \text{tr}\{(\widetilde{W}-W_0)^T H (\widetilde{W}-W_0)\},$$
and replacing $H$ with $H+\lambda I$ gives
$$\text{tr}\{(\widetilde{W}-W_0)^T (H+\lambda I) (\widetilde{W}-W_0)\} = \text{tr}\{(\widetilde{W}-W_0)^T H (\widetilde{W}-W_0)\} + \lambda\, \text{tr}\{(\widetilde{W}-W_0)^T (\widetilde{W}-W_0)\}.$$
The last term $\lambda\, \text{tr}\{(\widetilde{W}-W_0)^T (\widetilde{W}-W_0)\}$ is independent of $H$, and thus of the calibration dataset. It plays the role of regularization, regularizing $\widetilde{W}$ to be close to $W_0$ under the naive L2 metric.

# Question 4:

In our industrial practice, the runtime of quantization via decoupleQ depends mainly on the size of the calibration dataset. As for the number of iterations $N$ defined in Alg. 1, we find that $N=2$ or $3$ makes it converge well. The time cost mainly comes from the forward pass of the data through the model, so we plot the trade-off between the size of the calibration dataset and model accuracy in Figure 5. Specifically, for the 2-bit quantized speech model that has been launched in our company, which contains 32 transformer blocks with 7 billion parameters, we use 8 million tokens as the calibration dataset and train 1 epoch in each block-wise minimization process. The total time for this process is about 22 hours.

# Question 5:

Thank you very much for your suggestion. After the submission, we plan to add such a chapter. Future work mainly includes the step-by-step derivation of the formulas when there is activation quantization. We also think further research on SFT (training only the floating-point part) is valuable and necessary, as it is very lightweight training.

&nbsp;

Thank you again for taking the time and effort to review our paper carefully and for asking these high-quality questions. We look forward to further discussions with you. We would be grateful if you could raise the rating once your concerns are addressed. 
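As an illustrative numerical sketch (ours, not the authors'), the effect of the $H \leftarrow H + \lambda I$ regularization discussed in Question 3 can be seen directly: with fewer calibration samples than weight dimensions, $H = X^T X$ is singular, while $H + \lambda I$ is positive definite.

```python
import numpy as np

# Hypothetical sketch: when there are fewer calibration samples (n) than
# weight dimensions (d), H = X^T X is rank-deficient; adding lambda*I
# makes it positive definite and safely invertible.
rng = np.random.default_rng(0)
n, d = 8, 16                      # n < d, so H cannot have full rank
X = rng.standard_normal((n, d))   # calibration activations
H = X.T @ X

lam = 1e-2
H_reg = H + lam * np.eye(d)

rank_H = np.linalg.matrix_rank(H)          # at most n = 8
min_eig = np.linalg.eigvalsh(H_reg).min()  # bounded below by lam
```

Because $H$ is positive semi-definite, every eigenvalue of $H + \lambda I$ is at least $\lambda$, so the regularized system is always solvable.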
(Due to page limitations, responses to weaknesses and questions must be placed in two boxes.)

---

Rebuttal 2: Title: responses to weaknesses Comment: Thank you so much for taking the time to review our work, give an accurate summary, and outline our strengths. We will explain your concerns in detail. Due to page limitations, we put the responses to the questions in the "Rebuttal" box.

# Weakness 1:

The ultimate goal of our paper is to propose a novel quantization method that contributes to industry and the community. Specifically, in the organization of the paper, we first formulate the quantization problem as the optimization objective (6) step by step in Sec. 3.1 and Sec. 3.2, which is the core of our paper. Once the optimization objective (6) is proposed, it is natural to think about how to solve it, so we propose a solution in Sec. 3.3 and Sec. 3.4. After solving the optimization objective (6) with the proposed method, a linear layer within a Transformer block is quantized. In Sec. 3.5, we further fine-tune $(s,z)$ at the block level to further improve the model accuracy. Thank you very much for your careful reading; we will check for spelling mistakes carefully in the revised version.

# Weakness 2:

In the traditional quantization paradigm, the integer part and the floating-point part are calculated using the following procedure (or its various variations):
$$s=\frac{max(W_0)-min(W_0)}{2^N-1} \tag{a}$$
$$z=min(W_0)-\alpha s \tag{b}$$
$$\widehat{W} = \text{clip}(\lfloor \frac{W_0-z}{s} \rceil, \alpha, \beta) \tag{c}$$
where the meanings of the above notations are as explained in Sec. 3.1. In this set of formulas, the solutions for the floating-point part and the integer part are interdependent. What we call "decoupling the model parameters into integer and floating-point parts" aims to completely decouple the two and use the optimization formulation (6) to find the optimal solution. 
The feasible regions of $(s,z)$ and $w$ have no dependency on each other, and the two can be regarded as independent primal variables. Judging from the results, all common quantization schemes produce integer and floating-point parts after the model is quantized. However, we focus on the solution process and the ideas and methodology embodied in it.

# Weakness 3:

When solving formula (11), we used the solution of GPTQ. But this is only one optional step in our solution process. Our core contribution is not the solution to this step, but the construction of formula (6). Formula (6) is an optimization objective; there should be many ways to solve it, and we just provide one. BRECQ aims to use block reconstruction to train the rounding of all the weights within the block, while in our work, we use block reconstruction to train $(s,z)$ while freezing the integer part. In fact, we have given three levels of optimization solutions:

1. At the linear-layer level, we solve formula (6) to get the integer and floating-point parts as the first step;
2. At the block level, we freeze the integer part and only train the floating-point parts via block reconstruction to further improve the model accuracy;
3. At the model level, we can use a labelled dataset to train all the floating-point parts in the model to further improve model accuracy, or to adapt to downstream sub-tasks.

All three levels revolve around a common idea, which is to optimize the integer part and the floating-point part of the model separately.

# Weakness 4:

The value of our paper lies in treating the integer part and floating-point part of model quantization independently, and then transforming a quantization problem into a mathematical optimization problem (6). How to solve formula (6) in isolation is not the core contribution of this paper. There are many solution methods in the field of mathematical optimization, and we only provide one. 
We admit that this solution may not be the best, but even a suboptimal solution can still achieve high quantization accuracy. Doesn't this further illustrate the superiority of our method (transforming a quantization problem into a mathematical optimization problem (6))?

# Weakness 5:

We unabashedly admitted in the paper that our quantization method is more time-consuming than other methods such as GPTQ. However, we also provide specific time costs as a consideration for engineers when choosing the most appropriate time-accuracy trade-off for their application. In industry, it takes less than 40 hours (as reported in the original paper, but this can now be reduced to 20 hours by setting $J=1$, defined in line 208) to produce a quantized, high-accuracy large language model. We think this is of high industrial value.

&nbsp;

Thank you again for taking the time and effort to review our paper carefully and for asking these high-quality questions. We look forward to further discussions with you. We would be grateful if you could raise the rating once your concerns are addressed. Many thanks again!

---

Rebuttal Comment 2.1: Comment: Thank you for the detailed rebuttal. The authors have adequately addressed my concerns, and I am prepared to increase my rating accordingly. However, I believe the original manuscript could benefit from improved structural clarity. To enhance readability, I recommend the authors focus on improving the overall organization of the paper for the final submission.

---

Rebuttal 3: Comment: Thank you for your recognition of our work and your valuable suggestions. We will carefully review the organization of the paper (especially Sec. 1 and 3), introduce our ideas step by step, highlight the key points, and enhance the coherence of the text. We will also check carefully to avoid spelling errors. Thank you again, and wish you all the best.
Summary: This paper proposes a novel post-training quantization method to achieve 2-bit uniform quantization of large language and speech models. The proposed method decouples the quantized values into integer and floating-point parts, which are then optimized via a constrained optimization problem that can be solved with off-the-shelf solutions. The proposed method allows uniform quantization down to extreme bit-widths. Strengths: 1. This paper proposes a novel optimization-based method to conduct PTQ on large models. The proposed method is solid and distinct from previous methods. 2. The proposed method achieves good performance with only uniform quantization, without special procedures for outliers etc., providing direct benefit to the runtime of the quantized model on general hardware. 3. The limitations and future directions are clearly discussed in the paper. Weaknesses: 1. The distinction between the proposed decoupleQ and traditional quantization methods is not clearly derived in Sec. 3.2. The statement that "(s,z) lost the traditional meaning" on line 138 is not clear. My understanding is that W, s, and z are now totally independent of the original weight w0 in the optimization process, as long as the final output error is minimized? I think adding a comparison with the optimization objective/procedure of traditional quantization here would help. 2. The proposed method appears to be sensitive to the size of the calibration set, so that the calibration size reported in the experiments is much larger than that of the previous baselines. As it is understandable that the optimization process may require more data to avoid overfitting, it would be fairer if the baseline methods were also calibrated with the same dataset/training cost. 3. For the LLM experiments, only PPL is used as a metric. However, PPL has been shown to be an inaccurate metric for reflecting the utility of an LLM after compression. 
More evaluations, such as zero-shot performance on downstream tasks and instruction-following ability, as in the SqueezeLLM and OmniQuant papers, would be helpful to see whether the quantized model still retains the abilities of the FP one. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weakness section. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations and potential social impacts are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for reading our paper carefully and for your generally positive comments. We will respond to your concerns in detail as much as possible and would be grateful if you could point out any omissions. # Weakness 1: In the traditional meaning, the whitepaper [1] explained in Sec 2.1 that: "The scale specifies the step size of the quantizer and floating point zero maps to zero-point. Zero-point is an integer, ensuring that zero is quantized with no error." This quantization paper [2] said in Sec 2.1 that "the constant $S$ (for “scale”) is an arbitrary **positive real number**... The constant $Z$ (for “zero-point”) is of the same type as quantized values $q$, and is in fact the quantized value $q$ corresponding to the real value 0." Since then, most papers have calculated $(s,z)$ and $\widehat{W}$ from $W_0$ via the following (a-b-c) procedure or its variants [1,2,3,4]: $$s=\frac{max(W_0)-min(W_0)}{2^N-1} \tag{a}$$ $$z=min(W_0)-\alpha s \tag{b}$$ $$\widehat{W} = \text{clip}(\lfloor \frac{W_0-z}{s} \rceil, \alpha, \beta) \tag{c}$$ In this context, $s$ stands for the step size of the quantizer, which is generally a **positive** number. $s$, $z$ and $\widehat{W}$ are dependent on the full-precision $W_0$. In decoupleQ, by contrast, $s$ and $z$ are just two primal variables without any constraints in problem (6). The optimal solution of $s$ can be 0 or **negative**, and we completely abandoned formula (3), which is a very important step in the traditional quantization method. As you noted, $s$, $z$ and $\widehat{W}$ are now totally independent of the original weight $W_0$ in the optimization process, as long as the final output error is minimized. We obtain the values $s$, $z$ and $\widehat{W}$ only from the optimization problem (6), not from the above (a-b-c) procedure. Thank you for pointing out this misleading point. 
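For concreteness, the traditional (a-b-c) procedure above can be sketched in a few lines of NumPy. The unsigned clip range $\alpha=0$, $\beta=2^N-1$ is an illustrative assumption; this is the conventional baseline that decoupleQ departs from, not decoupleQ itself:

```python
import numpy as np

def traditional_quantize(w0, n_bits=2, alpha=0, beta=None):
    """Traditional min-max quantization, following steps (a)-(c) above."""
    if beta is None:
        beta = 2 ** n_bits - 1          # assumed unsigned integer range
    # (a) step size from the full-precision range
    s = (w0.max() - w0.min()) / (2 ** n_bits - 1)
    # (b) offset so that min(w0) maps to the lower clip bound alpha
    z = w0.min() - alpha * s
    # (c) round to the nearest integer grid point and clip
    w_hat = np.clip(np.round((w0 - z) / s), alpha, beta)
    return w_hat, s, z

w0 = np.array([-0.4, -0.1, 0.0, 0.2, 0.5])
w_hat, s, z = traditional_quantize(w0, n_bits=2)
w_deq = w_hat * s + z   # dequantized weights
```

Here $s>0$ and $(s, z, \widehat{W})$ are all derived from $W_0$; decoupleQ instead treats $s$ and $z$ as free variables of problem (6).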
We can modify this sentence to: "$s$ has deviated from its meaning in the traditional quantization context and is now an unconstrained optimization variable, whose optimal solution may be 0 or even negative." [1]: Quantizing deep convolutional networks for efficient inference: A whitepaper [2]: Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference [3]: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration [4]: OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models # Weakness 2: We are very sorry we were not clear on this point. The data size of decoupleQ is the same as that of the previous baselines. In ResNet compression, previous baselines such as GPTQ used calibration datasets consisting of 1024 random training samples, and then applied standard flipping and cropping augmentations to artificially increase the size of this dataset by 10×, resulting in a total of 10240 images as the input of the model. In our experiment, we randomly chose 10240 images with standard augmentation but without increasing the size of the dataset, likewise resulting in a total of 10240 images as the input of the model. Thank you for pointing this out. Since we did not find the source code of GPTQ for processing the ImageNet dataset, we directly took 10,240 training images without augmentation-based enlargement for simplicity. We sincerely admit that we did not strictly align with the previous baseline here. In LLaMA compression, as we write in lines 217-218 and lines 251-253, we use the source code from GPTQ, and we use 128 segments, each of which contains 2048 elements. This setup is strictly consistent with the previous baselines. The fair comparison results are shown in Table 3, and our results outperform previous ones by a clear margin in W2A16. 
In addition, since our solution to Eq. (6) is very dependent on the Hessian matrix $H$, we find that increasing the size of the calibration data can further improve the accuracy of the quantized model in decoupleQ. This is shown in Figure 5. We think this is a very good feature. In industry, we can use enough calibration data (the total time cost for quantization is currently less than 20 hours) to enable our large speech model to be quantized to W2A16g64 with the same accuracy as the unquantized baseline, and it has been launched in our company's core products. # Weakness 3: We sincerely admit that the lack of rich public experiments was indeed a shortcoming of ours, although our PPL in Table 3 is lower than others' by a large margin in W2A16. We will make our code public on GitHub (regardless of whether the paper is finally accepted or rejected), and continue to add more experimental results. We hope that reviewers will pay more attention to our innovation in theory and scheme. We also believe that the novelty of a method may outweigh the number of experiments, especially at NeurIPS. In addition, the work is tangible and can be applied in industry. We have indeed launched 2-bit quantized large speech models in multiple consecutive releases of our company's core products. After the reviewing period is completed, the identity of our company and the products launched will be made public. Also, we have released the W2A16 CUDA kernel used in our products, which is under review for merging into the NVIDIA TensorRT repo. We believe that our work will make certain contributions to the industry and the open-source community. &nbsp; &nbsp; Finally, thank you again for your valuable time reviewing our work! We look forward to discussing any issues with you further, and we will reply to all your comments immediately. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I would like to thank the authors for the rebuttal. 
The rebuttal resolves my concern on the calibration size. While I agree with the other reviewers that more experiments could be done to show the effectiveness of the proposed method, I believe the existing evidence is enough to show the proposed method is applicable. Given the novelty and significance of the proposed method, I increase my score to weak accept. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thank you very much for your reply and for your recognition of our novelty and significance. Thanks again!
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Dissect Black Box: Interpreting for Rule-Based Explanations in Unsupervised Anomaly Detection
Accept (poster)
Summary: The paper addresses the challenge of distinguishing between normal and anomalous structured data. A new method is designed to interpret and understand the structure of normal data distributions. It integrates anomaly detection model predictions into its splitting criteria to enhance the clustering process. In addition, a complementary algorithm that defines boundaries within each segmented distribution is proposed. Strengths: 1. The designed Gaussian boundary delineation algorithm helps in managing high-dimensional data and ensures robustness against data drift and perturbations. 2. Extensive evaluations are conducted for comparisons against five established baseline anomaly detection models across four diverse datasets Weaknesses: 1. The writing should be enhanced as some technical concepts are unclear. For example, a running example could be provided to explain the SCD tree and GBD algorithms. 2. The work targets structured, tabular data, which should be made clear at the beginning of the paper. How to build such an SCD tree for unstructured data is not explored. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How to implement the SCD tree for unstructured data? 2. What is the typical running efficiency? 3. Considering the data drifts and perturbations, does the naive Gaussian-based assumption work for more heterogeneous drifting data? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have broadly discussed the limitations of the work. Most of the concerns are discussed and clarified through the rebuttal stage. I would like to update the overall score. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your insights and suggestions, which have helped us to improve our paper. > Comment 1: “The writing should be enhanced as some technical concepts are unclear. For example, a running example could be provided to explain the SCD tree and GBD algorithms.” **Response:** We agree that a running example would greatly clarify the technical concepts of the SCD tree and GBD algorithms. We have added a detailed running example in Section 4.2, “Methodology Overview,” and pseudocode in Appendix A.2 where we walk through the entire process of constructing an SCD tree and applying the GBD algorithm using a sample dataset. This example includes: 1. **Initial Data Segmentation (SCD-Tree)** - The SCD-Tree uses the outputs of the anomaly detection model to segment the data. - The tree structure is built by recursively splitting the data based on the model's outputs. 2. **Boundary Delineation (GBD)** - Within each segment, the GBD algorithm uses Gaussian Processes to define decision boundaries. - For instance, in a segment with features A and B, the algorithm determines the boundary where the likelihood of being normal is highest. - These boundaries are then refined to capture the nuances of the data distribution. 3. **Rule Extraction** - The refined boundaries are translated into interpretable rules. - For the example dataset, a rule might be: "If A > 5 and B < 10, then the data point is normal." > Comment 2.1: “The work targets the structured, tabular data which should be clear at the beginning of the paper.” **Response:** We apologize for the lack of clarity regarding the data type our method targets. We have revised the introduction to explicitly state that our approach is designed for structured, tabular data. Additionally, while our method is designed for structured, tabular data, we acknowledge the importance of extending interpretability methods to unstructured data (e.g., text, images). 
Potential strategies for applying the SCD-Tree to unstructured data include: - **Feature Engineering**: Transforming unstructured data into structured formats using techniques such as text vectorization or image feature extraction. - **Hierarchical Clustering**: Using hierarchical clustering to segment unstructured data before applying the SCD-Tree. We will discuss these potential extensions further in future work. Thank you for your carefully detailed suggestions, which helped us improve the comprehensibility of the article! > Comment 2.2: “ How to build such an SCD tree for unstructured data is not explored.” **Response:** In future work we envisage building the SCD tree for unstructured data. To extend the SCD tree to unstructured data, such as images, we propose the following steps: 1. **Image Data Feature Engineering**. Use convolutional neural networks (CNNs) or other feature extraction techniques to convert images into feature vectors. Pretrained models (e.g., VGG16, ResNet) can be utilized to extract high-level features from images. 2. **Data Segmentation (SCD-Tree)**. Once the unstructured data is transformed into a structured format, the SCD-Tree can be applied in the same manner as for structured data. The tree uses the extracted features (channel or patch granularity) to segment the data based on the anomaly detection model's outputs. Subsequent GBD and rule extraction will work in the same way as in the original. > Comment 3: “What is the typical running efficiency?” **Response:** We appreciate the reviewer's interest in the running efficiency of our model. We have conducted a thorough evaluation of both the training and inference times of our method. Detailed results are presented in Appendix A.4.1, where we provide comprehensive performance metrics for various datasets. Specifically, Figure 2c in the main manuscript illustrates the typical running efficiency. 
This figure shows the training and inference times for our proposed method across different data dimensions, highlighting its scalability and efficiency. The results indicate that while the training time scales with the number of features and data points, the inference time remains consistently low due to the rule-based nature of our method. These findings demonstrate that our method is both efficient and scalable, making it suitable for deployment in high-stakes environments where timely decision-making is crucial. > Comment 4: “Considering the data drifts and perturbations, does the naive Gaussian-based assumption work for more heterogeneous drifting data?” **Response:** The naive Gaussian-based assumptions can be effectively applied to more heterogeneous drift data. - The probabilistic nature of Gaussian Processes provides a measure of uncertainty in the predictions. This is particularly useful for heterogeneous drift data, as it allows the model to quantify the confidence in its predictions. - Gaussian Processes are inherently flexible and capable of modeling complex, non-linear relationships in data, allowing GPs to adapt to local patterns and variations within the data. GPs can dynamically adjust their parameters to capture the nuances of different segments, ensuring accurate representation of the underlying distributions. - In our methodology, the SCD-Tree initially segments the data into more homogeneous regions. Within each segment, the GBD algorithm applies the Gaussian-based assumptions, which ensures that GPs are applied within localized regions where they are more likely to hold true, even if the overall data distribution is heterogeneous. The experimental results in Table 3 of Section 6.3 show that our method maintains high fidelity and robustness, indicating that the Gaussian assumptions are sufficiently flexible to handle diverse and drifting data patterns. Thank you for your constructive review. 
We hope these changes meet your expectations and look forward to any further comments you may have. --- Rebuttal Comment 1.1: Comment: Hello, Thank you so much for your reply. One more question: from my understanding, the Gaussian distribution is a naive assumption in many machine learning models, even for mixtures of Gaussians. I do not find the argument that "the naive Gaussian-based assumptions can be effectively applied to more heterogeneous drift data" convincing. Maybe the authors could provide more thoughts on that. --- Reply to Comment 1.1.1: Title: Response to Concerns Regarding the Use of Gaussian Processes in Heterogeneous and Drifting Data Scenarios Comment: Thank you for your insightful comment regarding the use of Gaussian Processes (GPs) in our framework. We understand the concern about the adequacy of Gaussian assumptions, particularly in scenarios involving heterogeneous or drifting data. Our choice of GPs, however, is grounded in their unique strengths and the specific nature of our application, which we would like to clarify further. Gaussian Processes are favored in our method primarily because of their ability to provide not just point estimates but also a measure of uncertainty through variance predictions. This feature is especially critical in high-stakes anomaly detection, where understanding the confidence in model predictions is as important as the predictions themselves. The uncertainty estimates offered by GPs allow our model to adapt more effectively to data variations, including heterogeneous drift. While it is true that a single Gaussian distribution might be too simplistic for complex data, GPs offer a more sophisticated approach by being able to model complex, non-linear relationships. GPs do not assume a single global Gaussian distribution for the entire dataset; instead, they create a smooth, flexible function that can adapt to the local structure of the data. 
This adaptability is crucial in our methodology, where the data may exhibit different behaviors in different regions of the feature space. For instance, in our application, the GBD algorithm applies GP locally within each segment identified by the SCD-Tree, allowing it to accurately model the nuances of each specific data region. In our empirical studies, presented in Tables 3 and 4 of the paper, and noise experiments shown in Table 6 in attachment of global response, we demonstrate that the integration of GPs within our framework enhances the robustness and interpretability of the model, even in the presence of heterogeneous data or drift. The performance metrics consistently show that our method maintains high fidelity and low false positive rates across various datasets, underscoring the effectiveness of GPs in this context. The ablation studies further confirm that removing the GBD component leads to a decrease in performance, which underscores the value added by the probabilistic modeling that GPs provide. While GPs have shown strong performance in our experiments, we acknowledge that no single method is universally optimal. As such, we are exploring other probabilistic models, such as Variational Inference techniques or non-parametric methods like Bayesian Non-Parametrics, which could potentially offer even greater flexibility for highly complex and non-stationary data. These explorations will be part of our future work. We appreciate your constructive feedback and are open to further discussions or suggestions on how to enhance our approach.
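To make the mean/variance thresholding idea in this reply concrete, here is a small self-contained NumPy sketch of an RBF-kernel GP posterior used to mark a "normal" region. The kernel, thresholds, and toy score function are illustrative assumptions, not the actual GBD implementation:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between row vectors in a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4, length_scale=1.0):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    k_tt = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    k_st = rbf_kernel(x_test, x_train, length_scale)
    k_inv = np.linalg.inv(k_tt)
    mean = k_st @ k_inv @ y_train
    # k(x*, x*) = 1 for this RBF, so var = 1 - diag(k_st K^-1 k_st^T)
    var = 1.0 - np.einsum('ij,jk,ik->i', k_st, k_inv, k_st)
    return mean, var

# toy "anomaly scores" over a 1-D segment: low near 0, higher toward the edges
x_train = np.linspace(-2, 2, 9).reshape(-1, 1)
y_train = x_train.ravel() ** 2 / 4.0
x_test = np.array([[0.0], [1.9]])
mean, var = gp_posterior(x_train, y_train, x_test)

# boundary rule sketch: a point is inside the "normal" region if the predicted
# score is low AND the GP is confident (small predictive variance)
score_thr, var_thr = 0.5, 0.1
inside = (mean < score_thr) & (var < var_thr)
```

Both the score threshold and the variance threshold are free parameters here; the point of the sketch is only that the GP supplies the variance term that a density estimate such as KDE would not.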
Summary: This paper introduces the Segmentation Clustering Decision Tree (SCD-Tree) and Gaussian Boundary Delineation (GBD) algorithm to interpret black-box anomaly detection models in high-stakes domains. The method segments high-dimensional data, incorporates model predictions into decision criteria, and defines flexible boundaries to distinguish normal from anomalous data points. Evaluations across diverse datasets demonstrate superior explanation accuracy, fidelity, and robustness compared to existing methods. Strengths: 1. The proposed approach of using rule-based interpretations for anomaly detection results is intriguing, addressing a critical gap in the field and offering a fresh perspective on model explainability. 2. The clarity of the empirical study is commendable, effectively showcasing the proposed method's robustness and versatility. Weaknesses: 1. The proposed method primarily builds upon some existing techniques. While the integration and adaptation of these techniques are innovative to some extent, the foundation lacks substantial originality. The approach leverages well-known concepts without introducing new theoretical insights or methodologies. 2. The clarity of the paper is compromised by the disorganized structure of the related work section. The related work section is somewhat disordered, making it challenging for readers to follow the logical flow of the discussion. To improve clarity, the sub-sections should be organized more coherently. 3. While the current evaluation demonstrates the potential of the proposed method, using more advanced anomaly detection models and realistic datasets would provide a more comprehensive and convincing validation. (i) The experiments use AE, VAE, OC-SVM, and iForest as black-box models. To better illustrate the applicability and robustness of the proposed interpretation model, it would be beneficial to include the latest state-of-the-art deep anomaly detection models. 
(ii) The curse of dimensionality is highlighted as a challenge in the paper. However, the highest dimensionality of the datasets used in the experiments is only 80. The proposed method should be tested on datasets with much higher dimensions. Technical Quality: 3 Clarity: 3 Questions for Authors: Are there any theoretical advancements or unique aspects of these methods distinguishing them from similar techniques? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work in section B.4 of the paper. They discuss several key limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you! Your feedback has been invaluable in enhancing the clarity and impact of our work. > Comment 1: The proposed method primarily builds upon some existing techniques. ... While the integration and adaptation of these techniques are innovative to some extent, the foundation lacks substantial originality. **Response:** While it is true that our method integrates existing techniques like decision trees and Gaussian Processes, the novelty lies in how these techniques are combined and applied to the problem of interpreting black-box anomaly detection models in high-risk fields like cybersecurity. We observed that traditional methods struggle with rule fitting for high-dimensional data due to their reliance on direct Euclidean distance calculations in feature space. To address this, we developed the SCD-Tree, which integrates anomaly detection model outputs directly into its splitting criteria. Unlike traditional decision trees, our approach leverages the decision-making results of black-box models to capture complex data distributions effectively, and this unsupervised splitting criterion departs from traditional entropy-based approaches. In addition, we recognized that outliers often exist within normal data in anomaly detection scenarios. Traditional methods typically have rigid decision boundaries, leading to reduced robustness and the potential for false alarms, which is a significant concern in the security field [1]. To mitigate this, we introduced the GBD algorithm, providing flexible boundaries that better accommodate data variability. By integrating GBD with the SCD-Tree, our method offers an interpretable and robust framework for anomaly detection that maintains high fidelity to the original black-box models while reducing the incidence of false alarms. > Comment 2: The clarity of the paper is compromised by the disorganized structure of the related work section. ...To improve clarity, the sub-sections should be organized more coherently. 
**Response:** We appreciate the advice on the related work. We have reorganized the related work section into clearer sub-sections, each focusing on a specific aspect of related research: Unsupervised Anomaly Detection Techniques, Interpretability in Anomaly Detection, and Issues in Existing Interpretation Approaches for Anomaly Detection Models. Due to word limits, we will provide you with the revised full related work section in the final version during the discussion phase. Thank you very much for your advice. This reorganization helps to show the logical flow more clearly. > Comment 3(i): The experiments use AE, VAE, OC-SVM as black-box models. ... it would be beneficial to include the latest state-of-the-art deep anomaly detection models. **Response:** We wanted to demonstrate the fit of our method with models as classical as possible, and the experimental results bear this out. We acknowledge the importance of using state-of-the-art deep anomaly detection models. To address this, we have extended our experimental evaluation to include recent advanced models such as VRAE and DAGMM. The results are shown in Table 3 of the global response. These models represent the latest advancements in deep anomaly detection and provide a more comprehensive validation of our method's applicability. > Comment 3(ii): The curse of dimensionality is highlighted as a challenge in the paper. However, the highest dimensionality of the datasets used in the experiments is only 80. **Response:** We appreciate your concern regarding the dimensionality of our datasets. In anomaly detection, the curse of dimensionality is particularly challenging due to the inherent sparsity and complexity of high-dimensional data. Although 80 features might appear moderate in other fields, they are considered high-dimensional within anomaly detection. 
We have carried out an exhaustive study of datasets in the field of anomaly detection and have added a summary table to the attachment of the global response. Our chosen datasets, such as CIC-IDS2017 and TON-IoT, are standard benchmarks in this domain, typically ranging from 10 to 80 features. These dimensions capture real-world complexity, with each feature signifying a specific aspect of network traffic. In practice, datasets with 80 features are sufficient for effective anomaly detection, as demonstrated by our method's high fidelity and robustness across these benchmarks. We have also added to the global response the results of our model's experiments on data of different dimensions, demonstrating its ability to handle high-dimensional data. Research has shown that more features do help in security tasks, so building datasets with a higher number of features is indeed the current trend. We are also in the process of collecting higher-dimensional datasets of network packets, and in future work, we will also try to apply our method to unstructured data to test its effectiveness on higher-dimensional data. > Comment 4: Are there any theoretical advancements or unique aspects of these methods distinguishing them from similar techniques? **Response:** Yes, our method introduces several theoretical advancements and unique aspects that distinguish it from similar techniques: - The SCD-Tree uses the outputs of anomaly detection models directly in its splitting criteria, which is a novel unsupervised approach to enhance the tree's ability to capture complex data distributions. - The GBD algorithm refines the decision boundaries within each segment using Gaussian Processes, providing a probabilistic framework that quantifies the uncertainty in boundary definitions, enhancing robustness against data drift. We have also restated the necessity and innovation in the global response, and thank you for your professional comments and suggestions. 
Reference: [1] Hassan, Wajih Ul, et al. "Nodoze: Combatting threat alert fatigue with automated provenance triage." network and distributed systems security symposium. 2019. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Some of my concerns are addressed. I would like to raise my score. One minor point: the references [8] and [18] are repetitive. --- Reply to Comment 1.1.1: Title: Thank you for your review and fair ranking, we will modify the details you raised! Comment: Thank you very much for your reply, your meticulous suggestions are very important to us. We will revise your questions in the final version of the paper and merge references [8] and [18], and we hope that you will be well.
Summary: The paper proposes a general method to extract interpretable rules from any anomaly detection model. A decision tree is learned from a black-box anomaly detector's output/scores, and the decision boundaries in the learned tree are further refined using a Gaussian Process framework. Strengths: The paper tries to address an important issue with anomaly detection: explainability. It presents results against relevant baseline methods for explainability. Weaknesses: Not all claims are rigorously supported with evidence. Main comments: - Lines 106-110: "In summary ... their operational logic." -- The only concrete evidence provided in the paper is for robustness. The paper has not systematically addressed attributes such as interpretability and non-reliance on oversimplified surrogate models. - Section 4: Rule extraction using scores from other models has been researched in earlier literature (e.g., [1, 2, 3, 4]). The current paper should discuss the differences with prior literature and whether any of those earlier techniques can be utilized here. - Section 5: A justification for using Gaussian Processes is not presented. Why not use some other model such as KDE? - Section 5: There should be an ablation experiment to show the benefits with boundary estimation vs. without. - Lines 267-268: "For the calibration of our anomaly detection models' hyperparameters, only normal instances from these datasets are utilized." -- This contradicts the claim that the algorithm is unsupervised as this statement implies that labeled normal instances are available. - Line 282 -- Fidelity and Robustness are predictive measures, not interpretability. The more relevant *interpretability* measures in [49] are 'Number of rules' and 'Average rule length'. These must be shown here with respect to interpretability. - Line 312: "...proving its resilience to data drift and..." -- 'data drift' is a different concept -- it means that over time the inherent data characteristics change permanently. 
What probably is being implied here is sample variance. The paper has not presented any evidence of being able to handle data drift. Technical Quality: 2 Clarity: 3 Questions for Authors: A justification for using Gaussian Processes is not presented. Why not use some other model such as KDE? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful feedback on our paper. > Comment 1: Lines 106-110: "In summary ... their operational logic." The paper has not systematically addressed attributes such as interpretability, non-reliance on oversimplified surrogate models. > Comment 6: Line 282 - The more relevant interpretability measures in [49] are 'Number of rules' and 'Average rule length'. **Response:** Thank you for pointing out this oversight. We had intended to use robustness to show that our model explains the black-box model well. We agree that we should provide concrete evidence for interpretability and non-reliance on oversimplified surrogate models. In the revised paper, we incorporate these metrics ('Number of rules' and 'Average rule length') into our evaluation. Additionally, we added ablation experiments to demonstrate the interpretability. The results are reported in Table 4 and Table 1 of the global response:

| Dataset | Number of Rules (AE) | Number of Rules (VAE) | Number of Rules (iForest) | Avg. Rule Length (AE) | Avg. Rule Length (VAE) | Avg. Rule Length (iForest) |
|---------|------|------|------|------|------|------|
| CIC-IDS | 22 | 15 | 17 | 4.83 | 3.03 | 4.97 |
| TON-IoT | 21 | 23 | 13 | 5.00 | 5.00 | 5.00 |
| KDDCup  | 17 | 19 | 21 | 5.00 | 4.70 | 5.00 |

> Comment 2: Section 4: The current paper should discuss the differences with prior literature and whether any of those earlier techniques can be utilized here. **Response:** We appreciate the references to earlier work. Our approach differs from prior literature primarily in the integration of the Segmentation Clustering Decision Tree with Gaussian Boundary Delineation. To clarify these distinctions, we include a more comprehensive related work section that discusses the differences between our method and previous techniques [33, 34, 35, 36], specifically highlighting the novelty and advantages of combining SCD-Tree with GBD for rule extraction. 
Additionally, we explore the potential applicability of earlier techniques to our methodology and discuss their comparative performance. > Comment 3 & Questions: Section 5: A justification for using Gaussian Processes is not presented. Why not use some other model such as KDE? **Response:** We use Gaussian Processes (GPs) for their probabilistic framework, which quantifies uncertainty at each prediction point through the predictive variance. This capability is crucial for ensuring robustness in boundary delineation by allowing us to assess confidence levels in decision boundaries. KDE is a non-parametric way to estimate the probability density function of a random variable. While KDE is effective in density estimation, it does not inherently provide a measure of uncertainty. This lack of uncertainty quantification limits the robustness of boundary delineation, especially in high-stakes anomaly detection tasks. The probabilistic nature of GPs allows us to define decision boundaries that take into account both the mean and variance of the predictions. By setting thresholds on the mean and variance, we can delineate boundaries that are not only accurate but also resilient to variations and perturbations in the data. KDE can estimate density contours, but without an inherent measure of uncertainty, it may not delineate boundaries as effectively in terms of robustness. > Comment 4: Section 5: There should be an ablation experiment to show the benefits with boundary estimation vs. without. **Response:** We agree that an ablation study is necessary to demonstrate the benefits of boundary estimation. We conducted an ablation experiment that compares our method with and without the GBD step. The results are presented in the global rebuttal and Appendix A.4.5 of the final version, and show that the rule-based GBD step improves accuracy by up to 0.12. > Comment 5: Lines 267-268: "only normal instances from these datasets are utilized." 
-This contradicts the claim that the algorithm is unsupervised as this statement implies that labeled normal instances are available. **Response:** We apologize for the confusion. Our approach is indeed unsupervised; however, for the calibration of hyperparameters, we utilized a small portion of one-class data presumed to be normal based on domain knowledge, rather than explicitly labeled instances. This ensures that the unsupervised nature of our anomaly detection approach is maintained. While using purely normal data is the ideal state, in real-world scenarios, some attack data may inevitably be present. We conducted experiments using the CIC-IDS and TON-IoT datasets to assess the efficacy of our method under varying percentages of "noisy" data. Our results demonstrate that the model maintains high levels of fidelity and robustness even as noise levels increase. |Dataset|Noise Level (%)|TPR|FD| |---------|-----------------|------|------| |CIC-IDS|1|0.91|0.943| ||6|0.897|0.935| ||8|0.89|0.928| ||10|0.856|0.92| |TON-IoT|1|0.995|0.991| ||6|0.975|0.987| ||8|0.971|0.979| ||10|0.966|0.969| > Comment 7: Line 312: 'data drift' is a different concept -- it means that over time the inherent data characteristics change permanently. What probably is being implied here is sample variance. **Response:** Thank you for bringing this error to our attention. We acknowledge the misuse of the term 'data drift'. What we intended to convey is the model's ability to handle sample variance and minor perturbations. We will correct this terminology to 'Data Variability' in the final version and provide a more accurate description of our method's resilience to sample variance. Our model's ability to handle sample variance is achieved through the dynamic nature of the Gaussian Processes used in the boundary delineation step. 
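To make this boundary-delineation idea concrete, here is a minimal generic GP-regression sketch, not the paper's implementation: the normality scores, kernel settings, and thresholds below are illustrative assumptions only. It fits a GP to black-box "normality scores" along one dimension and accepts as normal only points whose upper confidence bound (mean plus two standard deviations) stays below a score threshold, so both the mean and the variance shape the boundary.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2, length_scale=1.0):
    # Standard GP regression posterior: predictive mean and per-point variance.
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    var = np.diag(K_ss - K_s.T @ K_inv @ K_s)
    return mean, var

# Toy normality scores (e.g. a black-box model's reconstruction error)
# observed along one feature dimension of a cluster.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.10, 0.12, 0.11, 0.50, 0.90])  # error rises toward anomaly
x_test = np.linspace(0.0, 4.0, 9)
mean, var = gp_posterior(x_train, y_train, x_test)

# Delineate the "normal" region using both mean and uncertainty: accept
# points whose upper confidence bound stays below an error threshold.
upper = mean + 2.0 * np.sqrt(np.maximum(var, 0.0))
normal_region = x_test[upper < 0.4]
```

Because the bound uses the variance, regions far from observed data (high uncertainty) are excluded even if the predicted mean is low, which is the robustness property a plain KDE contour would not give.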
GPs provide a probabilistic framework that can adapt to changes in the data distribution by continuously updating the mean and variance estimates based on new data points. Ablation experiments demonstrated the effectiveness of the method in dealing with sample variance. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my comments. A couple of my concerns remain: 1. The new results for interpretability do not compare against benchmark algorithms. Hence it is hard to say whether the proposed one is better. 2. Even if a little data is being used for calibration, it is still labeled data available at the time of training. Hence 'weakly supervised' might be more appropriate. Overall, the authors' response has satisfied most of my concerns and hence I will increase my score. --- Reply to Comment 1.1.1: Title: Clarifications on Interpretability Benchmark Comparisons and Terminology Adjustment to 'Weakly Supervised' Comment: Thank you for your valuable feedback on the need to compare our interpretability results against benchmark algorithms. > comment 1: "The new results for interpretability do not compare against benchmark algorithms. Hence it is hard to say whether the proposed one is better." We apologize that we cannot immediately provide the results of the comparison against different benchmarks due to time constraints, but in the revised manuscript we will include comparisons of our method with well-established interpretability methods, such as LIME (Local Interpretable Model-agnostic Explanations) [1] and SHAP (SHapley Additive exPlanations) [2]. Both LIME and SHAP are widely recognized in the literature for providing interpretable models, especially in the context of black-box models. Specifically, LIME generates local linear models that approximate the decision boundaries of the original black-box model, while SHAP leverages Shapley values from cooperative game theory to attribute feature importance. 
We conducted additional experiments to evaluate the interpretability of our method relative to these benchmarks. The comparison metrics include Number of Rules, Average Rule Length, and Fidelity. Our findings, which will be presented in the revised manuscript, demonstrate that the SCD-Tree combined with GBD not only generates fewer rules with shorter average lengths but also maintains higher fidelity compared to these benchmarks. These improvements are particularly significant in high-dimensional datasets, where traditional methods like LIME and SHAP may generate overly complex or less intuitive explanations. References: [1] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). [2] Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765-4774). > comment 2: Even if a little data is being used for calibration, it is still labeled data available at the time of training. Hence 'weakly supervised' might be more appropriate. We appreciate the reviewer's insight regarding the use of labeled data for calibration and the terminology used to describe our method. This is indeed a crucial aspect of methodological clarity and accuracy. Our method is designed to operate primarily in an unsupervised manner, where the majority of the data used is unlabeled. However, as you correctly noted, a small portion of labeled data is utilized during the calibration phase to fine-tune the hyperparameters and thresholds within the model. This process ensures that the model is optimized for the specific characteristics of the dataset, enhancing its overall performance. Given this use of labeled data, we agree that 'weakly supervised' is a more accurate descriptor. 
The term ‘weakly supervised’ is widely accepted in the literature to describe methods that rely predominantly on unlabeled data but incorporate some level of supervision, often minimal, to guide the learning process. In light of this, we have revised the manuscript to describe our method as ‘weakly supervised’ rather than ‘unsupervised.’ This change not only accurately reflects the methodological approach but also aligns our work with the broader literature on weakly supervised learning. We believe that this adjustment will help clarify the nature of our approach and its reliance on minimal labeled data for calibration.
Summary: In high-stakes sectors like network security and IoT security, accurately distinguishing between normal and anomalous data is critical. This paper introduces a novel method to interpret decision-making processes of anomaly detection models without labeled attack data. It presents the Segmentation Clustering Decision Tree (SCD-Tree) and the Gaussian Boundary Delineation (GBD) algorithm. The SCD-Tree enhances clustering by integrating model predictions, while the GBD algorithm defines boundaries within segmented distributions, delineating normal from anomalous data. This approach addresses the challenges of dimensionality and data drift, ensuring robustness and flexibility in dynamic environments. The method transforms complex anomaly detection into interpretable rules, demonstrating superior explanation accuracy, fidelity, and robustness compared to existing methods, proving effective in high-stakes scenarios where interpretability is essential. Strengths: 1. By addressing the curse of dimensionality, the approach effectively segments high-dimensional data into more manageable parts, allowing for better clustering and anomaly detection. 2. The method ensures robustness against data drift and perturbations by using flexible boundary fitting, which adapts to changes in data distribution over time. 3. The method is well formulated and easy to understand. Weaknesses: 1. While the method shows robustness across several datasets, it might not perform equally well in all types of data or anomaly detection scenarios, especially those vastly different from the ones tested. 2. The effectiveness of the interpretative rules depends heavily on the quality of the initial black-box model. Technical Quality: 3 Clarity: 3 Questions for Authors: What are the limitations of the SCD-Tree and GBD algorithms when dealing with extremely large datasets or real-time data streams? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been discussed in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful feedback. > Question 1: While the method shows robustness across several datasets, it might not perform equally well in all types of data or anomaly detection scenarios. **Response:** We acknowledge the concern regarding the generalizability of our method to vastly different types of data and anomaly detection scenarios. To address this, we have conducted additional experiments on datasets from domains not covered in our initial evaluation. The results, as shown in Table 2 of the global response attachment, indicate that our method maintains high levels of robustness and interpretability across datasets in various fields. However, we also recognize the inherent limitations of any single method's universal applicability. Therefore, we propose the following solutions to further enhance the generalizability of our approach and will implement them in future work: - Adaptive Thresholding. Implementing adaptive thresholding mechanisms that dynamically adjust based on the characteristics of the input data can improve performance across diverse scenarios. - Domain-Specific Fine-Tuning. Developing domain-specific fine-tuning protocols to adjust the SCD-Tree and GBD algorithms based on the unique properties of different datasets. > Question 2: The effectiveness of the interpretative rules depends heavily on the quality of the initial black-box model. **Response:** You hit the nail on the head: this is exactly what our work focuses on! We appreciate your insight regarding the dependency of our interpretative rules on the initial black-box model's quality. Indeed, the primary motivation behind our research is to enhance the interpretability of black-box models in anomaly detection. Our goal is to translate the black-box model into understandable rules by using a rule-based approach, which is essential for high-risk domains. 
The aim is not only to explain the decision-making processes of these models but also to improve their transparency and trustworthiness. However, we would like to highlight several key aspects of our methodology that ensure its effectiveness, even when the black-box model is relatively small or less complex: - We have unified the output standards of the black-box models by using metrics such as Mean Squared Error (MSE) and threshold values to represent data normality. These standardized metrics serve as inputs for the SCD-Tree, ensuring consistent and reliable decision-making regardless of the black-box model's complexity. - The SCD-Tree effectively segments the data into smaller, more manageable clusters. This segmentation reduces the dependency on the black-box model's complexity by isolating simpler patterns within each cluster. - We conducted extensive experiments using various black-box models (e.g., AE, VAE, Isolation Forests, One-Class SVM) of different sizes and complexities. The results in section 6.3 demonstrate that our method maintains high interpretability and accuracy on the currently dominant neural network models in the anomaly detection field, even with smaller models. For instance, our experiments with a simple autoencoder model yielded interpretative rules with high fidelity and robustness, as evidenced by the performance metrics provided in Table 3 and Table 4(1). > Question 3: What are the limitations of the SCD-Tree and GBD algorithms when dealing with extremely large datasets or real-time data streams? **Response:** The SCD-Tree and GBD algorithms, while effective, have certain limitations when handling extremely large datasets or real-time data streams. Here are the key challenges: 1. The time complexity of Gaussian Processes (GPs) is relatively high, $O(N^3)$ in the number of training points, which can lead to long processing times on large datasets. 
**However**, we have used the Segmentation Clustering Decision Tree (SCD-Tree) for spatial partitioning, so this issue only arises when there is a large amount of data within a particular subspace. By partitioning the data, we localize the high complexity to smaller regions, thus making the overall process more manageable. 2. Our hierarchical model involves several steps: running the black-box model, applying the SCD-Tree, performing Gaussian Process delineation, and boundary acquisition, which makes the process lengthy. However, once the data has been partitioned by the SCD-Tree, each subspace can be processed in parallel, which reduces the training time. However, the aforementioned challenges primarily occur during the training phase. In real-world scenarios, once the model has been trained and the rules have been inferred, anomaly detection simply requires checking whether a sample satisfies these rules. The rule-based interpretation remains efficient and scalable for real-time applications and is well suited to high-performance applications (like P4 [1], reaching up to 100Gbps throughput by integrating rule-based models [2]), which allows for the implementation of data plane processing with high efficiency and low latency. Our model is designed to handle new data continuously and achieve incremental rule updates. Therefore, even when encountering new types of data, there is no need to re-train the entire model. Instead, we can incrementally update the rules to adapt to new data, ensuring the model remains accurate and up-to-date without extensive re-training. This is highly efficient in real-world use. Thank you for your constructive feedback, which has been invaluable in guiding these enhancements. References: [1] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese, and D. 
Walker, “P4: programming protocol-independent packet processors,” ACM SIGCOMM Computer Communication Review, 44(3):87–95, 2014 [2] R. Li, Q. Li, Y. Zhang, D. Zhao, X. Xiao, and Y. Jiang, “Genos: General in-network unsupervised intrusion detection by rule extraction,” --- Rebuttal Comment 1.1: Comment: Thank you for the reply. It resolved my concerns. I will keep my rating positive.
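As a toy illustration of the rule-based inference step described in the rebuttal above, here is a sketch under assumed conventions: the rule format, feature names, and intervals are hypothetical, not taken from the paper. Each extracted rule is modeled as a conjunction of interval constraints, and a sample is judged normal if it satisfies every constraint of at least one rule; incremental rule updates then amount to appending or replacing entries in the rule list.

```python
# Hypothetical rule format: each rule is a dict mapping a feature name
# to an allowed [lo, hi] interval; a sample is "normal" if it satisfies
# every constraint of at least one rule.
rules = [
    {"duration": (0.0, 120.0), "bytes": (0.0, 1e6)},  # short, low-volume flows
    {"duration": (0.0, 10.0), "bytes": (0.0, 1e8)},   # very short bursts
]

def is_normal(sample, rules):
    # Disjunction over rules, conjunction over interval constraints.
    return any(
        all(lo <= sample[feat] <= hi for feat, (lo, hi) in rule.items())
        for rule in rules
    )

print(is_normal({"duration": 5.0, "bytes": 2e7}, rules))    # True (rule 2)
print(is_normal({"duration": 500.0, "bytes": 2e7}, rules))  # False (anomalous)
```

Each check is a handful of comparisons per rule, which is why, as the rebuttal argues, inference stays cheap even when training (GP fitting, tree building) is expensive, and why such rules map naturally onto match-action tables in data-plane hardware.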
Rebuttal 1: Rebuttal: We are very glad that our efforts could clarify some of your concerns, and we have really learned a lot from your valuable replies! Your suggestions are very meaningful and have helped us improve our work a lot! We appreciate the reviewers' recognition of our paper's contribution to advancing anomaly detection in high-stakes environments ("The method is well formulated" by ZxG9, "address an important issue with anomaly detection" by kdm5, "addressing a critical gap in the field and offering a fresh perspective on model explainability" by 7Q5n, "addresses the challenge of distinguishing between normal and anomalous structured data" by dX38). We are very happy that the reviewers were interested in the innovativeness of our method and the reasons for using it! By providing explanations that are both accurate and comprehensible, our method increases the trustworthiness and reliability of anomaly detection systems in environments where precision and transparency are crucial. **Novelty.** Our method uniquely combines the Segmentation Clustering Decision Tree (SCD-Tree) with Gaussian Boundary Delineation (GBD), offering a novel approach to enhancing interpretability and robustness in unsupervised anomaly detection. - We observed that traditional methods struggle with rule fitting in high-dimensional data due to the complexity and sparsity of such datasets. To tackle this, we developed the SCD-Tree, which differs from conventional decision trees that rely on Euclidean distances for splits. Instead, our approach integrates predictions from black-box models directly into the tree's splitting criteria, enhancing its ability to identify distinct data patterns and extract meaningful rules. This innovative unsupervised method overcomes the limitations of entropy-based calculations, enabling better utilization and comprehension of black-box model insights. 
The SCD-Tree partitions data into meaningful segments, ensuring each segment represents a distinct data pattern, facilitating the extraction of interpretable rules. Data segmentation is also an important way in which our model is able to handle high-dimensional data. - In anomaly detection scenarios, normal data often contain outliers, and existing rule extraction methods with rigid boundaries fail to accommodate these variations, leading to issues like "false alarm/alert fatigue," a significant concern in cybersecurity. To address this, we incorporated Gaussian Boundary Delineation to provide a probabilistic framework that defines boundaries and quantifies the associated uncertainty, enhancing model robustness by adapting to data variability. This integration ensures that boundaries are flexible and resilient, accommodating shifts in data patterns and improving overall interpretability. - Our method consistently achieves high fidelity, which shows that the rule-based explanations closely align with the original model predictions. The system's robustness is evident in its stable performance across diverse datasets and evolving environments. High TPR and TNR metrics further confirm the effectiveness of our algorithms, indicating accurate identification of normal and anomalous instances. These results underscore the accuracy and reliability of our method, validating its deployment suitability in high-stakes environments where precision is paramount. **Additional Experiments.** To address reviewers' questions and reinforce our claims, we conducted additional experiments using datasets from different domains (CIC-IoT from IoT, Credit Card Fraud Detection from finance, and the Breast Cancer Wisconsin dataset from healthcare) and state-of-the-art anomaly detection models (LSTM, OC-NN, VRAE, DAGMM). 
Our model achieves nearly the highest TPR and fidelity on datasets from different domains and consistently shows high accuracy, with VRAE achieving a TPR of 0.9643 and DAGMM a TPR of 0.9931 on the CIC-IDS dataset, illustrating its robustness across cutting-edge models. Ablation experiments showing that our method achieves a 0.12-point improvement further demonstrate its applicability for interpretation. At the same time, we provide information showing the model's interpretability capabilities in the field of anomaly detection. The results of these additional experiments are presented in a detailed table in the attached PDF. **Related Work.** Based on the reviewers' suggestions and requests, we have refactored the related work section and added comparisons of the model's advantages against the latest work. The modified "Related Work" section is structured to provide a comprehensive overview of existing research in anomaly detection and model interpretability, specifically addressing: Overview of Anomaly Detection Models, Interpretability in Anomaly Detection, and Issues in Existing Interpretation Approaches. In the final version, we will also add to the Appendix some experimental results that we promised to the reviewers, to improve the quality of our work. Again, thank you for your comments. We have made efforts to address the concerns raised, and we hope our revisions meet your expectations. Should there be any further questions or if additional clarifications are needed, we are more than willing to engage in further discussions or conduct additional experiments as necessary during the review phase. Pdf: /pdf/a0608aad73ac96581cad371480491edbdef1531b.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
TPC: Test-time Procrustes Calibration for Diffusion-based Human Image Animation
Accept (poster)
Summary: This paper proposes diffusion guidance (Test-time Procrustes Calibration, TPC) for human image animation. TPC incorporates an auxiliary branch into the diffusion conditioning process and provides a calibrated reference image latent. Experimental results show the effectiveness of the proposed method. Strengths: 1. This paper studies an interesting problem of human image animation, that is, when the given human image and pose image have shape differences, the performance of the original model is poor. 2. The structure of this paper is well-organized and easy to follow. 3. The experimental results show the effectiveness of the proposed method. Weaknesses: 1. In Figure 1a, I acknowledge that there may be a large shape difference between a given image that needs to be animated and a guided image. In this case, I think the most direct and useful way is to find a way to align the two images, and I don't think it is very difficult to align the two images. So why do we need the proposed TPC method? 2. In line 54, the method of injecting guided images into the model should be more than just cross-attention. Would TPC work if the human image animation methods do not use cross-attention to inject the guide image into the model? 3. The last paragraph of the introduction introduces the proposed TPC approach. However, this part analyzes the gains that TPC can bring, and I think this part is more suitable for the methods or experiments. 4. As for the TED man in the demo of the experimental results, why does the animated video only show the man's upper body? 5. For the unseen domain shown in the provided demo, I think in human image animation, we'll give the model images that need to be animated and guided images. Also, the base model is trained on a lot of images of human bodies, and the model should have seen something like this before, so calling it an unseen domain isn't accurate. 
For this kind of scene, there is a big gap between the character identity in the generated video and the given image, and TPC seems to have no way to help the original model maintain the character identity. Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the above. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. The necessity of TPC 2. The experimental results do not help the original model retain the details of the input image. 3. Introduction needs to be reorganized Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We address your concerns below. [**Q1**] I think the most direct and useful way is to find a way to align the two images, and I don't think it is very difficult to align the two images. So why do we need the proposed TPC method? [**A1**] The direct way of inputting an aligned reference image is sub-optimal, as it fails to maintain temporal consistency and preserve the background. Figure J of our attached PDF shows the experimental results for this. For right-to-left walking motions, a single reference image cannot be aligned with both target positions, resulting in temporal inaccuracies, and at the same time, the background is not preserved. The TPC method addresses these issues by performing frame-wise calibrations to match all target frames and providing these as additional conditions for the diffusion model. To preserve the background, TPC filters out backgrounds in the calibrated frames (Line 214). The Procrustes approach used in TPC is more effective for calibration than other transformations, as demonstrated in Figure G and Table C, which show qualitative and quantitative superiority over methods like Affine [1,2], Perspective [3,4], and Global Pose Normalization (GPN) [5]. **Table C.** Ablation study of various transformation methods for calibration, reported in a format of foreground/background (validation splits, average score of compositional alignment/misalignment). PS: Perspective. 
||||||||| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:| ||L1↓$_{\times \textrm{E-04}}$|PSNR↑|SSIM↑|LPIPS↓|FID↓|FID-VID↓|FVD↓| |w/o calibration|4.51/3.57|27.81/29.15|0.689/0.758|0.304/0.259|38.3/27.7|37.2/29.4|184/162| |Affine|3.92/3.25|28.61/29.32|0.709/0.767|0.281/0.249|29.1/26.1|36.7/27.6|177/153| |PS|3.46/3.06|28.88/29.37|0.715/0.769|0.279/0.247|28.6/25.9|35.2/27.4|171/151| |GPN|3.12/2.83|29.06/29.48|0.731/0.776|0.274/0.244|27.3/25.6|31.5/26.9|164/145| |TPC (Ours)|**3.08**/**2.79**|**29.12**/**29.50**|**0.734**/**0.782**|**0.273**/**0.241**|**26.9**/**25.1**|**31.2**/**26.4**|**162**/**142**| [1] FedMed-ATL: Misaligned Unpaired Cross-Modality Neuroimage Synthesis via Affine Transform Loss, ACM MM'22 [2] Adaptive Affine Transformation: A Simple and Effective Operation for Spatial Misaligned Image Generation, ACM MM'22 [3] Shape-Preserving Half-Projective Warps for Image Stitching, CVPR'14 [4] Image stitching with perspective-preserving warping, ISPRS'16 [5] Everybody Dance Now, ICCV'19 [**Q2**] In line 54, the method of injecting guided images into the model should be more than just cross-attention. Would TPC work if the human image animation methods do not use cross-attention to guide image injection into the model? [**A2**] Yes, it works. Figure K demonstrates that even when using feature addition as a simple injection method, it is effective, though not as effective as the cross-attention method. In particular, the addition method is effective when the shapes are positionally aligned between the calibrated frame and the target frame. Furthermore, Table D demonstrates the quantitative effectiveness of both. **Table D.** Ablation study of different injection methods for applying TPC, reported in a format of foreground/background (validation splits, average score of compositional alignment/misalignment). 
||||||||| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:| ||L1↓$_{\times \textrm{E-04}}$|PSNR↑|SSIM↑|LPIPS↓|FID↓|FID-VID↓|FVD↓| |w/o TPC|4.51/3.57|27.81/29.15|0.689/0.758|0.304/0.259|38.3/27.7|37.2/29.4|184/162| |w/ TPC (addition)|3.16/2.92|29.01/29.37|0.727/0.771|0.283/0.249|28.2/25.9|32.6/26.9|171/152| |w/ TPC (cross-attention)|3.08/2.79|29.12/29.50|0.734/0.782|0.273/0.241|26.9/25.1|31.2/26.4|162/142| [**Q3**] The last paragraph of the introduction is more suitable for methods or experiments. [**A3**] Yes, we will move it to the experiment section. [**Q4**] As for the TED man in the demo (Figure 8) of the experimental results, why does the animated video only show the man's upper body? [**A4**] This is because the input target motion only includes the upper body. The skeleton's green and blue branches, connected to the pelvis, are part of the upper body. For reference, please see the full-body skeletons in the upper sample of the woman walking. [**Q5**] For the unseen domain shown in the provided demo, I think in human image animation, we'll give the model images that need to be animated and guided images. Also, the base model is trained on a lot of images of human bodies, and the model should have seen something like this before, so calling it an unseen domain isn't accurate. For this kind of scene, there is a big gap between the character identity in the generated video and the given image, and TPC seems to have no way to help the original model maintain the character identity. [**A5**] The unseen domains we aim to address are samples containing compositional misalignment (i.e., samples of human shape misalignments in terms of scale and rotation between the reference and target images). Human image animation datasets (e.g., TikTok, TED-talks) ensure this alignment, but real-world scenarios often involve compositional misalignments, making current systems vulnerable to these samples. 
To address this problem, we introduce a calibration branch for the diffusion model, demonstrating its effectiveness at test-time without additional training. To quantitatively validate our approach, we have collected an additional test set specifically for compositional misalignment samples, alongside the original test set (as mentioned in Line 221, with source links available in our supplementary material). We acknowledge the redundant use of the term "unseen" in Figure 10 and the demo. This term actually refers to the reference image obtained from the Text-to-Image (T2I) model. We will update the terminology to "T2I-synthesized" to avoid confusion. --- Rebuttal Comment 1.1: Comment: Dear Reviewer KKQ2: Thanks for reviewing this work. Would you mind checking the authors' feedback to see whether it resolves your concerns, or whether you have further comments? Best, AC --- Rebuttal 2: Comment: Thank you for your response. Based on the comments of other reviewers, I decided to adjust the score from 3 to 4. The main reason is that, like reviewer mDg5, I find this method to be an enhanced version of the redirection technique proposed in AnimateAnyone, which is not novel enough. In addition, most of the videos displayed in the demo have artifacts, such as the flicker of the woman's finger at 42s, the flicker of clothes at 49s, and the shaking of the background at 54s. --- Rebuttal 3: Comment: Thank you for your feedback. It seems you may have some reservations about giving us a score of 5, possibly due to concerns about novelty (which currently leads to a rescaling to 4). We would like to clarify that the system presented in our manuscript represents a generalized version of diffusion-based human image animation systems and is not intended to replicate AnimateAnyone. As shown in Figure B of our attached PDF, the image encoder and pose encoder in our system correspond to the Appearance Encoder and ControlNet in MagicAnimate, and to the CLIP Encoder and ControlNet in DisCo. 
Moreover, the experiments in Table 1 of our manuscript demonstrate our module's effectiveness across four different systems. In this regard, we hope this clarification addresses some of your concerns, and we ask that you reconsider the contributions of our proposed module.
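For reference, the shape-preserving similarity alignment underlying Procrustes calibration can be sketched for 2-D keypoints as follows. This is a generic orthogonal-Procrustes (Umeyama-style) solution, not the authors' released code: it recovers the scale, rotation, and translation that best map reference keypoints onto target-pose keypoints, which is the kind of transform that, unlike affine or perspective warps, cannot distort the body shape.

```python
import numpy as np

def procrustes_align(ref_pts, tgt_pts):
    # Similarity transform (rotation + isotropic scale + translation) that
    # best maps reference 2-D keypoints onto target keypoints in the
    # least-squares sense. Shape-preserving, unlike affine/perspective warps.
    mu_r, mu_t = ref_pts.mean(axis=0), tgt_pts.mean(axis=0)
    A, B = ref_pts - mu_r, tgt_pts - mu_t
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:
        d[-1] = -1.0                      # guard against reflections
    R = (U * d) @ Vt                      # optimal rotation
    s = (S * d).sum() / (A ** 2).sum()    # optimal isotropic scale
    t = mu_t - s * (mu_r @ R)             # optimal translation
    return s, R, t

# Toy check: the "target pose" is the reference rotated 90 degrees,
# scaled by 2, and translated.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c, sn = np.cos(np.pi / 2), np.sin(np.pi / 2)
Q = np.array([[c, sn], [-sn, c]])
tgt = 2.0 * ref @ Q + np.array([3.0, 4.0])

s, R, t = procrustes_align(ref, tgt)
aligned = s * (ref @ R) + t   # coincides with tgt up to numerical error
```

In a frame-wise setting such as the one discussed in this thread, one would re-solve this for each target frame's keypoints and apply the resulting transform to the reference image before conditioning the diffusion model.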
Summary: In this paper, the authors propose TPC, an alignment algorithm for human image animation systems. In such systems, optimal precision is currently achieved only when the physical compositions of the human shapes in the reference image and target pose frame are aligned. Misalignment leads to a significant decline in fidelity and consistency. To address this issue, the authors propose Test-time Procrustes Calibration (TPC). TPC provides a calibrated reference image for the diffusion model, enhancing its ability to understand the correspondence between human shapes in the reference and target images. This technique can be applied to any diffusion-based image animation system in a model-agnostic manner, improving effectiveness at test time without requiring additional training. Strengths: (1) TPC bridges the gap in shapes between reference and target by providing a correspondence guidance latent. With this guidance condition, diffusion-based animation systems achieve robustness to fidelity variations and maintain temporal consistency among frames. (2) TPC is simple and works in a model-agnostic manner without additional training. Weaknesses: (1) The implementation details of Iterative Propagation are not fully comprehensive, and I suggest the authors make an algorithm table to depict the concrete process of the proposed Iterative Propagation. (2) There are some skeleton alignment algorithms in previous works, such as Everybody Dance Now. The authors should conduct additional quantitative and qualitative comparison experiments between TPC and other previous skeleton alignment algorithms. (3) According to my observations of your provided demos, the backgrounds of some rotation cases undergo considerable changes during the animation process. The background details don't align with the given reference image, such as the layout and color distribution. 
For instance, in the galaxy case, the positions of the stars in the background of the animated video are not the same as those in the original given reference image. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) In some particular situations, the protagonist in the given reference image only shows half of the whole body, while the driven target pose shows the whole body parts. Can TPC handle this sort of situation? Could you please give some quantitative and qualitative experiment results regarding this situation? (2) I am unsure how you can estimate or ensure the accuracy of the subset X. Could you please provide some visual comparison results in terms of different subsets with different numbers of key points? Additionally, could you provide the details of the common filtering process? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We address your concerns below. [**Q1**] I suggest an algorithm table of the proposed Iterative Propagation. [**A1**] Figure E of our attached PDF presents the algorithm table for Iterative Propagation. We will incorporate this into our manuscript. Thank you. [**Q2**] The authors should conduct additional quantitative and qualitative comparison experiments between TPC and other previous skeleton alignment algorithms (e.g., Everybody Dance Now [1]). [**A2**] Our investigation identified several transformation (i.e., alignment) methods, which can be categorized into two types: (1) shape-distorting (e.g., Affine [2,3], Perspective [4,5]) and (2) shape-preserving methods (e.g., TPC (Ours), Global Pose Normalization (GPN) [1]). Qualitative results in Figure G show that shape-distorting transformations can accurately align to the targeted region but tend to cause significant information loss in other areas due to distortion, leading to low fidelity. Shape-preserving methods generally performed better, but GPN struggles with rotational motions due to its inability to handle rotation. Quantitative analysis in Table B confirms that TPC is the most effective. We will include these comparisons in our manuscript. Thank you. **Table B.** Ablation study of various transformation methods for calibration, reported in a format of foreground/background (validation splits, average score of compositional alignment/misalignment). PS: Perspective. 
|Method|L1↓$_{\times \textrm{E-04}}$|PSNR↑|SSIM↑|LPIPS↓|FID↓|FID-VID↓|FVD↓|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|w/o calibration|4.51/3.57|27.81/29.15|0.689/0.758|0.304/0.259|38.3/27.7|37.2/29.4|184/162|
|Affine|3.92/3.25|28.61/29.32|0.709/0.767|0.281/0.249|29.1/26.1|36.7/27.6|177/153|
|PS|3.46/3.06|28.88/29.37|0.715/0.769|0.279/0.247|28.6/25.9|35.2/27.4|171/151|
|GPN|3.12/2.83|29.06/29.48|0.731/0.776|0.274/0.244|27.3/25.6|31.5/26.9|164/145|
|TPC (Ours)|**3.08**/**2.79**|**29.12**/**29.50**|**0.734**/**0.782**|**0.273**/**0.241**|**26.9**/**25.1**|**31.2**/**26.4**|**162**/**142**|

[1] Everybody Dance Now, ICCV'19
[2] FedMed-ATL: Misaligned Unpaired Cross-Modality Neuroimage Synthesis via Affine Transform Loss, ACM MM'22
[3] Adaptive Affine Transformation: A Simple and Effective Operation for Spatial Misaligned Image Generation, ACM MM'22
[4] Shape-Preserving Half-Projective Warps for Image Stitching, CVPR'14
[5] Image stitching with perspective-preserving warping, ISPRS'16

[**Q3**] The backgrounds of some rotation cases don’t align with the reference image. For instance, in the galaxy case, the positions of the stars in the background of the animated video are not the same as those in the original reference image. [**A3**] Thank you for your observation. The galaxy case was mistakenly generated without applying a background mask in our method, which led to the observed artifacts in background preservation. As noted in our implementation details (Line 214), we eliminate background intervention in calibrated images. When the background mask is correctly applied, it ensures alignment of background details with the reference image. Figure H shows the corrected sample with proper background preservation. We will update this and apologize for any confusion it caused. [**Q4**] In some particular situations, the protagonist in the given reference image only shows half of the whole body, while the driven target pose shows the whole body parts. 
Can TPC handle this sort of situation? Give some quantitative and qualitative experiment results. [**A4**] Yes, Figure 13 in the Appendix of our paper illustrates the resulting frames in such scenarios (specifically, the sample of a woman wearing white clothes). Existing models generate unnatural lower bodies from the upper body reference. However, integrated with TPC, they seamlessly blend the lower body with the upper body's style. For quantitative analysis, we collected such samples as another test set where the reference and target human shapes are not aligned in terms of rotation or scale, indicating compositional misalignment. Table 1 provides a quantitative evaluation of compositional misalignment samples. TPC significantly improves the quality of these samples, making it comparable to that of composition-aligned samples. [**Q5**] I am unsure how you can estimate the accuracy of the subset X. Provide some visual comparison results in terms of different subsets with different numbers of key points and provide the details of the common filtering process. [**A5**] Yes, Figure I in our attached PDF shows visual comparisons with different numbers of keypoints, demonstrating optimal performance with 6 keypoints. Using 4 or 5 keypoints was also effective in this sample. The selection of the optimal keypoint subset involves two steps (code available in our supplementary material). 1. Filtering Keypoints: Keypoints from the reference and target images are filtered to retain only commonly visible ones. Points with a low prediction score (< 0.3; invisible points usually score below 0.3) are removed (using OpenPose for keypoint prediction). The remaining points common to both images form the common sets $X$ and $Y$, representing the same body parts. 2. Selecting Optimal Subset: From the common sets, we generate all possible subsets with four or more points (for computational efficiency). 
Using Procrustes Analysis, we obtain transformation parameters (i.e., scaling, rotation, translation) from each subset $\mathbf{x} \subset X$ to corresponding subset $\mathbf{y} \subset Y$ and transform the reference image into the calibrated image. We then measure the overlap accuracy of human shapes between the calibrated and target images using the pixel-wise IoU score (with SAM for segmentation). The subset with the highest score is selected as the optimal subset $\mathbf{x}^*$. For computational efficiency, this process is also available with a batch-wise implementation. --- Rebuttal Comment 1.1: Comment: Dear Reviewer c8u6: Thanks for reviewing this work. Would you mind to check authors' feedback and see if it resolves your concerns or you may have further comments? Best, AC --- Rebuttal Comment 1.2: Comment: I have carefully read the author's response. It has addressed all of my concerns. I will keep my original score.
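The two-step selection described in A5 above can be sketched in code. This is a minimal sketch under stated assumptions, not the authors' released implementation: `procrustes_similarity` is the standard Umeyama least-squares solution for a 2D similarity transform, and a keypoint reprojection error stands in for the paper's SAM-based pixel-wise IoU score (which requires segmentation masks). The function names and the `min_pts`/`thresh` parameters are illustrative.

```python
from itertools import combinations
import numpy as np

def procrustes_similarity(x, y):
    """Least-squares similarity transform (Umeyama): y ~= s * x @ R.T + t."""
    mu_x, mu_y = x.mean(0), y.mean(0)
    xc, yc = x - mu_x, y - mu_y
    U, S, Vt = np.linalg.svd(yc.T @ xc)    # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (xc ** 2).sum()
    t = mu_y - s * mu_x @ R.T
    return s, R, t

def select_optimal_subset(kp_ref, kp_tgt, score_ref, score_tgt,
                          min_pts=4, thresh=0.3):
    """Step 1: keep keypoints visible (score >= thresh) in both images.
    Step 2: fit a similarity transform on every subset of >= min_pts points
    and keep the subset whose transform best maps all common points
    (reprojection error as a cheap proxy for the SAM-based IoU score)."""
    common = [i for i in range(len(kp_ref))
              if score_ref[i] >= thresh and score_tgt[i] >= thresh]
    best, best_err = None, np.inf
    for k in range(min_pts, len(common) + 1):
        for sub in combinations(common, k):
            idx = list(sub)
            s, R, t = procrustes_similarity(kp_ref[idx], kp_tgt[idx])
            err = np.linalg.norm(s * kp_ref[common] @ R.T + t - kp_tgt[common])
            if err < best_err:
                best, best_err = idx, err
    return best, best_err
```

Replacing the reprojection error with the mask-IoU score only changes the scoring line; the subset enumeration and Procrustes fit stay the same.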
Summary: This paper starts from an interesting problem: what happens when the motion condition and the reference image are not well aligned. It analyses the robustness of an existing human animation network given different levels of misalignment, and tries to find the underlying cause through attention maps. Then this paper proposes TPC, a test-time calibration module to mitigate this issue. This module provides a calibrated latent feature as an additional condition to the diffuser. The calibrated latent is optimized via Procrustes Warping, and Iterative Propagation is proposed to improve temporal consistency among calibrated features. Strengths: This paper is among the most innovative work I've ever seen in the human animation area. It is a curiosity-driven work starting with an interesting problem. The solution is neat and elegant. The presentation is clear and easy to follow. The analysis in Fig 1 and Fig 2 is very interesting and insightful. Technically, using keypoints as the evidence to align shapes is quite direct and reasonable. Using Procrustes analysis to optimize a transformation is sound. Weaknesses: I did not find obvious weakness, except that the demo results still contain some artifacts. But it is mainly due to the baseline MA or Disco. I recommend the authors to try some new methods as the baseline that have higher quality. The method is insightful, but only for human animation, a small area. It would greatly enhance the value of this paper if it can be applied to some more general tasks. Technical Quality: 4 Clarity: 4 Questions for Authors: What if directly calculating the transformation between the reference image and the target pose via some heuristic evidence? For example, considering the torso part, i.e., keypoints set {6,7,13,12}, since the torso keeps rigid to some extent, compared to arms and legs. Then using the vector (mid(6,7) , mid(13, 12)) to compute the rotation angles, the perimeter for scale, the center coordinate for shifting. 
I know this design is not optimal. But how is the result with such a simple idea? I noticed Table 2, the "Linear" and "Affine" setting, but it seems not exactly the same one. Can this method be potentially applied to broader tasks that require spatial alignment between some input variables? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The Broader Impacts and Ethics Statements have been adequately addressed. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your recognition of our work. We address your concerns below. [**Q1**] Demo results still contain some artifacts. But it is mainly due to the baseline MA or Disco. I recommend the authors to try some new methods as the baseline that have higher quality. [**A1**] Yes, as shown in Figure F of our attached PDF, the application of TPC with more recent models, such as Champ (to be published in ECCV'24) and MimicMotion (Arxiv'24), also enhances their inferences, resulting in better quality. We will incorporate this into our manuscript. Thank you! [**Q2**] It would greatly enhance the value of this paper if it can be applied to some more general tasks. Can this method be potentially applied to broader tasks that require spatial alignment between some input variables? [**A2**] Yes, as shown in Figure C, we confirmed that our proposed calibration concept can also be applied to other image-to-image tasks (e.g., virtual try-on) and video-to-video tasks (e.g., video editing), enhancing their quality. We will incorporate this as an additional section for plug-and-play applications in our manuscript. [**Q3**] What if directly calculating the transformation between the reference image and the target pose via some heuristic evidence? For example, considering the torso part, i.e., keypoints set {6,7,13,12}, since the torso keeps rigid to some extent, compared to arms and legs. Then using the vector (mid(6,7) , mid(13, 12)) to compute the rotation angles, the perimeter for scale, the center coordinate for shifting. I know this design is not optimal. But how is the result with such a simple idea? I noticed Table 2, the "Linear" and "Affine" setting, but it seems not exactly the same one. [**A3**] Figure D illustrates the resulting frames based on the set of keypoints on the torso. This method is as effective as our original method when the torso is clearly visible in the target motion (i.e., first frame). 
However, it shows vulnerabilities when the torso is not visible (i.e., third frame). In fact, our initial approach also used keypoints of rigid body parts (e.g., torso) for transformations, but visibility issues arose with various movements. To address this, we proposed finding the optimal keypoints for each frame, considering commonly visible keypoints between the target and reference. --- Rebuttal Comment 1.1: Comment: Dear Reviewer rFbR: Thanks for reviewing this work. Would you mind to check authors' feedback and see if it resolves your concerns or you may have further comments? Best, AC
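For concreteness, the torso heuristic debated in Q3/A3 above can be written out. A hedged sketch: the keypoint index convention (6, 7 for shoulders; 12, 13 for hips) follows the reviewer's notation and is an assumption, as are the function and variable names.

```python
import numpy as np

TORSO = (6, 7, 13, 12)  # the reviewer's torso keypoint indices (assumed convention)

def torso_heuristic_transform(kp_ref, kp_tgt):
    """Rotation from the torso axis mid(6,7)->mid(13,12), scale from the
    torso perimeter, shift from the torso centre, as the reviewer proposes.
    kp_* are (num_keypoints, 2) arrays of 2D keypoint coordinates."""
    def axis(kp):
        return (kp[TORSO[2]] + kp[TORSO[3]]) / 2 - (kp[TORSO[0]] + kp[TORSO[1]]) / 2
    def perimeter(kp):
        pts = kp[list(TORSO)]
        return sum(np.linalg.norm(pts[i] - pts[(i + 1) % 4]) for i in range(4))
    def centre(kp):
        return kp[list(TORSO)].mean(0)
    v_ref, v_tgt = axis(kp_ref), axis(kp_tgt)
    theta = np.arctan2(v_tgt[1], v_tgt[0]) - np.arctan2(v_ref[1], v_ref[0])
    scale = perimeter(kp_tgt) / perimeter(kp_ref)
    shift = centre(kp_tgt) - centre(kp_ref)
    return theta, scale, shift
```

As A3 notes, this works only while all four torso keypoints are visible; occlusion of any of them is what motivates selecting the optimal subset of commonly visible keypoints per frame instead.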
Summary: This work proposes a method that combines an existing human image animation method with Procrustes Warping, which improves the robustness of the image animation approach. In addition to the explicit image warping, this work also proposes an iterative propagation method to improve the temporal consistency. Experiments show that the proposed method is effective in enhancing the animation performance. Strengths: 1. The proposed method is well motivated; the observation of the misalignment between reference image and target pose is straightforward and insightful. 2. The experiments in this work are solid; both the comparisons and ablations are extensive and detailed. Compared with the baseline methods, the improvements are noticeable. 3. The writing is clear and easy to follow. Weaknesses: 1. As far as I know, Procrustes alignment is popular in the evaluation of 2D/3D human pose estimation. So, this work introduces this simple technique into human image animation, but technically this contribution is not novel enough. 2. There is no detailed description about the training process of this method; it could be better to introduce more about the training process in the main text. 3. From Table 2 we can see that the best M is 20, but the evaluation metrics shown in the table are SSIM and FVD; it could be better to show more results to help readers understand the ablation comprehensively. 4. Are there any trade-offs observed between the temporal smoothness and motion precision? For example, if M=1, the temporal consistency is the best while the pose precision is the worst? Technical Quality: 3 Clarity: 3 Questions for Authors: How do you apply TPC in DisCo and MagicAnimate? I am a little bit confused since Figure 5 only depicts the structure of Anymate Anyone. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We have addressed your concerns to the best of our ability. [**Q1**] Procrustes alignment is popular in evaluating 2D/3D human pose estimation. This work introduces a simple technique into human image animation, but technically the contribution is not novel enough. [**A1**] We are not claiming novelty for the Procrustes alignment but instead for the test-time calibration concept for diffusion models that aligns the target subject in the reference image to the pose in the pose video. To the best of our knowledge, our study is the first work to address the issue of human image animation systems performing poorly in real-world scenarios, identifying compositional misalignment as the cause. When incorporated with any diffusion-based image animation system, our calibration significantly enhances its robustness in real-world applications. The Procrustes approach is one of our experimental attempts to implement this calibration concept. Table 2 in our manuscript also provides results from other attempts, including affine and linear transformation approaches. The Procrustes approach proved to be the most effective, and our technical contributions (i.e., optimal keypoints selection and iterative propagation) focus on optimizing the calibration. [**Q2**] There is no detailed description about the training process of this method; it could be better to introduce more about the training process in the main text. [**A2**] The proposed test-time method does not have a separate training procedure, generating video during inference. No training is required, as mentioned in Lines 18 and 99. [**Q3**] From Table 2 we can see that the best $M$ is 20, but the evaluation metrics shown in the table are SSIM and FVD; it could be better to show more results to help readers understand the ablation comprehensively. [**A3**] As mentioned in Line 288 and also shown in Table 2, $M$=30 provides the best setting. 
Table A below provides the ablation studies with additional metrics for both single frames (i.e., PSNR, SSIM, LPIPS, FID, L1 error) and videos (i.e., FID-VID, FVD). We will incorporate this into Table 2 of our manuscript. Thank you. **Table A.** Ablation study of Iterative Propagation (IP) according to the number of groups $M$, reported in a format of foreground/background (validation splits, average score of compositional alignment/misalignment).

|Method|L1↓$_{\times \textrm{E-04}}$|PSNR↑|SSIM↑|LPIPS↓|FID↓|FID-VID↓|FVD↓|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|w/ IP ($M$=1)|3.92/3.25|28.61/29.32|0.709/0.767|0.281/0.249|29.1/26.1|36.7/27.6|177/153|
|w/ IP ($M$=10)|3.46/3.06|28.88/29.37|0.715/0.769|0.279/0.247|28.6/25.9|35.2/27.4|171/151|
|w/ IP ($M$=20)|3.12/2.83|29.06/29.48|0.731/0.776|0.274/0.244|27.3/25.6|31.5/26.9|164/145|
|w/ IP ($M$=30)|**3.08**/**2.79**|**29.12**/**29.50**|**0.734**/**0.782**|**0.273**/**0.241**|**26.9**/**25.1**|**31.2**/**26.4**|**162**/**142**|
|w/ IP ($M$=40)|3.10/2.81|29.08/29.46|0.728/0.777|0.275/0.243|27.0/25.3|31.5/26.8|165/145|
|w/ IP ($M$=50)|3.09/2.79|29.09/29.43|0.731/0.777|0.274/0.242|26.9/25.2|31.8/27.2|170/149|
|w/o IP (w/ IP $M$=video length)|3.11/2.80|29.09/29.47|0.731/0.778|0.275/0.243|27.0/25.2|33.2/27.1|169/150|

[**Q4**] Are there any trade-offs observed between the temporal smoothness and motion precision? For example, if $M$=1, the temporal consistency is the best while the pose precision is the worst? [**A4**] Yes, we observed a slight trade-off between consistency and precision within an effective operating region for iterative propagation (IP). Figure A in our attached PDF presents a sensitivity analysis of the IP according to $M$ in terms of synthesizing precision (LPIPS) and temporal consistency (FVD). We identified the effective operating region as 20 < $M$ < 50 (for an average 120-frame video), where the IP shows a slight trade-off. 
However, when we selected $M$ = 1, both precision and consistency were at their lowest. This outcome is attributed to the insufficiency of calibrated images, as only a single calibrated frame is selected to propagate all the target frames at each denoising step. The selected calibrated image failed to provide accurate calibration to the target frame due to significant temporal misalignment. Consequently, this reduced precision, and the series of inaccuracies also negatively impacted consistency. Therefore, the trend in consistency changes led us to determine that the optimal choice is $M$ = 30. [**Q5**] How do you apply TPC in DisCo and MagicAnimate? [**A5**] In Figure B of our attached PDF, we illustrate the applications of TPC on top of DisCo and MagicAnimate. For DisCo, the image encoder in Figure 5 is the CLIP Encoder, and the pose encoder is the pose ControlNet of DisCo. For MagicAnimate, the image encoder corresponds to the Appearance Encoder of MagicAnimate, and the pose encoder corresponds to ControlNet. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I have carefully read this rebuttal. It has addressed all of my concerns. Thus, I will increase my rating. The reason for not selecting a higher score is that this method is actually an enhanced version of the retargeting technique proposed in AnimateAnyone and its novelty is not strong enough. In general, the analysis and experiments in this work are solid and that makes me vote for a weak accept.
Rebuttal 1: Rebuttal: We have uploaded a PDF file containing figures. Please refer to this PDF along with our rebuttal for a clear understanding. Pdf: /pdf/193146ec3b25f737d240c01082ee1b465da9e6e4.pdf
NeurIPS_2024_submissions_huggingface
2024
PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher
Accept (poster)
Summary: The paper proposes PaGoDA, an adversarial distillation method to support single-step generation of image resolutions higher than the teacher diffusion model. PaGoDA first solves the forward PF-ODE (data to noise) to collect noise from the dataset. Then, it gradually adds upsampling layers to the model and trains only those layers. The training objectives include reconstruction loss and GAN loss, and for text-to-image experiments, additional distillation loss and CLIP regularization are introduced to support classifier-free guidance. PaGoDA demonstrates strong one-step generation performance on ImageNet and COCO benchmarks and shows its superiority through various ablation studies. Strengths: **Originality** - This is the first study to support higher resolution than the teacher diffusion model when distilling a diffusion model into a single-step generator. - It is also the first study to use the forward PF-ODE to collect noise-data pairs for distillation. **Significance** - Despite using GAN loss, the training is stable due to the progressive growing strategy, as shown by the good performance at various resolutions in Table 2. - By using the forward PF-ODE for distillation, it learns the true data distribution and achieves higher sample diversity compared to GANs, as evidenced by the recall performance in Table 2. - PaGoDA is shown to be applicable to the pixel-space text-to-image model, DeepFloyd. **Quality** - The paper is well-written and includes detailed implementation details in the appendix for reproduction. - Various ablation studies demonstrate the superiority of each loss and the upsampling method of this approach compared to the VAE decoder of latent diffusion models. Weaknesses: - PaGoDA is claimed to have a faster sampling speed compared to LCM, but wouldn't the sampling speed be faster with a **smaller VAE** [A]? Wouldn't LCM be more efficient than PaGoDA if the VAE decoder has fewer layers than PaGoDA's additional upsampling layers? 
- Additionally, a smaller VAE only requires training a VAE, while PaGoDA involves sequential training stages depending on the resolution, making the training pipeline somewhat inconvenient. - In Table 4, PaGoDA performs better than the diffusion teacher, and in Table 1, PaGoDA outperforms EDM2-XXL. This may be because **the discriminator uses EfficientNet and DeiT pre-trained on ImageNet**, leading to better FID scores, since FID uses an Inception network also trained on ImageNet. This issue has been discussed in [B]. There are two potential solutions to this problem. The first is to compare **Fréchet distances using DINOv2**, following the EDM2 paper. The second is to **train the discriminator for GAN loss from scratch**. If PaGoDA's GAN loss is truly stable, it should be possible to train without a pre-trained network. [A] Tiny AutoEncoder for Stable Diffusion: https://github.com/madebyollin/taesd [B] Kynkäänniemi et al., The Role of ImageNet Classes in Fréchet Inception Distance, ICLR 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: - What classifier-free guidance scale did PaGoDA use in Table 6? Is the point that achieved the best performance in Figure 14 reported in Table 6? - Why were different datasets used when collecting latents (CC12M) and during GAN training (COYO-700M)? - Does the DDIM inversion in Section 5.4 need a diffusion teacher for inversion? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Appendix C explains well the gap between PaGoDA's theoretical assumptions and its practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
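The Fréchet-distance comparison requested in the review is the same computation whether the features come from an Inception network (FID) or from DINOv2 (FD); only the feature extractor changes. A minimal sketch, not the EDM2 evaluation code: computing $\mathrm{tr}((\Sigma_a \Sigma_b)^{1/2})$ via the eigenvalues of $\Sigma_a \Sigma_b$ is an implementation choice (valid for positive semi-definite covariances) that avoids an explicit matrix square root.

```python
import numpy as np

def frechet_distance(feat_a, feat_b):
    """Frechet distance between Gaussian fits N(mu_a, cov_a), N(mu_b, cov_b)
    of two feature sets of shape (num_samples, dim)."""
    mu_a, mu_b = feat_a.mean(0), feat_b.mean(0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # tr((cov_a @ cov_b)^(1/2)) via eigenvalues, which are real and >= 0
    # when both covariances are positive semi-definite.
    eig = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)
```

Feeding this function Inception-pool features yields FID; feeding it DINOv2 features yields the FD metric the reviewer asks for — the formula itself does not change.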
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer’s constructive feedback. **Weakness 1. LCM with lighter VAE may be more efficient than PaGoDA's upsampling blocks?** **Ans.** **[Lightweight PaGoDA]** The smaller VAE [A] decoder can be integrated into PaGoDA's super-resolution framework. If LCM's VAE is made more lightweight, PaGoDA's network could be made similarly lightweight. The degree of simplification for the heavy decoder (or upsampling network) is determined by the function complexity. As PaGoDA’s upsampling function is simpler than the VAE's nonlinear transformation, PaGoDA is more suitable for model distillation. Also, a lightweight decoder sometimes fails at high-quality generation, as illustrated in Figure C in the attached PDF. **Weakness 2. A smaller VAE requires single-stage training, while PaGoDA involves multiple sequential stages.** **Ans.** **[Smaller VAE is Sequential Training]** The smaller VAE [A] separately distills a heavy encoder and decoder with lightweight counterparts to match latent distributions. In knowledge distillation, lightweight models distilled from heavy teachers tend to perform better than those trained without a teacher. Likewise, the smaller VAE [A] requires the original VAE to minimize the performance sacrifice, which amounts to sequential training. **[PaGoDA Skips Intermediate Stages]** PaGoDA's progressive upsampling can be compressed. Instead of doubling resolution jumps, they can be increased by factors of 4x or 8x, allowing the super-resolution network to be trained all at once. It was observed that even with an 8x jump, the model remained stable and performance was retained. **Weakness 3. FID evaluation could be biased due to the use of EfficientNet and DeiT pre-trained on ImageNet.** **Ans.** **[Fréchet Distance]** Following the reviewer's suggestion, we additionally report FD using DINOv2 in Tables A and B of the attached PDF. On ImageNet 64x64, we have three implications: - vs. 
EDM & CTM: All three models (PaGoDA, EDM, CTM) share the same architecture. CTM performs better than teacher EDM only in FID, a signal of FID bias. However, PaGoDA consistently outperforms teacher EDM in both FID and FD. - vs. StyleGAN-XL: Direct comparison between PaGoDA and StyleGAN-XL is challenging due to differences in architectures and training details. Nonetheless, PaGoDA outperforms StyleGAN-XL in all metrics. - vs. Validation Data: Given that PaGoDA's FID is better than the validation data while its FD is worse, FID may no longer be a reliable metric for model evaluation. On ImageNet 512x512, we compare PaGoDA as follows: - vs. EDM: PaGoDA significantly surpasses EDM in both FID and FD, even with the same architecture (EDM is trained on latent space). - vs. EDM2: EDM2 outperforms PaGoDA in FD, regardless of the number of parameters used. However, please note that EDM2 requires $63$ steps to achieve good FDs, while PaGoDA requires only a single step. Based on these findings, we will focus on comparing PaGoDA using FD in our paper revision. We plan to explore whether applying PaGoDA on EDM2 architecture would bring us a better quality in future work. **Question 4. CFG scale used in table 6 for PaGoDA: is the best performance point in Fig. 14 reported in Table 6?** **Ans.** It is correct that we used the best performance in Figure 14, with a CFG scale of 1.15, to report the score in Table 6. Adopting reconstruction loss reduced the optimal CFG scale from 2-3 to 1-2. **Question 5. Why were different datasets used when collecting latents (CC12M) and during GAN training (COYO-700M)?** **Ans.** **[Prevent Overfitting]** We used different datasets to mitigate overfitting and memorization issues. Let's suppose that $z$ is a latent of $x$ from CC12M. Then, the reconstruction loss pushes $G(z)$ towards $x$. If we utilize the same CC12M for GAN training, then the discriminator will push $G(z')$ towards CC12M, where $z'$ is a neighborhood of $z$. 
Therefore, to keep $G(z')$ from being too close to $x$ in CC12M, we encourage $G(z')$ to diverge from CC12M by training the discriminator with COYO-700M. **Question 6. Does the DDIM inversion in Section 5.4 need a diffusion teacher for inversion?** **Ans.** Yes, we utilized the teacher diffusion in Section 5.4 for controllable generation. **<Final comments to Reviewer 3>** We respectfully request the reviewer to re-evaluate our paper with an emphasis on its upsampling capabilities. The components, though seemingly separate, are integrated to achieve efficient super-resolution generation. **[Upsampling: DDIM Inversion for Recon Loss]** DDIM inversion allows obtaining a latent $z$ from high-dimensional input data $x_{\text{high}}$ by downsampling it to the DM's resolution. By training a map from $z$ to $x_{\text{high}}$, PaGoDA provides upsampling capabilities as an alternative to SD's VAE and Cascaded Diffusion Models. When using DDIM, we cannot train such an upsampling map. **[Upsampling: Classifier-Free GAN for Adversarial Loss]** Previous diffusion distillation methods used standard GANs with $\omega$-CFG teacher samples, limiting student performance and resolution extension. An easy fix could be feeding appropriately resized real-world data into the GAN's real branch, but this is incompatible with T2I because the optimal student follows $p_{\text{data}}(x|c)$, not $p_{\text{data}}(x|c,\omega)$. Classifier-Free GAN solves this by predicting $\omega(x,c)$ and feeding it into the student network, ensuring alignment with the desired distribution $p_{\text{data}}(x|c,\omega)$ (see also L157-L159). **[Upsampling = Progressive Growing + DDIM Inversion + Classifier-Free GAN]** By integrating the three proposed components altogether, we achieved efficient super-resolution generation in both training and sampling, providing a new and effective alternative to Stable Diffusion and Cascaded Diffusion Models. We kindly ask the reviewer to reconsider our paper with this perspective in mind. 
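The DDIM-inversion step discussed above (data to noise, then noise back to data for the reconstruction loss) can be illustrated with a toy deterministic loop. A hedged sketch, not the authors' implementation: `eps_model` stands in for the teacher's noise predictor, `alpha_bar` for the cumulative noise schedule (starting near 1 and decreasing), and the round trip is exact only when the predicted noise does not depend on the state; with a real network it holds to first order.

```python
import numpy as np

def ddim_invert(x0, eps_model, alpha_bar):
    """Run the deterministic DDIM update in reverse (data -> noise) so each
    data sample is paired with a reproducible latent for distillation."""
    x = x0
    for t in range(len(alpha_bar) - 1):
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        eps = eps_model(x, t)                              # predicted noise
        x0_hat = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_next) * x0_hat + np.sqrt(1.0 - a_next) * eps
    return x

def ddim_sample(z, eps_model, alpha_bar):
    """Reverse the inversion (noise -> data); with a state-independent
    eps_model the round trip reconstructs x0 exactly, which is the pairing
    the reconstruction loss relies on."""
    x = z
    for t in reversed(range(len(alpha_bar) - 1)):
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        eps = eps_model(x, t)
        x0_hat = (x - np.sqrt(1.0 - a_next) * eps) / np.sqrt(a_next)
        x = np.sqrt(a_t) * x0_hat + np.sqrt(1.0 - a_t) * eps
    return x
```

The injectivity argument in the side remarks corresponds to this loop being invertible: distinct inputs map to distinct latents, so a flexible generator can in principle map them back.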
--- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough rebuttal. Tables A and B of the attached PDF sufficiently address my concerns regarding evaluation. Results confirm that PaGoDA outperforms all models except for EDM2. Meanwhile, I have further questions regarding the smaller VAE: - While training a new VAE requires sequential training, PaGoDA also necessitates an additional data generation process, and for conditional generation, the model p(w∣x,c) must also be trained. In this regard, I still doubt whether PaGoDA truly offers a more convenient training pipeline compared to training a smaller VAE. - From Figure C in the attached PDF, I understand that a lightweight VAE might not be the best choice. However, doesn't PaGoDA also experience reconstruction error for the input image in Figure C? One could downsample the base resolution of that input image, obtain noise through DDIM inversion, and then check the reconstruction performance using PaGoDA. --- Rebuttal 2: Title: Official Reply by Paper Authors Comment: We sincerely thank the reviewer for allowing us to compare PaGoDA with SD more deeply. We believe the review is highly helpful, and we would like to revise the paper based on our discussion. **Q1-1. While training a new VAE requires sequential training, PaGoDA also necessitates an additional data generation process.** **Ans.** **[Minimal Overhead]** It takes about one day with 8xH100 GPUs to converge in T2I at base resolution if training data pairs are synthesized online. The data collection overhead is minimal compared to the overall DM pretraining cost. For higher-resolution training, the data pairs collected at base resolution are reused, significantly reducing the need for additional DDIM inversion and compute resources. **Q1-2. 
For conditional generation, the model p(w∣x,c) must also be trained.** **Ans.** **[Use Released Checkpoint]** We recognize that our approach involves higher initial costs due to the need to train the classifier. However, once the classifier is publicly available, the training pipeline will be equivalent to training a smaller VAE. **Q1-3. I still doubt whether PaGoDA truly offers a more convenient training pipeline compared to training a smaller VAE.** **Ans.** In terms of the number of training stages, PaGoDA is no different from SD. However, we would like to further discuss the discrepancy between SD and PaGoDA when we desire to increase the sample resolution, e.g., from 512x512 to 1024x1024. **[SD Needs Entire Retraining]** In Stable Diffusion, the entire pipeline needs to be retrained from scratch, following these steps: - Train VAE at 1024x1024 *from scratch* - Train latent DM *from scratch* - (Optional) Distill latent DM into one-step generator - (Optional) Distill VAE into smaller VAE **[PaGoDA Recycles Pretrained Network]** In contrast, PaGoDA could *reuse* the 512x512 generator and only necessitates training an upsampling network from 512x512 to 1024x1024, making it significantly more cost-effective than SD, as follows: - *Reuse* pixel DM at base-resolution - *Reuse* PaGoDA at base-resolution - *Reuse* PaGoDA at super-resolution up to 512x512 - Train PaGoDA for upsampling from 512x512 to 1024x1024 **Q2. Doesn't PaGoDA also experience reconstruction error for the input image in Figure C?** **Ans.** PaGoDA exhibits less accurate reconstruction than generation quality. This is because PaGoDA's reconstruction quality solely depends on the reconstruction loss. Below, we provide details in this respect. **[Different GANs for Different Purposes]** We kindly note that the model objectives are different: VAE in SD is for data compression, while PaGoDA is to assist generation. 
Accordingly, the VAE and PaGoDA apply GANs for two different purposes: - (I) GAN for reconstruction - Real: a real image - Fake: its corresponding reconstructed image - (II) GAN for generation - Real: a real image - Fake: a randomly generated image from random noise PaGoDA prioritizes generation quality over reconstruction quality, thus using the type-(II) GAN. Therefore, PaGoDA's reconstruction relies solely on the LPIPS reconstruction loss. In contrast, SD's VAE uses option (I), specifically tailored for better reconstruction quality. **[Applying Option (I) in PaGoDA for Better Reconstruction]** We observed that the further use of (I) in PaGoDA on top of (II) improves reconstruction quality, and we will include a figure in our final revision to illustrate this improved reconstruction quality. It is important to note that a direct comparison with Figure C is not feasible for PaGoDA, as DDIM inversion relies on the text prompt, which is not provided in Figure C. **(Side Remarks on Q2)** We presume that the reviewer may have raised concerns about PaGoDA's upsampling capability due to the loss of high-frequency signal during the encoder's downsampling. This information loss seems to fundamentally block higher-dimensional generation at first glance. Below, we argue that this is not necessarily true for PaGoDA. **[PaGoDA: Lossless Compression]** Although our encoder may seem to perform lossy compression, it can actually achieve lossless compression in principle, i.e., perfect reconstruction. If the downsampled versions of two different images $x$ and $y$ remain distinguishable, DDIM inversion, as an injective function, will map $x$ and $y$ to distinct vectors in the latent space. Therefore, the generator can map these distinct latent vectors back to the original signals $x$ and $y$, respectively, given a sufficiently flexible generator and a well-designed loss function.
However, this lossless compression is only achievable when two different data points remain distinguishable after downsampling. Hence, the downsampling factor should be carefully selected to ensure that the downsampled images are distinguishable. At resolutions like 64x64 (or even 32x32), natural images remain sufficiently distinguishable. --- Rebuttal Comment 2.1: Comment: Thank you for the detailed discussion regarding my questions. Your responses have addressed my concerns about the advantages of the PaGoDa pipeline over the existing latent diffusion pipeline. I also understand that comparing the reconstruction error between PaGoDa and VAE is not feasible. Given that this work demonstrates strong generation performance and has the advantage of reusing pre-trained generative models when increasing resolution, I believe it will be as beneficial to the generative AI community as latent diffusion models. Therefore, I am raising my score to 7. --- Reply to Comment 2.1.1: Title: Official Reply by Paper Authors Comment: We sincerely appreciate the reviewer helping us to enhance the quality of our paper with valuable discussions. In the final revision, we will thoroughly and faithfully reflect the discussions addressing the reviewer's concerns.
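The injectivity argument in the lossless-compression remarks above can be illustrated numerically. Below is a toy sketch of our own (not from the paper): for 1-D Gaussian data the ideal denoiser is known in closed form, so the deterministic probability-flow (DDIM) ODE can be integrated in one direction for inversion and in the other for sampling, and the round trip approximately recovers the input.

```python
import numpy as np

def denoise(x, sigma):
    # Exact posterior mean E[x0 | x_sigma] for 1-D data x0 ~ N(0, 1)
    # under x_sigma = x0 + sigma * eps.
    return x / (1.0 + sigma ** 2)

def ddim_path(x, sigmas):
    # Euler steps of dx/dsigma = (x - D(x, sigma)) / sigma along `sigmas`;
    # an increasing schedule performs inversion, a decreasing one sampling.
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        x = x + (s_next - s_cur) * (x - denoise(x, s_cur)) / s_cur
    return x

sigmas = np.linspace(0.01, 10.0, 2000)  # fine grid keeps Euler error small
x0 = 0.7                                # a "real data point"
z = ddim_path(x0, sigmas)               # DDIM inversion: data -> latent
x_rec = ddim_path(z, sigmas[::-1])      # DDIM sampling: latent -> data
```

With a fine enough step schedule, `x_rec` matches `x0` up to discretization error, which is the sense in which the deterministic encoder loses no information for distinguishable inputs.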
Summary: The paper introduces a novel GAN-based diffusion distillation approach that leverages ideas from PG GAN for effective high-resolution synthesis. Strengths: * PaGoDA demonstrates state-of-the-art single-step generation performance on ImageNet, competing with or outperforming more expensive alternatives, including the teacher DMs. * The proposed classifier-free GAN is an interesting modification for better conditional generation. * The authors justify most components of the proposed method by providing valuable discussions and ablation studies. * The paper includes thorough theoretical analysis, as well as many experimental and implementation details. Weaknesses: * PaGoDa is designed for pixel-based DMs only and hence is not applicable to most state-of-the-art T2I models, e.g., SDXL, SD3. Also, the motivation behind this (L42) seems a bit arguable. For example, one could try to adapt PaGoDa to LDM by replacing the AE decoder with a shallow one mapping a 64x64 latent variable to a 64x64 image. Then, PaGoDA can be applied as is. Do the authors have any thoughts on this? * The method requires collecting the dataset of a large amount of (x, z) and ($\hat{x}_w$, z) pairs. Likely, this significantly increases the training costs compared to sampling-free alternatives. Could the authors provide the data collection vs training costs? How does the overall training time compare to other distillation approaches? * The KD objective is used for the guided generators (i.e., using the reverse process for sampling ($\hat{x}_w$, z) pairs). This contradicts the motivation behind the reconstruction loss (L89). While the ablation study in Tab.7 aims to address this, could the authors clarify the inference CFG scale used in this experiment? How does the role of the reconstruction loss change with different CFG scales? * PaGoDa uses CLIP regularization in the T2I setup. I believe this makes the CLIP score evaluation incorrect. 
Moreover, other distillation methods, e.g., LCM, DMD, or ADD, do not use it. This raises concerns about the performance of the core approach in the T2I setting. * For T2I evaluation, automated metrics have been shown to correlate poorly with human preference (e.g., [1,2]). It would be highly beneficial to conduct a human study comparing PaGoDA to other distillation approaches and T2I diffusion models. If a reliable human evaluation setup is unavailable, I would suggest evaluating a metric learned to mimic human preference, e.g., [3,4,5]. * The CFG details for T2I inference are missing. What scale is in the final t2i results? How does the model perform with different CFG scales? * The paper does not provide qualitative T2I results demonstrating the diversity of generated images for a given prompt. * The ablation study lacks evaluation of the proposed approach without adversarial objective. [1] Podell et al. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis. [2] Li et al. Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation. [3] Xu et al. ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation [4] Kirstain et al. Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation. [5] Wu et al. Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis. [6] Sauer et al. Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation Technical Quality: 3 Clarity: 3 Questions for Authors: * Please address the questions and concerns raised in "Weaknesses". * Intuitively, the reconstruction and distillation losses are most beneficial for the base-resolution training. Did the authors try to deactivate these losses for SR stages? * Did the authors consider using a pretrained DM as a discriminator, following the ideas from [6]? 
It would be interesting to explore if noise-level feedback might be valuable for learning better generators. * PaGoDa approaches very low FID values on ImageNet. Could the performance gains be a result of overfitting the training data? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations and potential negative social impact in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for the helpful review. Below, we faithfully answer the raised concerns, but we would be happy to provide additional experiments upon the reviewer's request in our final revision. **Weakness 1. Can we integrate PaGoDA with SD?** **Ans.** Yes, we can integrate PaGoDA in the latent space of SD. If $d$ (say 512) is the latent resolution of SD (data resolution could be up to 2K or 4K), we can train a DM at $d/2^n$ (say 64) resolution (which is much cheaper than training a DM at the full 512 latent resolution) and then distill/upsample with PaGoDA up to $d$. Although we focus on the core algorithm with pixel DM to pursue a fundamental understanding of core components, we are interested in integrating PaGoDA with SD in the future to examine the degree of compression this integration can offer. **Weakness 2. Discuss the training costs compared to sampling-free alternatives.** **Ans.** **[Data Collection is Optional]** While we utilized collected data-latent pairs mainly to save budget for model building, once training configurations are set, training data pairs can be obtained online during training. **[Minimal Overhead]** In T2I, convergence is achieved around 20k iterations, requiring about 1 day with 8xH100, including latent calculation. This overhead is minimal compared to the total DM pretraining cost. **[Safer Training]** We further note that sampling-based methods offer safety advantages. During training, a classifier can filter out problematic samples related to privacy, NSFW, and copyright. Moreover, maintaining clear records of training data aids in post-mortem analysis. **Weakness 3. Adoption of the KD objective in T2I is contradictory to the reconstruction loss's motivation.** **Ans.** **[KD for High CFG]** Figure 4 highlights that the latents from DDIM inversion are out-of-prior at high CFG scales. Therefore, a model trained with reconstruction loss performs poorly at these scales. To resolve this problem, we impose the KD loss.
However, using KD alone limits the student to be upper-bounded by the teacher. **[Effect of Recon+KD]** When we add the reconstruction loss, the FID-CLIP curve is shifted right (higher CLIP) and down (better FID), with the optimal CFG scale (w.r.t. FID) falling from 2.5 to 1.15. Figure A of the attached PDF also reports a human evaluation, showing better image quality and prompt alignment. **[Eliminate KD]** We are interested in eliminating the KD objective by substituting CFG with CFG++ [A], a recent method with improved DDIM and DDIM inversion at guidance scales less than 1. At these scales, the DDIM latents remain in-distribution, which facilitates model training without KD. We originally planned to explore this as future work but would be happy to investigate it if the reviewer requests in our final revision. [A] CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models **Weakness 4. Using CLIP regularization makes the CLIP score evaluation incorrect.** **Ans.** Figure A in the attached PDF shows that CLIP regularization is advantageous for prompt alignment under human judgement. To minimize potential bias in evaluation, we utilized different CLIP models for training (ViT-L/14 trained on YFCC100M) and evaluation (ViT-g/14 trained on LAION-2B). **Weakness 5. Clarify the CFG details for inference.** **Ans.** The CFG scale in Tab. 5 and 6 is 1.15. As shown in Figure 14, CFG in PaGoDA exhibits a similar trend to the teacher diffusion. **Weakness 6. Does PaGoDA work without the adversarial objective?** **Ans.** **[Existence of Prior Holes]** PaGoDA requires a GAN loss. Since PaGoDA's encoder is a deterministic mapping, it avoids posterior collapse but suffers from prior holes. These holes are unseen during training with a recon-only loss, causing the decoder to struggle to create high-quality samples in those areas. Therefore, we need the GAN loss to cover the entire prior manifold during training.
**[Uniform Encoding]** DDIM inversion encodes data uniformly into the prior manifold, as illustrated in Figure B of the attached PDF. This uniformity constrains the generator, preventing mode collapse by densely regulating its output within the prior space through the reconstruction loss. **[Well-Grounded GAN]** Moreover, Theorem 3.1 proves that the optimal generator of a GAN-augmented distillation model matches the data distribution, unlike previous works where the optimal generator is heavily influenced by the pretrained DM. Theorem 3.2 confirms the stability of combining GAN with reconstruction loss. **Question 7. Can we deactivate reconstruction/distillation losses for SR stages?** **Ans.** Deactivating the recon/distill losses causes the model to depend solely on the GAN loss. As seen in Figure 6, relying solely on GAN for SR results in object locations shifting across resolutions. This unwanted shift hinders the ability to control samples at the base resolution. By activating the reconstruction loss, the object's position remains consistent across resolution changes. **Question 8. Can we use a pretrained DM as a discriminator feature extractor?** **Ans.** Yes, but note that feature extractors behave differently. According to [6], discriminative features prioritize local details over global shape, while generative features do the opposite. Since the reconstruction loss strongly penalizes errors in shaping the global context, we used features from discriminative models to prioritize capturing local details with the GAN loss. **Question 9. Could the ImageNet performance be a result of overfitting?** **Ans.** Table A of the attached PDF shows the Frechet Distance on the DINOv2 feature space, comparing 50k samples with 50k val data. The degree of overfitting (FD val - FD train) in PaGoDA is comparable to baselines, implying PaGoDA is as robust as baselines to overfitting. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed and insightful discussions and additional evaluation results.
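For reference, the Fréchet Distance mentioned above is the standard distance between Gaussians fit to two feature sets (Inception features give FID; DINOv2 features give the FD discussed here). A minimal NumPy sketch of the formula, with the function name and toy inputs being our own illustration:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    # Frechet distance between Gaussians fit to two feature sets:
    # ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((C_a C_b)^{1/2}) via the eigenvalues of C_a C_b, which are real
    # and nonnegative (up to numerical noise) for PSD covariances.
    eig = np.linalg.eigvals(cov_a @ cov_b)
    tr_covmean = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b)
                 - 2.0 * tr_covmean)
```

Only the embedding network changes between FID and FD; the formula applied to the extracted features is identical.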
I appreciate that other reviewers noticed the use of the IN pretrained discriminator for training and agree that it's important to eliminate this source of bias in evaluation. I'm glad the authors addressed most evaluation concerns and provided a human study for the t2i setting. In addition to updating the main results, I also encourage revising Section 5.2, considering potential bias, e.g., Fig.5(b) needs to be validated. Other questions and concerns were also well addressed, except for the qualitative results concerning t2i diversity. I would appreciate seeing t2i samples given one prompt and different seeds compared with the teacher model in the revision (please see figures 3 and 8 in ADD as examples). Regarding CFG++, I won't insist on adding these results but would still be excited and interested to see them in the revision if available. In my opinion, it could make the approach more elegant and complete. Overall, I believe the paper provides valuable contributions, and I do not have objections to its acceptance if the authors carefully address evaluation concerns in the revision. I have increased my score to 6. --- Reply to Comment 1.1.1: Title: Official Reply by Paper Authors Comment: We express our sincere gratitude for the reviewer's constructive feedback. Following the reviewer's suggestion, we will revise Section 5.2 by adding a discussion of the potential bias of FID, evaluating all possible FD values and comparing PaGoDA with baselines based on FD, including Figure 5-(b). Also, we will add T2I samples given one prompt and different seeds compared with the teacher to check that our model generates diverse images in our final revision.
Summary: This paper introduces a method to distill a diffusion model into a one-step generator. The training process combines several loss functions. First, the authors use DDIM inversion to transform real images into latent noise, which is then fed into the generator. The generated images are supervised with a reconstruction loss. Second, an additional GAN loss is applied to the generated images, distinguishing them from real images. For text-to-image generation, the authors present a novel approach to enable classifier-free guidance in the GAN loss by using an auxiliary guidance predictor. Other auxiliary losses, such as knowledge distillation loss and CLIP loss, are also employed to enhance performance. The final method is evaluated across various benchmarks. Strengths: S1. The presented approach performs well on common benchmarks while achieving good inference efficiency. S2. The authors plan to open-source the method, which will be beneficial for further research. Weaknesses: W1. Some loss components are questionable and may mislead readers about the actual performance of the proposed approach. For instance, it is well-known that using a GAN loss with a discriminator pretrained on ImageNet significantly biases the FID metric [1]. This issue also applies to the CLIP loss, which biases the evaluation of FID/CLIP scores when used during training. A substantial portion of the strong results presented might be attributed to these misleading numbers, making it difficult to draw accurate conclusions about effectiveness. This is a common issue in some previous works but needs to be addressed now. For example, using GANs is good, but it might be better to avoid using a pretrained classifier as the discriminator, as suggested by recent explorations in diffusion GAN literature [2]. For text-to-image results, it is recommended to conduct human evaluations to assess real image quality, prompt alignment, and diversity. W2.
Many of the proposed components have been well explored in previous literature, but the connections to these works are not sufficiently discussed in the related works section. For instance, the combination of a regression loss and a GAN loss has been explored for text-to-image generation in several studies, including [3, 4]. Additionally, the use of distillation loss and CLIP loss are common practices [5, 6]. This reduces the perceived novelty of the paper to a moderate level. W3. The introduction of a classifier-free GAN objective might overcomplicate the problem. Previously, GAN loss with classifier-free guidance (CFG) was enabled by generating fake samples using an original diffusion model with guidance, as applied in [2], and then utilizing the standard GAN loss. This method is simpler than training an auxiliary classifier and has a similar computational cost, since the current classifier training also requires generating samples with CFG using the original diffusion model. [1] Kynkäänniemi, Tuomas, et al. "The role of imagenet classes in fr\'echet inception distance." arXiv preprint arXiv:2203.06026 (2022). [2] Sauer, Axel, et al. "Fast high-resolution image synthesis with latent adversarial diffusion distillation." arXiv preprint arXiv:2403.12015 (2024). [3] Lin, Shanchuan, Anran Wang, and Xiao Yang. "Sdxl-lightning: Progressive adversarial diffusion distillation." arXiv preprint arXiv:2402.13929 (2024). [4] Song, Yuda, Zehao Sun, and Xuanwu Yin. "SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions." arXiv preprint arXiv:2403.16627 (2024). [5] Sauer, Axel, et al. "Adversarial diffusion distillation." arXiv preprint arXiv:2311.17042 (2023). [6] Kang, Minguk, et al. "Scaling up gans for text-to-image synthesis." CVPR 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: Most of my concerns are outlined in the weaknesses section. 
To reiterate, I am particularly concerned about the utilization of a pretrained classifier as a discriminator and the CLIP loss, which bias the quantitative evaluation. Additionally, the overall novelty is moderate, and some components might be unnecessarily complex. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1.** To address the concerns about fair evaluation, we provide additional results below. **W1-1. GAN's discriminator pretrained on ImageNet biases the FID metric.** **Ans.** **[Frechet Distance (FD)]** We agree with the reviewer that the use of an ImageNet-pretrained discriminator could bias the model towards achieving a good FID. Following EDM2 practice, we evaluate FD using DINOv2 features. Tables in the attached PDF imply that FID and FD trends are consistent on ImageNet 64x64, but not at high resolution, which may indicate FID bias. Despite this, PaGoDA outperforms its teacher EDM at both resolutions on FD. We will update the manuscript to include a full set of results and discussions. Please also refer to R2-W9 and R3-W3. **W1-2. The CLIP regularizer may also bias the evaluation.** **Ans.** **[Human Evaluation]** We conduct a human evaluation to fairly measure the effect of the CLIP loss in Figure A of the attached PDF. According to the figure, the adoption of CLIP regularization yields a positive influence on prompt alignment. Also, to minimize undesirable bias in FID/CLIP scores, we utilized different CLIP models for training (ViT-L/14) and evaluation (ViT-g/14). These models were trained on different datasets: ViT-L/14 with YFCC100M and ViT-g/14 with LAION-2B. **Weakness 2. To address the novelty concerns, we clarify our contributions below.** **W2-1. The combination of a regression loss and a GAN loss has been explored for T2I generation in several studies.** **Ans.** **[DDIM vs. DDIM Inversion]** Although the reviewer classifies both the reconstruction loss (from DDIM inversion) and the distillation loss (from DDIM) as regression losses, they function differently. Previous distillation models [3, 4] adopt DDIM for distillation, starting from the prior latent and obtaining synthetic data, which may harm model performance if the synthetic data is of low quality.
In contrast, DDIM inversion in PaGoDA, starting from the real data and obtaining its latent, can benefit from the high quality of real data. **[Theoretically Grounded GAN Integration]** Additionally, we show that existing methods combining GAN loss and DDIM-distillation loss do not optimally learn the data distribution, as noted in Theorem B.4 (extended Theorem 3.1) in the appendix. In contrast, Theorem 3.1 demonstrates that PaGoDA's objective ensures the optimal data distribution, making it the first diffusion distillation approach to guarantee this theoretically. This advantage stems from using real input data $x$ to construct $(x,z)$ pairs for distillation. Empirically, Figure 5-(d) shows that PaGoDA's loss (recon+GAN) maintains performance, while the KD+GAN loss degrades as the teacher model worsens (imperfect teacher model). **[GAN Stability]** Moreover, previous works have not adequately addressed GAN stability concerns. Our Theorem 3.2 provides the first guarantee that GAN can be stable if combined with DDIM inversion. Addressing these questions on optimality and stability offers new insights into the distillation community. **W2-2. The use of distillation loss and CLIP loss are common practices [5, 6].** **Ans.** Please note that we are not claiming CLIP regularization as our contribution. Instead, we advocate using the CLIP regularizer as an option to achieve better text alignment. **W2-3. Proposed components explored in literature reduce the novelty of the paper.** **Ans.** **[DDIM Inversion + Progressive Growing = Upsampling]** We respectfully request the reviewer to re-evaluate our contribution. We are not complicating matters by adding well-established losses. Instead, we argue that DDIM inversion not only guarantees learning the data distribution (see Theorem 3.1) but, when combined with our progressively growing technique, offers an alternative to the upsampling methods used by Stable Diffusion and Cascaded Diffusion Models.
**Weakness 3.** We provide explanations to address concerns about methodological complexity. **W3-1. Using the standard GAN loss is simpler than the proposed classifier-free GAN, which complicates the objective.** **Ans.** **[Upsampling]** We would like to address the reviewer's concern by connecting GAN with upsampling. Suppose $d$ is the base resolution where the DM has been trained. To double the resolution, the GAN's real/fake data should also be doubled. Otherwise, the GAN cannot generate $2d\times 2d$ images with full details. **[Problem of Standard GAN]** A problem arises when using the standard approach of putting the $\omega$-CFG teacher sample as the GAN's real input. Since the teacher sample is of $d\times d$ dimension, without learning independent upsampling models like Cascaded Diffusion Models, there is no efficient way to upsample the teacher sample to $2d\times 2d$. **[Use Real-world Data]** Our proposal is to use high-resolution real-world data. We put real-world data (downsized to $2d$ resolution) into the GAN's real part instead of using teacher samples. We compare the standard approach and our proposal as follows: - Standard way: generate an $\omega$-CFG teacher sample and feed it to the GAN's real part. - (-) Student resolution not *extendable* - (-) Reach *teacher* quality at optimum - Our proposal: retrieve real data, downsize it, predict $\omega$ of it, and put it into the GAN's real part. - (+) Student resolution *extendable* - (+) Reach *data* quality at optimum **[Use Released Checkpoint]** We recognize that our approach may involve higher initial costs due to the need to train the classifier. However, once the classifier is publicly available, the training cost will be similar to the standard approach. We kindly ask the reviewer to focus on the qualitative benefits of our proposal. **[Concurrent Works]** The literature [2-6], released publicly within 2-3 months before the NeurIPS submission, should generally be considered concurrent work to ours.
We could add a section in our revision, but we think it is unfair to evaluate our method based on them. --- Rebuttal 2: Comment: Thank you to the authors for their response. I have increased my rating to 5 and am willing to accept the paper if other reviewers strongly support it. However, I still consider it borderline due to several issues. 1. The new experiments using dino metrics demonstrate that the results are not completely biased, which is appreciated. However, the method performs worse than EDM2 and possibly other untested methods. For the revision, it is crucial to correct numerous inaccurate claims regarding the state-of-the-art performance, initially attributed to a biased FID evaluation. I recommend conducting additional experiments using unbiased GAN discriminators, such as non-initialized or diffusion GANs. 2. I acknowledge the distinction between DDIM and DDIM inversion for reconstruction training. However, current method also needs to employ the DDIM distillation loss for text to image. Additionally, the improvement between these two may not be significant once the biased GAN issue is addressed. I suggest rerunning the experiment shown in Figure 5(c) for the revision. 3. I appreciate the clarifications on the benefits of certain components in the upsampling process. However, it remains unclear how much more advantageous the classifier-free GAN is in a broader setting compared to the simpler, well-tested GAN loss with generated image as real data. Upsampling may not be necessary in real world setting. For generation up to 1K resolution, LDM is sufficient; for higher resolutions, training a separate, smaller super-resolution network is often more straightforward. 4. Concerning related works, most references are old enough according to NeurIPS policy. Additionally, it is important to discuss any literature that has influenced our understanding. The papers I mentioned, along with others, are closely related and warrant a more thorough discussion. 
--- Rebuttal 3: Title: Official Reply by Paper Authors Comment: We deeply appreciate the reviewer's valuable and insightful feedback. We will revise the paper thoroughly, reflecting all discussions. Below we further address the raised concerns. **Questions 1.** **Ans.** We would like to express our deepest gratitude to the reviewer for pointing out the bias in the FID. We fully agree with the reviewer's opinion on this matter and plan to conduct additional experiments using the unbiased GAN discriminator suggested by the reviewer, reflecting this in the final revision. **Question 2.** **Ans.** To further assess the effectiveness of the reconstruction (DDIM inversion) loss, we present our findings in Figure A. PaGoDA with reconstruction loss shows stronger performance in prompt alignment compared to PaGoDA without it, highlighting the benefits of incorporating the reconstruction loss. Nevertheless, we plan to rerun the experiments presented in Figure 5-(c) for the revision, as suggested by the reviewer. **Question 3.** **Ans.** We agree with the reviewer. In a more practical scenario, as the reviewer said, it could be easier to generate higher-resolution samples by first generating at 1K resolution with LDM, and upscaling into higher resolution with a separate module. Nevertheless, we believe that PaGoDA can work effectively in the following cases. - **[Inference Speed]** Due to the upsampling capability, PaGoDA can replace LDM for 1K generation at half the cost of the 1-step LCM, as illustrated in Figure 1. In the LDM framework, regardless of the number of steps taken to synthesize the latent representation, the generated latent must be decoded back to the pixel space. This additional decoder evaluation incurs computational costs nearly equivalent to those of U-Net evaluation, accounting for about 50% of the total computation in 1-step LCM generation.
This inherent and fundamental limitation of the LDM framework is completely overcome in PaGoDA, which directly trains on the pixel space, enabling sampling at half the cost compared to the 1-step LCM. - **[Latent PaGoDA]** If we want to keep the LDM framework for 1K generation, we can apply PaGoDA in the latent space of LDM to lighten the training load. Given that diffusion training accounts for most of the computational cost, PaGoDA's resource demands are nearly the same as those for diffusion training at a resolution lower than the full latent dimension. In contrast, LDM requires training the diffusion model at the full latent resolution, which is considerably more costly. Therefore, PaGoDA offers a substantially reduced training budget compared to conventional LDM. **Question 4.** **Ans.** We agree with the reviewer and will discuss the related literature thoroughly in our paper revision. **[Final Comments from Authors]** We are planning experiments with the aim of thoroughly addressing all the reviewer's concerns by diligently fulfilling the requests. With these efforts in mind, we respectfully ask the reviewer whether our paper still remains borderline.
Rebuttal 1: Rebuttal: We sincerely appreciate all the reviewers for their constructive and helpful feedback. For a clearer evaluation, we would like to highlight a high-level overview of the contributions in this paper. **[Switch DDIM to DDIM Inversion]** PaGoDA proposes to utilize DDIM inversion for distillation. Unlike DDIM, which starts from the Gaussian prior to synthesize fake data for generator training, DDIM inversion begins from real data and maps it to its latent representation for generator training. This approach allows PaGoDA to benefit from the high quality of real data, whereas distillation from DDIM depends on the quality of synthetic data, which can be less reliable. **[DDIM Inversion for Upsampling]** DDIM inversion, starting from real data, enables the generator to learn up to the resolution of the real data. In contrast, DDIM alone, without additional upsamplers like Cascaded Diffusion Models, cannot easily extend the generator's resolution to higher dimensions. **[Switch Standard GAN to Classifier-Free GAN]** PaGoDA introduces a GAN loss tailored for super-resolution generation and compatible with CFG. While standard GAN distillation puts $\omega$-CFG teacher samples in the GAN's real part, this approach, similarly to DDIM, limits resolution due to reliance on synthetic data. Instead, we use real data in the GAN's real part, aligning with the motivation of using DDIM inversion. To ensure the student learns the $\omega$-conditioned data distribution $p_{\text{data}}(x|c,\omega)$ rather than $p_{\text{data}}(x|c)$, we predict $\omega(x,c)$ for the real data and use this in the student network evaluation, completing the design of the Classifier-Free GAN. **[Upsampling = Progressive Growing + DDIM Inversion + Classifier-Free GAN]** A high-dimensional generator can be achieved by incorporating the three proposed core components altogether: progressive growing (decoder architecture), DDIM inversion (reconstruction loss), and Classifier-Free GAN (adversarial loss).
Remove any single component and PaGoDA no longer works, so all three components are essential. This high-dimensional generator, trained from a low-dimensional teacher DM, provides a new and effective alternative to Stable Diffusion and Cascaded Diffusion Models. We therefore kindly ask the reviewer to reconsider our paper with this perspective in mind. We attach one page of additional PDF here for the reviewer's information. Pdf: /pdf/d2598a39f33a22a4aa8d5ef8984eca656a4805a2.pdf
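For concreteness, the DDIM-inversion idea described above — running the deterministic DDIM update from real data toward its latent, instead of sampling from the Gaussian prior — can be sketched as below. This is a toy NumPy illustration with a made-up schedule and a stand-in noise predictor, not PaGoDA's actual teacher model:

```python
import numpy as np

def ddim_inversion(x0, eps_pred, alphas_bar):
    """Run the deterministic DDIM update from data toward the latent.

    x0:         real data sample
    eps_pred:   noise-prediction function eps(x, t) (here a stand-in)
    alphas_bar: decreasing cumulative-alpha schedule, alphas_bar[0] ~ 1
    """
    x = x0
    for t in range(len(alphas_bar) - 1):
        a_t, a_next = alphas_bar[t], alphas_bar[t + 1]
        eps = eps_pred(x, t)
        # predicted clean sample at the current noise level
        x0_hat = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
        # deterministic (eta = 0) DDIM step toward a higher noise level
        x = np.sqrt(a_next) * x0_hat + np.sqrt(1 - a_next) * eps
    return x  # latent paired with the real x0

# Toy usage with a dummy predictor (a real model would be a trained network).
alphas_bar = np.linspace(0.999, 0.01, 50)
dummy_eps = lambda x, t: 0.1 * x
x0 = np.ones(4)
xT = ddim_inversion(x0, dummy_eps, alphas_bar)
```

The resulting latent `xT` is deterministically paired with the real sample `x0`, which is what makes it usable as a reconstruction target for the student generator.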
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Certified Machine Unlearning via Noisy Stochastic Gradient Descent
Accept (poster)
Summary: The paper studies the practically important problem of machine unlearning and provides rigorous statistical guarantees for unlearning using tools from differential privacy. More specifically, it is shown that, for strongly convex losses, running Projected SGD on the new (adjacent) dataset guarantees that the removed data point will be forgotten, that is, the output of the "unlearning" algorithm will be close to the output of Projected SGD run from scratch on the new dataset. Strengths: The algorithm is simple and easy to understand. The presentation is good with nice illustrations of the concepts. All necessary background is provided. The theoretical claims look sound to the best of my knowledge, apart from some technical remarks mentioned below. Weaknesses: (Major) My main concern is very conceptual. Isn't it trivial that running SGD (the unlearning part) on a strongly convex loss with the updated dataset will eventually converge to the same unique optimum of the new loss as if we had started training from scratch? I want to highlight that in the strongly convex case, the optimum is unique. Moreover, the initial distance to the optimum is forgotten exponentially fast for constant step-size SGD. Therefore, I cannot understand why the derived result in this paper is not trivial. In my view, as expected, the authors face a difficulty in the convex case (obtaining only the vacuous result in Corollary 3.9). (Minor) 1. Why are the same batches used in the unlearning process? Shouldn't the batches come from the updated dataset? This is perhaps just a notational issue. 2. It is unclear to me what the output of Algorithm 1 is. 3. It is unclear what it means to satisfy $(\alpha, \varepsilon)$-RU **with $c = 1-\eta m$** in Theorem 3.2. There is no parameter $c$ in Definition 2.3. 4. The quantity $h_{\#}\mu$ in Lemma F.2 is not defined. This lemma is used in Proposition F.3, which is one of the key tools in the analysis. 5. 
$\mathcal Z_{\mathcal B}$, appearing first in Theorem 3.2, is not defined in the main part. I had to go to the Appendix to understand what it is. 6. Figure 3 (c) is not informative. All lines and confidence intervals overlap. 7. Why is the method called "contractive noisy iteration"? A more conventional name like Projected SGD would be more appropriate. 8. It would be helpful to explain the role of the projection in the algorithm. Is it possible to employ it without the projection? 9. In the abstract, "convexity assumption" should be changed to "strong convexity assumption" because the guarantees in the convex case are vacuous in this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: n/a Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The results are limited to strongly convex objectives. To make the problem non-trivial, it is crucial to consider convex or PL losses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer MVBT for their insightful comments and suggestions. We address the weaknesses and questions below. **W1: ``Is the unlearning privacy bound for strongly convex settings trivial? Are our results for convex settings vacuous?``** *Compare to retraining:* We agree with reviewer MVBT that under strong convexity, the optimum is unique and SGD can approach that optimum exponentially fast. However, as we have discussed theoretically in lines 234-243 and demonstrated in experiments, our unlearning strategy does offer computational benefits compared to retraining when the initialization is not very close to the optimal solution. In Figure 3(a), we have shown that even unlearning with ***one*** epoch is sufficient to achieve good accuracy (utility) and a strong privacy guarantee ($\epsilon \approx 1$). In contrast, for a mini-batch size $b=128$, we need $20$ epochs to ensure the result converges reasonably well. *Convex only:* There seems to be a misunderstanding regarding our convex result in Corollary 3.9. The bound is ***not*** vacuous when the number of training epochs $T$ is not too large, so that $\frac{2\eta MT}{b} < 2R$. This condition can be met if the model learns reasonably well with moderate $T$ and the projection diameter $2R$ is not set to be extremely small. For example, with $2R = 10$, $b = 128$, $\eta = 1$, and $M = 1$, we can train for at most $639$ ***epochs*** while still satisfying $\frac{2\eta MT}{b} < 2R$. A smaller step size $\eta$ would allow for even more epochs. Under this scenario, our approach still provides computational benefits compared to retraining from scratch when the initialization is not close to ***every*** optimal solution (set), each corresponding to an adjacent dataset. We will clarify this further in our revision. 
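As a quick sanity check of the arithmetic in the example above (a trivial script using only the constants quoted there):

```python
# Check of the epoch-budget claim: the condition 2*eta*M*T/b < 2R
# with 2R = 10, b = 128, eta = 1, M = 1 allows at most T = 639 epochs.
eta, M, b, twoR = 1.0, 1.0, 128, 10.0
T_max = int(twoR * b / (2 * eta * M)) - 1  # largest integer T with strict inequality
print(T_max)  # -> 639
assert 2 * eta * M * T_max / b < twoR
assert not (2 * eta * M * (T_max + 1) / b < twoR)
```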
*Compare to noiseless fine-tuning:* Instead of retraining from scratch to ensure exact unlearning, another potential approach would be running ***noiseless*** SGD and then applying output perturbation (i.e., publishing weights with additive noise) to mitigate privacy leakage, as done in prior work [9]. Note that such methods also require strong convexity for their privacy guarantees. However, there are several downsides to this strategy compared to unlearning with PNSGD (i.e., adding noise at every iteration, our approach). These include an inferior privacy-utility-complexity trade-off, as demonstrated by our experiments, and the issue of non-private hidden states discussed in Appendix N. **Q1: ``Question about mini-batch``** Note that the mini-batches only contain indices, and the underlying dataset differs between the adjacent datasets $\mathcal{D},\mathcal{D}^\prime$. Since we adopt the replacement notion of dataset adjacency, the size of all considered datasets is always $n$, so the mini-batch sequences can be the same. The main reason to keep them the same is for analysis purposes; see also our response to DXXZ, Q1. **Q2: ``The output of Algorithm 1``** We apologize for the confusion. The outputs are the last iterates of the learning and unlearning processes, which are $x^0_T, y^0_K$ respectively. We will make this clear in our revision. **Q3: ``Meaning of Theorem 3.2``** We apologize for the confusion. Theorem 3.2 indicates that the value $c$ is determined by the step size $\eta$ and the strong convexity parameter $\mu$, and it is used solely to determine the bound of $\varepsilon$. This has no bearing on the definition of $(\alpha,\varepsilon)$-RU. We will clarify this point in our revised manuscript to avoid any ambiguity. **Q4: ``$h_{\sharp}\mu$ is undefined.``** We apologize for this oversight. For a function $h$ and a distribution $\mu$, $h_{\sharp} \mu$ denotes the pushforward operation. 
Essentially, it represents the distribution of $h(X)$, where the random variable/vector $X \sim \mu$. We will ensure this concept is properly defined and explained in the revision. **Q5: ``Question about $Z_{\mathcal{B}}$``** We apologize for the confusion. The meaning of $Z_{\mathcal{B}}$ is explained in line 59, Figure 1, and its caption; it represents the upper bound of the $W_\infty$ distance between two adjacent learning processes. In Theorem 3.2, we provide an explicit formulation of $Z_{\mathcal{B}}$. We will ensure the meaning of $Z_{\mathcal{B}}$ is clear throughout our revised manuscript. **Q6: ``Why the name 'contractive noisy iteration'?``** We followed the terminology used in prior work [15, 16, 20], where the authors introduced the key technical tool (Lemma 2.6) that we employ in our analysis. To give due credit to their contribution, we decided to retain the same name. **Q7: ``The role of projection and is it necessary?``** We leverage the projection operation in multiple aspects of our analysis. Firstly, it is used to prove Theorem 3.1, which shows that the limiting distribution of the PNSGD learning process exists, is unique, and is stationary. Secondly, we use it to bound the $W_\infty$ distance between the initial distribution and the target stationary distribution $\nu_{\mathcal{D}|\mathcal{B}}$. This bound is crucial when applying our results without assuming that the learning process has attained its stationary distribution. This is precisely the aim of Theorem 3.2. If such an assumption is made, we can simplify the bound in Theorem 3.2, leading to Corollary 3.7. Note that the $Z_{\mathcal{B}}$ bound in Corollary 3.7 can always use the former term as an upper bound, rendering everything independent of $2R$, the projection diameter. In summary, if we directly assume the existence of the stationary distribution of the PNSGD learning process and that it is attained after learning, the projection operation can be omitted. 
However, in the general case, the projection is essential for the rigor and completeness of our analysis.
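To make the learn-then-finetune pipeline discussed above concrete, here is a minimal NumPy sketch of PNSGD unlearning under the stated assumptions. A toy strongly convex quadratic loss stands in for the real objective, and the fixed mini-batch index sequence is reused across learning and unlearning, as in Algorithm 1; this is our illustrative code, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, R):
    """Project onto the ball of radius R (the bounded-domain assumption)."""
    norm = np.linalg.norm(x)
    return x if norm <= R else x * (R / norm)

def pnsgd(x, data, epochs, batches, eta=0.1, sigma=0.05, R=5.0):
    """Projected noisy SGD with a fixed mini-batch index sequence.

    Stand-in strongly convex loss per point z: f(x; z) = 0.5 * ||x - z||^2,
    so the stochastic gradient on a batch is (x - batch mean).
    """
    for _ in range(epochs):
        for idx in batches:
            grad = x - data[idx].mean(axis=0)
            x = x - eta * grad + sigma * rng.standard_normal(x.shape)
            x = project(x, R)
    return x

d, n, b = 3, 32, 8
D = rng.standard_normal((n, d))
Dp = D.copy(); Dp[0] = rng.standard_normal(d)             # replace one data point
batches = [np.arange(i, i + b) for i in range(0, n, b)]    # same sequence reused

x_learn = pnsgd(np.zeros(d), D, epochs=20, batches=batches)   # learning on D
x_unlearn = pnsgd(x_learn, Dp, epochs=5, batches=batches)     # unlearning: finetune on D'
```

The unlearning phase is just a few more epochs of the same noisy projected iteration, started from the learned weights but run on the updated dataset.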
Summary: The paper proposes an effective and efficient machine unlearning algorithm based on projected noisy gradient descent (PSGD). The proposed method can be extended to handle multiple unlearning requests. The theoretical unlearning guarantee is established when the loss is assumed to be convex and smooth. Experiments verify the effectiveness of the proposed method. Strengths: 1. A simple unlearning algorithm is established via PSGD, whose effectiveness and efficiency are verified by experiments. In particular, the experiments show that the proposed algorithm outperforms the baseline with less gradient complexity. 2. Theoretical analysis of the unlearning guarantee for the proposed algorithm is established for convex smooth losses, and the privacy-utility-complexity trade-off regarding the mini-batch size b for approximate unlearning is highlighted. 3. The paper is well-written and easy to follow. Weaknesses: 1. The smoothness of the loss is required, which might limit the algorithm's applicability. 2. In experiments, a logistic loss with $\ell_2$ regularization is considered. It would be better to provide some results on convex losses since the corresponding theory is given. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. In the algorithm, the mini-batch sequence B is fixed. Is this reasonable in practice or for analysis? 2. The paper states that "A smaller batch size b leads to a better privacy loss decaying rate". This seems counterintuitive. Could the authors give some explanation? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer DXXZ for their positive and thoughtful comments. We address the weaknesses and questions below. **W1: ``Smoothness of the loss is required``** We agree with reviewer DXXZ that smoothness assumptions can restrict the applicability of our approach, and we will also list this as a limitation in Appendix B in our revision. We are currently working on relaxing these assumptions in follow-up work. **Q1: ``Is fixing the mini-batch sequence $\mathcal{B}$ in Algorithm 1 necessary?``** It is currently fixed for analysis purposes; similar limitations persist in the DP research on PNSGD [22]. We conjecture that one can randomly sample the mini-batch sequence at the beginning of each epoch. Our intuition is that instead of taking the expectation with respect to $\mathcal{B}$ at the end of the (un)learning process, we can take the expectation for each epoch, where the stationarity of the distribution remains. Nevertheless, a rigorous analysis in this direction is left as interesting future work. **Q2: ``Question about a smaller batch size $b$ leading to a better privacy loss decaying rate.``** There appears to be a misunderstanding here. We are not saying that a smaller batch size $b$ will lead to a better ***privacy loss***; instead, it will lead to a better ***privacy decaying rate***. To clarify, let's examine the bounds presented in Corollary 3.7, where the privacy loss is determined by the initial distance $Z_{\mathcal{B}}$ and the privacy decaying factor $c^{2Kn/b} = (c^{2n/b})^K$ for $K$ unlearning epochs. Notably, a smaller $b$ will lead to a better privacy decaying rate $c^{2n/b}$ ($c < 1$), since we are running more contractive updates (see Figure 1, green part) per epoch. However, as discussed in lines 234-243, an excessively small $b$ can increase the initial distance $Z_{\mathcal{B}}$. This is because $Z_{\mathcal{B}} = O(((1 - c^{n/b})b)^{-1})$, which is not monotonically decreasing with respect to $b$. 
Consequently, setting $b$ to its minimum value, such as $b = 1$, does not necessarily yield the optimal privacy guarantee. While smaller batch sizes improve the decaying rate, they can also adversely affect the initial privacy loss, necessitating a balanced choice of $b$, not to mention its effect on utility, as discussed in line 230 and demonstrated in the experiment section (line 362).
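The trade-off just described can be illustrated numerically. The constants below are arbitrary placeholders (not the paper's settings), and the $Z_{\mathcal{B}}$ expression is taken only up to constants from the big-O bound above:

```python
# Privacy decaying factor c^(2Kn/b) vs. the Z_B ~ 1/((1 - c^(n/b)) * b) trade-off.
eta, mu, n, K = 0.05, 1.0, 1024, 5      # illustrative placeholders
c = 1 - eta * mu                         # per-step contraction factor, c < 1

rows = {}
for b in [1, 32, 128, 512]:
    decay = c ** (2 * K * n / b)               # decaying factor over K unlearning epochs
    z_init = 1.0 / ((1 - c ** (n / b)) * b)    # shape of the Z_B bound, up to constants
    rows[b] = (decay, z_init)
    print(f"b={b:4d}  decay={decay:.3e}  Z_B ~ {z_init:.3e}")
```

Running this shows the effect described above: a smaller `b` gives a much faster privacy decay, but inflates the initial-distance term, so neither extreme is optimal.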
Summary: The paper presents a simple scheme for machine unlearning. The algorithm has two parts: in the learning part, the models learns using the original dataset D, and in the second part it unlearns using a neighboring dataset D’. The algorithm uses 1) same batches at every epoch, and same batches for both learning and unlearning, 2) Gaussian noise for both learning and unlearning. Based on Renyi differential privacy and the relation between Renyi unlearning and (ε, δ)-unlearning, the paper shows that the privacy parameter ε drops exponentially with every epoch. Strengths: Novel set of results Weaknesses: My main concern is the lack of utility bounds. It’s not hard to design an algorithm which guarantees privacy: just don’t use the dataset at all. What is the utility your algorithm achieves after K iterations of unlearning? Technical Quality: 3 Clarity: 3 Questions for Authors: The assumptions in Theorem 3.2 are rather strong. You assume bounded domain + smoothness + Lipschitzness + strong convexity. Please justify these assumptions. The notation is sometimes confusing. For example, there is notation \nu_{T | B} and there is notation \nu_{D | B}. Line 177: \nu_{D | B}^{0, ‘} is undefined. In Lemma F.2, notation # is undefined. Minor issues: -- Is line 173 the definition of \nu_{D | B}? Please make it clear. The wording sounds like this is already defined, and I spent quite some time trying to find the definition. -- Algorithm 1: please list algorithm’s parameters, such as η and σ -- Line 146: PABI was not defined -- Theorem 3.2: Z_B: you don’t actually use B -- Appendix D: Lipschitzness (no s after z) -- Line 548: “Proof” -> ”Proof of Theorem 3.2” -- I think the presentation could be simplified by focusing on some specific value of α -- C_R: Line 97 doesn’t match Line 169 -- Theorem 3.2: relationship between D’ and D is not mentioned. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer SMx2 for their careful reading, positive assessment, and thoughtful comments. We address the weaknesses and questions below. **W1: ``No utility bound.``** We thank reviewer SMx2 for the thoughtful comments. We agree that utility is an important aspect of the unlearning problem, but we do not think a utility bound is always necessary for work on machine unlearning. While a privacy bound is crucial to prevent the worst-case scenario, it is common practice in the machine learning literature to empirically measure utility in an average or data-dependent manner by conducting extensive experiments (as seen in the seminal DP-SGD paper [13]). This is also true in the previous unlearning literature [7,11]. However, we acknowledge that there are situations where a formal utility guarantee for the worst-case scenario is important. In such cases, the utility bound for noisy gradient descent is established in the differential privacy literature under strong convexity assumptions; see, for instance, Section 5 of [ref 1]. We will mention this point in our revision to provide clarity on when and how utility bounds can be applied. ### Reference [ref 1] Differential privacy dynamics of langevin diffusion and noisy gradient descent, Chourasia et al., NeurIPS 2021. **Q1: ``Justifying the assumptions``** While it is true that our set of assumptions—covering strong convexity, Lipschitz continuity or bounded gradient norm, and smoothness—appears restrictive, these assumptions hold significant practical relevance. They cannot be directly applied to modern neural networks, but they still encompass critical learning problems, including logistic regression, as empirically demonstrated in our experiments. It is important to underscore that these assumptions are not unique to our work. 
They are foundational within the existing literature on machine unlearning [7-10] and are similarly necessary for studies related to the differential privacy (DP) guarantees and convergence of hidden-state PNSGD [15, 16, 22]. Specifically, the leading analytical approach for hidden-state PNSGD in privacy contexts—namely, the privacy amplification by iteration (or shifted divergence) analysis—relies on these assumptions. We acknowledge that relaxing these constraints represents a meaningful direction for future research. Indeed, prior works [15, 16, 22] identify this as an open problem, and we are actively conducting research to address these limitations. **Q2: ``Clarity of notations and typos``** We thank reviewer SMx2 for their careful reading and suggestions. We will make sure all notations are defined and typos are corrected in our revision. **Q3: ``Questions about Theorem 3.2``** Thanks for pointing out the issue regarding $\mathcal{Z}_{\mathcal{B}}$. Originally, we derived a $\mathcal{B}$-dependent bound $\mathcal{Z}_{\mathcal{B}}$ via Lemma 3.3. There, $j_0$ depends on when we encounter the differing index between $\mathcal{D}$ and $\mathcal{D}^\prime$ for a given mini-batch sequence $\mathcal{B}$. In the current Theorem 3.2, we directly choose the worst-case $\mathcal{B}$, for which the bound is still valid (but loose). We will correct this in our revision and thank you again for spotting this issue.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DiffusionBlend: Learning 3D Image Prior through Position-aware Diffusion Score Blending for 3D Computed Tomography Reconstruction
Accept (poster)
Summary: This paper used the diffusion model to solve the widely known inverse problem in computed tomography. The paper is well-written and well-organized. I reckon this paper has two main contributions: 1. The first diffusion model in CT that considers z-axis consistency and uses 3D data as the neural network input. 2. SOTA results with astonishing visualizations and metrics. Strengths: 1. The first diffusion model in CT that considers z-axis consistency and uses 3D data as the neural network input. 2. SOTA results with astonishing visualizations and metrics. Weaknesses: 1. I admit this is the first paper I have read that uses the diffusion model while considering z-axis consistency. However, the y-axis has been widely discussed and solved before, using various types of regularization between 2D slices or simply using 3D data. I consider that the authors just used a 3D approach here to address this point. Yes, it is new for diffusion models, but I find it less attractive considering the work already done before. 2. The results achieved by the authors, particularly in sparse-view reconstruction using only four angles, are indeed remarkable. The quality of the reconstruction with such limited data is surprising and commendable. Given the significance of these results, it would be beneficial for the community if the authors considered sharing their code upon acceptance to enable further research and validation. 3. The paper's novelty is somewhat overshadowed by the work "Solving Inverse Problems in Medical Imaging with Score-Based Generative Models," which the authors have cited. It would be beneficial for the authors to clearly state the innovations and contributions of their work, especially in light of the foundational problem settings of inverse problem solving for medical imaging already addressed by the cited paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is your window center and width for CT images in the main body of the paper? 
I think the current window width you selected for the main part of the paper is quite wide; consider using a narrower one. I think some details are not so clear in this setting, though I can tell your result is better than the others'. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. Although the authors used some techniques to speed up the process for both training and sampling, diffusion models are still slow compared to other methods, both traditional and non-diffusion deep learning approaches. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing the encouraging feedback! Below, we address the concerns: > **Q:** *Discuss related work with other regularization methods* A: We are aware of other works that apply external regularization between 2D slices with diffusion models, such as [1,2], as cited in our paper. Some other works train an xy-plane prior and a yz-plane prior [3] (also cited) for medical image reconstruction. Nevertheless, to the best of our knowledge, our work is the first one that uses only one 3D generative prior for medical image reconstruction without any additional regularization or priors. We will add more discussion of related works in our camera-ready version. > **Q:** *Release code* A: Thank you for the encouraging feedback! We are in the process of refactoring our code and will release it ASAP after acceptance. > **Q:** *Difference from the score-med paper* A: The problems we solve and our motivations are different from those of the score-med paper [4]. The score-med paper is one of the first works that leverages a 2D diffusion prior for solving 2D medical image reconstruction problems. Our work is the first to investigate whether using a 3D patch diffusion prior can lead to improvement over a 2D diffusion prior for 3D medical image reconstruction. The answer to this question is positive, as demonstrated in our paper. Also, our baseline "DDS 2D" is a very similar approach to the score-med paper. We show that our method outperforms DDS 2D by a significant margin. > **Q:** *Window center and window width* A: We follow the approach of [1,2,3,4] for processing CT images, to see bone structures better. Our window center is 100 HU, and the window width is 2800 HU ([-1300 HU, 1500 HU]). In the rebuttal pdf, we also provide the visualization of our reconstruction in a much narrower window [-150 HU, 250 HU], and we observe that our method can reconstruct the fine details and structures of images very accurately with more projections. 
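For reference, the display windowing discussed above follows the standard HU clipping rule; a small generic sketch (the function name is ours, not from the paper's codebase):

```python
import numpy as np

def apply_window(hu, center=100.0, width=2800.0):
    """Clip HU values to the display window [center - width/2, center + width/2]
    and rescale to [0, 1]. With center=100 and width=2800 this gives
    [-1300, 1500] HU, matching the setting quoted above."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

windowed = apply_window(np.array([-2000.0, 100.0, 2000.0]))  # -> [0.0, 0.5, 1.0]
```

A narrower window (e.g. `center=50, width=400`, a common soft-tissue setting) compresses the displayed range and makes fine soft-tissue detail easier to compare.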
> **Q:** *However, diffusion models are still slow compared to other methods both traditional and deep learning and non-diffusion way.* A: We demonstrate that our method and other diffusion-based approaches can be faster than TV-based iterative traditional approaches (SBTV, one of the best non-deep-learning approaches) in our rebuttal pdf. The reason is that the computation of 3D TV can be costly, while our approach does not need any external regularization, so it can be faster. [1] Chung, Hyungjin, et al. "Solving 3d inverse problems using pre-trained 2d diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023 [2] Chung, Hyungjin, Suhyeon Lee, and Jong Chul Ye. "Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems." The Twelfth International Conference on Learning Representations. [3] Lee, Suhyeon, et al. "Improving 3D imaging with pre-trained perpendicular 2D diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [4] Song, Yang, et al. "Solving Inverse Problems in Medical Imaging with Score-Based Generative Models." International Conference on Learning Representations. --- Rebuttal 2: Title: Looking for your feedback and input Comment: Dear Reviewer FN1q, We really appreciate your taking the time to write the review and provide the encouraging feedback. We want to highlight some novelties in our algorithm design, in addition to our motivations and problem settings, as the author-reviewer discussion window is closing. The key novelty of our paper mainly lies in designing an efficient 3D generative prior by learning from the 3D volume for CT reconstruction. We will revise our paper to highlight our key novelty and clearly state our contributions. In our paper, we have several key novel designs: 1. 
Stack each 2D slice and its adjacent slices of a 3D medical image as a multi-channel image, and use a diffusion prior to learn the distribution of patches of adjacent slices. To the best of our knowledge, we are the **first** to propose this learning strategy in medical image reconstruction. 2. Propose a random blending algorithm for randomly partitioning and blending adjacent patches, which is demonstrated to significantly improve inter-slice smoothness. To the best of our knowledge, we are the **first** to propose this method for medical image reconstruction. 3. ***More importantly***, we propose a novel jumping-slice patch idea, which enables our diffusion prior to learn long-range dependencies instead of focusing only on adjacent slices. We treat non-adjacent slices as a wide patch and learn the distribution of 3D patches with different thicknesses. This method enables us to work on larger volumes. To the best of our knowledge, we are the **first** to propose this method. We demonstrate that we can directly learn the long-range dependency via jumping slices. Results show this design significantly reduces image artifacts, as demonstrated in Figure 4 of our main paper. The novelty of the score-med paper [1] lies in its inverse-problem-solving technique and posterior sampling, which is not the main focus of our paper. We focus more on learning the 3D prior distribution, and we propose several novel designs to achieve this goal. Thanks again for your consideration; we hope this comment resolves some of your concerns. Feel free to let us know if there is any remaining question about the manuscript and we will try our best to answer. [1] Song, Yang, et al. "Solving Inverse Problems in Medical Imaging with Score-Based Generative Models." International Conference on Learning Representations. --- Rebuttal Comment 2.1: Comment: I think the author's reply solved my doubts, I will improve my score. 
I suggest that the git repo be added to the final version, regardless of whether the paper is accepted.
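As a toy sketch of the random-partition blending idea described in the rebuttal above (our own illustrative code with made-up names; the actual method blends patch-level diffusion scores across such partitions at each denoising step):

```python
import numpy as np

def random_partitions(num_slices, patch=3, rng=None):
    """Partition slice indices [0, num_slices) into contiguous groups of
    `patch` slices, with a random cyclic offset so that the border slices
    of one partition fall inside a patch of a differently-offset partition.
    """
    if rng is None:
        rng = np.random.default_rng()
    offset = int(rng.integers(patch))
    order = np.roll(np.arange(num_slices), -offset)
    return [order[i:i + patch] for i in range(0, num_slices, patch)]

rng = np.random.default_rng(0)
vol = rng.standard_normal((12, 8, 8))            # (Z, H, W) toy volume
for part in random_partitions(12, patch=3, rng=rng):
    patch3d = vol[part]                           # 3 slices stacked as channels
    # a patch-level diffusion score would be evaluated on `patch3d` here
    assert patch3d.shape == (3, 8, 8)
```

Re-drawing the offset at every denoising step changes which slices share a patch, which is the mechanism the rebuttal credits for improved inter-slice smoothness; a "jumping-slice" variant would index non-adjacent slices instead of contiguous ones.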
Summary: 1. The paper proposes a method that learns a 3D-patch image prior incorporating cross-slice dependency. 2. The method achieves state-of-the-art reconstruction results for 3D volumetric imaging on the tasks of ultra-sparse-view and limited-angle 3D CT reconstruction on the "AAPM 2016 CT challenge" and "LIDC-IDRI" datasets. Strengths: 1. The probabilistic modeling of the 3D volume taking into account neighboring slices is novel. 2. The paper describes the authors' motivation and implementation plan very well. Weaknesses: 1. Sparse-angle CT imaging is typically defined as using fewer than 100 angles. The paper should evaluate the method's performance at various angles (e.g., 20, 40, 60, 80, and 100 angles). Demonstrating that this method outperforms the comparison methods in PSNR/SSIM at fewer than 10 angles does not necessarily indicate better performance at other sparse angles (e.g., 20, 40, 60, 80, and 100 angles). 2. The comparison should include classic CT reconstruction methods, not just FBP, as FBP lacks prior information and performs poorly among traditional methods. Suggested comparison methods include: 2.1 SIRT (Simultaneous Iterative Reconstruction Technique) algorithm 2.2 Conjugate Gradient Least Squares (CGLS) algorithm 2.3 Split-Bregman (SB) Total Variation: Goldstein, T. and Osher, S., 2009. The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences, 2(2). Because the traditional iterative methods also involve the adjustment of hyperparameters, for a fair comparison, please first use grid search to tune the hyperparameters of the traditional methods to obtain their best performance, and then compare their performance with that of this paper. Also, please report the extent of the grid search. Only in this way can the method be fully evaluated among the numerous algorithms for CT image reconstruction. 3. 
Figure 3 shows a large number of structures that do not exist in CT images, which will interfere with the doctor's diagnosis. Please explain the reasons for this phenomenon and how to avoid it (for example, by increasing the number of angles, which is why I am very concerned about increasing the angles). If increasing the angles can largely avoid the artifacts that come with the generative model, the potential for medical applications will increase. 4. As the abstract says, the method's purpose is to decrease the cost in memory and time. To prove that this method achieves this original intention, please show the time and memory consumption during the training and testing phases, and compare the memory and time consumption of the different methods. Technical Quality: 1 Clarity: 3 Questions for Authors: 1. If the "conditioned slices" in equation (2) increase ("conditioned slices" being x[:,:,i-j:i-1] and x[:,:,i+1:i+j]), will the numerical performance be better? If so, how does the performance gain change as the number of adjacent slices increases? 2. The paper mentions using a different partition so that the previous border slices can be included in another partition. However, if the method creates a new partition and then uses this new partition to create 3D patches, the adjacent slices of different 3D patches still cannot be updated simultaneously when the algorithm attempts to update the slices based on the scores calculated in each 3D patch. So, my question is: if the method uses the joint distribution modelling method (Eq. (5)), how can it avoid the situation I mentioned? 3. For the experimental part, is the reconstruction mentioned here the reconstruction from simulated projections to the 3D CT volume, or from real CT projections (real measurements) to the 3D CT volume? If it is from simulated projections, is simulated noise added to them? 
(Note: The noise I am referring to here is not the noise added in the diffusion model to train the neural network, but the noise added to simulate the real projection, which is generally a combination of Poisson and Gaussian noise.) Confidence: 4 Soundness: 1 Presentation: 3 Contribution: 3 Limitations: The artifacts generated by the diffusion model may lead to misjudgment in the doctor's diagnosis. Please provide methods to avoid artifacts, such as increasing the angles or the amount of training data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing the valuable feedback. Below, we address the concerns. First, we address concerns regarding additional experiments: > **Q:** *Evaluate method's performance at various angles.* A: - We provide results on [20, 40, 60, 80, 100] views for our method (DiffusionBlend++) as well as baselines at those angles (including the SB(Split-Bregman)TV, CGLS, and SIRT baselines) for the LDCT dataset, in the rebuttal pdf as well as in the table below. Results show that our method ***outperforms the baselines significantly*** at ***every*** angle we evaluated. From Figures 1-3 in the rebuttal pdf, DiffusionBlend++ starts to reconstruct images very close to the ground truth with 20 projections or more, but other baselines such as SIRT and CGLS still struggle to produce a satisfying reconstruction even with 60 views or more. The following table shows the reconstruction performance on the coronal plane for the different methods. We observe that DiffusionBlend++ (ours) has a significant margin above the baselines for every view count. Note that DiffusionBlend++ still outperforms DDS2D significantly with >40 views, which demonstrates that our 3D prior remains very useful even with many more views. - The motivation of this paper is to exploit the 3D data-driven prior (instead of external 3D regularization or learning another 2D prior on another plane) for medical image reconstruction. The hypothesis is that with the aid of the 3D prior, we can achieve better reconstruction quality compared to using only 2D priors. Many previous works leverage 2D diffusion priors for sparse-view reconstructions (> 10 views), but only a few works explore reconstructions with fewer than 10 views. For this reason, we did not provide results for more than 10 projections in the main paper. Nevertheless, we will add additional results with more projection angles and more traditional baselines in the camera-ready version. 
``Table.1 Coronal plane results on LDCT test set`` | Method |4 views| 6 views| 8 views | 20 views | 40 views | 60 views | 80 views | 100 views | |------------------|----------|----------|----------|----------|----------|----------|-----------|-----------| | FBP | 11.16/0.236 | 13.18/0.268 | 14.64/0.325 | 20.22/0.594 | 26.09/0.792 | 29.97/0.886 | 32.89/0.933 | 35.16/0.959 | | SIRT | 22.12/0.823 | 23.54/0.841 | 24.50/0.851 | 27.72/0.882 | 30.82/0.918 | 33.34/0.945 | 35.56/0.962 | 37.66/0.974 | | CGLS | 22.11/0.823 | 23.49/0.841 | 24.44/0.850 | 27.67/0.882 | 30.66/0.918 | 32.96/0.944 | 34.82/0.960 | 36.36/0.971 | | SBTV | 23.33/0.849 | 25.83/0.881 | 27.85/0.896 | 31.51/0.938 | 34.83/0.966 | 36.74/0.963 | 37.59/0.973 | 38.97/0.979 | | DDS2D | 30.25/0.916 | 32.33/0.939 | 33.64/0.950 | 35.34/0.961 | 38.26/0.977 | 40.07/0.984 | 41.19/0.987 | 41.88/0.989 | | DDS | 30.89/0.932 | 32.95/0.879 | 33.97/0.935 | 35.81/0.967 | 37.59/0.975 | 38.26/0.978 | 39.07/0.980 | 39.54/0.983 | | DiffusionBlend++ | ***34.27/0.955*** | ***36.66/0.963*** | ***37.87/0.968*** | ***40.56/0.980*** | ***42.53/0.987*** | ***43.66/0.990*** | ***44.42/0.992*** | ***44.95/0.993*** | > **Q:** *Comparison with classic CT reconstruction methods, not just FBP* A: Table 1 and Figures 1, 2, and 3 in the rebuttal pdf show more results on additional classic baselines. Note that DiffusionBlend++ ***does not need to tune any hyperparameter*** when the number of projections changes, whereas this is a requirement for some traditional methods. - SBTV: We implement this algorithm with variable splitting of the 3D anisotropic TV regularization (Dz, Dx, and Dy). We first checked the number of iterations and note that performance converges after around 30 iterations. We did a grid search of hyperparameters on 9 validation images (not in the test set) for every number of projection angles. 
For example, the search for 20 projections was conducted as below: | $\lambda$ | $\lambda / \rho = 0.02$ | $0.04$ | $0.08$ | $0.16$ | |--------|------|------|------|------| | 1.25 | 27.88 | 28.34 | 28.30 | 27.88 | | 2.5 | 28.16 | ***28.36*** | 28.07 | 27.44 | | 5 | 28.10 | 28.06 | 27.57 | 26.73 | | 10 | 27.98 | 27.57 | 26.77 | 26.24 | - SIRT: This algorithm iteratively updates the reconstruction based on the residual between the projection of the reconstruction and the measured (ground-truth) projections. It only has the number of iterations as its hyperparameter. We note that during inference, PSNR increases with more iterations but saturates later, so we set the total number of iterations to 1000, with an early-stopping threshold of 1e-6 between two consecutive iterations. - CGLS: This algorithm uses conjugate gradient for solving least-squares problems. In our case, we use CG to solve $(A^\top A + \rho I)x = A^\top y$, where $\rho$ is set to 1e-4 based on a grid search for numerical stability. We tune the number of iterations on the validation set and find that performance saturates at around 25 iterations. > **Q:** *Whether the projections have noise?* A: Following the approaches in [1,2,3,4,5], we do not add noise to our projections since the noise is very small at normal dose. Nevertheless, we also run additional experiments with 8-view reconstruction on the LDCT test set in the low-dose setting with Poisson-Gaussian noise. The pre-log noise model is given by $y_i = \text{Poisson}(I_0 \exp(-Ax_i)) + N(0, \sigma^2)$. Following [6] to account for the same dose level, we set $I_0 = 10^6$, $\sigma = 5$. The results below show that our method is robust to noise. | | coronal| sagittal | axial| |-----|----|-----|-----| | no noise | 37.87/0.968| 36.48/0.968| 35.69/0.966| |with noise| 37.62/0.966| 36.27/0.965 | 35.47/0.964| --- Rebuttal Comment 1.1: Comment: I think the authors' reply resolved some of my doubts; I will improve my score. 
I suggest that the supplementary experiments be added to the final version, regardless of whether the paper is accepted or not. --- Reply to Comment 1.1.1: Comment: Thanks for the review and for reading our rebuttal. Your review is crucial for us to improve our manuscript. We will definitely add all the supplementary experiments discussed in the rebuttal period to our paper. Feel free to let us know if there is any remaining question about the manuscript and we will try our best to answer. --- Rebuttal 2: Title: Rebuttal (Part 2) Comment: > **Q:** *Explain why wrong structures exist in the reconstruction* A: - With ultra-sparse views (such as 4 views), CT reconstruction is extremely difficult since very few measurements are taken, so many traditional reconstruction algorithms completely fail in this scenario, as demonstrated in Figure 3. Also, non-deep-learning methods generate many structures that do not exist even with as many as 20 views, as demonstrated in Figure 2 of the rebuttal pdf. To account for the missing information under ultra-sparse views, DiffusionBlend(++) generates many image structures that represent the ground truth image well, and some other structures that may look different from the ground truth, owing to the extreme difficulty of this ultra-sparse-view reconstruction task. Note that our method is still significantly better than all compared baselines in such a challenging task. - With an increasing number of views, more measurements are taken, and it is more likely we can reconstruct accurate structures (as mentioned by reviewer Vnw7). We provide reconstruction examples in the rebuttal pdf; results show that the image structures look ***almost identical*** to the ground truth with 40 views or more. > **Q:** *The method's purpose is to decrease memory and time costs. 
Show the time and memory consumption* A: - Firstly, the main purpose of this work is, for the first time, to investigate how to learn a 3D patch diffusion prior that can improve ultra-sparse 3D CT reconstruction performance. The abstract mentions the difficulty of training a diffusion model on full 3D volumetric data (e.g., it simply cannot fit into a 48GB A40 GPU), which motivates us to train a 3D patch diffusion model that is much more computationally efficient than a full 3D model while achieving impressive results. - We have provided the inference time of our method and other baselines in Table 9 of our paper on the 500 slices of the LDCT test set. Additionally, we also provide the training time, training memory, and inference memory in the rebuttal pdf. Overall, our method is efficient since it only uses a 3-channel diffusion model. ***Surprisingly***, our method has better inference time than SBTV while not requiring much more memory. The reason is that, unlike SBTV, which requires expensive 3D TV computation, our method does not require any external regularization, so inference is faster. | Method | Training memory | Training time | Inference memory | Inference time | |------| ------| ------| ------| ------| |SBTV [7] | - | -| 6045MB | 62 mins | |DiffusionMBIR [1] | 29043MB | 47 hours | 22062MB | 23 hours | |DDS [2] | 27040MB | 12 hours | 20608MB | 48 minutes | |DiffusionBlend++ (Ours) | 35384MB | 4.5 hours | 9976MB | 32 minutes | --- Rebuttal 3: Title: Rebuttal (Part 3) Comment: > **Q:** *Whether more conditional slices will make numerical performance better?* A: We perform experiments that train our conditional model (DiffusionBlend) on 0, 2, 4, and 6 conditional slices and evaluate on the LIDC dataset with 8 projections. Results show that including conditional slices increases performance significantly, but adding more conditional slices is not guaranteed to further improve performance, and the improvement may be marginal. 
To explain why the improvement is marginal with more conditioning slices, consider the iterative process of diffusion reverse sampling [1,2]. For example, when using 2 conditional slices, at the first iteration one specific slice conditions on its 2 neighboring slices, but those neighbors in turn condition on 2 additional slices. At the second iteration, that slice again conditions on its neighbors (which already carry information from the 2 additional slices), so, tracing back to the first iteration, it effectively receives information from 4 slices. In this way, with a sufficient number of iterations, one specific slice can incorporate information from every other slice. Since we use a sufficient number of iterations, either 2, 4, or 6 conditional slices eventually allows each slice to condition on all other slices. This enables us to learn the 3D prior very ***efficiently*** with a minimal number of conditioning slices. | Conditioning slices | PSNR/SSIM (8 views)| |------| ------| |0 | 30.98/0.894| |2 | 33.73/0.933| |4 | 33.32/0.932 | |6 | 33.53/0.936 | > **Q:** *The adjacent slices of different 3D patches still cannot be updated simultaneously* A: Previous works [1,3] demonstrate that each summation, gradient-descent, or even ADMM update can be split across the steps of the diffusion reverse sampling process while achieving satisfactory performance. Here, even though adjacent slices may not be updated simultaneously at consecutive reverse sampling iterations, the procedure resembles a Monte Carlo average of the scores of the distributions of different partitions, as demonstrated in Eq. 7. So the goal is not to compute the score of the 3D patch distribution as in Eq. 5, but to approximate the score of the ground truth distribution p(x) by averaging the scores of the distributions of multiple partitions. [1] Chung, Hyungjin, et al. "Solving 3d inverse problems using pre-trained 2d diffusion models." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023 [2] Chung, Hyungjin, Suhyeon Lee, and Jong Chul Ye. "Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems." The Twelfth International Conference on Learning Representations. [3] Lee, Suhyeon, et al. "Improving 3D imaging with pre-trained perpendicular 2D diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [4] Song, Yang, et al. "Solving Inverse Problems in Medical Imaging with Score-Based Generative Models." International Conference on Learning Representations. [5] Chung, Hyungjin, et al. "Improving diffusion models for inverse problems using manifold constraints." Advances in Neural Information Processing Systems 35 (2022): 25683-25696. [6] Ye, Siqi, et al. "SPULTRA: Low-dose CT image reconstruction with joint statistical and learned image models." IEEE transactions on medical imaging 39.3 (2019): 729-741. [7] Goldstein, Tom, and Stanley Osher. "The split Bregman method for L1-regularized problems." SIAM journal on imaging sciences 2.2 (2009): 323-343.
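To make the partition-averaging idea discussed above concrete, here is an illustrative sketch (our own simplification, not the paper's implementation; `score_fn` is a hypothetical per-patch score model over a few adjacent slices):

```python
import numpy as np

def blended_score(volume, score_fn, patch=3, rng=None):
    # Score a randomly shifted partition of the z-axis into `patch`-slice
    # groups; repeating this with fresh random shifts across reverse-sampling
    # iterations mimics a Monte Carlo average over partitions, as described
    # in the rebuttal. `score_fn` is a hypothetical stand-in model.
    rng = rng or np.random.default_rng()
    n = volume.shape[0]
    shift = int(rng.integers(patch))       # random partition offset
    out = np.empty_like(volume)
    for start in range(-shift, n, patch):  # cover every slice exactly once
        s, e = max(start, 0), min(start + patch, n)
        out[s:e] = score_fn(volume[s:e])   # each patch scored independently
    return out

# Toy check with a dummy per-patch "score": every slice gets covered once.
vol = np.zeros((6, 2, 2))
out = blended_score(vol, lambda p: p + 1.0)
print(np.allclose(out, 1.0))  # True
```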
Summary: This paper proposes a novel method for learning 3D diffusion priors for CT reconstruction, which does not require large-scale data or computational resources. It presents two approaches: DiffusionBlend and DiffusionBlend++. The former learns a specific frame given adjacent slices, while the latter learns a 3D patch. Experimental results demonstrate that both methods are efficient and outperform previous works. Strengths: 1. This method enables 3D reconstruction without the need for large-scale data and resources. 2. It achieves excellent inter-slice consistency without relying on external regularizations. 3. Additionally, this method outperforms existing baselines. Weaknesses: The only concern is that this method may be too simplistic to be extended to a broader field. For example, I believe that inter-slice smoothness cannot be guaranteed if this method is applied to videos. However, given that this method is proposed for CT reconstruction, I don't think this issue warrants the rejection of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: I don't have questions about this paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing the valuable feedback. Below, we address the concern: > **Q:** *Extending the method to a broader field* A: Our method is flexible and does not assume any specific data modality or forward model, so it should work for other modalities such as natural 3D images or videos, and for other inverse problems as well. Nevertheless, results on other modalities and tasks are beyond the scope of this work; still, we tested unconditional generation of 3D CT images, which shows good generation quality and inter-slice smoothness. We will include this experiment in the camera-ready version as a reference. Thus, we think that the inter-slice smoothness of the prior learned by our method could potentially carry over to videos to obtain inter-frame coherence, which would be very interesting future work. We will add this discussion in the camera-ready version as well. --- Rebuttal Comment 1.1: Comment: Thank you for your response; it has addressed some of my concerns to a certain extent. However, I will still maintain my score --- Reply to Comment 1.1.1: Title: Further clarifications on more general applications of our method Comment: Dear Reviewer YiqS, We present some additional preliminary analysis on whether our method is still feasible on video data, and the results look promising. We hope this new evidence helps answer your question. To further address your concern about broader applications of the proposed method, we first compute the inter-slice consistency of unconditional 3D CT generation with / without random blending. In addition, we also perform experiments on unconditional video generation to see whether our algorithm can maintain inter-slice (frame) consistency on video data. We tested on the Sky Time-lapse dataset [1], which contains time sequences of sky images. We use a diffusion model pretrained on 256x256 ImageNet data, fine-tuned on 20 video sequences. 
Since our method currently handles single-channel images like medical images (though it should extend to RGB with a little effort), we only use the first channel of the video for fine-tuning. The learning rate is set to be the same as for CT images, and we observe convergence at 5000 iterations with a batch size of 4. We then test the inter-slice total-variation (TV) value on the unconditional generation of a 16-frame video (with / without random blending). One key observation is that video data has more coherent frames, so it is actually easier to train the 3D patch diffusion on it, since there are none of the inter-slice jumps observed in 3D CT data. We present the z-axis TV values in the table below, computed as $||D_z(x)||_1/n$, where $n$ is the number of pixels. The ground truth is taken from the test dataset for CT, and from an average of 10 videos for the Sky Time-lapse data. The results are averaged over 10 generated volumes/videos. | Task | Without Blending | With Blending | Ground Truth | |------| ------| ------| ------| |3DCT Generation | 0.0236 | ***0.0035*** | 0.0065 | |Sky Time-lapse Video Generation | 0.0419 | ***0.0021*** | 0.0036 | We observe that the proposed blending method improves the inter-slice smoothness significantly for both video and 3D CT data. Some video data has smaller inter-slice variation than 3D medical images. We observe that our method can generate 3D images/videos with inter-slice smoothness comparable to the ground truth. Thus, this prior can also be useful for solving inverse problems in a broad range of applications. Thanks for the review and for reading our rebuttal. Your review is crucial for us to improve our manuscript. Feel free to let us know if there is any remaining question about the manuscript and we will try our best to answer. [1] Zhang, Jiangning, et al. "Dtvnet: Dynamic time-lapse video generation via single still image." 
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16. Springer International Publishing, 2020.
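The inter-slice consistency metric used above, $||D_z(x)||_1/n$, can be sketched as follows (a minimal reading of the formula; we assume $n$ counts all voxels of the volume):

```python
import numpy as np

def z_tv(vol):
    # Normalized z-axis total variation ||D_z(x)||_1 / n: the summed absolute
    # difference between adjacent slices, normalized by the voxel count n
    # (our assumption for n; per-slice-pair normalization would differ slightly).
    diffs = np.abs(np.diff(vol, axis=0))  # D_z: finite differences along z
    return float(diffs.sum() / vol.size)

# Toy volume: two 2x2 slices that differ by 1 everywhere.
vol = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
print(z_tv(vol))  # 0.5  (sum of |diffs| = 4, n = 8 voxels)
```

A perfectly slice-consistent volume scores 0; larger values indicate inter-slice jumps like those the blending method suppresses.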
Rebuttal 1: Rebuttal: Firstly, we would like to sincerely thank the reviewers for taking the time to review our paper and providing constructive feedback. We are encouraged that the reviewers think that in our paper - ```The first diffusion model in CT considered z-axis consistency and used 3d data as neural network input. A SOTA results with astonishing visualization and metrics results.``` (Reviewer FN1q), - ```This paper proposes a novel method. Experimental results demonstrate that both methods are efficient and outperform previous works.``` (Reviewer YiqS), - ```The probabilistic modeling of the 3D volume taking into account neighboring slices is novel. The paper describes the author's motivation and implementation plan very well.``` (Reviewer Vnw7). To address the concerns shared by some reviewers, we would like to globally highlight several key points in our response. ***1. Outperforming the comparison methods in PSNR/SSIM at fewer than 10 angles does not necessarily indicate better performance at other sparse angles; more traditional baseline results*** - We provide results on [20, 40, 60, 80, 100] views for our method (DiffusionBlend++) as well as for baselines at those angles (including the SB(Split-Bregman)TV, CGLS, and SIRT baselines requested by Reviewer Vnw7) on the LDCT dataset in the rebuttal pdf and in the table below. Results show that our method ***outperforms the baselines significantly*** for ***each different number of angles*** we evaluated on. From Figures 1, 2, and 3 in the rebuttal pdf, DiffusionBlend++ is able to reconstruct images very close to the ground truth with 20 projections or more, but other baselines such as SIRT and CGLS still struggle to obtain a satisfying reconstruction even with 60 views or more. - The motivation of this paper is to exploit the 3D data-driven prior (instead of external cross-slice regularization or learning 2D priors on different planes) for medical image reconstruction. 
The hypothesis is that with the aid of the 3D prior, we can achieve better reconstruction quality compared to only using 2D priors. Many previous works leverage 2D diffusion priors for sparse-view reconstructions ( > 10 views), but only a few works explore reconstructions with fewer than 10 views. For this reason, we do not provide results for more than 10 projections in the main paper. Nevertheless, we will add additional results with more projection angles and more traditional baselines in the camera-ready version. ***2. A large number of structures that do not exist in CT images, which will interfere with the doctor's diagnosis*** - With ultra-sparse views (such as 4 views), CT reconstruction is extremely difficult since very few measurements are taken, so many traditional reconstruction algorithms completely fail in this scenario, as demonstrated in Figure 3. Also, non-deep-learning methods generate many structures that do not exist even with as many as 20 views, as demonstrated in Figure 2 of the rebuttal pdf. To account for the missing information under ultra-sparse views, DiffusionBlend(++) generates many image structures that represent the ground truth image well, and some other structures that may look different from the ground truth. Nevertheless, it is still significantly better than all baselines considered in this paper. - With an increasing number of views, more measurements are taken, and it is more likely we can reconstruct accurate structures (as mentioned by reviewer Vnw7). We provide reconstruction examples in the rebuttal pdf; results show that the image structures look ***almost identical*** to the ground truth with 40 views or more. ***3. Difference from other related works*** - The problems we solve and our motivations are different from the score-med paper [1]. The score-med paper is one of the first works that leverages a 2D diffusion prior for solving ***2D*** medical image reconstruction problems. 
Our work is ***the first work*** that investigates whether using a 3D patch diffusion prior can lead to improvement over a 2D diffusion prior for ***3D*** medical image reconstruction. The answer to this question is ***positive***, as demonstrated in our paper. Also, our baseline "DDS 2D" is a very similar approach to the score-med paper, and we show that our method outperforms DDS 2D by a significant margin. - Some other works use inter-slice external regularization, such as total variation (TV) [2], or multiple 2D priors on different planes for reconstruction [3], both of which we have cited in our paper, as mentioned by reviewer FN1q. Other methods such as ADMM-TV use TV regularization on the xy, xz, and yz planes. Nevertheless, our work differs from them fundamentally in that it, ***for the first time***, learns one generative 3D prior and uses only that prior, without any other regularization, for medical image reconstruction. We will include more discussion of related works in our camera-ready version. ***4. Discussion on Computational Efficiency*** We have provided the inference time of our method and baselines in Table 9 of our main paper. In addition, we also provide training time, training memory, inference memory, and inference time in the rebuttal pdf. Results show that we achieve better inference memory efficiency and decent inference speed compared to DDS since we do not use the computationally heavy total variation regularization (while DDS claims to be one of the most efficient diffusion solvers for medical imaging). ***Surprisingly***, we find that the non-deep-learning method SBTV has worse inference efficiency than our method due to the heavy computational cost of 3D total variation optimization. [1] Song, Yang, et al. "Solving Inverse Problems in Medical Imaging with Score-Based Generative Models." ICLR 2022 [2] Chung, Hyungjin, et al. "Solving 3d inverse problems using pre-trained 2d diffusion models." CVPR 2023 [3] Lee, Suhyeon, et al. 
"Improving 3D imaging with pre-trained perpendicular 2D diffusion models." ICCV 2023 Pdf: /pdf/2af694fe03ab21bec2230f9271d29dca9712d20c.pdf
NeurIPS_2024_submissions_huggingface
2024
Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model
Accept (poster)
Summary: The paper presents the DDSR model for Sequential Recommendation (SR), which better captures user interest evolution by using fuzzy sets of interaction sequences. Unlike traditional methods, DDSR effectively handles the unpredictability of user behavior and addresses cold start issues. Experiments on benchmark datasets show that DDSR outperforms existing methods, demonstrating its effectiveness in SR tasks. Strengths: 1. DDSR introduces a novel perspective in SR, effectively addressing the limitations of traditional sequential modeling methods and enhancing recommendation accuracy. 2. DDSR uses fuzzy sets of interaction sequences and diffusion transition processes in discrete state spaces, improving the model's ability to capture the randomness and unpredictability of user behavior. 3. The theoretical justification for constructing the information diffusion approximation model is sound and fundamental. 4. Quantitative experimental results demonstrate the model's effectiveness, with particularly exciting results in cold start scenarios. Weaknesses: 1. The paper lacks an algorithm to describe the entire model operation. Including this would greatly enhance readers' understanding of the model and its theoretical underpinnings. 2. All experimental results are quantitative. It is recommended to supplement with a case study or visual experiment. 3. Although the author has analyzed the time complexity, I believe most readers would also appreciate a comparison of actual running times. Technical Quality: 4 Clarity: 3 Questions for Authors: See comments above. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: This paper has discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, thank you very much for reading our work carefully and for your valuable comments and suggestions, from which we have greatly benefited. Next, we will explain and respond to the shortcomings you have pointed out, and we will make the corresponding revisions in the official version of our work based on your reminders. W1: Regarding a clearer description of our algorithm, we think your suggestion is very reasonable. To avoid making the theoretical part too lengthy within the main discussion, we placed the description of discrete diffusion separately in Section 3. However, this approach may not facilitate readers' understanding of the overall workflow. We add our algorithm flowchart below and will incorporate it into the official version of our paper based on your suggestion. **Training of DDSR.** **Input:** - Historical interaction sequence of user $u$: $v_{1:n-1} = c_{1:n-1;1:m}$ ; - Target item: $v_n = c_{n;1:m}$; - Transition matrices: $Q_t$; - Approximator: $f_{\theta}(\cdot)$. **Output:** - Well-trained approximator: $f_{\theta}(\cdot)$. **Procedure:** 1. **while** not converged **do** 2. &emsp; Sample diffusion time: $t \sim [0,1,...,T]$; 3. &emsp; Calculate the $t$-step transition probability: $\quad\overline{\boldsymbol{Q}}_t=\boldsymbol{Q}_1\boldsymbol{Q}_2\ldots\boldsymbol{Q}_t$; 4. &emsp; Convert $c_{n;1:m}$ to one-hot encoding $x_{n;1:m}^0$; 5. &emsp; Obtain the discrete state $x_{n;1:m}^t$ after $t$ steps by Equation (2), thereby obtaining the 'fuzzy set' $c_{1:n-1;1:m}^{t}$; 6. &emsp; Model $c_{2:n;1:m}$ based on the 'fuzzy sets' through Equation (5); 7. &emsp; Take a gradient descent step on $\nabla$ $L_{CE}$ ($\hat{c}_{2:n;1:m}$, $c_{2:n;1:m}$). **Sampling of DDSR.** **Input:** - Historical sequence: $v_{1:n-1} = c_{1:n-1;1:m}$ - Well-trained approximator: $f_{\theta}(\cdot)$ - Sampling steps: $T$. **Output:** - Predicted target item: $v_{n}$ 1. Let $x_T$ = $c_{1:n-1;1:m}$; 2. Let t = T; 3. **while** $t>0$ **do** 4. 
&emsp; Use the trained $f_{\theta}(\cdot)$ to obtain predictions $\widetilde{x}_{0}$ with $x_t$ and $t$ as inputs; 5. &emsp; Substitute $\widetilde{x}_{0}$ into Equation (7) to obtain the distribution at step $t-1$, sample $x_{t-1}$ from it, and set $t \leftarrow t-1$; 6. **end while** 7. $\widetilde{v}_{n}$ = $x_0$[-1;1:m]; 8. If an item with the same code exists: $v_n$ = $\widetilde{v}_{n}$; &emsp; else: $v_n$ is the item whose code is closest to $\widetilde{v}_{n}$ in the code space. W2: Regarding the issues with the experimental section, we greatly appreciate your constructive suggestions. We recognize that our experimental setup was not comprehensive, so we have added a study on the impact of codebook length on recommendation performance. We will present the following experimental results on the Scientific dataset in the form of bar charts in the paper: | **code_len** | **Recall@10** | **NDCG@10** | **Recall@50** | **NDCG@50** | **GPU memory (GB)** |**Training Time (s/epoch)** |**Evaluation Time (s/epoch)** | |----------------|---------------------|-----------------------------|-------------------------------|-------------------------------|-------------------------------|-------------------------------|-------------------------------| | **64** |0.1235|0.0656|0.2396|0.0907|22.14|13.56|19.40| | **32** | 0.1207|0.0663|0.2153 |0.0842 |12.41|6.76|11.38| | **16** |0.1145|0.0603|0.2161|0.0846|9.74|4.34|3.79| | **8** |0.1084|0.0589 |0.2184| 0.0829|8.37|2.55|3.61| | **4** |0.0836|0.0433 |0.1820| 0.0759|7.43|2.49|3.33| W3: Regarding the comparison of actual runtime, we have compared the GPU memory usage and runtime of UniSRec, DiffuRec, and our DDSR model on a single A100 GPU. 
The specific results are as follows: | **Datasets** | **Model** | **GPU memory (GB)** | **Training Time (s/epoch)** | **Evaluation Time (s/epoch)** | |----------------|-----------|---------------------|-----------------------------|-------------------------------| | **Scientific** | UniSRec | 8.32 | 3.51 | 0.67| | | DiffuRec | 14.94 | 4.97 | 17.52| | | DDSR | 12.41 | 6.76 | 11.38| | **Office** | UniSRec | 8.29 | 9.96 | 1.13| | | DiffuRec | 14.85 | 25.81 | 127.41 | | | DDSR | 12.48 | 36.19 | 69.10 | | **Online Retail**| UniSRec | 9.96 | 52.19 | 3.70 | | | DiffuRec | 15.97 | 65.22 | 103.44 | | | DDSR | 13.47 | 83.51 | 60.11 | The increased runtime of DDSR primarily stems from the introduction of the diffusion model and the increased time complexity due to the conversion of item embeddings into discrete codes. The current time complexity is $O(mnd^2+mdn^2)$, which is $m$ (the codebook length) times that of methods like SASRec that only use the Transformer model. Fortunately, we have found in our experiments that good results can be achieved with relatively small settings for the dimension $d$, as we now effectively use a vector of length $m*d$ to represent an item. Additionally, the use of discrete codes helps reduce storage overhead, and DDSR achieves faster sampling than other diffusion-model-based methods by performing inference $k$ steps at a time. We believe that the initial efficiency issues are still within an acceptable range. Additionally, since state transitions within the discrete space do not introduce noise, we found in our experiments that DDSR trains with significantly better stability and convergence speed than DiffuRec. For example, DiffuRec requires about one to two hundred epochs to converge on the Scientific and Office datasets, while our model only needs about forty to fifty epochs. 
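As a rough illustration of the discrete state transitions in Steps 3–5 of the training procedure above (our own simplified sketch with a uniform transition matrix, not DDSR's exact transition design):

```python
import numpy as np

# Illustrative discrete-diffusion forward step: corrupt a one-hot code by
# sampling x_t ~ Cat(x_0 @ Q_bar), where Q_bar = Q_1 Q_2 ... Q_t is the
# cumulative transition matrix. A uniform transition matrix is assumed
# here for concreteness (hypothetical choice, not the paper's design).
K = 5                                        # code vocabulary size
beta = 0.3                                   # per-step corruption rate
Q = (1 - beta) * np.eye(K) + beta / K        # one-step stochastic transition
Q_bar = np.linalg.matrix_power(Q, 4)         # t = 4 accumulated steps

x0 = np.eye(K)[2]                            # one-hot code with index 2
probs = x0 @ Q_bar                           # categorical distribution over states
rng = np.random.default_rng(0)
xt = rng.choice(K, p=probs)                  # sampled corrupted state x_t
print(np.isclose(probs.sum(), 1.0), 0 <= xt < K)  # True True
```

With more steps $t$, `Q_bar` approaches the uniform distribution, i.e., the code is fully "fuzzified"; the approximator $f_\theta$ is then trained to recover the clean code from such corrupted states.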
--- Rebuttal Comment 1.1: Comment: Thanks for the response, which has addressed my main concerns; I therefore keep my original score of acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for taking the time to review our work. We greatly appreciate your thoughtful comments and are glad that we could address your concerns. Your support and acceptance are highly valued. Best regards!
Summary: This paper studies the problem of fuzzy modeling for sequential recommendation. The work proposes to leverage fuzzy sets of interaction sequences to model the nature of users' evolving real interests. It also introduces discrete diffusion modeling specifically for discrete data. The experimental results showcase better model performance in comparison with several included baselines. Strengths: 1. This paper studies the optimization of sequential recommender systems, which is an important application in the information retrieval domain. 2. The paper is written with fair clarity and thus is easy to follow. 3. The paper provides careful analyses to support the claims made, and the experimental results generally demonstrate the effectiveness of the proposed methods. Weaknesses: 1. Insufficient baseline comparison against conventional sequential recommender models, as the two included models are outdated. 2. Insufficient empirical analysis of model efficiency. Firstly, the runtime cost comparison is missing, which is very important for evaluating recommender system prototypes for practical deployment. Secondly, based on the complexity analysis, it is crucial to showcase the performance variations (effectiveness/efficiency) when varying the codebook length $m$. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How many performance evaluations are conducted for the averaged results? 2. What are the real runtime costs of the proposed model and the other baselines? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Please see the weaknesses for details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we sincerely appreciate your careful review of our work and the extremely valuable suggestions you have made! In response to the issues you have pointed out, we will improve our research and provide detailed answers to each of your questions, hoping to clearly address your concerns. W1: Regarding the concern you raised about the insufficient baseline comparison with traditional sequence recommendation models, we indeed acknowledge that there is a gap in our work in this area. During our research, we primarily focused on comparing with other text-based and generative model-based sequential recommendation approaches, which may have led to an inadequate evaluation of traditional models. To address this, we have added experiments with the following three traditional sequential recommendation models: 1. **HSTU**[1]: A novel sequence recommendation architecture introduced by Meta in May this year. HSTU utilizes a hierarchical sequence transduction framework, incorporating a modified attention mechanism and a micro-batching algorithm (M-FALCON) for efficient handling of high-cardinality and dynamic recommendation data. 2. **Mamba4Rec**[2]: A model proposed by Liu et al. in June this year, based on the latest Mamba design, which includes a series of sequence modeling techniques. 3. **FDSA**[3]: The feature-level deep self-attention network introduced by Zhang et al., which integrates various heterogeneous features of items into a feature sequence with different weights through a fundamental mechanism. 
Here are our experimental results:

| **Datasets** | **Model** | **Recall@10** | **NDCG@10** | **Recall@50** | **NDCG@50** |
|---|---|---|---|---|---|
| **Scientific** | HSTU | 0.1086 | 0.0543 | 0.1816 | 0.0721 |
| | Mamba4Rec | 0.0976 | 0.0562 | 0.1881 | 0.0760 |
| | FDSA | 0.0901 | 0.0599 | 0.1683 | 0.0766 |
| **Office** | HSTU | 0.1102 | 0.0718 | 0.1709 | 0.0872 |
| | Mamba4Rec | 0.1079 | 0.0727 | 0.1633 | 0.0858 |
| | FDSA | 0.1108 | 0.0730 | 0.1723 | 0.0854 |
| **Online Retail** | HSTU | 0.1456 | 0.0698 | 0.3750 | 0.1213 |
| | Mamba4Rec | 0.1410 | 0.0686 | 0.3783 | 0.1188 |
| | FDSA | 0.1478 | 0.0719 | 0.3817 | 0.1229 |

Among these, we integrated HSTU into our framework, which is based on RecBole, as its original source code did not include a test set partition. Thank you for your suggestion; we will enhance the baseline comparisons in the formal version of the paper.

[1] Zhai J, Liao L, Liu X, et al. Actions speak louder than words: Trillion-parameter sequential transducers for generative recommendations[J]. arXiv preprint arXiv:2402.17152, 2024.
[2] Liu C, Lin J, Wang J, et al. Mamba4Rec: Towards efficient sequential recommendation with selective state space models[J]. arXiv preprint arXiv:2403.03900, 2024.
[3] Zhang T, Zhao P, Liu Y, et al. Feature-level deeper self-attention network for sequential recommendation[C]//IJCAI. 2019: 4320-4326.

Q1&W2: Thank you for pointing that out. We will address these omissions in the final version of the paper.
Below are the actual runtime cost comparison results of our DDSR model with UniSRec and DiffuRec:

| **Datasets** | **Model** | **GPU memory (GB)** | **Training Time (s/epoch)** | **Evaluation Time (s/epoch)** |
|---|---|---|---|---|
| **Scientific** | UniSRec | 8.32 | 3.51 | 0.67 |
| | DiffuRec | 14.94 | 4.97 | 17.52 |
| | DDSR | 12.41 | 6.76 | 11.38 |
| **Office** | UniSRec | 8.29 | 9.96 | 1.13 |
| | DiffuRec | 14.85 | 25.81 | 127.41 |
| | DDSR | 12.48 | 36.19 | 69.10 |
| **Online Retail** | UniSRec | 9.96 | 52.19 | 3.70 |
| | DiffuRec | 15.97 | 65.22 | 103.44 |
| | DDSR | 13.47 | 83.51 | 60.11 |

Regarding the performance variations with different codebook lengths, we have added experiments here. Taking the Scientific dataset as an example, we explore how codebooks obtained through quantization methods affect recommendation performance at various lengths:

| **code_len** | **Recall@10** | **NDCG@10** | **Recall@50** | **NDCG@50** | **GPU memory (GB)** | **Training Time (s/epoch)** | **Evaluation Time (s/epoch)** |
|---|---|---|---|---|---|---|---|
| **64** | 0.1235 | 0.0656 | 0.2396 | 0.0907 | 22.14 | 13.56 | 19.40 |
| **32** | 0.1207 | 0.0663 | 0.2153 | 0.0842 | 12.41 | 6.76 | 11.38 |
| **16** | 0.1145 | 0.0603 | 0.2161 | 0.0846 | 9.74 | 4.34 | 3.79 |
| **8** | 0.1084 | 0.0589 | 0.2184 | 0.0829 | 8.37 | 2.55 | 3.61 |
| **4** | 0.0836 | 0.0433 | 0.1820 | 0.0759 | 7.43 | 2.49 | 3.33 |

On the Scientific dataset, a codebook length of 64 achieved the best results most of the time. We regret not having explored this more thoroughly before.

Q1: During our experiments, we ran each model five times and took the average as the final result to validate the significance of the improvements.
We failed to mention this point in the implementation details section of the paper, which was an oversight. We are very sorry for this and will include this information in the final version of the paper. If you believe that our response has resolved the issues raised and alleviated your concerns, we kindly request that you consider adjusting the score accordingly. If you still have any doubts or suggestions, we earnestly ask you to point them out to further enhance our work. Once again, thank you for your thoughtful comments and consideration. --- Rebuttal 2: Comment: Dear Reviewer, I hope this message finds you well. I am writing to inquire about the progress of my rebuttal submission. I understand that the review process can be quite involved, but I would appreciate any updates you could provide regarding the status of the review. Thank you very much for your time and effort! Best regards! --- Rebuttal Comment 2.1: Comment: Dear Reviewer, I hope this message finds you well. I understand that you may have a busy schedule, and I truly appreciate the time and effort you are dedicating to reviewing our manuscript. As the deadline for feedback approaches, I wanted to kindly check in and see if there is any additional information or clarification we can provide to assist with the review process. Your insights are invaluable to us, and we look forward to any feedback you might have. Thank you again for your time and consideration. Best regards!
Summary: The paper presents a diffusion model-based sequential recommendation method from a novel information-theoretic perspective, which operates on discrete structural state spaces along with semantic labels, improving efficiency and tackling cold-start issues. Strengths: Strengths* 1. Novelty. The paper uses a directed graph to model sequential recommendation and models the transitions of nodes with discrete diffusion. Besides, the authors introduce semantic tags to replace meaningless item IDs, enhancing efficiency and alleviating cold-start issues. 2. Soundness. The paper presents enough theoretical analysis and experimental evaluation to validate the proposed method. The significant performance improvement on three public datasets, along with a series of further ablation studies and analyses, clearly demonstrates the soundness of the paper. 3. Easy to follow. The paper presents sufficient algorithmic detail. In addition, the authors provide sufficient code to help understand and reproduce the algorithm. Weaknesses* 1. Citations mixed into the text create obstacles for reading. See questions for details. 2. The writing should be more carefully formatted. See questions for details. Weaknesses: See Above Technical Quality: 4 Clarity: 3 Questions for Authors: 1. In text such as lines 235, 236, 251, the citation text is literally repeated. 2. In text such as lines 20-26, it would be better to add brackets to distinguish references from the main text. 3. In lines 74 and 144, it should be explicit that "2" and "4.1" refer to "Section 2" and "Section 4.1". 4. The usage and format of quotation marks is a bit confusing; see lines 34, 219 and the Table 1 caption for examples. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: There is no significant limitation. Perhaps the authors could consider giving a rough discussion or comment on how the computational complexity (or the constant coefficient) of the proposed method can be improved.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable time you have dedicated to our work and the encouragement you have given us! Thank you for thoroughly reading our paper and pointing out some issues, which indeed arose due to our oversight; for this, we deeply apologize. In the final version, we will carefully review and correct each issue you have raised and perform a comprehensive proofreading from start to finish to ensure that such problems do not recur. Regarding the limitations you have mentioned, specifically the need to discuss improvements in computational complexity, we recognize the importance of this issue and plan to address it in the final version of our paper. We believe there are two feasible approaches for improvement: through the model architecture and through the coding strategy. For instance, within the model architecture, one could employ linear attention mechanisms in Transformers as a substitute, redesigning the computation of the $K$ and $V$ matrices to reduce the time complexity of the attention mechanism from $O(n^2d)$ to $O(nd^2)$. Additionally, we have taken note of the recently proposed Mamba model, which maintains linear complexity while demonstrating robust performance. From a coding perspective, exploring ways to convey sufficient information with shorter codebook lengths seems to be an effective strategy. For example, in our research, the codebook lengths generated by RQ-VAE are significantly shorter than those required by traditional quantization methods, yet maintaining effectiveness on this basis remains a critical challenge. Lastly, we greatly appreciate the valuable suggestions you have provided. [1] Gu A, Dao T. Mamba: Linear-time sequence modeling with selective state spaces[J]. arXiv preprint arXiv:2312.00752, 2023. [2] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention". In: International Conference on Machine Learning.
PMLR. 2020, pp. 5156–5165. --- Rebuttal Comment 1.1: Title: To authors Comment: You have addressed some of the concerns I raised in my previous review. I believe this work is indeed novel and meets the standards required for this conference. As a result, I am inclined to raise my score. Best regards, --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We would like to once again express our sincere gratitude for the time and effort you invested in reviewing our manuscript. We deeply appreciate your constructive feedback and are thankful for the higher evaluation during the rebuttal phase. Your insights have not only enhanced our work but also encouraged us to further refine our research. Based on your suggestions, we have made appropriate revisions to our paper accordingly. Thank you once again for your thoughtful review and for recognizing the potential of our work. We are extremely grateful for your expert guidance and assistance. Best regards!
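The rebuttal above mentions reducing attention from $O(n^2d)$ to $O(nd^2)$ via linear attention [2]. A minimal numpy sketch of that reordering, using the $\mathrm{elu}(x)+1$ feature map from Katharopoulos et al. (function and variable names are ours, purely illustrative, not from the paper):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized attention with phi(x) = elu(x) + 1 (Katharopoulos et al.).

    Computes phi(Q) @ (phi(K)^T V) in O(n d^2) instead of the
    O(n^2 d) cost of materialising the full softmax attention matrix.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, strictly positive
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                  # (d, d_v) summary, built once over the sequence
    Z = Qp @ Kp.sum(axis=0)        # (n,) per-query normaliser
    return (Qp @ KV) / Z[:, None]  # (n, d_v)
```

Because the positive feature map makes the implied attention weights sum to one per query, feeding a constant value matrix returns that constant, which is a quick sanity check of the reordering.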
Summary: The paper presents a new model for sequential recommendation (SR) called DDSR, which aims to predict items of interest for users based on their past behavior. The authors critique conventional SR methods for not adequately capturing the randomness and unpredictability in user behavior. DDSR uses fuzzy sets of interaction sequences to better reflect the true evolution of user interests. Unlike common diffusion models that operate in continuous domains, DDSR uses diffusion transition processes in discrete state spaces, avoiding information loss by employing structured transitions. To tackle the inefficiency of matrix transformations in large discrete spaces, the model utilizes semantic labels derived from quantization or RQ-VAE instead of item IDs. Tests on three public datasets show that DDSR surpasses current state-of-the-art methods, proving its effectiveness in SR tasks. Strengths: Effective combination of pre-existing ideas: discrete diffusion with semantic IDs. Empirical results in Section 5 on Scientific, Office, Online Retail datasets. Weaknesses: I think a main weakness of the paper is a lack of clarity in writing. The method is still unclear to me, although this may be partly due to my lack of experience in recommender systems. I would condense and combine Sections 3 and 4 and move Section 2 to end. The technical novelty is also not entirely clear to me. Is it in the form of the transition matrices used for the discrete diffusion? Or is the technical novelty claimed that the items are recommended based on the output of running a few steps of discrete diffusion (adding noise to the features)? Technical Quality: 2 Clarity: 3 Questions for Authors: Could the authors please elaborate on the analogy between fuzzy sets and discrete diffusion? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes, limitations adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we appreciate the valuable time you have invested in our work and the constructive suggestions you have offered; we will rigorously revise our paper based on your feedback. We hope the following explanations will somewhat alleviate your concerns. W1: We find your suggestion to relocate Section 2 to the end of the paper quite insightful, and we plan to adopt this adjustment in the final version. Regarding the idea of combining Sections 3 and 4, we should explain that our original intention in presenting Section 3 separately before Section 4 was to keep the theoretical and model sections apart. This was aimed at preventing the model section from becoming too lengthy, which could negatively affect the reader's experience. However, your advice has highlighted a potential compromise in clarity that we had not fully appreciated. To enhance our discussion, we introduce an algorithm flowchart below as an initial step. In the formal version, we will thoughtfully consider how to integrate your feedback. Currently, we are contemplating including Sections 3.1 and 3.2 within the methods part of Chapter 4 and positioning Section 3.3 just before the experimental segment, to provide theoretical backing for our proposed DDSR model. We would appreciate your thoughts on whether this revised structure seems appropriate. Here is the algorithm flowchart (after generating the discrete semantic encodings).

**Training of DDSR.**

**Input:**
- Historical interaction sequence of user $u$: $v_{1:n-1} = c_{1:n-1;1:m}$;
- Target item: $v_n = c_{n;1:m}$;
- Transition matrices: $Q_t$;
- Approximator: $f_{\theta}(\cdot)$.

**Output:**
- Well-trained approximator: $f_{\theta}(\cdot)$.

**Procedure:**
1. **while** not converged **do**
2. &emsp; Sample a diffusion time: $t \sim \{0,1,\ldots,T\}$;
3. &emsp; Compute the $t$-step transition probability: $\overline{\boldsymbol{Q}}_t=\boldsymbol{Q}_1\boldsymbol{Q}_2\ldots\boldsymbol{Q}_t$;
4. &emsp; Convert $c_{n;1:m}$ to the one-hot encoding $x_{n;1:m}^0$;
5. &emsp; Obtain the discrete state $x_{n;1:m}^t$ after $t$ steps via Equation (2), thereby obtaining the 'fuzzy set' $c_{1:n-1;1:m}^{t}$;
6. &emsp; Model $c_{2:n;1:m}$ based on the 'fuzzy sets' through Equation (5);
7. &emsp; Take a gradient descent step on $\nabla L_{CE}(\hat{c}_{2:n;1:m}, c_{2:n;1:m})$.

**Sampling of DDSR.**

**Input:**
- Historical sequence: $v_{1:n-1} = c_{1:n-1;1:m}$;
- Well-trained approximator: $f_{\theta}(\cdot)$;
- Sampling steps: $T$.

**Output:**
- Predicted target item: $v_{n}$.

**Procedure:**
1. Let $x_T = c_{1:n-1;1:m}$;
2. Let $t = T$;
3. **while** $t>0$ **do**
4. &emsp; Use the trained $f_{\theta}(\cdot)$ to obtain the prediction $\widetilde{x}_{0}$ with $x_t$ and $t$ as inputs;
5. &emsp; Substitute $\widetilde{x}_{0}$ into Equation (7) to obtain the distribution at step $t-1$, sample $x_{t-1}$ from it, and set $t \leftarrow t-1$;
6. **end while**
7. $\widetilde{v}_{n} = x_0[-1;1:m]$;
8. **if** an item with the same code exists: $v_n = \widetilde{v}_{n}$; **else**: $v_n$ is the item whose code is closest to $\widetilde{v}_{n}$ in the code space.

W2: We apologize for the lack of clarity in our description and hope that the following explanation will address your concerns about the technical innovations. Unlike traditional diffusion models that add noise to features, we use discrete diffusion to induce structured transformations among discrete nodes, such as semantic labels, within a discrete state space using transition matrices. This replaces the noise injection of conventional diffusion models and avoids drifting representations towards meaningless directions. The semantic space itself is inherently meaningful, and the discrete diffusion framework allows us to design controllable transition schemes, such as importance-based transitions, which are a major source of the performance improvement.
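The forward ("fuzzification") step of the training procedure above can be illustrated with a minimal numpy sketch. We use a uniform transition matrix purely for illustration; the paper's controllable, importance-based transitions would replace `uniform_transition`, and all names here are ours:

```python
import numpy as np

def uniform_transition(K, beta):
    """Q_t: stay with probability (1 - beta), else jump to a uniform random code."""
    return (1.0 - beta) * np.eye(K) + beta * np.full((K, K), 1.0 / K)

def cumulative_transition(K, betas):
    """Qbar_t = Q_1 Q_2 ... Q_t, as in step 3 of the training procedure."""
    Qbar = np.eye(K)
    for b in betas:
        Qbar = Qbar @ uniform_transition(K, b)
    return Qbar

def q_sample(x0, Qbar, rng):
    """Draw x_t ~ Categorical(row x0 of Qbar): the 'fuzzified' discrete codes."""
    return np.array([rng.choice(len(Qbar), p=Qbar[i]) for i in x0])
```

Since each $Q_t$ is row-stochastic, so is the cumulative product, and sampling a row gives a valid categorical distribution over the $K$ semantic codes.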
We also note that traditional diffusion models add noise to embeddings, which often exacerbates training difficulties in highly sparse scenarios like sequential recommendation. For instance, in our experiments, we observed that DiffuRec required over three hundred training epochs to converge on most datasets. Our work is also the first attempt to enhance diffusion models with textual information in the context of recommendation. Q1: We introduce the concept of fuzzy sets because diffusion models are inherently designed for generative tasks, which fundamentally differ in form from autoregressive tasks like sequential recommendation, where the $n$th item is predicted from the previous $n-1$ items. By employing fuzzy modeling to redefine the diffusion process, we ensure that diffusion is not only theoretically applicable to the target embeddings but also more adaptable, without compromising the robust theoretical support inherent to diffusion models. As for discrete diffusion, it represents a form of fuzzy modeling in our work. The specific reasons for using discrete diffusion are presented in W2. Thank you for your thorough review. If you feel our clarifications have alleviated your doubts, we kindly ask you to consider adjusting the score accordingly. Should you have any further questions or require additional clarification, please do not hesitate to point them out. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I thank the authors for their thoughtful modifications to the paper - is it possible to upload this revision to OpenReview before the review period ends? I think the inclusion of the pseudo-code for training and sampling algorithms is a big step forward. Also, with the inclusion of the new experimental results I am glad to increase my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your positive feedback and for recognizing the improvements in our revised manuscript.
We greatly appreciate your support, particularly regarding the inclusion of the pseudo-code and the new experimental results. We will promptly upload the revised version to OpenReview before the review period ends, as per your suggestion. Thank you again for your valuable guidance and for increasing your score. Best regards! --- Rebuttal 2: Comment: Dear Reviewer, I hope this message finds you well. I am writing to inquire about the progress of my rebuttal submission. I understand that the review process can be quite involved, but I would appreciate any updates you could provide regarding the status of the review. Thank you very much for your time and effort! Best regards!
NeurIPS_2024_submissions_huggingface
2024
Doubly Hierarchical Geometric Representations for Strand-based Human Hairstyle Generation
Accept (poster)
Summary: This paper introduces a method for generating realistic strand hair geometry using a frequency-decomposed representation. The approach constructs a hierarchical generative model for hair strands, leveraging discrete cosine transform (DCT) and k-medoids clustering to create coarse guide curves that effectively distinguish fundamental hair shapes from complex curliness and noise. Additionally, it employs a permutation-equivariant architecture (PVCNN) to support flexible guide curve modeling and facilitate the hierarchical generation of strands in low and high frequencies, transitioning from sparse to densely populated. Strengths: 1. The paper proposes a hierarchical approach to hair strand generation, which starts from coarse guide strands and progresses to densely populated strands, ensuring detailed and realistic hair geometry. 2. Utilizing DCT for frequency decomposition to separate low-frequency structural curves from high-frequency details is innovative, particularly as it avoids the Gibbs' oscillation issues associated with the standard Fourier transform. 3. The use of k-medoids clustering for extracting representative guide curves ensures better retention of hairstyle characteristics compared to traditional UV grid sampling methods. 4. The paper proposes a permutation-equivariant architecture for the VAE, allowing flexible modeling of guide strand geometry without being restricted to a fixed grid, enabling the generation of dense strands in any quantity and density. Weaknesses: 1. **Minor Technical Contributions:** - The primary contribution lies in the utilization of DCT for hair curves and k-medoids clustering to extract guide hair strands. However, as stated in the paper, "DCT is widely used in image, video, and audio signal compression." The novelty of using DCT for hair curves has not been adequately highlighted. - The performance improvement of k-medoids clustering is not clearly demonstrated. 
This method should be compared with other clustering techniques (e.g., k-means) instead of grid-sampling, as shown in Fig. 3. 2. **Insufficient Comparisons with Previous Work:** - The paper lacks sufficient comparisons with previous relevant strand-based hair modeling methods, such as Wang et al. (2009) "Example-based hair geometry synthesis." 3. **Unfair Comparisons with Other Methods:** - In Fig. 7, the results of the nearest-neighbor upsample are confusing. There should be an explanation of how the nearest-neighbor upsample is performed, at least in the appendix, to ensure the comparison is fair and transparent. Technical Quality: 3 Clarity: 3 Questions for Authors: - For better validation of the method, it would be beneficial to include comparisons with other state-of-the-art clustering techniques and hair modeling methods. - Further highlighting the novelty and advantages of using DCT specifically for hair curves could strengthen the paper's contributions. - Ensuring fair and transparent comparisons with detailed explanations of all methods used in the evaluations would improve the credibility and reliability of the results presented. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and suggestions, and for your appreciation of our innovations. **[W1] Technical contributions** Please refer to the global rebuttal for a detailed discussion of our contributions. We will specify our innovations, and especially which components of our method are new with respect to existing strand hair modelling methods, in the related work to better show our contributions. For some of your specific concerns: **[W1.1] novelty of DCT for hair curves not adequately highlighted** Thanks for the suggestion and for acknowledging that utilizing DCT here is "innovative". We will specify in the contribution and related work that we do frequency decomposition on strand curves and introduce DCT in hair modelling, both for the first time. **[W1.2] k-medoids and comparison with other clustering techniques** First, we clarify that we do not perform clustering; instead, we sample guide strands (see the discussion on Wang et al. (2009) in W2 for more details). It just happens that the optimal solution of the guide strand sampling can be obtained from k-medoids clustering as the resulting final medoid set. Discovering this previously unknown property of this less famous clustering method in our application, with a proof (Theorem 1), is our theoretical contribution. We compare our solution with grid sampling because the latter is widely applied in recent hair modeling methods, especially the direct competitor HAAR. k-means itself does not suffice as a sampling method. Unlike k-medoids, the cluster centers of k-means are not necessarily from the original data (existing strands). This can cause problems such as strand roots not being aligned with the head scalp. To make the k-means centers valid hair strands, we modify the solution with an additional projection step that projects the root of each sampled k-means center to the nearest point on the head scalp, so it matches a valid strand.
We show the results in Table D in the rebuttal PDF as an extension of Table 1 in the manuscript. Since k-means and k-medoids are similar in methodology, the performance gap is not large, while the k-means sampling may lose some accuracy due to the projection step. However, as we showed in theory, k-medoids is guaranteed to be the optimal solution. **[W2] Comparison with existing works** Please refer to the global rebuttal for evaluation and comparison with the SOTA method HAAR. Thank you for introducing the work of Wang et al. (2009), which is relevant. We will add a discussion in the related work. However, Wang et al. (2009) is not a generative method and the task setup is different, so we are not able to directly compare with this method. Also, as an early method, there is no publicly available code or synthesized results. Wang et al. (2009) is a classic method based on Kwatra et al. (2005) that synthesizes texture variations from a given image example. So Wang et al. (2009) requires an exemplar strand hairstyle (or a combination of two, if compatible) as well for the global shape and synthesizes only some local details. For the hair representation, Wang et al. (2009), as a classic method, uses PCA to encode each strand, while recent neural representations are more advanced. The hierarchy of coarse-to-dense hair strands is modeled by clustering, which is suitable for the spiky hairstyle in their illustration. But in more general cases, the densification of hair strands should concern more than one sparse guide strand. We believe that the relationship between dense strands and sparse guide strands should be modeled by **sampling and interpolation** rather than **clustering**. That is, each dense strand is affected not only by a single guide strand from its own cluster, but also by a collection of guide strands in its neighborhood. Our method is conceptually different from (and most likely more advanced than) Wang et al. (2009).
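To illustrate the property discussed above, that k-medoids centers, unlike k-means centers, are always members of the original data (here, actual strands), a minimal numpy sketch of an alternating k-medoids update (our own toy illustration, not the authors' implementation or the PAM algorithm used for Theorem 1):

```python
import numpy as np

def k_medoids(X, k, iters=50, seed=0):
    """Toy alternating k-medoids: the returned centers are always rows of X."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)  # assign each point to nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if len(members):
                # best medoid = the member minimising total in-cluster distance
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids
```

Because the update step searches only over cluster members, every returned center is an index into `X`, i.e. an existing strand with a valid scalp root, which is exactly why no projection step is needed, unlike with k-means.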
Nevertheless, how to adapt the representation from Wang et al. (2009) to modern neural networks is non-trivial and worth exploring. For these reasons, we believe that HAAR is a more recent and directly related SOTA method for us to compare with. But we will discuss Wang et al. (2009) in the related work section. **[W3] Details of nearest neighbors upsampling for fair and transparent comparison** For both our learning-based upsampling and nearest neighbor upsampling, we first need to sample a collection of root points for the dense strands on the head scalp UV map. In the usual pipeline of VAE generation and reconstruction, this is achieved by sampling from the learned density map. Specifically in the experimental evaluation here, we use oracle roots from the ground truth to ensure fair and direct comparisons. For nearest neighbor upsampling, for each sampled root of a dense strand, we identify the guide strand whose root is nearest to the sampled root. Then we make the dense strand at this root the same as the nearest guide strand, i.e., we copy this guide strand and translate it to the sampled root. Thanks for your suggestion. We will add this clarification to the appendix. - Wang et al. (2009) Example-Based Hair Geometry Synthesis. - Kwatra et al. (2005) Texture optimization for example-based synthesis. --- Rebuttal Comment 1.1: Comment: thanks for the rebuttal and clarification. I will raise my score a bit. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Sftt, We appreciate your insightful comments and suggestions, which helped to enhance our paper for better clarity and evaluation. Thank you very much for recognizing our contribution and raising your score. Best regards, Submission 11742 Authors
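The nearest-neighbor upsampling baseline described in [W3] can be sketched in a few lines of numpy (array shapes and names are our assumptions, for illustration only):

```python
import numpy as np

def nn_upsample(guide_strands, dense_roots):
    """Copy each dense strand from the guide strand with the nearest root,
    translated so the copy starts at the sampled dense root.

    guide_strands: (g, L, 3) polylines; the root is the first point.
    dense_roots:   (n, 3) sampled root positions for the dense strands.
    """
    guide_roots = guide_strands[:, 0, :]                                    # (g, 3)
    d = np.linalg.norm(dense_roots[:, None, :] - guide_roots[None, :, :], axis=-1)
    nearest = np.argmin(d, axis=1)                                          # (n,)
    offsets = dense_roots - guide_roots[nearest]                            # per-strand shift
    return guide_strands[nearest] + offsets[:, None, :]                     # (n, L, 3)
```

Each output strand is a rigid translate of a guide strand, so, unlike the learned densification, this baseline cannot interpolate between neighboring guides.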
Summary: The authors propose reformulating hair strand generation by considering it in the frequency domain (DCT). This approach allows for decoupling high- and low-frequency strands via frequency thresholding (low-pass filtering). The main idea is that we can first generate a set of sparse, low-frequency strands that can then guide the generation of dense, high-frequency details. The pipeline begins with k-medoids clustering on a dense, low-pass filtered set of hair strands to obtain a sparse set of guide strands. They then train a VAE (Variational Autoencoder) using a dual-branch hybrid point-voxel architecture (PVCNN with an additional decoder). This VAE generates sparse guide curves, which are then used to generate the remaining dense strands (still low frequency) via a densification network. An additional high-frequency model is then used to add a variety of high-frequency details to the low-frequency dense strands generated by the VAE + densification networks. Strengths: - Separating low and high frequencies is an interesting approach. It makes sense that artists work this way, focusing on low-frequency details first to outline the direction of the curve and then adding high-frequency details. - Good motivation for using DCT instead of DFT. - The adaptation of PVCNN for hair strands is quite clever and makes sense. Several modifications are made to fit the hair strand problem ('voxelizing' based on root points, which reduces the space to 2D). - The guide curve encoder, decoder and the densification process are all trained jointly, which helps unify the model toward generating accurate hair strands. - One benefit of separating low and high frequencies is the ability to generate high-frequency details for each hairstyle. Weaknesses: - While separating low and high frequencies makes sense for artists, I'm still not entirely convinced that an automated pipeline must follow the same process to achieve the best results.
There are benefits to separating them, such as being able to generate a variety of high-frequency details from the same low-frequency strands, but diversity can also be achieved as a whole. I think a simple experiment that varies the frequency threshold for low-frequency strands (eventually removing the low-pass filter entirely and using all strands for clustering) might help, so we can see the benefit of separating low and high frequencies. - As with any PointNet-based model, the runtime can be higher than a purely convolutional one, as mentioned in the limitations in the appendix of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why the use of depth-to-space upsampling instead of deconv? - I'm a bit confused about the density map for sampling root positions, which is necessary for decoding back into the proper position. I would appreciate some additional clarification on how this works. Others: - Line 203, (T)o enhance … Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors clearly outline the limitations in the appendix. This includes runtime (compared to pure convolution) and examples of failure cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and appreciation of our method and innovations. **[W1] On the effectiveness of DCT frequency decomposition, and separation of low- and high-frequency with varying threshold** We appreciate the constructive suggestion. The design choice to learn first low- and then high-frequency components adheres to the well-known principle of spectral bias [28] for the learning and generalization of neural models. This principle has also been adopted by some implicit neural representation models with frequency band control during training / optimization, e.g., BACON [A] and SAPE [B]. Another advantage specific to our per-strand representation is that, as mentioned in Sec 3.1, we downsample the low-pass filtered curve to a resolution of two times the frequency threshold, which, by the Nyquist sampling theorem and empirically from the quantitative evaluation, is accurate enough to represent the low-pass filtered curve. So the low-pass filtering helps with data compression and computational efficiency. For the reasons above, we expect the frequency decomposition to help the representation quality of the neural network model. We show additional ablation experiments on varying the frequency threshold in Table C, reconstructing straight and curly hair strands with both low- and high-frequency details, where hair strands are reconstructed by aggregating results from both our low- and high-frequency models. We observe that a frequency threshold in the range from 8 to 12 is optimal, and empirically we use 8, which is more efficient. When the frequency threshold is too low, the low-pass filtered signal does not capture enough information about the principal growing direction. And when the frequency threshold is too high, high-frequency structure cannot be encoded efficiently by the DCT coefficients, and the increased computation cost hinders optimization.
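The per-strand DCT low-pass filtering discussed above can be sketched in numpy (a minimal illustration under our own assumptions; the orthonormal DCT-II convention and the names `dct_matrix` / `low_pass` / `f` are ours, not the paper's exact implementation):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: C @ C.T == I, so the inverse transform is C.T."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

def low_pass(strand, f):
    """Keep only the first f DCT coefficients of each xyz coordinate of a strand."""
    C = dct_matrix(len(strand))
    coeffs = C @ strand   # (N, 3) spectral coefficients, per coordinate
    coeffs[f:] = 0.0      # zero everything above the frequency threshold
    return C.T @ coeffs   # smooth low-frequency curve, still N points
```

Per the Nyquist argument in the rebuttal, the smooth curve can then be resampled down to `2 * f` control points, e.g. `smooth[np.round(np.linspace(0, len(smooth) - 1, 2 * f)).astype(int)]`, without losing the retained band.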
Our representation makes use of both the spatial and spectral domains with a correct setup of the frequency threshold. We also clarify that the separation of low- and high-frequency components does not affect clustering (and in fact we do not perform clustering, but instead sample guide strands; it just happens that the optimal solution of guide strand sampling is achieved by the final medoid set resulting from k-medoids clustering). Using low-frequency strands for sampling is just an implementation choice made for efficiency, because the low-frequency strand can be resampled to a lower resolution of control points according to the Nyquist sampling theorem, which is more efficient to process. And we think the principal growing direction from the low-pass filtered strand is representative enough for sampling the guide strands. **[Q1] Depth-to-space upsampling instead of deconv** In 2D CNNs for image reconstruction and generation, deconvs have the well-known drawback of checkerboard artifacts [D], which can be resolved by depth-to-space upsampling [E] without losing computational efficiency. So we use depth-to-space upsampling by default. **[Q2] Density map** At the end of the conv branch in the PVCNN decoder, we output an $\mathbb{R}^{W \times H}$ density map matching the spatial resolution of the conv feature map. The density map is trained towards the GT probability of a root point falling in each of the $W \times H$ grid cells, for each training example. In training, we use oracle root points from the encoder and optimize the density map and strands simultaneously. During inference, we can sample root points from the probability maps when the oracle root points from the encoder are unknown, before generating strand details. Each sampled root point is assigned a random 2D UV coordinate within the small square grid cell it comes from.
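The inference-time root sampling just described can be sketched as below — a hypothetical NumPy version (the rebuttal mentions `torch.multinomial()`; `rng.choice` with probabilities plays the same role here), where the grid-cell index is drawn from the normalized density map and jittered with a uniform UV offset inside its cell:

```python
import numpy as np

def sample_roots(density, n, rng=None):
    """Sample n root positions in [0,1)^2 UV space from an (H, W) density map.
    A cell index is drawn per root (multinomial sampling), then jittered
    uniformly inside its grid square, mirroring the described procedure."""
    rng = np.random.default_rng(rng)
    h, w = density.shape
    p = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=n, p=p, replace=True)  # ~ torch.multinomial
    rows, cols = np.divmod(idx, w)
    # uniform jitter inside each selected cell, normalized to UV coordinates
    u = (cols + rng.random(n)) / w
    v = (rows + rng.random(n)) / h
    return np.stack([u, v], axis=1)
```

With a reasonably fine grid, the jittered samples approximate the continuous root distribution even though exact original root positions are not recovered, which matches the set-chamfer evaluation used in the paper.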
In practice, we output two density maps from the conv-branch decoder, one for guide and one for dense strands, as mentioned in the appendix (the loss function part). At the implementation level, the sampling process can easily be implemented with the torch.multinomial() function. Although density map sampling does not guarantee the exact original root positions, empirically, with a reasonable density map resolution, we find that the resulting root positions closely resemble the distribution of root points, and the reconstruction of the whole hairstyle is still significantly advantageous over grid-based baselines. Our evaluations are all based on set chamfer measurements, thus not requiring strand-to-strand correspondence for evaluation. - [A] Lindell et al. (2022) Bacon: Band-limited coordinate networks for multiscale scene representation - [B] Hertz et al. (2021) Sape: Spatially-adaptive progressive encoding for neural optimization - [C] Shen et al. (2023) Ct2hair: High-fidelity 3d hair modeling using computed tomography - [D] Odena et al. (2016) Deconvolution and checkerboard artifacts - [E] Wojna et al. (2017) The Devil is in the Decoder: Classification, Regression and GANs --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thank you for the detailed response. The frequency-thresholding experiment shows that separating frequencies is beneficial, as the consistent performance drop at both low and high thresholds indicates that this separation is indeed useful. It's also nice that the authors compared with recent work like HAAR and achieved better (though somewhat marginal) results in the user study. However, while these aspects do strengthen certain contributions, I understand the other reviewers' concern that the overall approach could be seen as a series of design choices rather than a substantial contribution. But I'd argue that the combination of these elements does create a novel pipeline that hasn't been explored before.
Whether this contribution is significant enough for NeurIPS is debatable (it might align more with a computer graphics/vision conference). Regardless, I still believe it offers value. After further consideration, I still stand by my original rating. I think the rebuttal is compelling enough for me. --- Rebuttal 2: Comment: Dear Reviewer vBP9, Thank you very much for your positive feedback, for sharing your valuable insights with your expertise, and for recognizing our contribution of "a novel pipeline that hasn't been explored before". **Regarding the performance gap from the user study vs. HAAR**, it is hard to quantify the performance gap by averaging subjective scores between 1-10 and conclude a “marginal” improvement, since many users provide conservative mid-range scores when feeling uncertain. We have observed a clear advantage in the quality of our results over HAAR, as evidenced by our data and experience, which show notably fewer instances of failure or unnatural examples. This improvement is attributed to our more sophisticated and flexible hair representation and learning model design. We have highlighted some of HAAR’s common issues in Figure B of our rebuttal PDF. Due to space constraints, we will include more qualitative comparisons in the revised manuscript. **We believe that our work will contribute value to the NeurIPS community.** Our research addresses a generative learning problem and engages with several prominent ML topics, including geometric deep learning, learning on sets, implicit neural representations, and graph convolution. We hope that our approach may inspire further exploration in these sub-communities, e.g. regarding hierarchical abstraction methods for learning structured non-Euclidean data representations. Thus, we believe that NeurIPS is an appropriate venue to share our approach and exchange thoughts, which could potentially inspire other emerging applications.
Thank you once again for your dedicated review, which helped in guiding improvements to our work, especially with regard to the suggested ablation experiment. Best regards, Submission 11742 Authors Title: Values of our work to the NeurIPS community
Summary: This paper proposes a system to generate hair geometries in a coarse-to-fine manner via a VAE. The paper demonstrates the effectiveness of the proposed method against some simple baselines such as grid-based methods. The paper is relatively easy to read, but it presents limited comparisons with existing SOTA methods. The paper also has very limited discussion of related works, which makes its positioning unclear. Strengths: The proposed method combines simple techniques such as DCT, k-medoids, and VAE, which could potentially be a plus as these methods are well-studied and can be improved further. The results seem to suggest the effectiveness of the proposed components. The coarse-to-fine generation method has the potential to generate fine details with reasonable computational efficiency. Weaknesses: It concerns me a little bit whether the paper's technical contribution is sufficient. Most of the introduced techniques, such as applying k-medoids, DCT, and/or VAE, are well established. It's also not clear from the related work section whether these techniques are new to hair generation applications. I fail to see a comparison with SOTA methods such as HAAR [41] in the evaluation, which doesn't help with this concern. The evaluation metrics for the ablation seem to focus on reconstruction, while the paper seems to claim generation as the main task. Technical Quality: 2 Clarity: 2 Questions for Authors: L145 - it's a bit unclear to me what this theorem 1 entails in the context of hair generation. Would be nice to clarify. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors include a discussion of limitations in section E. An additional limitation is the concern about the generative quality of VAEs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. **[S1 and W1] Contributions** Please refer to our global rebuttal. Although the methods of DCT and k-medoids are well established, they were never used in any prior work for hair modelling, because *how to employ DCT and k-medoids for hair modelling is non-trivial and not straightforward*. Our novel hierarchical hair representation proposes strand-level frequency decomposition and hairstyle-level optimal representative-subset guide sampling, which enable the use of these techniques. The choices of DCT and k-medoids are well motivated against the more popular variants DFT and k-means. In particular, the discovery that k-medoids yields the optimal way of sampling guide strands, proven theoretically (Theorem 1), is considered our theoretical contribution. We use a VAE as the generative model for our hierarchical strand hairstyle generation, which was also used to learn a strand codec [32]. We do not claim any contribution on novel generative models there. For technical contributions, apart from the more foundational methodology and theory on the abstraction of the hair strand data hierarchy (Sec 2), we would like to highlight our several novel innovations with advantages in the corresponding neural model design (Sec 3) associated with the hierarchical hair parameterization. Please refer to our global rebuttal. **[W2] Relationship to existing works** Please refer to our global rebuttal. We will specify our innovations, and especially which components in our method are new with regard to existing strand hair modelling methods, in related work to better show our contributions. **[W3] Evaluation against HAAR** Please refer to our global rebuttal. **[W4] Absence of evaluation metrics for strand hair generation quality** Indeed, quantitative evaluation of hair generation is hard, because currently there is no such measurement to evaluate the generation quality of strand hair.
Even the existing SOTA work HAAR did not evaluate generation quality. Evaluation of image generation can use domain-specific PSNR and SSIM measures, as well as FID and LPIPS, which require a pretrained semantic encoder (e.g. VGGNet). Unfortunately, strand hair has neither such domain-specific measures nor a VGGNet-like semantic encoder for strands to enable FID and LPIPS measurements. We will add the lack of evaluation metrics as a limitation of the whole field of generative hair modelling in the discussion. A possible future direction could be to train a semantic encoder for strand hairstyles with self-supervised learning to enable FID and LPIPS measures, though it requires a large amount of high-quality data. We expect the most promising direction to achieve this would be to apply CT2Hair [A] at large scale for human hair capture and reconstruction, which requires specific equipment. Instead, within the VAE generative framework, we quantitatively evaluate the VAE reconstruction as a way to show the quality and advantage of our hierarchical hair representation. The generation quality is shown by qualitative examples and a well-established human assessment study against the SOTA method HAAR in the rebuttal. **[Q1] Theorem 1** Theorem 1 implies that, if you want to sample a number $k$ of guide strands from the original hair with dense strands, then theoretically the optimal way is to perform k-medoids clustering on the dense hair strands and take the resulting set of medoids as guide strands. This resulting set of guide strands (called the representative guide curve set, as in Definition 1) has the smallest possible chamfer distance from the original dense strand set among all ways to sample the same number $k$ of strands. (Note that we do not perform clustering; we just use the resulting medoids as sampled guide strands.
It just happens that the optimal solution of guide strand sampling is achieved by the final medoid set resulting from k-medoids clustering.) In this way, the extracted representative guide curves ensure the best possible retention of hairstyle characteristics for the modelling of the hierarchical hair representation used for training the neural generative model. - [A] Shen et al. (2023) High-Fidelity 3D Hair Modeling using Computed Tomography.
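The guide-strand sampling that Theorem 1 characterizes could be sketched as below — a toy, hypothetical illustration using a naive PAM-style swap search over a precomputed pairwise distance matrix (the strand-to-strand distance $d(\cdot,\cdot)$ and the paper's actual k-medoids implementation are abstracted away):

```python
import numpy as np

def k_medoids(D, k, n_iter=100):
    """Naive PAM-style k-medoids over a precomputed (n, n) distance matrix.
    Medoids are always members of the input set, which is what allows them
    to be used directly as sampled guide strands."""
    n = D.shape[0]
    medoids = list(range(k))  # simple deterministic initialization
    for _ in range(n_iter):
        improved = False
        for i in range(k):
            for c in range(n):
                if c in medoids:
                    continue
                trial = medoids[:i] + [c] + medoids[i + 1:]
                # swap in c if it lowers the total point-to-nearest-medoid cost
                if D[:, trial].min(axis=1).sum() < D[:, medoids].min(axis=1).sum():
                    medoids = trial
                    improved = True
        if not improved:
            break
    return medoids
```

Because the returned medoids are drawn from the set itself, the chamfer term from $\mathcal{U}$ back to $\mathcal{H}$ is identically zero — the second half of the argument in the proof above.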
Summary: The paper presents a representation for learning a generative model of hair strands. The suggested representation is hierarchical, going from low-frequency to high-frequency details. In turn, the suggested representation is incorporated into a VAE architecture. The method is evaluated on a dataset of synthetic strand hairstyles. Strengths: Both quantitative and qualitative results are provided. I appreciate the effort put into addressing the challenging task of hair strand generative modeling. Weaknesses: Presentation quality. The paper is difficult to follow. For instance, Section 3 should make a clearer distinction between implementation details and method details. Another example is Figure 4, which is challenging to interpret. The proof details are hard to follow as well. Contribution. The paper primarily presents itself as a collection of design choices, such as DCT, clustering, VAE, and PVCNN. For example, the contribution list includes "utilize the discrete cosine transform" as a contribution, and clustering is also claimed as a contribution. It is challenging to classify these specific choices as contributions. Instead, a more detailed discussion, perhaps extending from the concrete focus of hair modeling to broader ML topics, would have been more appreciated. Evaluation. The method is evaluated solely on a single dataset consisting of synthetic data. It is anticipated that this method should be applied to real data or in other settings for a more comprehensive evaluation. Technical Quality: 2 Clarity: 2 Questions for Authors: I would appreciate any response regarding the weaknesses stated above. Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. **[W1] Presentation quality and clarification** We will improve the proof details for better readability. For Sec 3 and Fig 4, we reviewed them and think the information is technically precise and clear. However, we understand that, since the whole field of generative hair modelling is very new, it may require some effort and time for readers to understand our methodology. So we make the following clarifications. **[W1.1] Sec 3 distinction between implementation/method details** We clarify that Sec 3.1 provides the details of the parameterization of hair data for the neural model to process, based on the methods in Sec 2. These parameterization details are crucial for understanding the subsequent neural model design. All the information in Sec 3.2 consists of the necessary **method details** to understand our method and the motivation of the neural model design based on the proposed hierarchical hair representation, while all the **implementation details** of the model are deferred to appendix C.1. **[W1.2] Fig 4** Each of Fig 4 (b) - (d) corresponds to a paragraph in Sec 3.2, which can be recognized by the subfigure and paragraph titles. We will add clearer pointers to connect them. In more detail: - Fig 4(a) illustrates the setup of guide / non-guide strands. - Fig 4(b) The guide strand model. The main architecture illustration follows the PVCNN convention with conv and pointnet branches, plus components of a 1D strand encoder and decoder and the VAE reparameterization to adapt to the generative hair modelling task. - Fig 4(c) The densification model, combining bilinearly interpolated features (above) and the graph features (below) for decoding the dense hair strands.
The graph features aggregate information from neighboring guide strands to sampled query locations on the scalp, as inspired by implicit neural representations (see the text in L224-233 for a detailed explanation), which allows modelling an arbitrary number of dense strands at arbitrary density, and end-to-end joint training together with the guides (b). - Fig 4(d) Adding high frequency, which is similar to the architecture in (b). **[W1.3] Proof of Theorem 1** We update the proof with more explanations for better readability. **Proof**: Assume that from the $k$-medoids algorithm, we obtain the set of medoids $\mathcal{U} = \{ u_1, \dots, u_k \}$, with each $u_i$ from the set of dense hair strands $\mathcal{H}$. Then, from Eq. (2), $\mathcal{U}$ achieves the minimum sum of cluster element-to-medoid distances $\sum_{i=1}^k\sum_{l^L \in \mathcal{S}_i}d(l^L, u_i)$. Next, from the algorithm implementation, each element $l^L$ in $\mathcal{H}$ is closer (or equally close) to the medoid of its own cluster than to that of any other cluster, so $\mathcal{U}$ is the subset of $\mathcal{H}$ with cardinality $k$ that achieves the minimum $\sum_{i=1}^k\sum_{l^L \in \mathcal{S}_i} d(l^L, u_i) = \sum_{l^L \in \mathcal{H}}\min_{u_i \in \mathcal{U}} d(l^L, u_i)$, which is the minimum sum over each dense strand $l^L$ of its distance to its nearest medoid $u$. After taking the average (dividing by the constant $|\mathcal{H}|$), $\frac{1}{|\mathcal{H}|}\sum_{l^L \in \mathcal{H}}\min_{u_i \in \mathcal{U}} d(l^L, u_i)$ is in the form of a **unidirectional chamfer distance** from $\mathcal{H}$ to $\mathcal{U}$. So $\mathcal{U}$ achieves the minimum unidirectional chamfer distance from $\mathcal{H}$ among all possible subsets of $\mathcal{H}$ with cardinality $k$. Then we show that in the reverse direction, the unidirectional chamfer distance from $\mathcal{U}$ to $\mathcal{H}$, $\frac{1}{|\mathcal{U}|}\sum_{u_i \in \mathcal{U}}\min_{l^L \in \mathcal{H}} d(l^L, u_i)$, is constantly 0.
This is easy to infer because $\mathcal{U}$ is a subset of $\mathcal{H}$, and each $u_i$ can find the same element in $\mathcal{H}$ that is closest to itself with distance 0. Aggregating both directions, we conclude that $\mathcal{U}$, among all possible subsets of $\mathcal{H}$ with cardinality $k$, achieves the minimum (bidirectional) chamfer distance between $\mathcal{U}$ and $\mathcal{H}$, i.e., $\mathcal{U}$ is the representative subset of $\mathcal{H}$ with cardinality $k$ according to Definition 1. **[W2] Contributions** See our global rebuttal. Some notes specific to your comments: - Instead of "utilizing the DCT", here we claim the strand-level frequency decomposition in our novel hierarchical hair representation, which is used for the first time in hair modelling. Introducing DCT to hair strand modelling, again for the first time, can be a (less significant) contribution but is not our main focus. How to apply DCT to hair modelling is **non-trivial** without our novel representation design with frequency decomposition. - Adaptation of geometric deep learning models such as PVCNN is also **non-trivial**, because hair strands are not point cloud data for direct application. We would refer to S3 in Reviewer vBP9's comments suggesting that this adaptation and modification in our model design are "quite clever and makes sense". - We clarify that we do not "cluster" strands but "sample" guide strands. See our reply to Reviewer Sftt [W1.2]. - We mainly focus on the learnable hierarchical generative hair representation and its neural model design. For broader ML topics, our novel method and neural architecture design are highly related to the topics of geometric DL, set-based modelling, graph NNs, implicit neural representations, and hierarchical abstraction of non-Euclidean data. In this way, our work can potentially inspire these communities on method design and extend the application to other novel applications.
Thanks for the comments, and we will add this to the discussion. - Besides the hair representation (Sec 2), we would like to highlight our several novel innovations with advantages in the corresponding neural model design (Sec 3) associated with the hierarchical hair parameterization. Please refer to our global rebuttal. --- Rebuttal Comment 1.1: Title: reply to authors Comment: I appreciate the authors’ thorough rebuttal and have no further requests. However, I remain concerned about the contribution of the paper, as noted in my original review. The paper presents itself as a collection of design choices, and I still question whether these specific choices can be classified as significant contributions to hair modeling. --- Rebuttal 2: Title: Regarding novel contributions by applying existing classic techniques in (part of) our method Comment: Dear Reviewer mnz6, Thank you very much for your response. Your comments have been valuable in helping to improve the evaluation and readability of our paper. However, we still stand by and wish to defend the contributions we have made. - **[Regarding classic methods (DCT and k-medoids) for extracting the hierarchical hair representation]**: Our major contribution here lies in the high-level hierarchical representation design, without which the direct application of the aforementioned methods is impossible. For these individual classic methods, besides being introduced to hair modelling *for the first time*, we contribute by presenting the **insights and theoretical justifications** on why we opt for these classic methods, which are less utilized by recent researchers, instead of more common alternatives such as DFT and grid-sampling/k-means: - We use the DCT instead of the more popular DFT, showing the motivational insight that the DFT exhibits *an oscillation issue* in the frequency decomposition of strands as open curves.
- We use k-medoids to sample guide strands, and show it is mathematically the optimal way to sample guide strands, which is theoretically more advantageous than the more commonly used grid-sampling / k-means. Meanwhile, using k-medoids for sampling and its theoretical optimality were *never discovered before*. - **[For existing methods (PVCNN) that inspire our neural model architecture design]**: - Geometric deep learning models are introduced to strand hair modelling for the first time. Also, adapting *(rather than directly using)* existing geometric deep learning models (such as PVCNN) to hair is not straightforward, because they were originally designed **for point cloud data**, which is highly different from hair strands. In fact, at the implementation level, we were unable to directly use almost any existing code from PVCNN due to the huge difference, but we built the architecture entirely *from scratch* with the PyTorch framework. - In addition to the components related to existing methods, we would also highlight our several innovations in the learning model to handle **flexible non-Euclidean parameterization, resolution-free representation to generate any number of strands at any density, and the end-to-end training of the high-dimensional strand hair data**, etc. We hope the significance of these innovations can be acknowledged once a reader thoroughly follows the full pipeline of our methodology. Overall, we fully understand the concern that using some existing techniques as components in (part of) the methodology can be considered a less novel contribution. But we would like to present a slightly different philosophy from a well-known researcher in the field: **"A new use of an old method can be novel if nobody ever thought to use it this way"**; **"the novelty arose from the fact that nobody had put these ideas together before"**. (Black, 2022).
We would be grateful if you could consider re-evaluating our contributions and innovations, taking into account the full pipeline of our methodology and learning model design, as well as the insights presented in our work. Regardless of your decision, we sincerely appreciate the time and effort you have dedicated to the review. Your comments are invaluable in helping us to improve our work. Best regards, Submission 11742 Authors - [Black, 2022]: Michael J Black (2022). Novelty in Science: A guideline for reviewers.
Rebuttal 1: Rebuttal: We thank all reviewers for the time and effort they put into reviewing our work. We address some common concerns about unclear contributions and insufficient evaluation as follows. We will update the paper with evaluation results and other clarifications. **Contributions related to existing work** The most related work, HAAR [41] (the only strand hair generation method with code that we can compare with), adopts the hair representation from [32], which maps each strand to a code with a pretrained strand codec VAE, then projects it to a 2D scalp UV map that can be processed with regular 2D CNNs. The same representation is widely used in other recent strand hair modelling methods [40, 46, 36] for different tasks. In contrast, we do not borrow any existing hair representation; we design a brand new hierarchical hair representation method with associated neural models. Our method is more flexible, more sophisticated, and better performing than grid UV map (2D CNNs) + strand codec. Before [32] (ECCV2022), earlier strand hair methods [23,33,40,42,44,45] mostly had a different focus on optimizing strand growth or connecting segments, while learning was applied to other intermediate representations (e.g. orientation fields) but not directly to strands, thus not requiring a hierarchical strand representation with abstraction. In these methods, the strand representation is just simple polylines (a sequence of points). Some early work also applied PCA and clustering to strands [A]. Our **hierarchical hair representation methodology** and the associated **neural model design** are novel, with a number of innovations in strand hair modelling. To compare with existing methods, we structure our contributions as follows - Contributions on hair representation and hierarchical abstraction *(Sec 2)* - *(per-strand level)* For the first time in hair modelling, we apply frequency decomposition on strands to facilitate learning following the spectral bias principle.
We introduce the DCT for that purpose, also for the first time in hair modelling, and show that the DCT is better than the popular DFT for strands as open curves. - *(collection-of-strands level)* For the first time in hair modelling, we introduce the k-medoids clustering algorithm for guide strand sampling. We show that this way of guide strand sampling is theoretically optimal (closest to the dense strand set in terms of the chamfer distance) with a mathematical proof, a property that was not discovered before. - Contributions on neural model design *(Sec 3)* - For the first time in hair modelling, we adapt the family of non-Euclidean geometric deep learning models for learning on hair strands, instead of 2D CNNs, for more flexible modelling of hair strands. We opt for PVCNN for its efficiency, with modifications (as it was originally designed for point clouds) to fit the hair strand problem. - We propose a novel neural mechanism for learning strand upsampling / interpolation with graph message passing. Inspired by implicit neural representations, our method handles modelling any amount of dense strands at any sampling density, which has not been seen in deep models that directly process strand hairstyles. Another benefit is enabling end-to-end joint training with guide strands, which we haven't seen in learning-based strand modelling either. We will specify which components in our method are novel compared to existing strand hair modelling methods in related work to better show our contributions. **Contributions related to existing techniques DCT, k-medoids, PVCNN** Instead of *using some technique*, our major contribution is the novel design of the hierarchical hair representation and associated neural model design, or *how we make it possible to use these techniques* in hair modelling, which is not straightforward and thus was never done before. Please refer to our replies to Reviewers mnz6 and VBS2. **Evaluation against SOTA method HAAR** 1.
The code of HAAR was not available before the NeurIPS submission, so comparing with the full pipeline was difficult. Therefore, we include a comparison with the hair representation they use, which is exactly the "grid-based + strand codec" baseline in Table 2, as we stated in Sec 4, and also in Table B in the rebuttal PDF. We will specify the connection to HAAR in the Table 2 caption (like Table B). The quantitative comparison shows the advantage of our novel hierarchical hair representation. 2. HAAR released their code and inference model (but not the artist hairstyle data) after the NeurIPS submission. Due to the lack of evaluation metrics for hair generation (see the reply to Reviewer VBS2 W4), we conduct a user study with human evaluation. We randomly generate 30 hairstyles using our method without selection, and 30 from HAAR, 60 examples in total, randomly shuffled before being presented to the users. Each user gives a score of 1-10 to each hairstyle on how realistic the generated hairstyle looks. For the Code of Ethics, we provide examples and instructions on the Google form in the rebuttal PDF Figure A. We collected 54 valid responses. The resulting average scores are in Table A, suggesting the advantage of our generation over HAAR. We also show results for different hairstyle categories. For short hair, both methods perform well. Our method performs significantly better on long hair and especially curly hair, due to our sophisticated representation design, e.g. frequency decomposition, the learned neural interpolation, and end-to-end training that facilitates optimization. Some qualitative issues with HAAR are shown in Figure B. **Evaluation on real-world data** The CT-scanned human hairstyles in CT2Hair [B] are the real-world strand data with the best quality we know of. We evaluate on them in Table B of the rebuttal PDF, corroborating the advantage of our hair representation.
- [A] Wang et al (2009) Example-Based Hair Geometry Synthesis - [B] Shen et al (2023) High-Fidelity 3D Hair Modeling using Computed Tomography Pdf: /pdf/cf16b8f46461c2e4fe2e8506a17b5836415927fd.pdf
NeurIPS_2024_submissions_huggingface
2024
Action Imitation in Common Action Space for Customized Action Image Synthesis
Accept (poster)
Summary: The paper introduces “TwinAct”, a novel method for separating actions from actors in few-shot action image generation using text-guided diffusion models (TGDMs). It creates a “common action space” to focus on actions alone, allowing for precise customization without actor-specific details. The process is streamlined into three main steps: constructing the action space from key phrases, imitating the action within this space, and generating varied, context-adaptive images using an action similarity loss. The results demonstrate its robustness and versatility for personalized and creative content generation. Strengths: 1. The idea of imitating actions through a combination of action bases within this space is interesting and provides a new way to achieve few-shot customization. 2. The paper is well-structured and clearly articulated. The method is technically sound, with a clear explanation of the steps involved, from building the action space to generating the final images. 3. The experimental results are impressive, especially the robustness of TwinAct. The potential applications for both customized characters and actions are of great interest. Weaknesses: 1. In Figure 6, why do all the methods show a performance degradation when two people are involved? 2. Can the authors explain in detail how the model can be customized for both character and action? 3. Could the authors provide examples and analyses of where TwinAct fails or performs suboptimally? What steps can be taken to address these failure modes? 4. Adding something like Figures 8 and 9 from the appendix would make the results more convincing for readers who do not venture to the appendix to search for plots. Technical Quality: 3 Clarity: 3 Questions for Authors: Please check the detailed comments in the weaknesses part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **For Reviewer** **2ZgR** We are sincerely grateful to the reviewer for dedicating their time and effort to reviewing our work, and we appreciate the recognition of the novelty of our approach and the significance of our results. We address the reviewer's comments in detail below. ### **Q1. Performance degradation when two people are involved** It should be noted that two factors influence the identity of the actor. (1) The first factor is **the actor identification from the reference image**. In this paper, the customized action is decoupled from the actor in the reference image via a common action space (Figure 1, second and third rows). (2) The other factor is the **confusion between actors in the new context**, as previously mentioned with regard to Batman and Superman. It should be noted that, even without customized actions, the stable diffusion model still confuses identity information when generating multiple subjects [1,2]. Eliminating this confusion when generating multiple concepts remains a significant challenge, although it is not the primary focus of this paper. As a result, all models show performance degradation with two actors **due to the confusion inherent in multi-subject generation**. [1] Easing Concept Bleeding in Diffusion via Entity Localization and Anchoring [2] FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection ### **Q2. How the model can be customized for both character and action** Custom action generation can be integrated into existing customization frameworks in a seamless manner. 
In the event that both the actor and the action are to be customized, it is only necessary to update the tokens of the action and the actor using different strategies. In particular, the actor tokens are updated directly by inversion, while the action tokens are generated from PCA coefficients, which are themselves updated by inversion. ### **Q3. Bad case analyses** One suboptimal case is the confusion that arises when the generation involves multiple individuals. A potential solution is to introduce spatial constraints, such as bounding boxes [1], or to manipulate attention maps, as discussed in reference [2]. It is also noteworthy that generating customized actions is challenging for certain specific actors, such as reptiles (e.g., snakes). [1] FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection [2] Divide & Bind Your Attention for Improved Generative Semantic Nursing ### **Q4. Adding Figures** Thank you for your suggestions. We will readjust the layout in the final version to include Figures 8 and 9 in the main paper, so that readers can gain a better understanding. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. After a careful reading of the authors' response as well as the comments from the other reviewers, my concerns have been clearly addressed. I decide to maintain my recommendation for acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful consideration and for taking the time to engage with our responses. We are grateful for your recommendation for the acceptance of our paper.
Summary: This paper aims to tackle the few-shot action image generation problem by customizing text-conditioned diffusion models. To decouple action from actor, the proposed method introduces a common textual action feature space to avoid interference from the actor's visual semantics during action generation. The experimental results demonstrate that the proposed method is effective in generating customized actions and preserving actor identities. Strengths: - This work focuses on customizing text-conditioned diffusion models for few-shot action image generation, which is an interesting and meaningful topic; - The motivation of decoupling action and actor for action image generation is good, and the proposed textual action space is novel and effective in achieving the decoupling. - The proposed method is easy to understand, and the writing is good. Weaknesses: The overall method is technically sound and the demonstrated results are great, but the reviewer still has some concerns about the method designs: - The reviewer thinks it is reasonable to use three embeddings (i.e., k=3) to encode each action. However, it is hard to understand why the PCA is separately applied along the token embedding axis (e.g., when k=1, only the first embeddings from each action are processed by PCA). - The collected image action dataset is very small (12 actions only). Why not use some image/video action datasets? And, more importantly, how can one make sure the learned/customized model generalizes to unseen action image generation? - Regarding the action similarity loss, it is hard to ensure that the high-level action semantics will be learned, as there are no explicit constraints. - Minor issue: the high cost of action phrase filtering: the authors generate 1.2K action phrases using GPT-4, and a total of 2×1.2K images are generated using Stable Diffusion XL. The generation is expensive and the manual filtering process is labor-intensive. 
I will improve my rating if the authors' response addresses all my concerns well. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed and there is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **For Reviewer** **Q179** We are sincerely grateful to the reviewer for dedicating their time and effort to reviewing our work, and we appreciate the recognition of the novelty of our approach and the significance of our results. We address the reviewer's comments in detail below. ### **Q1. Why the PCA is separately applied along the token embedding axis** We are grateful for your valuable suggestions. In response, we have included additional experimental results in the following table. In our supplementary experiments, we encoded action phrases at the phrase level, as opposed to the token level, using the CLIP model and leveraging the final pooled layer output. | Methods | S_Actor | S_Action | | ---------- | ------- | -------- | | w/ token | 73.34 | 69.47 | | w/ phrases | 60.27 | 58.66 | The experimental results presented in Table 2 of the Rebuttal PDF indicate that conducting principal component analysis (PCA) on the embeddings of individual tokens is more effective than performing PCA on the embedding of the entire action phrase. This is attributed to the greater flexibility afforded by analyzing the token dimension. For instance, when analyzing the phrase "raising hand," distinct coefficients can be assigned to the words "raising" and "hand" when PCA is applied to the token dimension. Conversely, applying varying coefficients to verbs and nouns across the entire phrase dimension is not feasible. ### **Q2. Enhancing the Action Dataset for Unseen Action Image Generation** (1) Thank you for your suggestion; we acknowledge that the current dataset is limited in scale. We will expand it further in subsequent work. However, we have made a concerted effort to ensure the diversity of the data by including various types of movements, such as localized movements, whole-body movements, and both single-person and two-person movements. 
Furthermore, the current dataset includes some actions that Stable Diffusion (SD) has not encountered before, such as those shown in **rows 1-4 of Figure 4 in the main paper**. Additionally, we have incorporated a more challenging action **in Figure 2 of the Rebuttal PDF.** (2) Collecting data from image or video action datasets is a good idea, but we find that datasets such as Stanford 40 Actions [1], ADHA [2], and CMU MOCAP [3] tend to be designed for action recognition, meaning that most of them only contain **common actions (SD-known)** such as **running, jumping, playing guitar, chopping vegetables, etc.** Our goal is to fine-tune the stable diffusion model to generate images of **unseen actions**, so we focus on collecting images of actions that are **outside the distribution.** (3) As for how to generate images of unseen actions, we have experimented with this in our paper. These experiments are illustrated **in rows 1-4 of Figure 4 in the main paper and Figure 2 in the Rebuttal PDF.** These actions are unknown actions (as evidenced by the results shown in Figure 8 in the Appendix). The experimental results show that TwinAct is capable of generating these unseen actions effectively. Moreover, TwinAct can combine these actions with previously unknown actors, as demonstrated **in Figure 5 of the main paper (Actor OOD + Action OOD).** [1] http://vision.stanford.edu/Datasets/40actions.html [2] https://www.mvig.org/research/adha/adha.html [3] http://mocap.cs.cmu.edu/ ### **Q3. The action similarity loss** (1) The reconstruction loss is designed to **focus on the low-level details** of an image (see ADI [1], Section 1). However, it is prone to **overfitting** to the reference image (see DreamBooth [2]). To address this, we propose incorporating supplementary supervisory signals, specifically the semantic similarity of images. 
(2) CLIP, which has been pre-trained through image-text contrastive learning, has demonstrated **robust performance in numerous action recognition tasks** [3, 4]. Therefore, we utilize it as the action encoder. (3) The outcomes of the ablation experiments, as illustrated **in Figure 7(b) and row 4 of Table 2 in the main paper**, also indicate that including this non-low-level visual similarity loss enhances the generation of customized actions. [1] Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation [2] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation [3] FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition [4] ActionCLIP: A New Paradigm for Video Action Recognition ### **Q4. The high cost of action phrase filtering** The objective of our filtering process is to eliminate actions that cannot be accurately generated by SD. Consequently, we generate the filtering images with only **25 denoising steps** to ascertain the viability of action generation, compared with the typical 50 steps, which expedites the generation process to a certain extent. Generating the images for filtering required a total of **2.5 hours on an A100 GPU**. For the manual filtering process, there are some possible **ways to reduce the cost**. A multimodal large language model, such as GPT-4, can serve as the filter: the generated images can be screened by GPT-4 directly. An additional method for reducing costs is to utilize an image description model: the filtering process then entails generating corresponding text descriptions and determining whether these descriptions contain action phrases or resemble the text prompt used to generate the action images. 
--- Rebuttal Comment 1.1: Title: post-rebuttal-1 Comment: Thanks to the authors for their efforts during the rebuttal. After carefully reading the responses and comments from other reviewers, most of my concerns are well resolved, and I am willing to improve my rating. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer Q179 Comment: Thank you sincerely for your thoughtful consideration of our rebuttal and the feedback from the other reviewers. We are pleased to hear that our responses have addressed your concerns and that you are willing to improve your rating. Please feel free to reach out if any further questions arise. We truly value your feedback and support throughout this process.
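[Editor's note] The per-token PCA discussed in Q1 above can be sketched roughly as follows. This is a minimal illustration under assumed shapes (N action phrases, each encoded as k token embeddings of dimension d), not the paper's actual implementation: a separate PCA basis is fit for each token position, so a phrase's embedding is represented by per-token coefficients.

```python
import numpy as np

def per_token_pca(embeddings, n_components):
    """Fit PCA separately for each token position.

    embeddings: array of shape (N, k, d) -- N action phrases, each encoded
    as k token embeddings of dimension d (illustrative shapes).
    Returns a list of k (mean, components) pairs, one per token position.
    """
    N, k, d = embeddings.shape
    bases = []
    for t in range(k):
        X = embeddings[:, t, :]          # all t-th token embeddings, shape (N, d)
        mean = X.mean(axis=0)
        # SVD of the centered data gives the principal directions.
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        bases.append((mean, vt[:n_components]))  # shapes (d,), (n_components, d)
    return bases

def project(embedding, bases):
    """Express one phrase's (k, d) embedding as PCA coefficients per token."""
    return [comps @ (embedding[t] - mean)
            for t, (mean, comps) in enumerate(bases)]
```

Because each token position gets its own basis, a verb token and a noun token in the same phrase (e.g. "raising" and "hand") receive independent coefficient vectors, which is the flexibility argued for in the rebuttal.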
Summary: This paper introduces a text-to-action generation framework. In this framework, the authors first abstract the actions (represented by natural language phrases) with the PCA technique into a common action space. Then these high-level action features are fed into a text-to-image transformer network. In order to optimize the whole pipeline, the authors use a CLIP encoder to measure the cosine similarity between generated action images and reference action images. Strengths: 1. The visualization results seem promising. Compared to other approaches in the submission, the introduced approach obtains better qualitative results. 2. The paper is well organized and easy to follow. Weaknesses: 1. The authors claim that PCA is leveraged for feature compression when establishing the common action space. Moreover, the authors borrow this technique from facial feature representations. It is well known that PCA is a somewhat old-fashioned technique in the machine learning community. The authors are expected to provide more detailed explanations of why they chose this technique. 2. The network architecture is quite simple and lacks technical contributions. To the reviewer, the authors simply apply an off-the-shelf text-to-image framework in their pipeline. In the meantime, other components of the framework seem simple and weak in novelty. 3. The authors provide a similarity score as the evaluation metric. However, this score is computed by CLIP, which also serves as the action encoder during training. To the reviewer, the comparison seems not so fair. The consistency in IDs and actions should be more carefully evaluated to demonstrate the efficacy of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weakness for more details. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors provide no limitations in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **For Reviewer** **3NDQ** We appreciate your time and effort in reviewing our work, and we have carefully considered your comments. We will be sure to incorporate your suggestions to enhance the overall quality of the paper. We hope the following clarifications can address the reviewer's concerns: ### **Q1. More detailed explanations about PCA** We appreciate your insightful comments and concerns regarding our choice of Principal Component Analysis (PCA) to construct a common action space. While PCA is a traditional technique, it is **still widely used** [1,2] today and has several advantages that make it appropriate for our study. (1) Its robustness to linearly correlated data, computational efficiency, and ease of interpretation fit well with our needs for feature compression and action space generalization. (2) **PCA is only used as part of the data preprocessing** in our method, and can be easily replaced by more advanced methods if needed. (3) To some extent, the superior performance of the common action space constructed with the "old-fashioned" PCA algorithm in our experiments also verifies the effectiveness of our method. [1] Dual Prior Unfolding for Snapshot Compressive Imaging (CVPR 2024) [2] Makeup Prior Models for 3D Facial Makeup Estimation and Applications (CVPR 2024) ### **Q2. The network architecture** A straightforward and impactful approach is the goal we strive for in our paper. The innovations of this work can be summarized as follows: (1) **Insights into the relationship between action and actor confusion**: The domain of customized action generation remains relatively unexplored. In contrast to existing paradigms of contrastive learning, our approach introduces an action space as an effective form of inductive bias, which decouples actions and actors. 
(2) **A novel custom token inversion method**: In contrast to typical customization techniques such as Textual Inversion (TI), DreamBooth, and Custom Diffusion, our approach does not directly learn the parameters of custom tokens through inversion. Instead, we focus on learning the coefficients of action bases. (3) **Overcoming the shortcomings in the understanding of reconstruction loss**: The original diffusion loss is not well-suited to the task of learning abstract action semantics from pixel reconstruction. Therefore, we introduce a loss function that measures image similarity, and the experimental results demonstrate its efficacy in improving customized action generation. In summary, our objective is to fine-tune a text-guided image generative model. However, we propose new designs in multiple aspects, including **parameter initialization, fine-tuning manner, and training paradigm**, and provide **detailed motivation, analysis, and ablation experiments** for each component. Extensive experiments also prove that TwinAct can achieve the best performance with its simple and intuitive design. ### **Q3. The evaluation metric** Thanks for your suggestion; we actually used two CLIP models of different sizes. During training, we use **OpenAI CLIP ViT-L (246M)** as the action encoder, and **OpenCLIP ViT-bigG (1.39G)** to evaluate the similarity between generated and reference images. As you suggested, we supplemented the evaluation with another model (**ALIGN [1]**), as shown in Table 1 in the Rebuttal PDF and below. | S_action | Textual Inversion | DreamBooth | ReVersion | Custom Diffusion | P+ | ADI | Ours | | :------: | :------------: | :--------: | :-------: | :--------------: | :---: | :---: | :---: | | CLIP | 9.12 | 12.23 | 18.73 | 26.83 | 33.95 | 45.32 | **69.47** | | ALIGN [1] | 10.05 | 11.67 | 17.37 | 28.33 | 30.45 | 44.93 | **70.82** | The experimental results show that TwinAct **still achieves the best results** under the ALIGN-based evaluation. 
In addition, we also conducted a **user study (Table 1 in the main paper)** and a **4-dimensional error analysis (Figure 4 in the main paper)**. These experimental results also prove the superiority of TwinAct. [1] Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision --- Rebuttal 2: Comment: Dear Reviewer 3NDQ: Thank you for your feedback. We have carefully addressed your comments in our rebuttal and kindly ask you to review them at your earliest convenience. If you have further questions, please do not hesitate to contact us—we are committed to providing thorough responses. Thank you once again for your time and effort in reviewing our work. We look forward to your response. Best regards --- Rebuttal 3: Comment: Thanks for the authors' responses. I found myself leaning towards the responses, despite my initial hesitations. --- Rebuttal Comment 3.1: Comment: We greatly appreciate your openness and are glad that our clarifications were helpful. We are grateful for your support and feedback.
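[Editor's note] The image-to-image similarity scoring behind metrics like S_action can be sketched as follows. This is a minimal, generic sketch: cosine similarity between precomputed image feature vectors, with the CLIP/ALIGN encoders abstracted away; the feature vectors and the 0-100 scaling are illustrative assumptions, not the paper's exact protocol.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (e.g. image embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def action_score(generated_feats, reference_feats):
    """Average pairwise similarity between generated and reference image
    features, scaled to [0, 100] in the style of the S_action tables above."""
    sims = [cosine_similarity(g, r)
            for g in generated_feats
            for r in reference_feats]
    return 100.0 * sum(sims) / len(sims)
```

Swapping the encoder (CLIP vs. ALIGN) only changes how the feature vectors are produced; the scoring itself stays the same, which is why the rebuttal can report both rows of the table with a shared metric.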
Summary: Problem: Preserve the consistency of the reference action when generating images with new actors by decoupling the action and actor properly. Key Idea: The authors propose a method to disentangle actions from actors in text-guided diffusion models and generate new images that exactly replicate the action pose in the given few-shot inputs. The main contributions are: - Generating action phrases using GPT-4 and filtering 832 action phrases known to the model. These are embedded into a common action space with a set of action bases, thus containing no distractions from non-action-related information. - The action bases are then combined using a multi-layer perceptron to create new customized actions. Instead of backpropagation to determine optimal coefficients, the authors use CLIP as an action decoder to extract semantic features. - An action similarity loss is introduced between the encoded features of the reference and generated action to imitate the high-level action more accurately. - A novel benchmark consisting of 12 actions to compare with existing methods. The authors also provide extensive analysis and ablations. Strengths: - The proposed method replicates the sample action more precisely with new actors, whereas existing methods make mistakes in the details. This shows that the method has successfully isolated the action information to apply it for new image generation. - The examples in the provided figures give a clear idea of where other methods are failing while the proposed one successfully replicates the input action with a new actor. - The authors report higher scores than existing methods based on both objective metrics and a user study. - The ablation studies show the impact of each contribution and of varying the number of principal components (p). Weaknesses: - That the identity of actors is preserved is a strong claim which is not reflected in the qualitative evaluation. For example, in Figure 4 row 4: the faces of Batman and Superman look similar. 
In rows 5 and 6, the subject does not exactly look like Leonardo. In the last row, we see Spiderman in place of Superman. - In some cases, the generated image contains a mirrored action of the sample image. For example, in Figure 5 row 2, the generated action is performed with the opposite hand than the sample action in the 3rd and 4th images. - The output generation space is limited, so some complicated customized actions might not be replicable by combining the bases. - It is not clearly shown how changing the value of coefficients for the same action bases impacts the generated image. - Currently the model has pre-trained knowledge of only 25 actors and cannot be easily adapted to new ones without retraining. Technical Quality: 3 Clarity: 3 Questions for Authors: - How is the CLIP score exactly being calculated for SAction and SActor? Is it simply an image-to-text matching score? - For the human evaluation, what is the degree of knowledge of the 100 users (average or expert)? - How do you plan to scale this method for a higher number of action variations? Just generating more actions using GPT-4 and adding their embeddings to the action space might not be enough. - Is it possible to show examples of what happens if we vary the value of coefficients in the action bases and how they impact the generated image? For example, a crouching position might be a combination of action bases related to sitting and standing. So, if the coefficient of sitting is 0, the generated image should only contain standing. Again, the image might only contain sitting in the opposite case. The intermediate values of these will show various angles of crouching. - Providing a TSNE visualization of the action space might also help in clarifying. - For multiple actors, the authors show examples of images containing only two subjects. Can this be increased to more subjects? - Also, for multiple actors, how is the role in the image defined? 
For example, in the last row of Figure 4, how will we generate Leonardo carrying Obama or Barbie carrying a gorilla? -- POST rebuttal: the authors have addressed all the raised concerns; increasing my rating. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the concerns of limited output possibilities of their common action space and physical defects of TGDMs. They have not discussed any other limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **For Reviewer** **BXKD** Thank you for recognizing our paper and recommending it for acceptance. We now address the key arguments raised in the review. ### **Q1. The identity of actors** It should be noted that two factors influence the identity of the actor. (1) The first factor is **the actor identification from the reference image**. In this paper, the customized action is decoupled from the actor in the reference image via a common action space (Figure 1, second and third rows). (2) The other factor is the **confusion between actors in the new context**, as previously mentioned with regard to Batman and Superman. Even in the absence of customized actions, the stable diffusion model will nevertheless confuse identity information when generating multiple subjects [1,2]. The issue of how to eliminate confusion when generating multiple concepts represents a significant challenge, although it is not the primary focus of this paper. In order to maintain the rigor of the paper, we will modify the description of how the actor's identity is maintained. [1] Easing Concept Bleeding in Diffusion via Entity Localization and Anchoring [2] FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection ### **Q2. The generated image contains a mirrored action of the sample image** The mirrored actions arise for two reasons: (1) actions in different directions are **contained in the reference image**; (2) we applied **image augmentation** (including rotation) to the training data. We consider that the mirrored actions are, at the very least, not semantically incorrect. However, should one desire to generate actions that are entirely customized, it would be advisable to remove the aforementioned two factors. 
### **Q3. Some complicated customized actions might not be replicable** (1) In this paper, we have endeavored to conduct a comprehensive evaluation of TwinAct's capabilities, encompassing a range of scenarios. These scenarios **include detailed hand movements, full-body movements involving multiple body parts, and movements requiring the coordination of multiple individuals.** (2) We added **a complex action as shown in Figure 2** in the Rebuttal PDF, and the results demonstrate the superiority of the TwinAct method. (3) In the case of particularly intricate actions, the complexity of the customization process can be mitigated by **broadening the scope of the common action space** and **augmenting the number of reference images**. ### **Q4. Changing the value of coefficients for the same action bases** We have made adjustments to the coefficients as suggested and have generated intriguing findings, as shown **in Figure 1 in the Rebuttal PDF**. ### **Q5. Easily adapted to new actors** Although the focus of this paper is on the generation of customized actions rather than customized actors, it not only illustrates the capacity of TwinAct to synthesize SD-known actors but also presents experimental findings on a personalized actor dataset (**Figure 5 in the paper**), demonstrating that TwinAct can be easily integrated into existing methods for customizing actors. Furthermore, the customization of an action by TwinAct is a relatively expeditious process, requiring only approximately **15 minutes**. ### **Q6. Calculation of SAction and SActor** For S_action, our approach aligns with previous customization work [1,2], employing CLIP image-to-image similarity for evaluation. For S_actor, a pre-trained face encoder is employed to measure identity similarity. Furthermore, to provide a more comprehensive evaluation of the generated results, we have conducted a user study. 
[1] Multi-Concept Customization of Text-to-Image Diffusion [2] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation ### **Q7. The degree of knowledge of the 100 users** We employed 100 users from an outsourcing platform to conduct the user study. The education level of the users includes junior college and undergraduate, with a ratio of junior-college to undergraduate participants close to 2:1. We also provided detailed documentation and examples to ensure that all participants clearly understood the purpose and requirements of the task. ### **Q8. Scaling this method to a higher number of action variations** If the action space is to be further extended: (1) one feasible way is to add new action phrases through expert knowledge. For example, it is possible to expand the action space by dividing the body parts at a more detailed level of granularity, such as the eyes, the mouth, the thumbs, and so forth, in order to realize a greater number of combinations of actions. (2) Another possible way is to add customized action token embeddings generated by our method to the action space as well, which we leave as an exploration for future work. ### **Q9. TSNE visualization of the action space** We have provided visualization results of t-SNE for the action space. The objective of our work is to generate customized actions through combinations of action bases, with the ideal action bases being mutually orthogonal. As shown **in Figure 4 in the Rebuttal PDF**, the action bases do not exhibit discernible clusters, and the distribution is relatively uniform, which aligns with the nature of our action bases. ### **Q10. Increased to more subjects** Thanks for your suggestion. We additionally present the generated results of customized actions involving 3 people, as shown **in Figure 2 in the Rebuttal PDF**. These results effectively demonstrate TwinAct's capability to manage multiple actors. ### **Q11. 
How is the role in the image defined?** If the role in the image needs to be specified, it can be specified via prompts, as shown **in Figure 3 in the Rebuttal PDF.** --- Rebuttal Comment 1.1: Title: After author response Comment: Thank you for providing a very detailed response; it addressed most of the concerns. I do have a couple more questions, just for clarification: - The t-SNE shown in the rebuttal PDF does not look good; it is not clear, given such good generated samples, why the clustering would be so bad. Are the shown samples cherry-picked, or am I missing something here? - I think it will not be good to claim identity preservation, considering it is not the main focus and there is no strong evidence. - It is not clear why rotation augmentation would lead to flipped actions. - I think claiming the proposed approach can do complex actions would be a stretch; although the shown samples look reasonable, it is not clear how the spatial ordering, role, etc., can be assigned. Even using language, the shown samples seem like random assignment. --- Rebuttal 2: Title: Reviewer BXKD Comment: Thank you for your thoughtful comments. We sincerely appreciate your feedback. To address your queries, we have provided further explanations below: ## **Q1 The clustering in t-SNE** We must point out that the purpose of the action bases is to be combined into new actions. This suggests that it is desirable for these action bases to **be orthogonal to each other**. In other words, **each action base should be unique** in the action space so that more new actions can be combined. Therefore, when constructing the action space, we **filter repetitive or similar action phrases** to ensure the independence and diversity of the action bases. For orthogonal bases, t-SNE projects them to different locations (e.g., a uniform distribution) so that these basis vectors are separated from each other in low-dimensional space. 
Thus, these basis vectors can be more flexibly combined to generate new vectors (i.e., new actions). On the other hand, if the action bases formed mixed clusters in the action space, indicating that they were highly correlated with each other, such basis vectors would show poorer combining ability. Thus, the uniform spread is expected rather than a sign of poor clustering. ## **Q2 The stability of the generated samples** In addition to the qualitative visualization results, we also present **quantitative experimental results** in the paper, and several experimental results demonstrate the superiority of TwinAct. In the **user study**, users also expressed a clear preference for images generated by TwinAct. Due to the limitations of the discussion phase, we cannot add more visualization results here, but we will **add the results generated by multiple random seeds** in the final paper to further prove the stability of the generated results. Finally, our dataset and code will be open source. ## **Q3 Identity preservation** Thank you for your suggestion. We will revise the presentation about identity preservation in the final version. **Decoupling the action from the actors** may be more accurate. ## **Q4 About the rotation augmentation** We apologize for the loose wording; the "rotation" we refer to in data augmentation includes both small rotations (**RandomRotation(degrees=30)**) and flips (**RandomHorizontalFlip**). Specifically, for image augmentation, we use the following techniques: **RandomHorizontalFlip, RandomRotation, RandomAffine, and ColorJitter**, each randomly applied with a probability of 50%. ## **Q5 About role assignment** For the carry example, since it has an **explicit active-passive relationship**, we can specify it through language. Specifically, we have a mixture of "A v_carry B" and "B is v_carry A" in our dataset, so the model can learn that **"v_carry" means that the front character is always standing**, and **"is v_carry" means that the front character is always lying down**. 
So we can use "Leonardo v_carry Obama" or "Obama v_carry Leonardo" to specify who is standing in the back and who is lying in the front. However, we recognize that it is difficult to impose spatial constraints linguistically for examples that do not have an obvious active-passive relationship, such as the actions in rows 3 and 4 of Figure 1 in the main paper. We tried "in the left" and "in the right", but SD has a limited understanding of this type of spatial relationship, so the generated roles are random. A possible solution is to **add layout information** such as a bounding box for control, and our method can seamlessly integrate with methods such as ControlNet. **We will add generation results with ControlNet [1] in the final version.** [1] Adding Conditional Control to Text-to-Image Diffusion Models. Thank you for your valuable feedback on our paper; we will carefully consider your suggestions and revise the final manuscript. In addition, we would sincerely appreciate it if you could possibly raise your score. Finally, please feel free to contact us if you have any further questions. We look forward to your response. --- Rebuttal Comment 2.1: Comment: Thank you for clarifying the doubts. All my concerns have been addressed. I will increase my rating. --- Reply to Comment 2.1.1: Comment: Thank you so much for your understanding and support! We are glad to hear that all your questions have been addressed.
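The orthogonality argument in Q1 above can be illustrated with a small numpy sketch; the embedding dimension, number of bases, and combination weights are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical illustration of the orthogonal-action-base argument: 8
# (near-)orthonormal bases in a 16-dim embedding space, built via QR.
rng = np.random.default_rng(0)
dim, n_bases = 16, 8
q, _ = np.linalg.qr(rng.normal(size=(dim, n_bases)))
bases = q.T  # rows are action bases with pairwise dot products ~ 0

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Orthogonal bases have ~zero pairwise cosine similarity, so a t-SNE map
# has no cluster structure to recover: the points spread apart by design.
assert all(abs(cos(bases[i], bases[j])) < 1e-8
           for i in range(n_bases) for j in range(i + 1, n_bases))

# A "new action" as a weighted combination of two bases correlates with
# both parents and with none of the other bases.
new_action = 0.6 * bases[0] + 0.4 * bases[1]
print(round(cos(new_action, bases[0]), 3),  # ~0.832
      round(cos(new_action, bases[1]), 3),  # ~0.555
      round(cos(new_action, bases[2]), 3))  # ~0.0
```

This is why well-separated points in the t-SNE map are consistent with good combining ability rather than evidence of poor clustering.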
Rebuttal 1: Rebuttal: ### For ALL Reviews We sincerely thank all the reviewers for their thoughtful feedback. We are glad to see that most of the reviews have recognized our work: **[ALL Reviews]** **Robust and Superior Results:** We are grateful for the reviewers’ positive feedback on our experimental results, particularly our method's ability to consistently outperform existing approaches in both objective metrics and user studies. **[BXKD, Q179, 2ZgR]** **Innovative Approach:** We are delighted that most reviewers find our motivation sound and interesting and acknowledge the novelty and effectiveness of our method in action image generation. **[ALL Reviews]** **Detailed and Accessible Presentation:** We appreciate the reviewers highlighting the comprehensiveness of our explanations and the ease of understanding our paper. We will now address the key points raised in the reviews and provide detailed responses to each reviewer. We highly recommend that reviewers take the time to **check the PDF for additional visualizations of results**. Once again, we extend our thanks for the reviewers’ time and valuable insights, and we look forward to any additional feedback or questions regarding our work. Pdf: /pdf/f8858e04d1b69a8a77ce14a575dfaf472cdbfce6.pdf
NeurIPS_2024_submissions_huggingface
2024
A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
Accept (spotlight)
Summary: The authors propose an algorithm for distributed optimization of a sum of nonconvex smooth functions, with partial participation. They obtain a O(1/T) rate. Strengths: The study of convergence of algorithms adapted to federated learning is an important topic. Weaknesses: * There is no discussion of the O(1/T) rate in Theorem 1 with respect to existing results, for instance on Scaffold. I don't think the method improves on the SOTA. * In Federated Learning (FL), the bottleneck is communication. The complexity in the number of communication rounds is only one part of it; sending full vectors is not reasonable in FL, and compression must be used to dramatically reduce the number of communicated bits, which is the right criterion to study. * There is a lack of comparison to existing methods. Here the local steps are performed to find a stationary point of the augmented Lagrangian. This is equivalent to computing, inexactly, a proximity operator. This is exactly what the 5GCS algorithm does, proposed in Grudzien et al. "Can 5th generation local training methods support client sampling? yes!" AISTATS 2023. 5GCS has been extended to compression in Grudzien et al. "Improving Accelerated Federated Learning with Compression and Importance Sampling", arXiv:2306.03240. Another technique is loopless: instead of an inner loop of local steps, communication is triggered randomly. This is the technique used in Scaffnew, an important algorithm since it demonstrates that local training yields acceleration of communication in FL: Mishchenko et al. "ProxSkip: Yes! Local gradient steps provably lead to communication acceleration! Finally!" ICML 2022. Scaffnew has been interpreted as a primal-dual algorithm in Condat and Richtárik, "RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates" ICLR 2023. Scaffnew has been extended to partial participation, and also compression, as TAMUNA in Condat et al.
"TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation" arXiv:2302.09832. * In these algorithms, such as 5GCS and TAMUNA, the dual variables of the inactive clients are not modified. This is fine, because the aggregated model from the active clients does not contain any information about the inactive clients, so we cannot expect to use the new model to update these dual variables. So, I don't see what the challenge addressed in the paper is, because combining local training and partial participation can be done successfully, as these papers show. * The writing is not good. The first 3 pages contain only vague statements, without the problem being defined. The paper contains a lot of unusual or weird expressions, such as "imminent challenge" and "brings great distress for training". What does the sentence "Ozfatura et al. [2021], Xu et al. [2021] propose to adopt the global update consistently control the local training to force a high consensus." mean? * I don't see what "dual drift" actually means. If the dual variables of inactive clients remain the same for a long time, this is not a drift. The only explanation of this notion is that there are "excessive differences between the primal and dual variables", which is unclear; one cannot compare a primal and a dual variable. If you mean that the primal and dual variables become inconsistent with each other, this inconsistency should be made precise. So, the paper title should be modified. Also, it is not grammatically correct: this should be "Needs", not "Need". Typos * (5) first update: this should be $\theta_i$, not $\theta_i^t$ * Algorithm 1, line 4: this should be $P^t$, not $N^t$ Technical Quality: 3 Clarity: 2 Questions for Authors: * Theorem 1 "let rho > O(L^3) be positive with lower bound": what does it mean? * rho proportional to $L^3$ is weird; the left-hand side and right-hand side of (8) are not homogeneous.
Looking at the proof, Line 987: it is not clear at all what the condition on rho is. * line 3: "randomly select active clients set": do you mean choosing a subset of size P uniformly at random? * Why does the number P of active clients not appear in Theorem 1? This is strange. * Line 972: how do you get the factors 4 and 2? This looks like Young's inequality, but it is not clear at all. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 and W2: I don't think the method improves on the SOTA and compression must be used to reduce dramatically the number of communicated bits, which is the right criterion to study.** The main contribution of this paper is neither proposing a theory that achieves faster optimization rates than SOTA nor introducing a compression technique to reduce communication bits. Moreover, we have never claimed in this paper that our method can achieve a faster rate than the SOTA results. **The primary focus of this paper is on addressing the training instability issues encountered in experiments of existing primal-dual methods.** Our research is not directly related to these two fields. We introduce our main contribution **in the overall response above**. **W3: There is a lack of comparison to existing methods.** Thank you very much for pointing out these excellent works. We will cite these theory-related works in section 2. We also summarize the differences between these works and ours **in the overall response above**. **W4 and W6: I don't see what "dual drift" actually means.** Thank you for this concern. Since this is the main issue studied in this paper, we specifically introduce this **in the overall response above**. **W5: The writing is not good.** We are sorry if some sentences may have caused misunderstandings. We will clarify their expressions and revise them to make them more straightforward and understandable. **Q1 and Q2: Theorem 1 "let rho > O(L^3) be positive with lower bound": what does it mean? Looking at the proof, Line 987: it is not clear at all what the condition on rho is.** Thank you for this question and we will revise this ambiguous expression. **Is $\rho$ proportional to $L^3$ ?** No. $\rho$ is just a general constant. 
We follow the proof techniques in [X1] to construct the recursive relationship on the combination of the $\mathbb{E}[f(\overline{\theta}^t)]$ term and the $R_t=\frac{1}{C}\mathbb{E}\Vert\theta_i^{t+1}-\overline{\theta}^t\Vert$ term. We can construct an arithmetic sequence when the following condition is satisfied (line 987 in our paper): $$ \rho^2 - 4L^3\rho - 36L^2-4L^4>0. $$ This can be regarded as a quadratic in $\rho$; by the quadratic formula, we have: $$ \rho > \frac{4L^3 + \sqrt{16L^6 + 16L^4 + 144L^2}}{2} \ \ \text{or} \ \ \rho<\frac{4L^3 - \sqrt{16L^6 + 16L^4 + 144L^2}}{2}. $$ Since $4L^3 - \sqrt{16L^6 + 16L^4 + 144L^2}<0$ while $\rho$ must be positive, the required range is $\rho > \frac{4L^3 + \sqrt{16L^6 + 16L^4 + 144L^2}}{2}$ only. We apologize for the ambiguous expression: $\rho=\mathcal{O}(L^3)$ is different from our condition $\rho>\mathcal{O}(L^3)$. The exact form of this constant is quite cumbersome when written in the theorem, so we used $\mathcal{O}(L^3)$ as a placeholder, which caused some ambiguity. We will correct it to $\rho > \frac{4L^3 + \sqrt{16L^6 + 16L^4 + 144L^2}}{2}$. [X1] Acar D A E, Zhao Y, Matas R, et al. Federated Learning Based on Dynamic Regularization[C]//International Conference on Learning Representations. **Q3: line 3: "randomly select active clients set": do you mean to choose a subset of size P uniformly at random?** It means that before each round of local training begins, the server activates a batch of local clients to participate in the training, with each client having the same probability of being activated. After completing the training, they remain inactive until they are selected again. **Q4: Why does the number P of active clients not appear in Theorem 1?
This is strange.** Virtual updates can be understood as a form of synchronization for non-participating clients, and can also be seen as a form of virtual full-participation training. The conclusions drawn from this approach would not exceed those given in FedPD [X2], as the algorithmic proof in FedPD requires strict full participation. We present the comparison of the theoretical results in Table 7. For FedPD, even in the case of full participation, the results are not related to $C$. For FedDyn, the proofs indicate that the final convergence rate achieves $\mathcal{O}(\frac{C}{PT})$ (slower than $\mathcal{O}(\frac{1}{T})$). Our analysis shows that, with virtual updates and appropriate parameter choices to avoid excessive errors, the convergence rate under partial participation can achieve a similar complexity to full participation, thus reducing the impact of the $\frac{C}{P}$ constant term in FedDyn. [X2] Zhang X, Hong M, Dhople S, et al. Fedpd: A federated learning framework with adaptivity to non-iid data[J]. IEEE Transactions on Signal Processing, 2021, 69: 6055-6070. **Q5: Line 972: how do you get the factors 4 and the 2, this looks like Young's inequality but not clear at all.** Thank you for pointing this out. This is the noised version of the proof of Lemma 11 in [X1] (page 35 in their paper). The coefficients $4$ and $2$ in our conclusion are consistent with those in their proof: the proof in [X1] uses Young's inequality to first divide these terms into four groups, thus enlarging the coefficients by a factor of $4$; it then merges the dual updates and local update terms into a single upper bound, resulting in the constant $2$. Our conclusion additionally considers a case with local error, which introduces a constant $\epsilon$ due to the local inexact solution. We apologize for the lack of clarity here. **We will directly copy the lemma they used and cite it as their Lemma 11 in the next version.
We hope this resolves your concerns.** [X1] Acar D A E, Zhao Y, Matas R, et al. Federated Learning Based on Dynamic Regularization[C]//International Conference on Learning Representations. Thanks for reading our rebuttal and we are happy to continue discussions if there are some other concerns unsolved. --- Rebuttal Comment 1.1: Comment: Papers with strong theoretical contributions and papers with heuristics showing strong empirical performance are both valuable. Also, nonconvex optimization is more difficult than convex optimization. I understand that you propose a heuristic technique to mitigate the negative effects of partial participation in nonconvex federated learning. You motivate your technique by an intuition based on correcting the "dual drift". My negative evaluation is based on the fact that there is a theoretically-grounded literature for the convex case, and it seems to contradict your intuition that it is bad if the dual variables of idle clients remain unchanged for a long time. In other words, your technique should be shown to work in the "simple" convex case, which is not the case. How do we know that A-FedPD will not diverge in even simple quadratic synthetic experiments? Showing improved generalization efficiency in some practical examples is fine, but this is not enough to make progress in our understanding of primal-dual optimization methods for federated learning, as the title claims, in my opinion. At this time, I am keeping my score. --- Reply to Comment 1.1.1: Title: Rebuttal from authors Comment: Thank you for your positive response and we are honored and happy to continue the discussion with you. Regarding your comments above, we have listed and addressed the two concerns you raised below. ## About the comment "there is a theoretically-grounded literature for the convex case seems to contradict your intuition". 
In our overall response, we have made it very clear that even for non-convex objectives, the classical federated primal-dual method FedADMM with partial participation has been theoretically proven to converge [X1]. However, despite the theoretical guarantees, our experiments have validated that FedADMM still faces extreme instability and even divergence during partial-participation training (our Figure 1). Obviously, **pure theoretical analysis cannot provide an absolute guarantee** of an algorithm's feasibility. Therefore, it is not appropriate to dismiss the existence of the problem simply because convergence has been proven in theory. In fact, we are not the first to observe such non-convergence phenomena in experiments; previous classical works [X2, X3, X4] have repeatedly demonstrated the existence of this phenomenon experimentally. [X4] also indicates that the update mismatch between primal and dual variables leads to a "drift", which is related to the "dual drift" we point out in this paper. [X1] Wang H, Marella S, Anderson J. Fedadmm: A federated primal-dual algorithm allowing partial participation[C]//2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022: 287-294. [X2] Xu J, Wang S, Wang L, et al. Fedcm: Federated learning with client-level momentum[J]. arXiv preprint arXiv:2106.10874, 2021. [X3] Baumgart G A, Shin J, Payani A, et al. Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study[J]. arXiv preprint arXiv:2403.17287, 2024. [X4] Kang H, Kim M, Lee B, et al. FedAND: Federated Learning Exploiting Consensus ADMM by Nulling Drift[J]. IEEE Transactions on Industrial Informatics, 2024. ## About the comment "Your technique should be shown to work in the "simple" convex case". We cannot agree with this comment. As you mentioned above, nonconvex optimization is more difficult than convex optimization.
In fact, **a large number of papers on federated primal-dual methods conduct their experiments on non-convex problems [X1, X2, X3, X4, X5]**. More importantly, **the "dual drift" issue was discovered in non-convex experiments** by several previous works (mentioned in the answer above), and our technique is aimed at addressing this problem. We have not identified this issue in convex experiments; therefore, we do not think it is necessary to validate this work on convex objectives. If the "dual drift" issue indeed does not exist in convex experiments, then our virtual update technique is not necessary for training convex models. However, since many studies have identified this problem in non-convex experiments, we have validated that our method effectively addresses this issue in non-convex experiments. [X1] Zhang X, Hong M, Dhople S, et al. Fedpd: A federated learning framework with adaptivity to non-iid data[J]. IEEE Transactions on Signal Processing, 2021, 69: 6055-6070. [X2] Acar D A E, Zhao Y, Matas R, et al. Federated Learning Based on Dynamic Regularization[C]//International Conference on Learning Representations. [X3] Sun Y, Shen L, Huang T, et al. FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy[C]//The Eleventh International Conference on Learning Representations. [X4] Kang H, Kim M, Lee B, et al. FedAND: Federated Learning Exploiting Consensus ADMM by Nulling Drift[J]. IEEE Transactions on Industrial Informatics, 2024. [X5] Zhang Y, Tang D. A differential privacy federated learning framework for accelerating convergence[C]//2022 18th International Conference on Computational Intelligence and Security (CIS). IEEE, 2022: 122-126. We noted that the reviewer shows concerns about the naming: one is with the term "dual drift", and the other is with the title. We believe these are very easy to resolve, and we would greatly appreciate any suggestions the reviewer might have for the names.
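The condition on $\rho$ derived in A1 of the rebuttal above can be sanity-checked numerically; the values of $L$ below are arbitrary test points, not values from the paper:

```python
import math

# Numerical sanity check of the condition from the rebuttal:
#   rho**2 - 4*L**3*rho - 36*L**2 - 4*L**4 > 0,
# which holds exactly when rho exceeds the positive root below.

def rho_lower_bound(L):
    """Positive root of rho^2 - 4 L^3 rho - 36 L^2 - 4 L^4 = 0."""
    return (4 * L**3 + math.sqrt(16 * L**6 + 16 * L**4 + 144 * L**2)) / 2

def condition(rho, L):
    return rho**2 - 4 * L**3 * rho - 36 * L**2 - 4 * L**4

for L in (0.5, 1.0, 2.0, 10.0):
    r = rho_lower_bound(L)
    assert abs(condition(r, L)) < 1e-6 * max(1.0, r * r)  # r is a root
    assert condition(1.01 * r, L) > 0                     # above the root: holds
    assert condition(0.99 * r, L) < 0                     # below the root: fails
print("condition on rho verified")
```

This confirms that the stated closed-form lower bound is exactly the threshold above which the quadratic condition is satisfied.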
Summary: This paper investigates the issue of dual drift caused by the mismatch between primal and dual variables when using partially participated training in federated learning (FL). It proposes a novel method, A-FedPD, which employs virtual dual updates to mitigate these negative impacts. Comprehensive theoretical analysis and extensive experiments are presented to verify the effectiveness of the proposed method. Strengths: 1. The experiment in Figure 1 vividly illustrates the source of defects in primal-dual methods under partial participation. 2. The study presented in this paper is novel. Federated primal-dual methods have long exhibited varying degrees of instability in experiments. The analysis based on dual drift provided in this paper is robust and useful. 3. The quadratic term in the primal-dual method is used in this paper to obtain a reduced form of the stability bound in the iterative process, ultimately achieving a constant bound in Equation (10). This demonstrates that the proposed method is superior to the federated averaging method in terms of generalization under the same training process. This conclusion is broad and can be extended to other primal-dual methods. 4. The experiments were solid and comprehensive in their analysis. In terms of both communication efficiency and wall-clock time testing, the A-FedPD method shows strong performance. Weaknesses: 1. What is the variant A-FedPDSAM? I didn’t see an introduction to this algorithm in the text; please remind me if I missed this part. 2. The performance of the algorithm is improved by trading space for time. During a global update, storing the dual variables requires considerable resources. Although federated learning primarily considers communication bottlenecks, this still increases server-side consumption. 3. I noticed that the FedDyn column in the main table indicates failure. Were all hyperparameters tested in the experiment?
There should be an additional discussion to clarify whether this failure is a special case or not. Technical Quality: 3 Clarity: 3 Questions for Authors: The same with the weaknesses part. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The same with the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: What is the variant A-FedPDSAM? I didn’t see an introduction to this algorithm in the text; please remind me if I missed this part.** Due to space constraints in the main text, we noted at the bottom of page seven (in the first paragraph of the Experiments section) that the method A-FedPDSAM is introduced in Appendix A.1. A-FedPDSAM is a straightforward but useful variant that improves generalization in the experiments. The vanilla A-FedPD adopts SGD as the local optimizer to solve the local Lagrangian objective, while A-FedPDSAM uses SAM [X1] as the local optimizer; the design of SAM provides better generalization performance. We list the main differences in the following (here $\rho_{\text{sam}}$ denotes the SAM perturbation radius, distinct from the penalty coefficient $\rho$). Local SGD optimizer: $$ \theta_{i,k+1}^t=\theta_{i,k}^t - \eta^t(\nabla f_i(\theta_{i,k}^t,B) + \lambda_i + \rho(\theta_{i,k}^t - \theta^t)). $$ Local SAM optimizer: $$ \theta_{i,k+1}^t=\theta_{i,k}^t - \eta^t(\nabla f_i(\theta_{i,k}^t + \rho_{\text{sam}}\frac{\nabla f_i(\theta_{i,k}^t,B)}{\Vert\nabla f_i(\theta_{i,k}^t,B)\Vert},B) + \lambda_i + \rho(\theta_{i,k}^t - \theta^t)). $$ Other operations are essentially consistent with the A-FedPD method. **W2: The performance of the algorithm is improved by trading space for time. During a global update, storing the dual variables requires considerable resources. Although federated learning primarily considers communication bottlenecks, this still increases server-side consumption.** Thank you very much for pointing this out. As we discussed in the limitations, although our proposed method achieves state-of-the-art (SOTA) performance, it could be further improved in scenarios with a very large client scale, such as cross-device federated settings. Our paper primarily reviews and summarizes the issue of experimental non-convergence caused by increased model scale and more complex datasets in FedADMM-like methods.
We then organize this issue, from the perspective of the symmetry between the primal and dual problems, as dual drift. Virtual dual updates are a simple and effective solution, and the storage cost is acceptable in cross-silo federated scenarios. We also look forward to researching more efficient techniques for cross-device federated scenarios in the future. **W3: I noticed that the FedDyn column in the main table indicates failure. Were all hyperparameters tested in the experiment? There should be an additional discussion to clarify whether this failure is a special case or not.** We conducted the full hyperparameter search shown in Table 3 (page 21) and tried almost all possible hyperparameter combinations. In fact, we observed that while FedDyn performs reasonably well in the CIFAR-10 experiments, it is very sensitive to the learning rate decay, requiring a very slow decay schedule. On CIFAR-100 (where the model structure and tasks become more complex), its stability drops sharply, requiring continuous reduction of the penalty term in the Lagrangian function to maintain stability. This observation inspired us to consider whether the asymmetry between the primal and dual objectives is causing this instability. When the penalty coefficient decreases to nearly zero, the dual variables almost stop updating and remain the initial zero vector, causing the entire Lagrangian function to nearly degenerate into the original function $f_i(\theta)$. This phenomenon can also be observed in the experiments reported in [X1], and we encountered similar issues in the FedADMM experiments. This is not a coincidence or an anomaly, but a genuine issue. [X1] Xu J, Wang S, Wang L, et al. Fedcm: Federated learning with client-level momentum[J]. arXiv preprint arXiv:2106.10874, 2021. Thank you again for reading our rebuttal. We are also honored to have this discussion with you, which has made our submission more complete.
If you have any further questions or issues to address, we would be very happy to continue discussing them with you. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have no additional questions concerning this paper. --- Reply to Comment 1.1.1: Title: Thanks for the review Comment: Thank you again for the time and effort paid for reviewing our submission. We will revise the corresponding content based on the summary in the rebuttal.
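The two local update rules written out in the W1 answer above can be sketched on a toy quadratic loss; the loss, step sizes, and `radius` (standing in for the SAM perturbation radius) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def local_sgd_step(theta, grad_f, lam, theta_global, eta=0.1, rho=0.5):
    """A-FedPD-style local step: SGD on the augmented Lagrangian."""
    return theta - eta * (grad_f(theta) + lam + rho * (theta - theta_global))

def local_sam_step(theta, grad_f, lam, theta_global, eta=0.1, rho=0.5, radius=0.05):
    """A-FedPDSAM-style local step: the loss gradient is evaluated at the
    SAM ascent point theta + radius * g / ||g||."""
    g = grad_f(theta)
    theta_adv = theta + radius * g / (np.linalg.norm(g) + 1e-12)
    return theta - eta * (grad_f(theta_adv) + lam + rho * (theta - theta_global))

# Toy quadratic f(theta) = 0.5 * ||theta - target||^2, so grad_f = theta - target.
target = np.array([1.0, -2.0])
grad_f = lambda th: th - target

theta_global = np.zeros(2)
lam = np.zeros(2)
theta = np.array([3.0, 3.0])
for _ in range(200):
    theta = local_sgd_step(theta, grad_f, lam, theta_global)

# The proximal term pulls the fixed point toward theta_global:
# grad_f(theta) + rho * theta = 0  =>  theta = target / (1 + rho).
print(theta)  # -> approx [0.6667, -1.3333]
```

With `rho = 0` and `lam = 0`, both rules reduce to plain (SAM-)SGD on the local loss, i.e. a FedAvg-style local step without the proximal pull.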
Summary: A primal-dual-based federated learning algorithm (A-FedPD) is proposed to mitigate the drift of local dual variables. In federated learning with partial participation training, the local dual variables in inactive clients can drift. To mitigate this issue, the proposed A-FedPD should be effective, since the local dual variables in unparticipated clients are aligned with the global average models. Through extensive experiments, including nonconvex neural network models, the effectiveness of A-FedPD was empirically investigated. Strengths: (S1) Simple but effective method: It is well known that the local dual variables in unparticipated clients can drift in primal-dual-based federated learning algorithms. The idea of modifying the unparticipated local dual variables to bring them closer to the global model is simple but effective. (S2) The proposed Algorithm 1 has been theoretically analyzed under certain assumptions, demonstrating a near-optimal convergence rate. One point of dissatisfaction is that the comparison of convergence rates with several primal-dual-based methods that are not affected by dual drift appears only in Table 7. Such important results should be included in the main paper. (S3) The effectiveness of the proposed method has been validated through several image classification benchmark tests using neural network models. Its robustness against various stresses (participation ratio, local interval, data heterogeneity) has also been empirically investigated. Weaknesses: (W1) In the experiments of Sec. 5.2, there seems to be no significant difference between the proposed method A-FEDPD and the comparison method FEDSPEED. The reviewer has serious concerns about this point. Without introducing SAM, can the superiority not be empirically demonstrated? (W2) In addition, the consideration of comparison methods in Sec. 5.2 seems insufficient. FEDSPEED performs better, but why is this?
It also compares with primal methods such as SCAFFOLD and FedSAM, but why is the proposed method better than these methods? (W3) Some relevant literature might be missing. [a] R. Pathak et al., “FedSplit: An algorithmic framework for fast federated optimization,” NeurIPS 2020. [b] G. Zhang et al., “Revisiting the Primal-Dual Method of Multipliers for Optimisation Over Centralised Networks”, IEEE Transactions on Signal and Information Processing, pp. 228-243, 2022. Technical Quality: 3 Clarity: 3 Questions for Authors: (Q1) Could you provide a proof sketch for the theoretical analysis in Sec. 4? I would particularly like to know about the novel points in comparison with analyses in prior studies, such as FedPD. (Q2) On line 309, it is written that “we freeze most of the hyperparameters for all methods”; however, these were searched over the list written in the Grid Search column of Table 3, right? I think it would be better to clearly state in the main paper that a hyperparameter search was conducted (see Appendix). (Q3) In Figure 2(b), it seems that accuracy is lower when the local interval is short. Does the total number of updates differ according to the local interval? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are noted in the Appendix. Social impact is not mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: The reviewer has serious concerns about this point. Without introducing SAM, can the superiority not be empirically demonstrated?** The FedSpeed method essentially uses **a variant of the local SAM optimizer.** To illustrate this, we can recall its implementation: $$ g_{i,k,1}^t = \nabla F_i(x_{i,k}^t,B), $$ $$ y_{i,k}^t = x_{i,k}^t + \rho g_{i,k,1}^t, $$ $$ g_{i,k,2}^t = \nabla F_i(y_{i,k}^t, B), $$ $$ g_{i,k}^t = (1-\alpha)g_{i,k,1}^t + \alpha g_{i,k,2}^t. $$ The ascent learning rate is set as $\rho=\frac{\rho_0}{\Vert\nabla F_i\Vert}$. Then we have: $$ g_{i,k}^t = (1-\alpha)\nabla F_i(x_{i,k}^t,B) + \alpha\nabla F_i(x_{i,k}^t + \rho_0\frac{\nabla F_i(x_{i,k}^t,B)}{\Vert\nabla F_i(x_{i,k}^t,B)\Vert}, B). $$ **This is a combination of the SGD gradient and the SAM gradient; the second term is exactly the SAM gradient.** Therefore, for a fair comparison from the perspective of local optimizers, the counterpart of FedPD/FedADMM/FedDyn is A-FedPD, while the counterpart of FedSpeed is A-FedPDSAM (both adopt a SAM-based optimizer on the local client). As observed by the reviewer, our method makes the performance of SGD-based optimizers approach that of SAM-based optimizers, which is a significant improvement. **W2: FEDSPEED performs better, but why is this? It also compares with primal methods such as SCAFFOLD and FedSAM, but why is the proposed method better than these methods?** Thank you for pointing this out. In W1, we explained why the FedSpeed method is superior to A-FedPD: its local optimizer is essentially a variant of SAM. For a fair comparison, it should be compared with the variant A-FedPDSAM.
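The algebraic identity invoked above — with ascent rate $\rho=\rho_0/\Vert\nabla F_i\Vert$, the FedSpeed direction interpolates between the SGD and SAM gradients — can be checked numerically on a toy gradient (the gradient and constants are illustrative assumptions):

```python
import numpy as np

def fedspeed_gradient(grad_F, x, alpha=0.5, rho0=0.05):
    """FedSpeed direction as written in the rebuttal:
    (1 - alpha) * grad_F(x) + alpha * grad_F(x + rho0 * g1 / ||g1||)."""
    g1 = grad_F(x)
    y = x + rho0 * g1 / np.linalg.norm(g1)   # normalized ascent step
    return (1 - alpha) * g1 + alpha * grad_F(y)

def sam_gradient(grad_F, x, rho0=0.05):
    """Plain SAM gradient: grad_F evaluated at the normalized ascent point."""
    g1 = grad_F(x)
    return grad_F(x + rho0 * g1 / np.linalg.norm(g1))

grad_F = lambda x: 2.0 * x                   # gradient of F(x) = ||x||^2
x = np.array([1.0, -2.0, 0.5])

# alpha = 1 recovers the pure SAM gradient; alpha = 0 recovers plain SGD.
assert np.allclose(fedspeed_gradient(grad_F, x, alpha=1.0), sam_gradient(grad_F, x))
assert np.allclose(fedspeed_gradient(grad_F, x, alpha=0.0), grad_F(x))
print("FedSpeed direction = (1 - alpha) * SGD + alpha * SAM")
```

This makes the pairing argument concrete: FedSpeed differs from a plain primal-dual local step only through the SAM component of its gradient.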
As for the second question, the FedDyn paper [X1] already showed that, due to the correction of the dual variable, **local models converge to the global model.** This means that primal-dual methods adjust the local heterogeneous objectives to ultimately align with the global objective. Our proposed method is primarily designed to address the training instability issues and some inherent shortcomings of primal-dual methods. For clearer comparison, we classify the methods and list some results from Table 3. | | type | local optimizer | Acc in Dir-1.0 | Acc in Dir-0.1 | | :----: | :----: | :----: | :----: | :----: | | SCAFFOLD | primal | SGD | 83.61 | 78.66 | | FedDyn | primal-dual | SGD | 84.20 | 79.51 | | A-FedPD | primal-dual | SGD | 84.94 | 80.28 | | FedSpeed | primal-dual | SAM | 85.11 | 80.86 | | A-FedPDSAM | primal-dual | SAM | 85.90 | 81.96 | [X1] Acar D A E, Zhao Y, Navarro R M, et al. Federated learning based on dynamic regularization[J]. arXiv preprint arXiv:2111.04263, 2021. **W3: Some relevant literature might be missing.** Thank you for pointing out these two important works. We will add them to the Related Work. **Q1: Could you provide a proof sketch for the theoretical analysis in Sec. 4? I would particularly like to know about the novel points in comparison with analyses in prior studies, such as FedPD.** (1) For optimization: [X2] provides classical proofs by independently bounding each term in the smoothness inequality, ultimately forming the final convergence conclusion. However, due to the complexity of the intermediate variables and relationships that need to be analyzed, the proof introduces a large number of constants. FedDyn then employs a more refined technique [X1]: instead of independently bounding each term, the proof in FedDyn combines the global update gaps and the local update gaps with a specific coefficient, yielding an arithmetic sequence.
However, proofs in FedDyn must rely on the assumption of a local exact solution, i.e. $\nabla L_i=0$. Clearly, this is overly idealized, as optimization will always introduce some errors. Therefore, to make the proof process more general, we adapt the assumption of gradient error from [X2], i.e. $\nabla L_i=e$ where $e$ is treated as a bounded error. Our proof extends the simplified version of FedDyn to a more general version that allows for local inexact solutions and re-evaluates the necessary conditions for constructing the combined terms. (2) For generalization: **To our knowledge, current works have not provided generalization analysis for federated primal-dual methods.** We provide error bounds based on stability analysis and prove that the generalization error bound of our proposed A-FedPD is lower than that of the FedAvg method. The main property is reflected through Eq.(10) in our paper. The local updates in primal-dual methods can be decayed by a coefficient less than 1 ($1-\eta^t\rho$), which implies fewer stability errors compared with the FedAvg method. [X1] Acar D A E, Zhao Y, Navarro R M, et al. Federated learning based on dynamic regularization[J]. arXiv preprint arXiv:2111.04263, 2021. [X2] Zhang X, Hong M, Dhople S, et al. Fedpd: A federated learning framework with adaptivity to non-iid data[J]. IEEE Transactions on Signal Processing, 2021, 69: 6055-6070. **Q2: I think it would be better to clearly write in the main paper that a hyperparameter search was conducted (see Appendix).** Thank you for pointing this out. We will add the sentence "Detailed hyperparameter search was conducted (see Appendix A.2)" in Section 5.1. **Q3: In Figure 2(b), it seems that accuracy is lower when the local interval is short. Does the total number of updates differ according to the local interval?** Thank you for this question. In the experiments of Figure 2 (b), we fix the communication rounds at $800$ and vary the local intervals over $[10,20,50,100,200]$.
Since the learning rate is decayed at each communication round, we must ensure that the learning rate is reduced to the same final value after all experiments finish. We will further clarify the experimental setup. Thank you again for reading our rebuttal. If you have any further questions, we would be very happy to continue the discussion with you. --- Rebuttal Comment 1.1: Comment: W1: I clearly understand your explanation. However, I couldn't extract this claim from the original version. Could you improve the experimental section to explain this better? W2: Including this table that explains the relationships between the methods in the Appendix would help enhance the readers' understanding. Q1: Adding this to the Appendix would make it easier to understand the contributions in the theoretical analysis. I do not have any further questions. I keep my score. --- Reply to Comment 1.1.1: Title: Thanks for the review Comment: We are very grateful for your review of our rebuttal and are pleased that all your concerns have been addressed. We greatly appreciate your suggestions. We will add a horizontal table above Table 2 in our paper to categorize each method, and add the explanations from the rebuttal above regarding optimization and generalization techniques in the appendix. Thank you again for your recognition of our work.
Summary: This paper studies the primal-dual-based FL algorithm with partial client participation. The inactiveness of the local clients causes both local primal and dual variables to drift from their expected value and slows down FedPD's convergence. This paper provides a fix to the FedPD algorithm in the partial participation setting by moving the dual update to the server side and allowing the server to store the dual variables. The theoretical result shows that A-FedPD can achieve the same convergence rate as FedPD ($O(1/T)$), and the error bound is better than the primal-only FL algorithm. Numerical results also show that the proposed algorithm achieves the best accuracy in different settings. Strengths: 1. Soundness: the theoretical analysis provides a clear convergence and generalization result for the proposed algorithm. Numerical results also show that the proposed algorithm outperforms the existing algorithms on different models and data distributions. 2. Clarity: the paper is clearly written with adequate reference to prior works and clear notations and results. Weaknesses: 1. Memory inefficiency: The algorithm requires saving the local dual variable of all clients. When C is large, this might cause a large memory cost on the server, especially in the FL setting. This might restrict the proposed algorithm's use case. This is discussed in the limitations. Technical Quality: 3 Clarity: 4 Questions for Authors: How would A-FedPD (and other algorithms) perform when the communication patterns of different clients are different, i.e., $\mathbb{E}_P[\theta_i] \neq \bar{\theta}$? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: The algorithm requires saving the local dual variable of all clients. When C is large, this might cause a large memory cost on the server, especially in the FL setting. This might restrict the proposed algorithm's use case. This is discussed in the limitations.** Thank you very much for pointing this out. As we discussed in the limitations, although our proposed method achieves state-of-the-art (SOTA) performance, it could be further improved in scenarios with a very large client scale, such as cross-device federated settings. Our paper primarily reviews and summarizes the training instability caused by increased model scale and more complex datasets in FedADMM-like methods. We then characterize this issue, which arises between the primal and dual problems, as dual drift. Virtual dual updates are a simple and effective solution, and the storage cost is acceptable in cross-silo federated scenarios. We also look forward to researching more efficient techniques for cross-device federated scenarios in the future. **Q1: How would A-FedPD (and other algorithms) perform when the communication patterns of different clients are different, i.e., $\mathbb{E}_P[\theta_i]\neq\overline{\theta}$?** Thank you very much for raising this interesting question. First, we clarify that the vanilla definition of $\overline{\theta}$ is not the absolute average $\frac{1}{P}\sum \theta_i$, but a weighted average $\sum p_i \theta_i$ where $p_i$ corresponds to the importance of client $i$. In most works, the global objective is set as $F(w)=\frac{1}{m}\sum f_i(\theta)$, so the global consistency we are solving for simplifies to the absolute average form.
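The distinction above between the importance-weighted consensus $\sum p_i\theta_i$ and the absolute average $\frac{1}{P}\sum\theta_i$ can be made concrete with a toy example (the client parameters and importances below are our own illustrative values):

```python
import numpy as np

# Two clients whose sampling importances p_i are non-uniform: the weighted
# consensus sum(p_i * theta_i) then differs from the absolute average.
thetas = np.array([[1.0, 0.0],
                   [0.0, 2.0]])
p = np.array([0.8, 0.2])            # non-uniform importances
weighted = p @ thetas               # sum(p_i * theta_i) -> [0.8, 0.4]
uniform = thetas.mean(axis=0)       # (1/P) * sum(theta_i) -> [0.5, 1.0]
```

Only when the $p_i$ are uniform do the two consensus points coincide, which is the gap between the true and constructed objectives discussed in the reply.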
Therefore, based on your question, the corresponding issue as we understand it is: (1) the global objective is still $F(w)=\frac{1}{m}\sum f_i(\theta)$, so the global consensus is still the absolute average form $\overline{\theta}$; (2) due to certain algorithm designs, such as importance sampling or client dropout, the sampling probability $p_i$ for each client is no longer consistent with the original global objective. This generates a new gap between the true global objective and the constructed objective. This discussion requires some supporting assumptions, which we will briefly discuss here. From the perspective of optimization, the constructed objective should play a similar role to the vanilla global objective, which can guarantee that the training process converges. Under this condition, the gap between $\mathbb{E}\Vert\nabla F(\mathbb{E}_P[\theta_i])\Vert$ and $\mathbb{E}\Vert\nabla F(\overline{\theta})\Vert$ must be upper bounded and can be diminished to zero. We refer to this as "different but bounded". We have not thoroughly derived the impact that the distribution $P$ might have on the optimization results. However, given that the optimization still converges, we can assume that the gap remains bounded during training. Thus, adopting $\overline{\theta}$ is still applicable, but with an additional error term in the final convergence rate. There is another scenario that corresponds to this, namely the asynchronous setup. This situation appears to be more complex, but we believe we can draw on ideas from asynchronous distributed ADMM methods [X1, X2]. Related techniques like "partial barrier" and "bounded delay" can be introduced to mitigate the risk of unstable updates. The core idea is to specify the error between the global objective and the global consensus, and then consider further optimization of this error.
To further answer this question in detail, we may need more conditions and problem definitions for a more specific analysis. We hope that our current response addresses your concerns. [X1] Zhang R, Kwok J. Asynchronous distributed ADMM for consensus optimization[C]//International conference on machine learning. PMLR, 2014: 1701-1709. [X2] Hong M. A distributed, asynchronous, and incremental algorithm for nonconvex optimization: An ADMM approach[J]. IEEE Transactions on Control of Network Systems, 2017, 5(3): 935-945. Thank you again for reading our rebuttal. If you have any further questions, we would be very happy to continue the discussion, as it will help further refine our submission. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I don't have any further questions. --- Reply to Comment 1.1.1: Title: Thanks for the review Comment: Thank you again for the time and effort paid for reviewing our submission.
Rebuttal 1: Rebuttal: **We thank the four reviewers for their valuable time and effort in reviewing our submission.** Reviewers 6Q4p, b9A4, and YYQB provided positive feedback with ratings of 7, 7, and 5, respectively. After reading all the review comments, we noticed that reviewer c2hH raised some concerns regarding the main contributions and research objectives of this paper, such as the 'definition of dual drift' and 'whether it should be named dual drift'. Given that this is the main issue studied in this paper, we provide a comprehensive summary in this overall response to clarify misunderstandings. **A. Main Contributions** (1) We want to emphasize that the target of this paper is neither to propose a better theoretical framework to surpass the SOTA convergence rate nor to explore how federated learning can achieve lower communication bits. This paper investigates the training instability phenomenon widely observed in experiments with some classic federated primal-dual methods (we name it the "dual drift" issue) and proposes a simple and effective solution to avoid this issue. (2) The optimization proof is provided to demonstrate that our proposed technique does not cause the primal-dual methods to diverge. Meanwhile, it can significantly improve the test performance in experiments. (3) To the best of our knowledge, this paper is the first to provide a generalization error bound based on stability analysis for federated primal-dual methods. We explain the superior stability over the classical FedAvg method from the perspective of local training stability. **B. Dual Drift** We appreciate reviewer c2hH for listing a series of works to demonstrate that partial participation in primal-dual methods is theoretically feasible. We have never claimed in this paper that they are theoretically infeasible.
In fact, the FedADMM method, which exhibits the training instability shown in our Figure 1, has also been theoretically proven to support partial participation [X1]. **However, we have verified that this method encounters significant challenges in experiments, with severe fluctuations as the participation rate decreases.** We believe that theoretical proofs are one assurance of an algorithm's validity, but they are not everything. In practice, various detailed issues still need to be effectively addressed. Based on extensive experiments and analysis, we attribute the cause of this instability phenomenon as follows. Methods like ADMM require alternating updates of the primal and dual variables to ensure that the Lagrangian function eventually converges to the original objective. Under a low participation ratio, a client waits a long time to be activated. For the local client, it is as if the primal variables are continuously being updated while the dual variables remain stagnant. When the client is suddenly activated, the lagging dual variables leave the current primal variables in a very poor initialization state for the local Lagrangian function. Therefore, we have named this phenomenon "dual drift". Our solution is to perform virtual dual updates on the dual variables, based on estimated primal variables, for those unparticipated clients, ensuring they do not lag too far behind. [X1] Wang H, Marella S, Anderson J. Fedadmm: A federated primal-dual algorithm allowing partial participation[C]//2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022: 287-294. **C. A series of works [X1-X5] mentioned by Reviewer c2hH** We are very grateful to reviewer c2hH for introducing us to this series of excellent theoretical works [X1-X5]. We will add a section on the theoretical advancements of federated primal-dual methods in the related work section to introduce them.
However, as mentioned above, the goal of this paper is not to propose an optimization convergence analysis that surpasses existing methods, but rather to address the currently widespread training instability phenomenon observed in experiments. **We need to emphasize that this phenomenon is not easily observed on small datasets and small models.** For example, FedDyn may perform normally with the ResNet-18 model on the CIFAR-10 dataset but shows unstable training on CIFAR-100. When the model size is further increased, e.g. to Transformers, this phenomenon becomes even more pronounced. [X1-X5] primarily focus on the progress of optimization proofs, with experiments mostly centered around tiny logistic regression models and smaller LibSVM datasets. Additionally, the experimental scales they used are typically smaller than 50. The focus of these works' analysis scenarios and experimental setups differs significantly from ours, which makes it difficult for us to make a unified comparison. [X1] Grudzień M, Malinovsky G, Richtárik P. Can 5th generation local training methods support client sampling? yes![C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2023: 1055-1092. [X2] Grudzień M, Malinovsky G, Richtárik P. Improving accelerated federated learning with compression and importance sampling[J]. arXiv preprint arXiv:2306.03240, 2023. [X3] Mishchenko K, Malinovsky G, Stich S, et al. Proxskip: Yes! local gradient steps provably lead to communication acceleration! finally![C]//International Conference on Machine Learning. PMLR, 2022: 15750-15769. [X4] Condat L, Richtárik P. RandProx: Primal-dual optimization algorithms with randomized proximal updates[J]. arXiv preprint arXiv:2207.12891, 2022. [X5] Condat L P, Malinovsky G, Richtárik P. Tamuna: Accelerated federated learning with local training and partial participation[J]. 2023. We thank all reviewers for reading our rebuttal.
We hope this summary clarifies the main research objectives and contributions of this paper. Responses to other concerns are provided in separate replies. If there are any unresolved concerns, we are more than happy to engage in further discussion with the reviewers.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
The Price of Implicit Bias in Adversarially Robust Generalization
Accept (poster)
Summary: This paper studies the generalization gap of robust empirical risk minimization for linear regression. The paper shows that the choice of optimization algorithm or architecture affects the generalization gap of the trained linear model. In particular, a steepest descent algorithm w.r.t. $l_p$ norm finds the minimum $l_p$ norm interpolating solution on linearly separable data; a reparametrization of the linear model into a two-layer diagonal linear network has a bias toward minimum $l_1$ norm solution. Strengths: 1. Connections between implicit bias (of optimization algorithm and of architecture) and adversarial robustness. 2. Interesting discussion on the optimal regularization for robust ERM w.r.t. $l_\infty$. 3. Experiments are thorough and well presented. Weaknesses: 1. The message in this paper is delivered but the supporting argument is rather incomplete: * Section 2.1 highlights the need for an optimal regularization given specific data assumption and threat model, but the discussion is primarily for $l_\infty$ threat model. * Section 3.2 discusses how a diagonal linear network has a bias toward minimum $l_1$ solution, but the connection is not formal (the author acknowledges it in Remark 3.9). 2. The technical contribution is minor in my opinion. The ERM counterparts of the results in Section 3 are well-known and extensively studied, and extending them to robust ERM is more or less straightforward. Minor comments: 1. Corollary 3.5 refers to equation (8), which has $p^*$ as the conjugate of $p$, yet the corollary itself contains another $p^*$, it is confusing whether they are the same $p^*$. 2. Referring steepest descent w.r.t. $l_1$ as "coordinate descent" is confusing. Generally, coordinate descent chooses the coordinate to be updated in a cyclic or random order. I understand there is a variation of CD that picks the coordinate with the largest gradient component, but plainly using CD may let the reader think of the more standard CD algorithm. 
Technical Quality: 3 Clarity: 3 Questions for Authors: See Weakness Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper and for your positive evaluation of our work. First, let us correct a small mistake in your summary of our work (since reviews might become public in the future and readers might get confused): > This paper studies the generalization gap of robust empirical risk minimization for linear regression. We do not study regression problems, but classification with linear (as a function of the input) models. Experiments contain results with non-linear models. We address your questions: > Section 2.1 highlights the need for an optimal regularization given specific data assumption and threat model, but the discussion is primarily for $\ell_\infty$ threat model. Indeed, we chose to focus on the $\ell_\infty$ case in Section 2, since (a) this type has received much attention in practical applications and (b) it illustrates clearly the main message of our paper. Gradient descent is well positioned for $\ell_2$ perturbations in linear models, so there is not much value to study this case in detail from a generalization perspective. Please note that it has often been the case for theoretical studies on robustness in the past to only focus on one type of perturbation [1, 2]. In Section 3, however, our optimization results cover a general case of $\ell_p$ perturbations. Finally, in the experiments of Section 4.2 where we consider different classes of non-linear networks, we effectively study different kinds of threat models (since different feature extractors map $\ell_\infty$ perturbations to different spaces). > Section 3.2 discusses how a diagonal linear network has a bias toward minimum $\ell_1$ solution, but the connection is not formal (the author acknowledges it in Remark 3.9). We would like to clarify that Corollary 3.7 and Proposition 3.8 and their corresponding proofs in the Appendix are formal and rigorous. 
However, as you point out, we can only show that we get convergence to a stationary point in Corollary 3.7. As it is often the case in results in this area (see for instance [3]), a full characterization of a limiting point as local optimum is elusive, so we often end up with characterization as first-order stationary points. It is unclear whether we can obtain stronger results here due to the non-smoothness of the adversarial objective, yet we are optimistic about it and we hope that future work can address this. > The technical contribution is minor in my opinion. The ERM counterparts of the results in Section 3 are well-known and extensively studied, and extending them to robust ERM is more or less straightforward. The optimization results are not trivial, since existing results for ERM, such as the ones found in [4], cannot be directly extended for the robust objective. In particular, Lemma 10 in pg. 18 in [4] cannot be generalized and this was a starting challenge that we faced. Furthermore, previous results on the implicit bias of gradient descent in robust ERM with linear models contain inconsistencies and unjustified steps [5]; the usage of Taylor's Theorem, as well as the bound on the sharpness of the Hessian in pg. 12 in that paper, appear to be wrong and it is technically non-trivial to sidestep these challenges which are due to the non-smoothness of the loss. As a result, we chose to analyze steepest flow for the robust objective, which has not been defined, let alone analyzed before. Particularly, Lemma C.2 uses a completely different idea than [4] and, since it is a general result about steepest descent/flow, it is interesting and beautiful in its own right. > Corollary 3.5 refers to equation (8), which has $p^\star$ as the conjugate of $p$, yet the corollary itself contains another $p^\star$, it is confusing whether they are the same $p^\star$. Thank you for this comment. They are the same $p^\star$. > Referring steepest descent w.r.t. 
$\ell_1$ as "coordinate descent" is confusing. Generally, coordinate descent chooses the coordinate to be updated in a cyclic or random order. I understand there is a variation of CD that picks the coordinate with the largest gradient component, but plainly using CD may let the reader think of the more standard CD algorithm. Thank you for the suggestion. It is a matter of definitions and conventions. We followed the language of *Convex Optimization, Stephen Boyd and Lieven Vandenberghe* [6] (Section 9.4.2), which is one of the main references in Convex Optimization. We will add a note in the main text and in the appendix, stating that the algorithm should not be confused with other variations of CD. Please let us know if our response addressed your concerns and we hope that you would consider raising your score if this is the case. Thank you! [1]. The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks. Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro. [2]. Rademacher Complexity for Adversarially Robust Generalization. Dong Yin, Kannan Ramchandran, Peter Bartlett. [3]. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks. Kaifeng Lyu, Jian Li. [4]. Characterizing Implicit Bias in Terms of Optimization Geometry. Suriya Gunasekar, Jason Lee, Daniel Soudry, Nathan Srebro. [5]. Implicit Bias of Gradient Descent Based Adversarial Training on Separable Data. Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao. [6]. Convex Optimization. Stephen Boyd and Lieven Vandenberghe.
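For readers unfamiliar with the convention at issue, the normalized steepest-descent directions w.r.t. different $\ell_p$ geometries (following the definition in Boyd & Vandenberghe, Section 9.4, cited above) can be sketched as follows; the toy gradient is our own illustrative choice:

```python
import numpy as np

def steepest_direction(g, p):
    """Minimizer of <g, d> over the unit l_p ball: p=2 recovers (normalized)
    gradient descent, p=inf recovers sign gradient descent, and p=1 updates
    only the coordinate with the largest-magnitude partial derivative --
    the greedy 'coordinate descent' variant discussed in this exchange."""
    if p == 2:
        return -g / np.linalg.norm(g)
    if p == np.inf:
        return -np.sign(g)
    if p == 1:
        d = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        d[i] = -np.sign(g[i])
        return d
    raise ValueError("only p in {1, 2, inf} are sketched here")

g = np.array([0.5, -2.0, 1.0])
# l_1 direction touches only the second coordinate: [ 0., 1.,  0.]
# l_inf direction flips every sign:                 [-1., 1., -1.]
```

This makes the naming dispute concrete: the $\ell_1$ steepest direction is a one-coordinate update, hence the "greedy coordinate descent" terminology, while cyclic or random CD would pick `i` differently.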
Summary: The paper studies the implicit bias of robust Empirical Risk Minimization (ERM) and its connection with robust generalization. In regularized classification, the authors discuss the choice of regularization for a given perturbation set to improve robust generalization. In the unregularized setting, they study the implicit bias of steepest descent when applying it to the worst-case exponential loss in scenarios where the data is separable. They investigate two architectures: linear models and diagonal neural networks. Strengths: 1. The paper is well-written, and its contributions are well-explained. 2. The difference between the convergence of Gradient Descent in linear models and diagonal neural networks with $\ell_{\infty}$ perturbations is a very interesting result. Weaknesses: In my opinion, a weakness of the paper is that while the authors engage in an interesting discussion in the technical sections, the results presented in the paper are not very insightful on their own: 1. The result presented in Section 2 is directly derived from Theorem 2.1, which is borrowed from prior works, and Rademacher Complexity. 2. As the authors mention, the result of implicit bias in linear models is not surprising, and its proof is based on techniques from prior works. 3. The result of implicit bias in diagonal neural networks can be seen as a paraphrased theorem from prior work. Technical Quality: 4 Clarity: 3 Questions for Authors: Could the authors elaborate on the technical challenges they faced in proving their results, especially the result of implicit bias in linear models? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our work and help us improve it. We first reply to some of your points regarding the weaknesses of this paper: > As the authors mention, the result of implicit bias in linear models is not surprising, and its proof is based on techniques from prior works. While it is true that we can anticipate such a result for the implicit bias of steepest descent in robust ERM, we argue that the main novelty lies in asking the question in the first place ("*What is the effect of implicit bias of robust ERM in robust generalization?*") and in its connection with generalization bounds. There are no prior works that study these questions. Furthermore, please note that the proof is not a simple extension of previous results and it is not based directly on the techniques of [1] - see also our response to your question below. > The result of implicit bias in diagonal neural networks can be seen as a paraphrased theorem from prior work. We respectfully disagree. We leverage a result from prior work about robust ERM (Theorem 3.6) which characterizes the implicit bias of gradient descent for homogeneous networks in *parameter space*, while Corollary 3.7 and Proposition 3.8 establish the implicit bias of robust ERM in *function/predictor space*. This result is the first result of this kind for robust ERM. We are also confused about your previous point, since in Strength no.2 you complimented this result. Responding to your question, > Could the authors elaborate on the technical challenges they faced in proving their results, especially the result of implicit bias in linear models? The optimization results are not trivial, since existing results for ERM, such as the ones found in [1], cannot be directly extended for the robust objective. In particular, Lemma 10 in pg. 18 in [1] cannot be generalized and this was a starting challenge that we faced. 
Furthermore, previous results on the implicit bias of gradient descent in robust ERM with linear models contain inconsistencies and unjustified steps [2]; the usage of Taylor's Theorem, as well as the bound on the sharpness of the Hessian in pg. 12 in that paper, appear to be wrong, and it is technically non-trivial to sidestep these challenges which are due to the non-smoothness of the loss. As a result, we chose to analyze steepest flow for the robust objective, which has not been defined, let alone analyzed before. Particularly, Lemma C.2 uses a completely different idea from [1] and, since it is a general result about steepest descent/flow, it is interesting and beautiful in its own right. More generally, we would like to defend the contributions of our paper (objecting to your score: "poor") by summarizing the most important ones: 1. We connect the implicit bias of optimization in robust ERM with the robust generalization error, and we show how and why implicit bias is more significant in robust ERM than in standard ERM (this is where the term "price" comes from). 2. We paint a conceptual picture for the challenges faced in robust machine learning, which is rooted in foundational ideas of learning theory. 3. We rectify a misconception from prior work [3, 4] that asserted that regularization via the dual norm is always beneficial for robust generalization. 4. We provide rigorous results for linear models and diagonal neural networks against general $\ell_p$ adversarial perturbations. 5. We provide extensive experiments with synthetic data that validate the theoretical claims. 6. We provide several experiments with neural networks in image classification settings with gradient descent and sign gradient descent, and experimentally identify and establish a new connection between robust generalization gap and optimization geometry. 
We believe that the price of implicit bias in robust generalization is a new and interesting phenomenon in robust machine learning, and we believe there are many avenues for future work that it can inspire. We hope that succinctly summarizing the contributions of our paper, while also elaborating on some technical challenges that we faced by answering your question, could make you reconsider your evaluation of our work. We would be happy to include some discussion on the technical challenges, if you think this would benefit the paper. Thank you very much! [1]. Characterizing Implicit Bias in Terms of Optimization Geometry. Suriya Gunasekar, Jason Lee, Daniel Soudry, Nathan Srebro. [2]. Implicit Bias of Gradient Descent Based Adversarial Training on Separable Data. Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao. [3]. Rademacher Complexity for Adversarially Robust Generalization. Dong Yin, Kannan Ramchandran, Peter Bartlett. [4]. Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks. Pranjal Awasthi, Natalie Frank, Mehryar Mohri. --- Rebuttal 2: Comment: Thank you for the answers and clarifications. I still believe that the technical contributions are limited and partially involve extending results from previous works to the robust objective. However, I agree that one of the major contributions of the paper is highlighting the phenomenon of the price of implicit bias in robust machine learning. As a result, I have revised my score accordingly. --- Rebuttal Comment 2.1: Comment: Thank you!
Summary: In this paper, the authors study the issue of large generalization gap with Robust ERM objective, they connect this with the implicit bias of optimization (including architecture and the optimization algorithm). The findings suggest that optimizing models for robust generalization is challenging since it is hard to do the right capacity control for robust machine learning. Strengths: - The paper has in-depth investigations into how does the choice of regularization norm affect the generalization ability of the model w.r.t. sparsity of data, optimization algorithm and choice of architecture - The authors validate their findings in both linear models and NN Weaknesses: - The theory studies might be still too limited Technical Quality: 4 Clarity: 3 Questions for Authors: See weakness Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing & positively assessing our work and for highlighting its strengths. We address your only concern: > The theory studies might be still too limited This paper is the first to consider the connection between the implicit bias of optimization in adversarial training/robust ERM and robust generalization and it is also the first that identifies implicit bias as a main source of many challenges in robust machine learning (such as the large generalization gap, robustness/accuracy tradeoff, etc.). As a result, we believe it is natural that we began our theoretical analysis with the simplest models that permit this. Please note that stronger theoretical results would have required tight generalization bounds for more complicated classes of models (such as neural networks) and this is a central question in deep learning theory, even in the absence of adversarial perturbations - see, for instance, [1]. It could be interesting to derive a more general result for the case of one hidden layer ReLU neural networks, where a tight (robust) generalization bound exists (Theorem 10 in [2]). However, obtaining a characterization of the implicit bias of robust ERM with various optimization algorithms (in *predictor space*) is highly non-trivial for this class of models and would require many new techniques and generalization of previous results on ERM from e.g. the work of Savarese et al. [3]. We are actively working in this direction for a follow-up study. **Furthermore**, our experimental results on neural networks suggest that the phenomena that we identify are more general and we are happy that you recognized this in your evaluation of the strengths of this work. We would argue that the theoretical part of our study is not limited (since it is rich in new ideas) and we would be happy to listen to any suggestions that you might have on directions that might be worth developing further. [1]. 
Spectrally-normalized margin bounds for neural networks. Peter Bartlett, Dylan J. Foster, Matus Telgarsky. [2]. Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks. Pranjal Awasthi, Natalie Frank, Mehryar Mohri. [3]. How do infinite width bounded norm networks look in function space? Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and I will keep my positive score --- Reply to Comment 1.1.1: Comment: Thank you!
Summary: This paper explores a linear classification scenario, investigating the factors that contribute to the gap between the empirical adversarial risk and the expected adversarial risk. Furthermore, the authors discuss which type of regularization should be applied in different cases. There are also simulation results to support their points. Strengths: 1. The paper is well-written with a clear structure. Motivations are well explained as to why the authors study the problem, and the contributions of this study are well discussed. The analysis of the theoretical results is helpful for understanding. Overall, it is easy to follow the logic and flow of the paper. 2. Theoretical results are solid and well-organized. The authors made the theoretical settings clear. 3. Empirical results support the theoretical findings. Weaknesses: 1. In Theorem 2.1, it is not clear whether the constant $\rho$ has influences on other constants shown in the theorem. 2. While these results mainly focus on the gap between empirical adversarial risk and expected adversarial risk, maybe the discussions about their influences on expected risk and empirical adversarial risk are lacked. 3. As Theorem 3.3 focuses on steepest gradient dynamics on linear model, and Theorem 3.6 is about gradient flow on diagonal model, from my side, it is better to add a result about steepest gradient dynamics on diagonal model to make the analysis more sufficient. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your critical and positive evaluation of our work. We respond to your questions: 1. > In Theorem 2.1, it is not clear whether the constant $\rho$ has influences on other constants shown in the theorem. Thank you for the comment. The empirical margin $\hat{\rho}$ appears in the statement of Proposition 2.2 due to a typo, and it does not affect the bound in eq. 7, nor does it influence any constants as presented there. In fact, it is not used in the proof in the Appendix. We can obtain more refined, data-dependent versions of this bound by leveraging the empirical margin $\hat{\rho}$, but they do not offer any insights related to our discussion in Section 2, and we opted not to include such a result for the sake of brevity, as it could have misdirected the focus. See also our remark in lines 692-693 in the proof in the Appendix, where we address this issue and point the interested reader to a standard reference that discusses the role of $\hat{\rho}$ in generalization bounds. We removed the constant from the statement in lines 127-128, as well as from line 147, where it also appeared due to a typo. Thank you! 2. > While these results mainly focus on the gap between empirical adversarial risk and expected adversarial risk, maybe the discussions about their influences on expected risk and empirical adversarial risk are lacked. Thank you for this very interesting suggestion (we assume that you meant to say "expected risk and empirical risk", instead of "expected risk and empirical adversarial risk")! In short, our search for robust predictors can actually harm the standard generalization error (error measured without perturbations), **if** the type of the perturbation is not "aligned" with the type of data. For example, in the case of *Dense, Dense* data in pg. 4, we expect from the generalization bounds to see a tradeoff between robustness and accuracy. 
The reason is that small $\ell_2$ norm solutions are preferred for standard generalization, while small $\ell_1$ solutions are better for robustness. This can also be observed in the experiments. We measure the standard generalization error ($\epsilon=0$, regardless of the value of $\epsilon$ used during training) and we produce a figure similar to Figure 2, consisting of heatmaps with the average difference between the performance of gradient descent (GD) and coordinate descent (CD). The figure can be accessed through the following link: https://ibb.co/sw7TWMD. Indeed, we observe that, for example, when $k_{\mathcal{X}}=k_{\mathcal{W}}=d$ (*Dense, Dense*) and we train for large perturbations, $\epsilon = \frac{\epsilon^\star}{2}$ (bottom right corner in top left subplot), GD generalizes better than CD in terms of standard generalization error. If we contrast this with the same datapoint in Figure 2 (where we measure the robust error), we see that there is a tradeoff between robustness and accuracy, since CD has better robust generalization. However, this is not always the case; for example for *Sparse, Dense* data, no such tradeoff is observed. This provides a more nuanced understanding of this well documented tradeoff [1] and suggests that the implicit bias of optimization in robust ERM is at the heart of this tradeoff as well. The focus of the paper was on robust generalization, but we agree with you that a discussion of standard generalization could be interesting in this context, space permitting. We will include the above results in the experiments section and add some discussion in Section 2 after Proposition 2.2. Thank you! 3. > As Theorem 3.3 focuses on steepest gradient dynamics on linear model, and Theorem 3.6 is about gradient flow on diagonal model, from my side, it is better to add a result about steepest gradient dynamics on diagonal model to make the analysis more sufficient. 
The reason we opted to include results for gradient flow is essentially the non-smoothness of the robust loss for general $\ell_p$ perturbations, which complicates the technical arguments. In fact, as far as we understand, many proofs that have appeared in prior works concerning the implicit bias of gradient descent in robust ERM contain unjustified steps and non-rigorous parts and it is unclear how to rectify them, without many additional simplifying assumptions. Take, for instance, the proof of Theorem 3.1 in [2], which is about the implicit bias of gradient descent applied on the robust loss with linear models. The usage of Taylor's Theorem, as well as the bound on the sharpness of the Hessian on pg. 12 appear to be wrong and it is technically non-trivial to sidestep these challenges which are due to the non-smoothness of the loss. While providing rates of convergence is very interesting in its own right, such results do not affect our conclusions on the importance of implicit bias in the robust generalization of models. We would like to thank you once again for reviewing our work and helping us improve its presentation, especially with regard to weakness no.2. Please let us know if you have any more questions. If there are no outstanding concerns, we would kindly ask you to consider raising your score which would substantially help in reaching a reviewer consensus. Thank you. [1]. Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. [2]. Yan Li, Ethan X. Fang, Huan Xu, and Tuo Zhao. Implicit bias of gradient descent based adversarial training on separable data. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Many thanks for addressing my questions. I'll raise my score to 7. --- Reply to Comment 1.1.1: Comment: Thank you!
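The GD-vs-CD contrast discussed in point 2 of the rebuttal above can be illustrated with a minimal pure-Python sketch (our own illustration, not code from the paper; the single data point and step sizes are arbitrary choices made so the dynamics stay transparent): gradient descent (steepest descent w.r.t. $\ell_2$) on the exponential loss drifts toward the max-$\ell_2$-margin direction, while coordinate descent (steepest descent w.r.t. $\ell_1$) concentrates all mass on a single coordinate, i.e., a sparse max-$\ell_1$-margin direction.

```python
import math

x, y = (2.0, 1.0), 1.0  # a single separable data point (hypothetical toy example)

def grad(w):
    # exponential loss L(w) = exp(-y * <w, x>); gradient is -y * x * exp(-y * <w, x>)
    s = math.exp(-y * (w[0] * x[0] + w[1] * x[1]))
    return (-y * x[0] * s, -y * x[1] * s)

def run(step, iters=2000, lr=0.1):
    w = [0.0, 0.0]
    for _ in range(iters):
        step(w, grad(w), lr)
    n = math.hypot(w[0], w[1])
    return (w[0] / n, w[1] / n)  # only the direction matters; the norm keeps growing

def gd_step(w, g, lr):  # steepest descent w.r.t. the l2 norm
    w[0] -= lr * g[0]
    w[1] -= lr * g[1]

def cd_step(w, g, lr):  # steepest descent w.r.t. the l1 norm = greedy coordinate descent
    i = 0 if abs(g[0]) >= abs(g[1]) else 1
    w[i] -= lr * g[i]

gd_dir = run(gd_step)  # approaches x / ||x||_2, i.e. (0.894..., 0.447...)
cd_dir = run(cd_step)  # all mass on coordinate 0: (1.0, 0.0)
```

On this toy problem the normalized GD iterate is proportional to the data point (the min-$\ell_2$-norm margin-1 solution), while CD never touches the second coordinate, mirroring the dense-vs-sparse implicit biases contrasted in the rebuttal.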
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Provable Benefits of Complex Parameterizations for Structured State Space Models
Accept (poster)
Summary: This paper provides a theoretical analysis of why SSMs need to be parameterized by complex numbers instead of real numbers. It shows that there exist complex LTI systems that cannot be well approximated by real systems of comparable size. Moreover, it proves that certain dynamics cannot be approximated by real LTI systems unless they have an exponentially large state-space dimension or large system matrices; yet, using complex LTI systems resolves this issue. Strengths: 1. The theoretical statements are precisely made. The proof sketches are helpful for understanding the paper. 2. The comparison between real and complex is fairly thorough, encompassing different perspectives. Weaknesses: 1. My main concern is about the contribution of this work. While it is true that many ML models use real parameterizations, the diagonal matrix $\mathbf{A}$ comes from diagonalizing a general state matrix. Therefore, unless one puts restrictions on the matrix to be diagonalized (e.g. Hermitian), it is natural to assume that $\mathbf{A}$ should be complex-valued. Showing why a real parameterization does not work well sounds like a slightly artificial problem and adds relatively little to the SSM community. 2. The experiments do not strongly corroborate the theory. In addition to showing the performance, maybe some synthetic experiments would be helpful to show the exponential gap in Theorem 2 and Proposition 3. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. You formulate the problem using the discrete LTI system. Of course, in SSMs, the parameters come from continuous-time LTI systems. While this does not change the basic question you are exploring (because the real axis in the left half-plane gets mapped to the real axis in the unit disk under virtually all discretization schemes), would Theorem 2 be changed if you take the discretization into account? 2. 
The paper studies two cases: $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ are either all restricted to real or all allowed to be complex. Intuitively, however, the important thing is that $\mathbf{A}$ has to be complex. Have you looked into the case where $\mathbf{A}$ is complex and $\mathbf{B}$ and $\mathbf{C}$ are real? In that case, which world would it fall into? 3. In section 3.3, instead of giving two examples of dynamics that are poorly approximated by real systems, maybe there could be a discussion of what you called forward difference. It would be helpful to relate Theorem 2 to some general (and easily interpretable) properties of the dynamics. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review, and for highlighting the thoroughness and preciseness of our theory. We address your comments and questions below. If you find our responses satisfactory, we would greatly appreciate it if you would consider raising your score. ## Significance of real diagonal SSMs: Real diagonal SSMs (with and without selectivity) underlie some of the most prominent neural network architectures for long sequence processing. This includes the recent Mamba architecture and its variants, as well as other architectures [A, B, C, D, E]. Advantages of real diagonal SSMs include simplicity and speed. The extent to which they compare to complex diagonal SSMs in terms of accuracy has been the subject of much debate (see our introduction). This debate is what motivated our work. We hope the above addresses your concern regarding the significance of our contribution. If not, please let us know and we will gladly elaborate. -- **[A]** “Mamba: Linear-time sequence modeling with selective state spaces”, Gu and Dao, 2023 **[B]** “Mega: Moving Average Equipped Gated Attention”, Ma et al., 2023 **[C]** “Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality”, Dao and Gu, 2024 **[D]** “Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length”, Ma et al., 2024 **[E]** “Jamba: A Hybrid Transformer-mamba language model”, Lieber et al., 2024 ## Experimentation: Following your comment, we conducted several experiments, including: * Demonstration of the gap in practical learnability between real and complex SSMs in the theoretically analyzed setting (i.e., in the setting for which we established such a gap). * Demonstration that complex parameterizations for (non-selective) SSMs improve performance of prominent neural network architectures (S4) on real-world data (sequential CIFAR-10, included in the standard long range arena benchmark). 
* Ablation study showing that among the architectural differences between S6 and S4, the one primarily responsible for closing the gap between real and complex parameterizations for SSMs, is the selectivity of the input and output matrices $B$ and $C$. ***The results of these experiments can be found in the PDF attached to our global response***. ## Impact of discretization: Our theory essentially applies as is (more precisely, it applies given slight modifications in the established bounds) to cases where the SSMs include conventional discretizations. For example, with the discretization laid out in the original Mamba paper [10]: the state transition matrix $A$ is represented as the exponent of a parameter matrix $A’$ times a scalar $\Delta$; and the input matrix $B$ is represented as a parameter matrix $B’$ times $\Delta$ times some matrix with spectral norm no greater than one. All of our proofs readily adapt to this case (in particular, the proofs of Theorem 2, Corollary 1 and Corollary 2 readily adapt to establishing an exponential lower bound on the magnitudes of $( B’ , C )$ for the real SSM). We will add a discussion of discretization to the camera-ready version. Thank you very much for raising the matter! ## Complex $A$ with real $B$ and $C$: Your intuition is correct. Namely, all benefits we have proven for the complex SSM apply (given a slight modification in one of the established bounds) when the input matrix $B$ and the output matrix $C$ are constrained to be real. To see this, note that our proofs of Propositions 1, Proposition 2 and Theorem 1 do not make use of complex assignments for $B$ and $C$. Our proof of Proposition 3 does make use of such assignments, but can readily avoid them by replacing the discrete Fourier transform with the discrete cosine transform (in this case a factor of $\sqrt{2}$ is introduced in the second bound). We will mention this in the camera-ready version. Thank you for the question! 
## Mappings poorly approximated by the real SSM: While Corollary 1 indeed provides an example (albeit a canonical one) of a mapping poorly approximated by the real SSM, Corollary 2 goes far beyond an example, in the sense that it applies to a random (generic) mapping. Nonetheless, we agree with you that it would be interesting to relate the condition brought forth by Theorem 2 — namely, forward differences not being especially small — to easily interpretable properties of the mappings. We invested considerable efforts trying to establish such a relation (primarily using tools from complex analysis, e.g. Rice’s method), but to no avail. We will mention this pursuit as a direction for future work in the camera-ready version. Thank you for raising the matter! --- Rebuttal Comment 1.1: Comment: Thank you for your response! The additional experiments look convincing; therefore, I have raised my score. My primary concern is still that the underlying question studied in this work is relatively trivial from a linear algebra perspective, pinning that matrices with real eigenvalues are not universal approximators. This prevents me from further increasing my evaluation. However, I acknowledge that the problem studied in this work is important and the author(s) have done a good job of making a point. --- Reply to Comment 1.1.1: Comment: Thank you. We respectfully disagree with the following statement: > “the underlying question studied in this work is relatively trivial from a linear algebra perspective, pinning that matrices with real eigenvalues are not universal approximators”. It is true that matrices with complex eigenvalues are more general than ones with real eigenvalues, but to our knowledge, this does not establish any benefit of complex parameterizations over real parameterizations for diagonal SSMs. 
Indeed (perhaps counter-intuitively), we show in Proposition 1 that if the dimension of a diagonal SSM is not bounded, real and complex parameterizations are equivalent, in the sense that they both lead to universal approximation (of LTI mappings). The question is then whether complex parameterizations allow approximating mappings with lower dimension or parameter magnitudes than are required with real parameterizations. To our knowledge, this question — which we affirm in Theorems 1 and 2, Corollaries 1 and 2, and Proposition 3 — cannot be answered via simple linear algebraic arguments as you mentioned. We hope the above clarification addresses your remaining concern. Please let us know if this is not the case and we will further elaborate.
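As a concrete companion to this clarification, here is a small pure-Python sketch (our own, purely illustrative, assuming the standard impulse-response convention $h_k = \sum_j C_j A_j^k B_j$) of the affirmative side of the dimension/magnitude question: a complex diagonal SSM of dimension $t$ realizes a delay-by-$T$ impulse response exactly, with all parameter magnitudes bounded by one, by placing its eigenvalues at the $t$-th roots of unity and reading off inverse-DFT coefficients.

```python
import cmath

def complex_delay_ssm(t, T):
    """Complex diagonal SSM of dimension n = t whose impulse response
    h_k = sum_j C[j] * A[j]**k * B[j] is 1 at k = T and 0 for all other
    k in 0..t-1, with |A_j| = |B_j| = 1 and |C_j| = 1/t (no blow-up)."""
    w = cmath.exp(2j * cmath.pi / t)           # primitive t-th root of unity
    A = [w ** j for j in range(t)]             # eigenvalues on the unit circle
    B = [1.0] * t
    C = [w ** (-j * T) / t for j in range(t)]  # inverse-DFT coefficients
    return A, B, C

def impulse_response(A, B, C, length):
    return [sum(c * a ** k * b for a, b, c in zip(A, B, C)) for k in range(length)]

t, T = 16, 5
A, B, C = complex_delay_ssm(t, T)
h = impulse_response(A, B, C, t)
# |h[5]| is numerically 1 and every other |h[k]| is ~0: an exact delay by 5
```

This is exactly the kind of oscillation-rich copy (delay) mapping for which, per the discussion of Corollary 1 above, the real SSM provably needs exponential dimension or parameter magnitudes.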
Summary: This paper considers learning LTI systems with bounded response. It shows that if we restrict to SSMs with diagonal dynamics, both real-valued and complex-valued state spaces suffice. However, it shows that if we use only real-valued SSMs, to learn a sequence of length t we will need parameters that scale as $\exp(t)$, while if we use complex-valued SSMs, we need only parameters that scale as $t$. Strengths: This paper did a great job of distinguishing between model expressivity (showing that with a large enough state size both real- and complex-valued SSMs are equally expressive) and practical learnability (showing that real-valued SSMs need an impractically large state size). I also liked how their proof of practical learnability was basically based on the point that for stable real-valued diagonal SSMs, the dynamics pretty much have to be decaying exponentials, so we will need exponentially many of them to form a basis for a function. On the other hand, for complex-valued diagonal SSMs, we effectively have a Fourier basis, which is more expressive. I also like the robust limitations section of the paper, which emphasized that although the paper has demonstrated the need for complex parameterization in the LTI setting, it hasn't addressed the selectivity setting, leaving open theoretical contributions to explain why Mamba can get away with using only real parameterization for language tasks. I also liked the proof technique based on impulse series and impulse response; that was a very nice framing. This paper provides an effort grounded in both experiments and theory that attempts to resolve an important question practitioners genuinely care about. Weaknesses: The paper would have been more interesting if it had addressed what is going on in the setting of selectivity. On lines 66-67, the authors write "We believe our results may serve as an important stepping stone in this pursuit." It would be nice if more evidence were given. 
I can't see how the proof techniques carry over. It would be great if some indication could be given of how this paper could be helpful in addressing selectivity. Footnote 1 is extremely confusing. It basically contradicts lines 101-102, which state that the SSMs considered in the paper are going to be diagonal. So, the eigenvalues of a real-valued diagonal A are definitely always real. I would just remove footnote 1. Technical Quality: 3 Clarity: 3 Questions for Authors: * I think the first sentence of the paper should name-check S5 between S4 and Mamba (which is sometimes referred to in the body of its text as S6). S5 helped to pioneer the use of diagonal complex dynamics in an LTI setting, on which this entire paper is based. I think not name-checking it, and instead lumping it in at the end as "and more", does a disservice to a reader trying to learn more about the literature. * How do the results in this paper serve as an important stepping stone towards understanding real vs complex parameterization in LTV or selective settings? * What happens if we use a feedthrough matrix D in (1), which is common in nearly all SSM parameterizations? * What is $\Omega(t)$ on line 187? Is it defined somewhere in the paper? * Is it possible to do better than $n_{\mathbb{C}} = t$? Why or why not? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support, and for highlighting the importance of the question we study and the merit of our theory and experiments! We address your comments and questions below. ## Addressing selectivity: We agree with you that addressing selectivity — i.e., theoretically supporting the success of real parameterizations in selective SSMs — is an interesting pursuit. Below we qualitatively discuss the reasoning behind our belief that “our results may serve as an important stepping stone in this pursuit”. A more detailed version of this discussion will be added to the paper. Thank you for raising the matter! Roughly speaking, the separations we established between real and complex SSMs result from a gap in their ability to express oscillations, i.e., to express frequency components in their impulse response: while the complex SSM can easily express any frequency, the real SSM is required to have exponential dimension or parameter magnitudes. Adding selectivity to the real SSM means that its parameters become input-dependent, leading to what can be viewed as an input-dependent impulse response. It can be shown that this dependence allows “importing” frequency components from the input to the impulse response. Therefore, if the input data is sufficiently rich in terms of its frequency content, selectivity may endow real parameterizations with all the benefits we proved for complex parameterizations. Recall that, as stated in our introduction, [10] conjectured that complex parameterizations are preferable for continuous data modalities (e.g., audio, video), whereas for discrete data modalities (e.g., text, DNA) real parameterizations suffice. The explanation above aligns with this conjecture: continuous data modalities typically consist of low frequencies only, whereas discrete data modalities have a “whiter spectrum,” i.e. a more uniform mix of frequencies. As stated, a detailed version of the above discussion will be added to the paper. 
## Footnote 1: Per your suggestion, we will remove Footnote 1. ## Questions (addressed by order of appearance): * We will explicitly mention S5 when listing prominent SSM-based neural network architectures. * See “addressing selectivity” section above. * Our theory essentially applies as is (more precisely, it applies given very slight modifications in the established bounds) to the case where SSMs include a feedthrough $D$. The reason why incorporation of $D$ does not make a material difference is that it can easily be emulated by: (i) increasing the dimension of the SSM by one; (ii) assigning one to the last diagonal entry of $A$ and the last entry of $B$; and (iii) assigning $D$ to the last entry of $C$. We will mention this in the camera-ready version. Thank you for raising the matter! * $\Omega (\cdot )$ in line 187 stands for Big-Omega notation, i.e. it signifies a function that is at least linear in its argument. Note that it is used to qualitatively characterize a dependence whose precise form is provided by Theorem 2. * In Proposition 3 (and Proposition 1), it is indeed possible to improve (relax) the requirement $n_{\mathbb{C}} \geq t$. Namely, by leveraging the fact that the discrete Fourier transform of a real sequence is symmetric, one can update the proof of Proposition 3 to show that $n_{\mathbb{C}} \geq \lceil t / 2 \rceil$ suffices. Parameter counting arguments imply that the latter requirement is tight. We will mention all of this in the camera-ready version. Thank you for asking about it! --- Rebuttal 2: Title: Increasing score to 8 Comment: Thank you for a very clear and interesting rebuttal. I think your discussion of selectivity is very interesting and should be included in as much details as possible in the camera ready version. Your new experiments in the .pdf are great as well! They should also be included in camera ready. I am raising my score to an 8. --- Rebuttal Comment 2.1: Comment: Thank you very much! 
The camera ready version will indeed expand on both selectivity and the new experiments.
Summary: This work establishes a formal gap between real and complex parameterizations of stable, diagonal SSMs. While complex parameters can trivially express any real SSM, the converse is not true and real SSMs need an arbitrarily large number of parameters to approximate complex SSMs in at least two important cases. Strengths: 1. Clear writing and presentation 2. Theoretical results support a clear practical suggestion: use complex parametrizations for your SSM if you don't have input selectivity. Weaknesses: 1. Experimental results on non-synthetic datasets would be great. Especially considering the theorems regarding practical learnability and the exponentiality of real parametrizations for random impulse responses with high probability. Technical Quality: 4 Clarity: 4 Questions for Authors: See above. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper has a thorough discussion of the limitations of the analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support! We conducted several additional experiments, including: * Demonstration of the gap in practical learnability between real and complex SSMs in the theoretically analyzed setting (i.e., in the setting for which we established such a gap). * Demonstration that complex parameterizations for (non-selective) SSMs improve performance of prominent neural network architectures (S4) on real-world data (sequential CIFAR-10, included in the standard long range arena benchmark). * Ablation study showing that among the architectural differences between S6 and S4, the one primarily responsible for closing the gap between real and complex parameterizations for SSMs, is the selectivity of the input and output matrices $B$ and $C$. ***The results of these experiments can be found in the PDF attached to our global response***. --- Rebuttal Comment 1.1: Comment: Thank you for the response and additional results. I will stay with my assessment. --- Reply to Comment 1.1.1: Comment: Thank you. Please let us know if a need for further information arises.
Summary: The paper deals with an important question: is it possible to show a concrete advantage of complex diagonal RNNs compared to real diagonal RNNs? The motivation is clear, especially for people who have studied the SSM literature. While modern SSM variants used in language modeling (e.g. S6-Mamba) do not make use of complex numbers, for some reasoning tasks in the long-range arena these are necessary. This motivates the question "which tasks necessitate complex numbers and why?". The authors start with a useful recap, and correctly point out the universality of both real and complex linear RNNs in the approximation of filters. They then show one example of a filter which is represented easily with complex numbers, but requires a huge hidden state if the recurrence is real. Further, the authors discuss a bigger class of filters that comprises random convolutions and shifts: here, real RNN parameters have to explode to reach the correct values. The authors corroborate their findings with experimental validations. Strengths: I believe in the value of complex numbers in recurrent networks. I liked reading the paper, and I think many people can find it interesting. However, the theoretical results might not be super surprising for readers who have already worked on the theory of RNNs (see next part), but the experiments are interesting. I think it's good to draw attention to the question the authors study. Pros: Math is at a good level of formality, notation is precise, and the appendix is well structured. Clarity is good, though a specific thing can be improved (see later). I think with some tweaks, this can be a very good paper. I especially liked some parts of Sec. 3.3. Weaknesses: I like the theme of this paper and appreciate the formality and correctness of the results. However, I think some improvements are possible and some things need to be clarified. - Theorem 1: this is a crucial piece of Section 3. 
In short, both real and complex diagonal RNNs are universal, but complex diagonal linear RNNs require exponentially fewer parameters to approximate some specific functions. I agree with the proof, which is very simple: you cannot approximate sin(x) well using exponential modes (i.e. real RNNs). It is good to be reminded of this fact; though the result is not novel, it helps the discussion. However, while your statement is correct, it might be misleading: it looks as if, for a big part of the functions of interest in the hypothesis class, complex numbers are crucial. What the theorem actually shows is that for a subset of measure zero in the class of smooth functions, complex-valued recurrences are more effective. I agree oscillations are important, but this is not formal and should be toned down. I would phrase this as an example rather than a theorem. It is indeed just a counterexample. - The second part of Section 3, Sec 3.3, is completely disconnected: here, you are talking about parameter magnitudes. This is fine, but I was a bit confused by the wall of text at the beginning of page 5. You should be much more schematic. You are now talking about something a bit different, and the bridge should be clear but concise. - Sec 3.3 is interesting, but I'm not at all convinced that the results you found are orthogonal to those of Orvieto et al. (discussed at the end of page 8). I think your results are stronger, but comparing further makes your paper much more robust: Orvieto et al. show that to retrieve information from real RNN hidden states, the readout maps must explode in magnitude. Here, you found something similar, but on the B & C matrices when approximating some filters. I suspect the mechanism is similar; do you agree? I think drawing a better connection is needed; no way those effects have a disjoint cause. - Sec 3.3: All bounds you have here should be validated in the simplest toy setting possible. 
Would it be possible to verify empirically all your bounds in the setting of the propositions? This makes the reader believe much more in the results and makes the bounds clearer and visual. - Experiments: I like them, but as the authors themselves admit, more efforts are needed. Well my question here is: what would you do next, and why do you think the paper should be accepted without further experiments? Technical Quality: 4 Clarity: 3 Questions for Authors: See above, + I have another question - something which I might have missed but is my biggest curiosity: - How do you explain the fact that Mamba on language modeling works better with real recurrences? I am perfectly aware the question is not easy - but given your efforts, I believe your paper should try to answer this question. What you claim in the paper is "selectivity allows the real parameterization to achieve comparable performance": this is a bit too quick, on an extremely crucial issue. I expected the discussion to be much more precise on this point. I will be active in the rebuttal, and will engage in the discussion so to update my borderline score. I thank the authors in advance for their efforts. I know the topic in this paper is not an easy one, but for this exact reason I care the findings are robust and discussion complete. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
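The reviewer's remark that sin(x) cannot be approximated by exponential modes can be made concrete with a small pure-Python sketch (ours, for illustration only): restricted to positive bases (which is what a real diagonal SSM produces once attention is limited to, e.g., even time steps), a sum of $m$ real exponential modes changes sign at most $m - 1$ times over the horizon (an exponential-sum analogue of Descartes' rule), while a sinusoid changes sign linearly often.

```python
import math
import random

def sign_changes(seq):
    # count sign flips, ignoring (numerically) exact zeros
    signs = [s for s in ((x > 0) - (x < 0) for x in seq) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def exp_sum(coeffs, bases, length):
    # f(k) = sum_i c_i * a_i**k with positive bases: the kind of impulse
    # response a real diagonal SSM yields on even time steps
    return [sum(c * a ** k for c, a in zip(coeffs, bases)) for k in range(length)]

random.seed(0)
m, horizon = 6, 200
f = exp_sum([random.uniform(-1, 1) for _ in range(m)],
            [random.uniform(0.01, 0.99) for _ in range(m)], horizon)
g = [math.sin(math.pi * k / 8) for k in range(horizon)]  # an oscillating target

few, many = sign_changes(f), sign_changes(g)
# few is provably at most m - 1 = 5, while many grows linearly with the horizon
```

Since each additional real mode can buy at most one more sign change, tracking an oscillating impulse response requires a number of real modes that grows with the horizon, which is the mechanism the reviewer alludes to.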
Rebuttal 1: Rebuttal: Thank you for the thoughtful review, for highlighting that “with some tweaks, this can be a very good paper”, and for the willingness to “engage in the discussion so [as] to update [your] borderline score”. Below we address your comments and questions. ## Specificity of Theorem 1: Theorem 1 readily extends to any function realized by the complex SSM whose impulse response oscillates. More precisely, the proof of Theorem 1 can easily be extended to establish a lower bound on $n_{\mathbb{R}}$ that is linear in $t$ whenever the restriction to odd elements of the impulse response of the complex SSM admits a number of sign changes linear in $t$. With this extension, Theorem 1 applies to a positive measure subset of the complex SSM’s function class. Moreover, among the subset of the complex SSM’s function class which is characterized by $A_{\mathbb{C}}$ having entries close to the unit circle, the above-described extension of Theorem 1 can be shown to apply to “most” functions. We will elaborate on this in the camera-ready version. Thank you for bringing it up! Notwithstanding the above, we note that proving separation in expressiveness through particular counterexamples is very common. Indeed, various important results in deep learning theory fall under this category (see references in “On the Expressive Power of Deep Learning: A Tensor Analysis” by Cohen et al.). ## Opening of Section 3.3: Following your comment, this text was completely refactored. In particular, we broke down Section 3.3 to three subsubsections, titled: “Real Parameterizations Suffer from Exponentiality”, “Exponentiality Impedes Practical Learning”, and “Complex Parameterizations Do Not Suffer from Exponentiality”. We believe Section 3.3 is much clearer now. Thank you for the feedback! ## Relation to Orvieto et al.: Thank you for raising this point! Upon closer inspection, we indeed believe that the result of Orvieto et al. 
([25], Section 4.1) can be viewed as a special case of ours. In particular, the task considered in Orvieto et al. — namely, reconstructing past inputs to a real SSM from its current state using a linear mapping — can be viewed as taking an output matrix $C_{\mathbb{R}}$ with which the input-to-output mapping of a real SSM is a canonical copy (delay) mapping. Our Corollary 1 implies that this necessitates exponential parameter magnitudes, and therefore essentially recovers the result of Orvieto et al. We stress that our Theorem 2 is far more general than our Corollary 1 (and accordingly, than the result of Orvieto et al.). Indeed, our Theorem 2 applies (i.e., ensures exponential parameter magnitudes for the real SSM) not only with a copy input-to-output mapping, but with any input-to-output mapping whose impulse response has forward differences that are not especially small (this includes, e.g., random input-to-output mappings — see Corollary 2). We will detail the relation between the result of Orvieto et al. and ours in the camera-ready version. Thanks again! ## Empirical demonstration of bounds in Section 3.3: Following your request, we conducted experiments demonstrating the bounds in Section 3.3. ***The results of these experiments can be found in the PDF attached to our global response***. In a nutshell, the results show that while the complex SSM is able to learn all impulse responses (up to times no greater than its dimension), when the impulse response is such that the real SSM is provably required to have exponential parameter magnitudes, training the real SSM does not converge. ## Further experiments: Following your feedback, we conducted further experiments, including: * Demonstration that complex parameterizations for (non-selective) SSMs improve performance of prominent neural network architectures (S4) on real-world data (sequential CIFAR-10, included in the standard long range arena benchmark). 
* Ablation study showing that among the architectural differences between S6 and S4, the one primarily responsible for closing the gap between real and complex parameterizations for SSMs is the selectivity of the input and output matrices $B$ and $C$. ***The results of these experiments can be found in the PDF attached to our global response***. ## Success of real parameterizations in selective SSMs: We agree with you that theoretical support for the success of real parameterizations in selective SSMs is an extremely crucial pursuit. As stated in the paper, we believe our results may serve as an important stepping stone in this pursuit. Below we qualitatively discuss the reasoning behind our belief. A more detailed version of this discussion will be added to the paper. Thank you for raising the matter! Roughly speaking, the separations we established between real and complex SSMs result from a gap in their ability to express oscillations, i.e., to express frequency components in their impulse response: while the complex SSM can easily express any frequency, the real SSM is required to have exponential dimension or parameter magnitudes. Adding selectivity to the real SSM means that its parameters become input-dependent, leading to what can be viewed as an input-dependent impulse response. It can be shown that this dependence allows “importing” frequency components from the input to the impulse response. Therefore, if the input data is sufficiently rich in terms of its frequency content, selectivity may endow real parameterizations with all the benefits we proved for complex parameterizations. Recall that, as stated in our introduction, [10] conjectured that complex parameterizations are preferable for continuous data modalities (e.g., audio, video), whereas for discrete data modalities (e.g., text, DNA) real parameterizations suffice. 
The explanation above aligns with this conjecture: continuous data modalities typically consist of low frequencies only, whereas discrete data modalities have a “whiter spectrum,” i.e. a more uniform mix of frequencies. --- Rebuttal Comment 1.1: Title: Thanks! Comment: Dear Authors, Thanks so much for the efforts in this rebuttal. I raised my score to accept as I believe the additional experiments, along with your interpretation of selectivity, are convincing. I still would tone down Thm 1 and place it as a counterexample. What you can do is argue that oscillations are important, but you cannot be 100% formal in this. A thing that would be super nice to show is that using only exponential decay (real numbers) leads to a basis that requires, in general, more elements compared to exp + exp*sine waves. This may be true, but it requires some spectral or functional analysis. It would make your point, though, SO MUCH stronger. Currently, Thm1 gives a hint, but is not giving us the generality expected from a theorem. --- Reply to Comment 1.1.1: Comment: Thank you very much! As you suggest, we will reposition Theorem 1 and clarify its limitations. With regards to the stronger result you outline, there are two steps towards it which we intend to take: * Extending the result in Theorem 1 as discussed in our rebuttal. Namely, extending it to apply to sufficiently oscillating impulse responses, which form a positive measure subset of the complex SSM’s function class. * Analyzing forward differences of oscillatory impulse responses, thereby hopefully drawing another corollary of Theorem 2, which will establish that in order to approximate an oscillatory function, the real SSM must have dimension or parameter magnitudes exponential in $t$. Thank you again for your thorough analysis and very useful suggestions!
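The "whiter spectrum" claim in the rebuttal above can be checked with a quick numerical sketch. This is our own construction with arbitrary proxy signals (a low-frequency sinusoid standing in for a continuous modality, a random binary stream for a discrete one), not an experiment from the paper.

```python
import numpy as np

# Proxy signals (assumptions for illustration only).
rng = np.random.default_rng(0)
T = 256
smooth = np.sin(2 * np.pi * 4 * np.arange(T) / T)   # 4 full periods: "continuous" proxy
tokens = rng.integers(0, 2, T).astype(float) - 0.5  # centered random tokens: "discrete" proxy

def top_bin_energy_fraction(x):
    # Fraction of spectral energy carried by the single strongest frequency bin.
    power = np.abs(np.fft.rfft(x)) ** 2
    return power.max() / power.sum()

print(top_bin_energy_fraction(smooth))  # near 1: energy concentrated in one frequency
print(top_bin_energy_fraction(tokens))  # small: energy spread across many frequencies
```

Under these proxies, the sinusoid concentrates essentially all its energy in one frequency bin while the token stream spreads it nearly uniformly, matching the qualitative picture of the conjecture.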
Rebuttal 1: Rebuttal: We thank all reviewers for their time and feedback, addressed per reviewer in our individual responses. ***Attached to this comment is a PDF presenting results of new experiments*** which will be added to the paper. These experiments include: * Demonstration of the gap in practical learnability between real and complex SSMs in the theoretically analyzed setting (i.e., in the setting for which we established such a gap). * Demonstration that complex parameterizations for (non-selective) SSMs improve performance of prominent neural network architectures (S4) on real-world data (sequential CIFAR-10, included in the standard long range arena benchmark). * Ablation study showing that among the architectural differences between S6 and S4, the one primarily responsible for closing the gap between real and complex parameterizations for SSMs is the selectivity of the input and output matrices $B$ and $C$. Pdf: /pdf/351a5eb087b1a57e16859ae59bb313701c962dc0.pdf
NeurIPS_2024_submissions_huggingface
2024
Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts
Accept (poster)
Summary: The authors propose a method which generalizes effectively to covariate-shift regions while precisely identifying semantic-shift regions, i.e., domain generalization and OoD segmentation. They design a novel generative augmentation method to produce coherent images that incorporate both various covariate shifts and anomaly objects. Moreover, they introduce a training strategy that recalibrates uncertainty specifically for semantic shifts and enhances the feature extractor to align features associated with domain shifts. The approach is compared across different benchmarks for domain generalization and OoD segmentation (or both). Strengths: The paper is written clearly. The authors address an important problem, namely domain generalization and the detection of unknown objects in one step. While most works focus on one of these robustness problems, the authors present a method that tackles both together. The approach of using the power of generative models to obtain further (augmented) training data to increase the robustness of the network is good. Furthermore, calibrating uncertainty, i.e., generating high OoD scores for semantic-shift regions and performing robustly under covariate shifts, is a sound approach. The method outperforms the given baselines. Weaknesses: The paper is written in a clear way, but it is also a bit unclean: for example, the caption of Figure 1c, the missing reference to Figure 2 in the text, and a few typos (for example "the ResNet backbone are froze" or "AadmW"). The related work section is rather short, and in the comparison to anomaly detection and domain generalization it is only mentioned that those methods focus on a single problem, but not on both (in contrast to the authors). However, I miss a methodological comparison at this point to highlight the novelties of the paper. 
In the SMIYC benchmark, there are other methods that perform better than the method presented, but there is no comparison with them or an argument for why a comparison does not make sense. In addition, 5 different metrics are used in this benchmark for the evaluation, but the authors only report one, which makes the comparison more difficult. The method depends on many hyperparameters, but there are no ablation studies on different values. Technical Quality: 2 Clarity: 2 Questions for Authors: The described papers for domain generalization are very limited and rather old; there is certainly newer literature to compare with. The problem of domain generalization is considered, but the SMIYC validation data is used for model selection, so is it domain adaptation? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: No limitations section in the paper, but the robustness of the method was demonstrated on various datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
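For readers unfamiliar with the metrics this review refers to: AP is average precision for the anomaly class, and FPR@95 is the false-positive rate at the threshold achieving 95% true-positive rate on anomalous pixels. A minimal sketch of the latter follows; the function name and quantile-based thresholding are our choices, and benchmark implementations may differ in tie handling.

```python
import numpy as np

def fpr_at_95_tpr(scores_ood, scores_in):
    # scores: higher = "more OOD". Pick the threshold that 95% of true OOD
    # pixels exceed, then report the fraction of inlier pixels above it.
    thresh = np.quantile(scores_ood, 0.05)
    return float((scores_in >= thresh).mean())

ood = np.array([0.70, 0.80, 0.85, 0.90, 0.95])  # anomaly pixels score high
inl = np.array([0.10, 0.20, 0.30, 0.75])        # one inlier pixel overlaps
print(fpr_at_95_tpr(ood, inl))  # 0.25: one of four inliers crosses the threshold
```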
Rebuttal 1: Title: Rebuttal by Authors Comment: We thank the reviewer for the time and detailed feedback. We address the SMIYC benchmark comparison in the general response and will revise the paper to correct the noted typos. Below, we address your specific concerns. # Weakness 2: Related Work & Novelty **Anomaly Segmentation:** Our method falls under discriminative-based anomaly segmentation methods that utilize additional out-of-distribution (OOD) data to train models to distinguish between known and unknown data [3, 30, 21, 26]. We contribute to the construction of generative-based OOD data and training pipelines. The details of our OOD data generation are explained in the general response. Regarding training, previous works [3, 30, 26] directly train models using OOD score functions calculated on logits (e.g., MSP, Energy, Entropy). In contrast, our approach introduces a learnable uncertainty function, initially set as the standard OOD score function and optimized before fine-tuning the feature extractor for joint known class segmentation and OOD detection. This approach decouples the training process to mitigate competition and leverages the meaningful features learned from previous closed-world pretraining. **Domain Generalization:** Existing methods for domain generalization include instance normalization or whitening [24, 6, 25] and domain-invariant feature learning through domain randomization [33, 29]. Our contribution lies in the latter category, where we propose a generative-based data augmentation technique. Unlike recent works that use generative models for feature [d1] or image randomization [d2], our technique concurrently generates data with both domain shifts and unknown objects. We believe that incorporating various distribution shifts is crucial to avoiding model biases and better handling domain and semantic shifts. # Weakness 3: Evaluation Metrics We would like to clarify that we used two metrics, AP and FPR@95, in the presented SMIYC results. 
These metrics are widely recognized as the **primary evaluation criteria** in the anomaly segmentation literature [1]. It is a common practice in this field to present only the primary metrics in the main paper due to space constraints, as exemplified in [21, d3]. We agree with the reviewer that using multiple metrics provides a more comprehensive evaluation of the method. To address the reviewer’s concern, we have included the results of our method across all five metrics below. As shown in Table A1, our method consistently performs better than or on par with RPL/Mask2Anomaly across all evaluation metrics. These additional results will be included in the appendix of the revised paper. Table A1: Comparison of our method with two recent works, RPL and Mask2Anomaly, on the SMIYC benchmark. | Exp Name | Backbone | SMIYC-RA21 | | | | | SMIYC - RO21 | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | | | AP$\uparrow$ | FPR95$\downarrow$ | sIoU$\uparrow$ | PPV$\uparrow$ | F1$\uparrow$ | AP$\uparrow$ | FPR95$\downarrow$ | sIoU$\uparrow$ | PPV$\uparrow$ | F1$\uparrow$ | | RPL (ICCV 23) | DeepLab v3+ | 83.49 | 11.68 | 49.76 | 29.96 | 30.16 | 85.93 | 0.58 | 52.61 | 56.65 | 56.69 | | Ours | | 88.1 | 8.2 | 56.2 | 34.7 | 37.8 | 90.7 | 0.3 | 48.1 | 66.7 | 58.0 | | Mask2Anomaly (ICCV 23) | Mask2Former | 88.70 | 14.60 | 60.40 | 45.70 | 48.60 | 93.30 | 0.20 | 61.40 | 70.30 | 69.80 | | Ours | | 91.9 | 7.9 | 58.7 | 45.8 | 48.7 | 95.3 | 0.1 | 59.4 | 73.5 | 68.7 | # Weakness 4: Hyperparameters We thank the reviewer for the suggestion. We have added results ablating the two types of hyperparameters in our method: selection ratio and loss margins. The results shown in Table 1 of the attached PDF demonstrate the robustness of our model to various hyperparameters. [d1] Gong, Rui, et al. Prompting diffusion representations for cross-domain semantic segmentation. BMVC (2024). [d2] Jia, Yuru, et al. 
Domain-Generalizable Semantic Segmentation with Image Diffusion Models and Stylized Semantic Control. ECCV (2024). [d3] Grcic et al., On Advantages of Mask-level Recognition for Outlier-aware Segmentation, CVPRW 2023. --- Rebuttal 2: Title: Rebuttal by Authors Comment: # Question 1: Domain Generalization Papers We chose RobustNet (2021)[6] and RuleAug (2023)[29] for our benchmark comparison because they represent two main DG strategies: constraining the learned feature distributions and domain randomization. Although adding more recent DG techniques could enhance our analysis, many of these works do not provide pretrained models on Cityscapes or are based on different segmentation backbones, making direct evaluation and comparison challenging [25, d5]. To address the reviewer’s concern, we additionally evaluated a recently published work, CMFormer [d4] (2024), which adopts the Mask2Former architecture for DG in semantic segmentation. Using the provided pretrained model and official code, we performed inference on the ACDC-POC dataset and compared it with our method under the Mask2Former architecture. We note that the comparison is not entirely fair, as CMFormer uses a more powerful Swin Transformer backbone while we use ResNet50 following Mask2Anomaly. As shown in Table A2, CMFormer unsurprisingly performs better on known class segmentation with a 10-point gap in mIoU and 2 points in mAcc. However, its OOD segmentation performance is significantly worse, with over 60 points lower in AP and 30 points higher in FPR. This supports our finding that existing DG techniques might overly generalize to all types of distribution shifts, making it difficult to recognize unknown objects, raising safety concerns in autonomous driving scenarios. 
| Domain | ACDC-POC | | | | | --- | --- | --- | --- | --- | | Method | AP$\uparrow$ | FPR$\downarrow$ | mIoU$\uparrow$ | mAcc$\uparrow$ | | CMFormer | 27.84 | 31.25 | **60.71** | **85.90** | | Ours | **90.42** | **0.46** | 51.75 | 83.16 | # Question 2: Validation The main difference between domain generalization and domain adaptation techniques is that the former focuses on training one model to generalize to any data with different domain shifts, while the latter typically trains each model for each target domain. Our method belongs to domain generalization as we train one model on Cityscapes and evaluate it on various datasets with different domain shifts. We use the SMIYC validation set for model selection to ensure a fair comparison with other anomaly segmentation techniques in the benchmark, which also use the validation set for model selection. [d4] Bi, Qi et al. Learning content-enhanced mask transformer for domain generalized urban-scene segmentation. AAAI, 2024. [d5] Li, Yumeng et al. Intra-Source Style Augmentation for Improved Domain Generalization. WACV, 2023. --- Rebuttal Comment 2.1: Comment: Thank you for the additional experiments and the responses. Given the new experiments and answers regarding all reviewers, I have increased my score. --- Reply to Comment 2.1.1: Title: Thank you Comment: Thank you for increasing the score and for your time and careful review. We are glad to have addressed your concerns with the additional experiments and discussions, and we will incorporate these into the final manuscript.
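The logit-based OOD score functions named in the rebuttals above (MSP, Energy, Entropy) can be sketched as follows. This is a generic sketch of the standard definitions, not code from the paper; sign conventions vary between papers (here higher MSP/energy means more in-distribution, higher entropy means more uncertain).

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stabilized
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: high for confident (in-distribution) pixels.
    return softmax(logits).max(axis=-1)

def energy_score(logits):
    # Negative free energy (logsumexp of logits): high for in-distribution pixels.
    m = logits.max(axis=-1, keepdims=True)
    return m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1))

def entropy_score(logits):
    # Predictive entropy: high for uncertain (potentially OOD) pixels.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

confident = np.array([[10.0, 0.0, 0.0]])  # one dominant class logit
uniform = np.array([[1.0, 1.0, 1.0]])     # maximally ambiguous logits
print(msp_score(confident), msp_score(uniform))          # confident pixel: higher MSP
print(entropy_score(confident), entropy_score(uniform))  # confident pixel: lower entropy
```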
Summary: This work proposes a novel generative pipeline and fine-tuning method for anomaly detection under domain shift. The generative pipeline uses a semantic-map-to-image model that can leverage the labels from the Cityscapes dataset with some modifications which introduce novel unknown classes. The resulting images have unknown novel objects (semantic shifts) and modified known classes (covariate shifts) while preserving most of the semantic meaning. Authors use the images augmented with their pipeline to train with a contrastive loss that pushes the representations of known classes together while pushing the novel categories away. Strengths: Applying a generative pipeline to augment images with either covariate or semantic shifts while trying to be as realistic as possible is an interesting direction with potential impact in the anomaly segmentation community. Moreover, how to best leverage the generated images explicitly during training is also an interesting avenue to explore. Results presented show convincing improvements upon recent work. Weaknesses: Although experiments are quite extensive, it would have been interesting to decouple the contributions of the generative pipeline and the proposed fine-tuning mechanism in the experiments. Many of the methods in table 2 could have used the images augmented with the generative model, and it would be very interesting to see how much the generated data can benefit those models. Especially because it seems from Table 2 that on the MUAD dataset, the OOD+RuleAug baseline is worse than RPL or Mask2Anomaly, and to my understanding, the OOD+RuleAug baseline consists of the training proposed in Section 3.3 but with the generative data augmentation replaced by a rule-based one from [29]. This leaves open the question of whether other existing methods like RPL or Mask2Anomaly + the proposed generated data could be better than the training scheme proposed in Section 3.3. 
I am aware the authors show in table 3 that every proposed component in the training pipeline leads to a drop in accuracy when removed; however, that does not show that the overall proposed pipeline is better than that of previous works if they all used the generated images for training (I would only focus on comparing with RPL / Mask2Anomaly, which are the most recent ones). Moreover, compared with previous methods, using a contrastive loss was already proposed in prior work, e.g., RPL [21]. I think a better discussion of the related work in section 3.3 to highlight the novelty in the components would be very helpful to readers. In terms of the generative pipeline, I would like to bring to the attention of the authors a preprint which is very much aligned with the proposed pipeline and that might be worth discussing in the related work: [Loiseau *et al.* Reliability in Semantic Segmentation: Can We Use Synthetic Data? ArXiv 2023](https://arxiv.org/pdf/2312.09231) As it is a preprint, the authors are not expected to compare quantitatively, but given the similarity and the fact that it was published in Dec 2023, it might be fair to mention it in the related work and discuss the differences with [8] and [34] a bit more extensively. Last but not least, although the proposed generative pipeline leverages a model that is conditioned on a semantic mask, the generative models still have important limitations and can introduce significant changes that do not align well with the original image. From looking at Fig 5 in the appendix, I could easily spot that in the first image it removes a very large building and replaces it mostly with sky; in the second it does something similar, replacing the building at the end with trees; the fifth one replaces traffic lights with street lamps and in the last image modifies the direction sign with a "sign" that does not mean anything. 
Perhaps for some applications this will not matter, depending on the classes of interest and on the level of semantic detail one might want to achieve, but in the context of autonomous driving this kind of modification might be problematic. I think the pipeline still has its merit and can be of use to the community, but a clearer discussion of the limitations of generative models is needed and perhaps some examples of failure cases would be of interest to add in the appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: See my weaknesses section. In particular, I'd be interested to know the main differences between the proposed training schedule and those of RPL and Mask2Anomaly, and the main differences between the generative pipeline and [8, 34] and [Loiseau *et al.*](https://arxiv.org/pdf/2312.09231). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See my last point in the weaknesses section. More discussion on the limitations of the generative pipeline, especially regarding the preservation of semantic details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Rebuttal by Authors Comment: Thank you for the thorough comments and many constructive suggestions. We appreciate the mention of the interesting work by Loiseau et al. [a1] and have discussed our differences, including [8] and [34], in the general response. We address the other concerns below. # 1. Decouple Contribution We thank the reviewer for the suggestions and have conducted additional experiments on Mask2Anomaly and RPL. The results are shown in Table C1. Below, we discuss the findings: - **Mask2Anomaly**: We observe consistent improvements across all datasets, demonstrating the benefits of using our generated data. However, the final performance achieved by Mask2Anomaly + CG-Aug is still lower than the model fine-tuned with our pipeline. This highlights the efficacy of the proposed training design. - **RPL**: The improvement is not as significant. This may be due to certain aspects of RPL's loss and training design being less suitable for our scenario. Firstly, RPL relies on the original network's predictions to supervise a learnable residual part. Since the original network does not generalize well to data with domain shift, this results in imprecise supervision. Secondly, the RPL uncertainty loss focuses solely on increasing uncertainty for unknowns, without adequately addressing the known classes, particularly for augmented images. Additionally, restricting the trainable parameters to a residual block may limit the model's ability to learn more complex patterns, thereby reducing overall effectiveness. These results demonstrate that effectively utilizing the generated training data with multiple distribution shifts remains an open question. Our work takes a step towards analyzing the shortcomings of existing training designs, offering novel and effective strategies for better handling this data. Table C1: We apply our coherent generative-based augmentation (CG-Aug) to the most recent works, RPL and Mask2Anomaly. 
| | | | RoadAnomaly | | SMIYC-RA21(val) | | SMIYC-RO21(val) | | |---|---|---|---|---|---|---|---|---| | Backbone | FT | OOD Data | AP$\uparrow$ | FPR$\downarrow$ | AP$\uparrow$ | FPR$\downarrow$ | AP$\uparrow$ | FPR$\downarrow$ | | Mask2Former | Mask2Anomaly | COCO (Default) | 79.70 | 13.45 | 94.50 | 3.30 | 88.6 | 0.30 | | | Mask2Anomaly | CG-Aug (Ours) | 85.47 | 22.38 | 97.96 | 1.55 | 89.80 | 0.12 | | | Ours | CG-Aug (Ours) | **90.17** | **7.54** | **97.31** | **1.04** | **93.24** | **0.14** | | DeepLabv3+ | RPL | COCO (Default) | 71.61 | 17.74 | 88.55 | 7.18 | **96.91** | **0.09** | | | RPL | CG-Aug (Ours) | 72.46 | 21.85 | 83.50 | 23.88 | 93.30 | 0.51 | | | Ours | CG-Aug (Ours) | **74.60** | **16.08** | **93.82** | **3.94** | 95.20 | 0.19 | # 2. Contrastive Loss We appreciate the reviewer’s kind suggestion and will add more related work in our model training section (3.3). Below, we discuss the novelty of the proposed contrastive loss. **Compared with contrastive loss in RPL**: RPL employs its contrastive loss *on a projected feature space*, supervising OoD training with a combination of feature contrastive loss and an additional energy loss to maximize uncertainty scores for unknown class data. In contrast, we calculate our contrastive loss *directly on the uncertainty scores*. This direct supervision allows for more explicit and effective ranking of uncertainties across different data types. Experiments in Table C2 replacing our OOD loss with RPL's showed that RPL's feature contrastive loss alone barely improves performance. Even with combined losses, RPL's performance falls short of ours, demonstrating the efficacy of our direct supervision on uncertainty scores. **Compared with other OOD Losses**: Existing OOD losses either maximize uncertainty scores solely for unknown data or supervise unknown and known data separately. In contrast, our method supervises the relative distance between OOD and inlier samples. 
We find this contrastive term to be more robust to hyperparameters and easier to optimize. To demonstrate its efficacy, we replaced the distance-based supervision with value-based supervision, similar to that used in Mask2Anomaly and PEBAL. As shown in Table 3, row #3, this change led to a performance decrease, validating the effectiveness of our loss design. Table C2: Ablation study of our contrastive loss (In DeepLab v3+). Our loss performs better in both OOD detection and known class segmentation results. | | SMIYC-RA21(val) | | ACDC-POC | | | | | --- | --- | --- | --- | --- | --- | --- | | Loss | AP $\uparrow$ | FPR$\downarrow$ | AP$\uparrow$ | FPR$\downarrow$ | mIoU$\uparrow$ | mAcc$\uparrow$ | | FeaConLoss (RPL) | 68.40 | 48.42 | 30.98 | 40.20 | 49.42 | 79.38 | | FeaConLoss +Energy (RPL) | 86.41 | 9.29 | 76.12 | 1.93 | 51.43 | 83.15 | | RelConLoss (Ours) | **93.82** | **3.94** | **82.41** | **1.01** | **54.12** | **85.07** | --- Rebuttal 2: Title: Rebuttal by Authors Comment: # 4.Generation Failures We agree with the reviewer that generative models still have important limitations, and appreciate the reviewer’s detailed examination and careful thought for generation failure cases and impact. Below, we first discuss how we currently deal with generation failures, followed by our analysis of the generation failure pattern, and examine how these failures affect model training. - **Noise-aware Learning:** During training, we mitigate the impact of generation failures by using a sample selection strategy (Sec. 3.3.2). We note that for failure cases mentioned by the reviewer, some can be prevented via pixel selection during training. To better illustrate this, we include the loss and selection map corresponding to the images from Fig 5. in the attached PDF. - **Generation Failure Cases:** Inspired by the reviewer, we examine our generated image and observe that generation failures typically occur in the following scenarios: (a) Remote scenes. (b) Small objects. 
(c) Text-related elements. This demonstrates the limitations of the current generative model, which we hope can be addressed in future research. - **Impact of Generation Failures:** - **Impact on Category Learning:** Generation failures may adversely affect specific classes. We evaluated per-class segmentation results and compared them with the baseline model. Results are presented in Table C3. We find performance in six categories, such as fence, pole, and traffic sign, remains similar (differences less than 1%), while performance on vegetation is worsened by 3%, likely due to poor generation quality for this class. - **Performance Saturation:** We observe that performance tends to saturate with increasing generative data. Experiments varying the dataset scale from 1.0x to 2.0x and 3.0x Cityscapes sizes, as shown in Figure 1(b) in the supplementary material, suggest this saturation. This may be due to an interplay between the benefits of additional data and the negative impact of generation failures. We will include more examples of failure cases and a comprehensive discussion of the limitations of generative models in the revised paper and appendix. Table C3. Per-class segmentation results in mIoU on ACDC-POC dataset. 
| | road | sidewalk | building | wall | fence | pole | traffic light | traffic sign | vegetation | terrain | sky | person | rider | car | truck | bus | train | motorcycle | bicycle | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | DeepLab | 77.57 | 37.70 | 63.55 | 17.46 | **31.22** | 49.42 | 64.24 | 52.78 | **75.75** | 10.58 | 81.12 | 54.17 | 19.37 | 77.89 | **50.76** | **50.74** | 30.33 | 13.77 | 22.05 | | Ours | **85.75** | **58.16** | **74.80** | **40.79** | 29.43 | **50.81** | **71.69** | **53.01** | 72.82 | **30.21** | **81.35** | **62.77** | **32.17** | **79.99** | 49.41 | 49.33 | **38.08** | **29.19** | **38.52** | --- Rebuttal Comment 2.1: Title: Reviewer response Comment: I would like to thank the authors for their detailed response. After reading their response and other reviewers comments I think the authors did a reasonable job at addressing my questions and the other reviewers comments. Therefore I am inclined to keep my positive score. **Missing comparisons:** I agree with reviewer LCzB that it would be nice to include RbA as it has a strong performance on SMIYC and is concurrent to Mask2Anomaly and RPL in ICCV 2023 so it might not be fair to exclude it from the comparison. It would also be nice to add Grcic et al., On Advantages of Mask-level Recognition for Outlier-aware Segmentation, against the comparisons. To me this has been sufficiently addressed with (the second) Table A1, which I think should be Table A2 :) **Novelty of paper:** As I see it there are two main contributions in this work, one is the generative pipeline to augment images with both domain and semantic shifts and the other is the training pipeline to leverage that data. Initially the two were entangled as there were no experiments with other methods and their generated data. 
However, with the additional experiments in Table C1 it is clearer that, although other methods might benefit from the generated data in some cases, the proposed method seems to leverage the generated data better (i.e. Ours with CG-Aug (ours) is better than RPL/Mask2Anomaly with CG-Aug (ours)). There are no experiments with the proposed fine-tuning method and COCO data as OOD, which would have been interesting too, to understand if the proposed fine-tuning method alone could surpass previous works or if the combination with the new generated data is the key to the improved performance. Regarding the comparison with POC [8], it would be interesting to discuss that the motivation in [8] is to precisely avoid domain shifts in the evaluation, as this would hinder anomaly segmentation. All datasets used in the comparison in Table 2 of the rebuttal pdf (RoadAnomaly, SMIYC and ACDC-POC) have strong domain shifts with respect to Cityscapes, but datasets with a smaller domain shift such as Lost and Found were not included. The proposed CG-Aug combines both domain shifts and semantic shifts to improve anomaly segmentation out-of-domain, while POC focuses on introducing semantic shifts that minimally modify the image beyond the introduced objects. Both will have their pros and cons, and this is where I see the novelty in the generative part. 
We are pleased that our additional experiments have clarified the advantage of our training pipeline, which is tailored to our specific task and effectively leverages our generated data. - **Further experiments with COCO data as OOD**: In response to your further suggestion, we have conducted experiments using COCO data as OOD to assess the efficacy of our proposed training pipeline without the generated data. We note that some training components, such as noise-aware learning and relative contrastive loss, are tailored for scenarios involving the generated domain-shift data. Therefore, in this evaluation, we focus primarily on assessing the effectiveness of our two-stage learnable uncertainty function and the relative contrastive loss applied between the original data and OOD data. **The results in Table C5 show that our training method surpasses RPL and Mask2Anomaly on most metrics,** and the combination with our generative data yields even greater performance improvements. - **Comparison with POC:** We appreciate your recognition of the novelty of our generative pipeline. Our work primarily addresses scenarios where both domain shifts and semantic shifts are present, which we believe are common in real-world applications. To address your concerns regarding performance under smaller domain shifts, we have included results on the FS LostAndFound and FS Static datasets, comparing our method with POC using the Mask2Anomaly training pipeline. As shown in Table C6, **our method demonstrates superior performance on FS LostAndFound**, indicating that our generated data closely resembles real OOD scenarios. However, on the FS Static dataset, our performance is lower than that of POC. We note that the FS Static dataset's OOD data is generated using a cut-and-paste technique, which may not reflect real-world OOD distributions. This could explain why simple COCO-pasted data achieves the best results on this dataset. 
Overall, we thank you again for your valuable input, which has helped us clarify the components of our design. We will incorporate all these discussions into the revised paper.

Table C5: Performance of our training pipeline using COCO data, compared to previous methods Mask2Anomaly and RPL.

| Backbone | FT | OOD Data | Road Anomaly | | | SMIYC-RA21 (val) | | SMIYC-RO21 (val) | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | | AUC$\uparrow$ | AP$\uparrow$ | FPR$\downarrow$ | AP$\uparrow$ | FPR$\downarrow$ | AP$\uparrow$ | FPR$\downarrow$ |
| Mask2Former | Mask2Anomaly | COCO | - | 79.70 | 13.45 | 94.50 | 3.30 | 88.60 | 0.30 |
| | Ours | COCO | 95.83 | 80.94 | 29.12 | **97.41** | 1.60 | 92.89 | 0.50 |
| | Ours | Ours | **97.94** | **90.17** | **7.54** | 97.31 | **1.04** | **93.24** | **0.14** |
| DeepLabv3+ | RPL | COCO | 95.72 | 71.61 | 17.74 | 88.55 | 7.18 | **96.91** | **0.09** |
| | Ours | COCO | 96.26 | **76.01** | 17.44 | 91.40 | 6.80 | 96.77 | 0.12 |
| | Ours | Ours | **96.40** | 74.60 | **16.08** | **93.82** | **3.94** | 95.20 | 0.19 |

Table C6: Comparison of our CG-Augmentation with POC on FS LostAndFound and FS Static, integrated into the Mask2Anomaly training pipeline. COCO results are from the official Mask2Anomaly paper; POC alt. and POC c. results are from the official POC publication.

| | FS_LostAndFound (val) | | FS_Static (val) | |
| --- | --- | --- | --- | --- |
| Mask2Anomaly+ | AP$\uparrow$ | FPR$\downarrow$ | AP$\uparrow$ | FPR$\downarrow$ |
| COCO | 69.41 | 9.46 | **90.54** | **1.98** |
| POC alt. | 68.8 | 11.4 | 87.4 | 3.1 |
| POC c. | 73.0 | **9.2** | 87.0 | 2.1 |
| CG-Aug (ours) | **76.56** | 10.17 | 85.70 | 7.16 |
Summary: This paper aims to tackle both covariate-shift and semantic-shift in semantic segmentation. The idea is to use a generative augmentation method to produce coherent images that incorporate both anomaly objects and various covariate shifts at both image and object levels. The semantic segmentation model is then trained on the synthetic images to recalibrate uncertainty for semantic shifts and to enhance the feature extractor to align features associated with domain shifts. The authors have conducted extensive experiments to show the effectiveness of the proposed method over the state-of-the-art methods. Strengths: 1. The paper tackles an important problem in semantic segmentation and is well-motivated. 2. The solution is mostly reasonable and well-motivated. The paper is mostly well-written and organized. 3. The authors have compared with recent state-of-the-art OOD segmentation methods and demonstrated impressive performance. Weaknesses: 1. The novelty of the proposed solution is somewhat limited. While the generative-based data augmentation appears reasonable, its novelty is not clearly articulated. 2. The proposed relative contrastive loss involves several hyperparameters, but it is unclear how to effectively tune these parameters in practice. 3. The necessity of the Two-Stage Noise-Aware Training is not clear. Why not use a single-stage training process that trains the uncertainty function and the feature extractor simultaneously? 4. What are conditioned label masks? 5. How can we determine if a pixel has a clean or incorrect label given the augmented images? Equation 6 seems to be used for assessing the accuracy of the pixel label, but why is this an effective strategy? How is α determined? 6. The proposed method involves many hyperparameters, which reduces its applicability in real-world scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see weaknesses. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Comment: Thank you for your time and constructive feedback. We discuss the novelty of our proposed generative-based augmentation in the general response. We address your other concerns below. # Weakness 2: Hyperparameters for Relative Contrastive Loss Our relative contrastive loss includes three terms, each with a margin value controlling the distance penalty limits. **These margins are set based on the average uncertainty scores from the training set.** Specifically, we compute the differences in uncertainty scores between unknown vs. original known data, unknown vs. augmented known data, and original known vs. augmented known data, and set these differences as the margins for the corresponding distances, respectively. A histogram of uncertainty scores is provided in Figure 1(c) of the attached PDF for reference. Moreover, our two-stage training framework first trains the uncertainty function based on the existing model, allowing this function to adapt to different scales. **This provides flexibility in parameter setting even without prior knowledge.** The experimental results, shown in Table 1 of the attached PDF, demonstrate the model's robustness across a wide range of hyperparameter variations. Specifically, maintaining the loss values within the same order of magnitude ensures that variations in parameters do not significantly affect the results. # Weakness 3: Two-Stage Thanks for pointing this out. The motivation and benefits of our two-stage training design are clarified below. - We observe that an initialized model with closed-world pre-training achieves a good feature representation, while the initial uncertainty function, such as the energy score, is typically sub-optimal. Directly training the feature extractor with a sub-optimal uncertainty function risks disrupting the well-learned feature representations, which can harm known class segmentation and OOD detection (cf. Figure 4(a)). 
- Our two-stage training approach addresses this challenge by first optimizing the uncertainty mapping head based on the current feature representations. This allows the subsequent fine-tuning of the feature extractor to be more effective and less disruptive. Additionally, having a well-initialized uncertainty head before joint training helps minimize task competition between known class segmentation and OOD detection. - In Figure 4(a), we empirically compare our two-stage training with single-stage training, showing that our approach (noted as 'second stage') outperforms the 'single stage' by nearly 10 points in AP. With a closer look, our first-stage training, which involves training only the uncertainty function, already significantly improves the baseline model's performance (from 45% to 85% in AP). This demonstrates the large performance gap between different uncertainty functions under the same feature extractor. # Weakness 4: Conditioned Label Mask The term "conditioned label masks" in the context of Line 208 refers to the label masks used to generate images. As explained in Eq. (1), these masks are created by cut-and-pasting the masks of novel objects onto the original training labels. # Weakness 5: Noise-aware training - **Sample Selection Mechanism**: We use the 'small loss' criterion to determine whether a pixel has a clean label, a simple and widely used technique in the noisy-label learning literature [b1]. Specifically, we calculate and rank the cross-entropy loss for each pixel. Pixels with smaller losses are selected for backpropagation, while those with larger losses are ignored (cf. Eq. (5-6)). This is effective because during training, a model first learns simple and clean patterns before fitting noisy data [b2]. We will include explanations and relevant literature in the paper to aid understanding. A visualization of our sample selection results is illustrated in Figure 4(b) of the paper. 
- **Determining the Selection Ratio α:** We determine α by visualizing the selection map of a small batch of data under several choices to ensure that visibly incorrect patterns are removed. To address the reviewer's concern, we conducted additional experiments with selection ratios ranging from 0.6 to 0.9, as detailed in Figure 1(a) of the supplementary PDF. The results show that while including too many pixels (1.0) introduces noise and including too few (0.6) removes useful regions, the model performance is stable within a wide range (0.7 to 0.9), demonstrating the robustness of the model to this hyperparameter. # Weakness 6: Hyperparameters & Applicability We have demonstrated robustness to loss margins and selection ratio in the previous response, supporting the practicality of our approach for real-world applications. [b1] Jiang, Lu, et al. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. ICML, 2018. [b2] Arpit, Devansh, et al. A closer look at memorization in deep networks. ICML, 2017. Title: Rebuttal by Authors --- Rebuttal Comment 1.1: Title: Thank you for the clarifications. Comment: I would like to thank the authors for the response. In real-world applications of OOD segmentation, it is hard or impossible to tune the hyperparameters, which somewhat limits the applicability of the method. Overall, though, the problem is interesting and the proposed method is reasonable. Therefore, I will maintain my original rating. --- Rebuttal 2: Title: Thank you and Clarification on Hyperparameters Comment: Thank you for your reply and for acknowledging our problem setting and proposed method. Regarding your concern about hyperparameter tuning, as detailed in our responses to weaknesses 2 and 5, the hyperparameters in our method, including loss margins and selection ratios, can be set directly based on training data statistics without the need for further tuning. 
Additionally, in the attached PDF, we demonstrate the robustness of our method across a wide range of these hyperparameters. Other normal training hyperparameters, like the learning rate, are tuned once per backbone and then kept consistent throughout experiments. We want to emphasize that our method does not require more hyperparameter tuning than previous approaches like RPL and Mask2Anomaly. Developing an OOD training strategy that eliminates the need for hyperparameter tuning is an interesting direction, and we will leave it to future work.
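The 'small loss' pixel-selection criterion described in the rebuttal above can be sketched as follows. This is a minimal NumPy illustration of the generic technique, not the authors' implementation; `alpha` stands for the selection ratio discussed above, and ties at the threshold may keep slightly more than the target fraction:

```python
import numpy as np

def small_loss_mask(pixel_losses: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Keep the fraction `alpha` of pixels with the smallest per-pixel loss.

    Returns a boolean mask with the same shape as `pixel_losses`:
    True = pixel is kept for backpropagation, False = pixel is treated
    as potentially mislabeled and ignored.
    """
    flat = np.sort(pixel_losses.ravel())
    k = max(1, int(alpha * flat.size))
    threshold = flat[k - 1]           # k-th smallest loss value
    return pixel_losses <= threshold
```

Only the selected pixels would then contribute to the cross-entropy loss; the rest are masked out of the backward pass.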
Summary: This paper addresses semantic segmentation in the presence of domain and semantic shifts. To enhance model robustness, the authors propose using a generative model guided by ground truth labels to generate domain-shifted images and further inpaint random negative data. Due to potential noise in the generative process, an additional module is introduced to filter incorrectly filled-in image areas, removing pixels with the highest loss in the generated image. The segmentation model is trained with a contrastive loss that promotes high uncertainty in negative samples and consistent uncertainty between original and domain-shifted areas. Training proceeds in two stages. First, an uncertainty function is trained on top of a frozen pretrained semantic segmentation model. Subsequently, both the uncertainty function and the segmentation model are trained simultaneously. Strengths: 1. The method is straightforward and can be integrated with both standard and mask-based segmentation models. 2. Some good qualitative results are presented. 3. The paper is clearly written. Weaknesses: 1. The first contribution regarding the presence of domain and semantic shifts has already been addressed in previous works [a] [b]. 2. There is limited novelty, as the use of generative models for data augmentation [8] has already been proposed. 3. SOTA results on SMIYC [c] [d] were omitted. [a] Zendel et al., WildDash - Creating Hazard-Aware Benchmarks, ECCV 2018 [b] Bevandic et al., Simultaneous Semantic Segmentation and Outlier Detection in Presence of Domain Shift, GCPR 2019 [c] Grcic et al., On Advantages of Mask-level Recognition for Outlier-aware Segmentation, CVPRW 2023 [d] Nayal et al. RbA: Segmenting Unknown Regions Rejected by All, ICCV 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How is sampling done for Eq. 4? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. 
The method is limited by the pretraining of the generative model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time and constructive feedback. We discuss the novelty of our generative-based data augmentation and additional comparison results on SMIYC in the general response. We address your other concerns below. ## Weakness 1: First Contribution We thank the reviewer for highlighting the works by Zendel et al. [a] and Bevandic et al. [b]. These studies indeed demonstrate the necessity and feasibility of handling both semantic segmentation under domain shifts and anomaly segmentation by proposing datasets and baselines. However, our work differs in problem setting and method design. Below, we detail these differences and will revise the contribution description and related work section to better highlight our contributions. - Our work builds upon these foundations by delving deeper into the core challenges of simultaneously enhancing model performance in both areas. Specifically, **we provide a novel analysis of the limitations of Domain Generalization techniques in identifying anomaly objects,** demonstrating the problem of 'over-generalization' for unknown regions. We also address the challenges faced by current state-of-the-art anomaly segmentation techniques, which often make errors in distinguishing between known objects with covariate shifts and real novel/anomaly objects (cf. Fig. 1). Furthermore, our findings indicate that simply combining techniques from both domains does not always yield optimal results for jointly handling domain shifts and semantic shifts. This is due to their focus on different levels (e.g., image-level domain shifts or object-level semantic shifts), leaving the challenge of distinguishing object-level domain shifts and semantic shifts unresolved (cf. Sec. 4.4). 
- We also note that while [a] and [b] are seminal works in domain shifts and anomaly segmentation, **their problem settings are still in early stages.** For example, WildDash [a] contains very few image-level anomaly samples, and while [b] introduces object-level anomaly samples via cut-and-paste, it is limited to animals and involves significant artifacts. More recent benchmarks, such as SegmentMeIfYouCan and MUAD, offer domain and semantic shifts that better reflect real-world scenarios. We have tested existing OOD detection and common domain generalization methods on these benchmarks, highlighting their limitations. Overall, we believe that our work takes a step further in addressing the challenges of jointly handling the two distribution shifts and filling a literature gap. We expect that our results will advocate for more research into developing algorithms that can improve both generalization and anomaly detection. We appreciate the reviewer's suggestion and will incorporate a discussion of these works in the related work section to better illustrate the contributions of our study. # Questions: For the calculation of our relative contrastive loss (Eq. 4), we randomly sample an equal number of pixels from the unknown-class set, original known-class set, and augmented known-class set, to calculate the contrastive losses between unknown and (original or augmented) known pixels. The third contrastive loss term is directly calculated for all paired pixels from original and augmented images. # Limitations: We discussed this limitation of our method in the conclusion. Similarly to what the reviewer mentioned here, the proposed augmentation strategy could be impacted by the quality of the generative model. --- Rebuttal 2: Title: Answer to the rebuttal Comment: I find the rebuttal to be thorough enough to increase my score to borderline accept. --- Rebuttal Comment 2.1: Title: Thank you Comment: Thank you again for your time and valuable comments. 
We are pleased to have addressed your concerns and sincerely appreciate your positive feedback on our work.
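To make the sampling scheme described in the answer above concrete, here is a minimal NumPy sketch: equal numbers of pixels are drawn from the unknown, original-known, and augmented-known sets, and a hinge-style three-term loss is applied to their uncertainty scores. The hinge form and the margins `m1`, `m2`, `m3` are illustrative stand-ins for the paper's Eq. 4 and its data-driven margins, not the exact formulation:

```python
import numpy as np

def sample_equal(pixel_sets, n, rng):
    """Draw n uncertainty scores from each pixel set, with replacement
    only when a set has fewer than n pixels."""
    return [rng.choice(s, size=n, replace=len(s) < n) for s in pixel_sets]

def relative_contrastive(u_unk, u_known, u_aug, m1, m2, m3):
    """Three hinge terms (illustrative form):
    - unknown pixels at least m1 more uncertain than original known pixels,
    - unknown pixels at least m2 more uncertain than augmented known pixels,
    - original vs. augmented known uncertainties consistent to within m3."""
    t1 = np.maximum(0.0, m1 - (u_unk - u_known)).mean()
    t2 = np.maximum(0.0, m2 - (u_unk - u_aug)).mean()
    t3 = np.maximum(0.0, np.abs(u_known - u_aug) - m3).mean()
    return t1 + t2 + t3
```

The loss vanishes once unknown pixels are sufficiently more uncertain than both kinds of known pixels, while the third term keeps uncertainty consistent across the original and domain-shifted versions of the same regions.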
Rebuttal 1: Rebuttal: We thank all reviewers for their time and constructive feedback. Below, we discuss shared concerns and reply to each reviewer with individual responses. # The novelty of our Generative-based Augmentation We address reviewers’ concerns about the novelty of our coherent generative-based data augmentation (CG-Aug). Although our approach is similar to [8], [a1], and [34] in using generative models to enhance training data, there are significant differences in both methodologies and effects, which we summarize as follows: 1. **Simultaneous Generation of Multiple Distribution Shifts:** Our method generates data with multiple distribution shifts, including both semantic-level shifts (e.g., novel objects) and domain-level shifts, in a single generation process. By contrast, [8, 34] focus solely on generating novel objects within the same domain, and [a1] employs separate frameworks for generating novel objects and domain shifts, making their process more complex and time-consuming for generating one image with multiple distribution shifts. 2. **Coherent Novel Object Generation via Semantic Mask-to-Image:** We generate unknown objects using a semantic mask-to-image generation process, which retains the global context of the image and ensures a more natural integration of novel objects. In previous methods, [34] uses a style transfer model to change the global style of cut-and-pasted OOD objects, but the OOD objects and the environment still remain distinctly different. [8] and [a1] use inpainting on cropped patches and then blend these patches into the original image, leading to inconsistencies and requiring additional post-processing to remove artifacts. To evaluate the design of our generative-based augmentation, we compare with three variations: (1) **Semantic-Shift Only (SS):** Generate images with semantic shift using POC. 
(2) **DS or SS:** Create a mixed dataset with either domain shifts (DS) using our semantic-mask-to-image process or semantic shifts (SS) using POC. (3) **DS and SS:** First generate DS data, then inpaint unknown objects. The second and third methods can be seen as applying [a1] to our problem in two ways. Results in Table A1 show that adding domain shift data significantly improves performance over semantic-shift-only data. Jointly generating DS and SS in one image yields better results than generating them separately. Our method, which generates both DS and SS in one step, achieves the best performance, ensuring more coherence without artifacts and outperforming the two-step approach. Additionally, a visualization comparison of our method and POC is provided in Figure 3 of the main paper. A comparison with POC’s official results is included in the attached PDF, showing that our method outperforms POC on three out of four evaluated datasets. These results further demonstrate the superiority of our augmentation design.

Table A1: Ablation study of our Coherent Generative-based Augmentation (CG-Aug). Results are shown on the RoadAnomaly dataset, using our training methods with the Mask2Former network.

| Data | AUROC | AP | FPR@TPR95 |
| --- | --- | --- | --- |
| SS | 95.43 | 83.66 | 10.33 |
| DS or SS | 95.90 | 87.64 | 9.28 |
| DS and SS | 96.47 | 89.08 | 8.16 |
| CG-Aug (Ours) | 97.94 | 90.17 | 7.54 |

[a1] Loiseau et al. Reliability in Semantic Segmentation: Can We Use Synthetic Data? ArXiv 2023

# Additional Comparison on SMIYC Benchmark We thank the reviewer for bringing up these recent works (RbA and M2F-EAM). Below, we discuss our concern about fairness by including them in our benchmark and present the results in Table A2. **Concern on Benchmark Comparison Fairness:** The primary goal of our experimental design is to validate the efficacy of the proposed methods. 
To ensure fairness, we kept the training data and model architecture consistent with previous methods, particularly the recent works RPL and Mask2Anomaly. In contrast, RbA and M2F-EAM utilize different resources and architectures. M2F-EAM, for example, leverages the Mapillary Vistas dataset, which is significantly larger and more diverse than Cityscapes. Additionally, both M2F-EAM and RbA use the Swin-Transformer backbone for the Mask2Former architecture, whereas we use ResNet-50 as in Mask2Anomaly. These differences can significantly impact performance, as evidenced by the ablation studies in both the RbA and M2F-EAM papers. **Results and Analysis:** We provide additional comparisons with RbA and M2F-EAM in Table A2, using the officially reported scores in their papers. Our method outperforms RbA on all evaluation metrics despite using a weaker backbone. When compared to M2F-EAM, our method shows superior performance on the RoadAnomaly and SMIYC-RO tracks.

Table A2. Complementary comparison on the OOD segmentation task, comparing our method with RbA and M2F-EAM using their reported results. Note that new results for RbA are available on the benchmark website, but without additional context, a fair comparison cannot be ensured.

| | | | RoadAnomaly | | SMIYC-RA | | SMIYC-RO | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | Backbone | Training Set | AP $\uparrow$ | FPR@95 $\downarrow$ | AP $\uparrow$ | FPR@95 $\downarrow$ | AP $\uparrow$ | FPR@95 $\downarrow$ |
| Mask2Anomaly (ICCV 23) | ResNet-50 | Cityscapes | 79.70 | 13.45 | 88.70 | 14.60 | 93.30 | 0.20 |
| RbA (ICCV 23) | Swin-B | Cityscapes | 85.42 | 6.92 | 90.9 | 11.6 | 91.8 | 0.5 |
| M2F-EAM (CVPRW 23) | Swin-L | Cityscapes + Mapillary Vistas | 69.4 | 7.7 | 93.8 | 4.1 | 92.9 | 0.5 |
| Ours | ResNet-50 | Cityscapes | 90.17 | 7.54 | 91.9 | 7.9 | 95.3 | 0.1 |

Pdf: /pdf/37e599347b95ae8d44ed251b0d8d5b72b9b66d0c.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Loss Landscape Characterization of Neural Networks without Over-Parametrization
Accept (poster)
Summary: This paper proposes a new condition to describe the optimization landscape of deep neural networks. This condition alleviates restrictive consequences of alternative conditions such as PL, in particular the overparameterisation and absence of saddle points. A convergence for SGD (and other variants of first-order stochastic methods) is proven for the class of functions that satisfy this condition. The paper showcases examples, both in simple cases and in deep learning, where other conditions do not hold while theirs does. Numerical experiments show that the condition holds in practice. Strengths: The paper is very clear and well-written. It tackles the crucial question of understanding which structure in the loss landscape of deep neural networks enables efficient optimization with first-order methods. The arguments in favour of the proposed condition are fairly compelling, with both theoretical results and experimental insights. I verified the proof of one of the main results (Theorem 1), which is correct. Weaknesses: The convergence for SGD is shown up to a non-vanishing error term $O(\beta \sigma^2)$. While the authors give some explanation on this term, I would have liked more insights: is the error term appearing for SPS and NGN the same (this is not directly obvious from the formulas)? Do authors think this term is an artefact of the proof technique, or intimately connected to the $\alpha-\beta$ condition? The authors argue that overparameterization leads to vanishing of this term. As far as I understand, both $\beta$ and $\sigma^2$ are affected by overparameterization, and both terms should decay with overparameterization. Is this correct? Does this give an insight of the "best" level of overparameterization? Technical Quality: 3 Clarity: 4 Questions for Authors: Questions: See Weaknesses for main questions. 
Other questions: - “However, we highlight that the convergence guarantees of the optimizers depend on a term $O(\beta \sigma^2)$ which is stable across all experiments we provide.” What does “stable” mean here? - Remark 3 line 687: what is the connection between r and R small, and the fact that X is situated locally around S? If S is far away from 0, then r and R would be large even around S? - Line 696: several nabla signs missing. I also believe there is a factor 2 missing from the use of the smoothness condition, see for instance equation 2.1.10 in [1]. - Several existing function classes (Polyak--Łojasiewicz inequality, Quadratic Growth and Error Bound) that are mentioned in the overview of Section 2.1 are actually equivalent, see [2]. This would benefit from being mentioned. - Line 84-85: there do exist works showing a PL inequality with only linear overparameterization in width, see [3]. Minor remarks: - The acronyms SPS and NGN are not explained in the main text, and are not that well-known. Spelling out the acronyms once would be beneficial. - $\gamma_b$ in Theorem 2 is not defined (in the main text). [1] Nesterov, Lectures on Convex Optimization, Second Edition, 2018. [2] Rebjock, Boumal, Fast convergence to non-isolated minima: four equivalent conditions for C2 functions, arXiv:2303.00096 [3] Marion, Wu, Sander, Biau, Implicit regularization of deep residual networks towards neural ODEs, ICLR 2024. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **A to W (part 1):** For NGN the first three terms that appear in our Theorem 3 also appear in the convex setting; see Theorem 4.5 in [1]. The fourth (and last) term appears because of the $\alpha$-$\beta$-condition. We highlight that the first three terms shrink with a decreasing stepsize, but the last term persists, and it is proportional to $\beta\sigma^2_{\mathrm{int}}$ similarly to SGD. However, if we consider a convex regime, i.e. $\alpha=1, \beta=0$ in the $\alpha$-$\beta$-condition, then the last term disappears. For SPS, the error term is different. First, it is proportional to $\alpha\sigma^2_{\mathrm{int}}$, i.e. even in the convex regime it remains in the bound (this result aligns with convergence guarantees in [2]). However, in the interpolation regime, for both NGN and SPS, the error terms disappear. In conclusion, the error terms for NGN and SPS are different in the general case ($\alpha,\beta >0$), but both disappear in the interpolation regime. [1] Orvieto, Antonio and Xiao, Lin, An Adaptive Stochastic Gradient Method with Non-negative Gauss-Newton Stepsizes, arXiv preprint arXiv:2407.04358, 2024. [2] Loizou, Nicolas and Vaswani, Sharan and Laradji, Issam Hadj and Lacoste-Julien, Simon, Stochastic polyak step-size for sgd: An adaptive learning rate for fast convergence, AISTATS 2021. **A to W (part 2):** We refer to the general answer. Regarding the "best" level of over-parameterization, could the reviewer please clarify his questions? From Theorem 1, the best rate of SGD is given when a model is over-parametrized, and then the second and third terms in the rate disappear; only the classic optimization term decaying as $\mathcal{O}(1/K)$ remains in this setting. **A to Q1:** Thank you for this comment. We can very roughly estimate the value of $\beta\sigma^2_{\mathrm{int}}$ based on experiments. 
The value of $\sigma^2_{\mathrm{int}}$ can be approximated as the value of the stochastic loss at the end of the training (since we use $x^K \approx x_p$, then $f_i(x^K) \approx f_i(x_p)$). Therefore, assuming that $f_i^*=0$, we get that $\sigma^2_{\mathrm{int}} \approx f_i(x^K).$ Using such approximations, we compute the value $\beta\sigma^2_{\mathrm{int}}$ and show, following the experimental setup of the paper, that $\beta\sigma^2_{\mathrm{int}}$ remains stable and decreases as a model becomes closer to over-parameterization. See the table in the separate pdf file for concrete values. **A to Q2:** We need $\mathcal{S}$ to be inside $\mathcal{X}$, so that the projection operation $\mathrm{proj}_{\mathcal{S}}$ in the definition of the $\alpha$-$\beta$-condition is correctly defined for any $(v,W)\in\mathcal{X}.$ Note that if $r$ and $R$ are small, then $\mathcal{S}$ cannot be arbitrarily far away from $0$ as it lies fully inside $\mathcal{X}$. If $R$ and $r$ are small enough, then the revised value of $\alpha$ we take is $$\max\left\\{\frac{2\max\_{i\in[n]} \log(1+\exp(Rr\\|x_i\\|))}{c-f_i^*}, 1\right\\} = 1.$$ This implies that $\beta=\alpha-1=0.$ We provide Remark 3 to show that in most of the cases, we might need $\beta > 0,$ which implies that quasi-convexity (i.e., $\beta=0$) does not hold in this case. However, we highlight that this is only our intuition, and a more careful study is needed in this case. **A to Q3:** We thank the reviewer for pointing to the typos. We added the missing nabla signs in the proof of the convergence of SGD. Regarding the missing factor 2, there was a typo in the first equality (it should be $\gamma^2\mathbb{E}_k[\\|\nabla f\_{i_k}(x^k)\\|^2]$ instead of $2\gamma^2\mathbb{E}_k[\\|\nabla f\_{i_k}(x^k)\\|^2]$). Therefore, the final result does not change. **A to Q4:** We thank the reviewer for providing a reference [2]. 
It is indeed an interesting result that PL, EB, and QG are equivalent in a neighborhood of $\mathcal{S}$ if $f$ is sufficiently differentiable. We will add this reference to highlight that the discussion on the limitations of the PL condition also transfers to EB and QG. **A to Q5:** We thank the reviewer for providing a reference [3]. After checking the paper, we would like to highlight that their Definition 1 is stricter than the standard PL condition. They require the PL condition to hold in a bounded set around some fixed set of parameters, with the radius of the set vanishing with the number of data points $n$ as $M = \mathcal{O}(1/\sqrt{n})$. We believe this could be one of the reasons the required over-parameterization is linear in $n$ in their work. However, we will add a citation of [3] to the main body to highlight the possible improvement of the necessary over-parameterization under additional restrictions (i.e., in a bounded set around some point). **A to Q6:** We thank the reviewer for this suggestion. We will spell out the acronyms in the list of contributions where we mention the algorithms for the first time. **A to Q7:** $\gamma_b$ is the stepsize upper bound of the SPS${}\_{\max}$ algorithm $\gamma_k = \min\left\\{\frac{f\_{i_k}(x^k) - f\_{i_k}^*}{c\\|\nabla f\_{i_k}(x^k)\\|^2}, \gamma\_{\rm b}\right\\}$. We will add the definition in the statement of the SPS convergence theorem. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Comment: I thank the authors for their detailed rebuttal. Regarding the "best" level of over-parameterization, the authors adequately answered my question (the term "best" referred to the fact that I thought that there was some sort of tradeoff). I keep my score.
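To make the quantity in **A to Q1** concrete, here is a minimal toy illustration of ours (not from the paper's experiments) of the interpolation gap $\sigma^2_{\mathrm{int}} = f^* - n^{-1}\sum_i f_i^*$, which vanishes exactly when all components share a minimizer:

```python
# Two 1-D quadratics that do not share a minimizer: interpolation fails,
# so the gap sigma^2_int is strictly positive.  (Our toy, not the paper's.)
def f1(x):   # minimized at x = 1 with f1* = 0
    return (x - 1.0) ** 2

def f2(x):   # minimized at x = -1 with f2* = 0
    return (x + 1.0) ** 2

def f(x):    # average loss: x^2 + 1, minimized at x* = 0 with f* = 1
    return 0.5 * (f1(x) + f2(x))

sigma2_int = f(0.0) - 0.5 * (0.0 + 0.0)   # f* - (1/n) sum_i f_i^*
print(sigma2_int)  # 1.0
```

In the rebuttal's approximation, $x^K \approx x_p$ and $f_i^* = 0$ turn this exact quantity into the observable proxy $f_i(x^K)$.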
Summary: A major challenge in Deep Learning optimization has been identifying structural conditions on the loss objective that ensure convergence of SGD and variants, since in practice, despite the non-convexity, stochastic gradient algorithms have been tremendously successful in training neural networks. Many conditions have been proposed that purport to explain this phenomenon, including the PL-condition and Aiming. Unfortunately, such conditions may fail to hold unless the network is extremely over-parametrized, which does not align with practice. Moreover, no comprehensive empirical study has been performed verifying these conditions hold on networks of realistic sizes. In this paper, the authors address these issues by performing an extensive empirical analysis, showing, in general, that many prior conditions fail to hold for realistic networks. They also provide theoretical counter-examples where these conditions fail to hold. This shows convergence analyses of SGD-like algorithms performed under these assumptions may not hold in practice. To rectify this unfortunate situation, the authors introduce a new condition, which they refer to as the $\alpha-\beta$ condition. They show that under this assumption SGD and variants like SPS converge to a ball of noise. Perhaps most importantly, the paper provides extensive empirical evidence and several theoretical examples showing the $\alpha-\beta$ condition holds. Strengths: The paper has several major strengths that make it a clear accept in my view: 1. Showing the PL-condition and Aiming do not necessarily hold for networks of realistic size encountered in practice. To the best of my knowledge, such an extensive empirical investigation has not been done before. This is of prime importance in my view, as much of the recent DL optimization literature takes these conditions for granted despite lacking a strong empirical foundation. 
I've long harbored doubts about how relevant the analysis in the over-parametrized regime (where said conditions hold with high probability) is to practice, especially considering analyses over the past couple of years showing networks that are well-approximated by their first-order Taylor expansion do not perform as well as networks not in this regime. So, I think the extensive empirical work done in this paper showing these conditions don't hold for networks commonly used in practice is a valuable contribution to the literature. 2. Introduction of the $\alpha-\beta$ condition and convergence analysis. The authors have gone beyond just identifying failure modes of previous conditions. They introduce a new condition, the $\alpha-\beta$ condition, under which optimizers like SGD (and several variants) converge to a ball-of-noise (or the minimum in the interpolation setting). Moreover, the condition allows for more realistic loss landscapes, i.e., saddle-points can occur. 3. Empirical verification of the $\alpha-\beta$ condition. This is one of the most significant contributions of the paper. The authors provide strong empirical evidence that $\alpha-\beta$ holds for networks of practical interest. 4. The paper is very well-written. I found the paper easy to follow and enjoyable to read. Weaknesses: Overall, the paper has no major weaknesses, but it can improve on a few minor points. 1.) For the MLP and ResNet experiments I would have liked to have seen experiments with SGD+momentum and Adam (or AdamW), as these tend to be the most popular optimizers employed in practice. Given that you observed $\alpha-\beta$ holds for NAdamW for language tasks, I'd be surprised if you found the condition doesn't hold in the MLP and ResNet settings for optimizers other than SGD. 2.) 
Continuing from the preceding point, it would be insightful if, besides verifying $\alpha-\beta$ for several sets of optimizers, the authors also plotted the corresponding loss curves alongside and reported the estimated $\alpha, \beta$ values for each optimizer. This would give insight into how values of $\alpha, \beta$ differ across optimizers, and could help explain why one optimizer converges faster than another, as it may exhibit more favorable values of $\alpha, \beta$. Though admittedly, this requires some extrapolation as the current theory doesn't apply to a method like Adam, so it is unclear how the values of $\alpha$ and $\beta$ would affect its convergence. But it could lead to interesting questions that provide more avenues for future research. 3.) The authors provide no idealized setting, i.e. wide networks, where $\alpha-\beta$ holds. It would be interesting to see if the level of overparametrization required to ensure $\alpha-\beta$ is weaker than what is needed to ensure the Aiming condition. Intuitively I'd expect the answer to be yes, given that it is a weaker condition than Aiming and the strong empirical evidence provided in the paper. Nevertheless, it seems non-trivial to verify and somewhat tangential to the goals of the paper, so it's understandable the authors chose to leave this as a direction for future work. Technical Quality: 4 Clarity: 4 Questions for Authors: My main suggestion/request is that the authors address the first point under the Weaknesses section. If the authors promise to include additional experiments along this line, I'm willing to raise my score. The results themselves can appear in the supplement, but there should be a clear reference to them in the main paper. In my view, this would significantly increase the value of the paper, as it would show that $\alpha-\beta$ empirically holds for three of the most popular optimizers in DL: SGD, SGD+Polyak momentum, and Adam. 
Even if the condition doesn't hold for one of them, say, Adam, it's still valuable as it raises interesting questions for future research. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: I think the authors have adequately addressed limitations of the work for the most part. But I think two more items should be added: 1) The assumption of globally Lipschitz continuous gradients. It is known empirically that models like transformers do not satisfy this property; see Zhang et al. (2019). Thus, the analysis here may not hold in such settings. It is known empirically, however, that transformers satisfy a generalized smoothness condition of the form [Zhang et al. (2019)]: $\\|H(x)\\| \leq L_0+L_1\\|\nabla f(x)\\|$. The authors should state their assumption as a limitation and point to extending the convergence analysis under the $\alpha-\beta$ condition to the generalized smoothness setting as an interesting direction for future work. 2) The analysis does not cover methods like Adam, which is arguably the most popular optimization algorithm in deep learning. Moreover, much progress has been made in recent years on analysis of its performance; see, for instance, Li et al. (2024). So, this should be stated as a limitation and an interesting direction for future work. References: 1.) Zhang, Jingzhao, Tianxing He, Suvrit Sra, and Ali Jadbabaie. "Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity." In International Conference on Learning Representations. 2019. 2.) Li, Haochuan, Alexander Rakhlin, and Ali Jadbabaie. "Convergence of Adam under relaxed assumptions." In Advances in Neural Information Processing Systems 36 (2024). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough reviews, insightful comments, and valuable questions regarding our paper. **A to W1:** The main reason why we used SGD for the MLP, CNN, and ResNet experiments is the fact that SGD is known to be able to train those models, i.e., the last iterate $x^K$ is a good approximation of $x^*$ [1]. In contrast, for larger networks, especially for language modeling tasks, SGD is not able to train a model well; see e.g. [2]. Therefore, we use NAdamW to obtain a better last iterate $x^K$ as an approximation of $x^*.$ [1] Choi, Dami and Shallue, Christopher J and Nado, Zachary and Lee, Jaehoon and Maddison, Chris J and Dahl, George E, On empirical comparisons of optimizers for deep learning, arXiv preprint arXiv:1910.05446, 2019. [2] Noci, Lorenzo, et al. "Signal propagation in transformers: Theoretical perspectives and the role of rank collapse." Advances in Neural Information Processing Systems 35 (2022): 27198-27211. **A to W2:** Following the experimental setup that the reviewer suggested, we present experiments on the ResNet9 model on the CIFAR100 dataset trained with three different optimizers in a separate pdf file. We use SGD (stepsize $0.01$ with OneCycle learning rate scheduler), SGDM (stepsize $0.01$, momentum $0.9$ with OneCycle learning rate scheduler), and Adam (stepsize $0.0001$, default momentum and epsilon parameters with OneCycle learning rate scheduler). For each optimizer, we run experiments with $3$ random seeds to obtain more stable results. We observe that Adam converges slower than SGD and SGDM. This is because the learning rate for Adam is chosen to be slightly smaller so that the convergence behaviour at the end of training is stable (with learning rates $0.001$ or higher Adam converges faster than SGD and SGDM, but the fluctuations at the end of training are too high. 
This most likely happens because, at the end of training, the Adam stepsize involves a division by a small number as the gradient becomes close to zero. Therefore, we decided to choose a smaller stepsize which still gives the same stochastic loss at the end but with smaller fluctuations). First, we observe that momentum decreases the $\alpha$, $\beta$ constants in the proposed condition, which suggests that the trajectory of SGD with momentum is better under the $\alpha$-$\beta$-condition. Next, we see that the Adam optimizer achieves much better $\alpha$ and $\beta$ constants than both SGD and SGDM. The provided experimental results open interesting new questions, as in practice both the stepsize adaptivity of Adam and momentum improve convergence under the $\alpha$-$\beta$-condition. **A to W3:** We agree with the reviewer that understanding how over-parametrization affects the $\alpha$-$\beta$-condition is an important direction to explore in follow-up works, although, as you pointed out, it is a complex and non-trivial question. We will acknowledge this as a limitation of our work that requires further investigation. Thank you for highlighting this point. **A to Q:** We observe that the condition still holds using all three mentioned algorithms, which we agree is an interesting addition to the paper. We present the discussion in **A to W2** together with the plots in a separate file. **A to L1:** We thank the reviewer for these comments. Relaxed smoothness is indeed an interesting approach for future work. We will add a discussion on this to the limitations at the end of the paper. **A to L2:** We agree with the reviewer that Adam is a widely used optimizer in practice, and including convergence guarantees for Adam would be a nice addition to the paper. However, we emphasize that deriving formal convergence guarantees for Adam requires an involved analysis, which seemed out of scope for a paper where we already derived convergence guarantees for three optimizers. 
Nonetheless, we concur with the reviewer that this topic should be noted for future research endeavors. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: I would like to sincerely thank the authors for their detailed reply. In particular, I appreciate the authors adding the experiments I've requested. I think they will enhance the quality of the submission. The response has addressed all my concerns. As I said in my initial review, I think this paper is interesting and provides a strong contribution. Therefore, I am happy to raise my score from 7 to 8.
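As a concrete sketch of the kind of per-iterate verification discussed in **A to W2**, one can track, along an optimizer trajectory, the smallest $\beta$ for which the $\alpha$-$\beta$-condition holds with a fixed gap $\alpha-\beta$. Everything below is a hypothetical toy setup of ours (two 1-D quadratics, $\mathcal{S}=\\{0\\}$, gap $0.1$), not the paper's code:

```python
# Toy per-iterate estimation of the smallest admissible beta in the
# alpha-beta-condition, with alpha = beta + gap held fixed.  All names and
# the two-quadratic objective are our own illustration, not the paper's.
fs = [lambda x: (x - 1) ** 2, lambda x: (x + 1) ** 2]
grads = [lambda x: 2 * (x - 1), lambda x: 2 * (x + 1)]
f_opt = [0.0, 0.0]   # per-sample optima f_i^*
x_p = 0.0            # projection onto S = {0} (minimizer of the average loss)
gap = 0.1            # enforce alpha - beta = 0.1

def min_beta(x):
    """Smallest beta >= 0 such that, for every i,
    <grad f_i(x), x - x_p> >= (beta+gap)(f_i(x) - f_i(x_p)) - beta(f_i(x) - f_i^*)."""
    candidates = [0.0]
    for fi, gi, fi_star in zip(fs, grads, f_opt):
        lhs = gi(x) * (x - x_p)
        denom = fi(x_p) - fi_star   # >= 0 since f_i^* is the minimum of f_i
        if denom > 0:
            candidates.append((gap * (fi(x) - fi(x_p)) - lhs) / denom)
    return max(candidates)

# Track the estimate along a plain GD trajectory on f = (f_1 + f_2) / 2.
x, lr, trace = 3.0, 0.1, []
for _ in range(20):
    x -= lr * 0.5 * (grads[0](x) + grads[1](x))
    trace.append(min_beta(x))
print(trace[-1])  # ~0.06: a strictly positive beta is needed near the solution
```

Far from $\mathcal{S}$ the estimate is $0$ (the condition holds there with $\beta=0$); it becomes positive close to the solution, matching the intuition that $\beta>0$ is forced by the non-interpolating components.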
Summary: This paper introduces a new regularity condition named the $\alpha$-$\beta$ condition. To motivate the necessity of such a condition, it first shows empirically that the aiming condition is not always satisfied in neural network training. The paper then supports the $\alpha$-$\beta$ condition with $i).$ examples (including a shallow neural network) that satisfy such a condition; $ii).$ convergence guarantees of SGD, SPS, and NGN under such a condition combined with smoothness; and $iii).$ empirical evidence that the $\alpha$-$\beta$ condition holds during neural network training. Strengths: 1. The newly introduced $\alpha$-$\beta$ condition covers the case where the function has saddle points. 2. The paper tries to give examples for which the condition holds. 3. The paper empirically identified cases where the aiming condition does not hold. Weaknesses: 1. The proof for Example 4 (shallow neural network) is problematic. In particular, based on the definition and assumption of $S$, the paper derived $f_i^* = 0$, and that points in $S$ have bounded norm (which is necessary to obtain the equation below line 684). However, these two conditions are contradictory as the logistic loss only achieves zero training loss when the output of the neural networks has magnitude infinity. This breaks the proof of Example 4. 2. The theoretical implications of Theorems 1-3 are weak. The minimum loss during training is upper bounded by $O(\beta\sigma_{\text{int}}^2)$, which can be extremely large. First, in real-world applications, it is very hard to control $\sigma_{\text{int}}^2$ as the model output can vary a lot based on different inputs (this is not a big issue, as many previous works assume the boundedness of $\sigma^2$). Moreover, $\beta$ can be as large as $10^6$ in order for the condition to hold, as shown in Figure 4. Multiplying the two gives an error region that does not seem to be small enough. 3. 
There is no strong theoretical evidence that the proposed condition holds for neural network training in general. In particular, a major significance of developing a regularity condition for neural network training is to theoretically show the convergence of neural network training based on such conditions. The proposed condition loses such significance because it misses the theoretical connection with neural network training. 4. The empirical justification that neural network training satisfies the proposed condition is limited. Figure 4 only shows the condition holds for large $\beta$ (a weaker condition), which does not imply great applicability of the condition in showing the convergence property. Indeed, Figures 5-7 show that the condition holds for smaller $\beta$ for harder tasks, which is quite counter-intuitive since harder tasks should have a less favorable loss landscape. However, it should be noticed that the results are obtained by using $x^K$ to approximate $x^*$. Therefore, the seemingly favorable result may come from the fact that in harder tasks $x^K$ is a worse estimator of $x^*$ than in simpler tasks, which implies that the results reported in Figures 5-7 may not be showing the actual $\alpha$-$\beta$ condition in practice. Technical Quality: 2 Clarity: 3 Questions for Authors: Is it possible to establish the convergence rate of non-stochastic gradient descent based on the proposed condition and the smoothness condition? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors mentioned some limitations of the paper. However, I believe that the biggest limitation is the missing theoretical connection between the proposed condition and neural network training, which is not mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough reviews, insightful comments, and valuable questions regarding our paper. **A to W1:** Thank you for pointing this out. To ensure boundedness of $\mathcal{S}$, one can simply add L2 regularization. We provide a sketch of the proof and will change Example 4 in the revised version of the paper. **Example.** Consider training a two-layer neural network with a logistic loss $f = n^{-1}\sum_{i=1}^nf_i$, $f_i(W,v)=\phi(y_i\cdot v^\top\sigma(Wx_i))+\lambda_1||v||^2+\lambda_2||W||^2_F$ for a classification problem where $\phi(t):=\log(1+\exp(-t)),$ $W\in\mathbb{R}^{k\times d}, v\in\mathbb{R}^{k},$ $\sigma$ is a ReLU function applied coordinate-wise, $y_i\in\\{-1,+1\\}$ is a label and $x_i\in\mathbb{R}^d$ is a feature vector. Assume that interpolation does not hold, i.e. $\min_{(z,Z)\in\mathcal{S}}f_i(z,Z)>f_i^*$. Let $\mathcal{X}$ be any bounded set that contains $\mathcal{S}$. Then the $\alpha$-$\beta$-condition holds in $\mathcal{X}$ for some $\alpha\ge1$ and $\beta=\alpha-1.$ **Proof sketch.** Due to the space limit, we only mention the main differences in comparison with the proof of Example 4 in the submission. - First, we have additional terms $\lambda_1v$ and $\lambda_2W$ in the calculations of the gradients $\nabla_vf_i(v,W)$ and $\nabla_Wf_i(v,W)$, respectively. - The optimal value satisfies $f_i^*>0$ because of the L2 regularization. - Because of the L2 regularization, $\mathcal{S}$ is bounded, and by the example statement it lies fully inside $\mathcal{X}.$ - From the non-interpolation assumption we make, there exists $c:=\min_{i\in[n]}\min\_{(Z,z)\in\mathcal{S}}f_i(Z,z)>f_i^*.$ - In the next derivations, we mainly follow the derivations of Example 4. 
After using convexity of $\phi$, the $\alpha$-$\beta$-condition is verified if we have\begin{align}&2\phi(y\_i(v\circ e_w)^\top Wx_i)+2\lambda_1\left<v,v-z\right>+2\lambda_2\left<W,W-Z\right>_F-\phi(y_i(v\circ e_w)^\top Zx_i)-\phi(y_i(z\circ e_w)^\top Wx_i)\\\\&\quad\ge\alpha\left[\phi(y_i(v\circ e_w)^\top Wx_i)+\lambda_1||v||^2+\lambda_2||W||_F^2-\phi(y_i(z\circ e_z)^\top Zx_i)-\lambda_1||z||^2-\lambda_2||Z||^2_F\right]\\\\&\quad-\beta(\phi(y_i(v\circ e_w)^\top Wx_i)+\lambda_1||v||^2+\lambda_2||W||^2_F-f_i^*).\end{align} - Rearranging terms and choosing $\alpha-\beta=1$, we get\begin{align}&\phi(y_i(v\circ e_w)^\top Wx_i)+\lambda_1||v-z||^2+\lambda_2||W-Z||^2_F+\alpha\phi(y_i(z\circ e_z)^\top Zx_i)+(\alpha-1)\left[\lambda_1||z||^2+\lambda_2||Z||^2_F\right]\\\\&\quad\ge\phi(y_i(v\circ e_w)^\top Zx_i)+\phi(y_i(z\circ e_w)^\top Wx_i)+(\alpha-1)f_i^*.\end{align} - Since $(W,v),(z,Z)\in\mathcal{X}$, there exist constants $R,r\ge0$ such that $||z||,||v||\le r$ and $||Z||_F,||W||_F\le R$, and the RHS above is bounded as\begin{align}2\max_i\log(1+\exp(Rr||x_i||))+(\alpha-1)f_i^*\ge\phi(y_i(v\circ e_w)^\top Zx_i)+\phi(y_i(z\circ e_w)^\top Wx_i)+(\alpha-1)f_i^*.\end{align} - From the assumption, we have $\phi(y_i(z\circ e_z)^\top Zx_i)\ge c$. - Then we can take $\alpha\ge\max\left\\{\frac{2\max_i\log(1+\exp(Rr||x_i||))}{c-f_i^*},1\right\\}$ and $\beta=\alpha-1.$ We will modify Example 4 in the paper to the one with L2 regularization. **A to W2:** We refer to the general rebuttal. **A to W3:** We agree that a theoretical verification of the proposed condition for neural networks is an important question, albeit a difficult one. Most existing theoretical results rely on impractical amounts of over-parameterization, whereas our aim was to relax this requirement. Notably, Example 4 demonstrates that the $\alpha$-$\beta$ condition holds for 2-layer NNs with L2 regularization, albeit under somewhat restrictive assumptions, underscoring the complexity of the problem. 
Empirically, we have conducted an extensive verification of the $\alpha$-$\beta$ condition across a wide range of architectures, indicating that this proposed condition is a promising direction for the analysis of neural networks. **A to W4:** We agree with the reviewer that for harder tasks $x^K$ might be a worse approximation of $x^*$ than for easier ones. We did mention in the limitations section that the empirical verification of the $\alpha$-$\beta$ condition is a difficult task. However, we use the best-known optimizer to get as close as possible to $x^*$ (we choose the optimizer and last iterate $x^K$ such that the performance of networks is close to state-of-the-art). We emphasize that this type of verification has not been conducted in previous studies to the best of our knowledge, making it a significant advancement in the literature. Furthermore, we have made a significant effort to provide empirical verification across a wide range of neural network architectures, including those that are widely used in practice, such as ResNet and Transformers. **A to Q:** Yes, it is possible to derive the convergence of GD under $\alpha$-$\beta$-condition. 
Note that averaging the $\alpha$-$\beta$-condition across all $i\in[n]$ gives $$\left<\nabla f(x),x-x_p\right>\ge\alpha(f(x)-f(x_p))-\beta\left(f(x)-n^{-1}\sum_if_i^*\right).$$ Using this inequality and following the standard proof of GD, we get \begin{align}\mathrm{dist}(x^{k+1},\mathcal{S})^2 &\le\mathrm{dist}(x^{k},\mathcal{S})^2-2\gamma\left<\nabla f(x^k),x^k-x_p^k\right>+\gamma^2||\nabla f(x^k)||^2\\\\&\le\mathrm{dist}(x^{k},\mathcal{S})^2-2\alpha \gamma(f(x^k)-f^*)+2\beta\gamma\left(f(x^k)-n^{-1}\sum_if_i^*\right)+2L\gamma^2(f(x^k)-f^*)\\\\&\le\mathrm{dist}(x^{k},\mathcal{S})^2-2(\alpha-\beta-L\gamma)\gamma(f(x^k) - f^*)+2\beta\gamma\left(f^*-n^{-1}\sum_if_i^*\right).\end{align} Choosing $\gamma\le \frac{\alpha-\beta}{2L}$ gives $$\mathrm{dist}(x^{k+1},\mathcal{S})^2\le\mathrm{dist}(x^{k},\mathcal{S})^2-(\alpha-\beta)\gamma(f(x^k)-f^*)+2\beta\gamma\left(f^*-n^{-1}\sum_if_i^*\right).$$ Finally, unrolling this recursion, we derive the following rate: $$\min\limits_{0\le k<K}[f(x^k)-f^*]\le\frac{\mathrm{dist}(x^0,\mathcal{S})^2}{\gamma(\alpha-\beta)K}+\frac{2\beta}{\alpha-\beta}\sigma^2_{\mathrm{int}}.$$ --- Rebuttal Comment 1.1: Title: Response to the Author's Rebuttal Comment: Thank you so much for providing the detailed explanation and the proof sketch. For W1, I agree with the new proof sketch, which makes the set $S$ bounded by adding the regularization term. For W3, I agree that verifying the condition theoretically on NNs with more complicated architectures can be a significant next step. For W2 however, my concern remains about the non-vanishing term. Based on the authors' argument, I agree that the non-vanishing term is necessary. However, this does pose the question of whether the $\alpha$-$\beta$ condition is powerful enough to guarantee a good convergence property. In short, I am concerned that this condition might be too relaxed, as one can choose very large $\beta$ and $\alpha$ while maintaining the difference between the two. 
For W4, I understand the difficulty of verifying $\alpha$-$\beta$ condition, and I am truly aware of the fact that finding minimizers of NN training is basically not possible. However, my real concern is that the values of $\alpha$ and $\beta$ are large for simpler networks, but smaller for more complicated networks. I still suspect that the large values of $\alpha$ and $\beta$ will hold across all neural network training. Combining W2 with W4, I still have concern that the $\alpha$-$\beta$ condition may not be a good condition for NN training. I have raised the score based on the new proof sketch, but I believe that the paper still falls below the acceptance borderline given the above concern. --- Rebuttal 2: Title: Response Comment: We would like to highlight that the value of $\beta$ itself is not involved in the convergence rate, but rather $\alpha-\beta$ and $\beta\sigma^2_{\mathrm{int}}$. While $\alpha-\beta$ is typically a constant of order $0.1$ in our experiments, we also observe that $\beta\sigma^2\_{\mathrm{int}}$ decreases with increasing over-parameterization (see figures in additional pdf). Moreover, we decided to conduct experiments on Resnet18 and Resnet34 on CIFAR100 to support our claims on larger models. Since the rules regarding posting external links are unclear, we provide approximate values of the smallest $\beta$ found in the experiments over 3 runs: |model| batch size | $\beta$| |-------|:-------------:|:----------:| |Resnet18| 64 | 578| |Resnet18| 128| 160| |Resnet18| 256 | 49| |Resnet34| 64| 797| |Resnet34|128 | 494| |Resnet34| 256| 70| We will include the plots of this set of experiments in the revised version of the paper. We observe that the value of $\beta$ tends to decrease when we increase the depth of the Resnet model (note that we have results for Resnet9 in the main paper). These results are consistent with all our previous comments, but in this case for larger models as well. 
Based on all the results, we do not agree with the reviewer that the $\alpha$-$\beta$-condition might be too relaxed. The experimental and theoretical results show all expected practical trade-offs. --- Rebuttal 3: Title: Response Comment: Dear reviewer, As the deadline is approaching, we would like to know if the above response answers your concern regarding the values of $\beta$. We highlight that the Resnet architecture can be accurately trained using SGD with a OneCycle learning rate schedule, reaching a small loss. Therefore, the issue of finding a bad approximation of $x^*$ is not the same as for large models. Based on the Resnet experiments, the values of $\beta$ tend to decrease with the number of layers, as we expect (i.e., with increasing complexity of the model). Hence, in our opinion, the proposed $\alpha$-$\beta$-condition captures all expected trends in training (convergence to a non-vanishing neighborhood, whose size vanishes as a model becomes more over-parameterized).
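The unrolled GD bound derived in **A to Q** above can be sanity-checked numerically. The sketch below is our own toy instance (two convex 1-D quadratics, so the $\alpha$-$\beta$-condition holds with $\alpha=1$, $\beta=0$ and the bound reduces to $\mathrm{dist}(x^0,\mathcal{S})^2/(\gamma(\alpha-\beta)K)$), not an experiment from the paper:

```python
# GD on f(x) = ((x-1)^2 + (x+1)^2) / 2 = x^2 + 1 with S = {0}, f* = 1.
# Each component is convex, so alpha = 1, beta = 0 and the
# 2*beta/(alpha-beta)*sigma^2_int term in the bound vanishes.
grad_f = lambda x: 2 * x
L, alpha, beta = 2.0, 1.0, 0.0
gamma = (alpha - beta) / (2 * L)    # stepsize allowed by the derivation
x0, K = 3.0, 50

x, best = x0, float("inf")
for _ in range(K):
    best = min(best, x ** 2)        # f(x^k) - f* = x^2
    x -= gamma * grad_f(x)

bound = x0 ** 2 / (gamma * (alpha - beta) * K)
print(best, bound)                  # best suboptimality lies below the bound
```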
Summary: This paper proposes a novel class of functions and proves convergence of gradient descent (and some other optimizers). Contrary to some previous classes, in relation to deep neural networks, this new class does not require extreme overparameterization. In addition to theoretical convergence results, experiments are provided showing that some example neural networks seem to belong to the function class. Strengths: The paper is written very clearly. It provides a new theoretical tool for studying convergence in deep neural networks, and has the potential for high impact. Weaknesses: Some of the highlighted differences with respect to previous work are unfair, in particular with respect to previous work on overparameterized models. For example, Figures 1c and 1d show very small values of the PL constant in practical problems, implying slow theoretical convergence. However, the same is true for the new proposed class of functions, where the PL constant is replaced by alpha-beta, which seems very small in most experiments (alpha and beta have very similar values), thus implying very slow convergence. It would be easy to plot alpha-beta values in experiments to verify how large this quantity can get, and thus show how fast the convergence predicted by theory is. Also, all other function classes considered in previous work guarantee convergence to a global minimum; here, instead, convergence theorems have an extra term since different data points have different optima, and it's not clear how big those terms are. It may be that these terms are so large that the bounds become trivial, for example when those terms are much larger than the decaying 1/K terms. At least some discussion comparing the magnitude of such terms would make the work more valuable. Technical Quality: 4 Clarity: 4 Questions for Authors: In my understanding, if the interpolation condition holds, then the new proposed class is equivalent to PL. Is that true, and why is this not explained in the paper? 
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough reviews, insightful comments, and valuable questions regarding our paper. ***W1.*** "Some of the highlighted differences..." ***A to W1:*** This is a good comment and we will provide further detail on it in a revision. In Figures 1c-1d we observe that the empirical PL constant can be of order $10^{-7}.$ Besides, in each figure where we plot the empirical values of $\alpha$ and $\beta$, we include only those values that satisfy $\alpha \ge \beta + 0.1$. Let's now compare the optimization terms decreasing with $K$ in the rates under both conditions: $\frac{L}{K(\alpha-\beta)}$ under the $\alpha$-$\beta$-condition vs $(1-\frac{\mu}{L})^K$ under the PL condition (see Theorem 3.9 in [1] for the explicit rate in the deterministic setting. Note that the contraction factor in the rate in the stochastic setting is even worse; see Theorem 5.10). We refer the reviewer to Figure 3 in the separate pdf file where we provide plots for several values of $L$ and $\mu=10^{-7}, \alpha-\beta=0.1$. We observe that the theoretical convergence under the $\alpha$-$\beta$-condition with empirical values of $\alpha$ and $\beta$ is always more favorable than that under the PL condition with empirical value of $\mu.$ In addition, from a theoretical point of view (see Theorem 1) we can choose the stepsize of order $\mathcal{O}(\frac{\alpha-\beta}{L})$ which implies that the first term in the convergence bound is $\mathcal{O}\left(\frac{L\mathrm{dist}(x^0,\mathcal{S})^2}{K}\right),$ i.e. it is not affected by $\alpha-\beta$ at all which shows a clear advantage of $\alpha$-$\beta$-condition over PL condition. ***W2:*** "Also, all other function classes considered in previous work..." ***A to W2:*** We refer to the general rebuttal. ***Q:*** " In my understanding, if the interpolation condition holds..." ***A to Q:*** Good question! 
In the interpolation regime, we have $f_i^* = f^*.$ This means that $\sigma_{\mathrm{int}}^2 = 0.$ Therefore, $\alpha$-$\beta$-condition reduces to $$ \left<\nabla f_i(x), x-x_p\right> \ge \alpha(f_i(x) - f_i(x_p)), \quad x_p = \mathrm{proj}_{\mathcal{S}}(x). $$ If we replace $x_p$ by some fixed point $x^*\in\mathcal{S},$ then the functions satisfying the above are called quasi-convex. However, to the best of our knowledge, there is no known relation between quasi-convex and PL functions. Therefore, the interpolation regime does not imply that the $\alpha$-$\beta$-condition reduces to PL. We will add a comment on this in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, my opinion is still that the paper should be accepted.
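The rate comparison described in **A to W1** is easy to reproduce. The script below is ours; it only plugs in the orders of magnitude quoted in the rebuttal ($\mu \approx 10^{-7}$, $\alpha-\beta \approx 0.1$, and $L=1$ for concreteness):

```python
import numpy as np

# O(L / (K (alpha - beta))) under the alpha-beta-condition vs. the
# (1 - mu/L)^K contraction under PL with a tiny empirical PL constant.
L, mu, ab_gap = 1.0, 1e-7, 0.1
K = 10.0 ** np.arange(7)                # K = 1, 10, ..., 10^6
bound_ab = L / (K * ab_gap)
bound_pl = (1.0 - mu / L) ** K

for k, a, p in zip(K.astype(int), bound_ab, bound_pl):
    print(k, a, p)
# at K = 10^6 the alpha-beta bound is 1e-5 while the PL bound is still ~0.9
```

With a PL constant this small, the nominally linear rate is slower than the $\mathcal{O}(1/K)$ bound for any realistic iteration budget.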
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable comments and questions that allowed us to improve our paper. **Non-vanishing term in the convergence rate:** Three of the reviewers raised the question on the convergence of optimizers under the $\alpha$-$\beta$-condition. The main comment is about the presence of the non-vanishing $\mathcal{O}(\beta\sigma^2_{\mathrm{int}})$ term in the rate. Below we provide a detailed discussion of this term, which we believe cannot be removed since functions satisfying the $\alpha$-$\beta$-condition can have local minima. 1) This term appears directly from the use of the $\alpha$-$\beta$-condition in the analysis, without relying on additional upper bounds or approximations, and it captures the potential presence of local minima as explained below. 2) We recall that one can find examples that satisfy the $\alpha$-$\beta$-condition and have local minima. For instance, let $$f_1(x,y)=\frac{1+x^2+y^2}{2+x^2+y^2},\quad f_2(x,y)=\frac{(x-2)^2+(y-2)^2}{1+(x-2)^2+(y-2)^2},\quad f=\tfrac{1}{2}(f_1+f_2).$$ For this problem, $f_1^*=\frac{1}{2}, w_1^*=(0,0), f_2^*=0, w_2^*=(2,2), f^*\approx 0.45, w^*=(1.97,1.97)$ (we will add the example to the revised version with a proof). We attach a surface plot in the separate pdf; this example satisfies the condition with $\alpha\gtrsim1250,\ \beta=\alpha-1$. Moreover, in Example 4 we provide an example of a 2-layer neural network with ReLU activation that is known to have spurious local minima [4]. In contrast, previously proposed conditions such as PL and quasar-convexity do not allow local minima/saddle points. 3) Note that on losses that have local minima, SGD does not converge to $f^*$, regardless of the value of the stepsize $\gamma$. Indeed, as the rates must hold for any initialization $x^0$, in the worst-case scenario (i.e. $x^0$ is close to a spurious minimum), annealing the stepsize does not recover convergence to the global minimizer.
Near local minima, the gradient dynamics resemble those on a quadratic function (Hartman–Grobman theorem), and one can assume the noise strength is bounded (in norm) near the basin (which holds for the ERM problems that we consider). So if we initialize close to a local minimizer and let the stepsize $\gamma$ converge to zero, we obtain convergence to the spurious local minimum. This suboptimality is modeled by the stepsize-independent quantity $\mathcal{O}(\beta\sigma_{\mathrm{int}}^2)$, since we provide convergence guarantees for the function suboptimality value and not for the squared gradient norm as is usual in the non-convex setting. Because of the previous considerations, the loss is not necessarily minimized, making it necessary to include a worst-case correction term $\mathcal{O}(\beta\sigma_{\mathrm{int}}^2)$. **Additional evidence from prior works:** Other prior works lead us to believe this term indeed correctly appears in the convergence bound: 
- We emphasize that the stochastic term $\mathcal{O}(L\gamma\sigma^2_{\mathrm{int}})$ also appears in the standard rate of SGD with a constant learning rate; see Theorem 5.5 in [1]. Although it can be annealed with a decreasing learning rate, this only guarantees convergence to a critical point for non-convex functions. 
- A non-vanishing term is frequently observed when training neural networks. We refer the reviewer, for instance, to Figure 8.1 in the seminal reference [2]. This is also observed during the training of language models, where the loss remains significantly larger than $0$; see the plots in Figures 16 and 17 of the appendix corresponding to language modeling. This phenomenon suggests that reaching a critical point that is a global minimizer is not always observed in practice. Therefore, the presence of a non-vanishing error term in the rate under the $\alpha$-$\beta$-condition is consistent with empirical observation.
- Finally, prior works that propose other conditions to describe the loss landscape of deep neural networks (e.g. gradient confusion [3]) also obtain a non-vanishing term in the convergence rate (see Theorem 3.2 in [3]). Interestingly, the non-vanishing terms from both analyses capture some sort of discrepancy between datapoints, although they do not seem to be directly relatable. 
- The value of $\beta\sigma^2_{\mathrm{int}}$ decreases with over-parametrization, which is the expected trend; see the rough estimates of $\beta\sigma^2_{\mathrm{int}}$ in the table in the separate pdf. Nevertheless, we will investigate this aspect further in the next revision and provide an extended discussion regarding this limitation. 

[1] Garrigos, Guillaume and Gower, Robert M. Handbook of convergence theorems for (stochastic) gradient methods. arXiv preprint arXiv:2301.11235, 2023. 
[2] Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep Learning. MIT Press, 2016. 
[3] Sankararaman, Karthik Abinav, De, Soham, Xu, Zheng, Huang, W. Ronny, and Goldstein, Tom. The impact of neural network overparameterization on gradient confusion and stochastic gradient descent. ICML 2020. 
[4] Safran, Itay and Shamir, Ohad. Spurious local minima are common in two-layer ReLU neural networks. ICML 2018. 
Pdf: /pdf/e90522dea55997333b2f8e28b2e45f1856bdb3c3.pdf
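The spurious-local-minimum behavior of the two-component example $f=\tfrac{1}{2}(f_1+f_2)$ above can be checked with a quick numerical sketch (ours, not part of the rebuttal's pdf; step size and iteration count are arbitrary illustrative choices): plain gradient descent started at $w_1^*=(0,0)$ settles at a suboptimal value near $0.69$, while the same descent started at $w_2^*=(2,2)$ reaches $f^*\approx 0.45$.

```python
def f(x, y):
    """f = (f1 + f2) / 2 for the two-component example."""
    r2 = x * x + y * y                      # squared distance to (0, 0)
    s2 = (x - 2) ** 2 + (y - 2) ** 2        # squared distance to (2, 2)
    return ((1 + r2) / (2 + r2) + s2 / (1 + s2)) / 2

def grad_f(x, y):
    """Analytic gradient: f1 = 1 - 1/(2 + r2), f2 = 1 - 1/(1 + s2)."""
    r2 = x * x + y * y
    s2 = (x - 2) ** 2 + (y - 2) ** 2
    gx = (2 * x / (2 + r2) ** 2 + 2 * (x - 2) / (1 + s2) ** 2) / 2
    gy = (2 * y / (2 + r2) ** 2 + 2 * (y - 2) / (1 + s2) ** 2) / 2
    return gx, gy

def gd(x, y, lr=0.5, steps=20000):
    """Plain (full) gradient descent from (x, y)."""
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

xa, ya = gd(0.0, 0.0)   # trapped near a spurious local minimum, f ~ 0.69
xb, yb = gd(2.0, 2.0)   # reaches the global minimum, f ~ 0.45
print(f(xa, ya), f(xb, yb))
```

This matches the rebuttal's point 3: from an initialization close to the spurious minimum, decreasing the stepsize cannot recover convergence to $f^*$, which is exactly the gap the $\mathcal{O}(\beta\sigma^2_{\mathrm{int}})$ term accounts for.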
NeurIPS_2024_submissions_huggingface
2024
Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles
Accept (poster)
Summary: The paper constructs a technique (called SPLAT) to evaluate and elicit lateral thinking abilities in LLMs. It includes questions similar to brain teasers, which describe scenarios and ask non-obvious questions for guessing. Example: __Question/Story__: A hunter aimed his gun carefully and fired. Seconds later, he realized his mistake. Minutes later, he was dead. __Reference Answer__: He hunted in snow-capped mountains. The shot provoked an avalanche, which covered the man. He died of strangulation. They also propose a multi-turn player-judge evaluation framework that reduces reliance on stronger evaluation models. Strengths: 1. The SPLAT dataset is different from previous datasets, with graded difficulty levels and harder questions. 2. The multi-turn player-judge framework is claimed to reduce reliance on stronger evaluation models. (I actually feel a bit confused about this part, see questions) 3. The authors conduct experiments to validate their approach; the demonstration that SPLAT can improve LLM performance on other lateral thinking tasks suggests broader applicability. Weaknesses: 1. "The dataset and framework are designed not only to evaluate but also to actively elicit lateral thinking in LLMs. Experiments show that using data and reasoning processes from our framework leads to improved performance of LLMs, even when applied to other lateral thinking benchmarks." This part really needs ablation studies. It's unclear which parts are effective. For example, are the reasoning processes, the entire dataset, or similar questions in previous data responsible for the improvement? Did this data improve the model's ability to think about these types of questions, or is the improvement possibly due to some tricks? Moreover, why was testing only done on RiddleSense when many other benchmarks were reported earlier? Is this cherry-picking? 2. 
A player-judge evaluation framework based on WizardLM-2 is proposed as an alternative to human judges to evaluate the answers, but it shows much worse agreement on hard questions compared to inter-human agreements. However, harder questions are actually proposed as a contribution of this work. As the answers for such puzzles can be "off by a hair but miss by a mile," it might require human evaluation for the hard puzzles. It is also a bit confusing why the human agreement on Medium questions is much lower than on hard and easy ones. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Similar datasets have existed before, as mentioned in the paper, such as RiddleSense and Oogiri. Here's an example from RiddleSense: My life can be measured in hours. I serve by being devoured. Thin, I am quick; Fat, I am slow. Wind is my foe. What am I? (A) paper (B) candle (C) lamp (D) clock (E) worm In comparison, SPLAT has longer questions and answers. Also, the reference answers for SPLAT are open-ended, while previous ones are all multiple-choice, which is claimed as a contribution. However, this also makes the evaluation more challenging. What makes you think open-ended is a much better option that makes it a contribution? 2. Author claim that their methods reduces reliance on stronger evaluation models, but it seems they are still using WizardLM-2 as a component (Judge Model) of their framework? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The size for this benckmark is small compared with previous similar datasets that have existed before, which include many more puzzles compared to the proposed SPLAT. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1-1. "The dataset and framework are designed not only to evaluate but also to actively elicit lateral thinking in LLMs. Experiments show that using data and reasoning processes from our framework leads to improved performance of LLMs, even when applied to other lateral thinking benchmarks." This part really needs ablation studies. It's unclear which parts are effective. For example, are the reasoning processes, the entire dataset, or similar questions in previous data responsible for the improvement? Did this data improve the model's ability to think about these types of questions, or is the improvement possibly due to some tricks?** We conduct an ablation study to examine the impact of our data and reasoning processes on model performance. Besides RiddleSense, we incorporate another lateral thinking benchmark (i.e., BrainTeaser), which includes word and sentence puzzles. The results, as shown in the table below, indicate that incorporating our dataset improves average accuracy (Llama3-8B: from 61.45 to 64.07; Llama3-70B: from 80.76 to 83.78). When we integrate the reasoning processes, these scores further increase to 66.71 and 85.42, respectively. These results further demonstrate that our data and reasoning processes can effectively boost LLMs' ability to handle lateral thinking tasks, rather than the gains coming from tricks. 

| Model | RiddleSense (Dev) | BrainTeaser (Sentence, overall) | BrainTeaser (Word, overall) | Average |
| ------- | ------- | ------- | ------- | ------- |
| Llama3-8B (base) | 70.51 | 67.65 | 46.2 | 61.45 |
| Llama3-8B (data) | 70.32 | 69.62 | 52.28 | 64.07 |
| Llama3-8B (data+reasoning) | 72.18 | 69.89 | 58.07 | 66.71 |
| Llama3-70B (base) | 83.34 | 87.76 | 71.2 | 80.76 |
| Llama3-70B (data) | 82.95 | 91.12 | 77.27 | 83.78 |
| Llama3-70B (data+reasoning) | 85.21 | 91.51 | 79.54 | 85.42 |

**W1-2. Why was testing only done on RiddleSense when many other benchmarks were reported earlier? 
Is this cherry-picking?** Please refer to General Response **G1**. **W2. A player-judge evaluation framework based on WizardLM-2 is proposed as an alternative to human judges to evaluate the answers, but it shows much worse agreement on hard questions compared to inter-human agreements. However, harder questions are actually proposed as a contribution of this work. As the answers for such puzzles can be "off by a hair but miss by a mile", it might require human evaluation for the hard puzzles. It is also a bit confusing why the human agreement on Medium questions is much lower than on hard and easy ones.** The human-human agreement rates, sometimes as low as 87.5\% as seen in Table 2, reflect the strictness of our agreement metric. For instance, if judgements from three humans result in two 'matched' and one 'unmatched,' the agreement is considered 1/3. If the judge model agrees with the 'matched' responses, its agreement with humans is 2/3. Despite this rigorous evaluation, our method still achieves a high agreement rate of 88.24\% on hard puzzles. It shows the diversity of acceptable answers and the challenging nature of our benchmark. **Q1. Similar datasets have existed before, as mentioned in the paper, such as RiddleSense and Oogiri. Here's an example from RiddleSense: My life can be measured in hours. I serve by being devoured. Thin, I am quick; Fat, I am slow. Wind is my foe. What am I? (A) paper (B) candle (C) lamp (D) clock (E) worm. In comparison, SPLAT has longer questions and answers. Also, the reference answers for SPLAT are open-ended, while the previous ones are all multiple-choice, which is claimed as a contribution. However, this also makes the evaluation more challenging. What makes you think open-ended is a much better option that makes it a contribution?** Not all problems are suited to multiple-choice formats, such as puzzles. While multiple-choice questions are easier to evaluate, they often simplify the problem. 
Besides, an open-ended format lessens the likelihood of inadvertently leading to the correct answer. However, the evaluation of open-ended answers is much more challenging. To this end, we develop a multi-turn framework that leverages LLMs as judges to address open-ended evaluation tasks. The robustness of this framework represents one of the key contributions of our work. **Q2. Authors claim that their methods reduce reliance on stronger evaluation models, but it seems they are still using WizardLM-2 as a component (Judge Model) of their framework?** By "reduce reliance on stronger evaluation models", we mean that the judge model in our framework does not necessarily need to be more powerful than the player model. However, it still needs to be sufficiently robust, achieving a high level of consistency with human judgements, as demonstrated in Tables 2 and 3. In traditional model-based evaluation methods (e.g., [1]), the evaluation model typically needs to be more capable than the model being evaluated since it directly grades the responses. This requirement often constrains the ability to evaluate newer, more advanced models. Our approach, by contrast, allows for the use of robust but not necessarily superior models as evaluators, facilitating a broader assessment of SoTA models. [1] Judging llm-as-a-judge with mt-bench and chatbot arena. NeurIPS, 2023. **L1. The size for this benchmark is small compared with previous similar datasets that have existed before.** Please refer to General Response **G2**.
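The strict pairwise agreement metric described in the W2 answer above can be written out as a short sketch (our reading of the rebuttal's worked example; function names are ours):

```python
from itertools import combinations

def human_agreement(votes):
    """Fraction of agreeing pairs among all pairs of human votes."""
    pairs = list(combinations(votes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def judge_agreement(judge_vote, votes):
    """Fraction of human votes that the judge model agrees with."""
    return sum(v == judge_vote for v in votes) / len(votes)

# The rebuttal's example: two humans vote 'matched', one votes 'unmatched'.
humans = ["matched", "matched", "unmatched"]
print(human_agreement(humans))              # 1/3: only one of the three pairs agrees
print(judge_agreement("matched", humans))   # 2/3: the judge matches two of three humans
```

Under this metric even perfectly reasonable judges cannot reach 100% whenever the humans themselves disagree, which is the point the rebuttal uses to contextualize the ~80-90% agreement rates.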
Summary: This paper focuses on lateral thinking, which is about creativity and viewing problems from multiple angles. To address the challenges of assessing creative thought processes and the scarcity of relevant data, this paper introduces SPLAT, a benchmark leveraging Situation Puzzles to evaluate and elicit LAteral Thinking of LLMs. By employing a new multi-turn player-judge framework instead of the traditional model-based evaluation, the proposed method lessens dependence on more robust evaluation models, enabling the assessment of state-of-the-art LLMs. Moreover, applying data and reasoning processes from the benchmark to another lateral thinking-related benchmark, like RiddleSense, leads to performance enhancements. Strengths: 1. This paper focuses on the lateral thinking problem, which is important to creativity and viewing problems from multiple angles. 2. The proposed dataset is open-ended and the evaluation method is good. 3. The paper is well-written. Weaknesses: 1. The size of the dataset is small. 2. The reference answer is limited to 1, while the questions are one-to-many questions. 3. The in-depth analysis is not enough. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why not provide more than one answer for each question (I do not think it is challenging; there are multiple answers to a question on the internet, such as on Quora)? Since one such question could have multiple answers, more reference answers might also be helpful for evaluation. 2. An overall evaluation metric is needed; some models might have a high accuracy with a high average round. You should define an overall metric that combines these two metrics for a clear comparison of existing LLMs. 3. It is better to adopt more benchmarks for validating the effectiveness of the proposed dataset (more than just RiddleSense). 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. The size of the dataset is small.** Please refer to General Response **G2**. **W2+Q1. The reference answer is limited to 1, while the questions are one-to-many questions. Why not provide more than one answer for each question (I do not think it is challenging, there are multiple answers to a question on the internet such as Quora)? Since one such question could have multiple answers, more reference answers might also be helpful for evaluation.** We have tried to directly gather questions and answers from question-and-answer websites like Quora, where we observed that situation puzzle queries often ask "How to build a situation puzzle?" with responses typically listing several one-to-one puzzle pairs rather than multiple answers to a single question. Thus, as discussed in the paper, we originally wanted to evaluate it like in MS COCO, where multiple reference answers are provided for each image, and metrics are calculated based on the closest reference answer. However, due to the difficulty in collecting multiple reference scenarios for situation puzzles, we adapt our approach. Instead, we run each puzzle multiple times (denoted as $R$) and select the closest match to the reference scenario from these iterations as the final result. This method preserves the diversity of potential answers and mitigates the effects of hallucinations in LLMs by allowing multiple evaluations per puzzle. In the future, we seek to employ crowd-sourcing platforms like Amazon Mechanical Turk to collect diverse answers to the same puzzle. Besides, we will provide contributors with guidelines that encourage diverse thinking, asking them to provide unique answers or perspectives on the same puzzle. **W3+Q3. The in-depth analysis is not enough. It is better to adopt more benchmarks for validating the effectiveness of the proposed dataset (more than just RiddleSense).** Please refer to General Response **G1**. **Q2. 
An overall evaluation metric is needed, some models might have a high acc with a high average round, you should define an overall metric to combine these two metrics for a clear comparison of existing LLMs.** Thank you for your constructive suggestion. Based on the calculation of Accuracy (Acc, \%) and Average Round (Rnd) in our paper, we further define an overall evaluation metric as $\text{Overall} = \frac{1}{N}\sum_{i=1}^N \mathbb{I}(\text{sample}_i)/\text{Rnd}_i$, where $\mathbb{I}$ is an indicator function that returns 1 if the scenario deduced by LLMs matches the reference scenario/answer semantically, and 0 otherwise. Given that $\mathbb{I}(\text{sample}_i)$ takes values in \{0, 1\} and $\text{Rnd}_i$ ranges from 1 to a maximum (maximum = 15 in our paper), the Overall metric spans from 0 to 1, with higher values indicating better performance. The table below reflects trends similar to those discussed in our paper, where GPT-4 and its Turbo variant, as well as WizardLM-2 (8x22B), show strong performance and outperform other models. GPT-4 leads with an Overall score of 8.16 (Avg, x100), indicating high capability. However, it is still far from saturation, which suggests room for further improvement. 

| Model | Overall (Easy, x100) | Overall (Medium, x100) | Overall (Hard, x100) | Overall (Avg, x100) |
| --- | --- | --- | --- | --- |
| LLama3-8B | 3.65 | 1.58 | 0.50 | 1.91 |
| LLama3-70B | 8.67 | 3.54 | 1.36 | 4.52 |
| Qwen1.5-32B | 6.39 | 2.47 | 2.01 | 3.62 |
| Qwen1.5-110B | 9.71 | 3.74 | 2.16 | 5.20 |
| WizardLM-2-8x22B | 11.97 | 4.67 | 2.74 | 6.46 |
| GPT-4 Turbo | 13.22 | 5.06 | 1.35 | 6.54 |
| GPT-4 | 15.25 | 6.71 | 2.52 | 8.16 |

--- Rebuttal Comment 1.1: Title: Replying to Rebuttal of Authors Comment: Thanks for your detailed response. It has addressed some of my concerns, I will raise the score to reflect this. However, I am still concerned that the number of answers to each question is limited to 1 since your paper is about "Lateral Thinking". Although "How to build a situation puzzle?" 
is not a suitable way to obtain multiple answers, there are still many questions that have answers from multiple angles, such as "how to prove 1+1=2". --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for your valuable suggestions and for increasing the score. We will explore incorporating multi-answer formats in future updates to better evaluate lateral thinking in LLMs.
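The Overall metric defined in the Q2 answer above can be sketched in a few lines (a minimal illustration; the sample data below are made up, not taken from the paper's experiments):

```python
def overall_metric(matched, rounds):
    """Overall = (1/N) * sum_i I(sample_i) / Rnd_i.

    matched: 1 if the deduced scenario semantically matches the reference, else 0.
    rounds:  number of rounds used for that puzzle (1..15 in the paper's setup).
    """
    assert len(matched) == len(rounds)
    return sum(m / r for m, r in zip(matched, rounds)) / len(matched)

# Hypothetical results for three puzzles:
# solved in 3 rounds, unsolved (hit the 15-round cap), solved in 5 rounds.
print(overall_metric([1, 0, 1], [3, 15, 5]))
```

As stated in the rebuttal, the metric lies in [0, 1] and rewards models that both solve puzzles (high accuracy) and solve them in few rounds.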
Summary: This work focuses on creating a benchmark, as well as a modeling framework for evaluating lateral thinking of LLMs. The benchmark, called SPLAT, consists of 975 situation puzzles. The framework consists of a “judge” and a “player” (the LLM to be evaluated). The judge poses an open ended puzzle that is further clarified by the player till a final answer is reached. The judge evaluates the final answer. The authors posit that this framework eases the challenge faced with typical auto-evaluation models since the judge does not need to be better than the LLM that is being evaluated. The authors evaluate the quality of the judge (via human agreement), and finally evaluate LLMs using a promising judge. Strengths: The motivation for having a lateral thinking benchmark is strong. The work around automatic evaluation using a judge v/s player framework is well thought out. I think such a benchmark as well as framework/protocol would be beneficial to the community. Weaknesses: Overall, the idea and motivation behind the work is solid. I also think that the benchmark is good quality. However, my concern is that this work is half-baked. The experiments aren’t conclusive. More work should go into making the experiments thorough and useful to the scientific community. * Although the motivation and overall idea is strong, I am concerned about the experimental setup. The agreement rates for WizardLM-2 in Table 2 and 3 don’t appear to be high enough (are around ~80%) and 3 individuals seems to be fairly less. How does this fare with human agreement rates in literature? Also, is there any reason we haven’t used the strongest model (GPT-4 from the paper) as the judge? * I also am concerned about Table 4. The authors should consider using strong base models for evaluation (that are closer to GPT-4 on public benchmarks) * It would be good to introduce modeling methods to improve upon the metrics that are obtained for models on SPLAT. 
This will give more insight into how this benchmark is valuable to the community. * nit: I see mentions of “using data and reasoning processes from our framework” consistently throughout the paper. This should be framed better and be more precise. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses mentioned above. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Although the motivation and overall idea are strong, I am concerned about the experimental setup. The agreement rates for WizardLM-2 in Tables 2 and 3 don’t appear to be high enough (are around ~80\%) and 3 individuals seem to be fairly less. How does this fare with human agreement rates in literature?** Thank you for pointing this out. Due to the strict calculation method of our agreement, the agreement between humans cannot always reach 100\%. Specifically, as mentioned in the supplementary: if three humans vote 'matched', 'matched', and 'unmatched' for a puzzle, respectively, the agreement among them, noted as 'human-human', is only 1/3, as there are three pairs: (matched, matched), (matched, unmatched), and (matched, unmatched). If the judge model votes 'matched', the agreement between humans and the judge model is 2/3. But even under such a strict evaluation metric, the judge model WizardLM-2 (8x22B) aligns closely with human judgements, achieving over 80\% agreement on both final answers and reasoning processes. Specifically, as shown in Table 2, the final answer agreements between WizardLM-2 (8x22B) and human evaluations are 100\%, 87.50\%, and 100\% for easy, medium, and hard puzzles, respectively. Similarly, the agreement on reasoning processes (Table 3) shows robust results with 84.18\%, 89.86\%, and 90.15\% for each difficulty level, respectively. **W2. Also, is there any reason we haven’t used the strongest model (GPT-4 from the paper) as the judge? I also am concerned about Table 4. The authors should consider using strong base models for evaluation (that are closer to GPT-4 on public benchmarks).** Before the submission deadline, data from its official website [1] indicates that WizardLM-2 (8x22b) performs competitively on benchmarks like MT-Bench, comparable to leading models such as GPT-4, i.e., WizardLM-2 (8x22b): 9.12 vs. GPT-4-0314: 8.96 vs. GPT-4-1106-Preview: 9.32. 
Besides, we also conduct an experiment that employs GPT-4 as the judge model, which yields results comparable to WizardLM-2 in both final answer agreement and reasoning process agreement. For instance, on "Hard" puzzles, GPT-4 achieved an 89.85\% final answer agreement rate, closely matching WizardLM-2's 88.24\%. [1] https://wizardlm.github.io/WizardLM2/ **W3. It would be good to introduce modeling methods to improve upon the metrics that are obtained for models on SPLAT. This will give more insight into how this benchmark is valuable to the community.** To enhance the alignment/agreement between the judge model and human assessments, we plan to focus on two approaches in future work: 1) Employing in-context learning methods such as Chain-of-Thought (CoT) [2] or Tree of Thought (ToT) [3], which follow human-like reasoning patterns and thus can improve alignment; 2) Utilising training-based methods like Supervised Fine-Tuning (SFT) and Direct Preference Optimisation (DPO) [4], which involve training models directly on lateral thinking puzzles to enhance their accuracy in evaluating such scenarios. These methods, while promising, are beyond the scope of this paper. We regard them as our future work. [2] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS, 2022. [3] Tree of Thoughts: Deliberate Problem Solving with Large Language Models. NeurIPS, 2023. [4] Direct Preference Optimisation: Your Language Model is Secretly a Reward Model. NeurIPS, 2023. **W4. nit: I see mentions of "using data and reasoning processes from our framework" consistently throughout the paper. This should be framed better and be more precise.** This refers to 'using the question-answer pairs generated during our SPLAT's player-judge interactions as auxiliary prompts'. We will revise this thoroughly and make it clearer. --- Rebuttal Comment 1.1: Comment: Thanks, I think W1 and W2 make sense. I have increased the score to reflect this. 
I do think W3 would be important to improve the potential impact of this work. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you again for the constructive comments. We will add the clarifications in the response to our revised paper.
Summary: This paper introduces SPLAT, a novel benchmark for evaluating and eliciting lateral thinking abilities in Large Language Models (LLMs) using situation puzzles. The key contributions include: A new dataset of 975 graded situation puzzles across three difficulty levels. A multi-turn player-judge evaluation framework that reduces reliance on stronger evaluation models. Demonstration that the benchmark can both evaluate and elicit lateral thinking in LLMs. Experimental results showing that a robust evaluation model (WizardLM-2) closely matches human judgments. Evidence that applying SPLAT's data and reasoning processes to other lateral thinking benchmarks leads to performance improvements in LLMs. Strengths: 1. The paper presents a novel approach to assessing lateral thinking in LLMs, an area that has been underexplored compared to vertical thinking. The use of situation puzzles and the multi-turn player-judge framework are creative solutions to the challenges of evaluating open-ended, creative thought processes. 2. The methodology appears rigorous, with careful data collection, annotation, and difficulty categorization. The multi-turn player-judge framework is well-designed to overcome limitations of traditional model-based evaluations. The experimental results, including comparisons with human judgments and applications to other benchmarks, provide strong evidence for the effectiveness of the approach. 3. The paper is well-structured and clearly written. The task definition, data construction process, and evaluation framework are explained in detail. Figures and tables effectively illustrate key concepts and results. 4. This work addresses an important gap in the evaluation of LLMs by focusing on lateral thinking, which is crucial for creative problem-solving and innovation. The benchmark has potential applications beyond evaluation, as demonstrated by its ability to improve LLM performance on other lateral thinking tasks. 
This could lead to advancements in developing more creative and flexible AI systems. Weaknesses: 1. While the data collection process is described, there's minimal discussion of potential biases in the dataset, such as cultural specificity of the puzzles or biases introduced during the human annotation process. 2. The paper would benefit from ablation studies to isolate the impact of different components of the SPLAT framework, such as the difficulty categorization or the multi-turn questioning process. 3. The paper lacks a discussion of potential ethical implications or misuse of improved lateral thinking capabilities in LLMs. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How does the performance of LLMs on SPLAT correlate with their performance on other, more traditional NLP tasks? Is there evidence that lateral thinking ability is distinct from other language model capabilities? 2. Have you considered ways to automatically generate new situation puzzles to expand the dataset and improve scalability? 3. How sensitive is the performance of LLMs on SPLAT to the specific prompts or instructions given? Could you elaborate on the prompting strategy used? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. There's minimal discussion of potential biases in the dataset.** During our data collection process, we also track user preferences for each situation puzzle, measured by a preference score (\%). We notice a pattern where puzzles (about 1\% of our dataset) with lower preference rates (30\%-40\%) show greater variation in the time taken to solve them, ranging from 5 to 26 minutes for puzzles categorised as "Medium". In contrast, puzzles with higher preference rates (above 80\%) exhibit less time variation, with solving times ranging from 9 to 17 minutes for "Medium" puzzles. This suggests that annotations may introduce more noise when users engage with puzzles they find less appealing. To avoid this bias, we will enhance the diversity of the puzzles to ensure a broader appeal across different user groups, thereby reducing the likelihood of low preference scores. Besides, we plan to implement a dynamic refinement method during data collection to enhance puzzle engagement. If a puzzle receives a low preference score, we will revise it based on user feedback and re-release it to assess improvements in user preference. **W2. Ablation studies to isolate the impact of different components of the SPLAT framework.** We conduct an ablation study to explore the influence of our data and reasoning processes. In addition to the RiddleSense, we include the BrainTeaser benchmark, which features both word and sentence puzzles. As shown in the table below, incorporating our data into base LLMs leads to an increase in average accuracy (Llama3-8B: 61.45 -> 64.07; Llama3-70B: 80.76 -> 83.78). This improvement is further enhanced to 66.71 and 85.42, respectively, when we integrate the reasoning processes. These results further demonstrate that both our data and reasoning processes can effectively enhance the lateral thinking capabilities of LLMs. 
| Model | RiddleSense (Dev) | BrainTeaser (Sentence, overall) | BrainTeaser (Word, overall) | Average |
| ------- | ------- | ------- | ------- | ------- |
| Llama3-8B (base) | 70.51 | 67.65 | 46.2 | 61.45 |
| Llama3-8B (data) | 70.32 | 69.62 | 52.28 | 64.07 |
| Llama3-8B (data+reasoning) | 72.18 | 69.89 | 58.07 | 66.71 |
| Llama3-70B (base) | 83.34 | 87.76 | 71.2 | 80.76 |
| Llama3-70B (data) | 82.95 | 91.12 | 77.27 | 83.78 |
| Llama3-70B (data+reasoning) | 85.21 | 91.51 | 79.54 | 85.42 |

**W3. Potential ethical implications or misuse.** Thank you for the kind reminder. We have discussed these in Section A.1 of our supplementary. Briefly, while enhancing LLMs in lateral thinking offers numerous benefits, it also introduces potential risks such as the misuse of technology for fraud or misinformation. Thus, we emphasise the importance of building robust ethical guidelines and transparent AI usage policies. **Q1. How does the performance of LLMs on SPLAT correlate with their performance on other, more traditional NLP tasks? Is there evidence that lateral thinking ability is distinct from other language model capabilities?** While there is some overlap in the skills required for SPLAT and traditional NLP tasks, they are not exactly the same. Models (e.g., GPT-4 and GPT-4 Turbo) that perform well on standard vertical thinking benchmarks like MT-Bench do show competent performance on our SPLAT (Table 4). Conversely, models like Llama3-8B, which perform less impressively on MT-Bench, tend to exhibit similarly lower performance on SPLAT. This suggests that a strong model in language understanding and reasoning is advantageous for both vertical and lateral thinking tasks. While there is a correlation between general NLP skills and lateral thinking, the latter also demands distinct abilities like creativity and problem-solving beyond typical task completion. 
Figure 4 in our paper shows that when incorporating data and reasoning processes from our SPLAT, LLMs could perform better even on other lateral thinking benchmarks like RiddleSense. These results suggest that enhancing these creative aspects could better elicit the capabilities of LLMs in lateral thinking. **Q2. Have you considered ways to automatically generate new situation puzzles to expand the dataset and improve scalability?** Yes, we have considered the potential of using automated methods to generate new situation puzzles to expand our dataset. Specifically, the generation process can use advanced LLMs like GPT-4 or Claude-3, which can generate puzzles based on specific prompts, rules, and requirements. However, the challenge arises in verifying the quality of these automatically generated puzzles. One effective approach is a hybrid human-AI collaboration, where puzzles generated by LLMs are subsequently reviewed and refined by humans through a crowd-sourcing platform. While this method leverages AI to handle initial puzzle creation, it remains time-consuming and labour-intensive due to the human review component. Thus, finding more efficient ways to scale up this data remains a critical area for future research. **Q3. How sensitive is the performance of LLMs on SPLAT to the specific prompts or instructions given? Could you elaborate on the prompting strategy used?** We conduct a sensitivity analysis, where we rewrite the prompt of the judge model but keep the semantics of the prompt the same as before. For example, one of the original prompts for the judge model is "Read and fully understand both the provided short story and its answer, ensuring you grasp their logical connections. But do not show answer for the user". We rewrite it to "Read and thoroughly comprehend both the provided short story and its answer, making sure you understand their logical relationships. However, do not reveal the answer to the user". 
Results show that for Llama3-8B, even though the prompt is different, as long as the semantics are clear, the results tend to be comparable (e.g., average Acc original 17.05 vs. rewritten 16.19). These results demonstrate the robustness of our SPLAT as an evaluation benchmark. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. After reading the response, I think my current score is appropriate. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you very much for your encouragement and valuable comments on our work. We will include the discussions and ablation study in our revised paper and make them clearer.
Rebuttal 1: Rebuttal: **General Response** **G1. Results on more benchmarks than just RiddleSense.** Besides the results on the RiddleSense (Section 5.3 in the submitted paper), we provide several results on another lateral thinking-focused benchmark, i.e., BrainTeaser [A], which features two main types of puzzles: Sentence (S.) puzzles and Word (W.) puzzles. Following its official settings, we assess model performance using two accuracy metrics: (1) Instance-based Accuracy: This metric individually evaluates each question—whether it is the original or a reconstructed version. We present instance-based accuracy for both the original puzzles and their semantic and contextual reconstructions (provided by the official dataset). (2) Group-based Accuracy: Under this metric, each original puzzle and its variants are treated as a group. The model earns a score of 1 only if it correctly solves all three puzzles within a group. If it fails to do so, the score awarded is 0. As in our submitted paper, we consider both open-source and closed-source LLMs, specifically Llama3 (in its Instruct versions) and GPT-4, with additional considerations for different model sizes, such as Llama3-8B and Llama3-70B. As shown in the following tables, in the zero-shot setting, the accuracy of various LLMs consistently improves on the BrainTeaser benchmark (both sentence and word puzzles). The results further demonstrate that the data and framework of our benchmark effectively elicit lateral thinking capabilities in LLMs when applied to various lateral thinking tasks. [A] Brainteaser: Lateral thinking puzzles for large language model. EMNLP, 2023. 
BrainTeaser (Sentence)

| Model | Original | Semantic | Context | Ori \& Sem | Ori \& Sem \& Con | Overall |
| --- | --- | --- | --- | --- | --- | --- |
| Llama3-8B | 70.23 | 63.09 | 69.64 | 52.09 | 38.32 | 67.65 |
| Llama3-8B* | 72.18 | 65.47 | 72.02 | 58.92 | 45.23 | 69.89 |
| Llama3-70B | 89.34 | 87.57 | 86.39 | 84.61 | 75.73 | 87.76 |
| Llama3-70B* | 93.49 | 90.53 | 90.53 | 88.75 | 82.84 | 91.51 |
| GPT-4 | 93.49 | 89.94 | 83.43 | 88.75 | 75.14 | 88.95 |
| GPT-4* | 95.26 | 91.71 | 88.69 | 91.71 | 82.24 | 91.88 |

BrainTeaser (Word)

| Model | Original | Semantic | Context | Ori \& Sem | Ori \& Sem \& Con | Overall |
| --- | --- | --- | --- | --- | --- | --- |
| Llama3-8B | 47.72 | 44.69 | 46.21 | 31.06 | 17.42 | 46.20 |
| Llama3-8B* | 54.54 | 54.54 | 65.15 | 39.39 | 29.54 | 58.07 |
| Llama3-70B | 71.96 | 71.96 | 69.69 | 62.87 | 49.24 | 71.20 |
| Llama3-70B* | 81.06 | 81.81 | 75.75 | 74.24 | 59.09 | 79.54 |
| GPT-4 | 71.21 | 65.91 | 56.06 | 59.09 | 41.66 | 64.39 |
| GPT-4* | 74.24 | 72.72 | 62.12 | 64.39 | 45.45 | 69.69 |

**G2. The size of the dataset.** Our dataset contains 975 puzzles, more than BrainTeaser (Sentence) and BrainTeaser (Word), which contain 627 and 492 puzzles, respectively. Besides, as each of our puzzles requires multi-turn conversations (always more than 5) to solve, it is comparable to datasets like RiddleSense or Oogiri (T2T, Eng.) in terms of inference volume, where both contain about 5,000 puzzles. Moreover, as a benchmark, our priority is to ensure diversity and robustness to produce a stable evaluation framework. We believe the current quantity of puzzles is sufficient for this purpose. We also acknowledge the need for a larger dataset and plan to expand SPLAT in future versions.
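The instance-based and group-based accuracies described in G1 can be sketched in a few lines. This is our own illustrative code, not the authors' evaluation script, and the example results below are hypothetical:

```python
# Illustrative sketch of the two BrainTeaser metrics described in G1
# (our own code, not the authors'; the example results are hypothetical).

def instance_accuracy(groups):
    """Instance-based: every question (original or reconstruction) counts once."""
    flat = [ok for group in groups for ok in group]
    return sum(flat) / len(flat)

def group_accuracy(groups):
    """Group-based: a group scores 1 only if all three variants are solved."""
    return sum(all(group) for group in groups) / len(groups)

# Each tuple: (original, semantic reconstruction, contextual reconstruction).
results = [(True, True, True), (True, False, True), (False, False, False)]
print(instance_accuracy(results))  # 5 of 9 questions correct
print(group_accuracy(results))     # only the first group fully solved
```

The group-based metric is stricter by construction, which matches the consistently lower "Ori \& Sem \& Con" columns in the tables above.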
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Least Squares Regression Can Exhibit Under-Parameterized Double Descent
Accept (poster)
Summary: The paper aims to understand the phenomenon of double descent in regression, and helps complement existing knowledge about the phenomenon by proving that double descent can occur even in the under-parametrized regime, going against previous intuition. They also prove that the peak in the norm of the estimator does not imply a peak in the risk. Strengths: Quality and Clarity: Contextualization relative to prior work is good, Table 1, for example, provides a very concise and insightful summary of existing results. Originality: The paper considers an original viewpoint on the problem of double-descent, namely they realize that violating either Assumptions 1 or 2 can cause the peak to move into the under-parametrized region. To the best of my knowledge, this had not been noticed before. Significance: This result is significant, as it addresses the prominent double-descent phenomenon in machine learning, which is an important primitive for understanding generalization and other properties of estimators operating on high-dimensional data. They also prove that a peak in the norm of the estimator does not imply a peak in risk, and this is important as it goes against some of the intuition provided in earlier works. Weaknesses: Clarity: Although the contextualization relative to prior work is good, I find that the paper lacks in clarity. In particular, it is not exceptionally well written, and leaves some sections with much to be desired in terms of exposition. Specific examples are: - Table 1: What does 1/Low mean? The superscript 3 leads to nowhere. - Middle of page four: “Hence, this is controlled by 1. The alignment…, 2. The spectrum”. What exactly is being controlled here? And why exactly is it controlled by 1. and 2. ? This is not immediate to me, and I think this needs to be made more precise. - Over-use of italics in the introduction makes it hard to know what to focus on. I would recommend maximum one italicized sentence per paragraph. 
- Section 4.1: be more precise about the model, I don’t know what it is at this point. I later identify that $X + A$ represents the spiked data, but this must be made clear earlier on. I find “Let A be the noise matrix” to not be clear enough. - Theorem 1: a comment on the proof technique for this theorem would be helpful, even if you have already mentioned it previously (or given the intuition). Also, the interpretation of Theorem 1 is not clear until we read Theorem 2, could you interpret Theorem 1 a bit more and specifically identify what it says that is not said in Theorem 2? Is Theorem 2 just a corollary of Theorem 1? - The first sentence of the abstract is not clear. Overall, the abstract may need to be rewritten in a more professional manner, not referring to previous works as "believing" in something, but something more precise. Technical Quality: 4 Clarity: 2 Questions for Authors: Figure 2: I see a clear peak, but I do not see an initial “descent”. Is there something I am missing here? Why does the model have best generalization at $c = 0$? I am guessing that the focus of the study is on the peak, although that is not the full picture of double descent as I am not seeing the initial descent. That is okay, but just make clear why there is no initial descent. Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the paper's good contextualization, its original viewpoint, and its significant result. We now address the reviewer's concerns. > Clarity: Although the contextualization relative to prior work is good, I find that the paper lacks in clarity. In particular, it is not exceptionally well written, and leaves some sections with much to be desired in terms of exposition. Specific examples are: We thank the reviewer for pointing out these concerns. We shall address them in the revised version of the paper. > Table 1: What does 1/Low mean? The superscript 3 leads to nowhere. Footnote 3 points to Appendices A and F, which present a conjectured formula for the low-rank case with experimental evidence. We shall update the table to refer only to rank 1, the case considered in the main text. > Middle of page four: “Hence, this is controlled by 1. The alignment…, 2. The spectrum”. What exactly is being controlled here? And why exactly is it controlled by 1. and 2. ? This is not immediate to me, and I think this needs to be made more precise. We apologize; there was a typo in the equation. It should read $\sum_{i=1}^r \frac{(y^T V)_i^2}{\sigma_i^2}$. This quantity depends on two things: the product $y^TV$, which we call the alignment, and $\sigma_i^2$, which is the spectrum. > Over-use of italics in the introduction makes it hard to know what to focus on. I would recommend maximum one italicized sentence per paragraph. We thank the reviewer for the feedback and shall reduce the number of italicized phrases. > Section 4.1: be more precise about the model, I don’t know what it is at this point. I later identify that $X + A$ represents the spiked data, but this must be made clear earlier on. I find “Let A be the noise matrix” to not be clear enough. 
We shall add a definition for the spiked model in Section 4.1. > Theorem 1: a comment on the proof technique for this theorem would be helpful, even if you have already mentioned it previously (or given the intuition). We shall add the following small proof sketch to the paper. *Sketch:* The main steps for the proof are as follows. First, we use the results from [49], which is like a Sherman-Morrison formula but for pseudoinverses. Following this, we rewrite the error as a sum and product of various dependent quadratic forms. We then use ideas from random matrix theory and concentration of measure to show that each quadratic form concentrates on a deterministic number that depends on the Stieltjes transform of the limiting empirical spectral distribution. We then show that the products/sums of dependent forms also concentrate. This gives us the error rate as well. > Also, the interpretation of Theorem 1 is not clear until we read Theorem 2, could you interpret Theorem 1 a bit more and specifically identify what it says that is not said in Theorem 2? Theorem 1 provides the complete error curve as a function of $c$, not just where the maximum is. Prior works such as [24,48] have shown that noise on the independent variable acts as a regularizer. Hence, we have two regularizers for the problem - the noise and the ridge regularization. It is interesting to explore their tradeoffs. We do this in Appendix D. We shall expand on this in the main text. > Is Theorem 2 just a corollary of Theorem 1? Theorem 2 can be viewed as a corollary of Theorem 1. It follows by taking the leading terms in Theorem 1 and doing calculus (compute the first derivative to find the critical point and the second to check that it is a maximum). > The first sentence of the abstract is not clear. Overall, the abstract may need to be rewritten in a more professional manner, not referring to previous works as "believing" in something, but something more precise. 
We thank the reviewer for pointing out this concern and shall change the phrasing to make it clearer. > Figure 2: I see a clear peak, but I do not see an initial “descent”. Is there something I am missing here? Why does the model have best generalization at $c = 0$? I am guessing that the focus of the study is on the peak, although that is not the full picture of double descent as I am not seeing the initial descent. That is okay, but just make clear why there is no initial descent. The reviewer is correct that there is no initial descent. This is actually quite common for linear models. See Hastie et al (2020) for many such examples. Whether the error is minimum at $c = 0$ vs $c = \infty$ is quite interesting and related to benign overfitting. $c \approx 0$ is the case when $n \gg d$. Hence, we have a lot of data points, so classically we hope for consistent estimators and can expect to do well. The $c \to \infty$ case is when $d \gg n$. This is the largely overparameterized case. If the global minimum is here, this suggests that the model exhibits benign overfitting. For our examples, benign overfitting seems absent, and hence, the minimum is at $c = 0$. We hope this has addressed the reviewers' concerns and improved their opinion about our work. If the reviewer has further concerns, please let us know. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for addressing my questions, they have helped me better understand the paper.
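To make the "peak without an initial descent" behaviour discussed in this thread concrete, here is a small simulation sketch. It is our own illustration using a standard output-noise setup and the minimum-norm least squares fit, not the paper's input-noise model, and the constants (noise level 0.5, $n = 50$) are arbitrary:

```python
# Toy illustration (ours, not the paper's model): test risk of the min-norm
# least squares fit under output noise, as c = d/n varies. The risk spikes
# near the interpolation threshold c = 1, with no initial descent before it.
import numpy as np

rng = np.random.default_rng(0)
n, n_tst = 50, 500

def avg_risk(d, trials=20):
    errs = []
    for _ in range(trials):
        beta = rng.standard_normal(d) / np.sqrt(d)   # ground-truth signal
        X = rng.standard_normal((n, d))
        y = X @ beta + 0.5 * rng.standard_normal(n)  # output noise
        beta_hat = np.linalg.pinv(X) @ y             # min-norm solution
        X_tst = rng.standard_normal((n_tst, d))
        errs.append(np.mean((X_tst @ (beta_hat - beta)) ** 2))
    return float(np.mean(errs))

cs = [0.2, 0.6, 1.0, 1.6, 2.0]
risks = [avg_risk(int(c * n)) for c in cs]  # expect a spike at c = 1
```

Plotting `risks` against `cs` shows the risk rising toward $c = 1$ and falling afterward, i.e. the peak rather than a full double-descent curve with an initial dip.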
Summary: The authors explore the double descent phenomenon, postulating that the location of the peak (that separates the "classical" and the "modern" interpolating regime) depends on the properties of the spectrum and the eigenvectors of the sample covariance. In particular, the authors show that the violation of one of two assumptions (assumption 1: Alignment of y and right singular vectors of X; assumption 2: Stieltjes Transform Peak Assumption) can move the peak from the interpolation point into the under-parameterized regime. They also present two simple examples that exhibit double descent in the under-parameterized regime and do not seem to occur for reasons provided in prior work. Strengths: - The paper tackles a very important research topic, and tries to understand the reasons behind the location of the peak in double descent. - The work seems rigorous and the contributions relevant. - Overall, the paper is well-written and reasonably clear. Weaknesses: - The conclusions of the paper are very brief and, from my point of view, not very informative (see section 6 in the paper). In relation to this, I also perceive a certain imbalance in the weight of the two examples provided: while the first (the one related to "Alignment Mismatch") occupies 3 pages of the work, the second example ("Shifting Local Maximum for Stieltjes Transform as a Function of c") is addressed more hastily (one page). - The volume of information provided by the paper is very high. From this point of view, I think it would be positive to recapitulate and indicate clearly, and in a simple and intuitive way, the way in which the risk curves shown throughout the paper are created (Figures 2, 4 and 6). The same applies to Figure 3: what ablation experiments do the authors refer to? Technical Quality: 3 Clarity: 3 Questions for Authors: - What do the authors exactly mean by input and output noise? - In Table 1 (page 3), where are the footnotes related to numbers 3 and 4? 
- In Figure 3, what ablation experiments do the authors refer to? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - In my opinion, the authors do not sufficiently discuss the limitations of the work performed. In fact, in the NeurIPS Paper Checklist, they only state that "We believe the main purpose of the paper is to show that a certain phenomenon exists and are very careful with our assumptions." Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding that the paper tackles a very important research topic, is rigorous with relevant contributions, and is well written. We now address the reviewer's concerns. > The conclusions of the paper are very brief and, from my point of view, not very informative (see section 6 in the paper). In relation to this, I also perceive a certain imbalance in the weight of the two examples provided: while the first (the one related to "Alignment Mismatch") occupies 3 pages of the work, the second example ("Shifting Local Maximum for Stieltjes Transform as a Function of c") is addressed more hastily (one page). We agree that the conclusions are short. In the final version of the paper, we shall increase the length of the conclusions to make them more substantive while balancing the two examples. In particular, we will add the stronger version of Theorem 4 mentioned in the response to reviewer 4cXR. > The volume of information provided by the paper is very high. From this point of view, I think it would be positive to recapitulate and indicate clearly, and in a simple and intuitive way, the way in which the risk curves shown throughout the paper are created (Figures 2, 4 and 6). The same applies to Figure 3: what ablation experiments do the authors refer to? Thank you for the feedback. We shall do so. The ablation experiment is described on lines 212 to 218, where we break the misalignment in two ways. First, we consider the unregularized problem and artificially shift the spectrum of the noise matrix. This results in the shifted noise matrix having the same spectrum as the effective spectrum in the regularized problem. However, the alignment wasn't broken, so we see the peak at 1 (Figure 3, left). Second, for the regularized problem, we artificially fixed the alignment and noticed that the peak is now in the over-parameterized regime. 
Hence, the experiment validates the theory that the location of the peak was due to the misalignment. > What do the authors exactly mean by input and output noise? Consider a linear function $y = \beta^T x$. This is a function that we are trying to fit. However, if we had access to exactly the correct inputs $x$ and outputs $y$, then solving the problem would be easy, and we would never see a double descent. To see double descent, we need to introduce noise into the problem. This can be done in two ways. 1. Output noise. That is, we do not receive the responses $y$, but noisy versions, so $y = \beta^T x + \text{noise}$. 2. Input noise. Now, instead of getting noisy $y$ measurements, we get the true measurements $y$ but receive noisy inputs $x$. > In Table 1 (page 3), where are the footnotes related to numbers 3 and 4? We apologize for this. Footnote 3 points to Appendices A and F, which discuss the low-rank version of Theorem 1. Appendix A provides numerical experiments to verify the formula, and Appendix F has a statistical physics-type derivation for the error. Footnote 4 states that [41] only considers optimal regularization. > In Figure 3, what ablation experiments do the authors refer to? The ablation experiment is described on lines 212 to 218, where we break the misalignment in two ways. First, we consider the unregularized problem and artificially shift the spectrum of the noise matrix. This results in the shifted noise matrix having the same spectrum as the effective spectrum in the regularized problem. However, the alignment wasn't broken, so we see the peak at 1 (Figure 3, left). We hope that this has addressed the concerns of the reviewers and improved their opinion about our work. 
If the reviewer has further concerns, please let us know. --- Rebuttal Comment 1.1: Title: Score raised Comment: Dear authors, I've read your responses to my comments, as well as your responses to all other reviewers' comments, and I've increased my score (moving from "6: Weak Accept" to "7: Accept"). I thank you for your detailed reply. Best --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for the feedback and help in improving the paper. We also thank the reviewer for increasing their score.
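The output-noise versus input-noise distinction explained in this rebuttal can be written out in a few lines. The following is our own minimal sketch (the noise level 0.3 and the dimensions are arbitrary), not the paper's setup:

```python
# Minimal sketch of output noise vs. input noise in least squares
# (our illustration; noise level and dimensions are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
beta = rng.standard_normal(d)
X = rng.standard_normal((n, d))

# 1. Output noise: clean inputs X, noisy responses y = X beta + noise.
y_out = X @ beta + 0.3 * rng.standard_normal(n)
beta_out = np.linalg.lstsq(X, y_out, rcond=None)[0]

# 2. Input noise: clean responses y = X beta, but only X + A is observed.
A = 0.3 * rng.standard_normal((n, d))
beta_in = np.linalg.lstsq(X + A, X @ beta, rcond=None)[0]

# Input noise tends to shrink the estimate toward zero, i.e. it acts like an
# implicit ridge regularizer, consistent with the works [24, 48] the rebuttal
# cites for "noise on the independent variable acts as a regularizer".
```

Comparing `np.linalg.norm(beta_in)` with `np.linalg.norm(beta_out)` on repeated draws illustrates the shrinkage effect of input noise.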
Summary: In this paper, the authors focus on the generalization performance of linear least squares regression and show the existence of double descent generalization curve in the under-parameterization regime. In particular, the authors argue, in the linear model in (1) under study (which is slightly different from standard linear models in the literature, but well motivated), that the generalization risk can have a peak in the under-parameterized regime that is due to the alignment between the singular space of the data and the target, and the spectrum of the data covariance, instead of the raw dimension ratio or the explosion of the estimator norm. Some numerical results are provided to support the theoretical analysis. Strengths: The problem under study is of significance. The message of this paper looks interesting. But I find it a bit hard to really understand the results and contribution, see my comments below. Weaknesses: While this paper looks interesting, it is a bit hard for me to really understand the results and contribution. I think making precise the dimension settings (relation between $n, d, n_{tst})$ will address this issue, see some of my detailed comments below. Another issue is the contribution: while Theorem 1 is rather general, the discussion thereafter seems all special cases: For example Theorem 2 is a special case, and the results in Sec 4.3 and 4.5 are essentially numerical. The discussions in Sec 5 is interesting but again a very special setting (mixture model of multivariate Gaussian and a fixed direction) without any motivation. It is thus difficult for me to evaluate the significance of this work. Technical Quality: 3 Clarity: 2 Questions for Authors: * line 35: when summarizing the contribution of this work, it would be helpful to forward point to the corresponding theoretical result and/or definition, for the sake of a precise statement of the technical result or the definition (for example, the spiked covariate model). 
* it seems that the footnotes 3 and 4 are missing? * Equation after line 123: I am a bit confused here. What is the purpose here? Is $\hat \beta$ still the min norm solution, then what is $\beta$? * To make Definition 1 more rigorous, perhaps say here that the convergence of ESD holds in a weak sense as $k \to \infty$, or something like that? * Theorem 1: perhaps say somewhere in the theorem that this result holds in the asymptotic setting as $n,d,n_{tst}$ going to infinity at the same pace? * Honestly, I do not understand this result. It seems to me that my previous comment is wrong, and that the result in Theorem 1 does NOT hold in the limit of $n,d,n_{tst} \to \infty$ together, or at least, $n_{tst}$ and $n$ can be much larger than $d$. Some specifications and discussions are needed here. * Theorem 2 looks interesting. Could the authors comment more on this? For example, note that taking $\mu = 0$ is (more or less) similar to the ridgeless case in the literature. There, according to Theorem 2, we should have a peak at $c = 1$, in accordance with "classical" double descent. So, should we understand Theorem 2 as an extension of "classical" double descent to the regularized setting? Or is this due to the model in (1) and (2)? * line 207 -208: $\hat \Sigma^T \hat \Sigma = \Sigma^T \Sigma + \mu^2$ a typo here? --- I thank the authors for their detailed reply, which helps me better understand their theoretical results and their contribution I increase my score accordingly. I, nonetheless, feel that there are a few typos that need be fixed and clarifications needed, in the current version of the paper. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: I do not see any potential negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding that the problem we study is significant and that the findings are interesting. We now address the reviewer's concerns. > Another issue is the contribution: while Theorem 1 is rather general ... It is thus difficult for me to evaluate the significance of this work. The paper's main purpose is to attempt to understand the reasons peaks occur in the excess risk curve and their locations. Prior work has phrased this as occurring due to overparameterization and the location being on the boundary of the under and overparameterized regimes. Additionally, some prior work [32] has criticized parameter counts as a measure of complexity, saying that parameter counts might overestimate the true complexity. Hence, shifting the boundary between under and over-parameterized to the **right, not the left**. We question this narrative. In particular, we show that the location of the peak is determined by technical reasons in random matrix theory **and not because it is the boundary between over and under-parameterized**. To show this, we present the two conditions in **Section 3 and, in particular, lines 145 to 148 highlight them again.** The rest of the paper then presents two examples where each example violates exactly one of the conditions, and we show that the peak moves to the left. In particular, our examples let us move the peak to the left and control its location. > Scaling of $d$, $n$, and $n_{tst}$ This is a great question. Our results are valid for **any** $n$, $d$, and $n_{tst}$. However, they are only meaningful when $n$ and $d$ are large, and $d$ and $n$ are proportional. Note that we assume that $\sigma^2_{tst} = O(n_{tst})$. The last error is mostly independent of the scaling of $n_{tst}$. This can be seen as follows. The primary proof technique is concentration of measure. In particular, we want to show that the risk concentrates. 
Theorem 1 can be interpreted as saying when $c = d/n$ that (grouping the first and second term together) $$ \left|\mathcal{R}(c, \mu) - \frac{\sigma\_{trn}^2(\sigma\_{trn}^2 + 1)}{2d\tau^2} \left( \frac{c(1+c+\mu^2 c)}{\sqrt{(1-c+\mu^2 c)^2+4\mu^2 c^2}} - 1\right) - \frac{\sigma\_{tst}^2}{\tau^2 n\_{tst}} \right| = o\left(\frac{1}{d}\right) = o\left(\frac{1}{n}\right) $$ Hence, we show that the error concentrates around the above expression with an error of order $o\left(\frac{1}{d}\right)$. Hence, if $d, n$ are small, the error might be large; however, if $d, n$ are large, the error will be small. To make sure this is meaningful, suppose $\sigma_{trn}^2 = \Theta(n) = \Theta(d)$ (we are assuming $n$ and $d$ are proportional) and $\sigma_{tst}^2 = \Theta(n_{tst})$. Then we see that $\tau^2 = \Theta(n^2) = \Theta(d^2)$. Then we see that the first term $\frac{\sigma_{trn}^2(\sigma_{trn}^2 + 1)}{2d\tau^2} \left( \frac{c(1+c+\mu^2 c)}{\sqrt{(1-c+\mu^2c)^2+4\mu^2c^2}} - 1\right)$ is $\Theta(1/d) = \Theta(1/n)$. The final term $\frac{\sigma_{tst}^2}{\tau^2 n_{tst}}$ is of order $\Theta(1/d^2)$. Thus, the whole expression is of order $\Theta(1/d)$, while the error goes to zero faster. > line 35: when summarizing the contribution of this work, it would be helpful to forward point to the corresponding theoretical result and/or definition, for the sake of a precise statement of the technical result or the definition (for example, the spiked covariate model). Thank you for this suggestion. We had forward pointers to the Theorems; however, we shall do this for the definitions as well. > Footnote links We apologize for this. Footnote 3 points to Appendices A and F, which discuss the low-rank version of Theorem 1. Appendix A provides numerical experiments to verify the formula, and Appendix F has a statistical physics-type derivation for the error. Footnote 4 states that [41] only considers optimal regularization. 
> Line 123 Apologies, it should be $\hat{\beta}$ on line 123 and the equation should be $\frac{(y^TV)_i^2}{\sigma_i^2}$ > Definition 1 Thank you, we shall add the phrase. > Theorem 1: perhaps say somewhere in the theorem that this result holds ... and > Honestly, I do not understand this result... Please see our response on the scaling of $d$ and $n$. *It is imperative that we clarify any concerns about the theoretical results. Please let us know if something is still unclear.* > Theorem 2 looks interesting... Setting $\mu = 0$ exactly recovers the result from [24], so it is an extension of that result. It is not a direct extension of the results in, say, Dobriban and Wager, Hastie et al., or Bartlett et al. Those settings are different. In our setting, we have a low-dimensional signal, the response depends on the signal, and there is noise on the inputs. In the setting of Dobriban and Wager, Hastie et al., or Bartlett et al., we have a full-dimensional signal and noise on the outputs. The regularized extension for these works can be seen in [11,23]. Here, we see double descent for certain $\mu$, and for the optimal $\mu$, we do not. **However, the peak does not move!** This and more prior work are summarized in Table 1. In our work, the peak moves with $\mu$, which is quite surprising. *Classical wisdom states* that increasing $\mu$ would *increase* the regularization, hence would *decrease* the complexity of the model. Hence, we would need *a larger* number of parameters to overfit. Hence, the peak would move to the *right* in the overparameterized regime. However, we show that the peak moves to the **left**! > Line 207-208 Apologies, that is a typo. It should read $\hat{\Sigma}^T \hat{\Sigma} = \Sigma^T \Sigma + \mu^2 I$. We thank the reviewer again for their detailed comments. We hope that our response addressed all of the concerns. We would be eager to continue the conversation if there are any more concerns. 
We hope that the response has improved the reviewer's opinion of our work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed reply, which helps me better understand their theoretical results and their contribution. I increase my score accordingly. I, nonetheless, feel that there are a few typos that need to be fixed and clarifications needed in the current version of the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for their feedback and will incorporate the same.
Summary: The paper considers the problem of linear least squares. Its main contribution is presenting two examples of double descent in the under-parameterized regime. Strengths: - The paper is well written: Related works are sufficiently discussed (to my knowledge); the introduction is well-motivated and easy to follow; theorems are often followed by examples, figures, and illustrations helping the reader understand the results. - The problem the paper investigates and the perspective the paper takes is quite interesting. While the mainstream research in the field focuses on double descent in the case of over-parameterization, the paper analyzes under-parametrization in-depth and presents several results that improve one's understanding of double descent. Weaknesses: - The paper takes unconventional notations that make the paper more challenging to penetrate. For example, it uses row vector notations and writes vector-matrix multiplication $\beta^\top X$ rather than the more common matrix-vector multiplication $X^\top \beta$. Sometimes I also found that the notations of singular vector $u$ and regularization parameter $\mu$ can be confusing as they look similar. - The two examples the paper offered are indeed examples. The reason is that the paper's assumptions are quite strong. For example, Assumption 3 assumes the test and training data matrices are both of rank $1$, and Theorem 4 has the orthogonality assumption $\beta^\top z=0$ which greatly simplifies the model and analysis. Technical Quality: 3 Clarity: 3 Questions for Authors: I have no questions. It should be noted that I am not an expert in the exact area of double descent. I am not very familiar with the proof techniques used in the literature and am unable to make comments on technical depth. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding our paper well-written and our perspective interesting. We now address the reviewer's concerns. > Notation We shall fix $\beta^T X$ to $X^T\beta$ to align the paper better with the prior convention. We shall change the font for the $u$ to clarify the difference. > Strong Assumptions We agree with the reviewer that some of our assumptions are strong. However, we believe that this helps us highlight the dependence of the location of the peak on 1) the alignment between $y$ and $V$ and 2) the peak in the Stieltjes transform curve at 0. However, here we discuss some relaxations of the assumptions. ------------ For the rank 1 assumptions, we refer the reviewer to Appendix A and Appendix F, where we provide a conjectured formula for general rank $r$ data as well as numerical verification of the conjecture. ------------- For the orthogonality assumption, this is primarily to simplify the proof. **Please note that in the numerical simulations displayed on the left of Figure 7, $\beta$ is not orthogonal to $z$.** This version of the result can be proved. We will update the paper with the result without assuming orthogonality. Here are the proof changes sketched out. If we do not make this assumption, then at the bottom of page 19, we would have an additional term: $$ \frac{\|v\|^2}{1+\|v\|^2 z^T(AA^T)^{-1}z} \beta^T z z^T(AA^T)^{-1} $$ Thus, on line 565, when we compute the excess risk, instead of $$\mathbb{E}\left[\left\|\xi^T \begin{bmatrix} A^T \\ vz^T \end{bmatrix} \left(AA^T + \|v\|zz^T\right)^{-1} \right\|^2 | X \right],$$ we would have $$\mathbb{E}\left[\left\|\xi^T \begin{bmatrix} A^T \\ vz^T \end{bmatrix} \left(AA^T + \|v\|zz^T\right)^{-1} + \frac{\|v\|^2}{1+\|v\|^2 z^T(AA^T)^{-1}z} \beta^T z z^T(AA^T)^{-1}\right\|^2 | X \right]$$ Then, we can expand the norm into parts and see that the cross terms are zero due to $\xi$ being independent from $z, A$ and having mean zero.
The first term would be $$\mathbb{E}\left[\left\|\xi^T \begin{bmatrix} A^T \\ vz^T \end{bmatrix} \left(AA^T + \|v\|zz^T\right)^{-1} \right\|^2 | X \right],$$ which we have already shown how to compute. For the second term, to understand its contribution to the norm, we would need to compute $$ \left(\frac{\|v\|^2}{1+\|v\|^2 z^T(AA^T)^{-1}z}\right)^2 \text{Tr}(z^T(AA^T)^{-2} z) $$ Using the estimates from lines 568-570 and the same concentration results from [24], we see that this concentrates around $$ \left(\frac{\|v\|^2}{1+\|v\|^2\mathbb{E}\_{\lambda \sim \nu}\left[\frac{1}{\lambda}\right]}\right)^2 \mathbb{E}\_{\lambda \sim \nu}\left[\frac{1}{\lambda^2}\right] $$ Then, since the covariance for the Gaussian part is $\frac{\pi_1}{d} I$, multiplying, we get that this term goes to zero as $d \to \infty$. Hence, the error formula is unchanged. ------------ We hope that this has addressed the concerns of the reviewers and improved their opinion of our work. If the reviewer has further concerns, please let us know. --- Rebuttal 2: Title: Reply Comment: Dear authors, thank you for your reply. It has nicely addressed my concerns. I have thus increased my score by 1.
Rebuttal 1: Rebuttal: We thank the reviewers for their time, effort, and feedback. The paper's main purpose is to attempt to understand the reasons peaks occur in the excess risk curve and their locations. Prior work has phrased this as occurring due to overparameterization and the location being on the boundary of the under- and overparameterized regimes. Additionally, some prior work [32] has criticized parameter counts as a measure of complexity, saying that parameter counts might overestimate the true complexity, hence shifting the boundary between the under- and overparameterized regimes to the right, not the left. We question this narrative. In particular, we show that the location of the peak is determined by technical reasons in random matrix theory and not because it is the boundary between over- and underparameterized. To show this, we present the two conditions. The rest of the paper then presents two examples where each example violates exactly one of the conditions, and we show that the peak moves to the left. We would also like to summarize the strengths of the paper as highlighted by the reviewers. We address the concerns of the reviewers in the individual responses. 1. The paper studies an important problem. \ a. Reviewer LfUR - "The problem under study is of significance."\ b. Reviewer RQj2 - "The paper tackles a very important research topic, and tries to understand the reasons behind the location of the peak in double descent."\ c. Reviewer SLpr - "addresses the prominent double-descent phenomenon in machine learning, which is an important primitive for understanding generalization and other properties of estimators operating on high-dimensional data" 2. The perspective of the paper is new, interesting, and rigorous. \ a. Reviewer 4Q1q - "Their analysis is thorough.", "Experiments provided to support each theorem also look thorough."\ b. 
Reviewer 4cXR - "The problem the paper investigates and the perspective the paper takes is quite interesting.", "presents several results that improve one's understanding of double descent."\ c. Reviewer RQj2 - "The work seems rigorous and the contributions relevant."\ d. Reviewer SLpr - "The paper considers an original viewpoint on the problem of double-descent", "This result is significant" We hope that our responses address the reviewers' concerns. If there are concerns that have not been addressed or need further discussion, please let us know.
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors show several facts about the double descent phenomenon for the linear regression model with L2 loss and Frobenius norm regularization. They show that taking different assumptions from previous work moves the peak of the risk of the problem from the interpolation point into the under-parameterized regime. The provided theorems concretely describe the position of the peak risk, including separating the risk into several terms: the bias and the norm of the estimator. The authors also provide derivations (appendix) and experiments. Strengths: Their analysis is thorough. Especially, reasoning about the change of trends of the risks in Figure 4 through the norm-of-the-estimator term supports the reliability of the results. They clarify proof steps in the appendix. Experiments provided to support each theorem also look thorough. Weaknesses: The authors' results, which do not coincide with prior theory, are based on different assumptions. Can the authors discuss how wide the cases covered by their assumptions are? Technical Quality: 3 Clarity: 3 Questions for Authors: Line 190: Is the assumption "$d$ is sufficiently large" used only to assume $o\left(1/d\right)=0$ in the Equation between Lines 177 & 178? If so, clarifying it in the main text would be better. Does $\left\|W_{opt}\right\|_F$ in Line 240 indicate its expectation? I recommend the authors double-check the overall text and equations. Here is a list of errata & typos. Note that from the equation between Line 119 and Line 120, $\hat \beta\in\mathbb{R}^{d\times k}$. * Line 120: $\hat\beta = y X^\dagger$ -> $\hat \beta = \left(X^\dagger\right)^Ty^T$ * Displayed Equation between Lines 123-124: The denominator seems wrong. The following should be correct: $$ \left\|\hat\beta\right\|^2 = \sum_{i=1}^{\mathrm{rank}\left(X\right)}\frac{\left(y V\right)_i^2}{\sigma_i^2} $$ * Line 129: It would be better to add a couple of words emphasizing $\Sigma_n^{1/2}$ indicates the diagonal matrix $\Sigma$ in $X=U\Sigma V^T$ for readability. 
$\Sigma_n^{1/2}z_i$ looks like a summation symbol. I first thought it was a typo of $\sum_{i=1}^{1/2}z_i$. * Displayed equation between Lines 137-138: $z\in \mathbb{C}\setminus J$ -> $\zeta\in \mathbb{C}\setminus J$ * Lines 239-240: Is "for $n$ large enough and $d=cn$" a typo of "for $n$ and large enough $d=cn$"? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Does the previous work (peak at interpolation point, using Assumptions 1 and 2) also rely on Assumptions 3 and 4? Or are you using just a different set (A 2 & 3 & 4) of assumptions which is neither a necessary nor a sufficient condition of the one in previous work (A 1 & 2)? It should be stated more clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding our analysis thorough, especially the reasoning for the trends. We now address the reviewer's concerns. > The authors' results, which do not coincide with prior theory, are based on different assumptions. Can the authors discuss how wide the cases covered by their assumptions are? We believe that much prior work assumes Assumptions A1 (alignment between $y$ and $V$) and A2 (Stieltjes Transform Peak). **In our paper, we want to construct examples that violate these two assumptions.** Hence, the example in Section 4 makes assumptions A3 and A4 (not A1 and A2), while the example in Section 5 is a mixture of a high-dimensional Gaussian and a one-dimensional Gaussian. Assumptions A3 and A4 exist in prior work such as [24,37]. These works do have a peak at $c=1$. In our paper, setting $\mu = 0$ recovers [24]. Appendices A and F have the corresponding result for which setting $\mu = 0$ recovers [37]. The mixture example can be viewed as an extension of prior theory presented in papers such as Dobriban and Wager, and Hastie et al., by taking data distributions that satisfy their assumptions and creating a mixture with a distribution supported on a lower dimensional set. Setting $\pi_1 = 1$ (i.e., removing the low dimensional component) recovers an example that satisfies their assumptions. Finally, we would like to highlight some prominent examples that explicitly assume A1 and A2 or whose assumptions imply A1 and A2 are satisfied. **Dobriban and Wager, 2015** *A1* They assume that $\beta$ (and hence $y$) has an isotropic distribution (Assumption B in the arxiv version). Hence, $y^T V$ has an isotropic distribution. Hence, they assume A1. *A2* The paper assumes assumption A and that the operator norm of the expected sample covariance is uniformly bounded. 
These assumptions then satisfy the assumptions of Theorem 1.1 in [BZ08], which implies that the Stieltjes transform of the limiting distribution satisfies $$ m(z) = \int \frac{1}{t(1-c-czm) - z} dH(t), $$ where $H$ is the limiting distribution of the population covariance matrix; our assumption A2 is concerned with the function $c \mapsto m(0)$. Here, we see that $$ m(0) = \int \frac{1}{t(1-c)} dH(t) $$ This function is clearly maximized at $c = 1$. Hence, it satisfies assumption A2. **Hastie, Montanari, Rosset, Tibshirani, 2020** *A1* They assume that the data $x_i \sim \mathcal{N}(0,\Sigma)$. If the $x_i$ are the columns of $X = U\Sigma V^T$ (as is the notation in our paper), then $XX^T/n = U\Sigma\Sigma^T U^T / n$ is the sample covariance. Hence, it does not depend on $V$, which implies that the distribution of $V$ doesn't impact the sample covariance matrix. Hence, we believe this is similar to assuming that $V$ is uniform, and hence that $y^T V$ is isotropic. *A2* Similar to the Dobriban paper, their data satisfies the assumption of Theorem 1.1 in [BZ08]. [BZ08] Zhidong Bai and Wang Zhou. Large sample covariance matrices without independence structures in columns. > Line 190: Is the assumption "$d$ is sufficiently large" used only to assume $o(1/d) = 0$ in the Equation between Lines 177 & 178? If so, clarifying it in the main text would be better. The reviewer is right. We shall clarify this. > Does $\|W_{opt}\|_F$ in Line 240 indicate its expectation? Yes, thank you for pointing this out; we shall correct this. > I recommend the authors double-check the overall text and equations. Here is a list of errata & typos. We thank the reviewer for the typos found. We shall correct them. > Does the previous work (peak at interpolation point, using Assumptions 1 and 2) also rely on Assumptions 3 and 4? Or are you using just a different set (A 2 & 3 & 4) of assumptions which is neither a necessary nor a sufficient condition of the one in previous work (A 1 & 2)? 
It should be stated more clearly. Some prior work assumes A3 and A4 (such as [24,37]; setting $\mu = 0$ in our result recovers those results); however, other prior work does not. The key fact is that A1 and A2 hold in these prior works. --------- We hope that we have addressed all of the reviewers' concerns and questions and improved their opinion of our work. If the reviewer has further concerns, please let us know. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to the questions. It helped me understand this work better. I have increased my score by 1.
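To make the A2 verification in the rebuttal above concrete: since $m(0) = \int \frac{dH(t)}{t(1-c)} = \frac{1}{1-c}\int \frac{dH(t)}{t}$, the map $c \mapsto m(0)$ is increasing and blows up exactly at $c = 1$, whatever the spectrum $H$. A minimal numerical sketch, using a hypothetical spectrum supported on $[1, 2]$ (not taken from the paper):

```python
import numpy as np

def m0(c, spectrum):
    # m(0) = E_{t ~ H}[1 / (t (1 - c))], with H an empirical spectrum.
    return np.mean(1.0 / (spectrum * (1.0 - c)))

# Hypothetical population spectrum drawn uniformly from [1, 2].
rng = np.random.default_rng(0)
spectrum = rng.uniform(1.0, 2.0, size=100_000)

vals = [m0(c, spectrum) for c in np.linspace(0.05, 0.95, 19)]
# m(0) increases monotonically in c and diverges as c -> 1, so the
# Stieltjes-transform condition A2 places the peak at c = 1.
assert all(a < b for a, b in zip(vals, vals[1:]))
```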
End-To-End Causal Effect Estimation from Unstructured Natural Language Data
Accept (poster)
Summary: This work aims at causal effect estimation from unstructured observational text data, proposing an LLM-based pipeline to extract the treatment, covariates, and outcome from the unstructured data. It's an interesting exploration for causal inference, and the pipeline process of the inference method is quite standard. Strengths: (1) The problem solution for treatment effect estimation is quite novel. (2) The proposed LLM pipeline can address a limitation of current effect estimation methods, i.e., that they cannot deal with unstructured data. Weaknesses: (1) Some details are missing, leading to poor readability. (2) Some equations are not correct (based on my understanding). (3) The presentation of the whole methodology is not good enough; although I am not an expert on LLMs, I know how LLMs work. However, I cannot imagine very clearly how the proposed pipeline works and how it estimates the ATE. (4) The evaluation part is kind of weak, given that the baselines are too limited and simple. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. There are some claims that are not well explained, which is confusing. For example, "value may be recouped from outcomes that would otherwise be lost" — this claim sounds good, but I don't know what it means here. 2. The overall idea of this work, to utilize LLMs to explore causal effect estimation from unstructured data, is good; I like it. But the writing does not match the good idea. For example, in lines 54-59, I have no idea what the authors are trying to express; the transitions from sentence to sentence are too stiff. 3. The binary and discrete assumption about the outcome Y and covariates X is too strict. Typically, one can assume the treatment assignment is binary but the outcome and covariates are continuous. It's not clear why this work adopts such an assumption; if so, the applicable scope of the proposed method would be limited. Also, Assumption 4 is too strong. 4. In the outcome predictor eq. 
(9), why is it P(Y=1|R_i,X_i,T_i) instead of P(Y=t|R_i,X_i,T_i)? I don't understand. When t=0, does the outcome predictor still output the expectation over P(Y=1|xxx)? 5. The description of how to extract the covariates and their corresponding values is not clear. How are the feature values determined? For each report, the set of covariates is the same, right? Can you present a specific example? What does each feature value mean in the real world? The readability of the method section is poor. 6. In evaluation Table 1, is "IPW (Structured)" a baseline or the ground truth? There is no discussion in the main body about this baseline, and I am confused about the discussion below because of it, given that "IPW (Structured)" performs the best. 7. Where does the ground truth ATE come from? Can you clarify this point? By the way, I like the idea of this work, but the presentation of this paper is really poor; thus, if the above questions are addressed well, I will consider increasing the score. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and questions! Below, we will clarify all the questions in the review. > (W1) Some details are missing, leading to poor readability. Could the reviewer point to specific details that they found unclear? We are more than happy to provide clarification. > (W2) Some equations are not correct (based on my understanding). Similar to above, if the reviewer can point out particular equations that they found to be incorrect, we are happy to clarify them. > (W3) The evaluation part is kind of weak, given that the baselines are too limited and simple. - Since we propose a novel setting of constructing causal effect estimators from unstructured text data, there aren’t a lot of existing baselines. We constructed fair comparisons and included ablation studies for our experiments. In synthetic settings, we were also able to train baselines that require additional structured data (which NATURAL does not assume access to) and presented those results in Table 1. In real-world settings, we further teased apart the effect of different choices we made in the method implementation, as shown in Figure 4. - If the reviewer has specific suggestions for baselines that are relevant to our setting, we would be happy to include them. > (Q1) There are some claims that are not well explained, which is confusing. For example, "value may be recouped from outcomes that would otherwise be lost" — this claim sounds good, but I don't know what it means here. We apologize for any confusion and expand on the referenced phrase here. - Sometimes there are outcomes that individuals might observe themselves, that are not captured in structured, tabulated observational data, but that are captured in unstructured text data on the internet. - Had these outcomes been tabulated and recorded, they could have contributed in valuable ways to decision-making. 
- NATURAL recovers the value of these outcomes for decision-making by extracting them from unstructured text data. We are happy to change this sentence to “value may be *recovered* from outcomes that would otherwise be lost” in the paper. > (Q2) The overall idea of this work, to utilize LLMs to explore causal effect estimation from unstructured data, is good; I like it. But the writing does not match the good idea. For example, in lines 54-59, I have no idea what the authors are trying to express; the transitions from sentence to sentence are too stiff. The referred lines are intended to explain the different information NATURAL depends on. These are: (1) LLMs to approximate relevant conditional distributions, and (2) expert knowledge to ensure necessary causal assumptions are satisfied by the study design. > (Q3) The binary and discrete assumption about the outcome Y and covariates X is too strict. Typically, one can assume the treatment assignment is binary but the outcome and covariates are continuous. It's not clear why this work adopts such an assumption; if so, the applicable scope of the proposed method would be limited. - Our NATURAL Full estimator requires discrete and finite variables since we sum over them to approximate a conditional expectation with conditional probabilities. If we choose to use samples instead of conditionals, NATURAL can further be extended to continuous variables. - While we have binary outcomes in our experiments, the method and implementations extend to non-binary discrete outcomes easily. - It is true that outcomes and covariates in the real world are often continuous. In order to keep NATURAL applicable in these settings, we discretize these variables. We will explain the exact discretization through a complete walkthrough of the Semaglutide vs. Tirzepatide dataset in the appendix. > Also, Assumption 4 is too strong. Assumption 4 is weak or strong depending on how much information is in the reports. 
Our data filtration pipeline aims to maximize the information in the reports so that Assumption 4 may be met. We discuss this in more detail in Section 3 of the paper as well as our response to Q1 from reviewer 7S1Z. > (Q4) In the outcome predictor eq. (9), why is it P(Y=1|R_i,X_i,T_i) instead of P(Y=t|R_i,X_i,T_i)? I don't understand. When t=0, does the outcome predictor still output the expectation over P(Y=1|xxx)? - We don’t believe there is a mistake in eq. (9). It is a Monte Carlo approximation of the first term in eq. (2). Eq. (2), in turn, is the standard form of the Outcome Imputation estimator (see Ding (2024)). - To clarify our notation, $Y$ is a random variable denoting outcomes and $T$ is a random variable denoting treatments. Therefore, ${Y=t}$ is not an event that any of our estimators would consider. [1] Ding, P. (2024). A first course in causal inference. CRC Press. > (Q5) The description of how to extract the covariates and their corresponding values is not clear. How are the feature values determined? For each report, the set of covariates is the same, right? Can you present a specific example? What does each feature value mean in the real world? The readability of the method section is poor. - While the definition of the covariate variables and their descriptions are the same for reports in a given dataset, the values these variables take will vary for each report. For instance, if a covariate of interest is “age”, its value in the real world may be a different number for different reports. - We will include more details and exact descriptions of each variable for each dataset in the appendix of our final paper. --- Rebuttal 2: Comment: > (Q6) In evaluation Table 1, is "IPW (Structured)" a baseline or the ground truth? There is no discussion in the main body about this baseline, and I am confused about the discussion below because of it, given that "IPW (Structured)" performs the best. Thank you for pointing this out! 
We will clarify this result in the experiments section of the main paper. - IPW (Structured) corresponds to an oracle estimator that has access to the true structured treatments, outcomes and covariates, which can be plugged into the IPW estimator to estimate the ATE. - Hence, it is not directly comparable to methods that do not have access to structured data, like NATURAL. We expect it to provide an upper bound on the performance, as verified in the results of Table 1. > (Q7) Where does the ground truth ATE come from? Can you clarify this point? As mentioned in lines 68-69 and again in lines 281-284, for every dataset we consider, there exists a randomized controlled trial which provides a gold standard estimate of the ATE. We treat this ATE value as “ground truth”. We thank the reviewer for bringing up questions about the parts of our paper they found confusing. We believe the clarifications above improve the readability of our paper. We hope our responses address all their concerns and that they will consider increasing their score accordingly. We are more than happy to provide any further clarifications during the discussion period. Title: Rebuttal by Authors (continued) --- Rebuttal Comment 2.1: Comment: Dear Reviewer p9SY, Thanks for your review! The discussion is ending soon, and it'd be greatly appreciated if you could acknowledge the authors' rebuttal and update your review if necessary. Thank you! AC
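For context on the oracle baseline discussed above: "IPW (Structured)" plugs the true structured treatments, outcomes, and propensities into the standard inverse-propensity-weighting estimator. A minimal sketch on entirely synthetic, hypothetical data (not the paper's datasets, and using a known propensity function for the oracle):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical structured data: one confounder X, binary treatment T, binary outcome Y.
X = rng.normal(size=n)
e = 1 / (1 + np.exp(-X))                  # true propensity P(T=1 | X)
T = rng.binomial(1, e)
p_y = np.clip(0.3 + 0.2 * T + 0.1 * X, 0, 1)
Y = rng.binomial(1, p_y)

# Standard IPW estimate of the ATE using the true (oracle) propensities.
ate_ipw = np.mean(T * Y / e) - np.mean((1 - T) * Y / (1 - e))
print(ate_ipw)  # statistically close to the true ATE of 0.2
```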
Summary: The paper introduces a family of causal effect estimators named NATURAL, designed to use LLMs for mining causal effect estimates from observational text data. The authors address the challenge of automating data curation and using LLMs to impute missing information, presenting a method that conditions on structured variables to assist in computation of causal effect estimators. The authors develop six datasets (two synthetic and four real) to evaluate their method, and demonstrate that NATURAL estimators produce causal effect estimates within close range of ground truth from randomized trials. Strengths: 1. Clear and thoughtful use of LLMs to estimate causal effects from text data. The authors present a cool methodology, properly formalized, for using LLMs to extract and impute structured variables from observational text data. 2. Development and evaluation of six datasets, including real-world clinical trial data. 3. Strong results showing causal effect estimates are within a close range of ground truth values. Weaknesses: 1. The original claims made in the abstract+intro seems to be too grandiose, but in fact it is about the use of LLMs to classify variables of interest from the text. I would change the framing of the paper a bit to better reflect that. It would make the claims of the paper less objectionable and would better demonstrate the usefulness of NATURAL. 2. There is a vast literature in the social sciences on recovering interpretable variables directly from text for estimating causal effects from observational data. Most of these approaches rely on probabilistic approaches like topic models, but recently there’s growing interest in doing so automatically with LLMs. The authors should better address this literature, and compare their approach to relevant methods if possible. 3. The approach is really cool, but like any ML-pipeline approach it surely introduces errors at each step of the process (see for example Egami et al @ NeurIPS 2023). 
Each error will then bias the causal effect estimation. From the results of the paper I’m convinced empirically your approach works well, but would I be able to understand whether it produces accurate estimates on a new dataset? A deeper analysis and perhaps a way to propagate error/uncertainty to the estimation would be very helpful here. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Authors explain that as part of their process, they “filter for reports that are likely to conform to the experimental design”. Could this introduce bias? 2. All of the “classical strategies” used assume no-hidden confounding. Could you imagine how someone would use an LLM with a more complex identification strategy (say IV, or RDD)? You claim at the end of the intro that you can anticipate someone doing so, I wonder if you can elaborate. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitations of their work, acknowledging the dependency on the quality and calibration of LLMs and the potential inaccuracies in approximating the required conditional distributions. They emphasize that NATURAL estimators should not be used for high-stakes decision-making without experimental validation and stress the need for domain expert involvement to ensure causal assumptions are met. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for such a positive and encouraging review! Below, we address all the questions raised in your review. > (W1) The original claims made in the abstract+intro seems to be too grandiose, but in fact it is about the use of LLMs to classify variables of interest from the text. I would change the framing of the paper a bit to better reflect that. It would make the claims of the paper less objectionable and would better demonstrate the usefulness of NATURAL. Thank you for this feedback! The two take-homes that we tried to emphasize in the abstract and introduction were (1) unstructured text data can be used as a *sole* source of rich causal effect information and (2) LLMs can be used to extract this information. We also believe it is important to clearly distinguish our data access setting from the classical settings of causal inference (see response to W2 below). Still, we’re more than happy to make edits to get the tone right, if the reviewer can make some specific suggestions or point out specific sentences that are objectionable. > (W2) There is a vast literature in the social sciences on recovering interpretable variables directly from text for estimating causal effects from observational data. Most of these approaches rely on probabilistic approaches like topic models, but recently there’s growing interest in doing so automatically with LLMs. The authors should better address this literature, and compare their approach to relevant methods if possible. - Thank you for pointing out this section of the literature! We would greatly appreciate any specific references from the reviewer. To reiterate the setting considered for NATURAL, we look at data sources that *only* contain natural language. This contrasts our setting from works like Falavarjani et al. (2017), which rely on a combination of text and numerical or tabular data. 
The ability to use only text data is important and motivated by settings where tabular data is unavailable or incomplete, e.g. neglected diseases, unrecorded abortions in certain countries, or illicit drug use. To our knowledge, such a setting with text as the sole source of information has not been considered yet. - Another example of related work that we found is Ahrens et al. (2021), which estimates latent topics or variables from text, and is relevant to the broad problem we consider. NATURAL is distinct in two ways. First, similar to above, Ahrens et al. rely on tabulated numerical outcomes as well as text data, while NATURAL operates using only text data. Second, NATURAL relies on domain expertise to provide a study design and in particular, define the treatment, outcome and covariate variables to be extracted from text, whereas Ahrens et al. discover these variables in the text. Extending NATURAL to remove dependence on domain expertise may be interesting future work, but would require further assumptions on LLM capabilities. - We will include the above citations and discussion in the final version of our paper. [1] Falavarjani, S. M., Hosseini, H., Noorian, Z., & Bagheri, E. (2017, May). Estimating the effect of exercising on users’ online behavior. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 11, No. 1, pp. 734-738). [2] Ahrens, M., Ashwin, J., Calliess, J. P., & Nguyen, V. (2021). Bayesian topic regression for causal inference. arXiv preprint arXiv:2109.05317. > (W3) The approach is really cool, but like any ML-pipeline approach it surely introduces errors at each step of the process (see for example Egami et al @ NeurIPS 2023). Each error will then bias the causal effect estimation. From the results of the paper I’m convinced empirically your approach works well, but would I be able to understand whether it produces accurate estimates on a new dataset? 
A deeper analysis and perhaps a way to propagate error/uncertainty to the estimation would be very helpful here. Your intuition is correct in that different errors can arise due to the assumptions required by NATURAL, some of which may not be formally testable yet. While this is important future work that we are actively pursuing, there are some approaches we can already use to mitigate and/or understand such errors. - We try to minimize errors in the steps using LLMs via prompt tuning on a handful of examples. We will describe this tuning in full detail in the appendix via a worked example that goes through all the steps of NATURAL for the Semaglutide vs. Tirzepatide dataset. - For methods that involve propensity score estimation, we use the balancing score property of propensity scores as a sanity-check. Figure 5 of our paper confirms that propensity scores estimated by NATURAL do indeed balance covariates across treatment groups. - We will also add a sensitivity analysis to gain more insight into the satisfaction of our Assumption 1 about Strong Ignorability or Unconfoundedness (see response to Q1 of Reviewer oSFK and attached PDF). > (Q1) Authors explain that as part of their process, they “filter for reports that are likely to conform to the experimental design”. Could this introduce bias? - The purpose of filtering for reports that conform to the experiment design is to estimate local ATEs within a specific population. This also makes our estimators comparable to the treatment effect from real-world RCTs that enforce inclusion criteria for participants. In practice, we use LLMs for this step, which can definitely introduce bias and is a limitation of our work, as discussed in Section 5. - In general, we do not expect one estimated ATE to transport to a different experiment design. Given another experiment design, one would have to execute the NATURAL pipeline with that design and estimate a new ATE. 
One advantage of NATURAL is that doing this for different designs is feasible in a time- and cost-effective manner. --- Rebuttal 2: Title: Rebuttal by Authors (continued) Comment: > (Q2) All of the “classical strategies” used assume no-hidden confounding. Could you imagine how someone would use an LLM with a more complex identification strategy (say IV, or RDD)? You claim at the end of the intro that you can anticipate someone doing so, I wonder if you can elaborate. This is a great question! The general strategy behind deriving NATURAL estimators from classical strategies is as follows: - Identify all (structured) variables required to compute the classical estimator. In the case of the Instrumental Variables (IV) approach, this includes variables representing the instrument $Z$, in addition to the $(T, Y)$ that our methods consider. - Next, collect natural language reports that contain such information. For instance, electronic health records or clinical notes at hospitals may contain information relevant to the IV setting. - Extract the required conditional distributions from an LLM using these natural language reports. - Finally, marginalize out $(T, Y, Z)$ and average over reports to compute an ATE. For example, IV estimators are used to measure ATE conditional on compliance, which is a ratio of two treatment effects (see chapter 21 of Ding (2024)) and can be estimated consistently given the reports. [1] Ding, P. (2024). A first course in causal inference. CRC Press. Again, thank you for your very thoughtful review! We hope we have addressed all your questions, but please let us know if we can clarify anything further. --- Rebuttal Comment 2.1: Comment: thank you addressing my comments! i remain supportive of this paper's acceptance.
Summary: Estimating causal effects is costly and time-consuming. The authors propose to use large language models (LLMs) to mine unstructured text data for causal effect estimation. This paper introduces NATURAL, a family of causal effect estimators using LLMs to process unstructured text data. This seems to be a good application of LLMs for the task of causality. Strengths: - Determining the effects of treatments can be expensive and time-consuming, so the paper introduces a novel task of effect estimation using unstructured natural language data. - This method is adaptable to various domains where textual data is abundant. - This seems to be a more practical application of LLMs for the task of causality. Weaknesses: - LLMs might inherit biases present in their training data or hallucinate. - Social media users might not reflect the demographics or behaviors of the broader population. - Finetuning the model to one domain may not lead to good generalisation in others; during fine-tuning, does the LLM learn inductive biases regarding the "topic" or task? Technical Quality: 3 Clarity: 3 Questions for Authors: - How do the authors make sure that all of the confounders are accounted for? I can imagine there to be many confounders mentioned in the text. - There could be a bias in social media text; I thought people would report negative cases more often than positive. Did you observe any such biases? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Mentioned Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful review! Here, we will clarify and address the points brought up in your review. > (W1) LLMs might inherit biases present in their training data or hallucinate. As described in Section 5, this is an important limitation in the use of LLMs, in general as well as in the case of NATURAL. In our experiments, we mitigate errors or hallucinations via prompt tuning. We believe NATURAL will benefit from progress in the field of research with LLMs, but the specific challenge is outside the scope of our work. > (W2) Social media users might not reflect the demographics or behaviors of the broader population. - It is true that there exists selection bias in the data on social media forums and this is an important limitation that we’re hoping to explore in future work. We discuss this and other biases in Appendix E of the submission and plan to expand Section 5 with this discussion in the final version of our paper. - Related to this is the treatment of inclusion criteria that define the population over which we are interested in estimating ATEs. In Appendix G of the paper, we made certain structural assumptions that allowed us to impute unobserved covariates restricted to the inclusion criteria. An alternate approach that we also discussed in Appendix G is to weight the estimated potential outcomes for each datapoint by the relative likelihood that they meet the inclusion criteria of the experiment given the report: $\frac{P(X \in I | R)}{P(X \in I)}$ where $I$ defines the inclusion criteria. We revisited this alternate approach during the rebuttal period and explored using the LLM to estimate $P(X_i \in I | R_i)$ and renormalizing to estimate $\frac{P(X_i \in I | R_i)}{P(X_i \in I)}$. These weights can be estimated with an LLM similar to other conditional distributions described in the paper and involves averaging over reports for the denominator. 
- We show these new results in Table 2 of the attached PDF, which are close to our original results. We believe this alternate weighting approach is a simpler, more intuitive way to understand the treatment of inclusion criteria that also removes the need for additional assumptions. We will include both original and new results in the final draft. > (W3) Finetuning the model to one domain may not lead to good generalisation in others, during fine-tuning does the LLM learn inductive biases regarding the "topic" or task? To clarify our use of LLMs, none of our methods require any fine-tuning. Instead we use LLMs purely for inference. We will clarify this point in Section 4 of the paper. > (Q1) How do the authors make sure that all of the confounders are accounted ? I can imagine there to be many confounders mentioned in the text. - We rely on domain expertise to define the confounders for each setting such that necessary causal assumptions are satisfied. In our case, we used baseline characteristics from real-world RCT specifications which are designed by experts. Together with Assumption 4, which guarantees access to the conditional distribution $P(T = t, Y = y, X = x|R = r)$, these assumptions ensure that our estimators are consistent in theory. - Intuitively, we break this problem into two parts: (1) defining the confounder set $X$ such that Assumption 1 (Strong Ignorability or Unconfoundedness) is satisfied, and (2) correctly sampling these $X$ from $R$, which is guaranteed by Assumption 4 (in theory) and implemented with the help of LLMs (in practice). - Still, we think you bring up a really important point. So, we conducted a sensitivity analysis for the Semaglutide vs. Tirzepatide dataset. Specifically, we follow the sensitivity analysis strategy of Lu and Ding (2023). 
Briefly, the idea is to introduce sensitivity parameters ($\varepsilon_0(X) = \frac{\mathbb{E}[Y(0) | T=1,X]}{\mathbb{E}[Y(0) | T=0,X]}$ and $\varepsilon_1(X) = \frac{\mathbb{E}[Y(1) | T=1,X]}{\mathbb{E}[Y(1) | T=0,X]}$) that quantify the degree of unobserved confoundedness. In our case, sensitivity parameters are the density ratio between the likelihood of each potential outcome in the treated vs. untreated group. The ATE estimate is non-increasing in the sensitivity parameters. So, for positive ATEs, we are looking for the largest sensitivity parameters that maintain the positivity of the ATE, which tells us the degree of unobserved confoundedness that an estimator is robust to. Table 1 of the attached PDF shows that the estimated ATE changes from positive to negative at large values of sensitivity parameters, which means that it is robust to large degrees of unobserved confoundedness. - We further investigate the importance of each confounder as suggested by Section 4 of Lu and Ding (2023), by dropping that covariate as if it is an unobserved confounder and measuring the corresponding sensitivity parameters with the remaining covariates. Figure 1 of the attached PDF follows Figure 1 of Lu and Ding (2023) and shows that in the worst-case (over all possible values of the remaining covariates), our estimator is sensitive to the covariate set considered and each covariate is important to ATE estimation. - We believe this sensitivity analysis significantly improves our paper and will include it in the final version of our paper. Thanks for your feedback! [1] Lu, S., & Ding, P. (2023). Flexible sensitivity analysis for causal inference in observational studies subject to unmeasured confounding. arXiv preprint arXiv:2305.17643. --- Rebuttal 2: Title: Rebuttal by Authors (continued) Comment: > (Q2) There could be a bias in social media text, I thought people would report negative cases more often than positive, did you observe any such biases? 
- Since this relates to the concern about selection bias above, please see our response to W2 above. - Since our experiments compare similar treatments, e.g., comparable availability, we believe the probability of a user reporting their experience is approximately equal in both. Our empirical results suggest low bias, relative to estimates from a real-world RCT. Thank you for the great suggestions and questions! We believe our paper is significantly improved by the edits and clarifications above. We would be happy to clarify anything further in the discussion period. --- Rebuttal Comment 2.1: Title: Thank you for rebuttal Comment: I would like to thank the reviewer for replying to my concerns regarding biases and confounders. I have increased my score. Best regards,
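As an illustrative sketch of the sensitivity analysis discussed in this rebuttal (following the idea of Lu and Ding, 2023): the missing potential outcomes are imputed via sensitivity ratios, and the ATE estimate is swept over increasingly large ratios to find where its sign flips. This is a simplified illustration, not the authors' code; it assumes the sensitivity ratios are constant in $X$, and all fitted quantities below are hypothetical.

```python
import numpy as np

def ate_under_sensitivity(mu1, mu0, e, eps0, eps1):
    """ATE adjusted for an assumed degree of unobserved confounding.

    mu1, mu0   : outcome-model estimates E[Y | T=1, X] and E[Y | T=0, X]
    e          : propensity scores P(T=1 | X)
    eps0, eps1 : sensitivity ratios E[Y(0)|T=1,X]/E[Y(0)|T=0,X] and
                 E[Y(1)|T=1,X]/E[Y(1)|T=0,X], taken constant in X here
    """
    # Impute the missing potential outcomes using the sensitivity ratios:
    # treated units get Y(0) ~ eps0 * mu0, controls get Y(1) ~ mu1 / eps1.
    tau = e * (mu1 - eps0 * mu0) + (1 - e) * (mu1 / eps1 - mu0)
    return float(tau.mean())

# Toy fitted quantities for n units (all values hypothetical).
n = 1000
e = np.full(n, 0.5)
mu1 = np.full(n, 0.6)
mu0 = np.full(n, 0.4)

# The estimate is non-increasing in (eps0, eps1); the point where it
# changes sign indicates how much unobserved confounding a positive
# ATE estimate is robust to.
estimates = {eps: ate_under_sensitivity(mu1, mu0, e, eps, eps)
             for eps in (1.0, 1.2, 1.5, 2.0)}
```

At `eps0 = eps1 = 1` (no unobserved confounding) this reduces to the usual outcome-regression ATE; in the toy numbers above the estimate crosses zero around `eps = 1.5`.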
Summary: The paper seeks to use LLMs for a workflow that extracts structured variables from free text, filters the dataset, and then computes a causal estimate using the imputed variables. The paper applies this methodology to two synthetic and four real-world datasets and shows that it produces estimates that are comparable to those from known ground truth or RCT estimates. Strengths: This is a very ambitious paper that pulls together a complicated workflow for estimating causal effects from real-world natural language datasets. Just showing that this can be done is a significant contribution. The paper is clearly written and the main points are easy to follow. While I am somewhat skeptical of any practical applicability of this work, the paper is reasonably conservative in highlighting its limitations. Weaknesses: The paper at almost every turn relies on prompting an LLM and implicitly assuming that the model returns data from a desired distribution. In general, this will introduce measurement error which could systematically bias the overall method's estimates. See for example [1] and [2] below. The synthetic data evaluation is fairly limited. It would be nice to at least understand how performance varies as sample size increases from a few hundred examples to many thousands (especially as the IPW oracle baselines are not particularly good). This would also allow you to stress-test the different assumptions being made. If you have synthetic data in which reports for certain patients are systematically truncated or censored, or where different groups write reports in different languages or styles, how does that affect the different steps (i-vi)? [1] Ashwin, Julian, Aditya Chhabra, and Vijayendra Rao. "Using Large Language Models for Qualitative Analysis can Introduce Serious Bias." Development Research (2023). [2] Egami, Naoki, et al. 
"Using imperfect surrogates for downstream inference: Design-based supervised learning for social science applications of large language models." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 2 Clarity: 3 Questions for Authors: Line 267-268: What is the point of sampling a persona from Big Five personality traits? How does this simulate realism? Line 147-149 says "if all reports are the constant, empty string ... we have full access to the true observational joint density." Am I correct that this is a necessary assumption for the overall method to work in theory, but the actual implementation (steps (v) and (vi) in lines 206-220) would not work if all reports are the constant, empty string? This may be obvious, but it could be worth clarifying that when you say "we cannot formally guarantee that [our method] satisfies Assumptions 3 and 4" you mean the form of Assumption 4 that does not involve all reports being empty strings. In Tables 1 and 2, which LLMs are used? LLAMA2-70B? Overall, it would be helpful to more clearly label which LLMs are used when. In Line 204, the paper says that the GPT-4 API sufficed for filtering; was GPT-4 used anywhere else? How was the decision made of which LLM to use where? It might be helpful to have a figure or appendix that walks through every step (the paper does this for many steps, just not all in one place) in which an LLM is queried and discusses what assumptions are being made. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Overall, the paper does a good job of not overclaiming. However, see weakness 1 above. I would also suggest the authors discuss the challenges of using RCT estimates to validate observational studies -- there are many reasons why those estimates might not align (selection bias, dropout, etc). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and positive feedback! Below, we address the questions in your review and describe corresponding changes to improve our paper. > (W1) The paper at almost every turn relies on prompting an LLM and implicitly assuming that the model returns data from a desired distribution. In general, this will introduce measurement error which could systematically bias the overall method's estimates. We are able to measure this error in synthetic settings, which we quantify as KL divergence between the true and predicted distributions and visualize in Figure 3. With increasing amounts of data, this KL divergence reduces, as does the ATE error. Nevertheless, we agree that measurement error due to an LLM’s output distribution is a potential limitation of our work, as described in Section 5. We discussed selection and other possible biases in more detail in Appendix E of our submission due to space constraints. We will include this discussion in the main paper in the final version. > (W2) The synthetic data evaluation is fairly limited. It would be nice to at least understand how performance varies as sample size increases from a few hundred examples to many thousands (especially as the IPW oracle baselines are not particularly good). This would also allow you to stress-test the different assumptions being made. If you have synthetic data in which reports for certain patients are systematically truncated or censored, or where different groups write reports in different languages or styles, how does that affect the different steps (i-vi)? - We would like to clarify that the oracle estimates in Table 1 are denoted by “IPW (Structured)”. We will update the discussion of results in Section 6 to clarify this. - We constructed synthetic datasets of 2000 reports each and found these IPW (Structured) oracle estimates to be very close to the ground truth ATE, as sample size was increased to 1024 or higher. 
If the reviewer thinks it is important, we are happy to generate more synthetic data and show performance at higher sample sizes. - Simulating systematic biases in synthetic settings is a great suggestion and important future work! For example, one could simulate reporting bias by subsampling the data based on a subset of the covariates. We ran this during the rebuttal period for the Hillstrom dataset using two covariates (channel and zip code). Indeed this results in a NATURAL IPW ATE of $-3.49$ which has a large error of $9.58$. We will include this result in Table 1 as an example of NATURAL’s current limitation in handling reporting bias, which we hope to address in future work. - We are also open to further discussion on how to investigate data that is biased in other ways, such as being “truncated”, if the reviewer has concrete suggestions. > (Q1) What is the point of sampling a persona from Big Five personality traits? How does this simulate realism? The Big Five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) helped simulate diversity in writing style, tone and report lengths, and hence made the synthetic data more realistic. We observed the generated reports were diverse in their verbosity, style and tone, as one might expect in real-world online forums. > (Q2) Am I correct that this is a necessary assumption for the overall method to work in theory, but the actual implementation (steps (v) and (vi) in lines 206-220) would not work if all reports are the constant, empty string? Your understanding is correct! If the reports are all empty strings, then Assumption 4 requires that the LLM is able to simulate trial outcomes unconditionally (without any data). This would be a very strong assumption that is likely to be violated. We will clarify this in Section 3 of the paper. 
In practice, we filter our data to look for the most informative reports such that they contain some information about $(X,T,Y)$, as described in Section 4. > (Q3) In Tables 1 and 2, which LLMs are used? LLAMA2-70B? Overall, it would be helpful to more clearly label which LLMs are used when. In Line 204, the paper says that the GPT-4 API sufficed for filtering; was GPT-4 used anywhere else? Thank you for pointing this out! We used GPT-3.5 Turbo for filtering steps, GPT-4 Turbo for sampling and LLAMA2-70B for computing conditional probabilities, for all datasets. We will explicitly state these details in Section 6. > (Q4) How was the decision made of which LLM to use where? - We tried different LLMs and chose the best performing ones available at the time, by examining a handful of examples at different points in the pipeline. We also took into account the costs associated with these models. - The filtering steps were relatively easy tasks, but were executed on larger amounts of data; hence we opted for the cheaper GPT-3.5 Turbo model. The sampling steps were tasks involving more reasoning and were executed on smaller datasets that had already been filtered; hence we opted for the best performing model at the time: GPT-4 Turbo. The LLM conditional probabilities required access to log-probabilities, and we opted for LLAMA2-70B as it was a top open-source model providing this functionality. An ablation on the scale of the model used for this step is included in Figure 4 (right). > (Q5) It might be helpful to have a figure or appendix that walks through every step (the paper does this for many steps, just not all in one place) in which an LLM is queried and discusses what assumptions are being made. This is a great suggestion; thank you! We will add a section in the appendix that walks through a complete worked example of executing our pipeline with the Semaglutide vs. Tirzepatide dataset. 
--- Rebuttal 2: Title: Rebuttal by Author (continued) Comment: > I would also suggest the authors discuss the challenges of using RCT estimates to validate observational studies -- there are many reasons why those estimates might not align (selection bias, dropout, etc). Thank you for this suggestion! Since these are important considerations for future work, we will expand Section 5 of the paper to elaborate on the limitations of NATURAL. Thank you for all your suggestions, which we believe will greatly improve our paper! Please let us know if there are any further questions we can address. --- Rebuttal Comment 2.1: Comment: I appreciate the authors’ careful response. The planned additions and clarifications to the paper will strengthen it. > We would like to clarify that the oracle estimates in Table 1 are denoted by “IPW (Structured)” […] oracle estimates to be very close to the ground truth ATE This was clear to me when writing my original review, but from my perspective Table 1 shows “closer” but not “very close” estimates. Given the many limitations of synthetic data (in general and specifically in the comparison between Hillstrom and Retail Hero compared to the domain to which you apply this method), I think it would be very useful to show that the IPW (structured) oracle converges to an arbitrarily low RMSE as sample size increases. That then provides a better comparison for how well the NATURAL methods are empirically performing compared to a method that theoretically should and practically does achieve unbiased estimation. Two follow-up questions that I didn’t catch in my original reading: - What explains the mismatch between the low RMSE in Figure 3 (close to $10^{-2}$ on both synthetic datasets) and the N-FULL RMSEs in Table 1? - Can you clarify the specifics of the ATE (%) versus RMSE metrics in Table 1? How are the standard errors being calculated? 
Are there any insights from the mismatches between the two metrics (e.g., N-MC OI on Hillstrom and N-OI on RetailHero have closer average ATEs to the ground truth but worse RMSE)? > We are also open to further discussion on how to investigate data that is biased in other ways, such as being “truncated”, if the reviewer has concrete suggestions. Perhaps the easiest way to do this would be to have a “truncation model” that’s conditional on covariates (e.g., $\mathbf{P}(\text{Truncation}) = \text{logit}(w_0 + w_c\cdot \text{Channel} + w_z\cdot\text{ZipCode})$ or similar) where if you sample Truncation=1, you truncate the corresponding example’s report to contain only the first $k$ tokens. The real-world analogue I think is likely applicable is that there may be systematic biases in terms of the availability or verbosity of certain groups’ reports. This is a somewhat heavy-handed way to imitate that, but it should be easy to implement. > helped simulate diversity in writing style, tone and report lengths, and hence made the synthetic data more realistic. Thanks for the clarification. I was interpreting this as a claim that the Big Five traits were specifically relevant to these synthetic datasets, rather than this is a general way to introduce diversity in generation. --- Reply to Comment 2.1.1: Comment: Thank you for your comments on our rebuttal! > I think it would be very useful to show that the IPW (structured) oracle converges to an arbitrarily low RMSE as sample size increases. Thanks for clarifying your point further! We can demonstrate the convergence of the IPW (Structured) oracle to ground truth on a larger number of datapoints. Using all 2000 data points of the observational Hillstrom dataset, the oracle gives an ATE estimate of 6.08 (on a scale from -100 to 100), as compared to the ground truth of 6.09. To test and compare to NATURAL methods on even larger samples of data, we would need to generate more synthetic reports. 
> What explains the mismatch between the low RMSE in Figure 3 (close to $10^{-2}$ on both synthetic datasets) and the N-FULL RMSEs in Table 1? Thank you for catching this - the apparent mismatch is due to the scale on which the results (ATE and RMSE) are reported. The tables use a scale from -100 to 100, while the plots use -1 to 1. To maintain consistency, we will update the plots to use the same scale as the tables in the final version of our paper. > Can you clarify the specifics of the ATE (%) versus RMSE metrics in Table 1? How are the standard errors being calculated? Are there any insights from the mismatches between the two metrics (e.g., N-MC OI on Hillstrom and N-OI on RetailHero have closer average ATEs to the ground truth but worse RMSE)? The ATE (%), standard error and RMSE are all calculated over 10 runs, sampling 80% of the data points without replacement in each one. The difference in performance based on mean ATE vs RMSE can be explained by the standard deviation of the estimates. For instance, N-MC OI on Hillstrom estimates average ATE closer to the ground truth, but has higher variance across runs. > Perhaps the easiest way to do this would be to have a “truncation model” that’s conditional on covariates (e.g., $\mathbf{P}(\text{Truncation}) = \text{logit}(w_0 + w_c\cdot \text{Channel} + w_z\cdot\text{ZipCode})$ or similar) where if you sample Truncation=1, you truncate the corresponding example’s report to contain only the first $k$ tokens. The real-world analogue I think is likely applicable is that there may be systematic biases in terms of the availability or verbosity of certain groups’ reports. Thanks for this suggestion! It makes complete sense to try and simulate varying verbosity as a function of individuals’ covariates. Since truncating to the first $k$ tokens for some $k$ will likely lead to incomplete words or sentences at the end of the report, these reports might not be very realistic data. 
Instead, we could use the “Openness” trait from the Big Five and set its probability to be a function of covariates (as in your example with channel and zip code). This could be a realistic way to vary verbosity because we found “Low Openness” to generate short and succinct reports, and vice versa. We are excited to explore these variations in future work!
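The covariate-dependent truncation model proposed in this exchange could be simulated as follows. This is a hypothetical sketch: the integer codes for channel and zip code, the logistic weights, and the token cutoff `k` are all illustrative values, not fitted quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Inverse-logit link for the truncation probability.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical integer-coded covariates for n synthetic reports.
n = 2000
channel = rng.integers(0, 3, size=n)
zip_code = rng.integers(0, 3, size=n)

# Covariate-dependent truncation probability: inverse-logit of a linear
# predictor in the covariates, as suggested in the discussion above.
w0, w_c, w_z = -1.0, 0.8, 0.5
p_trunc = sigmoid(w0 + w_c * channel + w_z * zip_code)
truncated = rng.random(n) < p_trunc

def maybe_truncate(report, is_truncated, k=30):
    """Keep only the first k whitespace tokens of a truncated report."""
    return report if not is_truncated else " ".join(report.split()[:k])
```

The same sampled `truncated` mask could instead drive the "Openness" trait in the report generator, which would vary verbosity with covariates while keeping reports grammatical.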
Rebuttal 1: Rebuttal: We thank the reviewers for their time and the effort they took to provide valuable feedback! Overall, all of the reviewers appreciated the significance and novelty of our work, with oMqj calling it “ambitious work” and with oSFK calling it a “cool” and “properly formalized” methodology. Most reviewers also appreciated the clarity of our presentation. Reviewer feedback has helped us identify a few ways to improve the paper. Based on the reviewers’ suggestions, we would like to include the following edits. We expand on these points in more detailed reviewer feedback in the comments below. 1. **Complete walkthrough of NATURAL for Semaglutide vs. Tirzepatide (oMqj)**: We will add a section working through the entire NATURAL pipeline for the Semaglutide vs. Tirzepatide dataset, in the appendix. This will include a step-by-step explanation of how to implement our method as a function of the experiment design, as well as the strategies we used to minimize error in each step. 2. **Expanded section on limitations (oMqj and oSFK)**: We agree that it is very important to detail the limitations of NATURAL. We stated these in our paper and included an extended discussion of these in the appendix of our original submission, due to space constraints. We will consolidate all discussions addressing limitations in an expanded Section 5 of the final version of our paper. 3. **Sensitivity analysis to address the Strong Ignorability assumption (7S1Z)**: We have run an analysis of the sensitivity of our ATE estimates to the degree of unobserved confoundedness, following the strategy in Lu and Ding (2023). In the attached PDF, Table 1 shows that the direction (sign) of our estimates is robust to large changes in the degree of unobserved confoundedness. Further, Figure 1 shows that each covariate we accounted for is important to our ATE estimation, via a leave-one-covariate-out approach. 
We expand on the interpretation of these analyses in our comments to reviewer 7S1Z below. 4. **Intuitive way to treat inclusion criteria and partially address selection bias (oMqj and 7S1Z)**: Selection effects are important considerations for our work, which includes factors like inclusion criteria and reporting biases. Reporting bias is outside of the scope of our current paper, and we address it in the expanded limitations section (see appendix and above). When it comes to inclusion criteria, in Appendix G of the paper we made certain structural assumptions that allowed us to impute unobserved covariates restricted to the inclusion criteria. During the rebuttal period, we implemented an alternate approach, which is also described in Appendix G in the original paper, that allows us to remove these additional assumptions by estimating an additional probability with the LLM. We show these new results in Table 2 of the attached PDF, which are close to our original results and which we will include alongside the original results in the final draft. We describe this in more detail in our response to reviewer 7S1Z. We look forward to engaging further during the discussion period to clarify anything else. [1] Lu, S., & Ding, P. (2023). Flexible sensitivity analysis for causal inference in observational studies subject to unmeasured confounding. arXiv preprint arXiv:2305.17643. Pdf: /pdf/0c22c2e5590ca5be4e5c9b9319babec9ef17b059.pdf
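The inclusion-criteria reweighting described in point 4 above can be sketched in a few lines: each report's estimated effect is weighted by $\frac{P(X_i \in I \mid R_i)}{P(X \in I)}$, with the marginal approximated by averaging over reports. All numbers below are made up for illustration; in NATURAL the per-report quantities would come from the LLM.

```python
import numpy as np

def inclusion_weighted_ate(y1, y0, p_incl):
    """Weight per-report effect estimates by P(X in I | R) / P(X in I).

    y1, y0 : per-report potential-outcome estimates (hypothetical here)
    p_incl : estimated P(X_i in I | R_i) per report; the marginal
             P(X in I) is approximated by the average over reports
    """
    w = p_incl / p_incl.mean()
    return float(np.mean(w * (y1 - y0)))

# Toy example: reports likely to satisfy the inclusion criteria are
# up-weighted, unlikely ones are down-weighted.
y1 = np.array([0.7, 0.6, 0.8, 0.5])
y0 = np.array([0.4, 0.5, 0.3, 0.4])
p_incl = np.array([0.9, 0.2, 0.8, 0.1])
ate = inclusion_weighted_ate(y1, y0, p_incl)
```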
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Q-VLM: Post-training Quantization for Large Vision-Language Models
Accept (poster)
Summary: The authors present a novel post-training quantization framework for large multimodal language models to enhance inference speed. This method accounts for cross-layer dependencies that significantly impact overall model discretization errors and leverages activation entropy to effectively balance these errors with search costs. Extensive experimental results demonstrate that this framework can reduce memory usage and improve generation speed by 1.44x on the 13B LLaVA model, while maintaining comparable performance across diverse multimodal downstream tasks. Strengths: 1. The paper considers cross-layer dependencies during the quantization process for the first time, which is novel. 2. The proposed quantization method is simple and effective. It significantly reduces memory usage and improves inference speed, which is crucial for MLLM deployment. 3. The paper is well-written and easy to follow. Weaknesses: 1. Is there any quantization for the adapter between the vision encoder and the LLM? With various types of adapters such as MLP for LLaVA, Q-former for BLIP-2 [1], and Visual Abstractor for mPLUG-Owl [2], does the proposed method apply to models with different adapters? 2. Instead of comparing with outdated methods such as Q-LoRA and AWQ, it would be beneficial to compare with more advanced techniques like ZeroQuant-FP [3] and include a more in-depth discussion. [1] Li, Junnan, et al. "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models." International conference on machine learning. PMLR, 2023. [2] Ye, Qinghao, et al. "mplug-owl: Modularization empowers large language models with multimodality." arXiv preprint arXiv:2304.14178 (2023). [3] Wu, Xiaoxia, Zhewei Yao, and Yuxiong He. "Zeroquant-fp: A leap forward in llms post-training w4a8 quantization using floating-point formats." arXiv preprint arXiv:2307.09782 (2023). Technical Quality: 3 Clarity: 4 Questions for Authors: 1. 
Does this quantization method only work for LLaVA-like architectures? How does it apply to cross-attention based VLMs such as Flamingo [1] and Otter [2]? [1] Alayrac, Jean-Baptiste, et al. "Flamingo: a visual language model for few-shot learning." Advances in neural information processing systems 35 (2022): 23716-23736. [2] Li, Bo, et al. "Mimic-it: Multi-modal in-context instruction tuning." arXiv preprint arXiv:2306.05425 (2023). Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitation in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your suggestion and agree that comparisons with more advanced techniques and cross-attention based VLM architectures would provide deeper insight into the effectiveness of our method. Below are our detailed responses. **Q1: Ablation study about the projectors.** **[Reply]** We did not quantize the projector between the vision encoder and the LLM, as it contains only 20M parameters, which is negligible compared to the 7B parameters of the entire model. Additionally, keeping the projector in FP increases inference time by only **0.06h (6.13h vs. 6.07h)** compared with a 4-bit projector. We conducted experiments on the OpenFlamingo architecture, an encoder-decoder design that relies on the model's inherent structure for multimodal feature alignment, and our Q-VLM still achieves outstanding performance compared with baseline methods. The results, based on the tables in Overall Author Rebuttal Q1, are as follows:

|Model|Dataset|FP|8bit ZeroQuant-V2|8bit Q-VLM|4bit ZeroQuant-V2|4bit Q-VLM|
|---|---|---|---|---|---|---|
|LLaVA-7B|ScienceQA|89.81|89.04|89.58|78.08|**79.79**|
|LLaVA-13B|ScienceQA|90.00|89.13|89.81|78.81|**80.78**|

|Model|Dataset|FP|8bit Q-LoRA|8bit Q-VLM|4bit Q-LoRA|4bit Q-VLM|
|---|---|---|---|---|---|---|
|OpenFlamingo-3B|Vizwiz|39.76|36.38|37.60|31.64|**35.52**|
||Hateful Memes|50.27|50.02|51.05|45.76|**47.84**|

We conclude that our Q-VLM, designed for LLaVA-like architectures, can be effectively adapted to other transformer-based VLM architectures with different projectors, because the vision-language alignment information is effectively preserved in the projectors. **Q2: Performance on more baseline methods.** **[Reply]** In response to **Overall Author Rebuttal Q1**, we have conducted experiments and included a more in-depth discussion comparing the more advanced ZeroQuant-V2 with our proposed Q-VLM.
ZeroQuant-V2 fails to find the optimal rounding strategy in the presence of severe outliers under low bitwidths, leading to significant quantization loss. On the contrary, our Q-VLM leverages Information Entropy as a proxy and mines the cross-layer dependency to achieve optimal block partitioning. As a result, our method outperforms ZeroQuant-V2 by **1.71 (79.79 vs. 78.08)** in answering accuracy on the ScienceQA dataset under 4-bit for the LLaVA-7B model. Additionally, our method enhances inference speed, exceeding ZeroQuant-V2 by 0.06h (6.13h vs. 6.07h) thanks to stored rounding parameters instead of dynamic per-token quantization. The additional baseline provides a more comprehensive evaluation framework to highlight the strengths of our approach. **Q3: Performance on other multi-modal architectures.** **[Reply]** In response to **Overall Author Rebuttal Q1**, we additionally applied our Q-VLM to another multi-modal architecture, OpenFlamingo. Q-VLM, designed for LLaVA-like architectures, can be effectively adapted to cross-attention based VLMs due to the consistent core mechanism of cross-attention and the robust multimodal alignment capabilities pre-trained on large-scale vision-language pairs. Since OpenFlamingo is a cross-attention based VLM, exploiting cross-layer dependency is particularly suitable. Our method outperforms Q-LoRA by **2.08 (47.84 vs. 45.76)** under 4-bit on the OpenFlamingo-3B model. Q-VLM achieves high accuracy on different cross-attention based multi-modal architectures, which shows that our method maintains effectiveness and generalizability. --- Rebuttal 2: Comment: Hi. Thank you for writing the rebuttal! I confirm I have read the rebuttal, which addresses most of my concerns. So I would keep my score.
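As an aside on the speed argument in the rebuttal above (stored rounding parameters vs. ZeroQuant-style dynamic per-token quantization), the distinction can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation; all function names and the symmetric quantizer are assumptions:

```python
import numpy as np

def fake_quantize(x, scale, bits=4):
    # Symmetric uniform quantizer: round to the integer grid, clip, dequantize.
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def dynamic_per_token(x, bits=4):
    # Dynamic scheme: one scale per token row, recomputed from the
    # activations at every forward pass (extra max-reduction at inference).
    scales = np.abs(x).max(axis=-1, keepdims=True) / (2 ** (bits - 1) - 1)
    return fake_quantize(x, scales, bits)

def static_stored_scale(x, stored_scale, bits=4):
    # Static scheme: the rounding parameters are calibrated offline once
    # and simply reused, so inference skips the per-token statistics.
    return fake_quantize(x, stored_scale, bits)
```

The speed gap the rebuttal cites comes precisely from skipping the per-token statistics pass at inference time.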
Summary: This paper proposes Q-VLM, a post-training quantization framework for Large Vision-Language Models (LVLMs). It aims to reduce the model complexity of LVLMs for practical deployment by replacing float numbers with quantized ones and substituting multiply-accumulate operations with integer arithmetic. The key innovation lies in mining cross-layer dependency to efficiently search for optimal rounding functions that minimize quantization noise across the entire model. The authors also optimize the visual encoder to further reduce search costs and maintain quantization accuracy. Experimental results on LLaVA and MoE-LLaVA models demonstrate significant memory compression and speed increases without severe performance degradation. Strengths: 1. **Novel Approach to Quantization**: The paper introduces a novel approach to post-training quantization that considers cross-layer dependencies, which is a significant departure from traditional layer-wise methods. This approach has the potential to improve the efficiency and accuracy of quantized LVLMs. 2. **Clear Presentation**: The paper is well-organized and clearly written, making it easy to follow the authors' thought process and understand their contributions. Weaknesses: 1. **Limited Evaluation on Diverse Datasets**: The majority of experiments are conducted on the ScienceQA dataset, which may not fully represent the diverse range of tasks and challenges that LVLMs encounter in real-world applications. Evaluating the method on a wider range of datasets would provide a more comprehensive assessment of its effectiveness and generalizability. 2. **Marginal Performance Improvement**: The observed improvements in some cases are relatively small, potentially due to the specific characteristics of the ScienceQA task. It would be beneficial to investigate whether the method's impact is consistent across different tasks and datasets, or if its effectiveness is limited to specific scenarios. 
Some typo: In the abstract and introduction, the phrase "compresses the memory by 2.78x and increase the generate speed by 1.44x **about** 13B LLaVA model" may be unclear. It could be revised to "compresses the memory by 2.78x and increases the generation speed by 1.44x **for** the 13B LLaVA model" for improved clarity. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors provide more experiments on other tasks/datasets to demonstrate the generalizability of their method beyond ScienceQA? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Not found. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the careful reading and valuable comments. We will check the paper carefully and revise the ambiguous parts in the final version. We provide answers to the questions as follows: **Q1: Evaluation on Diverse Datasets.** **[Reply]** LVLMs encounter a diverse range of tasks and challenges in real-world applications. We evaluate our proposed method on various datasets with different architectures to verify its effectiveness and generalizability. The experiments are presented in Table 3 and as follows:

|Model|Dataset|FP|6bit AWQ|6bit Q-LoRA|6bit Q-VLM|4bit AWQ|4bit Q-LoRA|4bit Q-VLM|
|---|---|---|---|---|---|---|---|---|
|LLaVA-7B|MM-Vet [1]|31.40|30.79|31.03|**31.59**|28.17|28.42|**29.28**|
||ScienceQA|66.79|65.87|66.16|**66.67**|56.73|56.50|**57.70**|
||VizWiz|49.87|48.51|49.23|**49.86**|44.29|44.73|**45.82**|
||VQA v2|78.50|77.51|77.58|**78.52**|71.89|72.02|**72.51**|
|LLaVA-13B|MM-Vet [1]|36.07|34.78|34.67|**35.83**|30.16|30.71|**31.64**|
||ScienceQA|71.61|71.37|71.52|**72.27**|68.13|68.04|**68.84**|
||VizWiz|53.63|52.35|52.85|**53.69**|47.55|47.97|**49.28**|
||VQA v2|79.94|78.83|79.25|**79.65**|71.55|72.19|**73.02**|

ScienceQA challenges LVLMs with domain-specific question answering requiring a deep understanding of scientific concepts and visual data interpretation. VQA v2 and MM-Vet [1] task LVLMs with accurately answering open-ended questions about diverse images, testing their ability to integrate visual and linguistic information. VizWiz requires LVLMs to interpret and answer questions about everyday images captured by visually impaired users, often dealing with low-quality and diverse real-world content. Our method outperforms Q-LoRA by **1.31 (49.28 vs. 47.97)** under 4-bit on the LLaVA-v1.5-13B model on the VizWiz dataset, which shows the superiority of mining cross-layer dependency to effectively reduce quantization errors and enhance generalizability for low-quality and diverse real-world content.
Q-VLM achieves the highest accuracy among various post-training quantization methods for LVLMs across different VQA datasets, indicating that our method can be robustly deployed on diverse downstream tasks. **Q2: Evaluation on Different Tasks.** **[Reply]** Our Q-VLM achieves the highest accuracy across different visual question answering tasks including ScienceQA, VizWiz, VQA v2 and MM-Vet, where it outperforms Q-LoRA by 0.93 (31.64 vs. 30.71) under 4-bit on the LLaVA-v1.5-13B model on the MM-Vet dataset. Additionally, in the **Overall Author Rebuttal Q1** we conducted experiments on another multi-modal architecture, OpenFlamingo, with the Hateful Memes dataset as a classification task to further demonstrate the effectiveness and generalizability of our proposed method. Our method outperforms Q-LoRA by 2.08 (47.84 vs. 45.76) and 1.03 (51.05 vs. 50.02) on the OpenFlamingo-3B model under 4-bit and 6-bit respectively. The Hateful Memes dataset is a collection of multimodal data containing images paired with text captions, specifically designed for research on detecting hate speech in memes. Our Q-VLM excels in the classification task on the Hateful Memes dataset by effectively combining vision-language information to accurately identify hate speech in multimodal memes. **Q3: Correcting Some Typos.** **[Reply]** Thank you for your valuable comments. We have thoroughly revised the paper, carefully improving the writing and correcting misleading wording and grammatical errors. We hope these revisions make the writing more professional and easier to understand. [1] Yu, Weihao, et al. "Mm-vet: Evaluating large multimodal models for integrated capabilities." arXiv preprint arXiv:2308.02490 (2023). --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Prior to the rebuttal, my primary concern was the universal effectiveness of the proposed Q-VLM methodology.
I am pleased to see the additional experimental results across various tasks, datasets, and models, all of which support the superior performance of Q-VLM. As such, I am raising my score from 4 to 6. --- Reply to Comment 1.1.1: Comment: Thank you so much for your valuable feedback on our paper. We greatly appreciate your insights and will incorporate your suggestions to enhance the quality of our work.
Summary: The authors propose a new post-training quantization method for LVLMs. The authors separate several layers in a LVLM into blocks and search for the optimal quantization bitwidth for each block individually. The authors also introduced a new objective function for quantizing the vision encoder. The extensive experimental results show the proposed methods outperform the SOTA post-training quantization methods on LLaVA models. Strengths: - The authors provide a relatively comprehensive related work discussion in the paper. - The paper is well-motivated. Post-training quantization on LLM / VLM is critical for deploying these models due to their extensive computation consumption during training. - The authors provide a large amount of evaluations and ablations in the experimental section to help other researchers better understand their model performance. Weaknesses: - The overall methods part is difficult to follow. It is not clear if the authors adopt different quantization strategies in visual encoders and language model and the projects in LLaVA. Also, it is not clear to me why the authors design those specific methods for LVLM, i.e., why the proposed methods are particularly suitable for LVLM compared to LLM, CNN, etc. It would be better that the authors can discuss more details about their insight or motivation for those designs. - The main advantages of post-training quantization compared to quantization-aware training is that it does not need the entire training data and requires less computation. Therefore, it would be better that the authors can provide some ablations in terms of the amount of training data used during the post-training quantization and the number of computation needed during quantization. Otherwise, it is difficult to position this paper w.r.t. other relevant work. - The authors claimed the prior quantization methods are suboptimal. However, adopting entropy as proxy and separating layers into blocks are also sub-optimal solutions. 
Then, there exists another question to be answered, why the sub-optimal solutions proposed in the paper are better than the others? Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the careful reading and valuable comments. We address the questions and clarify the issues as described below. **Q1: Confusion about the quantization strategies.** **[Reply]** We apologize for the confusion. The detailed construction of the baseline quantization methods is described in Section 4.3: we leverage RPTQ and Outlier Suppression for activation quantization in single-modality language and vision models. For our method, the language model employs the cross-layer dependency mining method exclusively, while the vision model not only utilizes cross-layer dependency mining but also incorporates additional optimization of the vision encoder through vision-language synergistic optimization. Our vision encoder optimization method utilizes Jacobian matrices to assign different importance weights to different layers for vision-language synergistic optimization. We did not quantize the projector, as it contains only 20M parameters, which is negligible compared to the 7B parameters of the entire model. Additionally, keeping the projector in FP increases inference time by only 0.06h (6.13h vs. 6.07h) compared with a 4-bit projector. **Q2: Explain why the proposed methods are particularly suitable for LVLMs.** **[Reply]** As discussed in the **Overall Author Rebuttal Q2**, our visual encoder optimization method is designed specifically for LVLMs due to their multi-modal data distribution compared with LLMs. CNNs primarily rely on the layer-by-layer transmission of local features within the receptive field and on pooling operations that gradually lose detailed information. This makes it challenging for them to achieve the global information capture and direct associations provided by the self-attention mechanism of LVLMs. **Q3: Ablation study on the amount of calibration data.** **[Reply]** Thanks for your advice.
We have conducted ablation studies to evaluate the effect of varying amounts of calibration images used during post-training quantization on both accuracy and calibration cost. Leveraging more calibration images can reduce overfitting of the rounding function, but may result in selecting data with extreme distributions and increase the calibration time overhead due to the large number of learnable parameters. The results are presented as follows:

|Model|Calibration Images|Accuracy|Calibration Cost|
|---|---|---|---|
|7B|16|78.85|2.1|
||64|**79.79**|**8.8**|
||256|79.61|34.9|
|13B|16|79.06|3.4|
||64|**80.78**|**14.2**|
||256|80.96|56.4|

Observing the accuracy and calibration cost for different amounts of calibration images, we find that a medium number of calibration images achieves the optimal performance. Too few calibration images risk overfitting, while too many select data with extreme distributions and yield only marginal improvements in accuracy at substantially increased calibration cost due to the large number of learnable parameters. We suggest that utilizing 64 calibration images achieves the optimal trade-off between quantization accuracy and rounding-function generalizability. **Q4: Confusion about mining the cross-layer dependency.** **[Reply]** We describe in detail the significance of leveraging entropy as a proxy, which identifies sensitive layers and facilitates optimal layer allocation through efficient cross-layer dependency correlation, in **Rebuttal for TnuV Q2**. In data-limited PTQ, it is impractical to minimize discrepancies by reconstructing the network's final output to obtain the optimal solution for large-scale second-order errors. Directly searching for the optimal rounding function is NP-hard because the search space increases exponentially with the number of layers.
Compared with conventional layer-wise quantization, which minimizes quantization errors for each layer in a greedy way to tackle that NP-hard problem, our method considers cross-layer dependency and employs entropy as a proxy for block-wise quantization to achieve a satisfying trade-off between discretization errors and search cost. Our cross-layer dependency mining method outperforms BRECQ by **0.56 (78.66 vs. 78.10) and 1.04 (79.89 vs. 78.85)** in the 7B and 13B models respectively. Consequently, our method achieves optimal block partitioning and effectively utilizes cross-layer dependency. --- Rebuttal 2: Comment: Dear Reviewer, Thanks for reviewing this paper! The authors have provided a rebuttal to address your concerns. Could you have a look and let the authors know if you have further questions? Thanks, Your AC
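The contrast the rebuttal draws between greedy layer-wise rounding and a joint block-wise search can be illustrated with a toy NumPy sketch. The linear+ReLU block, the uniform weight quantizer, and the candidate rounding offsets are assumptions for illustration only, not the paper's actual search space:

```python
import numpy as np

def fp_block(x, weights):
    # Toy full-precision block: a stack of linear + ReLU layers.
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

def quantize_weight(w, bits, rounding):
    # Uniform symmetric weight quantizer; `rounding` in [0, 1) shifts the
    # rounding threshold (0.5 corresponds to round-to-nearest).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.floor(w / scale + rounding), -qmax - 1, qmax)
    return q * scale

def search_block_rounding(x, weights, bits=4, candidates=(0.25, 0.5, 0.75)):
    # Evaluate each rounding candidate on the *block output* (joint search
    # over all layers in the block), rather than greedily minimizing each
    # layer's own reconstruction error in isolation.
    target = fp_block(x, weights)
    best_r, best_err = None, np.inf
    for r in candidates:
        q_weights = [quantize_weight(w, bits, r) for w in weights]
        err = float(np.mean((fp_block(x, q_weights) - target) ** 2))
        if err < best_err:
            best_r, best_err = r, err
    return best_r, best_err
```

The search objective here is the block output error, so a rounding choice that is locally suboptimal for one layer can still win if it compensates errors downstream, which is the cross-layer dependency the rebuttal argues greedy methods miss.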
Summary: The paper introduces Q-VLM, a PTQ framework for VLMs that leverages entropy as a proxy to manage cross-layer dependencies for both language model and visual encoder. Experimental results on ScienceQA, VizWiz, VQAv2 datasets and LLaVA variant architectures validate the efficacy of the proposed method. Strengths: * Q-VLM achieves good performance with W6A6 on ScienceQA, VizWiz, VQAv2 compared to the FP counterparts. * Ablation studies in Table 1 and Section 4.2 are helpful for understanding the importance of each component of Q-VLM, such as cross-layer dependency mining and visual encoder optimization. * Experiments across LLaVa model sizes and bit-width demonstrates the robustness of the proposed method. Weaknesses: There are several concerns: * The benefit and motivation for using entropy as a proxy are unclear. Why not directly use the quantization error as a metric for performing block-wise searches to mine cross-layer dependency? Are there any accuracy or inference efficiency gains by using entropy as a proxy? * The idea of mining cross-layer dependency is not new. Existing works, such as BRECQ, already address block-wise quantization. * Important baseline methods are missing in the comparison results. For a more solid comparison, include other state-of-the-art PTQ methods for the vision branch and more recent works such as SmoothQuant and ZeroQuant variants for the language branch. Comparing with the previous best method on the language model and another prior method on the vision model would provide a clearer picture of the overall improvement. * The proposed method appears to be a general approach for both language and vision models. It would be interesting to demonstrate its individual effect on either the language or vision model. 
Technical Quality: 2 Clarity: 3 Questions for Authors: Suggest to clarify the concerns in the weakness section Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate the opportunity to clarify the points you raised regarding our methodology and its contributions. **Q1: The benefit and motivation for using entropy as a proxy are unclear.** **[Reply]** We are sorry for the confusion. For block-wise quantization, we partition the entire model into blocks and search for the optimal rounding function by considering the output quantization errors of each block, which achieves a better trade-off between search cost and quantization accuracy. Our goal is to obtain the **optimal block partition** for the rounding-function search, where we leverage **Information Entropy** to identify sensitive layers with homogeneous distributions. For sensitive layers, noise in the former layer caused by deviation from the globally optimal value usually leads to higher deviation in the current layer, so jointly searching these layers decreases the block output quantization errors. As a result, the discretization error difference (DED) between layer-wise search and joint search is pronounced for sensitive layers with high entropy. The benefits and motivation for using entropy rather than quantization error as a proxy for block-wise search lie in several key considerations. First, larger entropy indicates a more homogeneous data distribution, which is a well-established principle in information theory. Consequently, DED and activation entropy are strongly correlated, with an $R^2$ value of **0.97**. However, greater quantization error does not necessarily imply a more homogeneous data distribution and does not show a positive correlation with DED, having an $R^2$ value of **0.81**, as empirically verified in the figure presented in the Author Rebuttal.
Meanwhile, the search cost when using quantization errors doubles compared with entropy as a proxy, since computing quantization errors requires multiple forward passes through both the FP model and the quantized model. The weak correlation and the prohibitive search cost render quantization error unsuitable as a metric for measuring cross-layer dependency. Furthermore, we conducted experiments comparing the effectiveness of quantization error and entropy as proxies across different models under various bitwidths. Entropy outperformed quantization errors by a significant margin **(78.66 vs. 77.97)**, showing a strong cross-layer dependency within each block. This allowed us to achieve optimal block partitioning by mining the cross-layer dependency.

|Model|6bit Method|6bit Accuracy|6bit Search Cost|4bit Method|4bit Accuracy|4bit Search Cost|
|---|---|---|---|---|---|---|
|7B|Errors|88.59|43.7|Errors|77.97|41.2|
||Entropy|88.95|26.9|Entropy|**78.66**|**25.9**|
|13B|Errors|88.96|58.6|Errors|78.82|54.2|
||Entropy|89.04|33.1|Entropy|**79.89**|**31.8**|

**Q2: The idea of mining cross-layer dependency is not new.** **[Reply]** While existing works such as BRECQ incorporate Fisher information and jointly optimize two layers within each residual block, they do not capture interactions across neighboring residual layers. Additionally, they do not provide an optimal configuration of the reconstruction granularity; their choice of block-wise optimization stems solely from experimental results. Consequently, these methods use fixed block partitions for the rounding-function search, leading to suboptimal performance in large models. We conducted experiments on LLaVA models with 7B and 13B parameters under 4-bit on the ScienceQA dataset. Our cross-layer dependency mining method outperforms BRECQ by **0.56 (78.66 vs. 78.10) and 1.04 (79.89 vs. 78.85)** in the 7B and 13B models respectively.
Conventional quantization methods [1-3] for LLMs indicate that as model scale increases, systematic outliers with large magnitudes emerge in activations, leading to significant quantization errors and accuracy degradation. Therefore, BRECQ's fine-grained block partitioning cannot effectively address outliers in LVLMs under low bitwidths. On the contrary, our cross-layer dependency mining method, which leverages Information Entropy, achieves optimal block partitioning and fully utilizes the cross-layer dependency. **Q3: Performance on more baseline methods.** **[Reply]** In response to **Overall Author Rebuttal Q1**, as well as Q2 above, we have conducted experiments with additional state-of-the-art PTQ methods including BRECQ and QLoRA for the vision branch and more recent methods such as ZeroQuant-V2 for the language branch. We further performed experiments combining ZeroQuant-V2 and BRECQ as a baseline. Our Q-VLM method outperforms this baseline by 1.71 (79.79 vs. 78.08) in answering accuracy on the ScienceQA dataset under 4-bit for the LLaVA-7B model. Both ZeroQuant-V2 and BRECQ are insufficient for handling the severe outliers in LVLMs under low bitwidths, which significantly degrades performance. In contrast, our Q-VLM method effectively mines cross-layer dependency by leveraging Information Entropy and utilizes Jacobian matrices to assign different importance weights to different layers for vision-language synergistic optimization. **Q4: Why the proposed methods are particularly suitable for LVLMs.** **[Reply]** Following the **Overall Author Rebuttal Q2** and the ablation study in Table 1, we conducted experiments to demonstrate the individual effect of the cross-layer dependency mining method on both the LLaMA language model and the vision model. Deploying the cross-layer dependency mining method alone yields limited performance improvement.
Combined with the visual encoder optimization method, which provides vision-language synergistic optimization, we conclude that our Q-VLM is particularly suitable for LVLMs. [1] Gpt3. int8 (): 8-bit matrix multiplication for transformers at scale. [2] Smoothquant: Accurate and efficient post-training quantization for large language models. [3] Outlier suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling. --- Rebuttal 2: Comment: Dear Reviewer, Thanks for reviewing this paper! The authors have provided a rebuttal to address your concerns. Could you have a look and let the authors know if you have further questions? Thanks, Your AC --- Rebuttal Comment 2.1: Comment: Thank you to the authors for the detailed response. Most of my concerns have been well-addressed, so I am raising my rating to 5: Borderline Accept. However, I still believe it would be valuable to provide a more solid comparison by benchmarking the proposed method against a baseline that includes the best prior method for the language branch and another existing top approach for the vision branch. The baseline method does not have to be the same for both the language and vision parts. --- Reply to Comment 2.1.1: Comment: We are truly grateful for your valuable feedback on our paper. We provide a more solid comparison by benchmarking the proposed method against a baseline that combines ZeroQuant-V2 for the language branch and BRECQ for the vision branch. Our Q-VLM method outperforms this baseline by 1.71 (79.79 vs. 78.08) on the ScienceQA dataset under 4-bit for the LLaVA-7B model. Both ZeroQuant-V2 and BRECQ fail to handle the severe outliers in LVLMs under low bitwidths, which significantly degrades performance. In contrast, our Q-VLM method effectively mines cross-layer dependency and achieves vision-language synergistic optimization. Due to time constraints, we plan to implement the SOTA DopQ-ViT [1] instead of BRECQ for the vision branch in future work.
[1] Yang, Lianwei, and Haisong Gong. "DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training Quantization for Vision Transformers." arXiv preprint arXiv:2408.03291 (2024).
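The entropy-as-proxy block partitioning discussed in the thread above admits a minimal sketch: estimate each layer's activation entropy, then jointly group consecutive high-entropy (sensitive) layers into one search block. The histogram estimator and the threshold rule are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def activation_entropy(acts, num_bins=256):
    # Shannon entropy of an activation tensor's empirical distribution;
    # larger entropy ~ more homogeneous distribution (per the rebuttal).
    hist, _ = np.histogram(acts, bins=num_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def partition_blocks(layer_entropies, threshold):
    # Merge consecutive layers whose entropy exceeds `threshold` into one
    # block, so their rounding functions are searched jointly; low-entropy
    # layers stay in singleton blocks and are searched layer-wise.
    blocks, current = [], [0]
    for i in range(1, len(layer_entropies)):
        if layer_entropies[i] > threshold and layer_entropies[i - 1] > threshold:
            current.append(i)  # sensitive neighbors are grouped together
        else:
            blocks.append(current)
            current = [i]
    blocks.append(current)
    return blocks
```

For example, with entropies `[1.0, 5.0, 5.5, 1.2, 6.0]` and threshold `4.0`, layers 1 and 2 are merged into one joint-search block while the rest remain singletons.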
Rebuttal 1: Rebuttal: We appreciate the valuable feedback and insightful questions provided by the reviewers. Below are our detailed responses to two common concerns raised by multiple reviewers. Detailed responses to other specific comments are provided under each individual reviewer's comments. **Q1: Performance on more baseline methods and different multi-modal architectures.** **[Reply]** Thank you for the suggestions, which provide deeper insight into the effectiveness and generalizability of our method. We have extended our experiments to an additional baseline method, ZeroQuant-V2 [1], and compared it against our proposed method. ZeroQuant-V2 leverages per-token quantization with different rounding functions to minimize activation discretization errors. However, by ignoring the cross-layer dependency of discretization errors, it fails to find the optimal rounding strategy in the presence of severe outliers under low bitwidths, and its performance degrades significantly. On the contrary, our Q-VLM mines the cross-layer dependency of the output distribution across layers, minimizing block-wise discretization errors to avoid suboptimal quantization. We further optimize the visual encoder to disentangle the cross-layer dependency for fine-grained search space decomposition. As a result, our method outperforms ZeroQuant-V2 by **1.71 (79.79 vs. 78.08)** in answering accuracy on the ScienceQA dataset under 4-bit for the LLaVA-7B model. Additionally, our method enhances inference speed, exceeding ZeroQuant-V2 by **1.2h (6.1h vs. 7.3h)** thanks to stored rounding parameters instead of dynamic per-token quantization.
The results are presented as follows:

|Model|8bit Method|8bit Accuracy|4bit Method|4bit Accuracy|
|---|---|---|---|---|
|7B|ZeroQuant-V2 [1]|89.04|ZeroQuant-V2 [1]|78.08|
||Q-VLM|**89.58**|Q-VLM|**79.79**|
|13B|ZeroQuant-V2 [1]|89.13|ZeroQuant-V2 [1]|78.81|
||Q-VLM|**89.81**|Q-VLM|**80.78**|

The additional baseline provides a more comprehensive evaluation framework to highlight the strengths of our approach. We also explored the multi-modal architecture OpenFlamingo [2] to verify the robustness and generalizability of our method. We deployed our method on the OpenFlamingo-3B model using the VizWiz and Hateful Memes [3] datasets, selecting bitwidths of 4 and 8 for the quantized layers. The experiments are presented as follows:

|Dataset|Shots|FP|8bit Q-LoRA|8bit Q-VLM|4bit Q-LoRA|4bit Q-VLM|
|---|---|---|---|---|---|---|
|Vizwiz|0|23.79|21.24|21.47|17.62|18.69|
||4|27.05|25.83|26.59|24.17|24.55|
||32|39.76|36.38|**37.60**|31.64|**35.52**|
|Hateful Memes [3]|0|50.23|47.75|49.12|43.86|44.22|
||4|50.10|48.62|49.55|45.12|45.26|
||32|50.27|50.02|**51.05**|45.76|**47.84**|

Q-VLM, designed for LLaVA-like architectures, can be effectively adapted to cross-attention based VLMs due to the consistent core mechanism of cross-attention and the robust multimodal alignment capabilities pre-trained on large-scale vision-language pairs. Since OpenFlamingo is a cross-attention based VLM, exploiting cross-layer dependency is particularly suitable. Our method outperforms Q-LoRA by **1.22 (37.60 vs. 36.38)** under 8-bit on the OpenFlamingo-3B model. The advantage of our method becomes more obvious for the 4-bit 3B LVLM because quantization errors and cross-layer dependency play a more significant role in networks with low capacity. These results underscore the robustness and generalizability of our approach across different tasks, model architectures and datasets, demonstrating its effectiveness in diverse scenarios. [1] Yao, Zhewei, et al.
"Zeroquant-v2: Exploring post-training quantization in llms from comprehensive study to low rank compensation." arXiv preprint arXiv:2303.08302 (2023). [2] Awadalla, Anas, et al. "Openflamingo: An open-source framework for training large autoregressive vision-language models." arXiv preprint arXiv:2308.01390 (2023). [3] Kiela, Douwe, et al. "The hateful memes challenge: Detecting hate speech in multimodal memes." Advances in neural information processing systems 33 (2020): 2611-2624. **Q2: Why are the proposed methods particularly suitable for LVLM?** **[Reply]** We appreciate the reviewer's interest in understanding the suitability of our proposed methods for LVLMs. Our cross-layer dependency mining method is also suitable for single-modality models. Specifically, utilizing entropy as a proxy to assess layer sensitivity and achieve optimal block partitioning is not exclusively beneficial for LVLMs. For the single language modality, we deployed our CDM on the LLaMA-7B [1] model and evaluated it on the PIQA [2] dataset. Our method achieved an improvement of **0.51 (72.23 vs. 71.82)** over QLoRA at 4-bit precision. For the single vision modality, we incorporated our CDM into the LLaVA-7B model with only CLIP quantized and observed an enhancement of about 0.23 (84.15 vs. 83.92) in answering accuracy on the ScienceQA dataset under 4-bit precision. However, the ablation studies in Table 1 indicate a limited performance improvement of 1.13 (78.66 vs. 77.53) when solely using cross-layer dependency mining without the visual encoder optimization method. The proposed visual encoder optimization method is unsuitable for single-modality models, since it relies on vision-language synergistic optimization to obtain fine-grained blocks that trade off search cost against cross-layer dependency. Significant performance gains of **2.26 (79.79 vs. 77.53)** are achieved by jointly employing cross-layer dependency mining and the visual encoder optimization method.
Our approach achieves vision-language synergistic optimization, resulting in superior performance in LVLM tasks. [1] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023). [2] Bisk, Yonatan, et al. "Piqa: Reasoning about physical commonsense in natural language." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 05. 2020. Pdf: /pdf/3c4cdf45d41c082e0deff041412589bce83541e4.pdf
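As background for the 4-bit vs. 8-bit comparisons discussed above, the following is a minimal sketch of symmetric uniform post-training quantization. This is a generic illustration only, not the Q-VLM or ZeroQuant-V2 quantizer; the function name and tensor shapes are hypothetical.

```python
import numpy as np

def quantize_dequantize(w, n_bits):
    # Symmetric uniform quantization: map w onto a signed n_bits integer grid,
    # then map back, returning the dequantized tensor and the mean abs error.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    w_hat = q * scale
    return w_hat, np.abs(w - w_hat).mean()

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)
_, err8 = quantize_dequantize(w, 8)
_, err4 = quantize_dequantize(w, 4)
# Fewer bits means a coarser grid and a larger quantization error,
# which is why 4-bit settings stress the model more than 8-bit ones.
assert err4 > err8
```

The general point this illustrates is that per-layer error grows quickly as the bitwidth drops, so how errors propagate across layers matters more at 4 bits than at 8 bits.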
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
KptLLM: Unveiling the Power of Large Language Model for Keypoint Comprehension
Accept (poster)
Summary: The paper introduces and studies the problem of Semantic Keypoint Comprehension to evaluate the capability of Multimodal Large Language Models (MLLMs) in tackling fine-grained perception and comprehension. It does so by defining three related tasks: (1) keypoint semantic understanding, (2) visual prompt-based keypoint detection, and (3) textual prompt-based keypoint detection. The first task targets the semantic interpretation capabilities of objects and keypoints, while the other two tasks involve position detection and semantic comprehension from visual or textual descriptions. In the context of this paper, keypoint detection (a.k.a. object pose estimation) is the task of localizing the semantic keypoints of objects. The authors further present KptLLM, an MLLM for Semantic Keypoint Comprehension based on LLaVA. The model employs an identify-then-detect strategy, where the model first captures the required semantics and then detects the precise keypoint position. The model employs attention to fuse features from image and keypoint-encoded prompts. The fused encoding and the encoding of visual inputs are used as additional latent contexts for an LLM, which is fine-tuned with LoRA to generate the textual response. The keypoint position is further regressed with an MLP head. The authors evaluate their proposed model on the MP-100 (diverse and category-agnostic) dataset and the AP-10K dataset and compare its performance to state-of-the-art methods for keypoint detection and to pretrained and finetuned LLaVA models.
Strengths: - The paper introduces new tasks for studying the semantic comprehension of MLLMs at a fine-grained level - The paper presents an MLLM-based method for tackling the introduced tasks, which outperforms current keypoint detection methods across datasets and tasks by notable margins - The proposed method also outperforms state-of-the-art MLLMs (pretrained and finetuned) on the target tasks - The authors ablate their proposed detection strategy and other architectural choices. Weaknesses: - The properties (and potential limitations) of the MLLM-based approaches in terms of runtime, memory, and scalability to multiple keypoints, compared to existing state-of-the-art methods, are not discussed - Lack of details on the methods chosen for cross-comparison and their key properties/assumptions makes it harder to evaluate the contribution - While the paper is overall well structured, the evaluation protocol and details are not always easy to follow Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you please provide more details on runtime, memory, and scalability? - Is the visual encoder trained/fine-tuned or frozen? The textual description mentions a pretrained frozen encoder, while the image shows it to be fine-tuned Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: It is worth elaborating on the memory, runtime, and scalability aspects of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1: The runtime, memory, and scalability of our model During inference, our model processes multiple keypoints by stacking them into a batch. For the standard 17 keypoints, our model obtains their positions in 3.45 seconds, consuming 25,100 MiB of memory. It is important to note that in our implementation, we did not utilize LLM-related acceleration and optimization techniques, which could potentially enhance the model's efficiency. ### Q2: Details of comparable methods and evaluation protocol **Details of comparable methods:** For visual prompt-based keypoint detection, all the methods take the visual prompt and query image as the inputs, and output the corresponding keypoints. These methods rely only on matching support keypoint features with query image features. This matching is semantic-agnostic, which can fail when there is insufficient similarity between the support and query instances, especially when they differ significantly in poses, textures, or styles. Additionally, these methods struggle when dealing with similar keypoints that exhibit symmetrical appearances, such as the left eye and right eye. For textual prompt-based keypoint detection, all the methods take the textual prompt and test image as the inputs, and output the corresponding keypoints. **Details of evaluation protocol:** For **keypoint semantic understanding** and **visual prompt-based keypoint detection**, we employ the standard MP-100 benchmark for evaluation. This benchmark ensures that the species categories used in the training and testing phases are non-overlapping, meaning the test objects are entirely unseen during training. Following the standard evaluation protocol used by comparable methods, we sample 3,000 random pairs (one as the support image and another as the query image) for each novel category during testing. With 20 test categories for each split, we construct a total of 60,000 pairs for evaluation.
For **textual prompt-based keypoint detection**, we follow CLAMP to evaluate the models’ generalization ability on unseen animal species in the zero-shot learning paradigm. Two experimental settings are defined based on whether the animal species in the training set and test set belong to the same animal order or not. Species belonging to the same order tend to have similar appearances, while species from different orders exhibit more diverse appearances. ### Q3: About Visual Encoder We have finetuned the visual encoder, as the original pre-trained CLIP image encoder cannot adequately capture the pixel-level fine-grained visual information, which is essential for the keypoint detection tasks. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. Overall, the soundness and presentation of the presented work are good: the paper describes an interesting problem and a method for tackling it while demonstrating superior results and performing ablations, so I keep my original rating for these criteria. However, after reading through the comments and concerns made by the other reviewers and the authors' responses regarding the contribution of the task presented, I have decided to change my original rating from excellent to good.
Summary: This paper proposes an LLM-based keypoint localization method that can detect keypoints via visual or textual prompts and classify the keypoint category given its coordinates. Experiments on several public benchmarks demonstrate the effectiveness of the proposed method. Strengths: Adopting an LLM to perform in-context learning of keypoint localization is interesting, and this paper demonstrates that this pipeline is feasible and can achieve superior performance. Weaknesses: 1. It seems that the Visual Prompt-based Keypoint Detection and Textual Prompt-based Keypoint Detection are trained separately. However, it would be more useful if we could complete these tasks using a single model. So how about jointly training the above two tasks? Can they benefit from each other or not? 2. What’s the meaning of keypoint identifiers in Tables 2 and 3? It is confusing if there is no specific explanation. 3. Some typos: “Texual” -> “Textual” in lines 217, 234, 254, etc. 4. Some claims are not proper. For instance, in lines 57-58: “To the best of our knowledge, this is the first work that equip LLMs with the point-level visual perception capabilities.” [A] already introduces LLMs into keypoint localization, and this paper should emphasize the differing aspects, such as visual prompt-based keypoint localization. [A] D. Wang, S. Xuan, S. Zhang. LocLLM: Exploiting Generalizable Human Keypoint Localization via Large Language Model. CVPR 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper discusses the limitations in A.3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1: The mutual benefit between textual prompts and visual prompts In the main article, since textual prompt-based and visual prompt-based keypoint detection have their own benchmarks, combining both prompts for training would result in unfair comparisons. Here, we supplement the experiment by using both textual and visual prompts. The results on MP-100 split-1 show that the textual prompt could provide valuable high-level and rich semantic guidance to enhance keypoint localization.

| Visual Prompt | Textual Prompt | MP100-Split1 |
|---------------|----------------|--------------|
| ✓ | | 91.66 |
| ✓ | ✓ | 92.18 |

Table: Ablation study of the mutual benefit between textual and visual prompts. ### Q2: The explanation of the keypoint identifier The keypoint identifier used in CapeFormer can alleviate the ambiguity of keypoints with similar appearances when compared to using only visual features for low-level matching. However, the keypoint identifier requires additional effort to annotate keypoints across different categories, which limits real-world applications where keypoint identifiers are not available. ### Q3: About typos Thanks for pointing out the typos. We will correct them in the revised version. ### Q4: About related work LocLLM [A] became publicly available on June 7, 2024, which is later than our submission date. Therefore, our work can be considered concurrent. LocLLM only uses textual prompts for human keypoint localization using an LLM. In contrast, we explore the use of LLMs for more comprehensive keypoint comprehension of diverse objects. We will revise some claims and add discussions to address this.
Summary: The authors aim to enhance multi-modal LLMs with semantic keypoint comprehension. They introduce a hybrid visual prompting approach using a query and a support input image. In their pipeline, visual features from both images are extracted via a vision encoder model. Additionally, a support keypoint prompt, indicating keypoint positions on the support image, is used. The embeddings from this prompt are combined with the support image embeddings and fed into the LLM. This mixed embedding helps the model detect and understand the keypoint in the input query image, generating the desired keypoint location information. Strengths: The paper is well written and easy to follow. The proposed method shows promising results, although its scope of usability and generalization should be better examined. Weaknesses: The proposed support image and support keypoint prompt essentially act as few-shot examples for the model. It would be beneficial to explicitly acknowledge this in the paper. Another limitation of the proposed method is the requirement for a visually similar support image with keypoint locations for each input query image. Have the authors tested the model with support images that are visually different from the query images? How much visual difference can the model handle without a significant drop in accuracy? Conducting an ablation study on this would be crucial to determine the extent of semantic (high-level) or texture (low-level) distribution shifts the method can tolerate without a notable accuracy decline. Technical Quality: 2 Clarity: 3 Questions for Authors: The authors should compare their approach to GPT-4V as a baseline to better understand the benefits of the additional steps they have added to their proposed pipeline. It is unclear where the support image and support keypoint prompts come from during inference. Are these data available to other methods in the experiments? Figure 2 is somewhat confusing.
(Z_q) is fed to the LoRA section, while the rest of the information is fed to the rest of the LLM. Is the figure designed this way on purpose? If so, could you please clarify this? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1: Visual prompt-based keypoint detection is equivalent to few-shot keypoint detection Visual prompt-based keypoint detection can indeed be considered a form of few-shot keypoint detection. In our main article, we have explicitly illustrated the input requirements of visual prompt-based keypoint detection. ### Q2: The impact of visual differences between support images and query images on model performance In fact, visual prompt-based keypoint detection does not require a visually similar support image with keypoint locations for each input query image. In the MP-100 dataset, objects within the same novel category are not always similar, and there are many challenging cases. To illustrate this, we select the tern body as a novel category, which exhibits significant species variation, to demonstrate the impact of visual differences between support images and query images on model performance. As shown in the visualizations in the PDF, using the same support images with support keypoints, our model can effectively detect keypoints on different query images, despite variations in poses, object appearances, and environments. ### Q3: Comparison with GPT-4V We compare our model with GPT-4V for visual prompt-based keypoint detection. Given the high cost of utilizing the GPT-4V API, we randomly sample 10 paired images for each category in the test set. For the split-1 set, which includes 20 test categories, this results in a total of 200 image pairs used for evaluation. As shown in the table below, our model effectively detects novel objects given the support image and support keypoints, outperforming GPT-4V in these scenarios.

| Method | MP100-Split1-Sub |
|--------|------------------|
| GPT-4V | 25.23 |
| Ours | 91.58 |

Table: Comparison with GPT-4V for visual prompt-based keypoint detection.
### Q4: The evaluation protocol is consistent across our model and other comparable methods Our model and other comparable methods all use the standard benchmark MP-100 for the evaluation of visual prompt-based keypoint detection task. This benchmark ensures that the species categories used in the training and testing phases are non-overlapping, meaning that the test objects are entirely unseen during training. Following the standard evaluation protocol used by comparable methods, we sample 3,000 random pairs (one as the support image and another as the query image) for each novel category during testing. With 20 test categories for each split, we construct a total of 60,000 pairs for comprehensive and effective evaluation. ### Q5: About the figure of our framework We apologize for any misunderstanding. LoRA (Low-Rank Adaptation) is an efficient fine-tuning technique that modifies existing layers of a Large Language Model (LLM) by introducing additional low-rank matrices. These matrices enhance the model's ability to comprehend keypoints without the need for extensive retraining of the original weights. As a result, all multimodal inputs ($Z_q$ and other inputs) are processed through both the LLM and the LoRA weights. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ rebuttal. Q5: I did not ask “What is LoRA,” which I already know. I asked why the figure is designed in that particular way; the figure is confusing for the average reader. However, this question and the answer do not affect my rating. Considering the other reviewers’ feedback and the authors’ rebuttal, I plan to increase my overall rating. Before doing so, I would like to know if the authors plan to include the reviewers’ feedback in their final camera-ready version (specifically for my reviews, Q2 and Q3), as I did not see any promises being made by the authors in their comments. --- Reply to Comment 1.1.1: Title: Rebuttal by Authors Comment: Thank you for your response. 
We greatly appreciate all the valuable feedback from the reviewers, which will help us improve our paper. We promise that all reviewers' feedback, including yours, will be incorporated into our final camera-ready version. Specifically, your comments, along with the visualizations and additional experiments in response to Q2 and Q3, will further clarify the advantages of our method for visual prompt-based keypoint detection. Additionally, we appreciate your suggestions regarding the framework diagram, and we will revise it to prevent any misunderstandings.
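The LoRA description in the reply above (frozen base weights plus trainable low-rank matrices) can be sketched in a few lines. This is a generic illustration with hypothetical shapes, not KptLLM's actual configuration:

```python
import numpy as np

d, k, r = 64, 64, 8  # r << min(d, k): the update B @ A has rank at most r
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # initialized to zero, so W is unchanged at start

def lora_forward(x):
    # The adapted layer computes x @ (W + B @ A).T; only A and B are trained.
    return x @ (W + B @ A).T

x = rng.standard_normal((2, k))
# With B = 0, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only A and B receive gradients, all inputs flow through the full adapted layers while the original LLM weights stay frozen, which is consistent with the reply's point that the multimodal inputs are processed through both the LLM and the LoRA weights.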
Summary: The paper proposes to build an MLLM for keypoint detection. They propose three main points to achieve this. First, the paper introduces a new benchmark that measures various types of keypoint detection, including both text- and visual-prompt-based, as well as traditional keypoint understanding. Next, they propose to build a multimodal LLM designed specifically for keypoint detection, which is shown to perform quite well on this task. Strengths: [S1] Good results: The paper boasts quite good results for the task at hand. The experiments are thorough and clearly outline the contribution of various components of the model. [S2] New benchmark: The paper proposes a new benchmark for keypoint understanding, which goes beyond the traditionally available keypoint understanding benchmarks by bringing visual and text prompting. While this is mainly in service to the current paper, it could be independently useful for posterity. Weaknesses: [W1] Not a lot of novel insights: This is my main, and mostly only, weakness for the paper, but it takes a few different forms. Firstly, this is a straightforward "building" of the MLLM formula, i.e., convert the task at hand into "instruction format" -> train a vision-to-LLM connector -> fine-tune the LLM; a formula used by a lot of recent works. As such, this would be mostly of interest for the domain performance, keypoint comprehension in this case. This already casts some doubt on whether this is of sufficient interest to a broad community. Next, there are also not a lot of modular components that might be broadly useful and that haven't already been studied by other works. Some details below: W1.1 LLaVA does almost as well (for supported tasks): The paper shows that, once fine-tuned, the popular LLaVA model does almost as well as the proposed method. The significance of this finding is not thoroughly discussed. Why build a new model at all?
Furthermore, there are more capable MLLM baselines such as R1, R2, R3 that might be more suitable targets for fine-tuning. See 1.2 below as well. W1.2 The proposed method of generating a special token that can be decoded to a keypoint has also been explored before (R1). Besides, there are also various MLLMs that are capable of doing region comprehension (R2, R3, ...). Those are not sufficiently discussed. If fine-tuned, like LLaVA, how would they do on this task? W1.3 Is visual prompting a good paradigm for keypoint detection? What is the importance of visual prompting for this task? [EDIT: Clarified in post-rebuttal] References: R1: https://arxiv.org/abs/2311.06612 R2: https://arxiv.org/abs/2306.15195 R3: https://openreview.net/forum?id=2msbbX3ydD Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the weaknesses above, which also contain questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1: Novelty and Contribution **To the Keypoint Detection Community:** To the best of our knowledge, this is the first work to address the problem of semantic keypoint comprehension, which aims to understand keypoints within different human-AI interaction contexts. Previous keypoint detectors primarily rely on directly learning visual or textual prompts for keypoint localization through extensive data fitting, often neglecting the semantic understanding of keypoints. This oversight often leads to misinterpretations of the prompts and inaccurate predictions. In contrast, our work proposes the "identify-then-detect" strategy, which requires the model to first comprehend the semantics of keypoints and then accurately determine their positions through a chain-of-thought process. This paradigm could enhance both interpretability and localization accuracy. **To the MLLM Community:** Compared with previous works (R1, R2, R3) that focus on region-level understanding, our model takes a further step to achieve more fine-grained, pixel-level understanding and localization, which is critical for various real-world applications. For instance, **semantic keypoint understanding** equips MLLMs with enhanced visual comprehension and analytical capabilities, essential for tasks such as structural analysis, action recognition, and medical image interpretation. **Visual prompt-based keypoint detection** allows MLLMs to acquire keypoint definitions from visual prompts, enabling cross-class and cross-keypoint localization using sample images provided by users. Additionally, **textual prompt-based keypoint detection** enables MLLMs to follow human language guidance, facilitating keypoint localization on arbitrary objects and keypoint categories in a zero-shot manner.
### Q2: Compared with LLaVA, R1, R2, R3 The preliminary results in our paper indicate that LLaVA can address the keypoint semantic understanding task through fine-tuning, primarily because this task only involves answering questions based on images. Given the architecture of LLaVA and similar models like R1, R2, and R3, these models can be directly fine-tuned to support this task. **However, they all lack capabilities for fine-grained keypoint localization, a critical aspect that our article emphasizes.** For example, LLaVA, R1, R2, and R3 cannot support the input of visual prompts (support images along with support keypoint definitions) for keypoint detection, which could enhance the generalization capabilities of in-context learning within LLMs. In addition, although R1 employs a special token to extract features from LLMs and uses an additional decoder to decode corresponding masks or boxes, it does not address the new challenge of keypoint detection, which remains unhandled by existing models. ### Q3: The importance of visual prompt-based keypoint detection Actually, visual prompt-based keypoint detection is a task that has been widely studied in the keypoint detection community. It is also supported by a well-established benchmark, which further demonstrates its significance and practical value. Moreover, visual prompt-based keypoint detection is important and practical in various real-world scenarios. For example, real-world applications across various fields often require detecting keypoints on a variety of unseen objects. To address this, it is a good choice to use a visual prompt-based keypoint detection model, which can detect arbitrary poses of unseen objects. In such scenarios, compared to textual prompt-based keypoint detection, visual prompts allow the model to more accurately detect keypoints that are difficult to describe semantically, such as those on clothes. --- Rebuttal Comment 1.1: Title: Post rebuttal comments Comment: Thank you for your detailed reply.
> Response to Q1 I agree with some of the points and am unclear on others, even the points that sound good. E.g., "For instance, semantic keypoint understanding equips MLLMs with enhanced visual comprehension and analytical capabilities, essential for tasks such as structural analysis, action recognition, and medical image interpretation." is not shown in the paper. A stronger submission would indeed do a generic eval to highlight these properties as well. > Response to Q2 I kinda disagree that existing MLLMs cannot be fine-tuned for this task. However, I do not think that is a weakness by itself. MLLMs are very versatile and could be fine-tuned to a broad range of tasks; that does not, by itself, pose a weakness. However, I am worried about a slight overclaim in treating the current work as more than what it does, i.e., enabling a new decoding to take place, similar to perception GPT. > Response to Q3 Thank you for your response. I understand that it is indeed an established benchmark and I was able to verify that. So, I stand corrected from my earlier comments. Overall, some of my concerns have been clarified, but I think a major rewrite with additional experiments and discussion is needed. So, I will slightly raise my score (also in light of other reviews and responses), but I am still leaning towards a major revision being needed. --- Reply to Comment 1.1.1: Comment: Thanks for your response. - In addressing Q1, we emphasize the significance and potential applications of the proposed semantic keypoint detection task. However, these downstream applications are beyond the scope of our current work and are intended for future exploration. We will adjust some of our claims accordingly. - Regarding Q2, fine-tuning existing MLLMs for keypoint detection is a non-trivial task, as it involves challenges such as keypoint data preparation and necessary architectural modifications.
This highlights the importance of our work, which pioneers the advancement of MLLMs for fine-grained keypoint understanding and localization. We will add some discussions to ensure clarity and avoid any potential misinterpretations.
Rebuttal 1: Rebuttal: We appreciate the efforts of all reviewers in reviewing our paper and providing insightful comments and valuable suggestions. The supplementary visualization results (for Reviewer Hwow) have been included in the rebuttal PDF. Pdf: /pdf/2db4bc321cea9123f93ab0c567793c9867e97f3a.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Dual Cone Gradient Descent for Training Physics-Informed Neural Networks
Accept (poster)
Summary: **Summary:** The paper proposes viewing the optimization of PINNs as a multi-objective optimization problem, with boundary and residual terms as potentially conflicting objective functions. To address conflicting gradients in the optimization process, the dual cone of the cone generated by the gradients of the boundary and residual terms is designated as the set of non-conflicting update directions. Several algorithms that guarantee updates from this dual cone are proposed, and their empirical performance is assessed. **General Impression:** The manuscript is well-written, easy to follow, and generally of high quality. The numerical experiments are convincing and demonstrate the good performance of the proposed method. However, I disagree to a certain extent with the multi-objective viewpoint for the optimization of PINNs. For well-posed forward problems, the boundary loss and the interior loss are not really conflicting losses—at least if we assume that the network ansatz is sufficiently expressive. I expand more on this issue below in the questions. Nevertheless, the preprint presents a convincing suite of numerical experiments that seem to benefit from the multi-objective optimization approach. Strengths: **Strengths:** 1. Considering the dual cone of the interior and residual gradients is a simple and sound idea. The presentation is clear and understandable. 2. The section on numerical results demonstrates the benefits of the proposed method and incorporates many recently proposed techniques for the optimization of PINNs. Weaknesses: See questions. Technical Quality: 3 Clarity: 3 Questions for Authors: **Questions** 1. It is unclear whether PINN optimization should be viewed as a multi-objective optimization problem. Please comment. 2. 
I am missing a clarification on whether the proposed dual cone gradient descent is a completely novel idea, even in the field of multi-objective optimization, or if it is well-known in that field and its application to PINNs is novel. Please clarify and give references if appropriate. 3. Could the authors comment on how their proposed method relates to loss reweighting strategies? It seems that the proposed approach could equivalently be understood as a loss reweighting strategy. 4. Can the authors comment on the case of more than two loss functions? 5. Have the authors considered measuring angles in function space instead of parameter space by using the differential of the parametrization map? Recently, optimization methods that employ function space geometry have shown promise for the optimization of PINNs (see [1, 2, 3]). This might lead to further improvement of the proposed method. **Details to question 1:** A fundamental question for me is whether to view PINNs as a multi-objective optimization problem or not. To illustrate this question, consider the following academic example of a PDE with homogeneous forcing and inhomogeneous boundary data: \begin{align} Nu &= 0 \quad \text{in } \Omega, \\ Bu &= g \quad \text{on } \partial\Omega. \end{align} Assume further that $N$ maps constant functions to zero (any PDE operator without a zeroth-order term). Assume a trial neural network is initialized representing (almost) the zero function (or any other constant function). Any change in the neural network will now likely increase the PDE residual (which is at its global minimum for any constant function). Furthermore, the increase in the PDE residual is necessary to obtain the correct solution. This might lead to a Pareto stationary point that has nothing to do with the sought-after solution.
**References** [1] https://proceedings.mlr.press/v202/muller23b/muller23b.pdf [2] https://openreview.net/pdf?id=z9SIj-IM7tn [3] https://arxiv.org/pdf/2402.07318 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments and positive feedback. > **Response to Q1.** **Simple Answer**: As discussed in Section 3, it is empirically observed that gradient conflicts and imbalances between losses frequently arise during PINN training, which has led to the development of various loss balancing strategies. Our work is motivated by the idea that viewing and addressing these issues from a multi-objective optimization perspective can offer a more effective solution. **Long Answer**: We appreciate the depth and challenge of this question, as well as the detailed example you provided. We think this question is closely associated with a fundamental aspect of the PINN learning process, which is indeed complex and not yet fully understood both theoretically and empirically. The trade-off between increasing PDE residuals and achieving better approximations to the true solution is indeed a critical question, and there is no theoretical guarantee that this trade-off will always lead to improvements. In the example you mentioned, even if the network is initialized at the trivial solution 0, we think it is possible for the network to reach the true solution smoothly without increasing PDE residuals. This is because, in the absence of boundary conditions, there are infinitely many solutions that satisfy $\mathcal N u=0$, including solutions other than the trivial one. The neural network can transition smoothly among these solutions and approach the true solution. In this context, the main goal of DCGD is to facilitate a smooth learning process without sacrificing PDE residuals. This approach is particularly important in failure modes like the double pendulum example in Section 5.2, where maintaining a smooth and effective learning trajectory is crucial. >**Response to Q2.** Thank you for the opportunity to clarify the novelty and originality of our DCGD framework. 
While the concept of dual cones has been used in convex optimization for duality theory and formulating optimality conditions, our paper employs dual cones to characterize a space where gradient conflicts do not occur and integrates this with gradient descent methods. This approach is novel not only in the context of PINNs but also within gradient descent methods more broadly. For a more detailed discussion of the novelty and originality of DCGD, please refer to "Novelty and Originality of DCGD" in our global response. >**Response to Q3.** Thank you for raising this insightful question. While our proposed DCGD framework does share similarities with loss reweighting strategies in that it adaptively adjusts the weight of each loss term (in fact, $\nabla_t \mathcal{L}_{\|\nabla \mathcal{L}_r^{\perp}}$, $\nabla_t \mathcal{L}_{\|\nabla \mathcal{L}_b^{\perp}}$ in DCGD) at each iteration, it is more accurately described as a gradient surgery approach. This is because DCGD focuses on manipulating the direction of the updated gradient to address gradient conflicts and dominating gradient issues. As discussed in Appendix B, various multi-task learning algorithms [1,2,3] based on gradient surgery techniques can be seen as special cases of DCGD. Additionally, since loss reweighting schemes operate fundamentally differently, combining DCGD with loss reweighting strategies could offer potential synergies. For example, as shown in Section 5.3, integrating DCGD with existing loss reweighting methods like LRA and NTK leads to improved performance. [1] Yu, T., Kumar, S., Gupta, A., Levine, S., Hausman, K., & Finn, C. (2020). Gradient surgery for multi-task learning. *Advances in Neural Information Processing Systems*, *33*, 5824-5836. [2] Liu, B., Liu, X., Jin, X., Stone, P., & Liu, Q. (2021). Conflict-averse gradient descent for multi-task learning. *Advances in Neural Information Processing Systems*, *34*, 18878-18890. [3 ]Désidéri, J. A. (2012). 
Multiple-gradient descent algorithm (MGDA) for multiobjective optimization. *Comptes Rendus Mathematique*, *350*(5-6), 313-318. >**Response to Q4.** Please refer to “more than two losses” in our global response. >**Response to Q5.** Thank you for the interesting suggestion and for bringing the references [4,5,6] to our attention. We have carefully studied these papers and recognize that measuring gradients in function space rather than parameter space offers valuable insights for optimizing PINNs. While more in-depth experimental validation is needed to fully understand gradient conflicts in function space, we have explored this approach by measuring angles using the natural gradient, as illustrated in **Figure 2** of the attached PDF. Our findings indicate that gradient conflicts occur in function space similarly to parameter space. Furthermore, our method appears to handle these conflicts effectively. We believe that extending our framework to consider the dual cone in function space could be a promising direction for future research. [4] Müller, J., & Zeinhofer, M. (2023, July). Achieving high accuracy with PINNs via energy natural gradient descent. In *International Conference on Machine Learning* (pp. 25471-25485). PMLR. [5] Zeng, Q., Kothari, Y., Bryngelson, S. H., and Schaefer, F. T. Competitive physics informed networks. In *The Eleventh International Conference on Learning Representations.* [6] Müller, J., & Zeinhofer, M. Position: Optimization in SciML Should Employ the Function Space Geometry. In *Forty-first International Conference on Machine Learning.* --- Rebuttal Comment 1.1: Comment: I thank you very much for the detailed answer. **Concerning Q1:** I understand your argument that the neural network can be trained in a way that does not increase the residual. I am still not fully convinced by the multi-objective viewpoint, but your experimental results are convincing, and my not being convinced should not influence my assessment of your work.
In any case, it is an interesting question. **Connection to loss re-weighting strategies:** I believe it holds $$ G_t \subset \operatorname{span} ( \nabla \mathcal L_b, \nabla \mathcal L_r ) $$ hence, no matter which update $g\in G_t$ is picked, we can write $$ g = \alpha \nabla \mathcal L_b + \beta \nabla \mathcal L_r $$ and hence view the update direction $g$ as stemming from the loss re-weighting $$ \tilde{\mathcal L} = \alpha \mathcal L_b + \beta \mathcal L_r $$ and $\nabla \tilde{ \mathcal L} = g$. **Function spaces** Thank you for the extra experiments. Could you elaborate which metric was chosen to compute the natural gradients? I am convinced that this pre-print merits publication. I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our paper and for raising our score. Below are our responses to your additional questions. >**Concerning Q1** Thank you once again for your deep and interesting question. >**Connection to loss re-weighting strategies** We agree with your observation. The proposed DCGD framework can be interpreted as a loss re-weighting strategy that ensures the prevention of gradient conflicts. >**Function spaces** For computing the natural gradients, we used the metric induced by the inner product in $H^1$ as in [1]. [1] Müller, J., & Zeinhofer, M. (2023, July). Achieving high accuracy with PINNs via energy natural gradient descent. In *International Conference on Machine Learning* (pp. 25471-25485). PMLR.
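To make the dual-cone discussion above concrete, here is a minimal NumPy sketch of a PCGrad-style projection update, one of the gradient-surgery methods the rebuttal describes as a special case of DCGD. This is not the authors' DCGD (Center) rule; the function names and toy gradients are purely illustrative.

```python
import numpy as np

def in_dual_cone(g, g_b, g_r, tol=1e-12):
    """A vector g lies in the dual cone if it conflicts with neither
    the boundary gradient g_b nor the residual gradient g_r."""
    return g @ g_b >= -tol and g @ g_r >= -tol

def surgered_update(g_b, g_r):
    """PCGrad-style update: when the two gradients conflict, project
    each onto the orthogonal complement of the other before summing.
    The result provably lies in the dual cone."""
    if g_b @ g_r >= 0:  # no conflict: the plain sum already qualifies
        return g_b + g_r
    h_b = g_b - (g_b @ g_r) / (g_r @ g_r) * g_r
    h_r = g_r - (g_r @ g_b) / (g_b @ g_b) * g_b
    return h_b + h_r

# Conflicting toy gradients: the naive sum leaves the dual cone.
g_b = np.array([1.0, 0.0])
g_r = np.array([-0.9, 0.1])
naive = g_b + g_r
print(in_dual_cone(naive, g_b, g_r))                       # False: <naive, g_r> < 0
print(in_dual_cone(surgered_update(g_b, g_r), g_b, g_r))   # True
```

Since the surgered update is a linear combination of $g_b$ and $g_r$, it also illustrates the reviewer's observation that any such update can be read as a loss re-weighting $g = \alpha \nabla \mathcal L_b + \beta \nabla \mathcal L_r$.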
Summary: The paper is concerned with the training of physics-informed neural networks (PINNs). It observes that in a multi-objective setting a naive gradient update which decreases the total objective might not decrease every individual objective. This is used to explain the challenge of training PINNs which have a potentially conflicting PDE and a boundary-condition loss. As a remedy, the work proposes to find another descent direction which decreases both of the PINN losses. Three different methods are proposed and the superior "center" method is selected for more detailed empirical investigation. It demonstrates strong results relative to several existing PINN optimizers. Strengths: The paper presents a clear central story that explains multi-objective optimization in terms of simple geometric concepts. The theoretical analysis in addition to the empirical evaluation is a welcome addition. Weaknesses: However, the significance of the theoretical contributions must be questioned. There are just two theorems, of which 4.2 is fairly trivial and 4.5 is unclear: why is $\frac{1}{T+1}$ on both sides of Equation 4.2, and what does this equation even state? I cannot make sense of the provided intuition that the convergence rate is $\mathcal{O}(1/T)$, i.e., the more iterations, the slower you converge. Even more so, it is unclear how this intuition follows from Eq. 4.2 in the first place: it says that the sum of gradient norms is upper-bounded by the gap from the initial to the optimal loss, up to some constant. The work also overlooks some recent literature explaining the difficult behavior of PINNs in terms of poor conditioning and proposing strong optimization strategies, which to the best of my knowledge achieve state-of-the-art convergence. I would argue these ideas should be included in the related work and as baselines contributing to a much more complete story. - *De Ryck et al. An operator preconditioning perspective on training in physics-informed machine learning.
ICLR 2024.* - *Rathore et al. Challenges in Training PINNs: A Loss Landscape Perspective. ICML 2024.* Several presentation choices could be improved, including minor mathematical and typing errors, odd wording, and superfluous parts (see suggestions below). The work also overstates the practical success of PINNs: "a prominent approach for solving PDEs" (line 2), "remarkable empirical performance" (line 4), "PINNs have achieved great success in a wide range of applications" (line 8). This sets a misleading context for the practical significance of PINNs and by extension the presented work. - *Grossmann et al. Can Physics-Informed Neural Networks beat the Finite Element Method? IMA Journal of Applied Mathematics 2024.* Technical Quality: 3 Clarity: 2 Questions for Authors: - I would kindly ask the authors the clarify Theorem 4.5 in terms of the aforementioned concerns. - How do the aforementioned optimization strategies compare to your experiments? Do you foresee a way for your method to be compatible to these (similar to section 5.3)? - Does your method extend to more than two losses? Even in PINNs, there might be additional losses for data and initial conditions. - How important is the assumption of fixing $\omega_r = \omega_b = 1$? Do the other optimizers show improved performance for other choices of these loss weights? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations are discussed rather sparingly and should also include that the method currently applies to just two losses. It should also be noted that the performed experiments are closer to toy than practical problems. **Errors and suggestions** - Line 59: It would be helpful if you briefly described what each of these three approaches do, instead of only listing examples. - Line 73: $d$ is used for both the dimension of the domain as well as the parameter space. - Eq. 2.2 and line 88: $\omega_u$ should be $\omega_b$ - Line 79: should this be an equality? 
- I do not think writing out algorithm 1 adds much value as this is precisely gradient descent but with a different gradient. - Line 170: should there be an argmin or arginf instead of inf? Or alternatively $\mathcal{L}(\theta*)$ instead of $\theta*$ - Lines 229-240 and Figure 6 might better belong to the appendix as it detracts attention from the main story. - Line 185: "presents" -> "present" - Line 221: "ADAM" -> "Adam" - A.4: the first inequality should be an equality - Adjust some odd wording, e.g. "mysterious challenges" (line 44), "harmoniously" (line 48) - I suggest to moderate the claims about the practical success of PINNs (see weaknesses for concrete examples). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review and for raising questions from the perspective of readers who may be less familiar with optimization theory. We hope the following responses will clarify any misunderstandings and address your concerns. > **Response to W1 (Theorem 4.2. is trivial).** We respectfully disagree with the assessment that Theorem 4.2 is trivial. We believe that this theorem is not only intuitive but also provides significant insights. As detailed in Section 3, much of the existing literature on PINNs has explained gradient pathologies using concepts like gradient conflict and dominating gradients. However, dominating gradients have been somewhat ambiguously defined, often leading to ad hoc strategies. Theorem 4.2 clarifies that dominating gradients should be adaptively defined based on the angle between gradients, addressing a fundamental gap in understanding gradient pathologies that has been overlooked in previous work. >**Response to W2 and Q1 (Interpretation of Theorem 4.5).** Firstly, the statement of Theorem 4.5 (equation 4.5) is standard in optimization theory for showing the convergence of optimization algorithms (for example, see Corollary 2.2.1 of [1] and Theorem 3.2 of [2]). Additionally, your interpretation of "the more iterations, the slower you converge" is incorrect. The left-hand side of equation 4.5 represents the average of the squared gradient during training, and it implies that: $$ \min_{t=0,1,\ldots, T}\|\nabla\mathcal L(\theta_t)\|^2\leq\frac{1}{T+1}\sum_{t=0}^T\|\nabla\mathcal L(\theta_t)\|^2=\mathcal O(1/T) $$ Thus, the optimization algorithm converges to a stationary point such that $\|\nabla\mathcal L (\theta_t)\|=0$ as the number of iterations increases. This result represents the optimal convergence rate that can be achieved under the condition that only the gradient being Lipschitz continuous is provided. 
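The min-versus-average step in the display above can be checked numerically: for any sequence of squared gradient norms, the minimum over iterations is bounded by the average, so an $\mathcal{O}(1/T)$ bound on the average forces the best iterate's gradient norm to vanish at the same rate. A tiny sketch with a synthetic, purely illustrative norm sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic squared gradient norms over T+1 iterations (illustrative only).
sq_norms = rng.random(1000)

T = len(sq_norms) - 1
running_avg = sq_norms.sum() / (T + 1)

# min_t ||grad L(theta_t)||^2 <= (1/(T+1)) * sum_t ||grad L(theta_t)||^2,
# so bounding the right-hand side by C/(T+1) yields a best-iterate O(1/T) rate.
assert sq_norms.min() <= running_avg
```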
Furthermore, Theorem 4.5 provides crucial information compared to the convergence results of existing gradient descent methods like SGD and Adam. While SGD and Adam are only guaranteed to converge to a stationary point, DCGD not only converges to stationary points but also has the potential to converge to Pareto-optimal solutions in multi-objective optimization. [1] Zhuang, J., et al. Adabelief optimizer: Adapting stepsizes by the belief in observed gradients. NeurIPS 2020. [2] Hazan, Elad. Lecture notes: Optimization for machine learning. 2019. > **Response to W3 and Q1 (Comparison with Two Recent Papers [3,4])** Thank you for introducing the recent papers. Since one of these papers was published after our submission, we were unable to include it in our related work at that time. After carefully studying the papers, we compare their motivation, theoretical aspects, and performance with ours: **Motivation:** Both papers focus on the ill-posedness of the PINN loss and discuss the importance of preconditioning and the combination of second-order and first-order methods. In contrast, our work analyzes PINN gradient pathology from a multi-objective optimization perspective and proposes a novel algorithm to address these issues. **Theoretical Aspects:** We kindly point out that the reviewer's statement that “the convergence results in the two papers are state-of-the-art” is somewhat inappropriate because their convergence results cannot be directly compared to ours. The papers derive convergence results for optimizers by making assumptions about the Hessian (PL condition) and the overparameterized model (the assumption that the model can achieve zero loss on the training dataset; see Assumption 8.1 of [4]). In contrast, our results are derived under **the minimal assumption** that gradients are Lipschitz continuous, without any assumptions on the Hessian or the overparameterized model.
Thus, the theoretical settings and assumptions are completely different, making a direct comparison of theoretical results inappropriate. To reiterate, our derived convergence rate represents the optimal rate that can be achieved when the Lipschitz continuity condition of the gradient is only provided. **Performance:** Our paper and the two mentioned papers take completely different angles on PINN optimization. Thus, combining these ideas harmoniously could lead to further improvements. Indeed, our additional experiments show that the combination of NNCG from [4] with DCGD results in improved performance, see **Table 4** of the attached PDF. [3] De Ryck et al. An operator preconditioning perspective on training in physics-informed machine learning. ICLR 2024. [4] Rathore et al. Challenges in Training PINNs: A Loss Landscape Perspective. ICML 2024. >**Response to W4. overstatement of the practical success of PINNs** Thank you for pointing this out. We are happy to adjust our language to be more neutral in the revised manuscript. >**Response to Q4 ($w_r=w_b=1$)** The setting $w_r = w_b = 1$ is a common choice in the PINN literature for simplicity in presentation (e.g., [5, 6, 7] ). Furthermore, since DCGD updates using a dual cone region to avoid gradient conflicts across different values of $w_r$ and $w_b$, this ensures robust performance of DCGD regardless of the specific values chosen for $w_r$ and $w_b$. To support this, we have compared the performance of optimizers under different values of weights. As shown in **Table 3**, DCGD consistently demonstrates robust and superior performance compared to other competitors across different values of $(w_r, w_b)$. [5] Raissi, M., et al. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics. [6] Wang, S., et al. (2021). 
Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing. [7] Zhao, Z., et al. PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks. ICLR 24. --- Rebuttal 2: Comment: I kindly thank the authors for their insightful and strong rebuttal. I was indeed not aware of this as a standard notation in the optimization literature. I appreciate you pointing this out, including the references, and I acknowledge my oversight on Theorem 4.5. I will also reduce my confidence rating on these grounds. I also appreciate the theoretical and empirical comparison to the invoked references. For clarification, I referred to [4] achieving the lowest error, not the best convergence rate. It is impressive to see that DCGD can replace Adam + L-BFGS when fine-tuning with NNCG. If possible, could you also report the performance of just DCGD on this task? I understand that this is combining DCGD and NNCG sequentially. I assume these can also be combined simultaneously by computing the gradients from NNCG for each loss and then combining them according to your method? Given that my main concerns have been addressed, as well as the discussion on the broader applicability beyond PINNs and extension to >2 losses, I will adjust my rating. --- Rebuttal 3: Comment: Thank you very much for your valuable comments and raising our score. Here are our responses to your additional questions. - Could you also report the performance of just DCGD on this task? The table below shows the performance of DCGD without fine-tuning with NNCG.

| Optimizer | $L^2$ error |
| --- | --- |
| Adam | 2.12e-2 |
| Adam + L-BFGS | 1.92e-2 |
| DCGD | **1.20e-2** |

- I assume these can also be combined simultaneously by computing the gradients from NNCG for each loss and then combining them according to your method? There are two methods to combine DCGD with the strategy of [1]. **Combination 1**. Apply DCGD and NNCG sequentially: first, train using DCGD for a certain number of epochs, and then fine-tune with NNCG (similar to the Adam + L-BFGS + NNCG approach). **Combination 2**. Apply the updated gradients of DCGD to each optimizer: for each optimizer (Adam, L-BFGS, and NNCG), the updated gradients from DCGD are used, as described in the pseudo-code provided below.

```python
adam_optimizer = Adam(model_params, adam_params)
lbfgs_optimizer = L_BFGS(model_params, lbfgs_params)
nncg_optimizer = NNCG(model_params, nncg_params)

# For the combined scheme: adam_DCGD = True, lbfgs_DCGD = True, nncg_DCGD = True
# For the sequential scheme (DCGD + NNCG): adam_DCGD = True, lbfgs_epoch = 0, nncg_DCGD = False
adam_DCGD = True
lbfgs_DCGD = True
nncg_DCGD = True

for i in range(adam_epoch):
    if adam_DCGD:
        # dcgd_closure sets param.grad to the DCGD update vector
        adam_optimizer.step(dcgd_closure)
    else:
        # closure uses loss.backward() where loss = loss_b + loss_r
        adam_optimizer.step(closure)

for i in range(lbfgs_epoch):
    if lbfgs_DCGD:
        # dcgd_closure sets param.grad to the DCGD update vector
        lbfgs_optimizer.step(dcgd_closure)
    else:
        # closure uses loss.backward() where loss = loss_b + loss_r
        lbfgs_optimizer.step(closure)

for i in range(nncg_epoch):
    if nncg_DCGD:
        # dcgd_closure sets param.grad to the DCGD update vector
        nncg_optimizer.step(dcgd_closure)
    else:
        # closure uses loss.backward() where loss = loss_b + loss_r
        nncg_optimizer.step(closure)
```

The results shown in Table 4 of the attached PDF were obtained using **Combination 2**. Additionally, we have also experimented with **Combination 1** and achieved further improvements. Due to time constraints, we conducted the experiments under the default parameter settings; thus, there might be room for further improvement with hyperparameter tuning.

| Optimizer | $L^2$ error |
| --- | --- |
| Adam+L-BFGS+NNCG | 9.92e-3 |
| Combination 1 | **5.60e-3** |
| Combination 2 | 9.49e-3 |

[1] Rathore et al.
Challenges in Training PINNs: A Loss Landscape Perspective. ICML 2024.
Summary: In this paper, it is identified that PINNs can be adversely trained when gradients of each loss function exhibit a significant imbalance in their magnitudes and present a negative inner product value. To address these issues, the authors propose a novel optimization framework, Dual Cone Gradient Descent (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a dual cone region. This region is defined as a set of vectors where the inner products with both the gradients of the PDE residual loss and the boundary loss are non-negative. Theoretically, the convergence properties of DCGD algorithms in a non-convex setting are analyzed. On a variety of benchmark equations, DCGD outperforms other optimization algorithms in terms of various evaluation metrics, achieves superior predictive accuracy and enhances the stability of training for failure modes of PINNs and complex PDEs. Strengths: 1. Theoretical analysis is provided, including the necessary and sufficient conditions under which the total gradient falls within the dual cone region, and the convergence rate. 2. Superior performance. Weaknesses: 1. It is better to provide the visualization of PINN predictions and compare with exact solutions, especially for the failure modes of PINNs. 2. Training timing is not reported. Technical Quality: 3 Clarity: 3 Questions for Authors: Line 277, "Failure model" vs. "Failure modes"? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The proposed Dual Cone Gradient Descent (DCGD) optimization method for PINNs may still be trapped in local minima. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and suggestions. - **Response to Weaknesses (W)** > **W1.Visualization of PINN predictions and exact solutions.** PINNs predictions and the exact solution, along with absolute error plots, can be found in Appendix C of the original manuscript. Please refer to Figures 9-15 of the original manuscript. > **W2. Training time** We provided a comparison of the training speed of the proposed method with other optimization algorithms in Appendix D.2 of the original manuscript. For additional details on the computational cost, please refer to our global response. - **Response to Questions (Q)** > **Q1. Line 277** Thank you for the correction. - **Response to Limitations** It is indeed a common challenge in high-dimensional nonconvex optimization problems for all optimization algorithms to potentially get trapped in local minima. However, DCGD has a significant advantage in that it not only converges to stationary points but also to Pareto-stationary points, distinguishing it from methods like SGD and Adam that only guarantee convergence to stationary points. This distinction is a key contribution of our work, as demonstrated in Theorem 4.5. Additionally, the empirical results from the toy example shown in Figure 6 effectively support this theoretical result. Moreover, integrating noise injection techniques, such as Stochastic Gradient Langevin Dynamics (SGLD) [1], could further enhance the exploration of better local minima, although we have not investigated such variations in this paper. [1] W. Max and Y. Teh. "Bayesian learning via stochastic gradient Langevin dynamics." ICML, 2011. --- Rebuttal Comment 1.1: Comment: Thank the authors for their replies. I am satisfied with their answers and will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you once again for reviewing our work. We are pleased that you are satisfied with our responses.
Summary: PINNs have become commonplace in both scientific computing and machine learning communities and have found widespread applications. There have been many modifications to various aspects of PINNs, such as the loss functions, initialisations, initial and boundary condition representations, learning algorithms and network architectures. A major problem with learning effective solutions through PINNs relative to other learning tasks is their difficulty in converging to the actual solution due to the complex nature of the underlying physics and equations involved and the corresponding loss landscape. The authors have provided a comprehensive survey of the previous works, making the unaddressed problems and goals very clear. A novel algorithmic framework based on convex optimization and geometric analysis is presented, which demonstrates superior performance on many tasks. The contributions of this paper are multifold, one being a new optimization algorithm while the other being a unified, open and flexible framework to adapt, modify and integrate the current technique with previous and future techniques. The paper is well written with enough information on the theoretical and mathematical underpinnings, experimental setup, implementation details, results and justifications. Strengths: Although the concept of dual cone is well known in the optimization domain, specifically in the convex optimization area, the authors utilise and combine existing techniques from optimization theory in a novel way to develop a new algorithmic framework, which they refer to as DCGD. The authors propose the DCGD algorithm which involves picking a gradient from the Dual Cone region formed by the two losses (boundary loss and residual loss) of PINNs and also theoretically prove that it converges to a pareto-stationary point. 
Thus, along with a new algorithm, theoretical guarantees for convergence are provided, which are crucial for the goal of improving the training convergence of PINNs. With a suitable candidate chosen as the new gradient from an explicitly defined space (denoted here as $G_t$), multiple variants of the DCGD algorithm can be created. DCGD can also be combined with many existing optimization techniques, such as the Adam optimizer and the Neural Tangent Kernel (NTK). This work introduces a fairly novel optimization technique which has huge potential for applications in PINNs and multi-objective optimization. The authors have provided strong justification for the generality of their work by showing that most previous MTL approaches are special cases of DCGD. The general, modular and extensible nature of the work paves the way to significantly accelerate further research in various allied areas. The supplementary material includes the datasets and code for all the experimental details mentioned in the paper and the appendices. I have glanced through the code and data files provided but have not personally verified the results by running the code. (might update here if I get to install and run the code provided) Weaknesses: The authors have provided enough empirical and theoretical evidence for their proposed DCGD method; however, since this work proposes a theoretical and algorithmic framework, it would be valuable to have a section comparing its computational complexity and qualitative properties with existing algorithms. I would say a short section in Appendix D/E would go well. Since the problem at hand is one of multi-objective optimization, which overlaps with multi-task learning problems, and PINNs are a naturally suitable choice, it would be very helpful to see the results of this approach on PINOs (Physics-Informed Neural Operators).
There is no comparison (even qualitative) of DCGD on PINOs (There are PINOs which are trained only on residual loss, but here I'm specifically referring to PINOs which include both residual and boundary loss for being relevant to DCGD). Was this not considered or omitted, or is it expected to show improvement in performance, as PINNs have shown? (Personally, I believe it should show improvements as the authors have already demonstrated superior performance on PINNformer with DCGD) Presentations/Grammar/Typos (I do understand that the authors would be well aware of these and would rectify these in the pre-print or camera-ready versions. However, I would like to mention them here together to aid them. Some might have been missed; might update if needed) Equation 2.2: The weight of the boundary loss is mentioned as $\omega_u$ and also on line 80, whereas it is mentioned as $\omega_b$ (which seems more appropriate) on line 79 Section 4.3: (This is a nitpick and can be conveniently ignored) $g_t^c$ could be written on the same line as eq. 4.3 or as a newline as eq. 4.4 similar to how equation E.3 is written Section 5.2: Is the title intended to be “P(I)DEs” or “PDEs”? I believe this is just a typo Appendix A, line 769: Equations A.8 and A.9 could be mentioned here (as A.4 and A.5) and then referenced in the Proof of Proposition 4.6 section. 
I believe this will greatly enhance readability. Appendix A, Proof of Theorem 4.2, line 757: The residual loss $\nabla\mathcal{L}_r(\theta_t)$ is repeated twice; it should be both the boundary and residual losses, $\nabla\mathcal{L}_b(\theta_t)$ and $\nabla\mathcal{L}_r(\theta_t)$. Appendix A, line 770: "One can derive that …" should we include the boundary loss term instead of the residual loss, as the derivation with the residual loss is already shown in line 769, i.e., it should be $\langle g, \nabla L_b (\theta_t) \rangle \geq 0$. Appendix A, line 781: For the sake of notational consistency, could you replace "condition (2)" with "condition (ii)". Appendix A, line 785: (nitpick) The subscript $t$ is omitted in the boundary loss and included in the residual loss ($\nabla\mathcal{L}_r(\theta_t)$ and $\nabla\mathcal{L}_b(\theta)$). I suggest the subscript "t" be included in both places for notational consistency. Appendix A, line 818, Proof of Corollary 4.7: 3. DCGD (Center): “vector” instead of “veoctr” Technical Quality: 4 Clarity: 4 Questions for Authors: I have gone through the previous works mentioned in detail, and it does seem like the problem of multi-objective optimization, or even optimization more broadly, specifically in the case of PINNs, has not been looked at from the perspective of dual cones. Could the authors add some more works related to dual cone-based techniques to the previous/related works section? Typos and presentation issues are already covered in the weaknesses section; however, here I would like to mention a few pointers where I would like to seek clarification.
(This is more from a clarification perspective for me individually than a comment on the work, though I believe these clarifications will help the broader community) Appendix A, line 779, Proof of Theorem 4.5: In the derivation of line 779, from the first step to the next, should there be an equality sign in place of the first inequality sign, as we are just distributing the dot product over addition on the first term $\nabla \mathcal{L}(y+t(x-y))$? (i.e., the first of the 3 inequality signs should be an equality instead) Appendix A, line 797: Is the expression $\left| \cos(\phi_t) - \pi \right| < \alpha$ correct and as intended? The inequalities on lines 810 and 811 and the threshold criterion in the algorithms in Appendix E ($\pi - \alpha < \phi_t \leq \pi$) point otherwise. I believe the cosine of $\phi$ should not appear; rather, it should just be the angle/phase difference $\phi$. I have a similar comment for line 984, as the same expression is used. Appendix C.5, line 925: Is the statement complete? Is it intentionally left this way? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: I appreciate that the authors have mentioned (and also acknowledged in the checklist) a few limitations of the work in Section 6; this is more from a future-work perspective than a limitations perspective on the work presented in this paper. The limitations of the current work are not very evident from the present text. It would be further helpful if they could include a more detailed section on where this cannot be applied, or other multi-objective scenarios where the DCGD optimization framework is not as effective as other previous techniques. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments, and for thoroughly checking details such as typos. We deeply appreciate your high-quality review. - **Response to Weaknesses (W)** > **W1. Comparison of Computational Complexity** Please refer to the computational complexity discussion in our global response. > **W2. Application to PINOs** Thank you for your valuable suggestion regarding the application of DCGD to PINOs. We have conducted additional experiments on PINOs for Burgers' equation and Darcy flow. The results are summarized in **Table 1** of the attached PDF, which confirm that DCGD can also enhance the performance of PINOs. For further details on the broader implications and potential impact beyond PINNs, please refer to our global response. > **W3. Typos** Thank you for carefully checking the details. We will incorporate the suggested corrections into the revised manuscript. One point to clarify: “P(I)DEs” refers to partial integro-differential equations. - **Response to Questions (Q)** > **Q1. Previous Work Related to Dual Cone-Based Optimization** Dual cones are a fundamental concept in optimization, particularly in convex optimization. They are used to characterize optimal solutions and feasibility conditions in constrained convex optimization and duality theory. However, to the best of our knowledge, there have been no previous instances of incorporating dual cones into gradient manipulation or developing a dual cone-based gradient descent method. We emphasize that our proposed framework is not merely an application of existing methods to multi-objective problems or PINNs but represents a completely novel development in the field. For further discussion on the novelty and originality of our work, please refer to our global response. > **Q2. Derivation of line 779** You are correct. We perform the distributive property of the inner product, which results in an equality. 
Thus, the inequality still (obviously) holds, but for clarity, we will change it to an equality as suggested. > **Q3. Line 797** Thank you for your comment. The threshold $\alpha$ is set to stop the algorithm when it reaches near the Pareto-stationary point. Therefore, the cosine should be removed from the equation, i.e., the expression $\vert \cos(\phi_t) - \pi \vert < \alpha$ should be changed to $\pi - \alpha < \phi_t \leq \pi$ . > **Q4. Line 925.** Thank you again for your careful review. The sentence that should have been removed during the annotation process was accidentally included. - **Response to limitations** While this paper demonstrates the valuable result that DCGD converges to Pareto-stationary points, which is a notable advantage over popular optimizers like SGD and Adam, there are still unexplored aspects regarding its convergence. For instance, an important and challenging question is whether DCGD can be guided to converge to a better Pareto-optimal point within the Pareto set. Additionally, although we proposed three algorithms—Center, Projection, and Average—within the DCGD framework and found DCGD (Center) to be the most effective for PINNs, there may be more efficient algorithms for other problems (e.g., PINO, MTL). These possibilities were not addressed in this paper and represent interesting directions for future research. --- Rebuttal 2: Comment: I have read through all the reviews, rebuttals and responses (including the general author rebuttal). I am satisfied with the authors' response and the general comment. I am pleased to see the applications of DCGD in unlearning problems. I would like to retain my review and the associated scores. --- Rebuttal Comment 2.1: Comment: Thank you for your feedback. We are glad that our rebuttals have addressed your concerns.
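The stopping criterion corrected in Q3 above ($\pi - \alpha < \phi_t \leq \pi$) can be sketched numerically. This is our own illustration with made-up gradient values and function names, not the paper's implementation:

```python
import numpy as np

def near_pareto_stationary(grad_r, grad_b, alpha=0.1):
    """Stop when the angle phi between the two loss gradients is close
    to pi (the gradients nearly cancel): pi - alpha < phi <= pi."""
    cos_phi = np.dot(grad_r, grad_b) / (
        np.linalg.norm(grad_r) * np.linalg.norm(grad_b))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    return np.pi - alpha < phi <= np.pi

# Nearly opposite gradients -> close to a Pareto-stationary point.
assert near_pareto_stationary(np.array([1.0, 0.0]), np.array([-1.0, 0.01]))
# Well-aligned gradients -> keep optimizing.
assert not near_pareto_stationary(np.array([1.0, 0.0]), np.array([1.0, 0.5]))
```

Using the angle itself rather than $|\cos(\phi_t) - \pi|$ keeps the threshold consistent with the criterion stated in Appendix E.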
Rebuttal 1: Rebuttal: **Global response to all reviewers** We would like to express our sincere appreciation for the insightful comments and valuable suggestions provided by all reviewers. We assure you that all comments and suggestions will be thoroughly addressed in the revised manuscript. In this global response, we will summarize and address the common questions and key comments raised by the reviewers. > **Novelty and originality of DCGD** (*in response to Reviewers Nfs2, SdDT*) To the best of our knowledge, our work represents the first instance of applying the concept of dual cones specifically to gradient manipulation within the context of gradient descent. This proposed framework is novel not only for PINNs but also across the broader field of multi-objective optimization. While dual cones are indeed a fundamental concept in optimization, particularly in relation to constrained convex optimization (e.g., duality theory and formulating optimality conditions for convex problems), as noted by Reviewer Nfs2, our approach introduces a novel application of dual cones to manage gradient conflicts. This innovative use in the context of gradient descent is an area that has not been previously explored. We would like to highlight the key contributions of our work. Existing strategies to resolve gradient conflicts are often ad hoc and lack a systematic framework. In contrast, our method provides a principled approach by utilizing the dual cone characterization to avoid gradient conflicts, as detailed in Theorem 4.2 and Proposition 4.3. This approach allows DCGD to be viewed as a generalization of various multi-task learning (MTL) algorithms, as discussed in Appendix B. 
Consequently, the proposed method has the potential to advance the development of future multi-objective optimization algorithms. Furthermore, our convergence results, presented in Theorem 4.5, demonstrate that DCGD converges not only to stationary points but also to Pareto-stationary points in a multi-objective non-convex setting. This provides both theoretical and empirical advantages compared to methods such as SGD and Adam, which guarantee convergence only to stationary points. > **Broader Implications and Potential Impact Beyond PINNs** (*in response to Reviewer Nfs2*) Our initial focus was on PINNs, but as suggested by Reviewer Nfs2, DCGD can be applied to various modern machine learning problems where multiple loss functions need to be managed simultaneously, such as in Physics-Informed Neural Operators (PINO), multi-task learning (MTL), and machine unlearning (where both forgetting and retaining losses must be considered). To demonstrate the extensibility and applicability of DCGD, we have conducted additional experiments applying DCGD to PINO (in response to Reviewer Nfs2) and machine unlearning problems. As shown in **Table 1** in the attached PDF, DCGD improves performance compared to optimally tuned methods when applied to PINO. Furthermore, in the context of machine unlearning, DCGD enables unlearning to be performed without compromising the quality of the generated images, as illustrated in **Table 2** and **Figure 1** in the attached PDF file. > **More than two losses** (*in response to Reviewers NFZX, SdDT*) - **Easy and simple approach**: For cases involving more than two loss functions, such as those arising from multiple governing equations and boundary conditions in PINNs, DCGD can be effectively applied by treating the multiple losses as a combination of the residual loss and the other losses. For instance, in Section 5.2, we applied this approach to A-PINNs with three loss functions and obtained improved results. 
- **More general approach**: Let us consider the total loss function $\mathcal{L}(\theta)$, which is the sum of multiple loss terms: $$ \mathcal{L}(\theta):= \sum^{n}_{i=1}\mathcal{L}_i (\theta). $$ Denote by $\mathbf{K}^*_t$ the set of vectors satisfying $\langle u, \nabla \mathcal{L}_i(\theta_t) \rangle \geq 0$ for all $i = 1, 2, \cdots, n$. Then we can characterize $\mathbf{G}_t$, which is a subset of $\mathbf{K}^*_t$, by an approach similar to the one developed in this paper. For example, let $\mathbf{G}^{ij}_t$ be a subset of the dual cone region for $\nabla \mathcal{L}_i(\theta)$ and $\nabla \mathcal{L}_j(\theta)$, defined as follows: $$\mathbf{G}_t^{ij} := \big\lbrace c_1g_i+c_2g_j+c_3g_c^{ij} \big| c_1, c_2, c_3\geq 0 \big\rbrace$$ where $g_k = \nabla \mathcal{L}(\theta_t)_{\|\nabla \mathcal{L}_k^{\perp}}$ for $k= i,j$ and $g_c^{ij}$ is a unit vector that is orthogonal to both vectors $g_i$ and $g_j$. We can obtain $g_c^{ij}$ by a cross product of $g_i$ and $g_j$ (more specifically, after expressing each vector as a sum of $\nabla \mathcal{L}_i (\theta)$ and $\nabla \mathcal{L}_j (\theta)$). Then we can determine the subset of the dual cone as follows: $\mathbf{G_t} := \mathop{\bigcap}\limits_{i\neq j}^{n} \mathbf{G}_t^{ij}$. In other words, the principles developed in this paper allow for the characterization of the dual cone region even when dealing with more than two loss functions. > **Computational cost** (*in response to Reviewers Nfs2, Bm5Z*) We compared the training speed of the proposed method with other optimization algorithms in Appendix D.2 of the original manuscript. It is important to highlight that DCGD algorithms are essentially first-order (explicit) gradient methods. Therefore, while there may be differences in speed due to the complexity of the update rule, these differences in training speed are not significant. Pdf: /pdf/835dc05dd063f72e09802cf184e9f5b239142fbf.pdf
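The dual-cone condition $\langle u, \nabla \mathcal{L}_i(\theta_t) \rangle \geq 0$ underlying the construction above can be checked numerically. The sketch below is only our illustration: the bisector of the unit gradients is one simple direction that always satisfies the condition for two losses, and is not the paper's exact update rule.

```python
import numpy as np

def in_dual_cone(u, grads, tol=1e-12):
    """u lies in the dual cone K* iff <u, g_i> >= 0 for every loss gradient."""
    return all(np.dot(u, g) >= -tol for g in grads)

# Two conflicting loss gradients (negative inner product).
g1 = np.array([1.0, 0.2])
g2 = np.array([-0.5, 1.0])
assert np.dot(g1, g2) < 0  # the gradients conflict

# The bisector of the unit gradients always lies in the dual cone of two
# gradients, since <u, g_i> = |g_i| * (1 + cos(angle)) >= 0.
u = g1 / np.linalg.norm(g1) + g2 / np.linalg.norm(g2)
assert in_dual_cone(u, [g1, g2])
```

Updating along any such direction decreases neither loss to first order, which is the non-conflict property the DCGD framework is built on.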
NeurIPS_2024_submissions_huggingface
2024
Image Copy Detection for Diffusion Models
Accept (poster)
Summary: This paper studies a novel task of Image Copy Detection for Diffusion Models (ICDiff). Different from the traditional Image Copy Detection task, ICDiff focuses on detecting and evaluating the degree of image copying for images generated by text-to-image (T2I) generative diffusion models. This is an important and meaningful task for the community to study, given the emergence and influence of T2I diffusion models on downstream tasks, such as image editing, icon generation, or style transfer. Effectively and precisely detecting generated images that might "copy" the content/style/character of artificially created images in the training data can help end users avoid issues such as image copyright violation. To study such a problem, the authors collected and proposed a new dataset called D-Rep. To build such a set, the authors first find text prompts from DiffusionDB that are highly similar to image titles from LAION-Aesthetics V2, using the Sentence-Bert score. Subsequently, they generate 40k images from these collected prompts using Stable Diffusion 1.5 to simulate possible image copies from LAION. To obtain the degree-of-image-copy annotation, the authors employ human annotators to score each generated-image-&-LAION-image pair from 1 to 6. Furthermore, the authors propose and train a probability-density-based embedding model (PDF-Embedding) to score the degree of image copying for input images. 90% of the collected D-Rep dataset is used as the training set for the proposed method. In the experiments, the proposed PDF-Embedding detects image copies in the D-Rep test set more effectively and efficiently than the compared methods. However, the quantitative experiments are limited to the proposed dataset. In addition, the comparison is somewhat unfair because many of the compared methods (such as GPT-4V, CLIP, DINOv2) are not trained or fine-tuned on the training split of D-Rep, but the proposed method is. Strengths: - [Clear writing]: The writing is clear. 
The structure of the paper is easy to follow. - [Novel and Important Task]: The studied and proposed task, Image Copy Detection for Diffusion Models (ICDiff), is novel, practical, and meaningful. It is important for the community to address the image copy issues caused by the training data of T2I diffusion models. This will significantly help downstream tasks avoid copyright issues. - [Good baseline]: The proposed PDF-Embedding is an efficient and good baseline to address the proposed task. Weaknesses: - [Incompleteness]: This paper is not complete in itself. The authors largely use "refer to Section XXX in the Appendix" throughout the paper. In Sec. 5.1 (Line#185), the entire section for Training Details contains only 1 sentence: "Please refer to Section E in the Appendix". I understand the space is limited in the main manuscript and the authors want to use the space for more important content. However, a paper should be complete in itself. After carefully reading the paper, I cannot know: i) how the model is trained, what the architecture is, what the configurations are; ii) how the other methods are compared; do the authors train them, and how are they trained or adapted to the current task? These details are important for the main paper for the readers to understand the method and the experiments. - [Limited experiments]: The quantitative experiments are only conducted on one dataset (4k images in the test set) in Tab. 1. In addition, this dataset is generated by only one specific T2I model, Stable Diffusion 1.5. Different generative models might have different image copy patterns. The current experiments can hardly validate the effectiveness and advantages of the proposed method over other methods. More comprehensive experiments should be conducted. When switching prompts (data) and the generative models, maybe zero-shot methods such as CLIP, M-LLMs (GPT-4V), or DINOv2 are more generalizable. Without large-scale experiments, it is hard to draw a conclusion. 
- [Unfair comparison]: In Tab. 1, the compared Vision-Language Models (CLIP, GPT-4) and Self-supervised Learning Models (e.g., DINOv2) are not trained or fine-tuned on the evaluation dataset's training split. However, the proposed method is trained on the training split. This makes the comparison unfair. To ensure a fair comparison, either the compared methods should be trained or fine-tuned on the same training split, or all methods should be evaluated on novel datasets in a zero-shot manner. With the current unfair comparison, the advantage and effectiveness of the proposed method are not clear. - [Duplicate content]: Sec. 5.2 and 5.3 are highly overlapped. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1: Do the authors study the quality of text matching using Sentence-Bert? Pure linguistic matching could be quite ambiguous and subsequently lead to incorrect matching. For instance, "containers" can be matched to both the huge containers for shipment at a port and the plastic containers (such as a lunch box) in a kitchen context. In addition, do the authors study different scores, such as the CLIP text encoder score instead of the Sentence-Bert score? As observed in [1], for visual-content-related text, the CLIP encoder seems better than Sentence-Bert. Q2: How do you define Image Copy for Diffusion Models? What kind of "Copy" should be detected? This is important for the studied task. For instance, when a user prompts a T2I diffusion model with "A portrait of the Mona Lisa.", of course, a highly similar image that copies the famous Mona Lisa portrait will be generated. But is this an image copy? This kind of detection might not be helpful for downstream tasks. The task should be defined better, and thereby the method can be designed better. From the visualization in Fig. 3, I found the high-score copies are mainly "object copy". These copies might occur frequently if the prompt tasks the diffusion model to generate such an image. 
By conditioning and limiting the prompt text, this can be solved directly. So why is it useful? In my opinion, a worth-detecting image copy would be when prompting "Generate a yellow cartoon-style mouse" and a Pikachu-like image is automatically generated without it being explicitly mentioned in the prompt. Q3: How well will the proposed method generalize, quantitatively, to images generated by other diffusion models, such as SDXL or DALL-E 3? [1] Chen, Z., Chen, G. H., Diao, S., Wan, X., & Wang, B. (2023). On the Difference of BERT-style and CLIP-style Text Encoders. arXiv preprint arXiv:2306.03678. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely appreciate your positive feedback and helpful suggestions. We hope our paper will meet your approval once we address the following concerns.* **Q1. The large use of "refer to Section XXX in the Appendix".** A1. Due to limited space, we indeed have placed the ***expected implementation details*** in the Appendix. Since the camera-ready version will have ***one more page***, we will move these details into the main paper. We also summarize them as below: - *How the model is trained, what is the architecture, what are the configurations?* ``` We implement our PDF-Embedding using PyTorch and distribute its training over 8 A100 GPUs. The ViT-B/16 serves as the backbone and is pre-trained on the ImageNet dataset using DeiT. We resize images to a resolution of 224 × 224 pixels before training. A batch size of 512 is used, and the total number of training epochs is 25 with a cosine-decreasing learning rate. ``` - *How the other methods are compared, do the authors train them, how are they trained or adapted to the current task?* ``` In Table 1, we apply the methods in a zero-shot manner. Specifically, we use these models as feature extractors and calculate the cosine similarity between pairs of image features, except for GPT-4V Turbo. We ask GPT-4V Turbo to reply with one similarity directly. For the five different methods compared in Table 2, we ensure a fair comparison with the proposed method by using the same training schedule, network architecture, and configuration as described above. ``` **Q2. Generalizability to other datasets or diffusion models.** A2. We provide the required experimental results in the table of the attached PDF, along with the associated analysis in the “Common Concerns” section. ***These experimental results validate the generalizability of our model across different diffusion models.*** **Q3. Table 1: Unfair comparison with other models.** A3. We apologize for causing this confusion. 
The purpose of Table 1 is to demonstrate that ***all existing models fail to solve the proposed ICDiff task, highlighting the necessity for a specialized ICDiff model*** rather than to compare our model against them. For details, please refer to the “Common Concerns” section. **Q4. The overlap between Sec 5.2 and Sec 5.3.** A4. We will delete the first part, which contains an inappropriate/unfair comparison, in Section 5.3. **Q5. The quality of text matching using Sentence-Bert.** A5. ***We are aware that relying solely on linguistic matching can lead to incorrect matching; however, this does not affect the quality of our D-Rep dataset.*** This is because the text matching is used as a ***pre-filtering*** step, and we engage ***professional labelers*** after identifying candidate pairs. Specifically, as shown in Line 113 ~ Line 121, these labelers assess the pairs from a visual perspective; for example, they distinguish between large shipping containers and small plastic kitchen containers, assigning a replication level of $0$. As Fig. 3 illustrates, only $15.92\\%$ of the dataset consists of image pairs at Level $0$, *indicating the number of mismatched pairs is not large*. ***In conclusion, this approach ensures that mismatching pairs also provide supervision signals (as negative samples) to train models instead of compromising dataset quality.*** **Q6. The definition of copy or replication.** A6. The definition of copy/replication in this paper follows [1]: ***“We say that a generated image has replicated content if it contains an object (either in the foreground or background) that appears identically in a training image, neglecting minor variations in appearance that could result from data augmentation.”*** According to [1], this definition “focuses on object-level similarity because it is likely to be the subject of intellectual property disputes”. After clarifying this, we provide a point-to-point reply to your questions. 
- *The generated Mona Lisa is not a copy, and thus this kind of detection is not helpful for downstream tasks.* We respectfully disagree with that. First, it is important to note that ***this paper aims to detect replication itself, providing a basis for subsequent copyright checks rather than directly detecting copyright infringement***. Furthermore, ***such cases also cause plausible infringement problems; therefore, detecting them assists in copyright protection.*** Specifically, although the Mona Lisa is in the public domain and not subject to copyright, diffusion models are equally capable of generating other famous works, such as “*The Persistence of Memory*”, which remains under copyright by the Dalí Foundation. This is illustrated in the figure of the attached PDF. Utilizing these generated images for profit would constitute a copyright infringement against the Foundation. ***Therefore, it is reasonable to regard such cases as copies.*** - *Limiting the prompt solves the copy directly.* We respectfully disagree with that. Because: (1) Many open-source diffusion models, like Stable Diffusion, do not have such limits. Therefore, ***users can easily generate copies of copyrighted images.*** (2) ***Although commercial models limit the prompt text, as shown by [2], this approach does not completely prevent object-level copying***. For instance, Fig. 1 in [2] shows copyrighted Superman logo can be generated by ChatGPT without mentioning Superman directly. - *Detecting the case without explicitly being mentioned in a prompt.* ***This case will also be solved by our method*** because we focus on ***matching visual features*** rather than ***prompting***. For instance, whether the prompt is ‘Generate a yellow, cartoon-style mouse’ or ‘Generate a Pikachu,’ if the generated image resembles Pikachu, our algorithm can match it with a copyrighted Pikachu image. [1] Diffusion art or digital forgery? investigating data replication in diffusion models. *CVPR, 2023*. 
[2] On Copyright Risks of Text-to-Image Diffusion Models. *Arxiv, 2023*. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks to the authors for providing such a detailed rebuttal. Most of my concerns are well addressed. However, I highly suggest the authors largely polish the writing and structure of the current manuscript in the revision. Page limits exist for most publications. Yet a paper should be complete by itself and allow the readers to understand the necessary preliminaries, methods, and experiments by reading only the main paper. I found the current manuscript has not reached this standard yet. Nevertheless, this is a good work with valid motivation. I would like to accept this paper for the task it studies. Therefore, I raised my ratings after the rebuttal. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your positive rating and will definitely move the required information to the camera-ready version (which has one more page) and polish our manuscript accordingly.
Summary: This paper constructs a Diffusion-Replication dataset aimed at solving the image copy detection problem for diffusion models. The paper proposes a strong baseline named PDF-Embedding, which transforms the replication level into a probability density function used as the supervision signal. Extensive experimental results and analysis demonstrate the efficiency of the proposed method. Strengths: 1. The topic is timely and interesting. The presentation is clear and easy to follow. 2. This paper creates a valuable dataset, which is important for identifying the replication caused by diffusion models. 3. This paper gives a reasonable theoretical explanation for the proposed method. The analysis of the proposed PDFs is convincing for the three primary functions: Gaussian, linear, and exponential. 4. The experiments are intensive and reasonable results are achieved. Weaknesses: 1. As shown in Figure 5 and Figure 15, the authors use different values of A for each function, while there is no explanation of either the selection rule or the influence of A. 2. As shown in Table 2, the training time, inference time, and matching time are not better than those of other methods. 3. This paper creates a valuable dataset, but details of the dataset, such as its label distribution, are missing, which is important in practical applications. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. The authors use different values of A for Eqs. 3-5, but there is no explanation of either the selection rule or the influence of A. 2. Details of the dataset, such as its label distribution, are missing. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the authors addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank you for your positive feedback and helpful suggestions. We address your questions below.* **Q1. The selection rule and influence of $A$.** A1. We thank you for this insightful question. We provide the selection rule and influence of $A$ here. **Selection rule.** ***We select $A$ so that $g \left( x \right )$ remains a valid PDF.*** Because the random variables are discrete in practice, $A$ cannot be selected arbitrarily, and its value must lie within a certain range. The example below demonstrates how to calculate the range of $A$: For the *Gaussian* function, according to Eq. 16: \begin{equation} \begin{gathered}\sum_{x \in\{0,0.2,0.4,0.6,0.8,1\}}\left(A \cdot \exp \left(-\frac{\left(x-p^l\right)^2}{2 \cdot \sigma^2}\right)\right)=1, \\\\ A \cdot \exp \left(-\frac{\left(x-p^l\right)^2}{2 \cdot \sigma^2}\right) \geq 0, \end{gathered} \end{equation} we have: \begin{equation} 6 \cdot A \geq \sum_{x\in \{ 0,0.2,0.4,0.6,0.8,1\}} \left( A\cdot \exp \left( -\frac{\left( x-p^{l} \right)^{2}}{2\cdot \sigma^{2}} \right) \right) =1. \end{equation} Therefore, we have $A \geq \frac{1}{6}$. As shown in Fig. 15, experimentally, we start with $A=0.167$. Similarly, we can calculate the range of $A$ for the *Linear* and *Exponential* functions. **Influence.** As shown in Fig. 15, for the *Gaussian* and *Linear* functions, a larger $A$ implies steeper supervision, while for the *Exponential* function, a larger $A$ implies smoother supervision. According to our intuition that "the probability of neighboring replication levels should be continuous and smooth," the parameter $A$ controls the expected smoothness of the learned distribution. ***We thank you again for this insightful question, which helps make our method more theoretically sound. We will incorporate these points into our revised paper.*** **Q2. The efficiency of training and testing is not better than that of other methods.** A2. Thanks for this question. 
Since we use a set of vectors to calculate similarity instead of a single one, it is reasonable to expect some computational overhead. However, in our paper, we argue that ***the computational overhead introduced by our method is negligible***. Specifically: (1) during training, our method is only $5.8\\%$ slower than the baseline; (2) during inference, our method is only $2.8\\%$ slower than the baseline; (3) the magnitude of matching, which is $10^{-9}$, is negligible compared to the magnitude of inference, which is $10^{-3}$. Furthermore, as shown in Lines 262 to 266, we find that in practice: *Our PDF-Embedding requires only $2.07 \times 10^{-3}$ seconds for inference and an additional $8.36 \times 10^{-2}$ seconds for matching when comparing a generated image against a reference dataset of 12 million images using a standard A100 GPU. This time overhead is negligible compared to the time required for generation (several seconds).* In conclusion, ***given the significantly enhanced performance, the introduction of such a minimal computational overhead is worthwhile in practice.*** **Q3. The details of the proposed dataset, such as the label distribution.** A3. Thanks. We have provided the label distribution of the proposed dataset on the left side of Fig. 3 in the main paper. To make this clearer, we will highlight it in the main text. Additionally, we provide more labeling details in the response to **Q2** of *Reviewer 2Ky2*, which enriches the details of the proposed dataset. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I have checked the feedback from my fellow reviewers as well as the corresponding rebuttals. The concerns appear to have been resolved satisfactorily. My own concerns have also been addressed effectively. Therefore, I raise my score to 7 and my confidence to 5. Thanks for the authors' efforts. 
--- Reply to Comment 1.1.1: Comment: We are glad that our efforts have satisfactorily addressed the concerns raised and thank you very much for your thorough review.
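The normalization argument for $A$ in A1 of this thread can be reproduced numerically. In this sketch (function and variable names are our own, purely illustrative), $\sigma$ is fixed and the constraint $\sum_x A \cdot \exp\big(-(x-p^l)^2/(2\sigma^2)\big) = 1$ is solved for $A$, which recovers the bound $A \geq \frac{1}{6}$:

```python
import numpy as np

LEVELS = np.linspace(0.0, 1.0, 6)  # x in {0, 0.2, 0.4, 0.6, 0.8, 1}

def gaussian_supervision(p_l, sigma):
    """Discrete Gaussian PDF over the six replication levels.

    p_l is the annotated level mapped into [0, 1]; A is fixed by the
    normalisation constraint sum_x A * exp(-(x - p_l)^2 / (2 sigma^2)) = 1.
    """
    weights = np.exp(-(LEVELS - p_l) ** 2 / (2.0 * sigma ** 2))
    A = 1.0 / weights.sum()  # normalisation constant
    return A, A * weights

A, pdf = gaussian_supervision(p_l=0.6, sigma=0.3)
assert abs(pdf.sum() - 1.0) < 1e-9  # a valid PDF
assert A >= 1.0 / 6.0               # each term is at most A, so 6A >= 1
```

Since each exponential term is at most 1, the six weights sum to at most 6, and the normalized amplitude can never fall below $\frac{1}{6}$, matching the rebuttal's derivation.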
Summary: This paper introduces a novel method named ICDiff for detecting whether images generated by diffusion models replicate the training set. The authors have constructed a new dataset called D-Rep and proposed a new embedding method called PDF-Embedding. This approach transforms the replication level of image pairs into a probability density function, using this as a supervisory signal to train the model. The experimental results demonstrate that ICDiff outperforms many existing image copy detection methods. Strengths: 1. Innovation: ICDiff is the first image copy detection method specifically aimed at replicas generated by diffusion models, filling a gap in current research. 2. Dataset Construction: The creation of the D-Rep dataset provides a valuable resource for the study of image copy detection, with its replication level annotations offering clear guidance for model training and evaluation. 3. Methodology: The PDF-Embedding method, which utilizes a probability density function as the supervisory signal, is both innovative and effective. Weaknesses: See questions. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Intuitively, we would use a single set of vectors to characterize the similarity between images, with larger dot product values indicating higher similarity and smaller ones indicating the opposite. This paper, however, uses six sets of vectors to measure six levels of similarity, which seems counterintuitive. Specifically, when two images are almost identical, the set of vectors indicating their similarity as zero needs to be quite different; when two images are completely unrelated, the set of vectors indicating their similarity as zero needs to be as consistent as possible. I am curious about the rationale behind this design. 2. Although ICDiff performs well on the D-Rep dataset, its generalizability to other datasets or different types of diffusion models has not been verified. 
In particular, I am concerned whether other methods compared with ICDiff (such as SSCD) have been trained on the D-Rep training set? If so, how was the training conducted; if not, is this comparison on the D-Rep test set somewhat unfair? 3. Regarding the design of the Relative Deviation (RD). The authors point out in Appendix B that when $s^l=3,s^p=0$, the penalty should be greater than when $s^l=5,s^p=2$; my question is, when $s^l=5,s^p=0$, should the penalty be greater than when $s^l=3,s^p=0$ (although both can't be more wrong, the former seems to be more egregiously wrong)? 4. How do the authors utilize the trained ICDiff to assess the replication ratio of diffusion models? The paper seems to only mention conclusions (such as 10.91% or 20.21%) without explaining how these figures are derived. Specifically, ICDiff provides six replication levels for each set of images. When assessing the replication ratio of diffusion models, what criteria or which level does the author use as a threshold? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank you for your positive feedback and helpful suggestions. We address your questions below.* **Q1. The rationale behind using six vectors.** A1. We appreciate this insightful question. As you say, when our PDF-Embedding approach is considered ***level by level***, it seems counterintuitive: typically, we use **larger** dot product values to indicate higher similarity. In contrast, in the context of PDF-Embedding, when two images are more similar, the dot product of a certain set of vectors, which indicates their similarity as zero, needs to be **smaller**. However, as the name of our method, Probability-Density-Function, suggests, our method is ***continuous*** under ideal conditions and should not be interpreted through local regions alone. From the perspective of continuity, the rationale or intuition is that the probabilities of neighboring replication levels should be continuous and smooth. For instance, if an image-replica pair is annotated as level-3 replication, the probabilities for level-2 and level-4 replications should not be significantly low either. After training, the largest-scored entry indicates the predicted replication level. This intuition is experimentally verified by comparing our method against two standard supervision signals, namely “One-hot Label” and “Label Smoothing,” as shown in Table 2. Furthermore, even if we only focus locally, learning such features by optimizing the neural network will **not** cause contradictions. The neural network is capable of capturing the underlying distribution and continuity, ensuring that the learned embeddings reflect the smooth transition between different levels of similarity.
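The continuity intuition in A1 can be illustrated with a toy construction — a hedged stand-in, not the paper's actual PDF parameterization (`pdf_label`, `tau`, and the softmax-of-negative-distance form are all assumptions here):

```python
import math

def pdf_label(level, num_levels=6, tau=1.0):
    """Smooth supervisory signal: probability mass peaks at the
    annotated level and decays over neighboring levels.
    (Hypothetical softmax-of-negative-distance form; the paper's
    exact PDF construction may differ.)"""
    logits = [-abs(level - k) / tau for k in range(num_levels)]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Level-3 annotation: levels 2 and 4 keep non-negligible mass,
# and the largest-scored entry still recovers the discrete level.
target = pdf_label(3)
predicted_level = max(range(6), key=lambda k: target[k])
```

Training against such a target rewards predictions that keep neighboring levels plausible, while the argmax over the six entries still yields a discrete replication level at inference, as described in A1.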
**Q2. Generalizability to other datasets or diffusion models.** A2. We provide the required experimental results in the table of the attached PDF, along with the associated analysis in the “Common Concerns” section. ***These experimental results validate the generalizability of our model across different diffusion models.*** **Q3. Table 1: Somewhat unfair comparison with other models.** A3. We apologize for causing this confusion. The purpose of Table 1 is to demonstrate that ***all existing models fail to solve the proposed ICDiff task, highlighting the necessity for a specialized ICDiff model*** rather than to compare our model against them. For details, please refer to the “Common Concerns” section. **Q4. The design of Relative Deviation (RD): it does not reflect more egregiously wrong in some cases.** A4. We appreciate your deep understanding of our designed Relative Deviation metric. Indeed, the Absolute Deviation (Eq. 12) indicates that when $s^l = 5$ and $s^p = 0$, the penalty is greater than when $s^l = 3$ and $s^p = 0$. Although we notice it is difficult to design an evaluation metric suitable for all cases, both Relative Deviation and Absolute Deviation fulfill the primary purpose of proposing the second evaluation metric. Specifically, as mentioned in Lines 132 to 134: *A limitation of the PCC is its insensitivity to global shifts. If all the predictions differ from their corresponding ground truth with the same shift, the PCC does not reflect such a shift and remains large. To overcome this limitation, we propose a new metric called the Relative Deviation (RD).* Both Relative Deviation and Absolute Deviation can reflect global shifts: ***for the same label, a larger distance from the label results in a higher deviation/penalty score***. 
Furthermore, if we required the penalty when $s^l = 5$ and $s^p = 0$ to be greater than when $s^l = 3$ and $s^p = 0$, ***we could not normalize the penalty to the range $0 \sim 1 $ for each pair, which may lead to overly optimistic results.*** This is because, assuming the maximum penalty is $1$, we have \begin{equation} 1=penalty\left( s^{l}=5,s^{p}=0 \right) >penalty\left( s^{l}=3,s^{p}=0 \right). \end{equation} In conclusion, while we acknowledge that our Relative Deviation may not reflect the more egregious error in some cases, it empirically aligns better with human intuition. Additionally, we will continue to seek better evaluation metrics. **Q5. How to assess the replication ratio of diffusion models.** A5. Thank you for your reminder, and we apologize for missing this important detail. When assessing the replication ratio of diffusion models, we consider image pairs rated at Level 4 and Level 5 to be replications. We will add this to the revision. --- Rebuttal 2: Comment: Thank you for your detailed response. You summarized my questions as Q1-Q5, which I find to be appropriate. Regarding Q2, Q3, and Q5, I have no additional queries. For Q1, I would like to delve deeper into a hypothetical scenario. Imagine figures A and B are identical (or almost identical), and we have a vector derived from A and another from B following identical procedures. In such a case, these vectors should have a minimal dot product. How does the model address this requirement? For Q4, I would appreciate further clarification on the statement, "we cannot normalize the penalty to the range $0 \sim 1$ for each pair, which may lead to overly optimistic results." Why will we have overly optimistic results? Thank you again and look forward to your further response. --- Rebuttal Comment 2.1: Title: Thanks and further clarification Comment: Thank you for your prompt response and for participating in the rebuttal discussion.
We are happy to have addressed 3 out of 5 questions and would like to further discuss your follow-up concerns. **Q1 Further. Imagine figures A and B are identical (or almost identical), and we have a vector derived from A and another from B following identical procedures. In such a case, these vectors should have a minimal dot product. How does the model address this requirement?** A1 Further. Thank you for this insightful question. If images A and B are almost identical, their vectors at Level 0 are indeed very similar, resulting in a high inner product. However, the inner product at Level 5 is even larger. Therefore, because our method relies on the relative scale relationship — using the highest-scoring entry to indicate the predicted replication level — it remains effective. For instance, the second subfigure in the first row of Fig. 16 shows two almost identical images, where the original dot products for Levels 0 and 5 are 0.864 and 0.998, respectively. Our method successfully predicts this pair as Level 5. **Q4 Further. Why will we have overly optimistic results?** A4 Further. Sorry for the confusion, and we clarify this as follows: Our statement was to highlight that the Relative Deviation is more intuitive because it normalizes the deviation: assigning 1 deviation score to the worst case and 0 deviation score to the best case. In contrast, Absolute Deviation does not provide this intuitive scale. For example, Absolute Deviation for the case where $s^l = 3$ and $s^p = 0$ is 0.6. Although the case is the worst, one may be confused by the 0.6 deviation score and misunderstand that the prediction has a certain degree of correctness. --- Rebuttal 3: Comment: I appreciate the thorough responses to my questions, the revisions made to the paper, and the additional experiments detailed in the "Common Concerns" section. As a result, I have adjusted my evaluation positively. 
From my current understanding, to better capture the properties of an image, multiple vectors are required instead of one. Within this set, some vectors carry a significant amount of the image's information, while others carry less. When assessing the similarity between two images, as their differences increase, the dot product of all the corresponding vector pairs will decrease, but those of vectors that carry more substantial information about the images will decrease faster. Thus, the relative scale relationship could change. Is my understanding accurate? --- Rebuttal Comment 3.1: Comment: We sincerely thank you for your positive evaluation. Your interpretation of the "6-vector rationale" is insightful and has given us great inspiration: it prompts deeper thinking about the underlying mechanism and offers a very plausible explanation. We will also continue our study, and if we find anything new, we will let you know. Thanks again.
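For readers following the Q4 exchange above, the two deviation metrics can be reconstructed from the quoted numbers — a hedged sketch, not the paper's exact formulas (Eq. 12 and the RD definition live in the paper); the per-label normalization below is inferred from AD(3, 0) = 0.6 and the claim that RD scores every label's worst case as 1:

```python
def absolute_deviation(s_label, s_pred, max_level=5):
    """Absolute Deviation: gap normalized by the global maximum level."""
    return abs(s_label - s_pred) / max_level

def relative_deviation(s_label, s_pred, max_level=5):
    """Relative Deviation (inferred form): gap normalized by the worst
    possible gap for this particular label, so each label's worst case
    scores exactly 1."""
    worst = max(s_label, max_level - s_label)
    return abs(s_label - s_pred) / worst

# Matches the thread: AD(3, 0) = 0.6 even though it is the worst case
# for label 3, while RD(3, 0) = 1.0 and RD(5, 2) = 0.6, so RD penalizes
# (3, 0) more than (5, 2) as Appendix B requires.
```

Under this reading, RD trades the "more egregiously wrong" ordering between (5, 0) and (3, 0) for a per-pair 0-to-1 scale, which is exactly the tension the reviewer and authors discuss.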
Summary: This paper proposes a new image copy detection model for diffusion models. A dataset of replication labels (0 to 5) is collected using human labelers and is used to train a model for replication grading. The proposed model estimates a pdf of replication labels and minimizes the error with a pdf representation of class labels. The problem is timely, and the proposed solution is novel. Strengths: A new model and framework for image copy detection (ICD) is proposed in the paper. The paper is well-written, and the results indicate a significant step forward compared to the mentioned literature. Weaknesses: * The use of continuous labels 0-1 will be more useful in practice. * Some details of the labeling procedure are missing: how many labels per image, variance for each labeler, and across labelers. * The same dataset D-Rep is used for evaluation. Technical Quality: 4 Clarity: 3 Questions for Authors: * I don't see why the authors did not choose continuous labels instead of the 6-categories approach. Instead of using argmax in eq(10), a weighted average score that is normalized between 0-1 will be more appropriate for practical applications. * Some details of the labeling process are missing: how many images did each labeler label? What is the variance between the labelers for a specific image? * Evaluation can be more robust using a separate dataset with different prompt/image generation procedures. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We sincerely thank you for your positive feedback and helpful suggestions. We address your questions below.* **Q1. The use of continuous labels 0-1 will be more useful in practice.** A1. Thanks. According to your suggestion, we implement this idea and find that it improves the original performance while allowing for continuous predictions. For instance, the PCCs for three variants of our PDF-Embedding change from ($53.7\\%$, $54.0\\%$, and $56.3\\%$) to ($55.1\\%$, $55.9\\%$, and $57.7\\%$), respectively. We will also include this more useful implementation when we release the code. **Q2. Details of the labeling procedure are missing.** A2. Thanks. Currently, we have 10 professional labelers and 40,000 image pairs. Initially, we assign 4,000 image pairs to each labeler. If labelers are confident in their judgment of an image pair, they will directly assign a label. Otherwise, they will place the image pair in an undecided pool. On average, each labeler has about 600 undecided pairs. Finally, for each undecided pair, we vote to reach a final decision. For example, if the votes for an undecided pair are 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, the final label assigned is 3. As a result, we do not calculate the variance for each image, and we believe this process ensures high-quality labeling. **Q3. Evaluation with a separate dataset.** A3. We appreciate this constructive suggestion and provide the required experimental results in the table of the attached PDF, along with the associated analysis in the “Common Concerns” section. ***These experimental results validate the generalizability of our model across different diffusion models.*** --- Rebuttal 2: Comment: Thanks for your reply and glad to see that my suggestion improved the original performance. It would be helpful to include the details you mentioned in Q2 in the paper. 
--- Rebuttal Comment 2.1: Comment: We sincerely thank you again for your suggestion and will include the labeling procedure described in A2 in the camera-ready version.
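The two procedures settled in this thread — the continuous weighted-average score suggested in Q1 and the majority vote for undecided pairs described in A2 — can be sketched as hypothetical helpers (not the authors' released code; `probs` is assumed to be a length-6 distribution over replication levels that sums to 1):

```python
from statistics import mode

def continuous_score(probs):
    """Reviewer's Q1 suggestion: replace argmax over the six level
    scores with a weighted average, normalized to [0, 1]."""
    return sum(k * p for k, p in enumerate(probs)) / (len(probs) - 1)

def vote_label(votes):
    """A2 voting rule for undecided pairs: the most frequent label
    wins (on Python 3.8+, ties go to the smallest label after sorting)."""
    return mode(sorted(votes))

# The example from A2: votes 2,2,2,3,3,3,3,3,4,4 resolve to label 3.
final = vote_label([2, 2, 2, 3, 3, 3, 3, 3, 4, 4])  # → 3
```

A sharp level-3 prediction, for instance, maps to a continuous score of 3/5 = 0.6, which matches the 0-to-1 range the reviewer asked for.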
Rebuttal 1: Rebuttal: **Thanks and Responses to Common Concerns** We sincerely thank the ACs and reviewers for their dedicated efforts in reviewing our paper. We also thank all reviewers for their positive, thoughtful, and helpful feedback, and we will add all these suggestions to the final version of our paper. We are encouraged that they found our work to be “novel” (`Reviewer 2Ky2`, `Reviewer dJ32`, and `Reviewer vdEZ`), “important” (`Reviewer 2jDa` and `Reviewer vdEZ`), “timely” (`Reviewer 2Ky2` and `Reviewer 2jDa`), “valuable” (`Reviewer dJ32` and `Reviewer 2jDa`), and “interesting” (`Reviewer 2jDa`). We are also glad that `Reviewer 2jDa` recognized our work as “This paper gives a reasonable theoretical explanation for the proposed method” and “The experiments are intensive and reasonable results are achieved.” Finally, we sincerely appreciate that `Reviewer 2Ky2`, `Reviewer 2jDa`, and `Reviewer vdEZ` think our paper is “well-written”/“clear”/“easy to follow”. We have thoroughly addressed all the concerns raised by the reviewers in the separate responses below. Here, we want to address two common concerns regarding **generalizability to other datasets or diffusion models** (`Reviewer 2Ky2`, `Reviewer dJ32`, and `Reviewer vdEZ`) and **unfair comparison** (`Reviewer dJ32` and `Reviewer vdEZ`). - **Generalizability to other datasets or diffusion models** Thanks for this constructive suggestion. According to the suggestions of `Reviewer 2Ky2`, `Reviewer dJ32`, and `Reviewer vdEZ`, we provide the **quantitative** experimental results on 6 unknown diffusion models (in addition to the qualitative results in Fig. 6 in the manuscript) to validate our generalizability further. The experimental results in the **table** of **the attached PDF file** show that our model has good generalizability compared to all other methods: 1. Our PDF-Embedding is more generalizable compared to all *zero-shot* solutions, such as CLIP, GPT4-V, and DINOv2. 2.
Our PDF-Embedding still surpasses all other plausible methods *trained* on the D-Rep dataset in the generalization setting. 3. Compared against testing on SD1.5 (same domain), for the proposed PDF-Embedding, there is *no* significant performance drop in the generalization setting. **The quantitative evaluation protocol in the attached table:** Because the collection process of the images from some diffusion models (see Appendix H) differs from the process used to build the test set of our D-Rep dataset, it is difficult to label 6 levels in a short time, and the proposed PCC and RD are not suitable. In the attached table, we consider a quantitative evaluation protocol that measures the average similarity predicted by a model for given $N$ image pairs, which are manually labeled with the highest level. When normalized to a range of 0 to 1, a larger value implies better predictions. This setting is practical because, in the real world, most people’s concerns focus on where replication indeed occurs. Due to time constraints, our team and labelers manually confirmed 100 such pairs for each diffusion model. *Note that, to ensure fairness, the pre-filtering of these pairs was based on an internal model which is not related to any of the models compared*. - **Unfair comparison** Thanks for this kind reminder. ***We did not intend to use Table 1 for direct comparison***. The purpose of Table 1 was to investigate the characteristics of our new dataset, D-Rep, by benchmarking it against popular methods. The results show that D-Rep is ***very challenging*** and that popular vision-language models (1st row), self-supervised learning models (2nd row), supervised pre-trained models (3rd row), and current ICD models (4th row) ***fail to solve the new task***. The reason is clear: these models are not specifically trained for it. Given this context, our method offers a reasonable solution.
Furthermore, Table 2 validates the effectiveness of our method by ***fairly comparing it against five different methods trained on our D-Rep training set***. Additionally, the table in the attached PDF provides a relatively fair comparison to these pre-trained popular methods to further validate the superiority of our method by ***testing these pre-trained popular methods and our method directly on the image pairs generated by unseen diffusion models***. We will revise the caption and description in the manuscript to clarify that Table 1 is not intended for comparison. Please let us know if you have any additional questions or concerns. We are happy to provide clarification. Authors of Submission #367 Pdf: /pdf/d5942a245dc0487f3b0aabc84459c8ef1a4fc8ad.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Diversify, Contextualize, and Adapt: Efficient Entropy Modeling for Neural Image Codec
Accept (poster)
Summary: The authors proposed a novel method to improve the entropy encoding in deep image compression. The key idea is to introduce three contexts, local, region, and global, to capture contextual information at different scales while balancing the compute cost. The authors show that the proposed method achieves a SOTA compression rate while being computationally cheap. Strengths: * While introducing global context into entropy models has been explored in previous work, how to efficiently capture this contextual information with minimal compute cost is still an open challenge. This work proposed a novel solution to fill the gap. * Experiments on real-world high resolution images show significant improvements over baseline methods. Weaknesses: * There are quite a few deep image compression methods proposed recently. The authors only compared 4 baseline methods. It would be necessary to compare more SOTA methods. Technical Quality: 3 Clarity: 3 Questions for Authors: * Is the proposed model mobile-friendly? Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Additional SOTA Methods** Following your advice, we compared our method with six additional SOTA methods. Two methods are employed in Tab. 1 of the uploaded PDF. For the experiment, we use the same structure of transforms (i.e., ELIC-sm) and different entropy models for fair comparison. The results clearly show DCA's superiority in terms of efficiency. Furthermore, four methods are compared in Fig. 2 of the uploaded PDF. We believe this will improve the completeness of our paper. Thank you. **[Q1] Mobile-friendly** There can be two orthogonal research directions for efficient entropy modeling: 1) developing an efficient model via a novel algorithm and 2) making existing entropy models more efficient by conducting QAT or PTQ as in the recent work [1]. We focused on the former direction, but the latter direction can be further studied in the future. [1] "MobileNVC: Real-time 1080p neural video compression on a mobile device," in WACV 2024.
Summary: This paper proposes an entropy model that achieves higher compression rates than previously proposed models in the context of learned image compression. This model combines global, regional, and local information to make better (i.e., more accurate, and therefore requiring fewer bits) predictions with respect to the latent values (quantized) that the autoencoder produces. The authors discuss the method in detail and provide a visualization. The results show an improvement over the existing methods. Strengths: - This paper ran well designed experiments - The authors do discuss runtime, which is great. I wish more papers did this. - If I am reading the numbers correctly, the entropy model seems to provide a pretty good speedup over the existing models. Weaknesses: - This paper does not seem to be too theoretical, but it's clearly results-focused. I wonder whether a more appropriate venue would afford this a) more visibility to the correct (i.e., compression) audience; and b) be much more well received. I don't really think that this will be of broad enough interest at NeurIPS. - When discussing runtime performance, it would be great if the authors could break it down by sub-system. For example, it's useful to know how long the autoencoder takes to encode/decode the image. It would be great to know how much time the entropy model takes to predict all values needed to encode/decode. It would also be interesting (this is highly optional, but would make this paper much more interesting from a deployment standpoint) to discuss the runtime when ANS is also interleaved in the decoding/encoding process of the image. - The proposed entropy model shows some slight improvement, but it's nothing exciting. I was hoping to see a 30% improvement given the added complexity, but we got a much smaller number than that.
The biggest issue: - Since this paper is focused on entropy modeling, which I believe is a great topic, I would have thought that a primary concern would have been reproducibility. Entropy models are famously flaky, meaning that it's quite possible that an image encoded on a V100 GPU may not be decoded by any other device (including CPUs). As such, I would have liked the authors to go into detail on what mitigations were put in place to ensure that their model can be used on a variety of hardware. I truly believe that this is the difference between a paper that slightly improves theoretical compression rates, and one which makes it possible to apply in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: - Do you plan to open source your code? I am asking this because (a) it would allow researchers to use it (as it's clearly a useful component of any learned image compression pipeline); and (b) determine how well it works in practice when submitting to competitions such as CLIC, which will almost entirely guarantee that the decoding will happen on different hardware than the encoding. - I appreciate the experiments with the ordering of contexts, but I am very puzzled by the final order. Regional -> Global -> Local sounds quite counterintuitive. I wonder why Global is not actually first, as it would "seem" like it should be the most general, and a decrease in granularity with respect to ordering would make sense. - Since the goal of this paper was to achieve the absolute best RD, have you considered backpropping into the latents once an encoding has been found (after a forward pass) in order to see whether jointly optimizing the RD loss for the particular image would yield even larger improvements? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: They seem addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Venue** It is well known that entropy models in neural image codecs are highly related to generative models. We sincerely believe that our work will be helpful not only to compression experts but also to many researchers at the NeurIPS conference. **[W2] Runtime Analysis** Following your advice, we added an in-depth runtime analysis in Tab. 2 of the uploaded PDF. You can see runtime by sub-system and total encoding/decoding time including the ANS entropy coding. **[W3] Performance Improvements** To explain the significance of the performance improvements, we clarify our goal $-$ to develop an **efficient** entropy model. As already mentioned by you, performance improvements can be easily achieved at the expense of massively more compute and memory usage. However, the image compression task is highly required to be efficient, so it is very important that an entropy model works efficiently. Therefore, we focused on studying how to model entropy efficiently. In this respect, we would like to say that the 3.73% BD-rate gain over the previous SOTA method is significant because it is achieved with only a small amount of additional computation and memory. For a more in-depth analysis, we compared our method with two additional SOTA methods (Tab. 1 of the uploaded PDF). You can confirm that our method improves performance most efficiently $-$ DCA achieves the best improvement over the baseline (-3.73% BD-rate gain) with the most efficiency in terms of the decoding time and the number of parameters. **[W4] Reproducibility** Although we deeply agree that cross-platform robustness is critical for real-world applications, we kindly argue that the topic is beyond the scope of this paper, which focuses on efficient entropy modeling. It would be one of the interesting future works. Thanks for your thoughtful advice. **[Q1] Source Code** We completely agree with you and plan to release the source code for the advancement of the NeurIPS community.
**[Q2] Ordering of Contexts** According to a study on the theoretical understanding of masked autoencoders via hierarchical latent variable models, the semantic level of the learned representation varies with the masking ratio [1]. Specifically, extremely large or small masking ratios lead to low-level detailed information such as texture, while non-extreme masking ratios result in high-level semantic information. Inspired by this, we can infer that the local and global hyper latent representations correspond to relatively low-level information because they are extracted via limited utilization of the latent representation. The receptive field of the local hyper latent representation is limited by 1x1 convolutional layers. While the receptive field of the global hyper latent representation is the whole image area, its attention mechanism selectively uses the latent representation. Through the same reasoning, we would like to argue that the regional hyper latent representation corresponds to relatively high-level information. It sounds reasonable that modeling higher-level information first and lower-level information later is effective, because higher-level information can be helpful to lower-level information, while the opposite seems to be relatively difficult. The proposed modeling order is also intended to take that approach. We set the order to be regional (higher-level information), global (lower-level information), and local (lower-level information) contexts in sequence. Here, the order between global and local is empirically determined, which is shown to be not influential in Fig. 10. Thanks for your questions, and we will add the above discussion. [1] Kong et al., "Understanding masked autoencoders via hierarchical latent variable models," in CVPR 2023. **[Q3] Further Optimization** Our goal is to achieve not only RD performance improvements but also efficient entropy modeling in terms of computation and memory usage.
So we have not tried further test-time optimization for the best RD performance, but we believe it would be an interesting future topic.
Summary: **Disclaimer:** My research direction is not related to image compression. I will try my best to review this work but could be biased. The work studies neural image codecs. The work proposes an entropy modeling framework that uses contexts for forward adaptation without compromising the bit rate. The authors build upon the quadtree partition-based backward adaptation method and introduce additional forward context to address limitations in existing approaches, particularly in the first modeling step. Strengths: 1. The work conducts an in-depth analysis of the different components in the proposed method. 2. The proposed method enjoys very efficient decoding while having a better BD-rate. Weaknesses: - There are some presentation issues, e.g. lines 256-257, where the links to citations are missing. Please see the questions for methodological discussion Technical Quality: 3 Clarity: 3 Questions for Authors: 1. While the proposed method enjoys better performance, it seems the proposed method is largely derived from B (Quadtree) + F (R) (Li et al., 2023), while using a larger network, and therefore having a slightly longer decoding time. 2. I think it will further improve the paper if the authors put some additional qualitative results, say in the appendix, for the readers. Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Presentation Issue** We will revise it. Thank you for commenting. **[Q1] Difference from Baseline** First of all, we would like to say that the proposed entropy model, DCA, addresses the limitation of insufficient context for forward adaptation in the baseline (Li et al., 2023). We would like to emphasize that simply using more contexts for forward adaptation does not guarantee performance improvements because forward adaptation requires additional bit allocation, unlike backward adaptation. For example, Fig. 9 of the previous work [10] shows that using both regional and global contexts is not always helpful. In addition, in our paper, Fig. 7 shows that just using more regional contexts (denoted as "Large R") does not lead to performance improvements compared to the baseline (denoted as "R"). Please note that our proposed method is very strong, consistently leading to significant performance improvements with a simple and intuitive idea. **[Q2] Additional Qualitative Results** We provide additional results in Fig. 1 of the uploaded PDF. We will add it. Thank you for commenting.
Summary: This paper presents a method for learned image compression called DCA, which stands for "Diversify, Contextualize, and Adapt". DCA is a more sophisticated approach for building the entropy model compared to previous methods. The entropy model is a key component in compression models and is optimized to predict the distribution of quantized latents that represent an encoded image. This is crucial for compression since a more accurate entropy model directly leads to higher compression rates when used with an entropy coding algorithm. The core insight of DCA is to use multiple ("diverse") contexts for forward adaptation. More specifically, the authors define a local, regional, and global context corresponding to a single spatial location in the latent representation (local), a spatial patch in the latent tensor (regional), and a compact summary of the full latent (and thus the full image) learned via cross-attention. The authors show that the three contexts are complementary (Fig. 7) and that a serial application (architecture details in Fig. 2) outperforms a merged approach (Fig. 9 and 10). The result is a SOTA model in terms of compression rate with a moderate model size and relatively fast decoding time (see Fig. 5). Strengths: The primary strength of this paper is achieving non-trivial rate savings over very strong existing image codecs: 11.96% average rate savings over VTM, the current best standard codec for lossy image and video compression, and 3.73% rate savings over a strong neural method [12]. This is achieved with a well-motivated, intuitive improvement over previous entropy models along with extensive experimentation and evaluation. Furthermore, the rate savings are achieved with a moderate-size model (37.9M) and with a relatively fast decode speed.
This is important because ultimately neural compression models compete with standard codecs that have extremely fast decode speeds, and because it's already clear that incremental improvements to compression rates can be easily achieved at the expense of massively more compute and memory usage. Finally, the evaluation in the paper is quite thorough in terms of comparisons with appropriate baselines, runtime analysis, and investigation of the importance of the different contexts used, their order, and the architecture (CNNs vs. attention). I also appreciated that the authors showed *where* DCA helped (Fig. 6) by showing the difference in the scale of the normalized latents with and without DCA. Specifically, DCA (with a global context) shows much better prediction for the first step (\bar{y}^1) where there is no opportunity for backward adaptation and the baseline only utilizes regional context. Weaknesses: The primary weakness of the paper is modest gains (at least in the context of NeurIPS) on a version of the learned compression problem that is no longer the focus of the compression subfield. In terms of the rate gains, 3.73% over the previous SOTA method is non-trivial (for reference, papers at conferences focused on improvements to standard codecs often show a gain of less than 0.5%), but it's still incremental progress. It's also not a large enough change to garner more interest from practitioners developing new standard codecs. In terms of the problem itself, there is now more attention on modeling video and boosting subjective (perceptual) quality. Optimization for MSE can still be important since it allows clear, objective evaluation, but then we're back to the first point where I think a larger improvement is now needed to have high impact at NeurIPS. Technical Quality: 4 Clarity: 4 Questions for Authors: No questions Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Adequately addressed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Performance Improvements** To explain the significance of the performance improvements, we clarify our goal $-$ to develop an **efficient** entropy model. As you mentioned, performance improvements can be easily achieved at the expense of massively more compute and memory usage. However, efficiency is essential for the image compression task, so we focused on studying how to model entropy efficiently. In this respect, we would like to say that a 3.73% BD-rate gain over the previous SOTA method is significant because it is achieved with only a small amount of additional computation and memory. For a more in-depth analysis, we compared our method with two additional SOTA methods (Tab. 1 of the uploaded PDF). You can confirm that our method improves performance most efficiently $-$ DCA achieves the best improvement over the baseline (-3.73% BD-rate gain) with the highest efficiency in terms of decoding time and the number of parameters.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. **Strengths.** The proposed entropy model was appreciated for its performance improvement over state-of-the-art baseline methods (all reviewers: hsbA, 295Z, TpTZ, YQWe) and for being well-motivated and intuitive (hsbA). The reviewers recognized that our experiments were well-designed (TpTZ), extensive (hsbA), and quite thorough with in-depth analysis (hsbA, 295Z). Efficient entropy modeling, the problem we address, was recognized as important and necessary (hsbA, TpTZ, YQWe), and we were credited with providing a novel solution (YQWe). **Summary of Rebuttal.** We address reviewers' concerns and questions by providing experiments and clarifications, which are summarized as follows: - **Significant improvements** are explained by an additional comparison with SOTA in terms of BD-rate and efficiency (Tab. 1 of the PDF below). $-$ hsbA, TpTZ - **In-depth runtime analysis** is conducted (Tab. 2 of the uploaded PDF). $-$ TpTZ - **Additional qualitative results** are provided (Fig. 1 of the uploaded PDF). $-$ 295Z - **Additional SOTA methods** are employed for evaluation (Tab. 1 and Fig. 2 of the uploaded PDF). $-$ YQWe - **Differences from baseline** are discussed. $-$ 295Z - **Discussion on the ordering of contexts** is provided. $-$ TpTZ - **Source code** will be released. $-$ TpTZ Details are provided in each rebuttal below. Pdf: /pdf/46c00bbcbe7b9e95714624c05f7bad3585ce2e8a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks
Accept (poster)
Summary: The authors study the performance of ConvResNets trained with weight decay from the perspective of nonparametric classification. Specifically, the authors consider a smooth target function supported on a low-dimensional manifold, then prove that ConvResNets can adapt to the function smoothness and low-dimensional structures and efficiently learn the function without suffering from the curse of dimensionality. Strengths: This is an excellent, novel and strong work. The paper provides theoretical guarantees for ConvResNets in the setting of nonparametric classification. The minimax optimal rate under ConvResNets has been an open problem, which the authors successfully resolve in the conditional class probability setting. Weaknesses: The literature review is not sufficient. Minimax optimal nonparametric classification using DNNs has been recently studied by other authors, though the current paper considers another novel neural network, the ConvResNet, which is different from the traditional DNN. Technical Quality: 4 Clarity: 4 Questions for Authors: Nonparametric classification can be done in different frameworks such as (1) smooth decision boundary, (2) smooth conditional class probability or (3) smooth margin. The minimax optimal rates in the above settings are different. For instance, some authors find that DNN classification in (1) and (2) yields different minimax rates. In particular, the minimax optimal rate under (1) is an open problem (see Kim's work) which was recently addressed by Hu et al. The minimax rate under (2) has been recently established for the ultrahigh-dimensional case for DNN classification by Wang's work. The minimax rates under (1) and (2) may have different expressions. I guess that this paper considers the setting of smooth conditional class probability, and the manifold dimension D is fixed. It would be great to comment on its extension to the ultrahigh-dimensional case as in Wang's work. 
Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper and for your positive feedback. I'm grateful for your engagement and would like to address the question you've raised with the following response. **Weaknesses: Literature review is not sufficient. Minimax optimal nonparametric classification using DNN has been recently studied by other authors, though the current paper considers another novel neural network, the ConvResNets, which is different from the traditional DNN.** Due to space limits, we are only able to briefly mention the existing literature on FNNs/DNNs in Line 23-26. We will add more details of literature in the next version. **Q: Nonparametric classification can be done in different frameworks such as (1) smooth decision boundary, (2) smooth conditional class probability or (3) smooth margin… The minimax rate under (2) has been recently established for ultrahigh-dimensional case for DNN classification by Wang's work… I guess that this paper considers the setting of smooth conditional class probability, and the manifold dimension D is fixed. It would be great to comment on its extension to ultrahigh-dimensional case as in Wang's work.** A: Thank you for the reference [1] and your insightful question! You are correct that our paper focuses on the setting of smooth conditional class probability (2), with a fixed manifold dimension. We will add further discussions about [1] in our next version. The ultrahigh-dimensional case would be an interesting extension of our work! Regarding the specific Wang’s paper you mentioned, we’re currently reviewing [2] and [3] as they appear relevant. While neither paper explicitly defines "ultrahigh-dimension," [3] characterizes it as a scenario where the dimension $D$ grows at a non-polynomial rate relative to the sample size $n$, though it does not establish a minimax convergence rate. Our statistical rate avoids exponential dependence in $D$. 
However, to achieve a convergence rate that logarithmically depends on $D$, additional assumptions on input features are required, such as having only a small fraction of the variables active for the outputs. We would greatly appreciate any further clarification on the reference and the definition of ultrahigh-dimension. We are eager to delve deeper into this topic and discuss it further. Reference: [1] Kim, Yongdai, Ilsang Ohn, and Dongha Kim. "Fast convergence rates of deep neural networks for classification." Neural Networks 138 (2021): 179-197. [2] Wang H, Jin X, Wang J, Hao H. Nonparametric Estimation for High-Dimensional Space Models Based on a Deep Neural Network. Mathematics. 2023; 11(18):3899. [3] Li, K., Wang, F., Yang, L., & Liu, R. (2023). Deep feature screening: Feature selection for ultra high-dimensional data via deep neural networks. Neurocomputing, 538, 126186. --- Rebuttal Comment 1.1: Title: Thanks for responses! Comment: My concerns were well addressed unless some other references in DNN classification are still missing. For instance, I notice the following works are relevant: Minimax optimal deep neural network classifiers under smooth decision boundary Sharp rate of convergence for deep neural network classifiers under the teacher-student setting Deep neural network classifier for multidimensional functional data Minimax optimal high‐dimensional classification using deep neural networks --- Reply to Comment 1.1.1: Comment: Thank you for the references! We will cite these works and add further discussions in our next version.
Summary: The paper studies the performance of an overparametrized convolutional residual network architecture on a nonparametric classification problem trained with weight decay. This model, known as ConvResNeXt, involves N residual blocks, where each residual block has a parallel architecture of M building blocks, and each building block has L layers. As such, it generalizes alternative ConvResNets in the literature and includes them as special cases, e.g., ConvResNet [He et al 2016] when M=1 and aggregated ResNet [Xie et al 2017] when N=1. Specifically, the analysis considers infinitely many building blocks and shows that weight decay training implicitly enforces sparsity of these blocks, considering that the optimal classifier is supported on a low-dimensional smooth manifold. As such, they show that this model can efficiently learn the assumed function class without suffering from the curse of dimensionality. Strengths: 1) The paper shows that for learning Besov functions trained with weight decay out of n samples, the rate of convergence depends only on the intrinsic low dimension, avoiding the curse of dimensionality. 2) They also provide a tighter bound for the weight-decayed ConvResNeXt in computing the critical radius of local Gaussian complexity. Weaknesses: 1) The actual benefit of having parallel paths in the proposed ConvResNeXt model was not addressed fully. Indeed, the results stating that only the product of the number of parallel paths M and depth N determines the convergence rate, and that one may not need parallel blocks when residual connections are present, contradict some of the earlier findings in the literature, e.g., [Xie et al 2017]. In that prior work, several experimental results focus on the positive effect of having higher cardinality (more parallel paths) rather than increased depth and width. 
2) If there is no additional benefit from having additional parallel paths, and equivalent performance can be achieved by increasing the depth only, this will limit the capability of the proposed ConvResNeXt over ConvResNets, unlike the claimed benefits of the former. Technical Quality: 2 Clarity: 3 Questions for Authors: 1) This new architecture introduces a significantly more complex nested function form, but similar performance in both approximation and estimation errors can be achieved by a rather simpler one. The provided bounds on the two errors do not suggest any practical benefit of ConvResNeXts over ConvResNets. If so, what would be the motivation to generalize to the former? 2) In Sec 3.2, it was stated that the number of building blocks M does not influence the estimation error as long as it is large enough. This must be contrasted with the earlier findings about the cardinality of the parallel paths, and any difference must be clearly stated. 3) In Sec 3.2, the tradeoff between width and depth was mentioned to be less important in ResNeXt. In particular, it was stated that the lower bound on the error does not depend on the arrangement of the blocks M and N as long as their product remains the same and large enough. One should further address the reason behind this and compare it with the literature again. Moreover, if that’s so, does this suggest that an equivalent model can achieve equivalent performance without parallel paths? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No, the authors did not adequately address the limitations of their work, and no potential negative societal impact of their work is identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper. I would like to address the concerns you raised in your review. **Weakness: The actual benefit of having parallel paths was not addressed fully… In Xie et al. 2017, several experimental results focus on the positive effect of having higher cardinality (more parallel paths)... this will limit the capability of the proposed ConvResNeXt over the ConvResNets, unlike the claimed benefits of the former.** We would like to emphasize that our paper focuses on the representation and generalization abilities of overparameterized ConvResNeXts, not their optimization. We demonstrate that **the statistical rate and representation power are equivalent** whether increasing the number of blocks (M) or paths (N), as long as MN exceeds a certain threshold (Theorem 3.3). This novel finding from approximation and generalization theory perspectives explains the practical success of overparameterized ConvResNeXts. Practical performance involves factors beyond our theoretical scope, including optimization aspects and techniques like layer normalization, which are barely considered by theorists. Xie et al. 2017 used FLOPs as a complexity measure, while we consider $MN$ values. Notably, their results showing comparable performance for ResNeXts with similar MN values still align with our findings. We acknowledge the potential practical benefits of parallel paths. Our theories suggest this advantage does **not** stem from improved representation or generalization capabilities, but likely from optimization aspects. ResNeXts with more parallel paths may be easier to learn with certain algorithms, but this warrants further investigation. Our approach, decoupling learning and optimization, follows the tradition of Vapnik, Bartlett, and other learning theorists. It allows for a more fine-grained learning theory, complementing optimization-focused studies. 
We develop generalization bounds for regularized ERM, which address overfitting and may help unify optimization and statistical theories in future research. **Q1: This new architecture introduces a significantly more complex nested function form... The provided bounds on the two errors do not suggest any practical benefit of ConvResNeXts over ConvResNets. If so, what would be the motivation to generalize to the former?** A1: Our paper not only generalizes ConvResNets to ConvResNeXts, but more importantly, we address the ***overparameterization*** regime where parameters can significantly outnumber samples. This contrasts with previous work like Liu et al., which imposes unavoidable cardinality constraints on block numbers, limiting their ability to explain the success of overparameterized networks in practice. Analyzing overparameterized ConvResNeXts presents unique theoretical challenges, particularly in ***bounding metric entropy***. We tackle this using an advanced method leveraging Dudley's chaining of metric entropy (via the critical radius / local Gaussian/Rademacher complexity, Bartlett et al., 2005; see Corollary F.5, Line 1100, Page 21). Understanding complex networks is inherently difficult. Our theory takes significant steps towards explaining real-world applications, bridging the gap between theoretical analysis and the practical success of ***overparameterized*** models. While our bounds don't suggest immediate practical benefits of ConvResNeXts over ConvResNets, they provide crucial insights into the behavior of complex, highly parameterized architectures that dominate modern deep learning practice. **Q2: In Sec 3.2, it was stated that the number of building blocks M does not influence the estimation error as long as it is large enough. This must be contrasted with the earlier findings about the cardinality of the parallel paths, and any difference must be clearly stated.** A2: Our findings differ from previous work like Liu et al. 
2021, where network complexity and optimal estimation error required careful bounding of cardinality. In our paper, we don't constrain the number of blocks or paths in ConvResNeXts, thanks to ***weight decay*** training. Weight decay effectively bounds the weights' norms (line 254). As shown in Lemma 4.1, this induces weak sparsity in ConvResNeXt architectures, where many blocks have negligible influence on the network output post-training. Consequently, only a finite number of blocks with significant weight norms contribute meaningfully, allowing us to bound the complexity of overparameterized ConvResNeXts. This approach enables us to achieve optimal estimation error **without** explicit constraints on network structure. We'll elaborate on the benefits of weight decay in our revised version. **Q3: In Sec 3.2, the tradeoff between width and depth was mentioned to be less important in ResNeXt... One should further address the reason behind this and compare it with the literature again. Moreover, if that’s so, does this suggest that an equivalent model can achieve equivalent performance without parallel paths?** A3: Our analysis shows that the arrangements of blocks ($M$) and paths ($N$) don't affect the representation and generalization ability of overparameterized ConvResNeXts, as long as their product ($MN$) remains sufficiently large. Theoretically, this is because: 1. ConvResNeXts with the same MN have equivalent representation power, regardless of M and N arrangements. 2. These arrangements don't significantly influence network complexity or generalization capability. Our findings suggest that an equivalent model without parallel paths could achieve similar ***representation*** and ***generalization*** results. However, practical performance also depends on optimization aspects, which our paper doesn't address directly. Our work provides valuable insights into approximation and generalization that can't be obtained solely from an optimization perspective. 
--- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for the detailed review. As there are only two days left in the discussion period, we wanted to ask if you have any further comments or questions for us to consider. If our rebuttal discussing our result's novelty and contributions addressed your concerns, we would really appreciate it if you would consider raising your score. Best, Submission7950 Authors --- Rebuttal Comment 1.2: Comment: I thank the authors for the detailed response in the rebuttal. After reading all the comments and the paper again, I still have the same concerns regarding the novelty and the contribution of this work. The authors claim that they focus on the representation and generalization abilities of overparameterized ConvResNeXts (which is not sufficiently motivated over the ConvResNets), and not their optimization. However, they state that the potential practical benefits of parallel paths do not stem from improved representation or generalization capabilities, but likely from optimization aspects, which they do not cover. I believe this induces a contradiction to their initial motivation, if not a misinterpretation. Moreover, their theoretical results also reflect the same argument: as long as the product of the number of paths and the depth remains the same, the architecture can be reduced down to ConvResNets (with residual connections but no parallel paths). Although I find the discussion and the results interesting, I think it needs further clarification on the motivation and the actual benefit of the proposed model over the existing one by exploring the optimization aspects of the architecture with parallel paths. As such, I would keep my initial rating. --- Reply to Comment 1.2.1: Comment: Thank you for your reply and letting us know your concerns! We understand your main arguments and would like to explain our motivations here. 
As noted in Section 1.1 and our answer to Q1, our biggest motivation is to provide new theoretical understanding of **overparameterized** networks. Specifically, we establish representation theory and statistical rates for **overparameterized** ResNeXts (or ResNets as a special case) when learning Besov functions on low-dimensional manifolds, rather than merely generalizing ResNets to ResNeXts. In comparison, existing results such as Liu et al. 2021 only work for ResNets with exact parameterization, and fail in overparameterized regimes for either ResNeXts or ResNets. Moreover, we would like to clarify that we are **not** motivated to either propose new architectures or demonstrate the marginal benefits of parallel paths in ResNeXts. Instead, we focus on "overparameterization", and obtain the equivalent representation and generalization power of ResNeXts with the same $MN$ as a **byproduct**. Therefore, our results do not ***"induce a contradiction to initial motivation"***. To demonstrate this equivalence observed in ResNeXts, we ran some CIFAR10 experiments with a number of $M,N$ combinations and got the following results:

| # of Blocks $M$ | # of Paths $N$ | Width | Epochs | CIFAR10 Accuracy |
|-----------------|----------------|-------|--------|------------------|
| 8 | 3 | 64 | 450 | 96.38 |
| 4 | 6 | 64 | 320 | 96.37 |
| 3 | 8 | 64 | 300 | 96.35 (Table 7, Xie et al. 2017) |

Our experiments show that ResNeXts with more blocks, if allowed more training effort, can achieve comparable performance. This also evidences that ResNeXts with fewer parallel paths, though likely harder to optimize, can possess the same representation and generalization power. We hope the reviewer can value our contributions and see the practical implications of our theoretical findings.
Summary: This paper develops deep learning theory for studying convolutional residual neural networks with data lying on an embedded lower-dimensional manifold. Theoretical results on both approximation error and estimation error are provided, and interesting implications of the theory are discussed. Strengths: 1. The paper is well-written and I have enjoyed reading it. 2. The theory is well-developed and clear. 3. Good contributions are made to deep learning theory. Weaknesses: 1. This paper avoids the curse of dimensionality by assuming that the data lie on a low-dimensional manifold embedded in the ambient space. While real-world data typically exhibit manifold structure, they commonly don't lie precisely on a manifold, but instead surround a manifold (i.e. + noise). The theory of this paper highly depends on the exact manifold assumption since it needs the associated partition of unity to split the problem into local charts. 2. Among the remarks/implications made on page 7, it is said that overparameterization is fine. However, to my understanding, the problem of overparameterization is not about an overparameterized model not being expressive enough (which this paper addresses), but really about the difficulty of finding a good enough local minimizer during optimization. In this regard, Theorem 3.3, based on the global minimizer, doesn't really answer it. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The remarks on the tradeoff between width and depth are very interesting. Have you tried this in numerical experiments to see if this theory-driven intuition actually appears in practice? 2. Other than the analysis being more general, what are the theoretical benefits of adding the identity map (i.e. the residual network) and multiple CNN blocks together compared to just using a CNN block? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses and limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper and for your positive feedback. I'm grateful for your engagement and would like to address the question you've raised with the following response. **Weakness 1: The theory of this paper highly depends on the exact manifold assumption since it needs the associated partition of unity to split the problem into local charts.** While our paper focuses on the exact manifold setting for clarity, our data assumptions can be relaxed in two significant ways: 1. Our findings extend to settings where data concentrate on the manifold with orthogonal noise, as suggested by [1]. 2. Further, our results apply to data not necessarily near a manifold, but concentrated around a subset of $\mathbb{R}^D$ with effective Minkowski dimension [2]. In this case, the statistical rate depends only on this effective dimension. These extensions demonstrate the broader applicability of our theoretical framework beyond the exact manifold assumption, encompassing more general data structures found in practical scenarios. Reference: [1] Cloninger, Alexander, and Timo Klock. "A deep network construction that adapts to intrinsic dimensionality beyond the domain." Neural Networks 141 (2021): 404-419. [2] Zhang, Z., Chen, M., Wang, M., Liao, W., & Zhao, T. (2023, July). Effective Minkowski dimension of deep nonparametric regression: function approximation and statistical theories. In International Conference on Machine Learning (pp. 40911-40931). PMLR. **Weakness 2: The problem of overparameterization is not about an overparameterized model not being expressive enough (which this paper addresses), but really about the difficulty of finding a good enough local minimizer during optimization. In this regard, Theorem 3.3 based on the global minimizer doesn't really answer it.** Our paper focuses on **approximation** and **generalization** aspects of overparameterized ConvResNeXts, rather than optimization. 
We demonstrate that overparameterized ConvResNeXts can adapt to Besov functions on low-dimensional manifolds, achieving near-optimal convergence rates. While optimization is crucial, existing research [1-4] on this aspect is limited to **simpler networks** or specific function classes. Considering optimization for deep networks often leads to NTK analysis, which suffers from the curse of dimensionality as mentioned in Line 260-261. Our approach, decoupling learning and optimization, follows the tradition of Vapnik, Bartlett, and other learning theorists. It allows for a more fine-grained learning theory, complementing optimization-focused studies. We develop generalization bounds for regularized ERM, explaining how these mitigate **overfitting**. We acknowledge that addressing all aspects (approximation, generalization, optimization) for complex networks adapting to Besov functions on low-dimensional structures remains an open challenge. Even so, among all existing works that focus on approximation and generalization, our work is the most closely aligned with real-world practice, considering weight-decay training, overparameterization setting and practical network architectures. Reference: [1] Nichani, Eshaan, Alex Damian, and Jason D. Lee. "Provable guarantees for nonlinear feature learning in three-layer neural networks." Advances in Neural Information Processing Systems 36 (2024). [2] Wang, Zihao, Eshaan Nichani, and Jason D. Lee. "Learning Hierarchical Polynomials with Three-Layer Neural Networks." The Twelfth International Conference on Learning Representations. 2023. [3] Allen-Zhu, Zeyuan, and Yuanzhi Li. "What can resnet learn efficiently, going beyond kernels?." Advances in Neural Information Processing Systems 32 (2019). [4] Allen-Zhu, Zeyuan, and Yuanzhi Li. "Backward feature correction: How deep learning performs deep (hierarchical) learning." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023. 
**Q1: The remarks on the tradeoff between width and depth are very interesting. Have you tried this in numerical experiments to see if this theory-driven intuition actually appears in experiments?** A1: Thank you for this insightful question. Our paper focuses on theoretical analysis, demonstrating that representation and generalization abilities are equivalent for increasing either depth ($M$) or width ($N$), given a sufficiently large product $MN$. Our findings align with experimental observations in Xie et al. 2017 (Tables 3, 4), where ResNeXts with similar MN values achieve comparable performance. While they noted slightly better performance for networks with more paths, our theory suggests this advantage doesn't stem from improved representation or generalization capabilities, but may instead be related to optimization aspects. **Q2: What are the theoretical benefits of adding the identity map (i.e. the residual network) and multiple CNN blocks together compared to just using a CNN block?** A2: Identity maps address vanishing gradients, enabling deeper networks and easier optimization (Xie et al. 2017). Multiple CNN blocks allow learning hierarchical features at different scales. While we don't prove their theoretical benefits directly, these components are crucial in state-of-the-art architectures (He et al. 2016, Xie et al. 2017). Our work contributes by establishing a close-to-optimal convergence rate for a setting that closely mimics real applications: practical architectures with overparameterization, training with weight decay, and data exhibiting low-dimensional structures, bridging theory and the practical success of these complex structures. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for the detailed review. As there are only two days left in the discussion period, we wanted to ask if you have any further comments or questions for us to consider. 
If our rebuttal discussing our result's generality and contributions addressed your concerns, we would really appreciate it if you would consider raising your score. Best, Submission7950 Authors
Summary: The paper provides theoretical analysis of the good prediction performance of convolutional residual neural networks, even when overparameterized. Strengths: 1. The theory they developed for overparameterized ConvResNeXts trained with weight decay is novel. 2. Their theory does not suffer from the curse of dimensionality. Weaknesses: In lines 71-72, overparameterization is defined as "the number of blocks is larger than the order of the sample size n". Where does this definition come from? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you generalize from the empirical logistic risk to other risks? How will the choice of loss change the bound? 2. Can you explain in detail the extra effort (underlying theoretical technique) you made in this paper to differentiate from Liu et al. 2021 (Besov function approximation and binary classification on low-dimensional manifolds using convolutional residual networks)? How do you generalize it to the over-parameterized setting and ConvResNeXt? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The analysis is limited to convolutional networks. It would be interesting to see if it can be further generalized to other architectures like transformers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper. I would like to address the questions you've raised with the following responses. **Weakness: In line 71, 72, the overparameterization is defined as "the number of blocks is larger than the order of the sample size n". Where is this definition from?** Thank you for this question. The definition in lines 71-72 is specific to our analysis, not drawn from an existing source. We define overparameterization as 'the number of blocks larger than the order of the sample size' because: 1. It aligns with the general concept of overparameterization in deep learning, where parameters significantly exceed training samples. This concept is central to recent groundbreaking work [1, 2]. 2. In our ConvResNeXt setting, each block has nearly constant parameters. Thus, when blocks exceed sample size order, total parameters do too. 3. This definition enables our theoretical analysis of highly expressive models relative to available data. While not a standard definition, it provides a clear, quantifiable criterion for our work on ConvResNeXts. We'll clarify this in our revised manuscript. Reference: [1] Zhang et al. (2017). Understanding deep learning requires rethinking generalization. ICLR. [2] Belkin et al. (2019). Reconciling modern machine learning practice and the bias-variance trade-off. PNAS. **Q1: Can you generalize from the empirical logistic risk to other risks? How will the choice of loss change the bound?** A1: Yes, our findings generalize to other risk functions that exhibit Lipschitz continuity in a bounded domain with respect to model outputs. While our paper predominantly uses the logistic loss due to its popularity in classification tasks, our results can be extended to a broader range of loss functions. The statistical rate remains the same, differing only by a constant factor that depends on the Lipschitz constant of the chosen loss. 
**Q2: Can you explain in detail the extra effort (underlying theoretical technique) you made in this paper to differentiate from Liu et al. 2021? How do you generalize it to over-parameterized settings and ConvResNeXt?** A2: Our paper extends beyond Liu et al. 2021 in several key ways. We study overparameterized settings and the more complex ConvResNeXt architecture, allowing for flexible network block design. Unlike traditional approaches as adopted by Liu et al. 2021, we **avoid cardinality constraints** on block numbers through weight decay training, enabling highly parameterized networks. Our analysis of weight decay's effects (line 254 and Lemma 4.1) shows it induces weak sparsity, where only a finite number of blocks significantly contribute. This allows us to bound complexity in overparameterized ConvResNeXts without explicit structural constraints. A major theoretical challenge we address is **bounding metric entropy** in complex, overparameterized network settings. We employ an advanced method using Dudley's chaining of metric entropy via the critical radius / local Gaussian/Rademacher complexity (Bartlett et al., 2005; see our Corollary F.5, Line 1100, Page 21). These innovations enable us to achieve optimal estimation error for overparameterized ConvResNeXts, extending beyond traditional neural network analysis techniques. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for the detailed review. As there are only two days left in the discussion period, we wanted to ask if you have any further comments or questions for us to consider. If our rebuttal discussing our result's novelty and generality addressed your concerns, we would really appreciate it if you would consider raising your score. Best, Submission7950 Authors
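As a side note on A1 above, the standard contraction step behind the loss-generality claim can be sketched as follows (a textbook bound, not a formula taken from the paper):

```latex
% Talagrand's contraction lemma: if the loss \ell(\cdot, y) is
% L-Lipschitz in the model output on a bounded domain, then
\mathcal{R}_n\bigl(\ell \circ \mathcal{F}\bigr) \;\le\; L \, \mathcal{R}_n(\mathcal{F}),
% where \mathcal{R}_n denotes the empirical Rademacher complexity of the
% function class \mathcal{F}. The statistical rate derived for the
% logistic loss therefore transfers to any such loss, with the Lipschitz
% constant L entering only as a multiplicative factor.
```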
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Brain-JEPA: Brain Dynamics Foundation Model with Gradient Positioning and Spatiotemporal Masking
Accept (spotlight)
Summary: This paper introduces Brain-JEPA, a self-supervised learning approach that leverages a joint-embedding predictive architecture to learn representations of brain fMRI images. The authors introduce two novel components on top of the JEPA architecture to adapt it to brain images: 1) Brain Gradient Positioning, which encodes the functionality of each ROI into the patch positional encoding, and 2) an fMRI-specific masking strategy. The authors pretrain a family of ViT encoders using Brain-JEPA on the UK Biobank dataset and then explore downstream task performance using both fine-tuning and linear probing protocols to evaluate the quality of their learned representations. They look into trait prediction on a held-out set from the same UK Biobank dataset and three other datasets: HCP-Aging, ADNI, and another resting-state fMRI data source. The methods achieve strong empirical results, outperforming previous approaches such as BrainLM by a significant margin across evaluation protocols (fine-tuning, linear eval). Strengths: - The proposed approach demonstrates strong empirical results compared to previous work in the field. - The authors provide a clear ablation study that highlights the significance of the proposed contributions, specifically the brain-gradient position embedding and masking strategy, for applying JEPA to fMRI data. - The methods exhibit good scaling properties, indicating their potential for broader applicability. Weaknesses: - It is unclear from the empirical evaluation how the pretraining data affects downstream performance. Are the baselines, such as BrainLM, using the same pretraining dataset and computational budget? - Similarly, it would be informative to explore the scaling properties of Brain-JEPA with respect to dataset size. Does performance improve as the size of the pretraining dataset increases? 
- What is the impact of some of the contributions (brain gradient positioning, masking strategy) compared to other design choices, such as prediction in latent space? Would the former contributions also benefit a BrainLM baseline? Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Pretraining scheme of baselines (dataset and computational budget).* Thank you for your inquiry regarding the pretraining process. We confirm that for self-supervised pretraining baselines like BrainLM, we used the same pretraining dataset and computational budget, specifically the UKB dataset on 4 A100 GPUs (40GB each). > *Scaling of dataset size.* Thank you for your suggestion on exploring the impact of dataset size scaling. We have compared the performance of Brain-JEPA trained with varying portions of the UKB pretraining dataset: 25%, 50%, 75%, and 100%. As shown in the table below, the performance improves as the dataset size increases, highlighting the scalability of Brain-JEPA in relation to the pretraining dataset size. | | HCP-Aging | HCP-Aging | ADNI | |------|:------------:|:------------:|:------------:| | | Age | Sex | NC/MCI | | | $\rho$ | ACC (%) | ACC (%) | | 25% | 0.659 (.043) | 68.03 (1.21) | 67.89 (9.18) | | 50% | 0.768 (.012) | 74.24 (1.36) | 71.05 (3.86) | | 75% | 0.813 (.015) | 77.42 (2.00) | 74.74 (4.88) | | 100% | 0.844 (.030) | 81.52 (1.03) | 76.84 (1.05) | > *Combine contributions with BrainLM.* Thank you for your insightful suggestion. We further compared the performance of BrainLM combined with our contributions to vanilla BrainLM. As shown in the **global author rebuttal Table 4**, BrainLM combined with our contributions outperforms vanilla BrainLM consistently, demonstrating that our contributions (gradient positioning and spatiotemporal masking) could benefit the training of BrainLM as well. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: The rebuttal effectively addressed my primary concerns about the size of the pretraining dataset and the evaluation of each contribution in the BrainLM framework. As a result, I have updated my score to 8, as I believe this paper will be a valuable addition for both the SSL and neuroscience communities. 
--- Reply to Comment 1.1.1: Title: Appreciation for Your Positive Feedback Comment: Thank you very much for your positive feedback and for taking the time to review our rebuttal. We are delighted to hear that our responses have addressed your concerns regarding the size of the pretraining dataset and the evaluation of contributions within the BrainLM framework. We appreciate your support and are glad that you find our work valuable for both the SSL and neuroscience communities.
Summary: In this paper, the authors train a foundation model on fMRI data. To this end, they combine multiple deep learning techniques in a novel way: - they rely on a Joint-Embedding Predictive Architecture, and devise a specific masking strategy for brain data (referred to as spatio-temporal masking) - they make use of pre-trained embeddings containing functional information (as opposed to anatomical only) of fMRI data as positional embeddings for transformers comprised in their JEPA model Overall, this study shows that the foundation model obtained yields state-of-the-art performance in a variety of downstream tasks (some of which test the obtained embedding without further tuning through linear probing). Strengths: In my opinion, this paper tackles an important problem: large collections of neuro data have been collected in the past decade, but inter-individual variability makes it hard to derive meaningful models of the brain. Many recent endeavours have sought to show how deep-learning can help alleviate this issue and provide meaningful embeddings of brain data that can be used in downstream tasks. I believe the current work would be of interest to a growing number of computational neuroscience researchers. Moreover, I find the writing to be clear and rather easy to follow. Weaknesses: I find the paper interesting and well written. I think it brings valuable information to the community. Methodological contributions could be deemed poor compared to other submissions, but I think the kind of benchmarks featured in this paper are challenging to implement and represent a great amount of work ; in this regard, I would encourage the authors to make their code public so that other research teams can potentially reproduce this benchmark. However, some important pieces of information are missing at this stage, notably concerning how data was processed for downstream tasks. 
Technical Quality: 3 Clarity: 3 Questions for Authors: At this stage, it is not clear to me from line 235 how the downstream datasets were divided into training, validation and test sets. For instance, can data from a given participant appear in different categories? In my understanding, the division is the same across all models tested (Brain-JEPA, BrainLM, etc), is that indeed the case? I believe it is crucial to evaluate how models like Brain-JEPA generalise to unseen participants. The authors write that the temporal resolution of the dataset used for the pre-training phase is 0.735s (line 216). Classical repetition times in fMRI are usually higher. Do the datasets used in downstream tasks have the same TR as the one used for pre-training? If not, how did the authors adapt to this change? In particular, since $p$, the number of brain volumes concatenated to form patches, is set to 16 (Table 4), patches are approximately 10 seconds long during the pre-training phase but would be about 30 seconds long with a classical TR of 2 seconds in downstream tasks, which seems pretty long. This raises the question: how was the value of $p$ chosen? Does it seem possible to the authors that different downstream tasks would have different optimal values for $p$? I am curious about the size of the brain gradients used to derive the positional embeddings. The authors indicate in Table 4 that $m$, the dimension of the brain gradients, is 30, which seems pretty high to me. I would expect that only the first few dimensions are actually useful to the model (maybe the first 3, as illustrated in Figure 2, are enough to distinguish the most important cortical networks). Moreover, these vectors are actually mapped to a higher-dimensional space of size $d/2$ (where $d$ depends on the size of the ViT used). I am curious as to what the final functional-anatomical embedding obtained here looks like. Can the authors try to give more intuition about it? 
**Additional suggestions:** In my humble opinion, this paper mostly targets the neuroscientific community and should therefore help members of this community who are not deep-learning experts to dive into the current work. In particular, I think the authors could add short paragraphs (maybe in the supplementary materials if they cannot fit in the main text) to explain concepts behind JEPA and linear probing in simple words. The left and middle panels of Figure 7 could be merged together so that bars for the Caucasian and Asian participants for each network would be side by side. It would be nice to indicate how much time the pre-training and fine-tuning phases required (lines 556-560). In my opinion, the positional embedding part of Figure 1 (top right) is rather unclear. I think the figure would benefit from highlighting clearly what the inputs of $g_{\phi}$ are (embeddings, positional embeddings). The authors write: "In the field of brain activity analysis, brain language model (brainLM) is the first and only foundation model to date" (line 32). I would personally be less assertive about this. In my understanding, many existing works have trained models on large fMRI datasets to extract meaningful representations (before these would be called "foundation models"). In the list below, the last two examples are rather close to what is being done in the current paper (self-supervised settings), and the last example even uses masking strategies close to that of BrainLM: - Mensch, Arthur, Julien Mairal, Bertrand Thirion, and Gaël Varoquaux. ‘Extracting Representations of Cognition across Neuroimaging Studies Improves Brain Decoding’. _PLoS Computational Biology_ 17, no. 5 (2021): e1008795. - Thomas, Armin, Christopher Ré, and Russell Poldrack. ‘Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data’. _Advances in Neural Information Processing Systems_ 35 (6 December 2022): 21255–69. 
- Chen, Zijiao, Jiaxin Qing, Tiange Xiang, Wan Lin Yue, and Juan Helen Zhou. ‘Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding’. arXiv, 14 November 2022. [https://doi.org/10.48550/arXiv.2211.06956](https://doi.org/10.48550/arXiv.2211.06956). Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
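The patch-duration arithmetic behind the reviewer's question about $p$ can be checked directly (values taken from the review; the rounding to "approximately 10 seconds" and "about 30 seconds" is the reviewer's approximation):

```python
# Patch duration = p volumes per patch x repetition time (TR), in seconds.
p = 16                  # volumes per patch, as reported in Table 4 of the paper
tr_pretrain = 0.735     # TR of the pre-training (UK Biobank) data
tr_classical = 2.0      # a classical fMRI TR

print(p * tr_pretrain)   # 11.76 -> "approximately 10 seconds"
print(p * tr_classical)  # 32.0  -> "about 30 seconds"
```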
Rebuttal 1: Rebuttal: > *Full open-source code, preprocessing.* We appreciate your important suggestions. We have supplemented the following materials/information: **Code and Data Source:** We have now supplemented the codebase to include all downstream tasks on public datasets mentioned in the paper. The complete codebase, along with the list of subject IDs for all datasets used, is available through an anonymous link. In accordance with the author guidelines, we have provided this link to the Area Chair in a separate comment. **Data Preprocessing:** We utilized open-source preprocessing pipelines as outlined in [1] and [2]. These pipelines are fully open-sourced and have been referenced in our paper. For data normalization, we implemented robust scaling as practiced in BrainLM. By sharing the complete codebase, the list of subject IDs, and using open-source preprocessing pipelines, we believe that our results are highly reproducible. > *Data splitting of downstream datasets.* Thank you for your inquiry regarding the splitting of our downstream datasets. We confirm that the data is divided at the participant level, ensuring that each participant appears only once in either the training, validation, or test set. It ensures that all participants in the test set are unseen during training, thereby demonstrating the excellent generalizability of Brain-JEPA. Additionally, the same division is applied consistently across all models tested. > *Different TR for different datasets.* Thank you for the insightful question regarding the temporal resolution of datasets used. For fMRI in UKB, we used multi-band data with a high temporal resolution (TR is ~0.7s). In our downstream datasets, HCP-Aging also uses multi-band data with a TR of ~0.7s, while both ADNI and the Asian cohort use single-band data with a TR of ~2s. To address the differences in TR, we downsampled the multi-band data with a temporal stride of 3, aligning the TR to ~2s. 
This ensures consistency across different datasets. For future work, we plan to use learnable TR embeddings to enable the model to adapt to different TRs dynamically. > *Dimension of brain gradient embedding.* Thank you for your insightful feedback on the dimensionality of the brain gradient. We compared the model performance between 3-dimensional (3-dim) and 30-dim brain gradient positioning, shown in the **global author rebuttal Table 5**. The 30-dim model consistently outperformed the 3-dim model by a large margin. This indicates that higher-dimensional brain gradients may encapsulate finer-grained information on brain network organization, which benefits the learning of brain dynamics. We leave the investigation of the mapped gradient embedding to future work. Specifically, we will conduct comprehensive experiments to explore how these gradients are organized in the latent space and their relationships with brain network organization. > *Explain to neuroscience community.* We appreciate your valuable suggestion. We will include short explanatory paragraphs on concepts like JEPA and linear probing in the supplementary materials to make the paper more accessible to the neuroscientific community. > *Fig7.* Thank you for your insightful suggestion regarding Figure 7. However, the absolute attention values are not meaningful in isolation, so it may not be appropriate to directly compare attention values across different cohorts. Our focus is on whether the rankings among different networks are aligned between the two cohorts (Caucasian versus Asian). From the current figure, we can clearly see the top 4 networks are the same across the two cohorts. Please let us know if you have any further concerns regarding this. > *Training time.* Thank you for your inquiry regarding the training time. For pre-training (UKB) with the ViT-base model on 4 A100 GPUs (40GB each), using a batch size of 16x4x8, the training time per batch is ~0.3s. 
The total training time is ~16.5 hours. For fine-tuning, taking HCP-Aging as an example with the ViT-base model on 1 A100 GPU (40GB) and a batch size of 16, the total training time is ~14 mins. > *Fig1.* Thank you for the valuable feedback on Figure 1. The positional embedding part is intended to be a 2D schematic view of brain gradient positioning along with temporal positioning. We will revise the figure to explicitly highlight the inputs to the predictor $g_{\phi}$. > *Related work.* Thank you for highlighting other related works. While it is true that the mentioned studies have trained models on large fMRI datasets, their downstream applications are limited. [3] and [4] focus on mental state classification, and [5] on brain decoding. A brain foundation model should exhibit broad applicability across diverse brain-related tasks, such as demographic prediction, trait prediction, and disease diagnosis/prognosis. We have added additional comparisons with CSM [4], together with the most recent works suggested by other reviewers. As shown in the **global author rebuttal Table 1 and 2**, Brain-JEPA outperforms them on most tasks. We highlight that Brain-JEPA demonstrates the most diverse range of downstream applications, showcasing its powerful generalizability across different cohorts and tasks. We will include the mentioned works in our revised version to better illustrate these distinctions. References: [1] Spatial topography of individual-specific cortical networks predicts human cognition, personality, and emotion. Cerebral Cortex 2019. [2] Global signal regression strengthens association between resting-state functional connectivity and behavior. Neuroimage 2019. [3] Extracting Representations of Cognition across Neuroimaging Studies Improves Brain Decoding. PLoS Computational Biology 2021. [4] Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data. NeurIPS 2022. 
[5] Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding. CVPR 2023.
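A minimal sketch of the TR-alignment step described in the rebuttal above (downsampling multi-band data with a temporal stride of 3 so its effective TR approaches the ~2s of the single-band datasets); the array shapes and the function name are assumptions for illustration:

```python
import numpy as np

def align_tr(timeseries: np.ndarray, stride: int = 3) -> np.ndarray:
    """Downsample an (n_rois, n_frames) array along the time axis by keeping
    every `stride`-th frame, raising the effective TR by that factor."""
    return timeseries[:, ::stride]

tr_multiband = 0.735                # multi-band TR of UKB / HCP-Aging, in seconds
x = np.random.randn(450, 480)       # hypothetical run: 450 ROIs, 480 frames
y = align_tr(x, stride=3)           # keep every 3rd frame

print(y.shape)                      # (450, 160): 160 frames retained
print(tr_multiband * 3)             # effective TR ~2.2 s, close to the ~2 s target
```

Note the retained frame count (160) matches the "sample 160 frames with a temporal stride of 3" mentioned by another reviewer, under the assumed 480-frame input.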
Summary: The study introduces Brain-JEPA, an fMRI foundation model that leverages a joint-embedding predictive architecture and a novel position-embedding approach based on brain gradient. This model achieves state-of-the-art performance in demographic prediction, disease diagnosis/prognosis, and trait prediction, and excels in off-the-shelf evaluations like linear probing. Strengths: 1. The idea of using fMRI gradient information to guide the position encoding is very interesting and is proven to be effective. 2. Overall the paper is nicely organized and written. 3. The model shows great generalizability. Weaknesses: 1. I think the effectiveness of JEPA, i.e., predicting the representations of target blocks rather than reconstructing the masked input like MAE, is not well supported by the ablation study: * BrainLM uses the AAL-424 atlas instead of the Schaefer functional atlas. Since these two studies use different atlases, it is not a very fair comparison. * I think using different atlases is totally fine if there is a very significant performance difference between the current model with anatomical position embedding (incorporating the same position settings as BrainLM) and BrainLM. However, from the ablation study, the performances with anatomical position embedding on three downstream tasks are 0.716, 78.79%, and 74.74%, respectively, while for BrainLM, they are 0.832, 74.39%, and 75.79%, with two of them higher than Brain-JEPA (with anatomical position embedding). With these results, I feel the main contribution would be the novel position embedding based on the fMRI gradient rather than the JEPA framework. Additionally, the author didn’t report other ablation results on downstream tasks like Neuroticism, Flanker, Amyloid, and NC>MCI (Asian), so there is no further evidence that JEPA consistently performs well on other downstream tasks. 
Nor did the author explore Brain-JEPA without the JEPA framework, i.e., predicting the original signals rather than representations. 2. Since the model performs prediction at the latent embedding level rather than at the original data level, the current model is unable to reconstruct the original missing time series. However, reconstructing the unseen missing brain time series is also an important and valuable application of the brain foundation model. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In lines 71 to 73, I wonder if the author made it italics on purpose or if it’s a formatting mistake. 2. In line 154, what’s the shape/size of functional connectivity $c_i$ in ROI i? 3. Why does the author choose to sample 160 frames from the original data with a temporal stride of 3, instead of using the original data frame? 4. I think BrainLM is not the only self-supervised study on fMRI; there are other works like “Thomas et al., NeurIPS 2022” and “SwiFT” by Kim et al., NeurIPS 2023, etc., that could be discussed in the introduction or even compared with the current model. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author has adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Ablation study on framework.* Thank you for your feedback on our ablation study. To thoroughly compare the performance between JEPA with anatomical locations (AL) and BrainLM (MAE), we have extended our comparison to include all the tasks except for the three in the current version, as well as two newly added datasets, OASIS-3 and CamCAN. The results, shown in the table below, demonstrate that JEPA *w* AL outperforms BrainLM in seven out of eleven tasks, indicating the superiority of prediction in latent space. For the tasks where BrainLM performs better, it is likely that JEPA requires gradient positioning for precise ROI placement to achieve optimal performance. In future work, we will further investigate the possible interactions between the self-supervised learning framework and brain gradient positioning. | | UKB | UKB | HCP-Aging | HCP-Aging | ADNI | Asian | OASIS-3 | CamCAN | |---------|:-------------|:------------|:------------|:------------|:------------|:------------|:-------------|:------------| | | Age | Sex | Neuroticism | Flanker | Amy+/- | NC/MCI | AD Conversion | Depression | | | $\rho$ $\uparrow$ | ACC (%) $\uparrow$ | $\rho$ $\uparrow$ | $\rho$ $\uparrow$ | ACC (%) $\uparrow$ | ACC (%) $\uparrow$ |ACC (%) $\uparrow$ | ACC (%) $\uparrow$ | | BrainLM | 0.632 (0.020) | **86.47** (0.74) | 0.231 (.012) | 0.318 (.048) | **67.00** (7.48) | 61.65 (3.35) | 65.00 (7.75) | 70.00 (6.17) | | Brain-JEPA *w* AL | **0.686** (0.013) | 84.11 (0.50) | **0.267** (.003) | **0.374** (.022) | 65.00 (6.32) | **64.33** (1.80) | **67.00** (4.00) | **71.82** (6.03) | We further compared Brain-JEPA without the JEPA architecture (i.e., BrainLM *w* contributions) to Brain-JEPA. As shown in the **global author rebuttal Table 4**, Brain-JEPA (JEPA framework) outperforms BrainLM (MAE framework) *w* contributions consistently, demonstrating the superiority of the JEPA framework. 
> *Reconstruction of time series.* Thank you for pointing out the underlying difference between the MAE and JEPA frameworks. We would like to emphasize that the primary goal of reconstructing masked fMRI time series is to facilitate self-supervised training rather than for a specific clinical application. Our primary focus is on learning representations that directly enhance performance across diverse downstream tasks. By achieving better latent performance for the samples, we can utilize powerful decoders such as diffusion models to reconstruct input signals for various applications. We leave this to future work. > *Italic lines 71-73.* Thank you for your comment. Regarding lines 71 to 73, we intentionally italicized this part to emphasize the importance of these questions. The italicization was meant to highlight the significance of developing a functional coordinate system and a masking strategy for large-scale pretraining with fMRI data, which are crucial interdisciplinary challenges at the intersection of AI and neuroscience. > *Line 154, the value of $c$.* Thank you for your attention to detail. The functional connectivity is represented by an adjacency matrix. Given $N$ ROIs, the adjacency matrix is an $N×N$ symmetric matrix. Each row/column $i$ of this matrix represents the connectivity of ROI $i$ with other regions. In our paper, the brain is parcellated into $N=450$ regions, therefore, the dimension of connectivity for each ROI is 450. > *Downsampling.* We thank the reviewer for the insightful question on temporal downsampling. We performed temporal downsampling for some of our datasets because of the variable temporal resolutions among different datasets. For the fMRI data in the UKB (our pretraining dataset), we used multi-band data with a high temporal resolution, where the TR is ~0.7s. In our downstream datasets, HCP-Aging also uses multi-band data with a TR of ~0.7s, while both ADNI and the Asian cohort use single-band data with a TR of ~2s. 
To address the differences in temporal resolution, we downsampled the multi-band data with a temporal stride of 3, effectively aligning the TR to approximately 2s. This ensures consistency across datasets. For future work, we plan to incorporate learnable TR embeddings, enabling the model to handle different temporal resolutions in a learning-driven manner. This approach will enhance the model's flexibility across varied datasets. > *Other self-supervised study on fMRI.* Thank you for your suggestion on discussing other self-supervised studies on fMRI. We have incorporated CSM [1], SwiFT [2], and BrainMass[3] (one concurrent work suggested by another reviewer) into our comparisons on the HCP-Aging and ADNI datasets, as well as two newly added datasets, OASIS-3 and CamCAN. As shown in the **global author rebuttal Table 1-3**, our proposed Brain-JEPA consistently outperforms these models on most tasks. CSM is domain-specific, trained exclusively for mental state decoding. SwiFT specializes in demographic prediction, and BrainMass is limited to disease diagnosis and prognosis. Brain-JEPA, however, has demonstrated versatility across a broader range of downstream applications, including demographic prediction, trait prediction, cognitive score prediction, and disease prognosis/diagnosis, showcasing its extensive potential. In our revised version, we will include comparisons with additional models to further demonstrate the outstanding performance of Brain-JEPA. We will also incorporate a discussion of the above-mentioned works into the related work section to better position our contributions. References: [1] Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data. NeurIPS 2022. [2] SwiFT: Swin 4D fMRI Transformer. NeurIPS 2023. [3] BrainMass: Advancing Brain Network Analysis for Diagnosis with Large-scale Self-Supervised Learning. TMI 2024. --- Rebuttal Comment 1.1: Title: Thank you for rebuttal Comment: Thank you for the detailed explanations. 
The additional experiments have addressed most of my concerns, and I have increased my score accordingly. However, regarding Concern 1, I actually still think that BrainLM and Brain-JEPA with AL perform comparably, given the large standard deviations observed in some tasks and the absence of statistical testing here. Therefore, the effectiveness of JEPA remains uncertain from my perspective. --- Reply to Comment 1.1.1: Title: Thank you for increasing your score Comment: Thank you for recognizing that we have addressed most of your concerns in our rebuttal, and we appreciate your decision to increase the score accordingly. We apologize for not mentioning the statistical testing for the table when addressing your concern 1. Regarding the comparison between the JEPA and MAE frameworks, particularly with the use of anatomical locations as positional embeddings for ROIs (instead of our contribution of gradient positioning), we would like to add that our t-test results show a significant advantage (p < 0.05) of JEPA over MAE in seven out of eleven tasks. We acknowledge that, based on our current experiments, the JEPA architecture performs best when combined with gradient positioning, while in some cases, MAE may outperform when using anatomical positioning. This suggests that **the collaborative effect of JEPA and gradient positioning** could be the key to achieving optimal performance. In future work, we plan to explore the potential interactions between self-supervised learning frameworks and brain gradient positioning further. We believe that this could lead to even greater enhancements and broader impact of our approach.
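A minimal sketch of the functional-connectivity representation discussed in the rebuttal above (with $N=450$ ROIs, the connectivity $c_i$ of ROI $i$ is row $i$ of the $N \times N$ symmetric adjacency matrix); the synthetic data, frame count, and variable names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((450, 160))   # hypothetical (n_rois, n_frames) ROI time series

# Pearson-correlation adjacency matrix: entry (i, j) is the functional
# connectivity between ROI i and ROI j.
fc = np.corrcoef(ts)
c_i = fc[0]                            # connectivity vector c_i of ROI 0

print(fc.shape)                        # (450, 450)
print(bool(np.allclose(fc, fc.T)))     # True: the adjacency matrix is symmetric
print(c_i.shape)                       # (450,): one connectivity value per ROI
```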
Summary: This paper introduces a foundation model for fMRI time series, using a classic vision transformer backbone. It incorporates two main original developments: (1) a positional encoding for brain regions, using a “functional gradient” analysis (aka PCA on a Jacobian matrix derived from a temporal correlation matrix), (2) a masking strategy that explicitly masks different types of spatial and/or temporal interactions. The model is trained on a large public popular resource (UK Biobank), and evaluated on downstream supervised learning in the same resource (age and sex prediction). Then the model is used for transfer learning in three independent datasets (including both North American and Asian participants) for various supervised downstream tasks, predicting either demographic data (age, sex), clinical diagnosis (mild cognitive impairment) and AD biomarker status (amyloid beta deposition). The performance of the proposed model Brain-JEPA is contrasted with other models from the literature with different kinds of architectures, and in particular another “fMRI foundation” model called BrainLM. Brain-JEPA outperforms other models on most tasks, and also retains good performance when a simple linear layer is used for transfer, unlike BrainLM. These results are very promising, as they show self-supervised pretraining can successfully be applied to fMRI time series, and may lead to more robust brain biomarkers for brain disorders. Strengths: The work is very clear and well constructed. It adapts a now classic vision transformer architecture by proposing compelling solutions for the two main domain-specific ingredients: positional encoding and masking. It includes a fair number of baseline models, and tasks for evaluation. The code and the checkpoints of the models are shared. I have not done an in-depth review of these, but the code is clear and is meant to replicate one of the tasks. 
The comparison of transfer with fine-tuning vs linear probing as well as the improvement in downstream accuracy across training epochs all improve confidence in the results and the generalizability of the representations learned by Brain-JEPA. The ablation study is also great. Weaknesses: Although the code is shared, it does not appear to cover all of the downstream tasks in the paper. Importantly, I could not find detailed information on data sources. ADNI for example is a massive dataset, and depending on the details of data release and other criteria, the sample size and exact list of subjects for “NC vs MCI” may vary dramatically. I am fairly confident I would not be able to reproduce all the results in this paper based on the provided information, and it’s not clear to me which parts of the paper would be easy to reproduce. There are also no details on fMRI data preprocessing, and the normalization applied across subjects is not standard. Were all datasets preprocessed with similar tools? EDIT: the authors shared anonymized code and a list of subject IDs for their analysis. The level of reproducibility is thus adequate. Surprisingly, the paper lacks a good SVM/SVR or linear regression baseline. Given the regime of limited data size in fMRI, these vanilla models are really tough to beat. Based on reported accuracy, SVM would likely beat Brain-JEPA on sex prediction on UK Biobank. https://www.nature.com/articles/s41467-020-18037-z EDIT: the authors added an SVM baseline (along with other methods). More of a stylistic weakness: some of the text uses excessively strong language, in particular the abstract. 
“This pioneering model sets an unprecedented chapter in brain activity analysis with fMRI, achieving state-of-the-art performance in demographic prediction, disease diagnosis/prognosis, and trait prediction through fine-tuning.” This is not accurate: several similar papers have come out in the past two years (see below), at least the BrainLM paper. EDIT: the authors have toned down claims of novelty in the abstract. The paper fails to acknowledge several closely related works. BrainLM is not the only fMRI foundation model. I have listed a few below. The BrainMass paper in particular is an impressive work very relevant to this submission, using 30 different datasets and seven downstream tasks. It would be important to position the paper relative to some of these models, and tone down the claims of novelty. EDIT: the authors have added more recent models to their evaluation and added a discussion of other models that could not be directly compared. Yang et al., 2024. BrainMass: Advancing Brain Network Analysis for Diagnosis with Large-scale Self-Supervised Learning. Ferrante et al., 2024. Towards Neural Foundation Models for Vision: Aligning EEG, MEG and fMRI Representations to Perform Decoding, Encoding and Modality Conversion. https://openreview.net/forum?id=nxoKCdmteM Thomas et al., 2023. Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data. https://arxiv.org/abs/2206.11417 Finally, and related to that last point, there are many public datasets to benchmark downstream tasks. ABIDE I, ABIDE II, and ADHD200 in particular are readily available. It is surprising to see “only” three datasets used for downstream tasks. EDIT: the authors have added several datasets for downstream tasks. Technical Quality: 4 Clarity: 3 Questions for Authors: How do the authors handle the massive difference in fMRI temporal resolution between UK Biobank and ADNI? EDIT: all required details were provided in the rebuttal below. 
Positional encoding would be critical with variable brain region location and temporal sampling, but fMRI only has variable temporal sampling, and it is not clear from the text how this is handled by the model. The brain representation in Figure 7 is hard to decipher. What are the readers supposed to observe? EDIT: not really resolved, see my comment to the rebuttal. I believe UK Biobank uses three Siemens 3T scanners, not one. EDIT: manuscript was amended to reflect this. Could the authors please double check that IRB approval is not required for secondary analysis of human neuroimaging data in their institution. EDIT: authors have obtained IRB approvals. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The number of downstream tasks is limited, and the authors did not use simple baseline models (such as SVM on connectomes) despite the known excellent performance of these models at the scale of fMRI datasets. EDIT: the number of baseline models and downstream tasks has been expanded. It is also unclear based on the current set of results how the Brain-JEPA model handles a diversity of scanner and image acquisition characteristics, considering it was trained on a single protocol. EDIT: downstream tasks include datasets with various protocols, demonstrating robustness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
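As a side note on the "SVM on connectomes" baseline this review asks for: such a baseline is conventionally built from the upper triangle of each subject's region-by-region correlation matrix. A minimal numpy sketch of that feature-extraction step (array shapes and names are illustrative, not taken from the paper's code):

```python
import numpy as np

def connectome_features(ts):
    """ts: (subjects, regions, timepoints) BOLD signals.
    Returns one vector per subject: the upper triangle of the
    region-by-region correlation matrix, the usual input for an
    SVM/SVR baseline on functional connectomes."""
    n_subjects, n_regions, _ = ts.shape
    iu = np.triu_indices(n_regions, k=1)   # off-diagonal upper triangle
    return np.stack([np.corrcoef(ts[i])[iu] for i in range(n_subjects)])

rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 6, 30))       # 4 subjects, 6 ROIs, 30 frames
feats = connectome_features(ts)
assert feats.shape == (4, 15)              # 6*5/2 region pairs per subject
```

The resulting (subjects, pairs) matrix can then be fed to any off-the-shelf classifier or regressor, e.g. scikit-learn's SVC/SVR.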
Rebuttal 1: Rebuttal: > *Code, data source, and preprocessing.* Thank you for your valuable feedback. We have added the code for all downstream tasks on public datasets to the codebase. The code and subject IDs are available via an anonymous link provided to the AC in a separate comment, per author guidelines. We utilized open-source preprocessing pipelines as outlined in [1] and [2]. We implemented robust scaling for normalization, as practiced in BrainLM. By sharing the complete codebase and subject IDs, and by using open-source preprocessing pipelines, we believe that our results are highly reproducible. > *Other related work.* Thank you for your important suggestions. We have incorporated SVM/SVR, BrainMass [3], CSM [4], and SwiFT [5] (as suggested by another reviewer) into the comparisons. As shown in the **global author rebuttal Tables 1 and 2**, Brain-JEPA outperforms these models on most tasks. Although SVM/SVR may perform well in demographic prediction (e.g., sex), previous research has indicated that SVMs perform worse than fMRI deep learning models [6] [7]. Additionally, note that BrainMass is concurrent work to ours (published after our initial submission). We have demonstrated a broader range of downstream applications, including demographic prediction, trait prediction, cognition score prediction, and disease prognosis/diagnosis. In contrast, the experiments in BrainMass are limited to disease diagnosis/prognosis only. Moreover, the mentioned work [8] lacks available code and is applied in a multi-modal setting, making it difficult to reproduce and not suitable for our context. In the revised version, we will include comparisons with additional models to further demonstrate the outstanding performance of Brain-JEPA. We will also incorporate a discussion of the above works to better position our contributions. > *More downstream datasets.* Thank you for the valuable suggestion. Note that datasets such as ABIDE I, ABIDE II, and ADHD200 are all child cohorts. 
Given that our model was pretrained on UKB, which primarily includes middle-aged to elderly participants, it may not generalize well to younger populations yet (the same as BrainLM). To further demonstrate the diversity of our downstream applications, we have conducted additional experiments using two aging-related public datasets, OASIS-3 and CamCAN, for AD conversion prediction and depression classification, respectively (**global author rebuttal Table 3**). By applying Brain-JEPA to five downstream datasets across eight distinct tasks in total, we have demonstrated its versatility in a wider range of applications compared to existing models. Specifically, Brain-JEPA excels in demographic prediction, trait prediction, and disease diagnosis and prognosis. This stands in contrast to the experiments done in BrainLM, which are limited to demographic and clinical score prediction, and BrainMass, which focuses solely on disease diagnosis and prognosis. > *Temporal resolution.* We thank the reviewer for the insightful question. For fMRI in UKB, we used multi-band data with a high temporal resolution (TR of ~0.7s). In our downstream datasets, HCP-Aging also uses multi-band data with a TR of ~0.7s, while both ADNI and the Asian cohort use single-band data with a TR of ~2s. To address the differences in TR, we downsampled the multi-band data with a temporal stride of 3, aligning the TR to ~2s. This ensures consistency across different datasets. For future work, we plan to use learnable TR embeddings to enable the model to adapt to different TRs dynamically. > *Clarification on Fig 7.* Brain networks are systems of interconnected regions that work together to perform specific functions, such as the Default Mode Network (DMN) for self-referential and memory tasks and the Control Network (CN) for executive functions [9] [10]. Figure 7 shows attention values among networks. 
Higher attention values in the DMN, CN, and SAN indicate their significant involvement in MCI, consistent across different ethnic groups, showcasing Brain-JEPA's robustness and generalizability. > *Claim in abstract.* Thank you for your valuable feedback. While we recognize the contributions of recent works, our aim was to highlight the specific advancements introduced by Brain-JEPA. These include the integration of Brain Gradient Positioning and Spatiotemporal Masking. Furthermore, the downstream applications of Brain-JEPA are exceptionally diverse. We will remove “unprecedented chapter” from the abstract in our revision to better place our work in the context of recent developments in the field. > *UKB has three scanners.* We will change "one scanner" to "three scanners" in the UKB introduction of the revised version. > *IRB approval.* We have obtained all necessary IRB approvals for the datasets used for analysis. References: [1] Spatial topography of individual-specific cortical networks predicts human cognition, personality, and emotion. Cerebral Cortex 2019. [2] Global signal regression strengthens association between resting-state functional connectivity and behavior. Neuroimage 2019. [3] BrainMass: Advancing Brain Network Analysis for Diagnosis with Large-scale Self-Supervised Learning. TMI 2024. [4] Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data. NeurIPS 2022. [5] SwiFT: Swin 4D fMRI Transformer. NeurIPS 2023. [6] Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis. MICCAI 2022. [7] Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal Brain Functional Connectome Embedding. MICCAI 2023. [8] Towards Neural Foundation Models for Vision: Aligning EEG, MEG and fMRI Representations to Perform Decoding, Encoding and Modality Conversion. ICLR 2024 Workshop Re-Align. [9] The organization of the human cerebral cortex estimated by intrinsic functional connectivity. 
J Neurophysiol 2011. [10] Correspondence of the brain's functional architecture during activation and rest. PNAS 2009. --- Rebuttal Comment 1.1: Comment: Re reproducibility: thanks for taking these steps. This is strengthening the submission. I am going to update my soundness score to 4 (excellent). Re prior works: this point is appropriately addressed by incorporating additional models when possible, and adding discussions on the models that cannot be directly compared or implemented. Re downstream tasks: adding several new datasets and downstream tasks addresses the issue and is strengthening the manuscript (this contributes to my upgrade of the soundness score). Re temporal resolution: thanks for the clarification. Those are critical details that should be added in future revisions of the article. Re figure 7: I am familiar with the functional organization into intrinsic connectivity networks. My point is that the figure is very hard to read due to the choice of visualisation. The point you are trying to make would be better served by showing the distribution of attention in those networks. Re the tone of the abstract: I understand this paper makes novel contributions. But I maintain that the tone was too dramatic, and I agree with the proposed revision. I disagree that the range of downstream tasks is exceptional. Even after revision, it is smaller than in some other works in the field. Check this work for example: https://doi.org/10.1162/imag_a_00222 Scanners in UK Biobank: I would encourage you to refer to the documentation of the UK Biobank to double check. IRB: this adequately addresses my concerns. Based on the substantial improvements made by the authors, I have decided to revise my overall score to 7 (accept). --- Reply to Comment 1.1.1: Title: Thank you for increasing the score and acknowledging our improvements. 
Comment: We appreciate your recognition that we have addressed most of your concerns in our rebuttal, and we are grateful for your decision to raise the score. Thank you for your valuable feedback on Figure 7. We acknowledge that displaying the attention values as brain surface maps would enhance clarity, and we will include this additional brain map figure in the revised version. We also recognize that 'exceptional' may not be the most precise wording when referring to downstream tasks involving a broader range of disease types, etc. In the revised version, we will emphasize that Brain-JEPA can be applied to a diverse array of downstream applications. Additionally, we will include evaluations on more related downstream tasks in the future. We sincerely appreciate your thorough review and thoughtful suggestions, which have greatly contributed to the improvement of our work.
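One implementation detail from this thread, the TR alignment between multi-band (~0.7 s) and single-band (~2 s) data via a temporal stride of 3, can be sketched as follows. This is a minimal illustration assuming the BOLD signal is stored as a (regions, timepoints) array; the names and shapes are not from the released code.

```python
import numpy as np

def align_tr(bold, stride=3):
    """Downsample a (regions, timepoints) BOLD array along time.

    With a native TR of ~0.7 s, keeping every third frame yields an
    effective TR of ~2.1 s, roughly matching single-band data at ~2 s.
    """
    return bold[:, ::stride]

multiband = np.zeros((450, 490))     # e.g. 450 ROIs, 490 frames at TR ~0.7 s
aligned = align_tr(multiband)
assert aligned.shape == (450, 164)   # 490 frames strided by 3 -> 164 frames
```

The learnable TR embeddings mentioned for future work would replace this fixed stride with a model-side mechanism.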
Rebuttal 1: Rebuttal: We thank the reviewers for their time and effort in reviewing our work. We appreciate the positive feedback and great interest in our work from all reviewers, along with their insightful questions and suggestions. We are pleased to see that the reviewers acknowledge and appreciate the following aspects: 1. The idea is interesting and effective. The model shows great generalizability and scaling properties, with strong empirical results. (Reviewers **GJHn**, **M77o**, **iLSi**, **fytG**) 2. The work is very clear and well constructed. (Reviewers **GJHn**, **M77o**, **iLSi**) 3. Excellent contribution (Reviewer **GJHn**) and presentation (Reviewer **fytG**). 4. Great and clear ablation study. (Reviewers **GJHn**, **fytG**) 5. The code and the checkpoints of the models are shared. (Reviewer **GJHn**) 6. The paper is of interest to the computational neuroscience community. (Reviewer **iLSi**) We have addressed each reviewer's questions and suggestions point by point. This includes adding experiments with more baselines, evaluating on additional downstream datasets and tasks, and clarifying the temporal resolution. Additionally, we responded to questions related to the ablation study on the framework's contribution, the gradient dimension, and the scaling of the pretraining dataset size. We would like to note that, among the added baselines, BrainMass [1] is concurrent work published after our initial submission. Additionally, CSM [2] and SwiFT [3] are not time series models; CSM utilizes text-like representations, while SwiFT operates on raw fMRI data. We thank the reviewers for suggesting these works, and we believe that including comparisons with them strengthens our results. Additional results are shown below: - **Table 1. 
Results of additional baselines on HCP-Aging.** | | Age | Age | Sex | Sex | |-----------|:------------|:------------|:------------|:------------| | | MSE $\downarrow$ | $\rho$ $\uparrow$ | ACC (%) $\uparrow$ | F1 (%) $\uparrow$ | | SVM/SVR | 0.586 (.019) | 0.699 (.022) | 76.67 (1.88) | 80.82 (1.15) | | BrainMass | 0.396 (.002) | 0.831 (.014) | 74.09 (3.87) | 75.78 (3.37) | | CSM | 0.409 (.012) | 0.733 (.023) | 74.85 (1.11) | 76.23 (0.37) | | SwiFT | 0.341 (.007) | 0.755 (.063) | 73.48 (2.20) | 74.65 (2.32) | | **Brain-JEPA** | **0.298** (.017) | **0.844** (.030) | **81.52** (1.03) | **84.26** (0.82) | - **Table 2. Results of additional baselines on ADNI.** | | NC/MCI | NC/MCI | Amy+/- | Amy+/- | |-----------|:------------|:------------|:------------|:------------| | | ACC (%) $\uparrow$ | F1 (%) $\uparrow$ | ACC (%) $\uparrow$ | F1 (%) $\uparrow$ | | SVM/SVR | 64.21 (5.16) | 73.06 (4.71) | 62.00 (4.00) | 63.84 (5.44) | | BrainMass | 74.21 (5.10) | 81.36 (3.56) | 68.00 (7.48) | 69.29 (8.96) | | CSM | 68.42 (4.99) | 76.74 (4.54) | 63.00 (9.80) | 65.89 (9.79) | | SwiFT | 73.16 (5.31) | 80.46 (4.16) | 65.00 (6.32) | 67.79 (6.38) | | **Brain-JEPA** | **76.84** (1.05) | **86.32** (0.54) | **71.00** (4.90) | **75.97** (3.93) | - **Table 3. 
Additional tasks of AD conversion prediction and depression classification on OASIS-3 and CamCAN datasets.** | | OASIS-3 | OASIS-3 | CamCAN | CamCAN | |:-----------:|:-------------|:-------------|:------------|:------------| | | AD Conversion | AD Conversion | Depression | Depression | | | ACC (%) $\uparrow$ | F1 (%) $\uparrow$ | ACC (%) $\uparrow$ | F1 (%) $\uparrow$ | | SVM/SVR | 56.00 (2.81) | 52.05 (1.66) | 63.64 (3.07) | 56.79 (2.32) | | BrainNetCNN | 62.00 (2.45) | 59.53 (0.58) | 62.73 (4.45) | 56.85 (4.47) | | BrainGNN | 59.00 (2.00) | 56.53 (4.34) | 63.64 (4.98) | 56.68 (3.26) | | BNT | 68.00 (8.72) | 64.73 (11.29) | 65.45 (4.64) | 55.32 (8.67) | | BrainLM | 65.00 (7.75) | 62.67 (9.04) | 70.00 (6.17) | 64.18 (3.82) | | BrainMass | 67.00 (6.00) | 66.53 (6.95) | 70.91 (2.23) | 63.56 (2.93) | | CSM | 61.00 (4.90) | 61.97 (5.49) | 64.55 (4.45) | 56.08 (6.23) | | SwiFT | 65.00 (6.32) | 66.80 (4.12) | 69.09 (6.68) | 61.78 (9.26) | | **Brain-JEPA** | **69.00** (7.35) | **67.32** (7.92) | **72.73** (2.87) | **67.45** (1.57) | - **Table 4. Comparisons of different frameworks.** | | HCP-Aging | HCP-Aging | ADNI | |-------------|:------------|:------------|:------------| | | Age | Sex | Amy+/- | | | $\rho$ $\uparrow$ | ACC (%) $\uparrow$ | ACC (%) $\uparrow$ | | BrainLM | 0.832 (.028) | 74.39 (1.55) | 67.00 (7.48) | | BrainLM *w* contributions | 0.838 (0.014) | 76.36 (2.58) | 70.00 (11.40) | | JEPA *w* contributions | **0.844** (.030) | **81.52** (1.03) | **71.00** (4.90) | - **Table 5. Comparison of different number of gradient components. ‘bg’ means brain gradient.** | | HCP-Aging | HCP-Aging | ADNI | |-------------|:------------|:-----------|:------------| | | Age | Sex | Amy+/- | | | $\rho$ $\uparrow$ | ACC (%) $\uparrow$ | ACC (%) $\uparrow$ | | 3-dim bg | 0.819 (0.003) | 76.96 (1.77) | 67.00 (6.00) | | 30-dim bg | **0.844** (.030) | **81.52** (1.03) | **71.00** (4.90) | The main results on additional baselines and datasets can also be found in the attached PDF. 
If reviewers have any further questions or concerns, please let us know. We are happy to engage in further discussion. References: [1] BrainMass: Advancing Brain Network Analysis for Diagnosis with Large-scale Self-Supervised Learning. TMI 2024. [2] Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data. NeurIPS 2022. [3] SwiFT: Swin 4D fMRI Transformer. NeurIPS 2023. Pdf: /pdf/e3288b567e2fcff645e69c22fe2f9d5b91c78d11.pdf
NeurIPS_2024_submissions_huggingface
2024
Transparent Networks for Multivariate Time Series
Reject
Summary: GATSM effectively captures temporal patterns and handles dynamic-length time series while preserving transparency, outperforming existing GAMs and matching the performance of black-box models like RNNs and Transformers. Strengths: * This paper is easy to understand. * GATSM can be understood as a linear representation with good transparency. * Surprisingly, this method improves performance while providing better interpretability compared with black-box models. Weaknesses: * It is not clear how multi-head attention works in Definition 3.1 to learn temporal patterns. My understanding is that the global feature interacts with the current feature in Eq. 3, so why is the input to the attention not $x$ but the transformed $\tilde{x}$? And then, are temporal patterns captured from attention? * The addition of attention to NBM is under-motivated, so why not just replace $w^{nbm}$ with the attention weight, so that you can learn one less set of parameters $w^{nbm}$? * The experiment lacks DLinear [1], a strong baseline that also provides interpretability. I think it's highly relevant. 1. Are Transformers Effective for Time Series Forecasting? AAAI 2023. * Given the emphasis on interpretation or white-box modeling, quantitative comparisons of the contributions/explanations need to be made rather than visualizations. If there is no ground truth for the contributions, occlusion experiments as in post-hoc methods [2, 3] can also be designed to explore the trade-off between performance and additive features. 2. Encoding time-series explanations through self-supervised model behavior consistency, NeurIPS 2023. 3. TimeX++: learning time-series explanations with information bottleneck, ICML 2024. * In Table 7, why is the throughput of GATSM so much lower than NBM's? Does NBM add up features at the time level without sharing? Technical Quality: 2 Clarity: 3 Questions for Authors: See the weaknesses. 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: It is hard to deal with higher-order interactions (e.g., GA^2M/GA^NM) in time series, since time series often have many time points and GAM-based techniques have high complexity. In addition, temporal-level causal interrelationships could be further explored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: It is not clear how multi-head attention works in Definition 3.1 to learn temporal patterns. My understanding is that the global feature interacts with the current feature in Eq. 3, so why is the input to the attention not $x$ but the transformed $\tilde{x}$? And then, are temporal patterns captured from attention? **A1**: In this paper, we aim to separately model the effects of features using NBM and temporal patterns using the attention mechanism to maintain transparency. In Equation (5), the input to the attention mechanism is $\textbf{v}$, which is a linearly transformed version of $\tilde{\textbf{x}}$ with positional encoding. In Equations (7) and (8), the current time step $\textbf{v}_i$ interacts with the previous time step $\textbf{v}_j$. Therefore, the attention mechanism can capture temporal patterns. In Equation (11), we rewrote the vector-level representation of the final prediction of GATSM in scalar form. This demonstrates that the prediction of GATSM satisfies Definition 3.1 by encapsulating the attention scores and NBM (i.e., $h_b$) into a function $f\_{u,m}$. **Q2**: The addition of attention to NBM is under-motivated, so why not just replace $w^{nbm}$ with the attention weight, so that you can learn one less set of parameters $w^{nbm}$? **A2**: $w^{nbm}$ and the attention score $\alpha$ have different roles. $w^{nbm}$ is applied to the basis functions in the time-sharing NBM. Thus, it can be viewed as a feature-wise weight. In contrast, the attention score $\alpha$ is applied to time steps, serving as a time-wise weight. Therefore, employing both $w^{nbm}$ and $\alpha$ improves the expressivity of GATSM. **Q3**: The experiment lacks DLinear [1], a strong baseline that also provides interpretability. I think it's highly relevant. **A3**: We have added new experimental results comparing GATSM with two recent models, the suggested DLinear and PatchTST. 
These experiments were conducted using three popular forecasting datasets (Electricity, Traffic, and Weather) to evaluate 24-step, 48-step, and 72-step forecasting accuracy. We reported the average MAPEs and standard deviations over five random seeds. The results show that PatchTST performs best in multi-step forecasting tasks, while GATSM demonstrates lower accuracy compared to recent state-of-the-art methods. However, GATSM has a unique advantage: it can provide fully faithful interpretations of the model. Additionally, GATSM is capable of handling dynamic time series, and we have reported its performance on various tasks, including forecasting, binary classification, and multi-class classification. We will add a figure illustrating the interpretations of GATSM and DLinear in multi-step forecasting.

| | Electricity | Electricity | Electricity | Traffic | Traffic | Traffic | Weather | Weather | Weather |
|---|---|---|---|---|---|---|---|---|---|
| | 24h | 48h | 72h | 24h | 48h | 72h | 24h | 48h | 72h |
| GATSM | 0.122 (±0.005) | 0.135 (±0.010) | 0.137 (±0.007) | 0.314 (±0.030) | 0.302 (±0.046) | 0.347 (±0.108) | 0.859 (±0.115) | 0.966 (±0.182) | 0.935 (±0.133) |
| PatchTST | 0.100 (±0.005) | 0.107 (±0.007) | 0.104 (±0.005) | 0.209 (±0.005) | 0.227 (±0.013) | 0.228 (±0.007) | 0.622 (±0.006) | 0.580 (±0.022) | 0.581 (±0.019) |
| DLinear | 0.108 (±0.003) | 0.115 (±0.004) | 0.113 (±0.001) | 0.242 (±0.011) | 0.250 (±0.007) | 0.252 (±0.012) | 0.844 (±0.085) | 0.860 (±0.013) | 0.817 (±0.032) |

**Q4**: Given the emphasis on interpretation or white-box modeling, quantitative comparisons of the contributions/explanations need to be made rather than visualizations. If there is no ground truth for the contributions, occlusion experiments as in post-hoc methods [2, 3] can also be designed to explore the trade-off between performance and additive features. **A4**: We conducted feature occlusion experiments using five datasets, each with more than ten input features. 
We determined the importance of features by averaging the absolute magnitudes of their contributions and retrained GATSM without the bottom 20%, 40%, 60%, and 80% of features. The experimental results show a slight decrease in performance as the occlusion rate increases on the Mortality and Sepsis datasets, indicating that GATSM effectively identified important features. Additionally, performance increases as the occlusion rate increases on the Energy, Heartbeat, and NATOPS datasets, suggesting that GATSM can identify noisy features.

| | Occlusion 20% | Occlusion 40% | Occlusion 60% | Occlusion 80% |
|---|---|---|---|---|
| Energy ($\uparrow R^2$) | 0.408 (±0.111) | 0.487 (±0.062) | 0.464 (±0.099) | 0.562 (±0.098) |
| Heartbeat ($\uparrow$ AUROC) | 0.811 (±0.049) | 0.838 (±0.028) | 0.815 (±0.048) | 0.806 (±0.085) |
| Mortality ($\uparrow$ AUROC) | 0.857 (±0.018) | 0.853 (±0.018) | 0.850 (±0.019) | 0.846 (±0.016) |
| Sepsis ($\uparrow$ AUROC) | 0.796 (±0.005) | 0.793 (±0.005) | 0.784 (±0.013) | 0.768 (±0.017) |
| NATOPS ($\uparrow$ Acc.) | 0.947 (±0.032) | 0.953 (±0.027) | 0.956 (±0.033) | 0.961 (±0.025) |

**Q5**: In Table 7, why is the throughput of GATSM so much lower than NBM's? Does NBM add up features at the time level without sharing? **A5**: The feature functions in NAM, NBM, and GATSM share learnable parameters across time steps (i.e., time-sharing). GATSM additionally requires the computation of attention scores, resulting in lower throughput compared to NBM. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. My major concerns have been well addressed, and I have decided to raise my rating to 5.
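The occlusion protocol described in A4 above (rank features by the mean absolute magnitude of their additive contributions, then drop the bottom fraction before retraining) can be sketched as follows; the array layout and function names are hypothetical, not taken from the GATSM code.

```python
import numpy as np

def features_to_keep(contributions, occlusion_rate):
    """contributions: (samples, time_steps, features) additive
    contributions from a fitted GAM-style model.
    Returns a boolean mask of the features kept after occluding the
    bottom `occlusion_rate` fraction by mean |contribution|."""
    importance = np.abs(contributions).mean(axis=(0, 1))
    n_drop = int(len(importance) * occlusion_rate)
    drop = np.argsort(importance)[:n_drop]      # least important first
    keep = np.ones(len(importance), dtype=bool)
    keep[drop] = False
    return keep

contrib = np.zeros((8, 5, 10))
contrib[..., 0] = 5.0                 # feature 0 dominates the prediction
mask = features_to_keep(contrib, 0.2)
assert mask[0] and mask.sum() == 8    # feature 0 kept, 2 of 10 occluded
```

The model would then be retrained on `x[..., mask]` at each occlusion rate, as in the reported 20/40/60/80% settings.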
Summary: This paper introduces GATSM (Generalized Additive Time Series Model), designed for handling multivariate time series data with a focus on transparency and interpretability. Using independent networks to learn feature representations and transparent temporal modules to learn cross-time-step dynamics, GATSM effectively learns temporal patterns while maintaining interpretability. It achieves performance comparable to black-box time series models on various datasets, and demonstrates its transparent predictions with case studies. Strengths: 1. A key strength of GATSM is its focus on transparency, providing clear insights into the decision-making process, which is crucial for applications in high-stakes domains like healthcare. 2. The paper presents a thorough evaluation across multiple datasets, including Energy, Rainfall, AirQuality, and several healthcare datasets, showcasing the model's robustness and generalization capabilities. 3. The authors provide extensive details about the experimental setup, including data splits, hyperparameters, and computational resources, facilitating reproducibility. Weaknesses: 1. The need for a large number of feature functions can limit scalability (even if it is reduced from TxM to B), particularly with high-dimensional data. 2. The dataset tasks encompass both 1-step forecasting and classification, each requiring distinct evaluation metrics. Presenting all results in a single table without proper clarification leads to confusion. 3. The selected baseline black-box models are relatively simple. Consider including one or two state-of-the-art methods for a more comprehensive comparison. Technical Quality: 2 Clarity: 2 Questions for Authors: Can this model be applied to multi-step prediction with explanations, since one-step forecasting is too simple for real-world applications? 
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The need for a large number of feature functions can limit scalability (even if it is reduced from TxM to B), particularly with high-dimensional data. **A1**: A large number of feature functions are required in a GAM to maintain its transparency, which is a key limitation. We will address this limitation in future work. **Q2**: The dataset tasks encompass both 1-step forecasting and classification, each requiring distinct evaluation metrics. Presenting all results in a single table without proper clarification leads to confusion. **A2**: We agree with your comment. We will add the names of the evaluation metrics to the table and provide a description of both the table and the metrics in the caption. **Q3**: The selected baseline black-box models are relatively simple. Consider including one or two state-of-the-art methods for a more comprehensive comparison. **A3**: We have added new experimental results comparing GATSM with two recent models, PatchTST and DLinear. These experiments were conducted using three popular forecasting datasets (Electricity, Traffic, and Weather) to evaluate 24-step, 48-step, and 72-step forecasting accuracy. We reported the average MAPEs and standard deviations over five random seeds. The results show that PatchTST performs best in multi-step forecasting tasks, while GATSM demonstrates lower accuracy compared to recent state-of-the-art methods. However, GATSM has a unique advantage: it can provide fully faithful interpretations of the model. Additionally, GATSM is capable of handling dynamic time series, and we have reported its performance on various tasks, including forecasting, binary classification, and multi-class classification. 
| | Electricity | Electricity | Electricity | Traffic | Traffic | Traffic | Weather | Weather | Weather |
|---|---|---|---|---|---|---|---|---|---|
| | 24h | 48h | 72h | 24h | 48h | 72h | 24h | 48h | 72h |
| GATSM | 0.122 (±0.005) | 0.135 (±0.010) | 0.137 (±0.007) | 0.314 (±0.030) | 0.302 (±0.046) | 0.347 (±0.108) | 0.859 (±0.115) | 0.966 (±0.182) | 0.935 (±0.133) |
| PatchTST | 0.100 (±0.005) | 0.107 (±0.007) | 0.104 (±0.005) | 0.209 (±0.005) | 0.227 (±0.013) | 0.228 (±0.007) | 0.622 (±0.006) | 0.580 (±0.022) | 0.581 (±0.019) |
| DLinear | 0.108 (±0.003) | 0.115 (±0.004) | 0.113 (±0.001) | 0.242 (±0.011) | 0.250 (±0.007) | 0.252 (±0.012) | 0.844 (±0.085) | 0.860 (±0.013) | 0.817 (±0.032) |

**Q4**: Can this model be applied to multi-step prediction with explanations, since one-step forecasting is too simple for real-world applications? **A4**: Our GATSM can be applied to multi-step forecasting tasks and can provide interpretations for them. We have reported GATSM's performance on multi-step forecasting datasets above. Additionally, we will add a figure illustrating the interpretation of GATSM in multi-step forecasting. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. My major concerns have been well addressed, and I believe they will be addressed in the final version, so I have decided to raise my rating to 5.
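As a toy numpy illustration of the transparent additive form discussed in these rebuttals (per-feature shape functions combined with time-wise weights, in the spirit of Definition 3.1): the shape functions and weights below are hand-picked stand-ins for the learned NBM feature networks and attention scores, not the paper's implementation.

```python
import numpy as np

def additive_predict(x, shape_fns, time_weights):
    """x: (T, M) multivariate series; shape_fns: M scalar feature
    functions (stand-ins for learned NBM feature nets);
    time_weights: (T,) time-wise weights (stand-ins for attention).
    The prediction is a plain sum of per-(time, feature) terms, so
    every term's contribution to the output is directly readable."""
    T, M = x.shape
    contrib = np.array([[time_weights[t] * shape_fns[m](x[t, m])
                         for m in range(M)] for t in range(T)])
    return contrib.sum(), contrib

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
fns = [lambda v: v, lambda v: v ** 2]    # identity and square
w = np.array([0.25, 0.75])               # later time step weighted more
y, contrib = additive_predict(x, fns, w)
assert np.isclose(y, 0.25*1 + 0.25*4 + 0.75*3 + 0.75*16)   # 15.5
```

Because the output is a sum of scalar terms, interpretations like the occlusion and contribution analyses above simply read off `contrib` rather than approximating the model post hoc.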
Summary: The paper introduces a Generalized Additive Model for time series, combining feature embedding and an attention layer. The proposed solution is evaluated on forecasting, binary, and multiclass classification over 8 datasets, against black-box and transparent models. Global, local, time-focused, and feature-focused interpretability methods are provided. Strengths: Experimental evaluation is convincing: the presented model has state-of-the-art performance. On the transparency side, the comparison between models allows the authors to postulate on the presence or absence of interactions of covariates and time-specific patterns. Evaluations of weights at the final linear layer give varied visualizations. The reasoning behind the model construction and components is clear, and ablation experiments for each component were provided. The paper is clear, with no major problems in writing. Weaknesses: I am not sure if the work is original. The idea of using a DNN first on the time axis without covariate interaction is not new, but whether there is a model similar to the proposed solution, I do not know. The review of previous works focuses on Generalized Additive Models, but a similar neural structure may have been presented without being positioned as a GAM. One improvement would be to add more clarity to figure captions. In their present form, they require going back and forth to the text to understand both what is plotted and what conclusions to draw from it. Technical Quality: 3 Clarity: 3 Questions for Authors: In forecasting tasks, there are often long horizons to be considered. Is it possible to build a multi-output model for forecasting using the GAM formalism? In which case, would the interpretation be able to relate future timestamps together, as series are often autocorrelated? Can correlation among dimensions be detected with a GAM? This would be especially relevant in high-dimensional MTS. 
Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Several limitations are identified: possible overfitting of the model due to overparameterization in NBM part, slow attention mechanisms that do not benefit from state of the art methods, and the fact that it was not evaluated for long sequences (and might not be suited for them). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: I am not sure if the work is original. The idea of using DNN first on the time axis without covariate interaction is not new, but whether there is a model similar to the proposed solution, I do not know. The review of previous works focuses on Generalized Additive Models, but a similar neural structure may have been presented without being positioned as a GAM.

**A1**: Several methods, such as DeepAR, apply neural networks along the time axis without covariate interaction, but they involve non-linear interactions with the current and previous states. As a result, they lack transparency, which is the main advantage of our GATSM. To the best of our knowledge, no transparent time series models have been proposed in the previous literature. We will add relevant time series models to the related work section.

**Q2**: One improvement to be made would be to add more clarity to figure captions. In their present form, they require going back and forth to the text to understand both what is plotted and what conclusions to draw from it.

**A2**: We will add detailed captions to the figures and tables to enhance their clarity and improve understanding.

**Q3**: In forecasting tasks, there are often long horizons to be considered. Is it possible to build a multi-output model for forecasting using the GAM formalism? In that case, would the interpretation be able to relate future timestamps together, as series are often autocorrelated?

**A3**: Implementing multi-output and multi-step prediction systems using the GAM formalism is possible. We have conducted 24-step, 48-step, and 72-step prediction experiments using GATSM and two recent time series forecasting models, PatchTST and DLinear, on three popular forecasting datasets: Electricity, Traffic, and Weather. We report the average MAPEs and standard deviations over five random seeds. 
We believe that GATSM can capture autocorrelations between time steps in the auto-regressive setting because it learns temporal patterns across different time steps. If GATSM captures autocorrelation, the interpretation will relate different time steps together.

| | Electricity | | | Traffic | | | Weather | | |
|---|---|---|---|---|---|---|---|---|---|
| | 24h | 48h | 72h | 24h | 48h | 72h | 24h | 48h | 72h |
| GATSM | 0.122(+-0.005) | 0.135(+-0.010) | 0.137(+-0.007) | 0.314(+-0.030) | 0.302(+-0.046) | 0.347(+-0.108) | 0.859(+-0.115) | 0.966(+-0.182) | 0.935(+-0.133) |
| PatchTST | 0.100(+-0.005) | 0.107(+-0.007) | 0.104(+-0.005) | 0.209(+-0.005) | 0.227(+-0.013) | 0.228(+-0.007) | 0.622(+-0.006) | 0.580(+-0.022) | 0.581(+-0.019) |
| DLinear | 0.108(+-0.003) | 0.115(+-0.004) | 0.113(+-0.001) | 0.242(+-0.011) | 0.250(+-0.007) | 0.252(+-0.012) | 0.844(+-0.085) | 0.860(+-0.013) | 0.817(+-0.032) |

**Q4**: Can correlation among dimensions be detected with GAM? This would be especially relevant in high-dimensional MTS.

**A4**: GAM can capture correlations among different dimensions (or features). One straightforward approach is to input high-order features directly into GAM. For example, given three features $x_1$, $x_2$, and $x_3$, we can manually craft second-order features such as $x_1 \times x_2$, $x_2 \times x_3$, and $x_1 \times x_3$ and include them in the input to GAM. This allows GAM to learn second-order interactions, capturing correlations between pairs of dimensions. Additionally, recent methods for high-order interactions in transparent models, such as Scalable Polynomial Additive Model and High-order Neural Additive Model, can be employed. Using these methods enables GAM to capture correlations between different dimensions without manually crafted features.

---

Rebuttal Comment 1.1: Comment: Thank you for the answer. 
I believe that the answer to Q3 especially would improve the paper, paired with interpretation plots attending temporal patterns in multioutput models. I will pass my rating to weak accept, but I will keep my review confidence as 1, as proposing new neural structure isn't my domain.
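The manual second-order feature construction described in A4 of the rebuttal above ($x_1 \times x_2$, etc.) can be sketched as a minimal example. The `add_pairwise_products` helper and the toy matrix are illustrative assumptions, not the authors' code; the idea is only that a purely additive model fed the augmented columns can express pairwise interactions.

```python
import numpy as np
from itertools import combinations

def add_pairwise_products(X):
    """Append x_i * x_j columns for every feature pair, so a purely
    additive (GAM-style) model can express second-order interactions."""
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
X_aug = add_pairwise_products(X)  # columns: x1, x2, x3, x1*x2, x1*x3, x2*x3
print(X_aug.shape)  # (2, 6)
```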
Summary: This work aims to build transparent models for the time series domain for better interpretability. Specifically, the authors proposed a model called the Generalized Additive Time Series Model (GATSM) that consists of independent feature networks as well as a temporal attention module to learn temporal patterns. The corresponding model can be written in a scalar form to ensure interpretability/transparency. The authors applied their model to several datasets and showed that GATSM can outperform existing generalized additive models. The model can also be used to interpret the features in the original dataset. Strengths: - Important motivation. Building transparent models for time series is a critical task. - The scalar representation of features seems clean (eq 11) Weaknesses: While this work is not of my direct expertise, I think the following contents have room for improvement: 1. Experimental results are weak. - Black-box time series models seem outdated. The authors should consider better models such as TimesNet, PatchTST, FreqTransformer, or Informer for commonly used black-box models. - Forecasting tasks use the R2 score for evaluation, but an R2 score of 0.07 (or, in general, below 0.5) seems very low. The authors should show some visualization examples to ensure the model is functioning. - Figures 4/5 are not self-explanatory; the authors should try to explain what is happening in those figures, and how the interpretability is quantified. - The work could benefit from a synthetic dataset, where causal relationships are manually crafted and thus can be evaluated. Technical Quality: 2 Clarity: 3 Questions for Authors: - Line 123: How should one understand the proposed basis functions? Are they similar to quantized vectors in works like VQVAE? - Eq 5: How would positional embeddings affect the interpretability of the model? - How do the interpretability results differ in datasets with very long length (e.g. Mortality) vs. those of shorter length? 
Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Black-box time series models seem outdated. The authors should consider better models such as TimesNet, PatchTST, FreqTransformer, or Informer for commonly used black-box models.

**A1**: We have added new experimental results comparing GATSM with two recent models, PatchTST and DLinear. These experiments were conducted using three new datasets—Electricity, Traffic, and Weather—to evaluate forecasting accuracy for 24-hour, 48-hour, and 72-hour predictions. We report the average MAPEs and standard deviations over five random seeds. The results show that PatchTST performs best in multi-step-ahead forecasting tasks, while GATSM demonstrates lower accuracy compared to recent state-of-the-art methods. However, GATSM has a unique advantage: it can provide fully faithful interpretations of the model. Additionally, GATSM is capable of handling dynamic time series, and we have reported its performance on various tasks, including forecasting, binary classification, and multi-class classification.

| | Electricity | | | Traffic | | | Weather | | |
|---|---|---|---|---|---|---|---|---|---|
| | 24h | 48h | 72h | 24h | 48h | 72h | 24h | 48h | 72h |
| GATSM | 0.122(+-0.005) | 0.135(+-0.010) | 0.137(+-0.007) | 0.314(+-0.030) | 0.302(+-0.046) | 0.347(+-0.108) | 0.859(+-0.115) | 0.966(+-0.182) | 0.935(+-0.133) |
| PatchTST | 0.100(+-0.005) | 0.107(+-0.007) | 0.104(+-0.005) | 0.209(+-0.005) | 0.227(+-0.013) | 0.228(+-0.007) | 0.622(+-0.006) | 0.580(+-0.022) | 0.581(+-0.019) |
| DLinear | 0.108(+-0.003) | 0.115(+-0.004) | 0.113(+-0.001) | 0.242(+-0.011) | 0.250(+-0.007) | 0.252(+-0.012) | 0.844(+-0.085) | 0.860(+-0.013) | 0.817(+-0.032) |

**Q2**: Forecasting tasks use the R2 score for evaluation, but an R2 score of 0.07 (or, in general, below 0.5) seems very low. The authors should show some visualization examples to ensure the model is functioning. 
**A2**: The scores of all models on the Rainfall dataset are low in the experiments, suggesting that this dataset is very complex and hard to forecast. Nevertheless, time series models significantly outperform tabular models, indicating that they effectively capture temporal patterns. We will add figures of GATSM's forecasting results on the Rainfall dataset to demonstrate that it is functioning appropriately.

**Q3**: Figures 4/5 are not self-explanatory; the authors should try to explain what is happening in those figures, and how the interpretability is quantified.

**A3**: In Figures 4 and 5, the x-axis (left) represents feature contributions, the sub-x-axis (right) represents feature values, and the y-axis represents time steps. These visualizations illustrate the patterns between feature values and their contributions. Due to GATSM's separate modeling scheme of feature contribution and time importance, we can obtain both time-independent and time-dependent feature contributions. Figure 4 shows the time-independent feature contributions, indicating the effects of features if no temporal patterns exist. In contrast, Figure 5 shows the time-dependent feature contributions, indicating the effects of features while considering previous history. Therefore, the feature contributions in Figures 4 and 5 can differ. For example, in Figure 4, SO2, NO2, and CO show only a positive correlation between contribution and feature value. However, in Figure 5, time lags appear in these three features. We will add a more detailed description of Figures 4 and 5 in the manuscript.

**Q4**: The work could benefit from a synthetic dataset, where causal relationships are manually crafted and thus can be evaluated.

**A4**: We believe that causal discovery and making precise predictions by taking confounders into account using transparent models are promising directions for our future work. Since causality is outside the scope of this paper, we plan to pursue these directions in future studies. 
**Q5**: Line 123: How should one understand the proposed basis functions? Are they similar to quantized vectors in works like VQVAE?

**A5**: Vector quantization and basis functions are similar concepts with slight differences. Both methods aim to decompose a function into smaller components or construct a function by combining multiple functions. However, the key difference is that vector quantization maps a continuous vector into a discrete space and uses a new vector corresponding to that mapped space. In contrast, basis functions construct a larger continuous function using multiple smaller continuous functions.

**Q6**: Eq 5: How would positional embeddings affect the interpretability of the model?

**A6**: Positional encodings do not affect the feature functions (NBM); they only influence the attention scores. This design choice is intended to separately model feature effects through NBM and temporal patterns through the attention mechanism, ensuring transparency in time series. Therefore, positional encoding does not decrease the interpretability of GATSM.

**Q7**: How do the interpretability results differ in datasets with very long length (e.g. Mortality) vs. those of shorter length?

**A7**: Due to the transparency of GATSM, it consistently produces fully faithful interpretations of the model. As a result, the length of the time series does not affect the interpretability of GATSM. We will add figures similar to Figures 2, 3, 4, and 5 for long-length datasets.

---

Rebuttal 2: Title: Thanks Comment: Thank you for the response. The new results show that GATSM performs worse than SOTA black-box networks on new datasets, and the authors did not provide results showing the performance of SOTA black-box networks on the datasets presented in the paper. Additionally, the authors did not provide new synthetic results as I requested. Thus, I'd keep my score the same.

---

Rebuttal 3: Title: Response to Reviewer JRxn Comment: Thank you for your feedback. 
We agree that the experiments you suggested will improve the quality of our paper. Thus, we have added the following new experimental results:

- In the multi-step forecasting experiments, PatchTST and DLinear outperform our GATSM. However, our primary contribution is the development of a novel, interpretable model for multivariate time series, GATSM, which offers human-understandable interpretations with a slight trade-off in accuracy.
- We trained PatchTST and DLinear on the Energy, Rainfall, and AirQuality datasets, reporting their $R^2$ scores and standard deviations across five random seeds. However, performance results for the remaining five datasets are unavailable for the following reasons: the Mortality and Sepsis datasets have dynamic-length sequences, which are incompatible with forecasting models that require fixed-length observations. Additionally, the Heartbeat, LSST, and NATOPS datasets involve many-to-one classification tasks (binary or multi-class), a setting not supported by the publicly available implementations of PatchTST and DLinear.

| | Energy ($\uparrow R^2$) | Rainfall ($\uparrow R^2$) | AirQuality ($\uparrow R^2$) |
|---|---|---|---|
| DLinear | 0.486(+-0.102) | 0.086(+-0.014) | 0.685(+-0.024) |
| PatchTST | 0.413(+-0.236) | 0.098(+-0.019) | 0.638(+-0.045) |
| GATSM | 0.493(+-0.173) | 0.073(+-0.027) | 0.583(+-0.026) |

- We conducted an experiment using a synthetic tumor growth dataset[1] that simulates changes in tumor size over time. The tumor size at any given time step is influenced by three factors: prior tumor size, chemotherapy, and radiotherapy. We set the chemotherapy coefficient to 10; that is, chemotherapy significantly reduces tumor size. The experimental results showed that while GATSM underperforms compared to more complex black-box models, it still achieves strong accuracy ($R^2$ > 0.9). 
Additionally, we visualized the contributions of the three factors, revealing that recent time steps are notably more influential than earlier time steps due to the direct impact of prior tumor size. GATSM also effectively captured the effects of chemotherapy and radiotherapy on reducing tumor size, with chemotherapy having a significantly greater impact than radiotherapy.

| | SyntheticTumorGrowth ($\uparrow R^2$) |
|-|-|
| Transformer | 0.965(+-0.002) |
| DLinear | 0.956(+-0.009) |
| PatchTST | 0.953(+-0.006) |
| GATSM | 0.906(+-0.016) |

[1] Geng, C., Paganetti, H., & Grassberger, C. (2017). Prediction of treatment response for combined chemo- and radiation therapy for non-small cell lung cancer patients using a bio-mathematical model. Scientific Reports, 7(1), 13542.
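The contrast drawn in A5 earlier in this thread (continuous basis-function combination vs. discrete vector-quantization lookup) can be made concrete with a small sketch. The Gaussian bases and the three-entry codebook below are arbitrary illustrative assumptions, not anything from the paper or from VQVAE itself.

```python
import numpy as np

# Basis functions: a larger continuous function built as a weighted sum
# of smaller continuous functions (here, Gaussian bumps on [0, 1]).
centers = np.linspace(0, 1, 5)

def basis_expand(x, width=0.2):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def f(x, weights):
    return basis_expand(x) @ weights  # varies continuously with x

# Vector quantization: map a continuous vector to its nearest codebook
# entry and use that discrete entry instead.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def quantize(v):
    return codebook[np.argmin(np.linalg.norm(codebook - v, axis=1))]

w = np.ones(5)
print(f(0.5, w))             # smooth, continuous output
print(quantize([0.9, 0.8]))  # snaps to the discrete entry [1., 1.]
```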
Rebuttal 1: Rebuttal: ### **Response to all reviewers** We appreciate all the reviewers for their helpful comments and discussion on our manuscript. The feedback provided was instrumental in improving the quality of the manuscript. We have addressed each of the reviewers' questions and concerns individually.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Diffusion4D: Fast Spatial-temporal Consistent 4D generation via Video Diffusion Models
Accept (poster)
Summary: The paper presents a novel framework for generating 4D content, leveraging video diffusion models, and introduces motion magnitude reconstruction loss and 3D-aware classifier-free guidance to refine dynamics learning and generation. The framework is shown to enhance generation efficiency and 4D geometry consistency relative to existing methods. Strengths: 1. This paper is the first to directly generate 4D videos using a video diffusion model, partially addressing the spatial-temporal inconsistencies caused by using separate multi-view 3D synthesis models and monocular video generation models in previous works. 2. Curate a large-scale, high-quality dynamic 3D dataset sourced from Objaverse to facilitate future research. Weaknesses: 1. This method still uses two models (VideoMV and Diffusion4D). Although it is claimed that the outputs of the two models are very similar, the supplementary video still shows considerable inconsistencies, which to some extent undermine the overall consistency. 2. In the paper, both numerical and visual comparisons only used 24 images in orbital videos as targets. However, this reduces the difficulty because the viewpoints of the final 4D renderings are almost the same as those generated by Diffusion4D, and the viewpoints generated by Diffusion4D are too limited compared to other methods. Maybe the author could provide more results to demonstrate that the 4D Gaussian obtained by this method can still render high-quality images from different viewpoints. For example, they could fix the camera at a frontal view to render images at different time points or use viewpoints opposite to those generated by diffusion4D. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have acknowledged some limitations in their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer mXZW, many thanks to you for taking the time and effort to review our work and providing constructive comments. We are grateful for your recognition of the novelty and contributions of our research, noting that **it is the first work training a singular video diffusion model directly on 4D datasets for explicit synthesis of spatial-temporal consistent views of 4D assets. We also appreciate your acknowledgment of the great value of our curated high-quality dataset in facilitating future research.** We have thoroughly considered your constructive feedback, and we are committed to addressing all your concerns as follows:

**Q1. VideoMV and Diffusion4D outputs show differences.** In the 4D construction stage, we apply the dynamic images from our 4D-aware video diffusion model to optimize the 4DGS. We also involved static images from VideoMV to help initialize the 3D geometry. Since our 4D-aware video diffusion model was finetuned from VideoMV, we noticed that the inconsistency between the two sets of images is very limited. **We proposed a coarse-to-fine optimization strategy in the 4D construction stage, and we carried out extensive experiments and validated that this strategy can effectively resolve the potential inconsistency.** Specifically, in the coarse stage, the static images from VideoMV are only used to initialize the 3D geometry (the attributes of gaussians), and are not used to train the deformation network for motion learning. The potential inconsistency is mitigated in the fine stage, where only dynamic images from Diffusion4D are used to finetune both the attributes of gaussians and the deformation network to guarantee spatial-temporal consistency.

**Q2. More numerical and visual comparisons on novel views** We highly appreciate your comments at this point, which have greatly contributed to enhancing the quality and clarity of our work. 
In response, we have conducted more extensive numerical comparisons and visual demonstrations on novel views, as detailed below. **1) Numerical Comparisons.** We first clarify the details regarding numerical comparison in our main submission. As shown in Tab.1 of main.pdf, we evaluated both the images from 4D-aware video diffusion model (Diffusion4D*) and the renderings from constructed 4D assets (Diffusion4D). For the latter, we uniformly rendered 36 orbital viewpoints along the timeline as the targets (L252), which resulted in most of them being at novel azimuth angles distinct from the 24-frame videos. At this point, our comprehensive evaluation covered a set of metrics including CLIP-O, SSIM, PSNR, LPIPS, and FVD. Additionally, we measured the novel front views using CLIP (CLIP-F in Tab.1), where we fixed the viewpoint at the front view and similarly, uniformly rendered 36 images along the timeline as targets (L253). Thus, most of these images are at novel timestamps distinct from the 24-frame videos. We will clarify this in the refined version. To strengthen our quantitative evaluation, we used our validation set and further evaluated **two sets of novel viewpoints with all the metrics**. The first set is the previously used 36-frame front-view renderings (F); the second set is a new 36-frame orbital-view renderings with an elevation angle fixed at 30° (O-30°). For both sets, we used ground truth images rendered from the dynamic 3D assets in Objaverse as references. Metrics were computed between ground truth images and rendered images from the same viewpoints and timestamps. 
The results are shown in the table below:

| | CLIP-F↑ | SSIM-F↑ | PSNR-F↑ | LPIPS-F↓ | FVD-F↓ | CLIP-O-30°↑ | SSIM-O-30°↑ | PSNR-O-30°↑ | LPIPS-O-30°↓ | FVD-O-30°↓ |
|-|-|-|-|-|-|-|-|-|-|-|
| (text) 4D-fy | 0.78 | - | - | - | - | 0.55 | - | - | - | - |
| (text) Animate124 | 0.75 | - | - | - | - | 0.51 | - | - | - | - |
| (text) Diffusion4D | **0.81** | - | - | - | - | **0.62** | - | - | - | - |
| (image) 4DGen | 0.84 | 0.72 | 15.1 | 0.28 | 691.5 | 0.66 | 0.66 | 13.9 | 0.34 | 760.5 |
| (image) Stag4D | 0.86 | 0.78 | 16.0 | 0.25 | 624.7 | 0.69 | 0.74 | 15.1 | 0.30 | 705.8 |
| (image) Diffusion4D | **0.89** | **0.80** | **16.2** | **0.24** | **594.5** | **0.72** | **0.81** | **16.5** | **0.23** | **526.4** |

Note that CLIP-F is adopted from Tab.1 in main.pdf. Comparing these results along with the results in Tab.1, we can observe the following:

- Our method consistently outperforms the baselines across all the metrics at the novel viewpoints.
- For the semantic metric (CLIP), all the methods perform better in front views than orbital views;
- For the photometric metrics, the competitors (4DGen, Stag4D) perform better in front views than orbital views, while our method performs better in orbital views than front views. This is because the competitors involved the ground-truth front-view videos in their training for 4D construction, whereas our method utilized the orbital videos generated by our 4D-aware video diffusion model in 4D construction.

**2) Visual comparisons.** First, we would like to clarify that each rendered video in the supplementary material consists of 160 frames around the constructed 4D asset, with most frames presenting novel viewpoints not seen in the 24-frame videos. Also, we kindly invite you to refer to Fig.A in rebuttal.pdf, where we provide renderings of samples from more novel views. In addition to the orbital view at an elevation angle of 0°, we also visualize the renderings from the orbital view at an elevation angle of 30°, and a monocular view by fixing the camera at the front view. 
Compared to the baselines, we can observe that our constructed 4D assets exhibit **larger motion magnitude, more complex texture, smoother appearance, and more consistent geometry** across multiple viewpoints. Please refer to global rebuttal (1) and (2) for detailed discussions. We are willing to address any further questions or concerns you might have within this review window.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed explanation (it has resolved my main concern). I will maintain 5: Borderline accept. The main reason is that it's the first to directly generate 4D videos, which I believe is an important and unsolved problem.

---

Reply to Comment 1.1.1: Title: Thank you for your comment Comment: Dear Reviewer mXZW, Thank you very much for taking the time to read our responses and we are grateful for your recognition. And we are truly happy that our rebuttal resolves your concerns. Should there be any ambiguities or further questions, please know that we are more than willing to provide clarity or delve deeper into any topic. Best wishes, Authors of Paper 3414
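The photometric evaluation protocol described in this thread (metrics such as PSNR computed between ground-truth renderings and renderings from the constructed 4D assets, at matched viewpoints and timestamps) can be sketched minimally. The random images below are placeholders for real renderings, and this is not the authors' evaluation code.

```python
import numpy as np

def psnr(gt, pred, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.random((256, 256, 3))                             # stand-in GT rendering
pred = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1)   # stand-in 4DGS rendering
print(f"{psnr(gt, pred):.1f} dB")
```

In the rebuttal's setting this would be averaged over the 36 rendered viewpoints/timestamps per asset; SSIM, LPIPS, and FVD require their own (learned or windowed) implementations and are omitted here.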
Summary: The paper tackles 4D asset generation. Its pipeline consists of two parts: a 4D-aware video diffusion model, where orbital videos circling the 4D assets can be seamlessly generated, and a coarse-to-fine 4D construction stage afterward. The video diffusion model is trained on a novel dynamic dataset curated by the authors from Objaverse. The method can generate pretty high-quality 4D assets while being quite efficient compared with SDS-based methods. Strengths: 1. The task of 4D asset generation is extremely challenging and well-motivated. 2. The paper is nicely written, and ablation studies were done to justify its design choices. 3. Evaluations were done with reasonable metrics and against SOTA methods. 4. Tremendous engineering efforts can be observed from all the design choices, such as the motion magnitude conditioning and guidance, details about the 4D construction, etc. Weaknesses: 1. The effectiveness of the Diffusion4D model heavily relies on a meticulously curated high-quality 4D dataset. The dependency on large-scale, high-quality datasets might limit its applicability in scenarios where such data is scarce or hard to generate. 2. The current resolution and temporal sequence length significantly limit the quality and realism of the generated 4D content --- I could barely tell whether there is motion in many samples until after looking at them several times. 3. The introduction of a novel 3D-to-4D motion magnitude metric and the corresponding reconstruction loss is innovative; however, the paper could explore in more detail how sensitive the model’s performance is to the tuning of these parameters. It’s also not clear how much the performance depends on the exact settings of these loss functions, which could be critical for users trying to replicate or build upon this work. 
Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer BwxH, We would like to extend our great thanks for your time and effort in reviewing our work and providing insightful comments. We thank you for recognizing the values and contributions of our work, noting that **the paper is nicely written, the solution to the challenging task is well-motivated, the experimental evaluations and ablation studies are comprehensive and convincing, and the generated assets are of pretty high quality. We are also grateful for your acknowledgment of our tremendous engineering efforts in high-quality data curation and experimental design.** We have carefully considered your feedback and would like to address your comments as follows: **Q1. Dependency on 4D dataset.** We acknowledge that the success of our method relies on a curated high-quality dataset. To the best of our knowledge, our work is the first that trains a singular video diffusion model directly on 4D datasets for explicit synthesis of spatial-temporal consistent views of 4D assets. We adopt a data-driven optimization strategy that reduces the generation time from a couple of hours to several minutes. The data scarcity is an issue in this research field. To mitigate this problem, we made significant efforts to curate a high-quality 4D dataset from Objaverse. We believe that this dataset can make contributions to future research in this community. **Q2. Resolutions, temporal length, and large motion.** 1) *Resolution and temporal length*. We followed prior works [1, 12, 27, 34, 50, 52, 55] and chose a resolution of 256², considering GPU memory. With the same resolution, as shown by more examples in Fig.A of rebuttal.pdf, our method can generate assets with **larger motion, more complex textures, clearer appearance, and more consistent geometry** compared with baselines. We have adopted a longer temporal length of 24, compared to 16 in most prior works. 
We appreciate your feedback and are committed to exploring higher resolutions in the future study. 2) *Large motion*. Please refer to Fig.3 in main.pdf and Fig.A in rebuttal.pdf for more demonstrations of our method and the baselines. - In Fig.3 in main.pdf, as highlighted by the contours, our animations include cartoon characters and humans stepping forward, running or raising arms, birds flapping wings, and lights changing. Due to space constraints, for the baselines, we only showed two representative viewpoints at starting and ending timestamps. The objects’ poses at two timestamps are almost identical, as the baselines generated samples with invisible motions along the whole timeline. In contrast, the motions from our method are much more obvious and pronounced. - In Fig.A in rebuttal.pdf, we provide more samples with larger motion magnitudes, including the pose change of a knight, flapping of butterfly wings, turtle swimming, the kid falling, etc. In addition to orbital views that illustrate the 3D geometry consistency, we also exhibit novel front views. The generated motions are much more apparent seen from the front view. We can observe that our model is capable of generating 4D assets with very large motions and great geometry consistency. We believe that these demonstrations, along with the visualizations in the main submission, could provide a well-rounded view of our generation results and successfully address your concerns. We also kindly invite you to refer to global rebuttal-(1) and (2) for more details. **Q3. More analysis on the effect and the sensitivity of the design components.** Thank you for your recognition of the novelty of the proposed 3D-to-4D motion magnitude metric (_m_) and innovation of our motion magnitude reconstruction loss ($L_{mr}$). We are pleased to receive your constructive comment and provide a more in-depth analysis of the effect and the sensitivity of the design components as follows. 
1) 3D-to-4D motion magnitude metric (_m_). In the ablation study, we conducted a qualitative analysis on the effect of _m_. By comparing (b) and (c) in Fig.5 of main.pdf, we can observe that increasing _m_ could augment the dynamics of the kid. We also provide more examples in Fig.B of rebuttal.pdf. We can observe that increasing _m_ augments the dynamics of the generated assets.

2) Effect of motion magnitude reconstruction loss ($L_{mr}$). In the ablation study, we conducted both quantitative and qualitative analysis on the effects of $L_{mr}$ in Tab.2 and Fig.5 of main.pdf. We can observe that adding in $L_{mr}$ is critical to animating the static objects and improving the photometric performance. We further evaluate the effect of $L_{mr}$ on the model performance by monitoring the validation loss along the training process. Please refer to Fig.C in rebuttal.pdf. Specifically, we trained two image-to-4D generation models under the same settings, except that one included $L_{mr}$ and the other did not. At each training step, we evaluate the model's denoising capability with the validation set. We applied the same magnitude of noise (controlled by $t$) to the clean validation set in the diffusion process, and computed the noise prediction loss $L_{ldm}$ and motion magnitude reconstruction error (*mre*) (Eq.(2)) after denoising. As shown in Fig.C in rebuttal.pdf, **training with $L_{mr}$ leads to faster convergence in diffusion learning and better performance in motion magnitude reconstruction.** (Note that the curves are smoothed with a sliding window of size 5.) Additionally, we empirically found that our framework is robust to the setting of the weight _w_ of $L_{mr}$.

3) For the explicit 4D construction stage, we followed [46] and adopted the same weights for $L_{1}$ and $L_{lpips}$. We empirically found that the model performance is robust to the choice of weight of $L_{depth}$, which serves as a regularizer. 
We will release our codebase soon to facilitate replication and further development of our work.
Summary: The paper presents Diffusion4D, a novel method for generating 4D content that maintains spatial and temporal consistency. It improves upon previous approaches by embedding spatial-temporal consistency within a single video diffusion model, enabling efficient creation of 4D assets. The technique uses a curated dataset of high-quality dynamic 3D objects to train a 4D-aware video diffusion model, which can synthesize orbital views of moving 3D assets. The method outperforms existing techniques in terms of generation speed and 4D geometry consistency. Strengths: 1. This paper is well-written and easy to follow 2. The proposed subset of objaverse and objaverse-xl is valuable for subsequent research 3. The quantitative experiments presented in the main text demonstrate the superiority of the proposed method. Weaknesses: 1. The qualitative comparisons are not that convincing, the generated motions are quite small and easy, the authors are encouraged to provide more experimental results in the rebuttal. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The qualitative comparisons are not that convincing, the generated motions are quite small and easy, the authors are encouraged to provide more experimental results in the rebuttal. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer fabU, thank you very much for taking the time to review our work and providing constructive feedback. We also thank you for recognizing our paper's **novel method, valuable data contribution for subsequent research, good writing, and superior quantitative results**. We would like to take this opportunity to address your comments regarding qualitative comparisons. **Provide more qualitative comparisons** **1) Comparisons with baselines.** Please refer to Fig.3 in main.pdf and Fig.A in rebuttal.pdf for qualitative comparisons between our method and the baselines. - **Larger motion magnitude.** In Fig.3, due to space constraints, we only showed two viewpoints at the starting and ending timestamps as representatives for the baselines. It is evident that the objects’ poses at the two timestamps are almost identical, as the baselines generated samples with almost invisible motions along the whole timeline. In contrast, the motions from our method are much more obvious and pronounced. Also in Fig.A in rebuttal.pdf, we show more cases with large motions, including the pose change of a knight, the flapping of butterfly wings, a turtle swimming, a kid falling, and the arm movements of a cartoon dog and Mario. Our method is capable of generating 4D assets with much larger motions. - **More complex texture, smoother appearance, and more consistent geometry.** For the generation details, by zooming in, we can observe that the baselines suffer from blurry texture, noisy appearances, excessive artifacts, and incomplete geometry. On the other hand, our method provides more complex texture, smoother appearances, and more consistent geometry. These phenomena can also be observed in both Fig.3 in main.pdf and Fig.A in rebuttal.pdf. **2) Consistent novel views.** - In Fig.A in rebuttal.pdf, in addition to orbital views with an elevation angle of 0°, we also provide more novel views by rendering images from the constructed 4D assets. 
We show orbital views with elevation angle of 30° and front view images along the timeline. The motions are much more apparent by fixing the viewpoint at the front view. We can observe that the constructed 4D assets demonstrate large motions, coherent appearance, and consistent geometries across timestamps and viewpoints. We believe that these demonstrations, along with the visualizations in the main submission, could provide a well-rounded view of our generation results and successfully address your concerns. And we are pleased to offer more video demonstrations if policy allows. Thank you again! --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thanks for your reply. I will maintain my original score. --- Reply to Comment 1.1.1: Title: Thank you for your comment Comment: Dear Reviewer fabU, Thank you very much for taking the time to read our rebuttal. We are very happy that our rebuttal resolves your main concerns and that we are very grateful for your recognition. Since today is the last day for author/review discussion, if you have further questions or queries, we are ready and more than willing to answer them promptly today. Best wishes, Authors of Paper 3414
Summary: This paper studies 4D object generation. Instead of using SDS, this paper proposes to use a video diffusion model for generating multi-view images first, after which a 4D Gaussian Splatting representation is learned from the multi-view images. To facilitate the training of the video diffusion model, this paper uses a filtered subset of Objaverse as pretraining data. It also adds a motion magnitude embedding to the diffusion model as an extra condition for guiding the generation process. The developed framework Diffusion4D is able to generate 4D objects from various conditions: image, text, static-3D. It's much faster than the previous SDS-based methods, and achieves better results on rendering metrics and a human preference benchmark. Strengths: This paper has a few strengths: - It tries to solve the slow convergence problem of SDS, and proposes a solution based on multi-view image generation and Gaussian splatting re-training. - To address temporal consistency, this paper adopts a video diffusion model for generating the multi-view images. - A motion magnitude metric is developed both as a metric and as an extra condition to the framework for improving the performance. - The proposed framework is able to generate 4D objects from various conditions including image, text, and static-3D. - The proposed framework also achieves better results on generation metrics and human preference. - The authors curated a filtered subset of Objaverse, which will be released to the community of 4D generation research. Weaknesses: In my opinion, the below things in this paper can be improved: - Unclear and insufficient comparison to related work in Sec.2. 4D content generation is an active research field, and multiple recent works are mentioned in Sec.2. However, I found it's unclear 1) what the differences are between the proposed method and these related works, 2) why aren't all of them considered in the experiments? 
A comparison table clearly listing the pros and cons of the proposed method and all related works would greatly improve the readability. The authors might even consider including some concurrent arXiv works such as L4GM/DreamGaussian4D/Gaussian4D/etc (not required though). - Limited visual quality of the results. When I check the results in both the paper and the supplementary material (video), I find the quality limited in three ways: 1) the rotating videos show that most generated objects still suffer from view/time inconsistency. 2) The generated objects lack local fine details and usually have very simplified texture. Also, the local details are not sharp and often look blurred. The 256x256 resolution might be one factor in this. 3) In the figures of this paper, the differences between the proposed method and the baselines are also hard to see. Overall, the generated objects are of low resolution and lack fine details. The inconsistency is also obvious in the videos. - The method section isn't very clear. Specifically: - In Eq.1 m takes as input a video, but in Eq.2 m takes as input one latent $z_0$. What does m do exactly? And how is the motion magnitude embedding extracted from m? - In Eq.2, both m terms in $L_{mr}$ have the same offset $\bar{z}_0$; isn't this loss equivalent to $\frac{1}{T}||z_0-\hat{z}_0||^2$? Why do we need to write it in the current form, with two squared terms inside a squared term? - Visually, the effect of the motion magnitude guidance term isn't clear. It's unknown to me which artifacts this term helps to address and how it improves the overall results. - The entire Coarse-to-Fine 4D construction seems to be a model-specific thing; it's proposed to address the issue of 4D Gaussian Splatting (4DGS). Is this module really necessary if a different 4D representation is used? And what's the benefit of using 4DGS over other methods? 
Technical Quality: 3 Clarity: 2 Questions for Authors: - For different conditions such as text, image, and static-3D, are different diffusion models used for each input? - Limiting the camera to only 1 DOF (azimuth angle) seems unreasonable, especially during test time. For the generated 3D/4D content, there is no reason to only evaluate the rendering results at a fixed elevation and distance. - The backbone/pre-trained diffusion models are extremely important, and I'm wondering why the current set of models (Sec.4.1) was selected, and what the insights and benefits are. - Why filter out the objects with big motion in the dataset? These data are the challenging ones and should serve as hard examples for the proposed method. A good dataset should have a long-tail distribution including these hard examples. Removing them from the data might make the established benchmark unfair. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes, the authors talked about limitations in the Appendix, such as the limited spatial-temporal resolutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer G7WS, we would like to thank you for taking the time to review our work and providing valuable feedback. We also thank you for recognizing the importance of our targeted issue, the effectiveness and versatility of our framework, and the value of our curated dataset. We thoroughly considered your questions, and we are committed to addressing all your concerns as follows: **Q1. Comparison to related works:** 1) We have elaborated on the motivations of our method and its distinctions from prior works in the 2nd paragraph of Sec.1 and L99, 102-106 in Sec.2. Also, in response to your request, we provide a clearer summarization of prior 4D generation works in global rebuttal-(3). Our approach differs from priors in terms of optimization and consistency enhancement strategies. The innovative design of the 4D-aware video diffusion model enables us to swiftly generate spatial-temporally consistent 4D assets. Please refer to global rebuttal-(3) for more details. 2) We compared our method with the most relevant, latest, and well-recognized works. For text-to-4D, we compared with 4D-fy (CVPR2024) and Animate124 (arXiv, Nov.2023). For image-to-4D, we compared with Stag4D (ECCV2024) and 4DGen (arXiv, Dec.2023). Notably, Stag4D claimed better performance than DreamGaussian4D (arXiv, Dec.2023) in its Fig.5 of Sec 4.3. L4GM (arXiv, June 2024) focused on the video-to-4D generation task and was released after the NeurIPS submission deadline, making it impossible for us to include a comparison. **Q2. Visual quality** 1) For generation consistency, we quantitatively assessed the results with FVD and conducted qualitative comparisons via an extensive user study. Results in Tab.1 in main.pdf comprehensively demonstrate our superiority over baselines in geometry consistency. We can observe from the supplementary videos that ‘Diffusion4D’ demonstrates consistent 3D geometry, with the appearances and motions exhibiting coherence and smoothness across 360°. 
This can be attributed to the extensive spatial-temporal attentions in the video model architecture. 2) We trained our model on the Objaverse dataset, which consists of simple objects, and this is the reason for the simplified texture. The 256² resolution was also adopted in prior works [1, 12, 27, 34, 48, 50, 52, 55]. At the same resolution, as shown in Fig.A in rebuttal.pdf, our method can generate assets with more complex textures, clearer appearance, and more consistent geometry compared with baselines. We appreciate your feedback and are committed to exploring more complex textures and higher resolutions in future work. 3) For the qualitative comparison with baselines, we provide a detailed explanation in global rebuttal-(1) and (2). Please refer to global rebuttal-(1) (2) and Fig.A in rebuttal.pdf for more details. **Q3. Method section details** 1) _m_ is the proposed metric of 3D-to-4D motion magnitude and it can be evaluated in both pixel space (Eq.1) and latent space (Eq.2). In Eq.1, _m_ is computed by averaging the differences across the T images $I_i$ in a video; in Eq.2, _m_ is likewise computed by averaging over the T image latents in _z_. These two operations are technically consistent, so we use the same notation. We will revise the notations to avoid confusion. The motion magnitude is embedded in the same way as the timestamp embedding: we project the motion magnitude into a positional embedding followed by two MLP layers. 2) As _z_ is a matrix, the mathematical form in the paper is computationally different from the suggested form. We derive $L_{mr}$ in this form to encourage the model to reconstruct the motion magnitude explicitly. We empirically found the proposed form leads to better learning of motion magnitude. **Q4. Effect of motion magnitude guidance.** In the ablation study, comparing (b) and (c) in Fig.5, we can observe that increasing _m_ augments the dynamics of the kid. In Fig.B in rebuttal.pdf, we show more cases with varying motion magnitudes. 
We can observe that increasing _m_ augments the dynamics of the generated assets. **Q5. Benefit of 4DGS.** Many previous works [37,46,49] have demonstrated that Gaussian Splatting benefits from an explicit representation, better construction details, faster rendering, and lower memory usage compared to NeRF. Therefore, to achieve fast and high-quality 4D generation, we selected 4DGS for 4D construction. **Q6. More questions** 1) *Different conditions*. Yes, we trained different diffusion models for distinct input modalities. The flexibility of the video diffusion model architecture allows our framework to accommodate various prompt modalities with minor modification. Please refer to L196-L208 for more details. 2) *Camera pose settings during training and evaluation*. Generating consistent 4D assets across 360 degrees is a challenging task. Therefore, **in the training stage**, we mitigate the modeling burden on the video diffusion model by fixing the elevation angle and distance, making the model focus on learning the motion and geometry changes along orbital views. We can observe the same training settings in prior works [27,52,56]. **In the test stage**, we also adhered to the evaluation settings in prior works [1, 12, 34, 50, 52, 55] and rendered from the constructed 4D assets with a fixed elevation and distance. 3) We selected VideoMV as it is a 3D-geometry-aware video diffusion model. We ran extensive tests and found that this model is stable and reliable in generating consistent orbital videos of static 3D assets. 4) *Data curation details*. We want to clarify that we do not filter out ‘big-motion’ assets. In the final curation stage, we remove cases with dramatic location changes where the objects exit the scene boundaries (leaving only part of the object within the scene). Such instances would hinder the model from learning stable and reasonable motions. We will clarify this point in the revised version. 
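The timestamp-style conditioning described in our reply to Q3 (the scalar motion magnitude projected into a positional embedding and then passed through two MLP layers) can be sketched as follows. This is a hedged illustration: the embedding dimension, activation, and weight shapes are common diffusion-model conventions and our own assumptions, not the paper's exact implementation.

```python
import numpy as np

def sinusoidal_embedding(value, dim=128, max_period=10_000.0):
    """Map a scalar condition (diffusion timestep or motion magnitude m)
    to a sinusoidal positional embedding, as is standard in diffusion models."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    args = value * freqs
    return np.concatenate([np.cos(args), np.sin(args)])

def two_layer_mlp(x, w1, b1, w2, b2):
    """Project the embedding to the model width with two MLP layers
    (ReLU is an assumption; the actual activation may differ)."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Toy parameters just to show shapes flowing through the head.
rng = np.random.default_rng(0)
dim, width = 128, 256
params = (rng.standard_normal((dim, width)) * 0.02, np.zeros(width),
          rng.standard_normal((width, width)) * 0.02, np.zeros(width))

emb_small = two_layer_mlp(sinusoidal_embedding(0.1, dim), *params)
emb_large = two_layer_mlp(sinusoidal_embedding(0.9, dim), *params)
assert emb_small.shape == (width,)
assert not np.allclose(emb_small, emb_large)  # distinct m -> distinct conditioning
```

In practice the resulting vector would be added to (or modulate) the diffusion model's timestep embedding, so that a larger requested motion magnitude steers the denoiser toward more dynamic outputs.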
We are willing to reply to your further questions in this time window if you have any feedback. --- Rebuttal 2: Title: Q6.2. More quantitative evaluation during test time Comment: Following Q.6.2, to strengthen our quantitative evaluation during test time, we used our validation set and further evaluated **two sets of novel viewpoints with metrics CLIP, SSIM, PSNR, LPIPS, and FVD.** - For the first set, we fixed the camera at the front view, and uniformly rendered 36 images along the timeline as the targets. Note that in our main evaluations, we have measured the same set (L253) with CLIP score (CLIP-F in Tab.1 of main.pdf). - For the second set, we used a fixed distance and an elevation angle of 30°, and uniformly rendered 36 orbital images along the timeline as the targets (O-30°). For both sets, we used ground truth images rendered from the dynamic 3D assets in Objaverse as references. Metrics were computed between ground truth images and rendered images from the same viewpoints and timestamps. The results are shown in the table below: ||CLIP-F↑|SSIM-F↑|PSNR-F↑|LPIPS-F↓|FVD-F↓|CLIP-O-30°↑|SSIM-O-30°↑|PSNR-O-30°↑|LPIPS-O-30°↓|FVD-O-30°↓| |-|-|-|-|-|-|-|-|-|-|-| |(text) 4D-fy|0.78|-|-|-|-|0.55|-|-|-|-| |(text) Animate124 |0.75|-|-|-|-|0.51|-|-|-|-| |(text) Diffusion4D |**0.81**|-|-|-|-|**0.62**|-|-|-|-| |(image) 4DGen|0.84| 0.72|15.1| 0.28 | 691.5 | 0.66 | 0.66 | 13.9 | 0.34 | 760.5 | |(image) Stag4D|0.86|0.78| 16.0 | 0.25 | 624.7 | 0.69 | 0.74 | 15.1 | 0.30 | 705.8 | |(image) Diffusion4D | **0.89** | **0.80** | **16.2** | **0.24** | **594.5** | **0.72** | **0.81** | **16.5** | **0.23** | **526.4** | Note that CLIP-F is adopted from Tab.1 in main.pdf. Comparing these results along with the results in Tab.1, we can observe that our method consistently outperforms the baselines across all the metrics at the novel viewpoints. --- Rebuttal Comment 2.1: Title: Responses to rebuttal Comment: The authors have done a great job answering my doubts and questions. 
The provided additional details are very informative and convinced me to raise my rating. --- Reply to Comment 2.1.1: Title: Thank you for your comment. Comment: Dear Reviewer G7WS, Thank you very much for taking the time to read our rebuttal and we are very grateful for your recognition. We are truly happy that our rebuttal resolves your concerns and that you raised the rating. Should there be any ambiguities or further questions, we are more than willing to provide clarity or delve deeper into any topic. Best wishes, Authors of Paper 3414
Rebuttal 1: Rebuttal: Dear Area Chair and Reviewers, We would like to express our sincere gratitude for your valuable time and efforts in reviewing our work and providing insightful feedback. We are encouraged that reviewers find that our paper is well-written (fabU, BwxH), the usage of video diffusion model to address inefficiency and inconsistency issues is novel and clearly illustrated (G7WS, fabU, mXZW), the introduction of 3D-to-4D motion magnitude metric is innovative (BwxH), and the experimental evaluations are comprehensive and convincing (fabU, BwxH). We are also pleased that all reviewers recognize the value of our curated 4D dataset in facilitating future research in the community. Primarily, the reviewers requested more visualizations of the generated assets, in terms of comparisons with baselines and novel views. Therefore, we provided more cases in Fig.A in rebuttal.pdf and elaborated in detail as follows: **(1) Comparisons with baselines.** - **Larger motion magnitude.** In Fig.3 of main.pdf and Fig.A of rebuttal.pdf, we provided a qualitative comparison between our method and baselines. In Fig.3, due to space constraints, for the baselines, we only showed two representative viewpoints at starting and ending timestamps. It is evident that the objects’ poses at two timestamps are almost identical, as they generated samples with invisible motions along the whole timeline. In contrast, the motions from our method are much more obvious and pronounced. Additionally, in Fig.A, we show more cases with large motions, including the pose change of a knight, the flapping of butterfly wings, a turtle swimming, a kid falling, and the arm movements of a cartoon dog and Mario. Our method is capable of generating 4D assets with significantly larger motions. 
- **More complex texture, smoother appearance, and more consistent geometry.** For the generation details, by zooming in, we can observe that the baselines suffer from blurry texture, noisy appearances, excessive artifacts, and incomplete geometry. On the other hand, our method provides more complex texture, smoother appearances, and more consistent geometry. These phenomena can also be observed in both Fig.3 and Fig.A. - **More efficient generation and superior quantitative performance.** Also, in Tab.1 of main.pdf, extensive quantitative comparisons and human preference evaluations suggest the superiority and efficiency of our method, which can produce more favorable and high-quality 4D assets within several minutes, compared with the baselines that require hours of optimization. **(2) Consistent novel views**. In Fig.A in rebuttal.pdf, we provided more samples with large motion magnitudes from more novel views. In addition to our default setting of orbital views with an elevation angle of 0°, we also rendered more novel views from the constructed 4D assets, including orbital views with an elevation angle of 30° and front views. We can observe that the constructed 4D assets demonstrate large motions, coherent appearance, and consistent geometries across timestamps and viewpoints. The motions are more apparent when fixing the camera at the front view. We believe that these demonstrations, along with the visualizations in the main submission, could provide a well-rounded view of our generation results. And we are pleased to offer more video demonstrations if policy allows. **(3) Summarization of related 4D generation works.** To give a clearer illustration of the methodological differences between our work and previous works, we provide a summarization of prior 4D generation works in the table below. Our approach differs from previous methods in terms of optimization and consistency enhancement strategies. 
Most works depend on 2D/3D diffusion priors and use SDS optimization, resulting in hours of optimization cost, including some text-to-4D generation works(4D-fy[1], Animate124 [55]) and image/video-to-4D generation works(4DGen [50], DreamGaussian4D[27], Stag4D[52]). Concurrent arXiv work (Diffusion^2 [48]) eliminates SDS and combines multiple diffusion priors for joint denoising, but it ignores geometry consistency in the generation process. **In contrast, our work a) avoids SDS optimization and largely reduces the generation time from hours to several minutes; and b) utilizes a singular video diffusion model to ensure spatial-temporal consistency.** To the best of our knowledge, this is the first work training on 4D datasets to generate 4D assets, which can be termed as '4D Native' method [57]. Our framework is also the first to integrate 4D spatial-temporal consistency into a singular video diffusion model, enabling the generation of multi-timestamp cross-view supervisions in one shot. | | 2D/3D Priors | 2D/3D Priors|2D/3D Priors |2D/3D Priors |2D/3D Priors |2D/3D Priors |4D Native| |-|-|-|-|-|-|-|-| ||4D-fy[1]|Animate124[55]|4DGen[50]|DreamGaussian4D[27]|Stag4D[52]|Diffusion^2[48]|**Diffusion4D**| |Text-to-4D generation|√|√|×|×|√|×|√| |Image-to-4D generation|×|√|√|√|√|√|√| |SDS optimization|√|√|√|√|√|×|×| |Training on 4D data|×|×|×|×|×|×|√| |# of diffusion models used|3|3|2|2|2|2|1| In the following, we provide more experiments and explanations to address all the questions and concerns raised by the reviewers. And we will incorporate them in the revised version. Best regards, Paper 3414 Authors [57] Liu J, Huang X, Huang T, et al. A comprehensive survey on 3D content generation[J]. arXiv preprint arXiv:2402.01166, 2024. Pdf: /pdf/203e2481504d9a6608bb7e1a0f4d5bc1e2da69f1.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction
Accept (oral)
Summary: The paper introduces VAR, a novel autoregressive generative model for images that treats each scale in a multi-resolution feature pyramid as a token. Unlike traditional models that predict the next token from a rasterized grid, VAR predicts the next scale in a multi-resolution grid. This approach demonstrates greater scalability than next-token prediction and extends the well-known scaling laws from language modeling to image generation. Extensive experiments show that VAR outperforms both diffusion and AR baselines while offering improved efficiency in both training and inference. Strengths: - The paper addresses the significant question of bridging the performance gap between autoregressive language models and autoregressive image generation. This makes the topic highly relevant for the community and potentially impactful. - The method is well-motivated and utilizes well-known building blocks from LLMs. Hence, VAR is an important step toward showing that with a proper scheme, widely used LLM architectures can perform competitively in the Image domain. - The experimental section is comprehensive, demonstrating VAR's performance and efficiency in image generation on ImageNet 256 and 512. Additionally, the authors provide in-depth discussion of scaling laws for VAR. - Ablation studies in Appendix D clearly illustrate the contribution of different aspects of VAR to the final model. Weaknesses: - While the writing of the paper is clear for the most part, the method section could benefit from better presentation. The authors provide some details on VAR tokenization and training, but I found section 3, especially section 3.2, slightly confusing. I suggest that the authors add more details and clarification on VAR's training process, the residual tokenization, and the workings of the transformer part. Currently, these details are somewhat obscured in Algorithm 1 and 2, and Figure 4. - Some claims in the paper are slightly exaggerated. 
For example, in the abstract, the authors mention that VAR brings the FID from 18.65 to 1.73. While this is true, the FID of 18.65 belongs to a relatively weak baseline for AR models. It would be better to rewrite such claims in relation to more realistic baselines, such as RQ-VAE models, which are closer to VAR's methodology - The baselines used for the diffusion part are also relatively weak. For instance, MDTv2 [1] is a transformer-based diffusion model that achieves an FID of 1.58 on ImageNet 256. Therefore, it would be more appropriate to state that VAR performs "competitively" with diffusion models rather than significantly outperforms them. [1] Gao S, Zhou P, Cheng MM, Yan S. MDTv2: Masked Diffusion Transformer is a Strong Image Synthesizer. arXiv preprint arXiv:2303.14389. 2023 Mar 25. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Does VAR perform next-scale prediction completely in the latent space? For example, if the image resolution is 256 and VQ-VAE has a latent size of 32, does VAR operate on resolutions up to 32, or does it extend up to the image resolution? 2. Is there a loss term missing in equation (5)? VQ-GAN models typically include a commitment loss and a codebook loss, but it appears VAR's quantizer only uses one of these losses. Could you provide more details on this part? 3. If I understand correctly, when predicting the probabilities for each scale with a transformer, the transformer now needs to estimate a much higher-dimensional distribution (a distribution over the entire grid instead of just one point in the grid). Can you provide some intuition on why the transformer part can handle this prediction task effectively? Is it due to the strong conditioning from the lower scales? 4. Are the features computed by the VAR quantizer also useful for image recognition tasks, or do they perform best in image generation? 
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have addressed limitations and social impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer dFai, Many thanks for your professional, detailed, and valuable reviews. We will respond to your concerns one by one. ----------------- > [W1] I suggest that the authors add more details and clarification on VAR's training process, the residual tokenization, and the workings of the transformer part. Currently, these details are somewhat obscured in Algorithm 1 and 2, and Figure 4. [A1] Thank you for your thorough review and suggestion. We've rearranged our Method part, adding more details on the two different training stages and bringing them closer to each other. &nbsp; > [W2] It would be better to rewrite such claims in relation to more realistic baselines, such as RQ-VAE models, which are closer to VAR's methodology. [A2] We agree that comparing RQ-VAE with VAR can also demonstrate the effectiveness and efficiency of VAR, without any potential overclaims. Also, this would not devalue VAR's novelty and technical contributions. We have updated the corresponding descriptions. &nbsp; > [W3] Therefore, it would be more appropriate to state that VAR performs "competitively" with diffusion models rather than significantly outperforms them. [A3] Thanks for this professional advice. We'd first explain why models like MDTv2 were not included in Table 1: since our Table 1 mainly focuses on a **Generative model family** comparison on class-conditional ImageNet 256x256, we did not include some of the latest powerful models in it. As also mentioned in line 308, our main focus was on the VAR algorithm, and we used a plain GPT-2 transformer without SwiGLU, RMSNorm, or Rotary position embedding. So there is still large room to boost VAR. Comparing VAR with other long-optimized diffusion models can be a bit unfair. We have added some explanations and used "performs competitively" to describe our VAR. Nonetheless, we added an extra comparison to the Appendix which focuses on comparing VAR with the latest, state-of-the-art models. 
We also updated the Future Work section to see if we can integrate more advanced techniques like those in MDTv2 or LLaMA to further upgrade VAR. &nbsp; ------------------------------- **Thanks for your comments and suggestions; we will add these experiments to our revision. Feel free to let us know if you have any further questions or concerns :-).** --- Rebuttal 2: Title: Response to Questions 1-4. Comment: > [Q1] Does VAR perform next-scale prediction completely in the latent space? For example, if the image resolution is 256 and VQ-VAE has a latent size of 32, does VAR operate on resolutions up to 32, or does it extend up to the image resolution? [A4] Yes, VAR is completely done in the latent space. As mentioned in line 307, our VQVAE uses a downsampling ratio of 16. So the largest latent token map $r_K$ of a 256px|512px image would have a size of 16|32. &nbsp; > [Q2] Is there a loss term missing in equation (5)? VQ-GAN models typically include a commitment loss and a codebook loss, but it appears VAR's quantizer only uses one of these losses. Could you provide more details on this part? [A5] Equation (5) is a simplified version of the loss function in VQ-GAN. We use the same VQVAE training loss as VQGAN, so we also have a commitment loss and a codebook loss. We'll update equation (5) to clarify this. &nbsp; > [Q3] Can you provide some intuition on why the transformer part can handle this prediction task effectively? Is it due to the strong conditioning from the lower scales? [A6] Recall our motivation in line 41: "Our work reconsiders how to order an image. Humans typically perceive or create images in a hierarchical manner, first capturing the global structure and then local details. This \textbf{multi-scale, coarse-to-fine} nature suggests an order for images". If this is acknowledged, then predicting multiple tokens at the same time is natural: it just mimics how humans understand or create images. 
On the other hand, model and computation scaling are also vital to VAR's good performance. As we continue to scale up the transformer and improve its expressive power, the model gains a stronger capability to solve difficult tasks like VAR pretraining. &nbsp; > [Q4] Are the features computed by the VAR quantizer also useful for image recognition tasks, or do they perform best in image generation? [A7] Thanks for such an interesting and enlightening question. Exploring whether VAR can improve image understanding, similarly to previous image pre-training work (e.g., contrastive learning and masked modelling), is a highly valuable direction. We have also included this in the Future Work section and will actively explore it in the future! --- Rebuttal 3: Title: Response to the rebuttal Comment: I would like to thank the authors for taking the time to answer my comments and for the clarifications. I believe that VAR is a strong work that should be highlighted at the conference. Hence, I would like to keep my initial score. --- Rebuttal Comment 3.1: Title: Thanks for your timely reply and the kind words! Comment: Thank you for your quick reply! We will remain active until the discussion period ends. Please feel free to get back to us if you have any new questions :-)!
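To complement A4 and A5 above, here is a simplified numpy sketch of the coarse-to-fine residual tokenization behind next-scale prediction, in the spirit of the paper's Algorithm 1. The nearest-neighbor resizing, the tiny random codebook, and the scale schedule `(1, 2, 4, 8, 16)` are toy illustrative choices, not the actual VQVAE implementation.

```python
import numpy as np

def resize_nn(x, size):
    """Nearest-neighbor resize of an (H, W, C) feature map to (size, size, C)."""
    h, w, _ = x.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return x[rows][:, cols]

def quantize(x, codebook):
    """Assign each spatial feature vector to its nearest codebook entry,
    returning the integer token map and the quantized feature map."""
    flat = x.reshape(-1, x.shape[-1])
    dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return idx.reshape(x.shape[:2]), codebook[idx].reshape(x.shape)

def multiscale_tokenize(f, codebook, scales=(1, 2, 4, 8, 16)):
    """Coarse-to-fine residual tokenization: quantize the residual at the
    coarsest scale, subtract its upsampled reconstruction, and repeat at
    each finer scale, yielding one token map per scale."""
    residual, token_maps = f.copy(), []
    for s in scales:
        idx, z = quantize(resize_nn(residual, s), codebook)
        token_maps.append(idx)
        residual = residual - resize_nn(z, f.shape[0])
    return token_maps

rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16, 4))      # encoder feature map (16x16 latent)
codebook = rng.standard_normal((32, 4))   # toy codebook with 32 entries
token_maps = multiscale_tokenize(f, codebook)
assert [t.shape for t in token_maps] == [(1, 1), (2, 2), (4, 4), (8, 8), (16, 16)]
```

During generation, a transformer would predict each of these token maps autoregressively, conditioned on all coarser maps, which is the "next-scale prediction" order the rebuttal describes.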
Summary: The paper introduces Visual AutoRegressive modeling (VAR), which uses a coarse-to-fine approach for image generation. VAR drastically improves performance, reducing FID from 18.65 to 1.73 and increasing IS from 80.4 to 350.2, with 20x faster inference. It outperforms diffusion transformers in quality, speed, efficiency, and scalability. VAR models show scaling laws like large language models and demonstrate zero-shot generalization in image editing tasks. Strengths: - **Solid Motivation**: The scale in vision signals is a natural choice for autoregressive generation. The exploration of autoregression in visual generation is indeed a worthy topic. - **Novel Method**: This work is the first to explore a visual generative framework using a multi-scale autoregressive paradigm with next-scale prediction. - **Strong Performance**: The paper demonstrates significant advancements in visual autoregressive model performance, with GPT-style autoregressive methods surpassing strong diffusion models in image synthesis for the first time. - **Promising Scaling Law**: The paper presents a promising scaling law for the proposed visual autoregressive modeling paradigm. Weaknesses: - **Lack of Ablation Study on VQVAE**: There is no ablation study on the newly proposed VQ-VAE model. In Table 3, the performance differences between the first two rows cannot be solely attributed to the model change from AR to VAR, as the VQ-VAE model has also been modified. - **Resolution Flexibility**: The resolutions for VAR generation appear to be pre-defined and bound to the VQ-VAE model during its pre-training. Adjusting the number of resolutions or maximum resolution without re-training the VQ-VAE model seems non-trivial. Technical Quality: 3 Clarity: 4 Questions for Authors: **About VQVAE** - What is the rFID of your VQVAE? Can you provide a table comparing your VQVAE and VQVAEs of other works (e.g., VQGAN, MaskGIT) - What is the pre-training cost of your VQVAE? 
- If residual quantization is not used, and instead a multi-scale token map is constructed directly in some way, such as: 1) independently downsampling the VAE encoder features to multiple scales, then quantizing each scale directly, or 2) constructing multiple-resolution ImageNet datasets (e.g., ImageNet 16x16, 32x32…, 256x256) and independently applying vector quantization to each dataset, thereby obtaining low-resolution to high-resolution token maps. Then, can the proposed visual autoregressive modeling still be performed? In other words, is residual quantization an *indispensable* part of VAR modeling? (Ignoring the efficiency or complexity of these alternative tokenization methods) - What is the impact on performance if residual quantization is not used? **Question on Figure 7** The apples-to-apples qualitative comparison, like in Figure 7, is common in diffusion-based models because the initial noise is of the same resolution as the final output, allowing significant control over the final image when coupled with deterministic sampling. However, in VAR, the counterpart to the "initial noise" is only the "teacher-forced initial tokens," which, if I understand correctly, are of 1x1 size. This suggests only very loose control over the final image. Given this, why doesn't this result in a situation similar to the "butterfly effect," where identical initial states and random seeds, due to different model configurations, lead to significantly different final outputs after multiple generation iterations? **Question on zero-shot generation** - Is only the last resolution of the token map masked and then teacher-forced to generate the masked regions, or is each token map masked? - The model hasn't explicitly learned inter-token dependence at the spatial level during training. Could you provide an explanation of why the model can perform zero-shot editing/inpainting, which requires the model to condition on tokens at some spatial positions to generate tokens at other positions?
Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: This paper validates the effectiveness of VAR only in class-conditional generation scenarios. Applying VAR to text-to-image generation is a worthwhile area for future exploration. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer yx9H, --------------------------- > [W1 Q1, about VQVAE] [A1] We appreciate your detailed comments and will address them one by one. 1. VQVAE rFID: please see the overall Author Rebuttal part of "VQVAE concerns". 2. Pre-training cost: please see the overall Author Rebuttal part of "VQVAE concerns". 3. Why we didn't choose other multi-scale quantization ways: if residual quantization is removed and **independent quantization** is used (i.e., independently downsampling VAE features, or independently encoding images at 16x16, 32x32, ..., 256x256), the mathematical premise of unidirectional dependency would be broken. The details can be found in line 140 of our paper. To ensure that property, we have to use **residual quantization**. We'll add these explanations to our manuscript. 4. Lack of ablation study on VQVAE: please see the overall Author Rebuttal part of "VQVAE concerns". &nbsp; > [W2] Resolution Flexibility [A2] Thanks for this detailed comment. We've observed that the multi-scale VQVAE, pretrained only on 256px images, can easily generalize to higher resolutions like 512 and 1024. We have added visualizations of this to a new Appendix part. > [Q2] Question on Figure 7 [A3] Thank you for this professional question. In practice, we: 1) use a fixed sampling seed for every step, and 2) also fix the first token, which intuitively determines the global structure of the image. We found both of them are crucial for generation consistency. We'll add these details to our manuscript. &nbsp; > [Q3] Question on zero-shot generation [A4] For the details of our zero-shot generalisation algorithm, please see the overall Author Rebuttal part of "Zero-shot generalisation algorithm". For your second question, we actually allow full, bidirectional attention among tokens in the same scale $r_k$, so the model can learn inter-token dependence.
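To make the residual-quantization premise in A1 above concrete, here is a minimal, simplified sketch (illustrative only; the helper name and the nearest-codebook lookup are our simplifications, not the exact Algorithm 1 of the paper):

```python
import torch
import torch.nn.functional as F

def multiscale_residual_quantize(f, codebook, scales):
    """Sketch: each scale quantizes the residual left by all previous
    scales, so every token map depends only on earlier (coarser) ones."""
    C, H, W = f.shape
    residual, token_maps = f.clone(), []
    for hk in scales:
        # downsample the remaining residual to this scale
        r = F.interpolate(residual[None], size=(hk, hk), mode="area")[0]
        # nearest-codebook assignment per spatial position
        flat = r.permute(1, 2, 0).reshape(-1, C)          # (hk*hk, C)
        idx = torch.cdist(flat, codebook).argmin(dim=1)
        token_maps.append(idx.view(hk, hk))
        # subtract this scale's (upsampled) contribution; later scales refine it
        zq = codebook[idx].view(hk, hk, C).permute(2, 0, 1)
        residual = residual - F.interpolate(
            zq[None], size=(H, W), mode="bilinear", align_corners=False)[0]
    return token_maps
```

Because each step consumes only the residual of earlier steps, the unidirectional dependency holds by construction, which is exactly what independent per-scale quantization would break.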
&nbsp; ----------------------------------- **Lastly, thank you so much for helping us improve the paper; we appreciate your open discussions! Please let us know if you have any further questions. We are actively available until the end of this rebuttal period. Looking forward to hearing back from you!** --- Rebuttal 2: Comment: ### Regarding [A1] 1. What I'm concerned about is *exactly* this premise. From my perspective, coarse-to-fine scaled images may indeed not strictly have causal dependency if not constructed in your way. However, is this **strict** causal dependency fundamental to the success of your method? Or is the core idea (predicting the coarse-scale image first, followed by the fine-scale image) the key to your success, even if the strict causal dependency doesn't hold, as in the example I provided? I believe this is a critical factor that warrants an ablation study. 2. It appears that there is no ablation study on the "residual quantization" technique. Specifically, I am referring to the case where you still construct a multi-scale token map in your VQ-tokenizer, but without applying residual quantization. ### Regarding [A2] Could you provide more details on how the 256x256 tokenizer can be directly applied to tokenize a 512x512 image? ### Regarding [A4] I understand that bidirectional attention is allowed during training. However, the model's objective is to use this bidirectional attention to learn how to *predict the next scale*, rather than *predicting some spatial positions based on others*. My point is that the second objective was not explicitly optimized during training—can you confirm if this is accurate? --- Rebuttal 3: Title: Response to Reviewer yx9H Comment: ### Regarding [A1] Yes, we found the causal dependency (which is brought by the residual quantization) is critical to VAR's success, just as the left-to-right linguistic order is important to LLMs' success.
Earlier in the VAR research, we had tried the two independent encoding ways you mentioned and compared them to the causal one:

- Independently encoding images of different resolutions was nearly impossible, because a 16x-downsampled VQVAE could barely reconstruct an image with <=64 resolution; we saw huge color differences and shape deformations.
- Independent quantization can be seen as the **ablation** of the "residual quantization" technique. We had tried this idea, and while the reconstruction quality of the VQVAE did not change much, VAR's validation token accuracy decreased a lot (tested on an ImageNet subset with 250 classes):

|Method|Model|#Para|Accuracy$\uparrow$|
|:-|:-|:-|:-|
|Residual quantization|VAR-$d24$|1.0B|3.92%|
|Independent quantization|VAR-$d24$|1.0B|3.01%|

- Empirically, a 0.9% accuracy drop implies a huge performance degradation when the model size is close to 1B (as can be sensed in Figure 5(c)). Therefore, we abandoned this independent scheme early in our research.

We do agree with you that "coarse-to-fine scaled images may indeed not strictly have causal dependency". **If they don't**, the whole generation process is more like a series of super-resolutions; **but if they do**, it is much more similar to the way human painting works: first the whole, then the details, with each step being a **refinement** of all the past steps (due to the **residual**). We highly believe this **refinement** is a key to VAR's success, since it allows VAR to fix past mistakes like a diffusion model. Among AR methods, this is a unique advantage of VAR, because it is impossible even for language AR models to do this -- they can't correct historical mistakes via some "residual" mechanism. We'll add these extra ablation studies and discussions to our paper; thanks for your insightful questions. &nbsp; ### Regarding [A2] The tokenizer consists of three parts: the CNN encoder, the quantizer, and the CNN decoder.
The multi-scale design exists only in the quantizer, so we just need to change the resolution hyperparameters $h_1, h_2, \dots, h_K$ in the multi-scale quantizer, detailed in Algorithms 1 and 2. Since the operations in it are interpolations and convolutions, which generalize to any resolution, no additional operations are needed. In other words, we only need to change $h_1, h_2, \dots, h_K$ from (1,2,3,4,5,6,8,10,13,16) to (1,2,3,4,6,9,13,18,24,32), and Algorithms 1 and 2 will still work. We'll update our paper to make this clearer. &nbsp; ### Regarding [A4] Yes, we agree that "predicting some spatial positions based on others" is not directly learned through VAR (it may be learned through BERT, MAE, etc.). But we checked VAR's self-attention scores when it generated an image, and observed that many tokens on a certain object's body would still show relatively high attention scores. Taking this into account, along with the fact that VAR can fully utilize information from all previous scales, it still has the ability to do tasks like inpainting (Figure 8). We'll add these explanations to our paper for better presentation. ----------------------------------- Thank you again for your insightful and professional comments, which made our work more complete and solid! If there are any further questions, please let us know. If you feel all questions have been addressed, you can kindly consider re-rating our work. Thank you! --- Rebuttal Comment 3.1: Comment: Thanks for your detailed reply. I have one additional question: diffusion-based models are able to flexibly change inference steps for a more detailed generation result (more steps) or a sketchy yet faster generation (fewer steps). Is VAR capable of achieving this? Or is the number of inference steps somewhat constrained in VAR? Does this constitute a weakness of VAR? --- Rebuttal 4: Title: Inference steps Comment: Thanks for this question.
Unlike VQGAN or other AR image generators, where the number of steps is usually fixed, the inference steps in VAR (also the scale shapes in its VQVAE) can be slightly adjusted, e.g., replacing (1,2,3,4,5,6,8,10,13,16) with (1,2,4,6,8,10,13,16) or (1,2,3,4,5,6,8,10,12,14,16). This is possible because VAR treats a scale, rather than a single token, as an autoregressive unit, and operations based on this unit (interpolations) can generalize to different spatial resolutions. In other words, the reason is that we chose the scales in the visual signal for autoregressive generation, which is also mentioned as **Strengths 1** and **Strengths 2** in the review comment.

Another natural question is how diffusion transformers perform when the number of steps is small enough to approach VAR's 10 steps. We investigated this by using ODE solvers like DDIM and DPM-Solver to reduce the diffusion inference steps. The results are as follows.

| Model | #Para | #Step | Time | FID$\downarrow$ |
|------------------------|-------|-------|-------|--------|
| VAR-$d24$ | 1.0B | 10 | **0.6** | **2.09** |
| DiT-XL/2 (original) | 675M | 250 | 45 | 2.27 |
| DiT-XL/2 + DDIM | 675M | 250 | 45 | 2.14 |
| DiT-XL/2 + DDIM | 675M | 20 | 2.9 | 4.68 |
| DiT-XL/2 + DDIM | 675M | 10 | 1.8 | 12.38 |
| U-ViT-H/2 + DPM-solver | 501M | 20 | 15.6 | 2.53 |
| U-ViT-H/2 + DPM-solver | 501M | 10 | 7.8 | 3.18 |

It can be seen that while diffusion can be accelerated many times by an ODE solver, the sacrifice in FID becomes non-trivial when the number of steps reaches 10. &nbsp; When considered more broadly, we feel there is a **common limitation** of all Diffusion, AR, and VAR models: it is not possible to **automatically** determine the number of steps based on the input. For example, generating a blank blackboard clearly requires a different number of steps than a blackboard filled with math formulas.
AR/VAR are expected to solve this with some modifications, like introducing an [EOS] token to allow the model itself to predict when to stop. We believe this is a valuable direction to be explored and will add it to our Limitation or Future Work section. --- Rebuttal Comment 4.1: Comment: The author has adequately addressed my concerns, and I am inclined to raise my score to 8. This is a commendable piece of work. --- Reply to Comment 4.1.1: Title: Thanks for your time, effort, and kind words Comment: We're glad to hear that your concerns have been adequately addressed. We appreciate your professional and constructive feedback, which made our work more solid and clear. We'll revise the paper based on the discussions to better present VAR. If you have any questions, please feel free to comment here. Thank you.
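As a side note on the schedule flexibility discussed above: the total autoregressive sequence length is simply the sum of squared scale sizes, which is why adding or removing early scales barely changes the sequence length. A quick check (illustrative code):

```python
def seq_len(scales):
    """Total number of tokens an autoregressive pass over this schedule produces."""
    return sum(s * s for s in scales)

print(seq_len((1, 2, 3, 4, 5, 6, 8, 10, 13, 16)))  # default schedule: 680 tokens
print(seq_len((1, 2, 4, 6, 8, 10, 13, 16)))        # a shorter variant: 646 tokens
print(16 * 16)                                      # raster-scan AR over 16x16: 256 tokens, but 256 steps
```

The early scales contribute only a handful of tokens each, so the step count can be tuned almost freely while the cost stays dominated by the final scales.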
Summary: This work proposes a novel approach to image generation using an autoregressive decoder-only transformer model. Rather than decoding in a raster-scan order, their approach (VAR) decodes scales/resolutions conditioned on previously generated scales, reminiscent of traditional scale pyramids in computer vision. VAR demonstrates competitive performance on ImageNet in terms of generation quality, diversity, and inference speed. It also demonstrates scaling laws up to 2.0B parameters. Strengths: - I am *very* impressed by the proposed method. It is simple, intuitive, and novel. It is not difficult to see why it works well. - The empirical results are strong, and since the approach is based on a decoder-only transformer (a tried and tested architecture for autoregressive generation) I would expect it to scale to foundation-model T2I systems easily. - The paper is well written and easy to read, with the method/motivation/story communicated clearly to the reader. Weaknesses: As the reviewing burden has been heavy for this conference (6 papers), please understand that I can only dedicate so much time to this paper. Thus, I may have made mistakes in my understanding of the paper, and I welcome the authors to correct me if this is the case. 1. The reported inference efficiency is a bit disingenuous. The comparison with DiT uses 250 steps, which is much more than what SotA samplers require. Moreover, diffusion models can be distilled into 1-4 step models that are even faster. Compared to raster-scan autoregressive models, the improvement in latency seems to be due to better use of parallel resources. However, for larger batches/measuring throughput this advantage may fade. 2. There are some missing details and insights, especially with regard to the multi-scale VQVAE. There are also details present in the code that really should be in the paper, such as the number of tokens per image/scale. What is the reconstruction performance of the VQVAE (compared to, e.g.,
the SDXL VAE)? How does one choose the number of scales? How many codes end up being used over the different scales and do different scales capture similar (spatial) information in the latent space vs the image space? It would also be great to see an ablation like Table 3 for the VQVAE. Also, the code provided doesn't give details to reproduce the VQVAE, only the transformer. 3. The use of "zero-shot" to refer to the model's editing ability is different in nature to zero-shot generalisation in LLMs, so I find the link made in the paper to be a little disingenuous. Moreover, the editing performance doesn't seem to be very strong, with inpainting generation spilling outside of the box. The paper also doesn't give details on how the editing is performed. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. I'd like to see a throughput comparison in img/s as the batch size is increased. I'd also like to see some DiT results where a more advanced sampler such as DPM-solver or SA-solver is used. 2. See above. 3. Concretely, how is the editing performed? Are certain tokens teacher-forced? If so starting from which scale? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer oCJa, Many thanks for your valuable comments and questions, which help us a lot in improving our work. We address your questions as follows. --------------------------- > [W1 Q1, about the efficiency evaluation] I'd like to see a throughput comparison in img/s as the batch size is increased. I'd also like to see some DiT results where a more advanced sampler such as DPM-solver or SA-solver is used. [A1] Thank you for your suggestions, which make our efficiency evaluation more complete and thorough! First, we add a throughput comparison with batched inference to our manuscript. All models are tested on a single A100 GPU. VQGAN's results are quoted from [1]. From the results we can see that VAR also **gets a big speedup from batch processing**, similar to vanilla autoregressive models. We attribute this to the short sequence length of the first few VAR steps (like generating $1\times 1$ and $2\times 2$ tokens). After batching, VAR reaches a throughput of 0.04 $\text{img/s}$, which is still **4 times faster than batched VQGAN**, even though VAR has a larger model size.

|Model|#Para|Batch size|Throughput$\downarrow$|FID$\downarrow$|
|:-|:-|:-|:-|:-|
| VQGAN-re | 1.4B | 1 | 5.76 $\text{img/s}$ | 5.20 |
| VQGAN-re | 1.4B | 100 | 0.16 $\text{img/s}$ | 5.20 |
| VAR-$d30$-re | 2.0B | 1 | 0.24 $\text{img/s}$ | 1.73 |
| VAR-$d30$-re | 2.0B | 100 | **0.04 $\text{img/s}$** | **1.73** |

----------------------- Second, we add the results of DiT and U-ViT (another transformer-based diffusion model) with fewer diffusion steps ($<50$) to our manuscript. The results are quoted from [2, 3]. From the table one can see that: 1) Reducing the number of diffusion steps via an ODE sampler (like DDIM, DPM-Solver) results in a large FID rise, especially when reduced to near 10 steps. 2) Using 10 steps for both, VAR-1B is still **faster and better than DiT-675M**.
This is also because VAR has a shorter sequence length in the early steps, e.g., the first three VAR steps only generate $1\times 1 + 2\times 2 + 3\times 3 = 14$ tokens. 3) The diffusion community has invested in efficiency improvements for a long time. In contrast, VAR, as a newly proposed method, is likely to see more ways to accelerate or distill it in the future. We'd like to add this to our Future Work section.

|Model|#Para|#Step|Time$\downarrow$|FID$\downarrow$|
|:-|:-|:-|:-|:-|
| VAR-$d24$ | 1.0B | 10 | **0.6** | **2.09** |
| DiT-XL/2 (original) | 675M | 250 | 45 | 2.27 |
| DiT-XL/2 + DDIM | 675M | 250 | 45 | 2.14 |
| DiT-XL/2 + DDIM | 675M | 20 | 2.9 | 4.68 |
| DiT-XL/2 + DDIM | 675M | 10 | 1.8 | 12.38 |
| U-ViT-H/2 + DPM-solver | 501M | 20 | 15.6 | 2.53 |
| U-ViT-H/2 + DPM-solver | 501M | 10 | 7.8 | 3.18 |

> [W2 Q2, details on multi-scale VQVAE] [A2] We'd like to respond to the queries mentioned in Weakness 2 one by one, and add all the details below to the Appendix. Please see the overall Author Rebuttal part of "VQVAE concerns" for specific responses. &nbsp; > [W3 Q3, The use of "zero-shot" to refer to the model's editing ability is different in nature to zero-shot generalisation in LLMs; doesn't give details on how the editing is performed] [A3] Thanks for your insightful comments. In the field of language processing, it has been verified that every task can be formulated as an autoregressive generation task. This allows a pretrained LLM to generalize to many tasks different from its pretraining task without any finetuning. In our model, we also use the term "zero-shot" to emphasize that our model can do tasks that are different from our pretraining task. We have updated the corresponding descriptions in our paper to make them more clear and accurate. We have added details on how the inpainting/outpainting/editing is performed to our manuscript. You can also find them in the overall Author Rebuttal part.
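As a rough illustrative sketch of how an editing mask can be applied per scale (function names are hypothetical and the interpolation details are our assumption, not the exact procedure):

```python
import torch
import torch.nn.functional as F

def masks_per_scale(mask, scales, thresh=0.5):
    """Downsample a full-resolution binary mask (1 = region to generate)
    to every token-map scale; a token counts as masked when its
    interpolated mask value reaches `thresh`."""
    per_scale = []
    for hk in scales:
        m = F.interpolate(mask[None, None].float(), size=(hk, hk),
                          mode="bilinear", align_corners=False)[0, 0]
        per_scale.append(m >= thresh)
    return per_scale

# during generation at scale k (teacher forcing the known tokens), roughly:
#   tokens_k = torch.where(mask_k, predicted_k, ground_truth_k)
```

The key point is that the same full-resolution mask is reused at every scale, so the model's predictions are collected only inside the region to edit while ground-truth tokens are kept everywhere else.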
----------------------------------- Many thanks to Reviewer oCJa for their professional, detailed, and valuable reviews! We have done our best to address each of your concerns and hope our response can resolve them. Please let us know if you have any other questions. We will actively join the discussion until the end of the rebuttal period. We are looking forward to hearing from you :-) ! --- Rebuttal Comment 1.1: Title: References Comment: References: [1] Chang H , Zhang H , Jiang L ,et al. MaskGIT: Masked Generative Image Transformer[J]. arXiv.2202.04200. [2] Bao F , Nie S , Xue K ,et al. All are Worth Words: A ViT Backbone for Diffusion Models[J]. arXiv.2209.12152. [3] Ma X, Fang G, Michael Bi,et al. Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching[J]. arXiv.2406.01733 --- Rebuttal Comment 1.2: Comment: Thank you for the rebuttal; I feel like most of my questions have been well addressed. The additional results with faster diffusion samplers have strengthened the paper. I still have a couple of reservations: - I'm a bit confused by the throughput results, there appears to be some sort of typo (?) as throughput should be better when higher and the VQGAN results are higher. - If editing is performed using interpolation, how does one mask the boundary tokens that don't align with the mask at full resolution? E.g. in your example is the first token at the first scale teacher forced or not? My impression is that this lack of spatial resolution at lower scales probably limits fine-grained control over boundaries for the in/outpainting tasks. If this is indeed the case it would be best to mention this as a (current) limitation. --- Rebuttal 2: Title: We're glad to know most questions were addressed; We reply to the two more questions Comment: Thank you again for your professional review which did strengthen our paper. For the two more questions: - Yes it's a typo. All "img/s" should be "s/img". We've corrected this in our manuscript. 
- Yes, when the spatial resolution is too low, some interpolations could be inaccurate. But the smallest token map would be 2x2, because during inference the 1x1 map is the start token. We've added this to our limitations. - Here we provide more details on that "interpolation": consider an inpainting case on a 256x256 image where the upper-left NxN pixels are masked. To mask the smallest 2x2 token map, the binary mask $M$ is interpolated to a 2x2 map $M_2$. The upper-left token of the 2x2 token map is masked only if $M_2^{(0,0)} \ge 0.5$. We've added these details to the Appendix to make this clearer. --- Rebuttal Comment 2.1: Comment: Thanks for the clarification. Now it is clear to me how the editing is performed. I just noticed that the batch size is 100 < 256, which is the max number of tokens VAR infers in parallel for a single sample. Does the throughput improve for VQGAN if the batch size is increased further? To be clear, I fully agree that VAR uses parallel compute much more efficiently than raster-scan autoregression for low batch sizes. However, my intuition tells me that if VQGAN fully utilises the parallel compute of a GPU/accelerator, then VAR should not have a throughput advantage (and VQGAN may even do better since it has fewer tokens to infer per sample). --- Rebuttal 3: Title: Response to the batched inference Comment: Thanks for the insightful question on the throughput improvement.
We further investigated batched inference using batch sizes of 150, 200, and 250:

|Model|#Para|Batch size|Throughput$\downarrow$|FID$\downarrow$|
|:-|:-|:-|:-|:-|
| VQGAN-re | 1.4B | 1 | 5.76 $\text{s/img}$ | 5.20 |
| VQGAN-re | 1.4B | 100 | 0.16 $\text{s/img}$ | 5.20 |
| VQGAN-re | 1.4B | 150 | 0.15 $\text{s/img}$ | 5.20 |
| VQGAN-re | 1.4B | 200 | 0.14 $\text{s/img}$ | 5.20 |
| VQGAN-re | 1.4B | 250 | OOM | $-$ |
| VAR-$d30$-re | 2.0B | 1 | 0.240 $\text{s/img}$ | 1.73 |
| VAR-$d30$-re | 2.0B | 100 | 0.041 $\text{s/img}$ | 1.73 |
| VAR-$d30$-re | 2.0B | 150 | **0.039 $\text{s/img}$** | **1.73** |
| VAR-$d30$-re | 2.0B | 200 | OOM | $-$ |

**Observation.** When the batch size grows beyond 100, the throughput improvements of both VQGAN and VAR become marginal. At the largest batch size that avoids an out-of-memory issue, VAR still shows **3.6x throughput**, even though it is larger and has more tokens to infer. **Analysis.** To understand why VQGAN does not present higher throughput than VAR when larger batch sizes are used, we further check the behaviors of VAR and VQGAN during autoregressive inference -- we plot their attention masks.
(since NeurIPS does not allow authors to upload an image or provide an external link, we put the Python code here; it will plot the masks)

```python
import torch
import matplotlib.pyplot as plt

# VAR's scale schedule; the total sequence length L is the sum of squared scales
patch_nums = [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]
L = sum(pn**2 for pn in patch_nums)

# scale index of every token in the flattened multi-scale sequence
d = torch.cat([torch.full((pn * pn,), i) for i, pn in enumerate(patch_nums)]).view(1, L, 1)
dT = d.transpose(1, 2)

# block-wise causal mask: a token attends to all tokens at its own or earlier scales
mask_VAR = (d >= dT).reshape(L, L).numpy()
# standard causal mask of a raster-scan AR model over the last (16x16) scale
mask_AR = torch.tril(torch.ones(patch_nums[-1]**2, patch_nums[-1]**2)).numpy()

fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(mask_VAR), axs[0].set_title('mask_VAR'), axs[0].axis('off')
axs[1].imshow(mask_AR), axs[1].set_title('mask_AR'), axs[1].axis('off')
plt.show()
```

From the figure one can see that:
- VAR's block-wise causal mask and VQGAN's standard causal mask look very close.
- In other words, VAR still maintains many AR properties.
- So both VQGAN and VAR can benefit from batched inference, and their batch-size sweet spots (where the throughput starts saturating) can be close to each other.

&nbsp; We will add all of the above supplementary results and analysis to a new Appendix section named "Throughput benchmark using batched inference". Thank you again for your constructive and detailed response. --- Rebuttal Comment 3.1: Comment: Thanks for engaging throughout the rebuttal period. I am now satisfied that the issues I originally had with the paper have been addressed and will keep my score as is. I would encourage the authors to perform some additional analysis of the throughput evaluations if they have time; however, I won't ask for more during this rebuttal period as it feels like chasing small details. I am still a little confused as to how VAR has better throughput compared to VQGAN, given that VQGAN has significantly fewer tokens and lower attention cost according to the mask presented (half of the last scale of VAR).
Summary: This paper introduces next-scale prediction autoregressive models that satisfy the mathematical premise of autoregressive modeling (unidirectional dependency) and preserve 2D spatial locality. The core method is a multi-scale VQ-VAE. The proposed method is more efficient than the traditional autoregressive model, requiring only $O(n^4)$ compared to the $O(n^6)$ of the raster-scan autoregressive model. Furthermore, this method is shown to follow scaling laws similar to LLMs, suggesting better performance when scaling up training. Strengths: 1. The idea is natural and novel. It successfully resolves the mathematical premise violation of previous raster-scan autoregressive models. 2. The power-law scaling law is interesting and encourages follow-up work to scale up models for better performance. 3. The method's performance is competitive with diffusion models and other generative models. 4. Most of the paper's claims seem valid to me. Weaknesses: 1. The description of the second stage of training VAR transformers is too short, and it is hard for me to fully understand how it works. I wonder about the details of how to generate the $h_k \times w_k$ tokens in $r_k$ in parallel using the $k$-th position embedding map. Is the embedding 1D or 2D? How do you make sure all tokens in $r_k$ are correlated to each other? 2. The highest resolution of the scale $r_K$ is $16 \times 16$ and there are 10 scales (1,2,3,4,5,6,8,10,13,16). I wonder if there is any motivation for choosing these scales. 3. I think the paper should include the sampling algorithm of the autoregressive model with hyper-parameter details such as temperature, top-k, top-p, and CFG sampling. 4. The zero-shot generalisation algorithm should be included in the paper for clarity. Technical Quality: 3 Clarity: 3 Questions for Authors: My main concerns are in the method section. I hope the authors could provide more method details. See the weaknesses above.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The limitation discussions are sound and clear to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Hisd, Many thanks for your valuable comments and questions, which help us a lot in improving our work. We address your questions as follows. --------------------------- > [W1] The second stage of training VAR transformers is too short and is hard for me to fully understand how it works... [A1] Thanks for pointing this out; we'll provide more detail on the 2nd stage of the VAR method in our manuscript. To generate the $h_k\times w_k$ tokens of $r_k$ in the $k$-th autoregressive step (e.g., to generate $r_2$ in Figure 4), the input is the previous token map $r_1$. It is reshaped to 2D ($1\times 1$ here), embedded to a 2D feature map, upsampled to $r_2$'s shape $2 \times 2$, projected to the VAR transformer's hidden dimension, and added with a 2D positional embedding to get $e_1$. So the actual input to the VAR transformer is $e_1$ in Figure 4, and these steps are the details of the "word embedding and up-interpolation" noted in Figure 4. The detailed implementation can also be found in the code "codes/models/var.py autoregressive_infer_cfg" attached in the supplementary material. To make sure all tokens in $r_k$ are correlated to each other, we don't apply an attention mask within $r_k$, keeping bidirectional attention among them. In other words, $r_k$ can attend to all tokens of $r_{\le k}$. &nbsp; > [W2] there are 10 scales (1,2,3,4,5,6,8,10,13,16). I wonder if there is any motivation to choose these scales. [A2] Yes, the choice of scales is a key design of VAR. We choose an *exponential function* $h_k=w_k=\lfloor a\cdot b^k\rfloor$ to get these scales because: 1) As discussed in Appendix F, we can reach a total complexity of $\mathcal{O}(n^4)$ via an exponential schedule. 2) We want to increase the number of steps to reach $16\times 16$ for image quality. An exponential function grows slowly in the early stages and quickly in the later ones, so it allows us to increase steps (mainly in the early stages) without a significant increase in total sequence length.
In practice, we use $a=1.36$ and $b=1.28$ to obtain the scales (1,2,3,4,5,6,8,10,13,16). &nbsp; > [W3] I think the paper should include the sampling algorithm of autoregressive model with hyper-parameter details such as temperature, top-k, top-p and CFG sampling. [A3] We use a temperature of $1.0$, top-k of $k=900$, top-p of $p=0.96$, and a CFG scale of $1.25$ (VAR-$d16$) or $1.5$ (the others). We will add these to our manuscript. &nbsp; > [W4] The zero-shot generalisation algorithm should be included in the paper for clarity. [A4] Thanks for this valuable advice. We will add pseudo-code detailing the zero-shot generalisation algorithm in our manuscript. The algorithms for inpainting, outpainting, and class-conditional editing are basically the same. Specifically, we mask out the corresponding region at each scale of (1,2,3,4,5,6,8,10,13,16) given the task, i.e., masking the inner area for inpainting, the outer area for outpainting, and the area we want to edit for editing. During VAR generation, ground-truth non-masked tokens are maintained (like teacher forcing) and we only collect VAR's predictions on the masked regions. ----------------------------------- **Thank you again for helping us improve the paper; we hope our response resolves your concerns! Please let us know if you have any further questions. We will be actively available until the end of the rebuttal period. If you feel your concerns are addressed, please consider reevaluating our work. Looking forward to hearing from you :-) !** --- Rebuttal Comment 1.1: Comment: Thanks for addressing my concerns. I have decided to raise my score to 7. --- Rebuttal 2: Title: We appreciate your acknowledgement of our rebuttal and the reevaluation Comment: We're glad to hear your concerns have been addressed. We will be active until the end of the discussion period. If you have more questions, please let us know. Thank you!
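For concreteness, the exponential schedule can be reproduced in a few lines of Python (not the authors' code; the round-and-deduplicate convention is our assumption, though it recovers the ten quoted scales):

```python
# Reconstructing the VAR scale schedule h_k = w_k from the exponential rule
# with a = 1.36, b = 1.28. We round to the nearest integer and deduplicate;
# this exact convention is an assumption, but it yields the quoted ten scales.
a, b = 1.36, 1.28
raw = [round(a * b ** k) for k in range(11)]
scales = sorted(set(raw))
print(scales)  # [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]
```

Under this schedule the total token count is $\sum_k h_k w_k = 680$, versus $256$ for the final $16\times 16$ scale alone, illustrating how the small early scales add autoregressive steps at little sequence-length cost.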
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Boosting Vision-Language Models with Transduction
Accept (spotlight)
Summary: The paper under review investigates the integration of contrastive vision-language pretraining (CLIP) with transductive learning methodologies. The authors are motivated by the efficacy of transductive learning in utilizing unlabeled data to enhance the performance of conventional supervised learning frameworks. They propose to extend this approach to the domain of vision-language pretraining. The key innovation lies in modeling the distribution of language data using a Gaussian mixture model (GMM). The authors introduce three specific learning objectives: GMM clustering, Laplacian regularization, and text knowledge preservation. These objectives collectively aim to regularize the distribution of unlabeled data while maintaining the integrity of text-guided vision-language pretraining. The complexity of optimizing three intertwined variables necessitates an advanced optimization strategy. To address this, the authors propose the Block Majorize-Minimize optimization technique, which iteratively fixes two variables while updating the third. This approach ensures the overall minimization of all three variables, providing a robust convergence guarantee. Through comprehensive quantitative experiments, the authors demonstrate that their proposed method can serve as a versatile framework, enhancing the performance of various backbone methods across different scenarios. Strengths: - Organization and Clarity: The paper is meticulously organized, with a clear and coherent presentation of concepts. The structured formulation facilitates a straightforward understanding of the core ideas. The writing is lucid, allowing readers to seamlessly follow the logical progression of the methodology. - Experimental Rigor: The experimental evaluation is thorough and extensive. The authors employ a diverse array of datasets and baseline methods, lending credibility and robustness to their findings. 
The significant performance improvements observed in experiments underscore the practical utility and contribution of the proposed method. - Theoretical Foundations: A detailed convergence analysis is provided, offering a solid theoretical underpinning for the optimization process. This analysis enhances the credibility of the proposed optimization strategy and reassures readers of its reliability. Weaknesses: - Unclear Motivation: The motivation for leveraging transductive learning in this context is somewhat ambiguous. The assertion that transductive learning can enhance the handling of unlabeled data, while valid in general, seems less compelling here. In the CLIP framework, vision and language data are inherently paired and labeled, diminishing the applicability of transductive learning, which traditionally targets unlabeled data. - Mismatch with Zero-Shot Learning: There is a conceptual misalignment between transductive learning and zero-shot learning paradigms. Transductive learning focuses on knowledge propagation from observed training data to observed test data. In contrast, zero-shot learning aims to generalize to unseen test data. This fundamental difference makes the application of transductive learning to zero-shot scenarios appear somewhat forced and unconvincing. - Lack of Intuitive Justification: The introduction of the GMM and Laplacian terms lacks intuitive explanation. While the quantitative results are impressive, there is an absence of qualitative analysis to elucidate why these terms specifically enhance learning performance. A deeper exploration of the underlying reasons for their effectiveness would strengthen the paper's contributions. - Computational Overhead: The integration of GMM and Laplacian regularization is computationally intensive. The paper would benefit from experimental validation of the method's efficiency, demonstrating that the performance gains justify the additional computational costs. 
Providing benchmarks or comparative analyses regarding computational efficiency would address potential concerns about scalability and practicality. Overall Assessment: In conclusion, this paper presents a novel approach to enhancing vision-language pretraining by incorporating transductive learning principles. Despite some motivational and conceptual ambiguities, the paper's methodological rigor, comprehensive experiments, and theoretical contributions make it a valuable addition to the field. Addressing the highlighted weaknesses through additional qualitative analyses and computational efficiency evaluations would further solidify the paper's impact and applicability. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see the weaknesses part. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitation in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Unclear motivation.** While VLMs such as CLIP enable zero-shot predictions, they are far from perfect (please refer to the zero-shot performances of CLIP for 6 different model sizes in Table 8 in the Appendix). Please note that CLIP receives unlabeled samples and makes class predictions by comparing the images to several candidate text prompts. Therefore, CLIP yields pseudo-labels that could be used as noisy supervision in transduction without any label. What we propose in our work is to improve zero-shot CLIP and inductive few-shot methods based on CLIP as pre-training, by incorporating the transduction paradigm during inference. **W2: Mismatch with Zero-Shot Learning.** It is true that many transductive, vision-only methods (e.g., [5, 13, 33, 36, 43, 73, 24, 72], among others) propose to propagate knowledge from observed training (labeled) data to unlabeled test data. However, in this work, we show that, in the context of VLMs, transduction could also be conducted without any label, by leveraging the pseudo-labels generated from the text embeddings, as a form of noisy supervision. Indeed, we make zero-shot predictions by employing text-based noisy supervision (the KL term in Eq. 2) and the structure of the unlabeled data in the feature space (GMM clustering and Laplacian regularization terms), which effectively corresponds to the transductive paradigm. Please note that we propose an extension of TransCLIP, named TransCLIP-FS (the objective function in Eq. 4), for cases where labeled data are available, which combines noisy supervision from the zero-shot predictions and supervision from the few-shot samples, as in the above-mentioned transductive methods. **W3: Lack of Intuitive Justification.** Thank you for giving us the opportunity to clarify. The GMM term could be viewed as a maximum-likelihood estimation objective, like in the Expectation-Maximization algorithm. 
It could also be viewed as a probabilistic generalization of the K-means clustering objective (lines 157-162). The principal difference from standard K-means clustering lies in the capability of TransCLIP to learn a covariance matrix (variable $\boldsymbol{\Sigma}$). As shown in Figures 2a and 2b, which we added for this rebuttal (please refer to the attached PDF document), the learned covariance matrix enables fitting the unevenly spread feature distribution across the dimensions of the embedding space. We also show that fixing the covariance matrix (which corresponds to K-means if we omit the Laplacian and KL terms) results in a performance drop (line 2 in Table 6a of the main paper). **W4: Computational Overhead.** The Block Majorize-Minimize procedure presented and detailed in Section 3.3 provides a decoupled $\mathbf{z}$-update equation (Eq. 5), as well as closed-form updates for GMM parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ (Eqs. 6 and 7). These decoupled updates enable the algorithm to run substantially faster than popular prompt-tuning approaches (such as UPL [25]), which require costly forward-backward propagation for multiple epochs. We report run-time comparisons in Tables 5a and 5b in the main paper. Furthermore, we add Table 1 in this rebuttal (please refer to the attached PDF document), which provides a more detailed analysis, showing: (i) TransCLIP's prediction time is a fraction of the time required for feature encoding, which makes its total runtime (encoding + prediction) comparable to the total runtime of the inductive zero-shot CLIP baseline; (ii) The total runtime of UPL is an order of magnitude slower, due to the training overhead. We will add these detailed runtimes to Tables 5a and 5b in the paper. --- Rebuttal Comment 1.1: Comment: Thanks for providing further comments to address my concerns. No further concerns remain on my side, so I would like to keep my current score.
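The K-means connection discussed in the W3 answer can be checked numerically. The following small numpy sketch (ours, not the authors' code; values are made up for illustration) verifies that with an identity covariance the diagonal-Gaussian log-likelihood is a shifted negative squared Euclidean distance (so GMM hard assignment reduces to the K-means rule), and that an anisotropic shared covariance can change the nearest prototype:

```python
import numpy as np

def diag_gauss_loglik(x, mu, var):
    # log N(x; mu, diag(var)): a Mahalanobis distance plus the normalizer.
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))

x, mu = np.array([0.3, -1.2, 0.7]), np.zeros(3)
# Identity covariance: log-likelihood == -0.5 * squared Euclidean + const.
const = -0.5 * 3 * np.log(2 * np.pi)
assert np.isclose(diag_gauss_loglik(x, mu, np.ones(3)),
                  -0.5 * np.sum((x - mu) ** 2) + const)

# A learned anisotropic (shared) covariance can flip the nearest prototype:
x2 = np.array([2.0, 0.0])
mus = np.array([[0.0, 0.0], [3.0, 0.2]])
var = np.array([100.0, 0.001])  # down-weight dim 0, up-weight dim 1
d_euc = ((x2 - mus) ** 2).sum(axis=1)
d_mah = (((x2 - mus) ** 2) / var).sum(axis=1)
assert d_euc.argmin() == 1 and d_mah.argmin() == 0
```

This mirrors the claim that learning $\boldsymbol{\Sigma}$ amounts to learning the weights of a Mahalanobis distance between samples and prototypes.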
Summary: The paper introduces TransCLIP, a transductive learning method designed to enhance the performance of vision-language models (VLMs) for zero-shot and few-shot learning scenarios. By incorporating a novel objective function constrained by Kullback-Leibler divergence, TransCLIP not only improves prediction accuracy but also ensures computational efficiency. This method operates as a plug-and-play module on existing VLMs, leveraging unlabeled data to boost the model's predictive capabilities significantly across various datasets. Strengths: TransCLIP integrates transductive learning into VLMs effectively, which is traditionally challenging due to the complexity of multimodal data integration. The method consistently outperforms standard inductive and transductive approaches by leveraging text-encoder knowledge, which guides the learning process and significantly boosts performance. Through the iterative Block Majorize-Minimize optimization procedure, TransCLIP ensures efficient computation, making it scalable for large-scale applications. The method's ability to function atop various pre-existing models without requiring modifications to the underlying architectures enhances its applicability in real-world scenarios. Weaknesses: The proposed objective function of TransCLIP is composed of multiple terms, including a Gaussian Mixture Model-clustering term, a Laplacian regularizer, and a Kullback-Leibler divergence penalty. While this multifaceted approach aims to integrate various aspects of the data, it raises concerns about potential conflicts or trade-offs between these terms. The interaction and balance among these components are crucial, as overemphasis on one could undermine the effectiveness of others, potentially leading to suboptimal learning outcomes. 
Although the paper addresses computational efficiency through an iterative Block Majorize-Minimize (BMM) optimization procedure, a detailed comparison of computation costs with other methodologies is lacking. Understanding how TransCLIP's computational demands stack up against alternative approaches, particularly in terms of time complexity and resource usage, is essential for evaluating its practical applicability and efficiency. The scalability and performance of TransCLIP when applied to very large datasets remain uncertain. While the method is claimed to be computationally efficient, the real-world effectiveness and efficiency on datasets significantly larger than those tested (like datasets beyond the scale of ImageNet) need thorough investigation. This includes an assessment of whether the benefits observed on smaller or moderate-sized datasets consistently translate to much larger scales. The paper provides theoretical convergence guarantees for the optimization procedure. However, the robustness of these proofs and their assumptions in practical, real-world scenarios could be questioned. Specifically, the conditions under which convergence is guaranteed should be scrutinized to ensure they are not overly restrictive or detached from practical applications. This scrutiny is vital to validate the method's theoretical foundation and to ensure its reliability across various deployment contexts. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses section. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Balance of terms.** We understand your concern. The terms in our objective function are not necessarily in conflict; they could be seen as a clustering (driven by the GMM term), which is regularized by the Laplacian term propagating the labels between neighboring samples, and by the KL-divergence term that incorporates the text knowledge of the VLM. Regarding the utility of the two regularization terms, we show in Table 6a (line 3) of the main paper that the Laplacian term helps improve performance. In Table 4 of the main paper, we demonstrate the importance of the KL text regularization (TransCLIP vs. TransCLIP *w/o text*). Regarding the sensitivity to hyper-parameters, we emphasize that ***the text-regularization weighting factor $\lambda$ is set to the same value for all the experiments that involve TransCLIP-ZS, regardless of the dataset or the few-shot method***. We evaluate *a posteriori* the impact of varying $\lambda$ and show that there is only a minor sensitivity as long as $\lambda$ does not tend towards $0$ (please refer to Table 6b of the main paper, and to Figure 1 that we added in this rebuttal, in the attached PDF document). We also evaluate the sensitivity to the number of neighbors considered in the Laplacian term, and show again that it does not significantly impact the performance (please refer to Table 6c of the main paper). **W2: Computation costs.** - *Time complexity*: The Block Majorize-Minimize procedure detailed in Section 3.3 presents a decoupled $\mathbf{z}$-update equation (Eq. 5) and closed-form update equations for the GMM parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ (Eqs. 6 and 7). These enable our algorithm to achieve rapid convergence compared to traditional prompt tuning methods such as UPL [25], which require extensive forward-backward propagation over multiple epochs. 
We provide runtime comparisons in Tables 5a and 5b of the main paper, and a more detailed analysis in Table 1 of the attached document. We will incorporate these new results into Tables 5a and 5b of the main paper to clarify the speed and efficiency advantages of TransCLIP. - *Resource usage*: we discuss the hardware requirements of TransCLIP in Appendix B, noting that TransCLIP consumes 16.9 GB of memory when inferring on ImageNet, making it feasible to run on a single 24 GB GPU for large datasets. **W3: Scalability to even larger datasets.** As discussed in Section 4.2, TransCLIP is easily applicable to multi-billion parameter models and large datasets since it does not necessitate gradient computation or model parameter training (i.e., it only requires the memory needed for single-sample inference, because the whole dataset processing can be performed one sample at a time). Results in the Table below support the scalability of TransCLIP in effectively managing more than one million images in a few minutes (on the ImageNet training set, which contains 1,281,167 images categorized into 1,000 classes).

| Model | Zero-shot | TransCLIP-ZS | Runtime |
|-----------|-----------|--------------|-----------|
| ViT-B/16 | 68.09 | 71.02 | 284 sec. |
| ViT-L/14 | 74.82 | 77.95 | 396 sec. |

**W4: Theoretical convergence guarantees.** As stated in Section 3.4, our optimizer can be viewed as an instance of the general Block Majorize-Minimize (BMM) paradigm, and we establish convergence of our procedure. The technical conditions that guarantee convergence stem from the mathematical properties of our objective and its block-wise majorizing functions used in the BMM optimizer (such as the strong convexity of the block-wise majorizing functions). Hence, these conditions do not restrict any practical application aspects.
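To illustrate why these closed-form block updates are cheap, here is a minimal numpy sketch of their flavor (our simplification, not the paper's exact Eqs. (6)-(7): we assume a shared diagonal covariance, and the variable names and pooling convention are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 200, 8, 3
f = rng.normal(size=(n, d))              # image features
z = rng.dirichlet(np.ones(K), size=n)    # current soft assignments (n, K)

# mu-step: prototypes as assignment-weighted means of the features.
mu = (z.T @ f) / z.sum(axis=0)[:, None]                 # (K, d)

# sigma-step: shared diagonal covariance pooled over samples and classes
# (each row of z sums to 1, so the normalizer is simply n).
resid2 = (f[:, None, :] - mu[None, :, :]) ** 2          # (n, K, d)
sigma2 = (z[:, :, None] * resid2).sum(axis=(0, 1)) / n  # (d,)

assert mu.shape == (K, d) and sigma2.shape == (d,) and np.all(sigma2 > 0)
```

Each update is a single gradient-free matrix expression, which is consistent with the per-iteration cost staying negligible next to feature encoding.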
Summary: The paper proposes a transductive method to boost the performance of existing vision-language models by assuming that all unlabeled test samples are available during the training stage. Specifically, the paper models a Gaussian mixture model, where each class is represented by a Gaussian distribution. The method refines predictions for each test sample by constraining the similarity between predictions of similar points and the similarity between the final prediction and the prediction based on text features. Experiments are conducted on standard datasets. Strengths: 1. The paper addresses a less explored area of transductive learning on vision-language models, filling a gap in the existing research. 2. The motivation for the method is clear, the modeling is reasonable, and an appropriate optimization method is provided. 3. The proposed method is quite general and can be applied in both zero-shot and few-shot scenarios. It is also compatible with other methods. Weaknesses: 1. The paper does not discuss a significant limitation of the proposed method. Specifically, when the test dataset arrives online, one by one, the method may not be able to make predictions for individual data points immediately. Instead, it may require collecting a sufficient number of data points before training the model and making predictions. Furthermore, if a new data point arrives after the model has been trained, does the model need to be retrained to make a prediction for the new point? Is there an efficient way to handle new data points? 2. Although the authors provide an analysis of training times in Table 5, the testing speed is more critical for users. An analysis of the time complexity during testing is also needed. 3. Considering that the method requires using all unlabeled test data for training, it resembles unsupervised learning (e.g., [25]). It is recommended to discuss the differences between the proposed method and existing unsupervised learning methods in detail. 
Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Copy from Q1: The paper does not discuss a significant limitation of the proposed method. Specifically, when the test dataset arrives online, one by one, the method may not be able to make predictions for individual data points immediately. Instead, it may require collecting a sufficient number of data points before training the model and making predictions. Furthermore, if a new data point arrives after the model has been trained, does the model need to be retrained to make a prediction for the new point? Is there an efficient way to handle new data points? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1.a: Online setting.** This is indeed a general limitation of transductive-inference approaches. Still, we believe this transductive setting is useful in a breadth of application domains and real scenarios, as pointed out by Reviewers NaRY and uSPt and evidenced by the abundant recent literature on transductive few-shot learning; indeed, in recent years, the transductive setting has been very popular in the vision-only few-shot learning literature (see, for instance, [5, 13, 33, 36, 43, 73, 24, 72], among others). Use cases where batches of test imaging data are available occur frequently in practice, e.g., in video-stream sequences, or in smart-device photos taken and stored every day. The setting is also very appealing in various application domains, such as remote sensing or histopathology where large images are divided into patches, or more generally, to analyze existing databases or large corpora of web-scraped unlabeled images (e.g., to create pseudo-labels for large unlabeled image sets). As correctly mentioned by the reviewer, if data points arrive sequentially, we would need to wait to collect a batch of them before conducting TransCLIP (while still using the zero-shot ability of the VLM if on-the-fly prediction is needed). We will add this discussion to the limitation section, as well as to the conclusion, as this could be an interesting direction for future work to broaden the application range of TransCLIP. **Q1.b: Prediction for new samples.** This is, indeed, a very interesting point and extension that we missed in our first version. In fact, after running TransCLIP, the GMM parameters could be stored to make a prediction on a new individual sample, resembling the inductive paradigm: $$ \mathbf{z}_{i} = \frac{ \mathbf{p} _{i} }{ \mathbf{p} _{i}^\top \mathbf{1}_K } $$ This can be seen as a *Maximum Likelihood* predictor for sample $i$. We denote this naive prediction rule $\textbf{\textit{MLH}}$. 
More interestingly, Update Eq. 5 can be directly adapted. Let's assume we don't have access to other data points (e.g., very low resource application or for privacy concerns), we remove the Laplacian term and get the following prediction rule: $$ \mathbf{z}_{i} = \frac{\hat{\mathbf{y}} _{i} ^ {\lambda} \odot \exp ( \log ( \mathbf{p} _{i}))}{(\hat{\mathbf{y}} _{i} ^ {\lambda} \odot \exp (\log (\mathbf{p} _{i}))) ^ \top \mathbf{1}_K} = \frac{\hat{\mathbf{y}} _{i} ^ {\lambda} \odot \mathbf{p} _{i}}{(\hat{\mathbf{y}} _{i} ^ {\lambda} \odot \mathbf{p} _{i}) ^ \top \mathbf{1}_K} $$ This can be seen as a *Maximum A Posteriori* predictor for sample $i$ since it is the class probability likelihood $\mathbf{p}_i$ weighted by the initial pseudo-label $\hat{\mathbf{y}} _{i}^{\lambda}$ (prior knowledge). We denote this prediction rule $\textbf{\textit{MAP}}$. Following this very interesting comment by the reviewer, we conduct new experiments (average over 100 random seeds): We split the test set, keeping randomly 10% of the data points as a held-out set, and infer TransCLIP on the remaining 90% of the samples. We then apply the prediction rule on each sample of the held-out set (independent predictions, as in the inductive setting). The Table below summarizes the performance and shows that our $\textbf{\textit{MAP}}$ predictor rule for new unseen data points still brings a significant gain, validating the feasibility of this approach. 
| Model | ImageNet | SUN397 | Aircraft | EuroSAT | Average |
|-------------------------------|----------|--------|----------|---------|---------|
| CLIP (*on held-out*) | 66.6 | 62.5 | 24.7 | 48.3 | 65.3 |
| $\textbf{\textit{MLH}}$ (*on held-out*) | 67.5 | 67.2 | 24.6 | 64.1 | 68.0 |
| $\textbf{\textit{MAP}}$ (*on held-out*) | **69.9** | **68.6** | **26.6** | **64.7** | **70.1** |
| TransCLIP-ZS (*transductive*) | 70.3 | 68.9 | 26.9 | 65.1 | 70.3 |

**Q2: Precision on runtime.** We provide more details on the runtime of each step in the attached document (see Table 1). On ImageNet, UPL$^*$ takes 151 minutes for prompt tuning, followed by 59 seconds for images+texts encoding during the inference step, resulting in a total of 152 minutes. In contrast, TransCLIP has no training stage and requires 14 seconds to run, which is only a fraction of the 59 seconds needed for images+texts encoding. We will extend Table 5 of the main paper to include these clarifications. **Q3: Comparison to unsupervised learning.** We refer to Table 8 in the Appendix, which contains a direct comparison of TransCLIP with UPL [25]. Please note that UPL was initially designed to find pseudo-labels in an unlabeled training set, followed by a training stage before inferring on the test set, making UPL an inductive method. Therefore, we extended UPL to UPL$^*$, drawing pseudo-labels directly from the test set, hence making it a transductive method (please refer to the details in lines 569-572). In conjunction with our response to W6 by Reviewer 1 (NaRY), we will add a description of UPL and UPL$^*$ in the Appendix for better clarity. We also compare TransCLIP to other unsupervised methods (in Table 8 of the Appendix) such as TPT [42] and MTA [65] (two test-time augmentation techniques) and SwapPrompt [40]. We draw attention to the differences between their settings and ours (lines 567-569), which will be extended with the discussion in "W1 and Q2" with Reviewer aJGT. 
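The MLH and MAP prediction rules above can be sketched in a few lines of numpy (illustrative numbers only, not results from the paper):

```python
import numpy as np

def mlh(p):
    # Maximum-likelihood rule: normalize the GMM class likelihoods.
    return p / p.sum()

def map_rule(p, y_hat, lam=1.0):
    # Maximum-a-posteriori rule: likelihoods weighted by the zero-shot
    # pseudo-label prior raised to the power lambda (text knowledge).
    w = (y_hat ** lam) * p
    return w / w.sum()

p = np.array([0.2, 0.5, 0.3])      # class likelihoods for one new sample
y_hat = np.array([0.8, 0.1, 0.1])  # CLIP zero-shot pseudo-label
assert np.isclose(map_rule(p, y_hat).sum(), 1.0)
# The text prior can flip the argmax relative to the likelihood alone.
assert mlh(p).argmax() == 1 and map_rule(p, y_hat).argmax() == 0
```

Note that setting `lam=0` recovers the MLH rule, since the prior term becomes uniform.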
--- Rebuttal Comment 1.1: Comment: Thanks for your response. Most of my concerns have been addressed. Considering the limitations in the online setting, I slightly increased my score to 5.
Summary: The paper proposes a method named TransCLIP that performs transductive inference to boost classification performance of Zero-Shot & Pre-trained Few-Shot CLIP models. The proposed methodology is an extension of [1], but for VLMs. TransCLIP proposes to learn the class prototypes, in contrast to the fixed prototypes considered in [1], and adds a language guidance term to penalize predictions that stray from the CLIP pseudo-label. Following [1], the paper uses a Majorize-Minimize framework to solve for the prototypes and class predictions in closed form. Experiments are shown on 11 standard datasets, showing that TransCLIP improves on top of existing few-shot fine-tuning methods. [1] Imtiaz Ziko, Jose Dolz, Eric Granger, and Ismail Ben Ayed. Laplacian regularized few-shot learning. In International conference on machine learning, pages 11660–11670. PMLR, 2020. [2] Ma, Xiaosong et al. “SwapPrompt: Test-Time Prompt Adaptation for Vision-Language Models.” Neural Information Processing Systems (2023). [3] Shu Manli et al. “Test-time prompt tuning for zero-shot generalization in vision-language models.” Neural Information Processing Systems (2022). Strengths: \+ The paper proposes to use transduction to improve CLIP’s downstream performance. This is a promising direction which has seen recent efforts [2,3]. The considered formulation is a scalable alternative to existing test-time methods. The solution is well motivated, and the paper is easy to follow. \+ TransCLIP builds on top of, and modifies, [1] for VLMs. TransCLIP makes two major design choices: i) the class prototypes are learnable; ii) a language-guidance term is added to regularize updates. Both design choices lead to improved empirical performance on CLIP. \+ The experiments are comprehensive. The proposed language guidance term and other design choices used in TransCLIP lead to improved accuracy over zero-shot CLIP and few-shot finetuned CLIP. Weaknesses: \- Discussion on Test-Time methods. 
Recently, many test-time methods (a type of transductive learning) have been proposed to improve VLM performance. The tradeoffs between the proposed approach and existing test-time adaptation (TTA) methods need to be discussed. For instance, TransCLIP requires the entire test batch, while TTA methods require fewer test samples [2,3]. \- Issues with missed references. Important references are missing in Section 3.3. The entire section follows similar arguments from [1], but re-derived by adding the language-guidance term. \- The GMM argument is unclear to me. However, the GMM clustering term in Eq. 2 is minimized when label assignments are given by the closest prototype. The loss is reminiscent of $\mathcal{N}$ in [1], but the paper proposes to use a Mahalanobis distance instead of the Euclidean distance. I am unsure of the insight provided by posing it as GMM clustering instead of drawing parallels with [1]. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As discussed above, the GMM angle is unclear, but maybe it boils down to learning class prototypes in the absence of few-shot data. 2. Existing test-time methods show strong empirical evidence of the power of transduction for VLMs. A discussion of the scope of these methods is necessary. 3. Since the primary novelty of the paper is to suggest that language-guidance can greatly improve transduction, the appropriate ablation is necessary. Setting $\lambda=0$ in the proposed KLDiv parametrization only removes the cross-entropy term, and is not the desired ablation. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed some limitations of the work. There is no potential negative societal impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 and Q2: Discussion on Test-Time methods**. We agree that the mentioned test-time methods also employ the transduction paradigm. However, their settings are in fact very different from the ones studied in our work (zero- and few-shot adaptation of CLIP, improving inductive few-shot methods). We still report their performance in Table 8 of the Appendix but we agree that a more complete discussion is needed. We will add the following discussions: - *Design*: SwapPrompt [40] has been designed to make batch predictions on-the-fly, and has continual-learning related mechanisms such as an exponential moving average prompt across batches. The setting of TPT [42] also differs from ours; it has been designed to work on a single sample with many data augmentations to train one prompt per image. - *Resource*: Both methods require access to model weights for training (i.e., do not operate in a black-box setting). We also note that prompt tuning does not scale well with the model size and is even impractical on very large models such as EVA-CLIP-8B. - *Computation overhead*: TPT takes nearly 5 hours for ImageNet. As SwapPrompt relies on prompt learning for several epochs and requires data augmentation, its running time exceeds that of UPL$^*$ (please see Table 1 in the attached document). In comparison, TransCLIP takes only 73 seconds for the whole pipeline (images and texts encoding + transduction; please refer to Table 1 in the attached document). - *Batch size*: It is true that TransCLIP first requires a test batch to work on. However, we would like to highlight our answer to Q1.b for Reviewer AvCq, which discusses and assesses how online predictions for new (single) samples can be processed after running TransCLIP. **W2: Issues with missed references. 
All Section 3.3 follows similar arguments from [1], but re-derived by adding the language term.** Please note that we mention [1] (LaplacianShot, reference [73] in the paper) in related work (lines 85-87), and provide direct experimental comparisons to it in Table 4 in the paper, as well as in Tables 17 to 21 in the Appendix. We agree with the reviewer that we could emphasize that the derivation of [73] (i.e., linearization of the Laplacian term) is a sub-step of our approach but without the text term (this could be done in sub-section Majorize-Minimize with respect to the z-block in section 3.3). We clarify, however, that [73] does not have a Block Majorize-Minimize (BMM) optimization procedure (i.e., the additional $\boldsymbol{\mu}$ (prototypes) and $\boldsymbol{\Sigma}$ (covariance) steps in Eqs. (6) and (7) in our section 3.3). The objective function in [73] is quadratic (the term using the fixed prototypes being linear) and has only assignment variables, whereas our objective function is higher-order with three blocks of intertwined variables (prototypes, covariance and assignments). Interestingly, Table 4 shows that, even without the text term, our objective function brings significant improvements over the LaplacianShot one when the number of shots increases (due to learning prototypes $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$ from both unlabeled and labeled data). We would also like to note that the general principle of linearizing concave quadratic terms (which provides a majorizing function on the Laplacian term in our case) is quite common in the Majorize-Minimize optimization literature, and has been used in other works in machine learning, even much earlier than LaplacianShot, e.g., the following work in the context of conditional random fields (CRFs): Krahenbuhl and Koltun, Parameter Learning and Convergent Inference for Dense Random Fields, ICML 2013. 
We will add this reference and connect to the LaplacianShot procedure ([73]) in Section 3.3 (e.g., at line 193). **W3 and Q1: GMM argument.** There are two major differences with respect to the linear term $\mathcal{N}(Y)$ in LaplacianShot [73], in which the class prototypes are fixed (computed from the labeled shots only) and the assignment vector is the only block of variables. First, as mentioned by the reviewer, our formulation enables learning class prototypes $\boldsymbol{\mu}$ from both the unlabeled and labeled data (both in zero- and few-shot). This claim is supported by the results in lines 1 and 4 of Table 6a in the main paper, which show that performance tends to decrease when the class prototypes are fixed. Second, by introducing a learnable covariance matrix $\boldsymbol{\Sigma}$, we enable TransCLIP to better model the feature variance, which may be unevenly spread across the dimensions of the embedding space. To further support this claim, we add Figures 2a and 2b in the attached PDF document, which show that feature variance varies across dimensions, and that our learnable $\boldsymbol{\Sigma}$ tends to fit this particular distribution. You are correct that learning a covariance matrix is equivalent to learning the weights of the Mahalanobis distance between the data points and their class prototypes, and we will add this discussion to Sec. 3.1, in the GMM-based clustering part. Thank you for pointing that out! Interestingly, the learned GMM parameters can then be used to make predictions on new incoming samples, as discussed in Q1.b for Reviewer AvCq. **Q3: KL Div parametrization.** We would like to point to Table 6b in the main paper, which studies the impact of $\lambda$; however, that table covers only the zero-shot setting. 
Therefore, we provide here a more thorough study (please refer to Figure 1 in the attached document) for a wider range of settings (zero-, 1-, 4-, and 16-shot), which clearly shows both the importance of this term and its stability around the selected value. Indeed, the average zero-shot performance does not vary significantly between $\lambda=0.5$ and $\lambda=1.6$, and $\lambda=0.5$ is an acceptable value for all few-shot settings. We will add this extension to our current ablation studies. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. Since my concerns have been addressed, I have increased my score to 6. I recommend incorporating the discussions on test-time methods into the main paper.
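As a side note for readers, the Mahalanobis-distance view of the learnable covariance discussed in W3/Q1 above can be sketched in a few lines. This is a minimal illustration under assumed shapes (a shared diagonal covariance, hypothetical function name), not TransCLIP's actual implementation:

```python
import numpy as np

def gmm_assignments(X, mu, sigma_diag):
    """Soft sample-to-class assignments for a GMM with a shared diagonal
    covariance: a softmax over negative squared Mahalanobis distances
    between each sample and each class prototype."""
    # X: (n, d) features; mu: (K, d) prototypes; sigma_diag: (d,) variances.
    diff = X[:, None, :] - mu[None, :, :]        # (n, K, d)
    d2 = (diff ** 2 / sigma_diag).sum(axis=-1)   # squared Mahalanobis distance
    logits = -0.5 * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    z = np.exp(logits)
    return z / z.sum(axis=1, keepdims=True)      # each row lies on the simplex
```

With `sigma_diag` fixed to ones this collapses to the spherical, soft-K-means case; a learnable `sigma_diag` weights each embedding dimension differently, which is the Mahalanobis reading mentioned in the rebuttal.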
Rebuttal 1: Rebuttal: We greatly appreciate the reviewers' insightful and constructive comments, and are pleased that four out of the five reviewers voted towards acceptance. We are also glad that the reviewers found the method useful for various domains and real-world scenarios (Reviewers NaRY and uSPt), the experiments comprehensive and convincing (Reviewers aJGT, bXXg and NaRY), and the work to fill a gap in the literature (Reviewer AvCq). The reviewers also highlighted the interest of our iterative block majorize-minimize (BMM) procedure, which tackles three blocks of intertwined variables (prototypes, covariance and assignments) while guaranteeing convergence. Below, we address all the questions raised by the reviewers. In particular, we provide additional ablation studies and clarifications, as requested. For the reviewers' convenience, we have included additional illustrations in the attached document. Pdf: /pdf/def8ea10c03d636c93a49f94cac36d41b35d0bb2.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: In this paper, the authors explore fine-tuning VLMs to specific unlabelled/partially-labelled datasets in a transduction setting. The authors propose an objective function to carry out joint inference of labels for all the test samples simultaneously. The authors then propose an iterative block Majorize-Minimize procedure to arrive at a local optimum. Empirical results on multiple datasets show clear superiority of the proposed method over other prompt-based fine-tuning and inductive zero-shot and few-shot methods. Strengths: The paper is generally well-written, easy to follow, and intuitive. Experiments clearly show a big improvement over the current state of the art. I believe this method could be very useful for downstream applications of VLMs in different domains. Weaknesses: Some aspects of the writing should be made more clear. 1. In Eq. 2, did you mean $z_i^\top z_j$, since both are vectors? 2. How is the final prediction done for the $i$-th sample? Is it the argmax of $z_i$? 3. Design choice: The experiments show great results, but is there a priori any expectation for using a GMM model for the classes? Perhaps this is related to the linear manifold hypothesis for these VLMs and LLMs. Some more discussion on this would be good. As written, it comes across as an arbitrary choice that happens to work well, with no stated intuition behind it. 4. Line 167: why is the max function needed for W to be PSD? Even without the max operator, W will be a Gram matrix and thus PSD. Am I missing something here? 5. Why does the Laplacian regularization enforce visually similar points to have the same assignment? I understand this when $z$ is one-hot, but here $z$ is any vector in the simplex. Is this still true when $z$ belongs to the simplex? 6. One of the baselines used in the paper is a modification of UPL, but this modification is not clearly mentioned in the paper or the appendix. How is the modification done in the transduction setting? 
Also, a small description explaining UPL would be good in the appendix, for added context for readers. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses above. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors have adequately discussed limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Eq 2.** It's an omission on our side; indeed, it should be $\mathbf{z}_i^\top \mathbf{z}_j$. Thanks for pointing that out! We will update the objective function accordingly. **W2: Final prediction.** Yes, we take the argmax for the final prediction. We should have made it clearer at the end of Section 3.3 as follows: "Note that, after convergence, we use the sample-to-class assignment variables $\mathbf{z}_i$ as predictions for each sample $i$ of the query set $\cal Q$, *using the argmax operation for conventional classification*." We will also update Algorithm 1 in the Appendix to further clarify this (line 13: return argmax($\mathbf{z}$)). **W3: Design choice (using a GMM).** We agree that further discussions / illustrations on this choice would be helpful to readers. The most basic unsupervised clustering objective would be K-means. This could be viewed as an extension of inductive methods (e.g., CoOp, which also searches for new class centroids via prompt tuning) to unlabeled test data. However, K-means makes the assumption that the clusters are 'spherical', which may not be the case for the embedding vectors produced by VLMs. Indeed, K-means corresponds to a particular (simplified) case of our formulation, in which the covariance matrix is fixed to the identity matrix, as pointed out in lines 160-161. The ablation studies in Table 6a show TransCLIP performance when $\boldsymbol{\Sigma}$ is not updated (note that the other regularization terms are present), resulting in a large performance drop. This suggests that the spherical-cluster assumption (i.e., identity covariance matrix), as in K-means, may not be valid for VLM embeddings. 
Table 6d also shows the impact of having an isotropic $\boldsymbol{\Sigma}$, which could be seen as an intermediate solution between K-means (with fixed $\boldsymbol{\Sigma}$) and the TransCLIP objective function (with learnable $\boldsymbol{\Sigma}$), demonstrating the importance of learning $\boldsymbol{\Sigma}$. To provide further insights, we added Figures 2a and 2b to this rebuttal (please refer to the attached PDF document), which show that feature variance varies across the dimensions and that our learnable $\boldsymbol{\Sigma}$ tends to fit such 'elliptical' cluster distributions. Additionally, the introduction of a learnable $\boldsymbol{\Sigma}$ can also be seen as learning the weighting factors of the Mahalanobis distance between the class centroids and the data points (as pointed out by Reviewer aJGT). **W4: Affinity matrix.** You are right; our affinity matrix W can be rewritten as $F^\top F$ and is indeed a Gram matrix, hence PSD. Therefore, the max operator is not necessary to ensure the PSD condition. We will remove this statement. Thanks! **W5: Effect of Laplacian regularization beyond the simplex vertices.** Yes, the effect of Laplacian regularization holds within the simplex, beyond the vertices. Indeed, for a given, non-vertex $\mathbf{z}_i$ lying within the simplex, the vector $\mathbf{z}_j$ that maximizes $\mathbf{z}_i^\top \mathbf{z}_j$ is the one-hot (vertex) vector where the entry corresponding to the argmax of $\mathbf{z}_i$ is 1 (e.g., for $\mathbf{z}_i = [0.7, 0.2, 0.1]$, $\mathbf{z}_j = [1, 0, 0]$ maximizes the dot product). This function is convex in $\mathbf{z}_j$ over the simplex, and any deviation of $\mathbf{z}_j$ from the maximizing vertex can only cause a decrease in the dot product. Of course, this is an intuitive example, but the general behavior of the Laplacian term in our objective function is more complex. 
Multiple points are linked to each other, with potentially different predictions (and different most-confident classes), combined with the KL term penalizing divergence from the zero-shot prediction (which can pull solutions away from the one-hot vertices). **W6: Modification of UPL.** As suggested, we will add a small description of UPL and UPL$^*$ in the Appendix (in C.1 Zero-shot, line 570) for better clarity. UPL: Unsupervised Prompt Learning [25], where N = 16 hard pseudo-labels per class are generated from a training set. Then a cross-entropy loss function on soft tokens is minimized (see UPL$^*$ below). UPL$^*$: We implement a natural extension of UPL. N = 16 hard pseudo-labels per class are generated from the query (test) set ${\cal P}\subseteq {\cal Q}$ according to the prediction's confidence. For fairness, we reevaluated the number of pseudo-labels to select and still found that 16 per class yields the best results on average, as seen in Table 22. The following cross-entropy loss function is minimized: $$ \mathcal{L}(\overline{\text{V}} \mid \{ \mathbf{x}_i \}_{i=1}^{|{\cal P}|}) = \frac{1}{|{\cal P}|} \sum_{j \in {\cal P}} \mathcal{L}_{\text{UPL}}(\overline{\text{V}} \mid \mathbf{x}_j) $$ where $\overline{\text{V}}$ denotes the vector of learnable context token embeddings. At inference, the learned tokens are used to generate the text embedding of each class.
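As a numeric aside on the simplex argument in W5 above, the claim that $\mathbf{z}_i^\top \mathbf{z}_j$ is maximized by the one-hot vector at the argmax of $\mathbf{z}_i$ can be checked directly. The sketch below is illustrative only, using the example values from the rebuttal:

```python
import numpy as np

z_i = np.array([0.7, 0.2, 0.1])  # a non-vertex point of the simplex

# Candidate z_j: the three simplex vertices plus two interior points.
candidates = [np.eye(3)[k] for k in range(3)]
candidates += [np.array([0.5, 0.3, 0.2]), np.full(3, 1.0 / 3.0)]

scores = [float(z_i @ z_j) for z_j in candidates]
best = candidates[int(np.argmax(scores))]

# The maximizer is the vertex [1, 0, 0], i.e. the one-hot vector at
# argmax(z_i); the interior candidates score strictly lower.
assert np.allclose(best, np.eye(3)[0])
```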
In-Context Learning State Vector with Inner and Momentum Optimization
Accept (poster)
Summary: This work introduces the novel concept of the state vector. It encapsulates the information of ICL examples from separate tokens as anchors. Drawing inspiration from the duality between transformer attention and gradient descent, the authors implement inner and momentum optimization to generate task-specific state vectors. To address the context length limitation of LLMs, a divide-and-conquer strategy is employed to aggregate information from multiple examples into a single state vector. Experimental results demonstrate that patching the state vector during the inference forward pass outperforms previously proposed task vectors and function vectors. Strengths: 1. Although the idea of using vectors to capture in-context learning is not new, the proposed approach of optimizing the state vector using inner/momentum optimization is well-motivated. 2. The paper is well organized and technically sound, with thorough experiments and ablation studies. 3. The experimental results and the analyses provide some useful insights. Weaknesses: 1. It is not clear whether larger models or more complex datasets can still benefit from this work. Is this work only applicable to relatively small models? 2. It is unclear why a dummy query has to be included when extracting the state vector, whose task is to encapsulate the examples only. What happens if it is omitted? 3. Can we use this vector to further align LLMs, e.g., by encapsulating such vectors into memory/vector bases for reuse and continual learning of new knowledge? Technical Quality: 3 Clarity: 2 Questions for Authors: ## Summary of review In general, the paper presents an interesting concept, and the overall submission is technically sound. Therefore, I am positive about this paper at this point. However, I hope the authors can address the concerns raised in the weakness section by providing extra explanations and discussions regarding the unclear aspects. Also, please respond to the additional questions below. 
If the authors address my concerns/questions well, I will keep the positive accept rating. ## Additional questions 1. I would like to know whether the ICL state vector can be used in actual alignment scenarios. If we switch to a long-context version of the model, how will the ICL state vector be different? 2. The previous study demonstrates that the label words are information anchors [1], while this work aggregates the information of separate tokens in the state vector; is there any evidence that the separate tokens can also gather in-context information in a progressive manner? [1] Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023. Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9840–9855, Singapore. Association for Computational Linguistics. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `MvM2`, Thank you for your review. According to the feedback from you and other reviewers, we have conducted additional experiments and analysis. We have uploaded a [Rebuttal PDF](https://openreview.net/attachment?id=ulPGXOjfvv&name=pdf) that contains new figures and tables. We provide the analysis in `General Response` (shown in a separate comment on this page). ***We would like to address your concerns and questions in detail below.*** *** ## **Concern 1: Limitation in Model Size and Datasets** To address your concerns, we have conducted additional experiments using the larger Llama-2-70B model to evaluate the effectiveness of our proposed optimization method and the D&C aggregation. Detailed results and analysis can be found in the "Experiment on Llama-2-70B" in our `General Response`. Notably, the performance of ICL shows some improvement with the larger Llama2-70B model; our method continues to significantly enhance the state vector and surpass regular ICL in few-shot settings. Regarding the dataset limitations, please refer to the response to "Question 2" where we present experiments on more complex alignment tasks. We hope that these additional results satisfactorily address your concerns. ## **Question 1: Reason of Introducing Dummy Query** In our preliminary experiments, we observed a decline in performance in state vector extraction when we omitted the dummy query and simply added a separate token right after the demonstration. This finding aligns with earlier studies [1,2]. The issue likely arises because omitting the query alters the input format significantly from regular ICL. Therefore, we follow the previous settings by introducing a "dummy query" and extracting the state vector from the separator token that follows this dummy query. Thank you for your insightful question. We appreciate your feedback and hope this clarifies our approach. 
[1] *In-context learning creates task vectors* [2] *Function vectors in large language models* ## **Question 2: Performance on Alignment Task** Thank you for your valuable question. We present the performance of the optimized state vector on the alignment task in a zero-shot setting. The results are shown in `Table 8` in the `Rebuttal PDF`. Our findings indicate that although the state vector is slightly inferior to the regular ICL baseline, it still demonstrates significant potential as an effective alignment approach. By omitting the demonstration in the input, our method significantly reduces the time cost (e.g., by 5.4$\times$ on Llama-2-7B) while achieving 90% of the performance of regular ICL. Moreover, compared to mainstream alignment-tuning methods such as instruction fine-tuning and reinforcement learning from human feedback, our method achieves an average of 85% of their performance without requiring any additional training. Notably, with models like Mistral-7B and Llama2-70B, our state vectors can achieve 80% of the alignment performance of GPT-4-0613. These results demonstrate that state vectors can enable efficient and effective model alignment, and that inner optimization is beneficial for complex alignment tasks. We appreciate your feedback and hope this addresses your question comprehensively. ## **Question 3: Performance on Longer Model** Thank you for your insightful question. Switching to a long-context version of the model will result in a slightly different ICL state vector due to several factors: 1. **Extended Token History:** The state vector can access a more extended history of tokens, allowing it to retain more contextual information and improve performance on tasks with long-term dependencies. 2. **Capturing Intricate Patterns:** With a longer context, the state vector can capture more intricate patterns and relationships within the data, leading to more accurate and contextually appropriate responses. 3. 
**Coherent Outputs:** Long-context models generally produce more coherent outputs, as the state vector maintains a stable understanding over extended sequences, reducing the risk of losing context. 4. **Enhanced Complexity Handling:** The state vector in a long-context model can handle complex tasks more effectively, benefiting from the comprehensive integration of extensive contextual information. We appreciate your feedback and hope this explanation clarifies the potential impacts of using a long-context model. ## **Discussion about information anchors** Thank you for your insightful comments. We are pleased to discuss the mechanism of information gathering in ICL. Our research provides further evidence that separate tokens can progressively gather in-context information. In this paper, we investigate the impact of layer selection on state vector extraction in `Section 6.2 Layer Selection.` We observe that initially, increasing the number of layers for state vector extraction improves performance, and it reaches peak performance around the middle layers. This indicates that ICL information is progressively integrated into the separate token. In the work "Label Words are Anchors," it is found that information flows first from the example to its corresponding label words in the early layers, and then from the label words' representation to the final separate token representation in different layers. This also shows that ICL information progressively flows to the separate token and peaks around the middle layers. These results are consistent with our observations. We hope this explanation addresses your concerns. If you have any further questions or need additional clarification, please let us know. We are more than happy to discuss this further. [1] *Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.* *** Thank you very much again for your great questions and suggestions. 
Please let us know if you have any further questions, as we are happy to continue the discussion. --- Rebuttal 2: Title: A follow-up message about the rebuttal for the paper15186 Comment: Dear Reviewer `MvM2`, We hope you are doing well. As the discussion period is coming to an end (Aug 13), we wanted to reach out to see if you have any follow-up questions. If so, we would appreciate the opportunity to respond before the discussion period ends. We believe our above messages should have addressed your concerns, and therefore may warrant an increase in score if you find them helpful as well. Would you please let us know if you have any further questions or concerns? We are happy to continue the discussion. Thank you very much again for your thoughtful review and help in improving the paper. We appreciate your time and consideration. Best regards, Paper15186 Authors --- Rebuttal Comment 2.1: Title: Thank you for your response Comment: Thank you for your response to the rebuttal. I appreciate that my concerns have been taken into consideration. I suggest that the authors enhance the paper by integrating feedback from other reviewers, including the use of larger-scale models and addressing presentation issues. As a result, I maintain my acceptance of the paper at this stage. --- Reply to Comment 2.1.1: Comment: Dear Reviewer `MvM2`, We are glad to have addressed your concerns. Please let us know if you have any further questions, as we are happy to continue the discussion.
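For readers following this thread, the two optimization stages referenced above (inner optimization as a uniform average, momentum optimization as momentum-style smoothing) can be sketched generically. Function names and the pseudo-gradient framing are assumptions for illustration, not the authors' actual code:

```python
import numpy as np

def inner_optimize(vectors):
    """Inner optimization (as described): a uniform average over
    per-example state vectors."""
    return np.mean(vectors, axis=0)

def momentum_refine(deltas, beta=0.9, lr=1.0):
    """Classic momentum accumulation: each incoming vector is treated as a
    pseudo-gradient; an exponential moving term smooths the updates."""
    velocity = np.zeros_like(deltas[0], dtype=float)
    state = np.zeros_like(deltas[0], dtype=float)
    for g in deltas:
        velocity = beta * velocity + g   # accumulate momentum
        state = state - lr * velocity    # apply the smoothed update
    return state
```

The momentum loop is the textbook heavy-ball update; the paper's momentum optimization is motivated by the same idea, applied to state vectors under the ICL-as-gradient-descent view.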
Summary: The paper presents an in-depth analysis and optimization of In-Context Learning (ICL) within large language models (LLMs). In particular, it proposes inner and momentum-based optimization techniques to increase the performance of in-context learning. Strengths: 1. In-depth analysis of ICL vectors across 12 different datasets (linguistic, translation, and knowledge). The analysis provides insights into ICL vectors and motivates the optimization. 2. The optimization method can be divided into inner and momentum-based stages, each of which has shown promising improvement. 3. The ablation studies are extensive. 4. The presentation of the research question is clear and easy to follow. Weaknesses: 1. The method itself is somewhat simple. Specifically, the inner optimization is merely a uniform averaging. 2. The definition of state vectors lacks a rigorous theoretical basis, making it difficult to infer the approach's generalizability and reliability across different NLP tasks. 3. The experiments are conducted on Llama-2-7B and GPT-J-6B, which are limited in terms of scale and performance. Technical Quality: 3 Clarity: 3 Questions for Authors: In principle, ICL plays a more significant role in larger models (also with better long-sequence performance). Does this method benefit these models? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `4gb3`, Thank you for your review. According to the feedback from you and other reviewers, we have conducted additional experiments and analysis. We have uploaded a [Rebuttal PDF](https://openreview.net/attachment?id=ulPGXOjfvv&name=pdf) that contains new figures and tables. We provide the analysis in `General Response` (shown in a separate comment on this page). ***We would like to address your concerns and questions in detail below.*** *** ## **Concern 1: Method is too Simple** We respectfully disagree with the comments on the simplicity of our work. Before we dive into the concrete points, we would like to clarify our motivations and contributions: - We have conducted a comprehensive study of the working mechanism of the compressed vector in ICL. We demonstrated that attention activations can be equivalent to updated parameters via gradient descent under specific conditions. Leveraging this understanding, we proposed the definition of the state vector. To the best of our knowledge, we are the first to analyze the compressed vector in ICL through the dual view of ICL and gradient descent. - Inspired by the concepts of model soup and momentum gradient optimizer, we proposed the Inner Optimization and Momentum Optimization methods for the state vector. Extensive experiments on a range of tasks have shown that our methods significantly enhance the effectiveness and efficiency of the state vector. - To address the example limitation caused by model input length constraints, we introduced the Divide-and-Conquer Aggregation method. Extensive experiments demonstrate that our approach effectively aggregates information from state vectors, successfully scaling up to a large number of examples. Although our inner optimization can be viewed as a kind of uniform averaging, it is a simple yet effective method to enhance the state vector within a single forward pass. 
Additionally, our proposed momentum optimization further enhances the inner optimized state vector. Our work not only provides a novel theoretical perspective on ICL compressed vectors but also introduces practical applications for the study of ICL's working mechanism. **In this way, our work, as the first to establish a bridge between the theoretical understanding of ICL's working mechanism and the application of ICL compressed vectors, offers a novel integration in the field.** We hope this explanation addresses your concerns. If you have any further questions, please feel free to let us know; we would be happy to discuss them with you. ## **Concern 2: Limitation in Theory Basis** We acknowledge that our theoretical development is still in the early stages. Due to the complexity of transformers, rigorously proving the mechanisms of ICL is very challenging. However, our method demonstrates that under specific conditions, ICL can execute gradient descent algorithms. Previous works [1,2] have also built upon this understanding, conducting experiments to support it through attention pattern similarity. While we cannot mathematically infer the generalizability and reliability of our method, we have extensively validated it across 12 datasets. The experiments demonstrate that our method is both general and robust. Additionally, we have provided results on more complex and realistic alignment tasks. Please refer to the "State Vector for Alignment" section in our `General Response` for detailed experiments and analysis. Notably, our optimized state vectors can achieve 85% of the alignment performance of GPT-4 on Mistral-7B and Llama-2-70B. We hope this explanation and the additional experiments address your concerns regarding the theoretical basis and practical applicability of our approach. [1] *Why can gpt learn in-context? 
language models implicitly perform gradient descent as meta-optimizers* [2] *In-context Learning and Gradient Descent Revisited* ## **Concern 3: Limitation in Model Size** Thank you for your insightful feedback and for pointing out the potential limitations concerning model sizes. In response to your concerns, we have performed additional experiments using the larger Llama2-70B model to evaluate the effectiveness of our proposed optimization method and the D&C aggregation. Due to resource constraints, we were unable to extend our method to even larger models. Detailed results and analysis can be found in the "Experiment on Llama-2-70B" in our `General Response`. Importantly, although the performance of ICL exhibits some improvement with the larger Llama2-70B, our method continues to significantly enhance the state vector and surpass regular ICL in few-shot settings. We hope that these additional results satisfactorily address your concerns. *** Thank you very much again for your great questions and suggestions. Please let us know if you have any further questions, as we are happy to continue the discussion. If you find that our response addresses your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration. --- Rebuttal 2: Title: A follow-up message about the rebuttal for the paper15186 Comment: Dear Reviewer `4gb3`, We hope you are doing well. As the discussion period is coming to an end (Aug 13), we wanted to reach out to see if you have any follow-up questions. If so, we would appreciate the opportunity to respond before the discussion period ends. We believe our above messages should have addressed your concerns, and therefore may warrant an increase in score if you find them helpful as well. Would you please let us know if you have any further questions or concerns? We are happy to continue the discussion. Thank you very much again for your thoughtful review and help in improving the paper. 
We appreciate your time and consideration. Best regards, Paper15186 Authors --- Rebuttal Comment 2.1: Title: Key concerns remain Comment: I thank the authors for this detailed reply. However, regarding my points, the response is not satisfactory. 1. The formalization of the state vector is the key point if the authors claim this method "as the first to establish a bridge between the theoretical understanding of ICL's working mechanism and the application of ICL compressed vectors". What is the advance of this method from this perspective? I think this would enhance the theoretical basis. How does this improve the authors' understanding and motivate this simple method? 2. The added experiment in Figure 10 actually raises more questions. Firstly, it looks like the "regular" baseline is missing. Also, for the few-shot settings on Llama-2-70B, the method does not yield significant gains (results near or below ICL), especially for the AVG. Does this mean average aggregation is not very effective (given its simple form)? The same holds for main Figure 2. For the 70B results, D&C also improves less over AVG. Why does this happen? Improvement on larger/more powerful base models is of greater concern given the nature of ICL. All in all, I am currently unable to change my score. --- Reply to Comment 2.1.1: Comment: We sincerely appreciate your thoughtful feedback and questions. We are more than happy to engage in a discussion to address the concerns regarding the formalization of the state vector and the additional aggregation experiment on Llama-2-70B. *** ## **The Advantage of State Vector Formalization** We are glad to clarify that our formalization of the state vector contributes to the advancement of research in both the ICL working mechanism and ICL compression vector domains. 
**ICL Working Mechanism** Previous studies [1, 2, 3] based on the dual view of ICL and gradient descent have primarily provided evidence from the perspective of attention pattern similarity. In contrast, our research demonstrates that certain optimization algorithms applied to the gradient can also effectively refine the state vector. This finding introduces new evidence from the perspective of ICL compression vector application. Although we acknowledge that this approach does not conclusively establish the ICL mechanism, we argue that our study presents a novel perspective and empirical support that contribute to advancing the understanding of ICL's underlying mechanism. **ICL Compression Vector** Earlier studies [4, 5] on ICL compression vectors suggested that hidden states or attention activations in transformers contain information related to the ICL task function. However, these studies were largely based on hypotheses and empirical analysis, lacking a solid theoretical foundation. Our work extends this line of inquiry by leveraging the dual view of ICL and gradient descent. We propose that attention activations can be viewed as trained parameters. This offers theoretical justification for the idea that attention activations could store the ICL task function, thereby providing a more robust basis for understanding their role. In summary, we argue that our work provides new insights and evidence that contribute to the advancement of research in both the ICL working mechanism and the application of compression vectors. [1] *Why can gpt learn in-context?
language models implicitly perform gradient descent as meta-optimizers* [2] *In-context Learning and Gradient Descent Revisited* [3] *Transformers Learn In-Context by Gradient Descent* [4] *In-context learning creates task vectors* [5] *Function vectors in large language models* --- Rebuttal 3: Comment:

## **The Experiment Result in Figure 10**

We sincerely appreciate your insightful observations regarding the experiment in `Figure 10`. Thank you for pointing out the issue with the missing regular baseline. To address this and supplement our analysis, we have provided the original results of the Llama-2-70B aggregation experiment below. We will also revise our tables accordingly. Due to space constraints, we are only presenting the results for the 10-example and 100-example settings. We categorize the results into zero-shot and few-shot settings.

**Zero-shot**:

| | Regular | Avg. 10-examples | Avg. 100-examples | D&C 10-examples | D&C 100-examples |
| ----------------- | ------- | ---------------- | ----------------- | --------------- | ---------------- |
| Person-Occupation | 0.0 | 7.6 | 18.1 | 0.7 | 31.8 |
| Antonym | 0.1 | 61.0 | 63.1 | 51.7 | 64.5 |
| English-French | 0.3 | 35.7 | 34.1 | 26.7 | 46.9 |
| Product-Company | 0.1 | 11.0 | 17.5 | 32.9 | 34.2 |

**Few-shot**:

| | ICL baseline | Avg. 10-examples | Avg. 100-examples | D&C 10-examples | D&C 100-examples |
| ----------------- | ------------ | ---------------- | ----------------- | --------------- | ---------------- |
| Person-Occupation | 70.4 | 68.7 | 71.1 | 48.0 | 72.6 |
| Antonym | 68.1 | 67.7 | 68.3 | 62.2 | 71.7 |
| English-French | 82.9 | 82.1 | 82.6 | 77.5 | 84.6 |
| Product-Company | 80.1 | 80.2 | 80.9 | 75.5 | 85.1 |

As shown in the few-shot results, our D&C aggregation performs worse than the ICL baseline and average aggregation when the number of aggregated examples is small.
The primary reason for this is that the conquer stage relies on a limited number of examples (e.g., only 1-shot in the conquer stage of D&C aggregation, compared to 10-shot for average aggregation when using a total of 10 examples). This limitation prevents the model from effectively compressing information from the group-specific state vectors into the final D&C aggregated state vector. However, as the number of examples increases, the performance of D&C aggregation improves significantly. At the 100-example setting, it outperforms the ICL baseline across four datasets (**with improvements of 2.2 on Person-Occupation, 3.6 on Antonym, 1.7 on English-French, and 5.0 on Product-Company**). Additionally, in the experiments shown in `Figure 2`, our method also achieves notable improvements (**ranging from 1.4 to 11.3 on Llama-2-7B and from 1.3 to 10.1 on GPT-J**). **We achieved similar improvements on the larger Llama-2-70B model as we did on the smaller Llama-2-7B model**, which demonstrates the effectiveness of our D&C aggregation method. Regarding the average aggregation, we would like to emphasize that **we included the average aggregation as a straightforward and intuitive baseline rather than as the proposed method (it is not our main contribution)**. Consistent with your observations, we also found that the averaging aggregation does not yield significant results on Llama-2-70B. We attribute this to the fact that the simple average method, while providing some improvement, is insufficient for state vector aggregation. In contrast, our proposed D&C aggregation surpasses average aggregation under the 100-shot setting, which shows that our method is more effective and offers greater improvement over average aggregation. *** Thank you very much again for your thoughtful review and feedback. Please let us know if you have any further questions, as we are happy to continue the discussion.
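To make the divide and conquer stages discussed above easier to follow, here is a rough Python sketch of the control flow. The helpers `extract` (run the model on demonstrations plus a dummy query and read the state vector at the dummy-query position) and `answer` (produce a label for the dummy query, which we assume here uses the group-specific vector) are hypothetical stand-ins, since the real method reads attention activations inside the transformer; this illustrates only the structure of the algorithm, not the actual extraction.

```python
def dnc_aggregate(groups, extract, answer):
    """Divide-and-conquer aggregation of state vectors (sketch).

    groups:  demonstration groups, e.g. lists of (input, label) pairs
    extract: hypothetical stand-in for "run the model and read the state
             vector at the dummy-query position"
    answer:  hypothetical stand-in for the model's label for a dummy query
    """
    one_shot_demos = []
    for group in groups:
        dummy_query = "dummy query"
        # Divide stage: one group-specific state vector per group,
        # extracted at a group-specific dummy query.
        group_vector = extract(group, dummy_query)
        # Pair the dummy query with its label to form a one-shot demo
        # (we assume the label is produced under the group vector).
        one_shot_demos.append((dummy_query, answer(group_vector, dummy_query)))
    # Conquer stage: feed the one-shot demonstrations with a new dummy
    # query and extract the aggregated state vector from that query.
    return extract(one_shot_demos, "new dummy query")
```

With 10 examples total there is a single group, so the conquer stage sees only one one-shot demonstration, which is the limitation described above.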
Summary: The authors focus on the compressed vectors in In-Context Learning (ICL). They highlight the similarities between the compressed vectors and parameters trained via gradient descent, proposing the formulation of the state vector. Then they apply two optimization algorithms to progressively refine the state vector. The proposed method achieves SotA performance on diverse tasks. Strengths: 1. The authors introduce the formulation of the state vector that encapsulates the processing state of ICL stored in the attention activations. 2. The proposed two optimization methods for the state vector seem to be effective. 3. The research topic is significant and intriguing. Weaknesses: 1. The proposed state vector seems to benefit from examples (as shown in Eqn. 5); could you explain how it works without demonstrations in the zero-shot setting (as shown in Table 1)? 2. As the ICL baseline makes predictions conditioned on the demonstrations, could you explain why its performance remains unchanged with different numbers of examples in Figure 2? 3. The proposed method mainly draws inspiration from trainable parameters and gradient optimization algorithms. However, the performance of applying other classical gradient optimization algorithms deteriorates significantly in Table 2. And the authors argue that "The outcome indicates a discrepancy between the state vector and updated parameters with gradient descent." This confuses me a lot. The authors should state their similarities and differences more clearly and precisely. Technical Quality: 2 Clarity: 2 Questions for Authors: See weakness. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `yzeU`, Thank you for your review. According to the feedback from other reviewers, we have conducted additional experiments and analysis. We have uploaded a [Rebuttal PDF](https://openreview.net/attachment?id=ulPGXOjfvv&name=pdf) that contains new figures and tables. We provide the analysis in `General Response` (shown in a separate comment on this page). ***We would like to address your concerns and questions in detail below.***

***

## **Question 1: State Vector Usage in Zero-shot**

We would like to clarify the input of our optimization method. When extracting the state vector, we use demonstrations and a dummy query as input. Then we extract attention activations from the forward pass to form the state vector. Thus, the process is the same for both the zero-shot and few-shot settings. Subsequently, we obtain the processed state vector through optimization or aggregation methods. The extraction and the subsequent optimization or aggregation are performed only once, since they are independent of the test data. During testing, our experiments are divided into zero-shot and few-shot settings, which differ in the input provided alongside the test query. In the zero-shot setting, we only input the query, whereas in the few-shot setting, we input both the demonstrations and the query. During the forward pass, we use the attention activations stored in the state vector to replace the ones computed in the transformer. We hope this explanation addresses your question. Should you have any further inquiries, we are at your disposal for any additional discussions.

## **Question 2: Baseline Result in Figure 2**

Thank you for your question. In Figure 2, the ICL baseline denotes the 10-shot ICL result. Due to the input length limitations of the model, regular ICL cannot be directly applied to larger numbers of examples (e.g., the average input length of AG News in the 10-shot setting is 3872 tokens, while the maximum input length of Llama-2 is 4096).
Therefore, we only use the 10-shot ICL as the baseline, while we provide the averaging aggregation as a stronger baseline. We hope this explanation addresses your question.

## **Question 3: First-Order Gradient Optimizer Result**

Here, we would like to provide a more detailed analysis. Firstly, we present the calculation formulas for all optimizers. In the following formulas, $g_t$ and $v_{t}$ denote the initial gradient and the optimized gradient at the $t$-th iteration, respectively. $\beta$ denotes the momentum coefficient.

**1. Polyak Momentum Optimizer (mom.)**

$v_{t}=\beta v_{t-1}+(1-\beta)g_t$

**2. AdaGrad Optimizer (adag.)**

$s_t=s_{t-1}+g_t \cdot g_t$

$v_{t}=\frac{1}{\sqrt{s_t+\epsilon}} \cdot g_t$

**3. RMSprop Optimizer (rms.)**

$s_t=\beta s_{t-1} + (1-\beta) g_t \cdot g_t$

$v_{t}=\frac{1}{\sqrt{s_t+\epsilon}} \cdot g_t$

**4. Adam Optimizer (adam.)**

$s_t=\beta_1 s_{t-1} + (1-\beta_1) g_t \cdot g_t$

$h_{t}=\beta_2 h_{t-1}+(1-\beta_2) g_t$

$\hat{s}_t=\frac{s_t}{1-\beta_1^t}$

$\hat{h}_t=\frac{h_t}{1-\beta_2^t}$

$v_{t}=\frac{1}{\sqrt{\hat{s}_t+\epsilon}} \cdot \hat{h}_t$

We believe there are two main reasons why first-order gradient optimizers are not well-suited for state vector optimization:
- First-order gradient optimizers use adaptive learning rates, which require a significant amount of historical information. This strategy often shows instability and reduced effectiveness when data is limited.
- First-order gradient optimizers have more complex hyper-parameters than the Momentum Optimizer, making it challenging to find the optimal hyper-parameters. Experimental results indicate that directly using hyper-parameter settings commonly used in gradient descent results in poor performance.

We hope this explanation resolves your concerns. If you have any further questions, please feel free to let us know, and we would be happy to discuss them with you.

***

Thank you very much again for your great questions and suggestions.
Please let us know if you have any further questions, as we are happy to continue the discussion. If you find that our response addresses your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration. --- Rebuttal 2: Title: A follow-up message about the rebuttal for the paper15186 Comment: Dear Reviewer `yzeU`, We hope you are doing well. As the discussion period is coming to an end (Aug 13), we wanted to reach out to see if you have any follow-up questions. If so, we would appreciate the opportunity to respond before the discussion period ends. We believe our above messages should have addressed your concerns, and therefore may warrant an increase in score if you find them helpful as well. Would you please let us know if you have any further questions or concerns? We are happy to continue the discussion. Thank you very much again for your thoughtful review and help in improving the paper. We appreciate your time and consideration. Best regards, Paper15186 Authors
Summary: This paper aims to reveal the working mechanism of the compressed state vector in in-context learning. They first prove that the state vectors are similar to parameters trained via gradient descent. Then, they propose three methods to optimize such parameters: (1) inner optimization, averaging each vector of the separate token; (2) momentum optimization, using a momentum-based optimization algorithm; (3) a divide-and-conquer strategy aggregating several vector groups. They test these methods on some medium-sized LLMs. Experiments show some improvements against other compressed vector baselines. Strengths: 1. The paper is well written and easy to follow. 2. The contribution of this paper is significant. It attempts to provide understanding of the mechanism of ICL via a gradient descent perspective. The proposed state vector can be viewed as trained parameters so that they can be optimized with future methods. 3. The proposed optimization methods are reasonable. Experiments with these methods support their claims. The performance can be improved in most scenarios. Weaknesses: Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Does the state vector without optimization actually function the same as the task vector? If not, I think there should be another baseline called state vector (w/o optimization).
The authors should clarify whether this is a new compressed vector method or just a proof of similarity to gradient descent. 2. Can the authors provide more rationale for optimizing the state vector? I understand the state vector is a compression of previous context, but what would it represent after the optimization? 3. Experiments show that the inner optimization and momentum optimization prefer different tasks. Is there a conclusion on when to use which? Or is there a combination to adaptively cover all the situations? 4. Can the authors elaborate more on the differences between averaging and D&C? Why is D&C worse when the number of examples equals 10 (Figure 2)? In my understanding, they should be the same since the authors use 10-shot ICL (there should be only 1 group). 5. The authors should check the correctness of the experiment results. For example, in Table 5, they report Llama-2 Zero-shot Regular with six zeros but with an average of 0.2. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `FZSw`, Thank you for your review. According to the feedback from other reviewers, we have conducted additional experiments and analysis. We have uploaded a [Rebuttal PDF](https://openreview.net/attachment?id=ulPGXOjfvv&name=pdf) that contains new figures and tables. We provide the analysis in `General Response` (shown in a separate comment on this page). ***We would like to address your concerns and questions in detail below.***

***

## **Question 1: Difference with Task Vector**

Thank you for your feedback. Since the task vector aims to extract the hidden state in the initial $L$ layers for intervention, it is functionally equivalent to our state vector without optimization, which extracts attention activations in the initial $L$ layers. However, we must emphasize that the proposed state vector differs significantly in its integration into the model. In our theoretical framework, the state vector is defined based on attention activations of the separator token, which can be viewed as updated parameters. Therefore, even though it is mathematically equivalent to the task vector, the state vector possesses a stronger theoretical foundation and interpretability.

## **Question 2: Contribution Clarification**

We would like to accurately describe our work and contributions. Previous work [1] proposed the dual view of ICL and gradient descent and provided evidence from the perspective of attention pattern similarity. Our work extends this understanding and applies it to the working mechanism of ICL compressed vectors. Based on our understanding and theoretical framework, we propose the definition of the state vector and introduce its optimization and aggregation methods. Our paper can be considered as primarily proposing a new compressed vector method, inspired by the similarity between ICL and gradient descent. Therefore, our work establishes a bridge between the study of ICL's working mechanism and the application of ICL compressed vectors.
[1] *Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers*

## **Question 3: Rationality of Optimization**

Thank you for your question. From our perspective, we consider the state vector not simply as a compression of demonstration text information, but rather as a storage of the implicit ICL task function information contained within the demonstrations. Our optimization methods have enhanced the accuracy of this task information. The experiments in `Section 6.1 Qualitative Study` can intuitively demonstrate this point. As shown in `Figure 4`, a trend is observable in the movement of these clusters as the example position increases, indicating that as the number of demonstrations increases, the state vector becomes closer to the ideal task representation. Our momentum optimization effectively utilizes this trend and captures the underlying task dynamics within the demonstrations, thereby enhancing the state vector.

## **Question 4: Choice of Optimization Method**

Thank you for your insightful question. Based on the experimental results, we find that momentum optimization performs better on Knowledge tasks; the reason may be that more examples can effectively promote the model's recall of internally stored knowledge, while 10 examples may be insufficient. Therefore, our momentum optimization can enhance the accuracy of the knowledge-based task function stored in the state vector by extending its trend.
Conversely, when an increase in examples does not contribute much to performance improvement, using only inner optimization is recommended. We hope this explanation addresses your question. Please let us know if you have any further questions; we would be happy to discuss them with you. ## **Question 5: Difference of Aggregation Result in 10-shot** Thank you for your question. We would like to elaborate on the differences between averaging aggregation and D&C aggregation in the 10-shot setting. In the 10-shot setting, we have only one demonstration group. For averaging aggregation, we directly extract the group-specific state vector from this single demonstration group, which becomes the final aggregated state vector. For D&C aggregation, the process is slightly different. In the divide stage, we extract the group-specific state vector from the group-specific dummy query. In the conquer stage, we first pair the group-specific dummy query with its label to form a one-shot demonstration. Then, we input a new dummy query with this one-shot demonstration and extract the aggregated state vector from the dummy query. Thus, the D&C aggregated state vector and the averaging aggregated state vector are not the same. Our analysis suggests that the D&C aggregated state vector performs worse when the number of examples is small because the conquer stage relies on a limited number of examples. This limitation prevents the model from effectively compressing the information from the group-specific state vector into the final D&C aggregated state vector. ## **Typo in Table 5** We would like to clarify that the average result in Table 5 represents the average performance across 12 tasks, not just the 6 additional tasks. We appreciate your feedback and will revise our manuscript to ensure this is more clearly communicated. *** Thank you very much again for your great questions and suggestions. 
Please let us know if you have any further questions, as we are happy to continue the discussion. --- Rebuttal 2: Title: A follow-up message about the rebuttal for the paper15186 Comment: Dear Reviewer `FZSw`, We hope you are doing well. As the discussion period is coming to an end (Aug 13), we wanted to reach out to see if you have any follow-up questions. If so, we would appreciate the opportunity to respond before the discussion period ends. We believe our above messages should have addressed your concerns, and therefore may warrant an increase in score if you find them helpful as well. Would you please let us know if you have any further questions or concerns? We are happy to continue the discussion. Thank you very much again for your thoughtful review and help in improving the paper. We appreciate your time and consideration. Best regards, Paper15186 Authors
Rebuttal 1: Rebuttal: Dear Reviewers, We greatly appreciate your insightful reviews and are delighted that you have acknowledged our paper's strengths. We briefly summarize them as follows:
- **Novelty:** "The methods proposed are quite novel.", "The research topic is significant and intriguing.", "The paper presents an interesting concept.", etc.
- **Deep Analysis and Abundant Experiment:** "The paper presents an in-depth analysis and optimization of ICL.", "The paper conducts a large number of ablation experiments.", etc.
- **Writing:** "The paper is well written and easy to follow.", "The presentation of the research question is clear to read and follow.", "The paper is well-organized."
- **Fully Motivated:** "The proposed optimization methods are reasonable.", "the proposed approach of optimizing the state vector using inner/momentum optimization is well-motivated.", etc.
- **Effectiveness:** "The proposed two optimization methods for the state vector seem to be effective.", "The optimization method has shown promising improvement.", etc.

***

We present the additional experiments on a larger model and more complex datasets. **Please check out the [Rebuttal PDF](https://openreview.net/attachment?id=ulPGXOjfvv&name=pdf)**. The following are the brief analysis and findings:

### **Experiment on Llama-2-70B**

We provide the optimization and aggregation results on the Llama-2-70B model. Due to resource constraints, we are unable to apply our method to even larger models. The results of the optimization method are presented in `Table 7`, and the aggregation method results are shown in `Figure 10`. We find that, compared to smaller models, all results improve when applied to Llama-2-70B. Furthermore, both the inner optimization and momentum optimization effectively enhance the state vector, either outperforming or being comparable to regular ICL in zero-shot and few-shot settings.
For the aggregation results, the trends observed in smaller models remain evident in the larger model, with the D&C method still outperforming the averaging aggregation in multiple example settings. These results indicate that our inner and momentum optimization, as well as the D&C aggregation method, can also benefit the state vector on the larger Llama-2-70B model.

### **State Vector for Alignment**

We present the performance of the inner optimized state vector on the alignment task in a zero-shot setting. We evaluate our method using two automatic alignment benchmarks: alpaca-eval (2.0) [1] and just-eval [2]. The results shown in `Table 8` indicate that although our state vector is slightly inferior to the regular ICL baseline, it still demonstrates significant potential as an effective alignment approach. By omitting the demonstration in the input, our method significantly reduces the time cost (e.g., by 5.4$\times$ on Llama-2-7B), but achieves 90% of the performance of regular ICL. Compared to mainstream alignment-tuning methods (i.e., instruction finetuning and reinforcement learning from human feedback), our method achieves an average of 85% of their performance without requiring any training. Notably, with Mistral-7B and Llama2-70B, our state vectors can achieve 80% of the alignment performance of GPT-4-0613. These results demonstrate that state vectors can enable efficient and effective model alignment, and that inner optimization is beneficial for complex alignment tasks. [1] *Length-controlled alpacaeval: A simple way to debias automatic evaluators.* [2] *The unlocking spell on base llms: Rethinking alignment via in-context learning.* *** We sincerely thank the reviewers for their constructive suggestions and questions to enhance our paper. We will address your questions and concerns below and incorporate your valuable suggestions into our revisions. Please reply if you have any further questions, and we will be more than happy to continue the discussion.
Pdf: /pdf/2ed9053a9e9d6cd9f8d91595992646b4dbba83f0.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper considers the problem of using compressed vectors to replace explicit demonstrations, with potential benefits of providing new perspectives to better understand the mechanism of In-Context Learning (ICL), and addressing the issue of overly long demonstrations by summarizing them into a vector. The main challenge is the drop in accuracy. The paper proposes new methods to enhance compressed vectors, using ideas of ensemble averaging, momentum (though details are insufficient), and Divide-and-Conquer aggregation. Many ablation experiments are conducted to prove the effectiveness and answer different research questions. Strengths: 1. The paper conducts a large number of ablation experiments. 2. The methods proposed are quite novel, including the heuristic approaches and Divide-and-Conquer aggregation. 3. In practice, the methods perform very well on certain tasks. 4. The work provides new perspectives to better understand the mechanism of ICL. Weaknesses: 1. The development is still in its early stages, mostly heuristic, lacking any theoretical guarantees. However, I think this is not a critical issue, as the final performance is good in practice for certain tasks, and a rigorous analysis is very hard due to the complexity of transformers. 2. The presentation needs improvement, e.g., Figure 4 should have a color legend for different positions. The specific parameter $L$ used should be clarified in Section 5.1 3. The strict definition of the momentum gradient optimization algorithm used needs to be clearly stated. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is the strict definition of the momentum gradient optimization algorithm used as opt? Is it Polyak Momentum or Nesterov momentum? 2. In the code, what do FixOptimizer, OneStepOptimizer, and FixOneStepOptimizer represent respectively? Which one represents momentum? 3. 
For Divide-and-Conquer aggregation, how does the performance compare to directly averaging the different group-specific state vectors? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: 1. The presentation needs improvement. 2. It is worth considering larger models, e.g., Llama2-70B. But existing experiments on Llama-2-7B, Llama-2-13B and GPT-J-6B are strong enough. 3. It is worth discussing the performance on more complex ICL tasks, such as math and text summarization. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `pJQh`, Thank you for your review. According to the feedback from you and other reviewers, we have conducted additional experiments and analysis. We have uploaded a [Rebuttal PDF](https://openreview.net/attachment?id=ulPGXOjfvv&name=pdf) that contains new figures and tables. We provide the analysis in `General Response` (shown in a separate comment on this page). ***We would like to address your concerns and questions in detail below.*** *** ## **Concern 1: Limitation of Model Size** Thank you for your valuable feedback and for highlighting the potential limitations related to the model sizes. In response to your concern, we have conducted additional experiments using the larger Llama2-70B model to evaluate the effectiveness of the proposed optimization method and the D&C aggregation. Please refer to the "Experiment on Llama-2-70B" in our `General Response` for detailed experiments and analysis. Notably, while the performance of ICL improves with the larger Llama2-70B, our method still effectively enhances the state vector and outperforms regular ICL in a few-shot setting. We appreciate your feedback and hope these additional results address your concerns. ## **Concern 2: Limitation of Datasets** Thank you for your feedback and for highlighting the potential limitations associated with the tasks. To explore the effectiveness of state vectors on more complex tasks, we conducted experiments on the alignment task, which involves aligning LLMs with human preferences and is inherently more complex and diverse. Please refer to the "State Vector for Alignment" in our `General Response` for detailed experiments and analysis. Notably, our state vectors can achieve 85% of the performance of mainstream alignment-tuning methods (i.e., instruction-finetuning and reinforcement learning from human feedback) without training. With Mistral-7B and Llama-2-70B, our state vectors can achieve 80% of the alignment performance of GPT-4-0613. 
We hope these additional experiments provide a thorough understanding of our method's capabilities on complex tasks.

## **Question 1: Definition of Momentum Gradient Optimization**

Thank you for your valuable feedback and for raising an important point regarding the strict definition of the momentum gradient optimization algorithm used in our study. In this paper, we employed the Polyak Momentum Optimization algorithm. To ensure clarity and enhance the reader's understanding, we have provided the update rules for all optimization algorithms used in Table 2, corresponding to the $opt(*)$ function calculation method described in Equation 9 in the paper. To maintain consistency with the notation used in the paper, in the following formulas, $E_i$ denotes the $i$-th ($1 \le i \le N$) state vector (we ignore the hyper-parameter $L$ for simplicity), and $N$ is the number of state vectors. Additionally, let $v_{t}$ represent the update vector at the $t$-th iteration, initialized as $v_0=\boldsymbol{0}$. Here, $\alpha$ denotes the learning rate, $\beta$ denotes the momentum coefficient, and $\epsilon$ is a very small constant for calculation stability. Below are the detailed update rules for each optimizer:

1. Polyak Momentum Optimizer (mom.)

$g_t = E_{t}-E_{t-1}$

$v_{t}=\beta v_{t-1}+(1-\beta)g_t$

2. AdaGrad Optimizer (adag.)

$g_t = E_{t}-E_{t-1}$

$s_t=s_{t-1}+g_t \cdot g_t$

$v_{t}=\frac{1}{\sqrt{s_t+\epsilon}} \cdot g_t$

3. RMSprop Optimizer (rms.)

$g_t = E_{t}-E_{t-1}$

$s_t=\beta s_{t-1} + (1-\beta) g_t \cdot g_t$

$v_{t}=\frac{1}{\sqrt{s_t+\epsilon}} \cdot g_t$

4. Adam Optimizer (adam.)
$g_t = E_{t}-E_{t-1}$

$s_t=\beta_1 s_{t-1} + (1-\beta_1) g_t \cdot g_t$

$h_{t}=\beta_2 h_{t-1}+(1-\beta_2) g_t$

$\hat{s}_t=\frac{s_t}{1-\beta_1^t}$

$\hat{h}_t=\frac{h_t}{1-\beta_2^t}$

$v_{t}=\frac{1}{\sqrt{\hat{s}_t+\epsilon}} \cdot \hat{h}_t$

After $N$ iterations, we use the final update vector $v_{N}$ to optimize the state vector: $opt([E_i]^N_{i=1})=\alpha \cdot v_{N}$. We hope this clarifies the strict definition and update rules for the momentum gradient optimization algorithm used in our study. Thank you once again for your insightful question.

## **Question 2: Comparison between D&C and Averaging Aggregation**

In our aggregation experiment, we compared our D&C aggregation with the average aggregation method. The average aggregation method involves averaging the group-specific state vectors. As shown in Figures 2 and 6, although average aggregation benefits from an increasing number of examples, D&C aggregation outperforms average aggregation when the number of examples is the same. This is primarily due to the fact that D&C aggregation better captures the information in the group-specific state vector, leading to more robust and better performance. Thank you for your question. We hope this comparison adequately addresses your question.

## **Question 3: Optimization in Code**

In the code, *FixOptimizer* denotes the inner optimization. *OneStepOptimizer* denotes the direct momentum optimization on the original state vector, which we did not apply. *FixOneStepOptimizer* denotes the momentum optimization on the inner optimized state vector, and this represents the "momentum optimization" mentioned in the paper.

## **Suggestion of Presentation**

We will revise `Figure 4` to include a color legend for different positions to enhance its clarity. For the specific parameters used in the paper, we provide them in the "implementation details" in `Appendix A`. We appreciate your suggestions and hope these could address your concerns.
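To make the Polyak momentum update rule from Question 1 above concrete, here is a minimal NumPy sketch over a sequence of state vectors. Treating the optimized vector as $E_N + \alpha v_N$ is an assumption of this sketch of how the $\alpha \cdot v_N$ update is applied to the final state vector.

```python
import numpy as np

def polyak_momentum(state_vectors, alpha=0.1, beta=0.9):
    """Polyak momentum over consecutive state-vector differences.

    state_vectors: sequence [E_0, ..., E_N]; the pseudo-gradient at
    step t is g_t = E_t - E_{t-1}, matching the update rules above.
    Returns E_N + alpha * v_N (how the update is applied to the final
    state vector is an assumption of this sketch).
    """
    v = np.zeros_like(state_vectors[0])               # v_0 = 0
    for t in range(1, len(state_vectors)):
        g = state_vectors[t] - state_vectors[t - 1]   # g_t
        v = beta * v + (1.0 - beta) * g               # v_t
    return state_vectors[-1] + alpha * v
```

For example, with `state_vectors = [0, 1, 3]` (as 1-d arrays), the pseudo-gradients are 1 and 2, so with `beta=0.9` the final update is `0.9 * 0.1 + 0.1 * 2 = 0.29`.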
***

Thank you very much again for your great questions and suggestions. Please let us know if you have any further questions, as we are happy to continue the discussion. If you find that our response addresses your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration.

--- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. It answers most of my questions.

> For Divide-and-Conquer aggregation, how does the performance compare to directly averaging the different group-specific state vectors?

My question is not about average aggregation, but about the average of group-specific state vectors. My motivation is that the aggregation of group-specific state vectors in this paper seems non-trivial to me, and I was wondering what the performance would be if a naive average aggregation of group-specific state vectors were used. I understand this is a new algorithm and implementing and evaluating it comes with a cost, and I think this new experiment is not necessary. Existing experiments are strong enough.

> It is worth discussing the performance on more complex ICL tasks, such as math and text summarization

> we conducted experiments on the alignment task, which involves aligning LLMs with human preferences and is inherently more complex and diverse

Thanks for the additional experiment. I think one fundamental question in this field that is still unclear is why substituting the state vector works. A relevant question is what information can be stored in a state vector. Answering these questions would be far beyond the scope of this paper. But since these questions are clear, it would be beneficial to provide more evidence on the state vector method on tasks of different complexity, even if the state vector method turns out to perform badly in some cases. I suspect that on more complex tasks, the information stored in state vectors would not be sufficient and performance improvements would diminish.
This motivates me to ask the question about more complex ICL tasks. But again, this is not necessary, and I think the existing experiments are strong enough. Based on your promise to improve the presentation, I will increase my score.

--- Reply to Comment 1.1.1: Title: Response to Reviewer pJQh Comment:

> For Divide-and-Conquer aggregation, how does the performance compare to directly averaging the different group-specific state vectors?

Thank you for your question. We apologize for the earlier misunderstanding and value the chance to further discuss the naive average aggregation of state vectors that you have mentioned. However, we are slightly confused, as our understanding is that the averaging aggregation state vector proposed in our work indeed uses the average of group-specific state vectors. We believe this is equivalent to the naive average aggregation of group-specific state vectors that you referred to. Since the naive averaging algorithm is similar to our Inner Optimization approach, we conducted an additional experiment to compare the performance of the averaging state vector with that of the inner-optimized state vector. For the naive averaging algorithm, we set up a total of 100 examples, with 10 examples per group. The results for the naive averaging of group-specific state vectors under this setting are provided below.

| | Antonym | English-French | Person-Instrument |
| ------------------- | :--------: | :--------------: | :-----------------: |
| inner (zero-shot) | 61.0±1.0 | 66.5±1.0 | 67.4±2.6 |
| average (zero-shot) | 59.3±1.4 | 67.1±0.8 | 66.7±2.2 |
| inner (few-shot) | 66.2±1.6 | 74.6±0.9 | 70.1±4.3 |
| average (few-shot) | 65.7±1.1 | 75.1±1.5 | 70.5±3.1 |

Our observations indicate that both methods enhance the state vector in a similar manner, leading to comparable improvements in performance and robustness.
However, in terms of efficiency, our Inner Optimization approach holds an advantage as it requires only a single forward pass and fewer examples. We hope our response addresses your question. If we have misunderstood your proposed algorithm, we kindly ask you to provide a more detailed description of the difference between your proposed algorithm and our averaging aggregation, as well as specify the experiment results you are interested in. We would be more than happy to continue this discussion with you. *** > It is worth discussing the performance on more complex ICL tasks, such as math and text summarization > we conducted experiments on the alignment task, which involves aligning LLMs with human preferences and is inherently more complex and diverse We sincerely appreciate your thoughtful feedback and your emphasis on the importance of the state vector method in more complex tasks. While our current work focuses on explaining the working mechanism and applying it to several basic tasks, we fully recognize the significance of exploring its application in more complex scenarios. We plan to investigate the performance and applicability of state vectors in these more challenging tasks in our future work. *** Thank you so much for raising the score and your very supportive comments on our paper! We will revise the paper according to your suggestions and comments. Please let us know if you have any further questions, as we are happy to continue the discussion.
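As a side note on the averaging discussion above, the naive average aggregation of group-specific state vectors amounts to a simple element-wise mean. A minimal sketch (our own toy illustration with hypothetical vectors, not the released code):

```python
def average_aggregate(group_vectors):
    """Element-wise mean over group-specific state vectors, e.g. ten vectors,
    each derived from a group of ten examples in the 100-example setting above."""
    n = len(group_vectors)
    dim = len(group_vectors[0])
    return [sum(v[i] for v in group_vectors) / n for i in range(dim)]
```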
PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation
Accept (poster)
Summary: The paper introduces a Primitive-Driven Waypoint-Aware model for robotic manipulation tasks. Initially, the entire language instruction is broken down into specific sub-steps using a VLM. Subsequently, each sub-step's corresponding feature ("waypoint") is predicted through image prediction to segment the entire task execution process. The article also employs different execution frequencies for different prediction steps to enhance the inference speed. Comparative experiments were conducted on the SeaWave benchmark, and the model's generalization ability in unseen scenarios and its learning capability from human videos were analyzed. Strengths: The figures and tables are clear and easy to understand. The experimental section reproduces several renowned studies and tests them on a new benchmark. The ablation study is elaborate. Weaknesses: 1. The Introduction section contains misleading information and overclaims regarding the motivation. The model proposed in this paper has limited relation to a “world model”. I think that only the first step, parsing the language instruction, utilizes the VLM, which makes it hard to argue that the entire model can be called a "world model". However, almost the entire article emphasizes content related to the world model. Additionally, the paper points out that a significant drawback of previous works is sequential execution, and proposes accelerating inference speed for this reason. But the method of this paper is still sequential execution, and the acceleration is merely achieved by artificially setting different execution frequencies for different reasoning steps, which is also hard to consider a strong contribution of this paper. 2. The method proposed in the article seriously lacks novelty, and the discussion of its relationship with related papers is also insufficient.
I summarize that the article mainly proposes two methods: First, it decomposes language into multiple sub-goals, which the author refers to as waypoints. Similar methods were first proposed in SayCan, and the language parsing process in this paper is not learnable. Second, the article proposes predicting future waypoints as intermediate results. Numerous similar future-prediction methods already exist, and related articles have also studied predicting future images (the approach of this paper) or trajectories of future key points as different forms of sub-goals. Therefore, the technical contribution and innovation of this paper are quite lacking. 3. The experimental results of this paper are not very convincing. - a) The method of this paper has been verified only on the relatively limited SeaWave benchmark. Moreover, for the method proposed in this paper, it is necessary to artificially process the data, such as limiting it to 10 primitive actions. It is also necessary to add additional primitive action and waypoint annotations. This effectively introduces additional annotation information that is not utilized by the baselines used for comparison. - b) It is strange that this paper chooses Octo, RT-1, and GR-1 as baselines, as the training cost of these papers is very high, and it seems that this paper needs to re-train these models on new data, which requires a very long time for both training and testing. From the perspective of the method of this paper, it should be compared with sub-goal prediction methods, such as UniPi and AVDC. - c) The results in Table 2 show that PIVOT-R's performance drops more in unseen scenarios compared to seen scenarios; how can this be explained? - d) The experimental results in Table 4 have almost nothing to do with the motivation of the paper. The fact that pre-training with human data can improve the performance and generalization of downstream tasks has already been proven in a large number of previous works.
Technical Quality: 3 Clarity: 2 Questions for Authors: My main concerns lie in the need for a clearer articulation of the motivation, a more explicit delineation of the contributions of this paper, and improvements to the experimental section. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations and societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to see your recognition of the PIVOT-R experiment. But there are some serious factual errors in the review: (1) The VLM is **not equal to** the world model; (2) AHE is **multi-threaded** rather than sequential; (3) The task parsing of SayCan and that of PIVOT-R are NOT at the same level, and the learnability of language parsing is unnecessary; (4) Scene prediction is strongly related to the waypoint-aware world model and should not be viewed in isolation; (5) The experimental design and the motivation of PIVOT-R are strongly related and complete. We provide detailed responses to all the issues you have pointed out. We hope our response and rebuttal can address your concerns, and we sincerely hope you can reconsider our work. **Q1.1. About the world model.** The VLM is not equal to the world model. The pioneering work on world models [1] defines one as a model that can perceive the environment and predict changes in environmental states. In our PIVOT-R's waypoint-aware world model (WAWM), we use a VLM to **perceive the scene and parse user instructions** to obtain primitive action descriptions, and we use the scene prediction module to **predict waypoint scenes** to capture environmental dynamics. This fits the definition of a world model. [1] Ha, David, and Jürgen Schmidhuber. "World models." *arXiv preprint arXiv:1803.10122* (2018). **Q1.2. AHE's technical contribution.** PIVOT-R's AHE is implemented in a multi-threaded manner, with each module's output stored in a buffer to prevent blocking between modules. As shown in Table 3 in our paper, AHE improves PIVOT-R's execution efficiency by 28 times (27ms vs. 756ms). This approach is valuable for models with multiple modules of varying efficiencies, providing a significant enhancement and a useful reference for other work. **Q2.
Waypoints and technical contributions.** **(a) Waypoints.** SayCan uses an LLM to decompose complex human instructions into higher-level sub-tasks, such as "find apples/go to table". This is significantly different from PIVOT-R, which decomposes tasks into low-level primitive actions for robot manipulation. **(b) Technical contribution.** The technical contribution of waypoints should not be reviewed in isolation. PIVOT-R is the first attempt to introduce waypoint predictions for world model learning. It parses complex user instructions into primitive actions through a VLM, and combines primitive-driven scene and action prediction modules to model physical world knowledge. This enables PIVOT-R to focus on task-relevant key points instead of being drowned in trivial scene predictions, leading to significant improvement in both manipulation performance and execution efficiency. **(c) Language parsing is not learnable.** As shown in the results of Lines 4 and 5 in Table 3 in our paper, current open-source VLMs are fully capable of handling the related primitive parsing tasks. Therefore, introducing the VLM in an offline manner does not have a large impact on overall performance. **Q3. Experiment.** **(a) Data annotation.** First, 10 primitive actions are sufficient for manipulation tasks. For example, [1,3] use only pick and place actions, and ForceSight [2] defines 5 actions. Secondly, adding more annotation information to existing benchmarks is a common practice to improve model performance. Similarly, MOKA [4] and LEO [5] use LLMs to annotate data for better performance. This annotation method itself is also one of the contributions of the work. [1] SHRIDHAR M, MANUELLI L, FOX D. CLIPort: What and Where Pathways for Robotic Manipulation[J]. [2] COLLINS Jeremy A, HOUFF C, TAN Y, et al. ForceSight: Text-Guided Mobile Manipulation with Visual-Force Goals[J]. 2023. [3] ZENG A, PETE F, TOMPSON J, et al.
Transporter Networks: Rearranging the Visual World for Robotic Manipulation. arXiv preprint, 2020. [4] Liu, Fangchen, et al. "Moka: Open-vocabulary robotic manipulation through mark-based visual prompting." *arXiv preprint arXiv:2403.03174* (2024). [5] Huang, Jiangyong, et al. "An embodied generalist agent in 3d world." *arXiv preprint arXiv:2311.12871* (2023). **(b) Reasons for choosing methods such as Octo and RT-1.** This is mainly because they are state-of-the-art robot manipulation models. **(c) Comparison with other sub-goal prediction methods.** We compared against SUSIE [1], which uses InstructPix2Pix [2] for sub-goal prediction and a low-level policy for action prediction. As shown in Table 2 of the rebuttal Appendix PDF, PIVOT-R outperformed SUSIE by 26.93 percentage points (74.19% vs. 47.26%) on SeaWave tasks, demonstrating the effectiveness of WAWM in PIVOT-R for sub-goal modeling. [1] Black, Kevin, et al. "Zero-shot robotic manipulation with pretrained image-editing diffusion models." arXiv preprint arXiv:2310.10639 (2023). [2] Brooks, Tim, Aleksander Holynski, and Alexei A. Efros. "Instructpix2pix: Learning to follow image editing instructions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. **(d) Why does PIVOT-R show obvious performance degradation on unseen scenes?** This is normal for robot manipulation models, as they often struggle to generalize to unseen scenes not present in the training data, leading to perception or recognition errors. As shown in Table 2 in our paper, nearly all methods experience performance degradation in new scenarios. However, our model still achieves state-of-the-art performance. **(e) Training with Human Data.** This experiment is mainly to prove that PIVOT-R can also benefit from human data. We hope to conduct large-scale data training to improve model performance in the future. Based on your suggestion, we will move this experiment to the appendix in our revision.
--- Rebuttal 2: Comment: Dear Reviewer, Thank you once again for your valuable feedback on our work. In our rebuttal, we have provided detailed responses to the weaknesses and questions you raised. We have clarified the concept of world model, and the novelty of our waypoints and AHE. Additionally, we have elaborated on the experimental details, explaining data annotation and baseline selection, and added an additional sub-goal prediction baseline. We have also conducted additional experiments, including SIMPLER benchmark and real-world experiment to further illustrate the performance enhancements. As the author-reviewer discussion period is drawing to a close, we would like to check if there are any remaining questions or concerns that we can address. Please feel free to reach out if you need further clarification on any aspect of our work. We are committed to ensuring that our responses fully address your concerns and contribute to the improvement of our paper. We are confident that we have addressed all of your concerns and hope you will reconsider our work. Title: Waiting for further discussion --- Rebuttal Comment 2.1: Comment: The author has addressed most of my doubts, and I have also seen that the author has provided additional experiments with other benchmarks as well as in real-world scenarios. I greatly appreciate the author's efforts. However, I still have some questions: 1. I understand that VLM is not equal to the world model, and I am also aware of the definition of the world model. But I still cannot fully comprehend the significance of the paper's repeated emphasis on the world model. As other reviewers have also pointed out, the method of this paper does not make a very clear and distinct differentiation from many other related works. 2. The author also mentioned that the method of this paper has incorporated more annotation information. Will the baseline method also introduce this information (considering the issue of fairness)? 
In addition, is the annotation process for this data quite laborious? For example, can it be efficiently generated through programs or LLMs, or does it require human intervention? 3. In my third point of weakness (c), what I mean is the performance degradation of each method after transferring to unseen conditions compared to its own seen conditions. It can be observed that PIVOT-R drops by about 10%, 8%, and 11% under the three unseen conditions, respectively, while other baselines, such as Surfer, drop by 1.5%, 2%, and 7%, respectively. Does this mean that PIVOT-R is more affected by environmental changes? --- Rebuttal 3: Title: Further clarification and discussion. Comment: Thank you very much for your timely and constructive comments. In response to your questions, we make the following clarifications: **Q1:** We apologize for any confusion caused by our previous responses. It appears the questions arose from our manuscript not clearly distinguishing our approach from existing methods. Before addressing our use of the term 'world model,' we'd like to clarify the key differences that set our method apart. Previous work (e.g., RT-2, RT-H, RT-X and RoboFlamingo) focused on using Visual Language Models (VLMs) to facilitate language-guided robot manipulation tasks, improving the model's capabilities in scene perception, task planning, and logical reasoning. However, because these approaches lack the ability to model critical waypoints and to execute asynchronously, their learning ability and efficiency are limited. Our PIVOT-R system is designed for complex language-instructed robot manipulation through a hierarchical approach. It begins by using a VLM to convert intricate commands and visual inputs into object-oriented skills (primitive actions). These actions guide the waypoint predictor (i.e., the scene prediction module), which forecasts future waypoints for upcoming observations.
The action prediction component then determines the necessary actions to reach these waypoints. By incorporating waypoints, PIVOT-R disentangles the dependency between language and actions, and better leverages cross-trajectory waypoint transition knowledge to improve action prediction. Additionally, our model integrates an asynchronous hierarchical executor, making it suitable for real-time robotic control. PIVOT-R offers a comprehensive framework that significantly enhances both effectiveness and efficiency. Now, we would like to explain the choice of the term "world model", specifically the “waypoint-aware world model”. As we've outlined above, the essence of our PIVOT-R system lies in its ability to predict waypoints, guided by the context of primitive actions. We initially believed that labeling it as a “world model” would provide an immediate and intuitive understanding for our audience, clearly illustrating the model's predictive capabilities based on actions and observations. However, we acknowledge the reviewer's feedback that using the term “world model” might inadvertently understate the complexity and depth of our contributions. To better highlight the uniqueness of our approach, we will reduce the use of the term “world model” and emphasize PIVOT-R’s contribution to waypoint-aware modeling. We will revise our manuscript to reflect this change. **Q2: Fairness and cost of annotation.** There may be some misunderstandings here. Waypoint information in PIVOT-R is automatically generated and does not require any manual intervention. Specifically, we use the open-source LLaVA 1.5 as the VLM for waypoint judgment, and use scripts to automatically generate waypoints in the robot's manipulation trajectory. **(1) Fairness.** We believe that fairness here mainly involves two aspects: experimental setting and dataset input.
First, in terms of experimental settings, PIVOT-R and the other methods maintain the same experimental settings, such as optimizer and data augmentation. Secondly, in terms of dataset input, they all receive the same input, and waypoint information is automatically generated by the VLM in PIVOT-R after receiving that input. Since the other baseline methods ignore the modeling of waypoint information, the resulting waypoint information is useless to them. Therefore, we think the comparison is fair. **(2) Cost.** For the waypoint generation process, we used the open-source LLaVA 1.5 as the VLM on 8 RTX 4090s to complete waypoint generation for 13K trajectories in 6 hours, which is affordable for practical implementation. Except for the manual definition of prompts (in Appendix F), no additional manual intervention is required. The prompts are simple and reusable. Therefore, the cost is low. **Q3:** Thank you for your feedback, but we respectfully disagree with your method of evaluating the model's generalization ability. The key to evaluating generalization lies in how well a model performs under unseen conditions compared to others. PIVOT-R outperforms Surfer by 12.5%, 15.8%, and 15% in the three different unseen scenarios. Evaluating models based solely on the percentage of performance drop can be misleading, especially when the initial performance levels are different. For example, if two students take a math exam, with Student A scoring 95 and Student B scoring 60, and then on a harder test, Student A drops to 85 (a 10.5% drop) while Student B drops to 59 (a 1.7% drop), it is clear that the absolute score is a more meaningful metric than the percentage drop. --- Rebuttal Comment 3.1: Comment: I thank the author for the detailed response to my questions and for pointing out some of my misunderstandings.
After considering the opinions of other reviewers, I believe the author has addressed the reviewers' questions very well during the response phase, and I hope that the final version can integrate the new experiments and discussions into the paper. Therefore, I will raise my score. --- Reply to Comment 3.1.1: Comment: Thank you again for your constructive comments and help in improving our paper.
Summary: In this paper, the authors present PIVOT-R, a primitive-driven waypoint-aware world model for robotic manipulation. PIVOT-R comprises two key components: a waypoint-aware world model (WAWM) that parses primitive actions and predicts primitive-driven waypoints, and an Action Prediction Module that decodes low-level actions. Additionally, they introduce an Asynchronous Hierarchical Executor (AHE) for PIVOT-R, which enhances execution efficiency by applying different execution frequencies to different modules. Experimental results demonstrate that PIVOT-R achieves state-of-the-art performance on the SeaWave benchmark. Strengths: * The design of setting different execution frequencies for different modules (AHE) makes sense and enhances execution efficiency. * The authors conduct extensive experiments, including comparisons with several baselines and ablated versions, and provide detailed discussion and analysis. * The paper is well-organized and easy to understand. Weaknesses: - The proposed framework has only been evaluated in simulated environments. The authors are encouraged to conduct real-world experiments to demonstrate the method's sim-to-real transfer capability. - The method relies solely on 2D RGB images without any depth information, which may hinder the accurate prediction of 3D robot actions, especially in unseen perspectives and scenarios. Additionally, transferring simulated RGB images to real-world RGB images without depth information is a non-trivial challenge. Technical Quality: 3 Clarity: 4 Questions for Authors: * I understand that different modules are assigned different execution frequencies, but what is the definition of the execution frequency? For example, when the authors set v1 = 3, v2 = 10, v3 = 30, does it mean that the VLM model is used 3 times and the action prediction module is used 30 times for each data sample? In addition, I am interested in knowing the average execution time for each module. 
* The authors claim that world modeling of waypoints can prevent critical robot dynamics from being submerged in trivial robot manipulations. It would be beneficial if the authors could provide an ablation study or relevant examples illustrating this phenomenon. * Given that the method operates in a closed-loop system, I wonder if there are cases where the robot can retry and successfully perform an action after an initial incorrect attempt? Or does an incorrect action always lead to task failure? * Typo: In line 181, there is a repeated 'waypoint'. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed the limitations, but they are encouraged to provide and analyze specific failure cases of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your comprehensive and encouraging review! We respond to all the issues you pointed out in detail below. We hope our response and rebuttal revision will address your concerns. **Q1. Real-world experiment.** We have added additional real-world experiments in Table 3 of the rebuttal PDF, where we set up three tasks: (i) "Pick up": pick up the correct object from the table. (ii) "Put on": pick up the object and place it on the correct color block. (iii) "Push to": push the object to the correct color block. For these three tasks, we collected 400, 200, and 200 sets of data respectively, for a total of 800 demonstrations. Finally, we performed 24 test runs for each task to calculate the average success rate. As shown in the results, compared with the best baseline Surfer, PIVOT-R achieved a **6% performance improvement**. With the help of the waypoint-aware world model, PIVOT-R has demonstrated excellent real-robot capabilities. We also added demonstrations of the real-world evaluation in Figure 2 of the rebuttal PDF. **Q2. Action prediction of 3D robots.** The most advanced methods such as GR-1, RT-1, and Surfer are based on RGB, and for fairness and ease of comparison, we have followed their settings. Due to time constraints, we plan to introduce 3D information in our upcoming work. **Q3. Definition of execution frequency and execution time.** PIVOT-R's AHE executes the action prediction module once at each time step, and the other modules are updated proportionally. That is, when the action prediction module is executed 30 times, the VLM and scene prediction modules are executed 3 and 10 times respectively. In addition, the per-inference times of the primitive action parsing, scene prediction, and action prediction modules are about 177 ms, 29 ms, and 18 ms, respectively. The execution speed mainly depends on the action prediction module. **Q4.
Too frequent waypoints cause the robot dynamics to be overwhelmed.** The corresponding experimental results are shown in row 3 of Table 3 (*i.e.*, PIVOT-R w/ next frame) in our paper. As shown in Table 3, compared with PIVOT-R (using key frames as waypoints), the average performance of PIVOT-R w/ next frame (using the next frame of the robot's manipulation trajectory as waypoints) dropped by 29.7 percentage points (74.19% vs. 44.45%). This shows that the waypoint-aware world model can effectively improve PIVOT-R's modeling of key robot dynamics. **Q5. The result of a robot encountering an error during initial execution.** In some cases, retries may be successful. As shown in Figure 1 (left) in the rebuttal PDF, when the position of the gripper deviated and the object failed to be grasped, the second attempt to grasp was successful. However, in the case of Figure 1 (right), if the object is knocked down and rolls a certain distance, it will be difficult to successfully grasp it again. **Q6. Expression error.** Thanks for your reminder; we will fix this typo in our revised version.
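To illustrate the frequency scheme described in Q3, the following sketch simulates which modules fire at each low-level control step. This is our own simplification: the actual AHE is multi-threaded with per-module buffers, and the function and module names here are hypothetical.

```python
def ahe_schedule(total_steps=30, v1=3, v2=10, v3=30):
    """Simulate AHE-style execution ratios: per v3 low-level control steps,
    the VLM parser runs v1 times, the scene (waypoint) predictor v2 times,
    and the action predictor every step."""
    counts = {"vlm": 0, "scene": 0, "action": 0}
    for t in range(total_steps):
        counts["action"] += 1            # action prediction: every step
        if t % (v3 // v2) == 0:          # scene prediction: v2 times per v3 steps
            counts["scene"] += 1
        if t % (v3 // v1) == 0:          # VLM parsing: v1 times per v3 steps
            counts["vlm"] += 1
    return counts
```

Over 30 steps this yields 3 VLM calls, 10 scene predictions, and 30 action predictions, matching the v1 = 3, v2 = 10, v3 = 30 setting discussed in the rebuttal.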
Summary: This paper introduces PIVOT-R, an approach for language-guided robotic manipulation. PIVOT-R consists of a Waypoint-aware World Model (WAWM) and a lightweight action prediction module, along with an Asynchronous Hierarchical Executor (AHE) to improve efficiency. The model achieves state-of-the-art performance on the SeaWave benchmark, demonstrating improvements in performance and efficiency compared to baselines. Strengths: **Performance:** The proposed PIVOT-R model demonstrates strong performance improvements over chosen baselines, achieving state-of-the-art results on the SeaWave benchmark. **Thorough Evaluation on Chosen Benchmark:** The authors provide comprehensive results on the SeaWave benchmark against strong baselines, and perform thorough ablations for generalization and interpretability. Weaknesses: **Lack of Clarity:** The abstract and introduction do not clearly articulate the specific problem being addressed. While they mention the need for language-guided robotic manipulation and the limitations of previous approaches, the exact nature of the problem, such as motivating the need for waypoint prediction in the first place, is not succinctly defined. Similarly, I found some explanations of the experiments hard to follow in Sections 4.5 and 4.6. This could be improved by decreasing the use of lingo specific to the project and spending more time motivating each experiment before diving into details. **Novelty:** The idea of breaking down tasks into action primitives and using waypoints is not entirely novel. 
Similar concepts have been explored in prior work, such as: CLIPort: https://arxiv.org/abs/2109.12098 PerAct: https://arxiv.org/abs/2209.05451 RAPS: https://arxiv.org/abs/2110.15360 Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning: https://arxiv.org/abs/1803.09956 AWE: https://arxiv.org/abs/2307.14326 ForceSight: https://arxiv.org/abs/2309.12312 Scaling up and Distilling Down: https://arxiv.org/abs/2307.14535 Text2Motion: https://arxiv.org/abs/2303.12153 The authors do not mention most of these works, and they do not go into sufficient detail in distinguishing themselves from prior work. The paper also seems to lack reference to neurosymbolic/TAMP/PDDL approaches, which are concepts relevant to the idea of action primitives. **Asynchronous Hierarchical Executor:** The AHE, while improving efficiency, appears to be a simple scheduler that can be implemented with very simple logic. This does not constitute a significant technical contribution. Technical Quality: 3 Clarity: 2 Questions for Authors: **Evaluation on Other Benchmarks:** Have you considered evaluating PIVOT-R on other well-known benchmarks to better contextualize the performance improvements? I may consider changing my rating if results on a more well-known benchmark are presented. **AHE Mechanism:** Could you elaborate more on the algorithm behind the AHE module? **Novelty:** Can you make a brief statement on the novelty of your work, particularly with respect to the prior work mentioned in the weaknesses section? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper discusses the limitations of the work, including the potential inconsistency between high-level instructions and underlying actions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your detailed and constructive comments. We are delighted to see your recognition of PIVOT-R's state-of-the-art performance on the SeaWave benchmark. We respond to all the issues you pointed out in detail below. We hope our response and rebuttal revisions will address your concerns and lead to reconsideration of our work. **Lack of Clarity:**  **(a) Waypoint prediction motivation.** In fact, we have a detailed and concise statement about the motivation of waypoint prediction in lines 3-9 of the abstract and lines 41-42 of the introduction. The motivation for waypoint prediction is mainly to prevent the model from being drowned in trivial scene and action predictions, which is crucial for world modeling and manipulation skill learning. **(b) Experimental motivation in Sections 4.5 and 4.6.** Section 4.5 discusses the impact of waypoint selection, VLM, AHE, scene prediction supervision, and action prediction module design on PIVOT-R's performance. Section 4.6 analyzes why PIVOT-R succeeds, exploring its generalization to new tasks and potential for further improvement by incorporating other datasets. These experiments are crucial for understanding each module's design effect and PIVOT-R's advantages. Thanks for your valuable suggestions, we will revise the relevant expressions and reduce specific terminology for clarity. **Novelty. Q1. Primitive actions and waypoints.** Thank you very much for providing so many valuable related works. We have noticed previous work on waypoints and primitive actions, noting that they often used a limited number of actions or lacked world models for support. For instance, CLIPort [1], Transporter [2], GMRT [3], and VPG [4] are restricted to simple actions like pick/place/push, limiting their use in complex tasks. Some language-guided models [5,6,7] define a few primitive actions (≤5) and add prompts to aid decision-making. 
In contrast, PIVOT-R supports 10 primitive actions, including unique actions like "rotate/open/close," making it effective in complex tasks. Crucially, PIVOT-R is the first primitive-driven, waypoint-aware world model, using primitives to break down tasks and combining scene and action prediction modules to model physical world knowledge. This enables PIVOT-R to handle complex tasks based on user instructions. We will also add references to the neurosymbolic and PDDL approaches in our revision. Thank you again for your suggestion. [1] SHRIDHAR M, MANUELLI L, FOX D. CLIPort: What and Where Pathways for Robotic Manipulation[J]. [2] ZENG A, PETE F, TOMPSON J, et al. Transporter Networks: Rearranging the Visual World for Robotic Manipulation[J]. arXiv: Robotics, 2020. [3] STENGEL-ESKIN E, HUNDT A, HE Z, et al. Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions[J]. [4] ZENG A, SONG S, WELKER S, et al. Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning[C/OL]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid. 2018. [5] COLLINS J A, HOUFF C, TAN Y, et al. ForceSight: Text-Guided Mobile Manipulation with Visual-Force Goals[J]. 2023. [6] HA H, FLORENCE P, SONG S. Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition[J]. 2023. [7] LIN K, AGIA C, MIGIMATSU T, et al. Text2Motion: From natural language instructions to feasible plans[J]. Autonomous Robots, 2023, 47(8): 1345-1365. **Novelty. Q2. AHE's technical contribution.** It should be emphasized that the technical contribution of AHE is not isolated. It should be considered together with the waypoint-aware world model (WAWM). RT-H uses VLM for robot actions and language association, requiring VLM at every manipulation step, which reduces efficiency.
In contrast, PIVOT-R's VLM-based primitive-driven WAWM for scene and action prediction, combined with AHE for asynchronous execution, improves efficiency by 28 times (Table 3 in our paper). Though simple, AHE's integration with WAWM is highly effective. We will emphasize this in the revised version. Thank you for the suggestion. **Questions. Q1. Performance of PIVOT-R on other benchmarks.** We evaluated PIVOT-R on the latest SIMPLER [1] benchmark, a scalable, repeatable and reliable proxy for real-world evaluation. We use this to verify the scalability of PIVOT-R in the real world. As shown in Table 1 of the rebuttal PDF, PIVOT-R outperformed the best baseline by nearly 10%. Additionally, we conducted real-world experiments, where we set up three tasks: (i) "Pick up": pick up the correct object from the table. (ii) "Put on": Pick up the object and place it on the correct color block. (iii) "Push to": Push the object to the correct color block. We collected 400, 200, and 200 sets of demonstrations respectively. We tested each task 24 times to calculate the average success rate. As shown in Table 3 of the rebuttal PDF, PIVOT-R achieved a 6% improvement over the best baseline, Surfer. The waypoint-aware world model enabled PIVOT-R to demonstrate excellent real-world capabilities. Real-world evaluation demonstrations are included in Figure 2 of the rebuttal PDF. [1] Li, Xuanlin, et al. "Evaluating Real-World Robot Manipulation Policies in Simulation." *arXiv preprint arXiv:2405.05941* (2024). **Questions. Q2. AHE mechanism.** Specifically, we use multithreading to process each module separately. Each thread runs at its own frequency, extracts the latest data from the corresponding buffer, and places the output results in the buffer. For example, the VLM gets data from the camera buffer and saves the output in the buffer after each update. 
Then, the scene and action prediction modules update at their own frequencies and read the latest data from the buffer of the previous module. No module is blocked by the others. --- Rebuttal 2: Comment: Dear Reviewer, Thank you once again for your valuable feedback on our work. In our rebuttal, we have provided detailed responses to the weaknesses and questions you raised. We have clarified the novelty and distinctiveness of our waypoints and AHE. Additionally, we have elaborated on the experimental motivation. We have also conducted additional experiments, including the SIMPLER benchmark and real-world experiments, to better contextualize the performance improvements. As the author-reviewer discussion period is drawing to a close, we would like to check if there are any remaining questions or concerns that we can address. Please feel free to reach out if you need further clarification on any aspect of our work. We are committed to ensuring that our responses fully address your concerns and contribute to the improvement of our paper. We are confident that we have addressed all of your concerns and hope you will reconsider our work. Title: Waiting for further discussion --- Rebuttal Comment 2.1: Comment: Thank you for your detailed response. I appreciate the effort to clarify the motivation for waypoint prediction and to address the technical contributions of your work, as well as the additional evaluations on the SIMPLER benchmark and real-world tasks. **Lack of Clarity:** Regarding the motivation for waypoint prediction, I appreciate your point that it is meant to prevent the model from being overwhelmed by trivial scene and action predictions. However, I still find the argument somewhat vague. A clearer statement could emphasize that waypoint prediction enhances performance by creating a more meaningful mapping between instructions and actions, simplifying the task by focusing on key decision points rather than every low-level action.
This would be analogous to how language models benefit from predicting at the token level rather than the character level, avoiding redundancy and focusing on more informative predictions. **Experimental Motivation:** Thank you for clarifying the experimental motivations. Most of this seems clear after rereading the paper. However, I think 4.6.1 and Figure 4 are still unclear. In particular, while $F_{O_t}$ and $F_{M'_t}$ are defined, I don't think the definition of $F_{M_t}$ is mentioned in the paper. I think the discussion of this study should be grounded in less opaque language, such as "The observation feature approaches the waypoint feature as the task progresses". This would help to motivate the experiment. **Novelty:** The authors have provided useful references to prior work and clarified how PIVOT-R differentiates itself, particularly with the combination of waypoints and a world model. However, I still believe that the paper should include a more comprehensive discussion of related work, especially those that incorporate waypoints and action primitives. While the combination of a waypoint-aware world model and a greater number of primitives may be novel, the paper would benefit from a more thorough comparison with prior approaches to better highlight its unique contributions. I acknowledge your point that the Asynchronous Hierarchical Executor should be considered in conjunction with the Waypoint-aware World Model rather than as an isolated contribution. However, I maintain that the AHE, in its current form, seems to be more of an implementation detail than a key contribution. While it improves efficiency, its design as a simple scheduler using multithreading is not, in my opinion, sufficiently novel to warrant being highlighted as a major innovation. I suggest downplaying the emphasis on AHE as a key contribution in the paper.
**New Results:** The additional evaluations on the SIMPLER benchmark and real-world tasks are appreciated and add value to the paper. These results give the reader more faith in the utility of your method. Based on your rebuttal and the additional results provided, I am inclined to raise my score from a 4 to a 5, contingent on the following revisions being made: 1. A section should be added to the related work that thoroughly discusses prior approaches involving waypoints and primitive actions, to better contextualize your contributions. 2. The emphasis on the Asynchronous Hierarchical Executor as a key contribution should be down-weighted, as it appears to be more of an implementation detail than a novel technical innovation. 3. Section 4.6.1 and Figure 4 should be made clearer. If these adjustments are made, I believe the paper would be stronger and warrant a higher score. --- Rebuttal 3: Comment: **Q1: Clarity.** Thank you for your valuable suggestions, which help highlight the strengths of our method. During the rebuttal process, we have revised our description to be clearer. As noted by the reviewer, by introducing waypoints as a data structural chunking mechanism, similar to tokenization in NLP, we segment dense and irregular robot trajectories into meaningful sections, reducing the prediction burden. This hierarchical approach decouples language-action interdependencies and leverages cross-trajectory waypoint transition knowledge, improving action prediction accuracy. We will revise our manuscript to elucidate this concept more clearly. Furthermore, we believe this analogy could inspire future exploration of advanced tokenization techniques like Byte-Pair Encoding (BPE) to enhance language-instructed robot control systems. We will incorporate this discussion into our revised manuscript to provide a more comprehensive understanding.
Once again, we are grateful for your insightful suggestions, which have greatly contributed to the depth and clarity of our work. **Q2: Experimental description.** Thank you for your feedback. $F\_{O\_t}$, $F\_{M'\_t}$, and $F\_{M\_t}$ represent the features of $O\_t$, $M'\_t$, and $M\_t$, respectively. We will add definitions of the relevant terms. We will also improve the writing in Section 4.6.1 with more concise and formulaic descriptions. For example, "The observation feature approaches the waypoint feature as the task progresses" will be revised into "The $L\_2$ distance between $F\_{O\_t}$ and $F\_{M\_t}$ gradually decreases as the task progresses". We will revise our manuscript to reflect these changes. **Q3: Novelty.** Thank you for your recognition and for providing so many valuable related works in previous discussions. We have conducted a thorough discussion and comparison of the related works in the latest version. In addition to the related work mentioned in previous responses, we also conducted a more detailed investigation. For example, PerAct [1] and RVT [2] use robot states as waypoints to skip trivial action predictions. SUSIE [3] and UniPi [4] predict sub-goals through video predictors, but there is an inconsistency between the predicted video and actions. In contrast, PIVOT-R strategically selects waypoints by primitive actions to model physical world dynamics, thereby extracting the correlation between actions and scenes to enhance the accuracy of action prediction. [1] Shridhar, Mohit, Lucas Manuelli, and Dieter Fox. "Perceiver-actor: A multi-task transformer for robotic manipulation." Conference on Robot Learning. PMLR, 2023. [2] Goyal, Ankit, et al. "Rvt: Robotic view transformer for 3d object manipulation." Conference on Robot Learning. PMLR, 2023. [3] Black, Kevin, et al. "Zero-shot robotic manipulation with pretrained image-editing diffusion models." arXiv preprint arXiv:2310.10639 (2023). [4] Du, Yilun, et al.
"Learning universal policies via text-guided video generation." Advances in Neural Information Processing Systems 36 (2024). **Q4: Emphasis on Asynchronous Hierarchical Executor.** Thank you for your suggestion. We will reduce the emphasis on AHE and add more implementation details in the experiment setting. We will improve this in the revised version. --- Rebuttal Comment 3.1: Comment: Dear Authors, Thank you for your detailed response. My main concerns have now been addressed, and I will maintain my increased rating of 5.
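The buffer-based AHE scheduling described in this thread (per-module threads running at their own frequencies, each reading the latest value from the previous module's buffer) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the module names, rates, and placeholder `plan`/`act` functions are all hypothetical.

```python
import threading
import time

class LatestBuffer:
    """Single-slot buffer that always holds only the most recent value."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def put(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value

def run_module(step, in_buf, out_buf, hz, stop):
    """Run one module at its own frequency: read the latest input,
    write the latest output, and never block on other modules."""
    while not stop.is_set():
        x = in_buf.get()
        if x is not None:
            out_buf.put(step(x))
        time.sleep(1.0 / hz)

# Hypothetical three-stage pipeline: camera -> slow planner ("VLM")
# -> fast action module, each thread at its own rate.
camera, plan, action = LatestBuffer(), LatestBuffer(), LatestBuffer()
stop = threading.Event()
threads = [
    threading.Thread(target=run_module,
                     args=(lambda o: f"plan({o})", camera, plan, 5, stop)),   # slow module
    threading.Thread(target=run_module,
                     args=(lambda p: f"act({p})", plan, action, 50, stop)),   # fast module
]
for t in threads:
    t.start()
camera.put("frame0")
time.sleep(1.0)  # let both threads cycle a few times
stop.set()
for t in threads:
    t.join()
print(action.get())  # "act(plan(frame0))"
```

Because each stage only ever reads the latest buffered value, the fast action loop keeps running at 50 Hz even while the slow planner updates at 5 Hz, which is the efficiency argument made for AHE above.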
Summary: The paper proposes PIVOT-R, a waypoint-based world model for robot manipulation. Concretely, given a language instruction, PIVOT-R converts it to a textual intermediate goal using a VLM and then feeds that as input into a scene prediction model that generates the waypoint. This waypoint is then fed into the action prediction model that predicts the robot action. The proposed method is compared with several baselines on the SeaWave benchmark and shows significant improvement in performance compared to numerous baseline methods. Edit: The authors have addressed most of my comments. Strengths: - The paper outperforms the baselines by a margin on manipulation tasks of increasing complexity, including long-range tasks - The paper is well written and easy to follow - The paper makes an important technical contribution on effectively combining large VLMs with scene prediction models. - Modulo some additional experiments (see below), the paper makes a strong empirical case for the proposed method (PIVOT-R) compared to baseline methods. Weaknesses: - The method is complex, and uses heuristics for defining intermediate waypoints, which does not look scalable at the outset. - For baselines, it looks like there is video generation pretraining (GR-1) and Scene decoding (Surfer), however an important baseline is missing: SUSIE [A] - Labeling of waypoints is done in a heuristic way (zero hand speed, gripper state change, final frame, etc). It would be good to understand which is the most critical of these to get the most performance from the model. - It would help to quantify the additional labeling cost, and how scalable it is for various tasks. - If I understand correctly, the baseline method Surfer, which is the closest to the proposed method, seems to have 2 differences – a) splitting the task into waypoints, b) feeding the predicted scene as input. Ablation of these two changes to see which one contributes how much would help in understanding the paper.
[A] Black, Kevin, et al. "Zero-shot robotic manipulation with pretrained image-editing diffusion models." arXiv preprint arXiv:2310.10639 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: - Could the authors describe the ablation "PIVOT-R w/ video decoder" and provide an intuition on why might it hurt performance. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors should address the limitations in more detail. Specifically, discussing the additional labeling requirement of waypoints and the generalizability and efficacy of the heuristics to come up with the waypoints would help. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your comprehensive and encouraging review! We respond to all the issues you pointed out in detail below. We hope our response and rebuttal revision will address your concerns. **Q1. The complexity and scalability of the method.** **(i) Model design & implementation.** In fact, there is no complex and cumbersome design in PIVOT-R. In terms of design, the core module of PIVOT-R is a waypoint-aware world model, which is mainly composed of an open-source VLM and a scene prediction model for primitive parsing and scene modeling. In terms of implementation, AHE sets different frequencies for different modules and adopts a multi-threaded asynchronous implementation, which greatly improves execution efficiency. Therefore, the introduction of WAWM will not significantly increase the complexity of the entire model, and PIVOT-R is simple in design and implementation. We will also release our code upon acceptance of the paper to facilitate future research. **(ii) Heuristic methods for defining waypoints.** The heuristic method is a general waypoint definition method that has been adopted in many previous robot manipulation tasks [1-4]. PIVOT-R can achieve automated annotation of waypoints in various robot manipulation benchmarks using an open-source VLM and scripts. Therefore, our waypoint annotation method has good scalability. [1] HSIAO K, LOZANO-PEREZ T. Imitation Learning of Whole-Body Grasps[C/OL]//2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing. 2006. [2] AKGUN B, CAKMAK M, JIANG K, et al. Keyframe-based Learning from Demonstration[J/OL]. International Journal of Social Robotics, 2012: 343-355. [3] SHRIDHAR M, MANUELLI L, FOX D. Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation[J]. 2022. [4] JAMES S, DAVISON A J. Q-attention: Enabling Efficient Learning for Vision-based Robotic Manipulation[J]. arXiv: Robotics, 2021. **Q2.
PIVOT-R *vs.* SUSIE.** Thank you for your suggestions. We compared PIVOT-R and SUSIE on the SeaWave benchmark, and the results are shown in Table 2 of the rebuttal PDF. As shown in the results, compared with SUSIE, PIVOT-R achieved an **average performance improvement of 26.93%** (74.19% *vs.* 47.26%) on four levels of SeaWave tasks. We suspect this is because the poor quality of the target images generated by SUSIE severely limits the model's action execution when given unseen instruction input. In addition, compared with SUSIE, PIVOT-R still achieves a 9.17% (88.06% vs. 78.89%) performance improvement on Level 1 tasks without complex instructions. We think this is because SUSIE's diffusion model and low-level policy are trained separately, causing prediction deviations in the images that affect the low-level policy. In contrast, PIVOT-R's WAWM and action prediction modules are trained together, thus effectively avoiding this problem. We will add these results to the revised version. **Q3. Key factors in waypoint definition.** Thank you for your suggestion. We mainly consider three markers to define waypoints: zero hand speed, gripper state change, and primitive action completion frame. Among them, both zero hand speed and gripper state changes belong to robot state changes, and the robot arm speed is zero when the gripper state changes, indicating a strong correlation between the two. Both can be read directly from the robot's motion status port. Therefore, as defined in Lines 240-241 of the article, the above three waypoint judgment conditions can be divided into two types: primitive action completion frame and robot state change frame (including gripper and arm). To illustrate the contribution of the above two waypoint selection methods, we added a set of ablation experiments shown in Table 4 of the rebuttal PDF.
As shown in the results, the performance of PIVOT-R with only primitive action completion frames dropped by 5.1%, while the performance of PIVOT-R with robot state change frames dropped by 30.54%. Therefore, action completion frames are the main contributing factor. **Q4. Annotation cost and scalability.** **(1) Cost.** For dataset annotation, we use the open-source LLaVA 1.5 as the VLM for waypoint (*i.e.*, primitive action completion frame) judgment, and use scripts to automatically annotate waypoints in the robot's manipulation trajectory. As shown in Table 3 in our paper, PIVOT-R is not sensitive to the choice of VLM. We used the open-source LLaVA 1.5 as the VLM on 8 RTX 4090s to complete the annotation of 13K trajectories in 6 hours, which is affordable for practical implementation. **(2) Scalability.** The annotation conditions of waypoints depend on the state changes of the robot and the visual changes of the manipulation trajectory in the simulator, which are common and easily accessible in different benchmarks. Therefore, this waypoint annotation method has good scalability. **Q5. Ablation experiment for Surfer.** The biggest difference between PIVOT-R and Surfer is that PIVOT-R adopts a waypoint-aware world model strategy. The results of using the next frame in Surfer as a waypoint in PIVOT-R are shown in row 3 of Table 3 in our paper, and show a very significant performance degradation (*i.e.*, a performance loss of 29.7%, 74.19% *vs.* 44.45%). Taking the predicted scene as input is not the key difference between PIVOT-R and Surfer. Surfer and PIVOT-R both essentially regard the robot action as the key factor in the observed image state transition, but they use different prediction orders. Thank you for this valuable comment, we will explain this in our revision. **Q6.
Explanation of ablation for PIVOT-R w/ video decoder.** The pixel-level predictions of "PIVOT-R w/ video decoder" are inconsistent at the semantic level with the high-level primitive actions, such as "close to" and "grasp," that require attention. This mismatch degrades model performance.
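The heuristic waypoint labeling discussed in this thread (waypoints at zero hand speed, gripper state change, and the final/primitive-action-completion frame) can be sketched roughly as below. The function name, array shapes, and speed threshold are illustrative assumptions, not the authors' annotation scripts.

```python
import numpy as np

def label_waypoints(positions, gripper, speed_eps=1e-3):
    """Heuristic waypoint labeling over a recorded trajectory.

    positions: (T, 3) end-effector positions; gripper: (T,) open/close state.
    A frame is labeled a waypoint if the hand speed is ~zero, the gripper
    state changes, or it is the final frame of the trajectory.
    """
    T = len(positions)
    # Per-step hand speed, shape (T-1,).
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    waypoints = set()
    for t in range(1, T):
        if speed[t - 1] < speed_eps:          # zero hand speed
            waypoints.add(t)
        if gripper[t] != gripper[t - 1]:      # gripper state change
            waypoints.add(t)
    waypoints.add(T - 1)                      # final frame
    return sorted(waypoints)

# Toy trajectory: move, pause while closing the gripper, then move again.
pos = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                [0.2, 0, 0], [0.3, 0, 0], [0.4, 0, 0]], dtype=float)
grip = np.array([0, 0, 0, 1, 1, 1])
print(label_waypoints(pos, grip))  # → [3, 5]
```

In the toy run, frame 3 is selected both because the hand speed drops to zero and because the gripper closes, matching the rebuttal's observation that the two robot-state conditions are strongly correlated.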
Rebuttal 1: Rebuttal: We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are happy that they appreciated our paper - makes **important technical contribution** on combining large VLMs with scene prediction models (Reviewer XYkg); - is **well-written, well-organized, and easy to follow** (Reviewer XYkg, F1Qx, and 9FHi); - the design of the asynchronous hierarchical executor (AHE) **makes sense and enhances execution efficiency** (Reviewer F1Qx); - demonstrates **strong performance improvements** on the SeaWave benchmark and provides **thorough ablations** for generalization and interpretability (Reviewer eMP4, XYkg, and F1Qx). We have also conducted additional experiments and provided necessary clarifications in our rebuttal, which are summarized below: - In response to Reviewer eMP4 and F1Qx, we have added additional real robot experiments (Table 3) and experiments on other benchmarks (Table 1) in the rebuttal PDF. The results show that our PIVOT-R **significantly outperforms comparison methods** in both **real robot experiments and other benchmarks.** - In response to Reviewer eMP4, we have highlighted our motivation and technical contribution of introducing waypoint prediction and asynchronous hierarchical executor, which are **crucial for enabling the world model to capture key dynamics and enhance execution efficiency**. We also provided a detailed analysis of the difference between our PIVOT-R and existing methods regarding action primitives and waypoint prediction. Note that both Reviewer XYkg and F1Qx **acknowledged the technical contribution of our paper.** - In response to Reviewer 9FHi, we have clarified that **both the VLM and the scene prediction model** form the world model, which fits its definition of perceiving the environment and predicting changes in environmental states. 
We also highlighted that our core novelty is that we **introduce action primitives and waypoint prediction for improving world modeling**, which we believe is a valuable and inspiring idea for the robotic community. It would be reductive to judge our novelty merely by action primitive decomposition and waypoint prediction taken in isolation. We provide detailed answers to the reviewers' questions individually. We hope our response and rebuttal revisions will address the reviewers’ concerns. Pdf: /pdf/787b0672c65a895c56c4faa7a00b9786d5187fba.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SARAD: Spatial Association-Aware Anomaly Detection and Diagnosis for Multivariate Time Series
Accept (poster)
Summary: This manuscript proposes a spatial association reduction method for anomaly detection and diagnosis (SARAD) where anomalous features have low association values in reconstructing the multivariate time-series. The association values are derived from an attention between the features and exhibit changes over time, which enables the detection of anomalous features and the time period of anomalies. SARAD is built on two models: a transformer for data reconstruction and an MLP for the reconstruction of spatial progression. The final anomaly score is a combination of both reconstruction scores. Strengths: - The proposed methods can be useful to detect the source of anomalies in multivariate time-series data. - The manuscript provides a sufficient evaluation. Weaknesses: - In my opinion a main issue regarding the anomaly diagnosis should be addressed. For example, in Fig. 1 there are three features (9, 13 and 16) correlated with 12 and 15. An additional example is from Fig. 5 where features 13 and 16 are highly correlated with 12 and 15. What distinguishes the detected features from their correlated features? - There was no discussion about the identity shortcut problem. Such a problem is typical for reconstruction-based methods where the model learns to reconstruct anomalies and thus it becomes harder to separate them from the normal ones. Maybe I missed it in the manuscript, but can you elaborate on this issue? How do you address this issue in your method? - The point-based evaluation on the PSM dataset indicates some weaknesses regarding the sensitivity to the window size and short-term anomalies (see Table 13). The clustering for SMD and PSM is also not of the same quality compared to other datasets (see Figs. 12 and 13). - The model size/parameters of the proposed method exceed other baselines by big margins (except for DiffAD). Technical Quality: 3 Clarity: 2 Questions for Authors: - How is the threshold defined from the anomaly scores to detect anomalies?
- Can you please provide a brief explanation of why progression-based $p$ (SPR) is less effective for SWaT? Minor: - I find it a bit confusing to call it spatial association for time-series processing. Maybe it is better to call it channel/feature/variable. - It is clearer if you visualize or indicate the ground truth for anomaly diagnosis alongside the detected features, i.e., in Figs. 1, 3, and 4-7. - Lines 84-85: Can you please make it clearer which spaces you are referring to? I think you mean data space and spatial progression space. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I think the issues about anomaly diagnosis and the identity shortcut should be discussed (see weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their detailed and constructive reviews. Here are our responses: **Weakness1:** **Response:** What distinguishes features 12 and 15 from features 9, 13, and 16 in Fig. 1 is that the spatial association reduction only occurs on the former features. Figs. 1(e)(f) show that column-wise association reductions are significant (in darker cells) on features 12 and 15, but not on 9, 13, and 16. The reductions in Figs. 1(e)(f) are computed as the negative-only changes of the Transformer’s intermediate spatial association mappings from before (after) the anomaly to during the anomaly. Similarly, in Fig. 5, spatial association reduction occurs on features 12 and 15, but not on 13 and 16. **W2:** Identity shortcut happens when the data reconstructor learns to copy the anomalous input directly to the output. We considered two kinds of techniques to address the problem. On the data side, we could inject noise into the training input such that the reconstruction-based method does not know the true values. The added noise prevents the method from overfitting anomalies in the training set and learning to copy the input. On the model architecture side, we could apply self-attention masks within the MHSA module. In our case of SARAD, this prevents each feature’s representation from directly reading itself during attention computation. We initially implemented both but dropped them early in development because we did not run into the identity shortcut problem on any of the datasets. We also observed that, even in cases where the errors approach zero (<1e-4), anomalies still lead to high anomaly scores relative to normal time points. **W3:** We acknowledged the weakness in Lines 688-693 on page 28. PSM contains many short-term anomalies (Fig. 8), making them hard to detect for all methods.
In all our evaluations, we applied a unified time window length (of 100) to all datasets to offer a fair comparison with all baselines, as the majority of them originally or officially did so as well. Figs. 10 and 11 show that with a smaller window size (of 10 or 20), SARAD achieves better performance than with the default size. In deployment, the window length could be set based on the seasonal patterns, the sampling frequency, and (if available) previous anomaly history to maximize performance. We also note that nearly all methods are limited by the choice of window size. It is harder to detect anomalies on SMD and PSM than on the other datasets, leading to lower-quality clustering in Figs. 12 and 13. This difficulty is due to the existence of many short-term anomalies. PSM and SMD almost always have lower best performance scores than other datasets in Tables 3, 13, and 16. **W4:** In Lines 527-535, we acknowledged the limitations of the model size and complexity with regard to the number of features. The model size is modest for modern GPUs, as we were training it on a three-year-old NVIDIA A10 GPU. Table 15 in Appendix N shows that, while our model size exceeds some other baselines, the time overheads are smaller than those of lightweight models such as GDN and MAD-GAN, and comparable with many others as well, Anomaly Transformer and TranAD among them. In some cases, such as DCdetector, the small model size does not translate into low time overheads due to its excessive contrastive learning pipeline. We also note that the inference times of our model are several orders of magnitude below all datasets’ sampling periods, guaranteeing real-time deployment. Also, the training times are several orders of magnitude below all datasets’ collection times, incurring little to no delay for model/project development. Finally, our backbone Transformer is known for fast parallel processing, and we refactored the MHSA implementation to enable parallel subseries processing (Line 168).
Our design choice of near-vanilla Transformer and MLP architectures makes existing optimization techniques applicable, and a stripped-down loss function prunes excessive operations during loss computation. **Question1:** **Response:** While our main evaluation metrics are threshold-independent AUC scores, our method supports any threshold selection protocol. A common strategy would be to measure all anomaly scores $\boldsymbol{s}$ on the training/validation set and set the threshold to be $mean(\boldsymbol{s}) + 2 \cdot std(\boldsymbol{s})$. **Q2:** SWaT contains several extensively long-lasting anomalies, the longest lasting 35,900 time points, during which most previously correlated features maintain their correlation due to the limited cyber-attack range (single attack point and single attack subprocess). While the SPR is indeed sensitive to changes in correlational relationships (similar observations are made in the Fig. 3 example), the progression-based score decreases in the long anomaly span as the physical process regains stability. In addition, capturing SPR during a long-range anomaly is difficult as we are differentiating the association mappings between consecutive subseries within a fixed-length time window. The data-based score, however, captures the elevated or dropped steady-state measurements during an anomaly and thus complements its progression counterpart. In the end, we adopted a joint detection criterion that utilizes both. **Minor1:** **Response:** We note that spatiality has different connotations in some AI literature, e.g., geographic location or characteristics on Earth. We will mention such definitions from related literature and, with respect to them, clarify our usage of the phrase "spatial association" in this work. **M2:** Ground truths for diagnosis labels are highlighted in red bounding boxes in Figs. 1, 3, 4, and 7. We will append the necessary descriptions. **M3:** Apologies for the unclear mentions of spaces.
Yes, we are referring to both. --- Rebuttal Comment 1.1: Comment: Thank you for your response. There are still two unanswered questions. I will try to rephrase my questions to make them clearer: **W1.** For example, in Fig. 5, the spatial association reduction happens for two features 12 and 15. The question is: Why didn’t this reduction happen for features 13 and 16 as well even though they are highly correlated with features 12 and 15? **W4.** My comment was about the model capacity rather than the training/inference time. As seen in Table 15, page 29, the proposed model exceeds other baselines by big margins (except for DiffAD). I think this gives the proposed model an advantage and doesn’t fully guarantee a fair comparison. --- Rebuttal 2: Title: Response to W1 & W4 Comment: **W1:** You are correct in that there are non-anomalous features (9, 13, 16) correlating with anomalous features (12, 15) in Fig. 1. Similar correlated non-anomalous features can be found in Fig. 5. What distinguishes them from the anomalous features is the frequency of the patterns they showed during the anomaly period in the whole training set. Such patterns are frequent in the training set for features 9, 13, and 16, but not for features 12 and 15 (**see the following MSE experiment** for measuring frequencies). As the Transformer model was trained on the training set and learned the distributions of normal feature patterns, it does not register such frequent patterns of those correlated features as showing association reductions. **MSE Experiment:** We herein illustrate the frequencies of correlated non-anomalous features (9, 13, 16) in Fig. 1 relative to the anomalous ones (12, 15) using **Mean Squared Errors (MSEs)**. We first extract a subseries (of length 255) for each feature of interest during the anomaly period in Fig. 1 and refer to them as templates. 
We then compute the MSE between each template and each equal-length subseries from the training set (of size 566K) of the same feature. All features have already been normalized in data pre-processing. The table below shows the densities of MSE values for each feature on ranges starting from 0. It demonstrates a stark contrast between the anomalous features and the correlated non-anomalous features, with the majority of MSE values falling below 1 for the correlated features and yet 0.00% for the anomalous features. The contrast remains stark as we shrink the ranges when plotting a histogram (which cannot be shown here due to OpenReview’s restrictions), with a dense concentration of MSEs near 0 for only the correlated features. |||||**MSE Range**|||| |--------------|-----------|----------|----------|-------------|----------|----------|------------| ||**Feature**|**(0, 1]**|**(1, 2]**|**(2, 3]**|**(3, 4]**|**(4, 5]**|**(5, $+\infty$)**| |**Correlated**|9|**76.62%**|8.74%|4.60%|3.53%|2.46%|4.06%| ||13|**74.87%**|14.03%|3.58%|1.98%|1.01%|4.53%| ||16|**55.83%**|21.00%|7.60%|5.99%|2.71%|6.87%| |**Anomalous**|12|0.00%|1.43%|**90.45%**|3.66%|0.65%|3.81%| ||15|0.00%|**62.69%**|14.00%|11.88%|5.06%|6.36%| The MSE distributions observed here (and similarly for the Fig. 5 case) underline how usual the correlated features’ patterns are. Since other subseries in the training set share similar patterns for the correlated features, the Transformer model does not register those features as showing association reductions. **W4:** The large model size generally does not give our model an advantage in performance. We refer to our Hyperparameter Sensitivity experiments in Appendix H. Figures 10(c) and 11(c) both show that on most datasets (except SWaT) the performance is insensitive to an exceedingly small attention length $D$ (also denoted as $d_{model}$ in some Transformer literature). 
The model size is largely decided by $D$ since the majority of parameters reside in linear transformations inside the Transformer's self-attention modules, whose weights are of shape $D \times D$. We chose 512 as the default value for $D$, following the original Transformer paper and subsequent works. Compared to the default value of 512, a small $D$ of 64 does not limit the model's performance on most datasets while scaling down the model by roughly 1/64, as shown in Figures 10(c) and 11(c). The following table reports the numbers of model parameters when $D=64$ and relative scales to the defaults when $D=512$ (as in Table 15). The model sizes are now comparable to most baselines, while the performances are of similar levels (except SWaT) as when $D=512$ and thus remain state-of-the-art. | | SMD | | PSM | | SWaT | | HAI | | | ----------------- | ------ | ----- | ------ | ----- | ------ | ----- | ------ | ----- | | Method | Param. | Scale | Param. | Scale | Param. | Scale | Param. | Scale | | **Ours** ($D=64$) | 198K | 1/48 | 284K | 1/56* | 212K | 1/45 | 343K | 1/46 | One possible explanation for under-performance on SWaT is that SWaT has many long-range anomalies, which could require a large $D$ value to adequately encode rich spatial information during anomalies. Figures 10(c) and 11(c) show that $D=256$ (Param. count=2.46M, Scale=1/4) results in a much closer performance to the default $D=512$. ------ *Correction: It came to our attention when computing the model scales that Table 15 in the original version incorrectly reported our parameter count on PSM. The count should be 15.85M, not 1.59M. Apologies for the typo. All other parameter counts for our model are correct. We will correct the count in the final version.
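The frequency measurement in the MSE experiment above can be sketched numerically. This is an illustrative reconstruction, not the authors' released code; the function name and the bin edges simply mirror the MSE ranges reported in the table.

```python
import numpy as np

def mse_density(template, series, edges=(0, 1, 2, 3, 4, 5, np.inf)):
    """Fraction of sliding-window MSEs falling into each range.

    template : (L,) subseries of one feature during the anomaly period
    series   : (N,) full normalized training series of the same feature
    edges    : bin edges matching the MSE ranges in the reported table
    """
    L = len(template)
    # every length-L window of the training series, without copying
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    mses = ((windows - template) ** 2).mean(axis=1)
    counts, _ = np.histogram(mses, bins=edges)
    return counts / len(mses)
```

A feature whose anomaly-period pattern is common in the training data would concentrate most of its mass in the (0, 1] bin, as features 9, 13, and 16 do in the table above.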
Summary: In this paper, the authors aim to address the problem of anomaly detection and diagnosis for time series data. The authors consider that the existing methods may obscure or dilute the spatial information and the interaction between different variates, so they propose the SARAD model to capture these interactions with a Transformer. The authors evaluate the proposed method on several datasets and achieve good performance. Strengths: N.A. Weaknesses: However, there are some problems to solve: 1. The authors claim that existing methods may assume feature independence or combine variables of a diverse physical nature. Indeed, recent methods for time series analysis usually employ independent channel assumptions. However, several methods like conventional RNNs and graph-based methods do not assume that the features are independent. Therefore, the motivation of this paper might not be convincing. 2. Moreover, Figure 1 might not clearly explain the motivation of the paper. Specifically, Figure 1(a) is not informative and Figures 1(b)(c)(d) are almost the same. 3. The contribution of the proposed method is limited. There are other methods that use transformers as the backbone networks like [1]. It is suggested that the authors provide a detailed discussion and compare with it. [1] Xu, Jiehui, et al. "Anomaly transformer: Time series anomaly detection with association discrepancy." arXiv preprint arXiv:2110.02642 (2021). Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to weaknesses Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their detailed and constructive reviews. Here are our responses: **Weakness1:** **Response:** We agree that some temporal modeling methods, such as RNNs, do not make feature independence assumptions. However, Lines 28-34 stated that temporal methods, including RNNs that do not assume feature independence, combine variables/features of a diverse physical nature. That leads to the dilution of spatial information, which could be crucial to detection. Our related work also covered RNN-based methods such as LSTM and MAD-GAN (Lines 91-94), the latter being one of our baselines. We highlighted their limitations regarding small receptive fields in time and the impact of timestamp misalignment across features (Lines 97-99). Our motivation for this paper is expressed in Lines 32-38: temporal methods ignore the spatial associations that characterize multivariate time series patterns. They also restrict diagnostic capabilities due to a mismatch between their temporal-novelty criterion and the need to capture spatial novelty. Our proposed SARAD aims to leverage spatial information and exploit the spatial association descending patterns common with time series anomalies. **W2:** Figure 1 investigates the Spatial Association Reduction (SAR) phenomenon by showing the changes in spatial associations throughout a time series anomaly. Fig. 1a shows the raw time series of all feature channels, with the anomalous features (#12 and #15) highlighted in red and marked with dashed lines. The intention of 1a is to show a full picture of the raw data before, during, and after the anomaly of interest. We were unable to insert proper space between features due to the page limit, which may cause readability issues without digital zoom. Based on the reviews, we can replace 1a with a selected feature set to improve readability. Figures 1(b)(c)(d) show the average association mappings before, during, and after the anomaly. 
While they might look similar, the differences between them are shown in Figs 1(e)(f) which display elevated association reductions on the anomalous features (#12 and #15). Apologies for the color scheme as it does not have enough contrast. To improve readability, we will increase the contrast for Figs. 1(b)(c)(d). **W3:** We discussed existing detectors of the Transformer backbone such as Anomaly Transformer and DCdetector in Lines 94-102 and 105-108. Furthermore, we compared against Transformer-based detectors including TranAD, Anomaly Transformer, DCdetector, and ATF-UAD in Tables 3, 4, 13, and 16. Concretely, for the cited Anomaly Transformer, we underlined its temporal modeling capabilities underpinned by the self-attention mechanism and yet restricted by the exceptionally small receptive field (typical 1D Conv kernel size is 3 for embedding) and cross-feature timestamp misalignment issues. For anomaly diagnosis, temporal methods such as Anomaly Transformer also mismatch the anomaly criterion of temporal novelty with spatial interpretation. We emphasize our contributions in this paper in Lines 80-87. Previous Transformer-based detectors fall short of spatial modeling. In both detection and diagnosis contexts, they ignore the spatial associations that ubiquitously characterize multivariate time series patterns. Specifically, compared with previous detectors using a Transformer backbone, our contributions are as follows. - Our method explicitly models spatial associations with self-attention modules to address the aforementioned receptive field and timestamp misalignment issues. - We design subseries division and refactor MHSA implementation accordingly to enable data shuffling during training and prevent memory storage of last association mappings. - We propose autoencoding in the spatial association space to exploit the association descending patterns common with anomalies and complement data autoencoding much less sensitive to spatial novelty. 
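To make the notion of a spatial association reduction concrete, here is a minimal numerical sketch. This is our own simplification with illustrative names, not the paper's exact formulation: given association mappings from two consecutive subseries (e.g. row-normalized attention weights over features), we keep only the decreases and aggregate them per feature.

```python
import numpy as np

def association_reduction(assoc_prev, assoc_curr):
    """Per-feature spatial association reduction between two subseries.

    assoc_prev, assoc_curr : (F, F) association mappings, where row i
    describes how feature i associates with all F features.
    Returns a length-F vector of accumulated association drops.
    """
    drop = np.clip(assoc_prev - assoc_curr, 0.0, None)  # keep decreases only
    # sum the drops over every row and column touching each feature
    return drop.sum(axis=0) + drop.sum(axis=1)
```

A feature that decorrelates from the others during an anomaly, like features #12 and #15 in Figure 1, accumulates a larger reduction than its still-correlated neighbours.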
--- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The response has addressed my concerns. I have raised the score.
Summary: In this paper, the authors focus on incorporating spatial information for time series anomaly detection and diagnosis. The proposed algorithm, SARAD, employs a transformer-based data reconstruction approach to capture inter-feature associations. By analyzing changes in these associations over time, the algorithm identifies anomalies under the assumption that anomalous features cause a reduction in the perceived association. Strengths: The idea of utilizing spatial association descending patterns for time series anomaly detection and diagnosis seems interesting. The authors proposed new criteria for calculating anomaly scores. Detailed experimental analysis has been performed to validate the proposed method. Weaknesses: The related work section lacks a detailed discussion of all the baseline methods used for comparison in the result analysis. For a more comprehensive understanding and to highlight the contributions of the paper, it would be better to incorporate those methods and discuss their limitations. The third paragraph in the Introduction section is not easy to follow. Providing more background information and reorganizing it would be recommended. Technical Quality: 3 Clarity: 2 Questions for Authors: I'm interested in understanding how the SARAD method distinguishes reductions in spatial associations that indicate anomalies from those that could be considered normal variations. Please clarify the types of anomalies the SAR phenomenon is effective for. The authors could have enhanced their study by comparing SARAD with existing methods (e.g., InterFusion), which integrate both spatial and temporal information for anomaly detection. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their detailed and constructive reviews. Here are our responses: **Weakness 1:** **Response:** Apologies for not including all baselines being compared in the related work section. Lines 89-102 introduced and discussed the limitations of several baselines including MAD-GAN (Li et al., 2019), ATF-UAD (Fan et al., 2023), USAD (Audibert et al., 2020), AT (Xu et al., 2022), and DCdetector (Yang et al., 2023). Lines 108-110 briefly mentioned GDN (Deng and Hooi, 2021) as a graph-based prediction approach for detection. We did not discuss IF and DIF baselines as they are based on neither temporal nor spatial modeling. That was our oversight. We will include a summary and limitations of IF and DIF in the final version. They work by building a binary decision tree ensemble that partitions either the data space (IF) or the deep embedding spaces (DIF). They are limited by the lack of temporal or spatial information and their anomaly scores are not reflective of the magnitude of the anomalies. We will also include TranAD, which replaces the MLP in USAD with Transformer, and its adversarial training paradigm makes the reconstruction errors more robust. However, it shares the temporal modeling nature with other Transformer-based detectors and is limited by the small receptive field to capture long-range inter-feature correlation and handle timestamp misalignment. With regards to GDN, we will also highlight its limitations beyond the lack of temporal changes of spatial associations mentioned, including the mismatch between its single-point prediction target and the range-wise anomalies and unstable Top-K node selection during training. **W2:** We will reorganize the third paragraph in the final version. Specifically, we will discuss the role of MHSA inside the Transformer and explain why the intermediate association mapping via MHSA captures the correlational relationships between features under our setting. 
**Question 1:** **Response:** SAR is effective whenever anomalies originate from or lead to the dissolution of pre-existing associations, detaching anomalous features from their non-anomalous counterparts (Lines 56-58). Such associations are common in industrial control systems and many other monitoring systems, in which sensors routinely collect measurements from different locations of interconnected subprocesses. Examples of SAR can be found in Figs. 1, 3, 4, 5, 6, and 7 where during the anomalies anomalous features experienced sudden unusual spikes, significant deviation from normal values, or some forms of correlational breakdowns with others. For example, in Fig. 7 a water pump P-101 (feature #4) was turned off without correlations with others causing an anomaly. To distinguish between anomalous SAR from normal variations, SARAD uses an MLP autoencoder on reduction in the association space (Lines 70-73). The autoencoder learns the patterns of normal variations by minimizing their reconstruction errors. It could dismiss normal SAR during inference, compared to using reductions directly (Lines 286-288 in Ablation Studies). In addition, SARAD’s data autoencoder produces higher scores for anomalies, complementing the shortcomings of the association autoencoder. Lines 113-119 stated that we derived our SAR insights from the cyber-physical defense literature, where time series anomaly detection is extensively applied (SWaT and HAI are both cyber-physical systems). There are dynamic watermarking approaches (Satchidanandan and Kumar, 2017 and Dai et al. 2023) that overlay actuation with randomized signals to exploit correlational breakdowns (SAR in some way) of attacks (anomalies) for detection. While they are intrusive defense approaches, our method is non-intrusive and yet still exploits SAR. 
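The dismissal of normal SAR described above can be illustrated with a toy autoencoder over flattened association reductions: trained only on reductions observed under normal operation, it reconstructs normal variations well and assigns high reconstruction error to anomalous ones. This linear stand-in (PCA-style, for brevity; the paper uses an MLP) is our own sketch with illustrative names.

```python
import numpy as np

class ReductionAutoencoder:
    """Linear autoencoder over flattened association reductions.

    Trained on reductions seen under normal operation; at test time, the
    reconstruction error serves as the anomaly score. A linear stand-in
    for an MLP autoencoder: same principle, simpler for illustration.
    """

    def __init__(self, n_components=4):
        self.k = n_components

    def fit(self, R):  # R: (n_samples, F*F) normal reduction vectors
        self.mean_ = R.mean(axis=0)
        _, _, vt = np.linalg.svd(R - self.mean_, full_matrices=False)
        self.components_ = vt[: self.k]  # principal directions
        return self

    def score(self, r):  # r: (F*F,) one flattened reduction vector
        z = self.components_ @ (r - self.mean_)        # encode
        recon = self.mean_ + self.components_.T @ z    # decode
        return float(((r - recon) ** 2).mean())        # anomaly score
```

Reductions lying in the learned "normal" subspace get near-zero scores and are dismissed, while unusual reductions score high.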
**Q2:** We ran into some compatibility issues when we tried to set up InterFusion during the rebuttal period, i.e., InterFusion’s CUDA and TensorFlow versions are so old that our machines no longer support them. Unfortunately, we couldn’t sort out the problem by the rebuttal deadline. We will try our best to resolve the issues and provide InterFusion experimental results in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The authors have addressed all the issues mentioned in the review.
Summary: The paper proposes a deep learning based anomaly detection method for multi-variate time series data. The proposed method has two components in a single neural network. The first component is a traditional autoencoder approach in which the input time series is broken into two parts (along time) and the model is trained to use the first part to generate the second part, identifying anomalies as a difference between the predicted and true output. The second component trains a transformer to generate the associations across the different features (akin to cross-correlation) and then trains an MLP to predict the associations, identifying anomalies if the predicted associations differ from the observed associations. Thus, the method captures a broader range of temporal anomalies. Experimental results on a variety of benchmark data sets and comparisons with state of the art methods demonstrate that the proposed method (SARAD) is able to identify anomalies that might not be apparent to approaches that do not consider the associations. Strengths: Paper is well-described. The idea of using association seems novel, though I have seen papers using cross-correlation to understand the multi-variate effect. Results are promising. The approach is compute intensive (scaling quadratically with the number of features), but the authors do acknowledge the limitation and outline possible approaches to work around it. Weaknesses: A terminology issue - spatial often refers to data with positional coordinates (geographic or xy on a domain) which indicates spatial relationships between observations. On top of that, each observation would be represented as a multi-dimensional feature vector. In this paper, the authors use the term spatial to refer to the multi-dimensional feature vector representation of data. I found that very confusing. I think it might be better to fix the terminology to improve the readability of the paper. 
Technical Quality: 4 Clarity: 3 Questions for Authors: Are there any types of anomalies that would still not be captured by this method? Similar to above, what kind of false positives could be captured by adding the association part? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The paper has addressed the limitations - I have suggested a couple above. Paper does not have a direct societal impact so the authors have a reasonable response to that. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their detailed and constructive reviews. Here are our responses: **Weakness 1:** **Response:** Apologies for causing the confusion. We note that spatiality has different connotations in some AI literature, e.g., geographic locations or characteristics on Earth. We will mention such definitions from other related literature and, with respect to them, clarify our usage of spatiality in this work to refer to the multi-dimensional feature vector of time series data. This terminology is also used in some related literature [1][2], i.e., time series prediction and anomaly detection. [1] Tryambak Gangopadhyay, Sin Yong Tan, Zhanhong Jiang, Rui Meng, and Soumik Sarkar. 2021. Spatiotemporal Attention for Multivariate Time Series Prediction and Interpretation. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 3560–3564. [2] Yu Zheng, Huan Yee Koh, Ming Jin, Lianhua Chi, Khoa T. Phan, Shirui Pan, Yi-Ping Phoebe Chen, and Wei Xiang. 2023. Correlation-Aware Spatial–Temporal Graph Learning for Multivariate Time-Series Anomaly Detection. IEEE Transactions on Neural Networks and Learning Systems (2023), 1–15. **Question 1:** **Response:** Very long-range anomalies are hard to capture even by our method. We note that on PSM and SWaT, there are a few very long-range anomalies lasting several thousand time points (the longest one is 35,900 as per Table 2). Over the long course of such anomalies, the anomalous features re-correlate with non-anomalous features, or the correlational breakdowns at the beginning of anomalies cease in the view of divided subseries, as the physical process regains stability. Table 6 shows the impact of those very long-range anomalies on SWaT performance when the score is entirely based on spatial associations (SPR). Without the cues from changes in spatial associations, our method relies on data reconstruction to detect such anomalies. 
In some cases, the system returned to normal operational states, leading to lower reconstruction errors and causing the anomalies to evade detection. **Q2:** False positives can occur when the correlational relationships between features are altered but not due to anomalies. Examples include situations in which the monitored system changes operational modes unexpectedly. On SWaT, we observe a few false positives which are suspected to occur during changes of operational modes. However, due to a lack of operational scheduling details and setpoint (desired conditions) data during test set collection in the SWaT technical document, we were not able to verify that. On HAI, there is an operation task scheduler that periodically adjusts the setpoint values within the legal ranges to simulate benign scenarios. As these adjustments are periodical, there are fewer false positives on HAI due to mode changes. Table 16 shows the elevated false positive rates for SWaT when compared to HAI. The real-world likelihood of unplanned operational mode or setpoint changes is low. --- Rebuttal Comment 1.1: Title: Thank you Comment: We thank you for replying to all the questions raised by the reviewers. --- Rebuttal Comment 1.2: Comment: Thanks for your responses. The clarifications are much appreciated. I stand by my current review rating.
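As a deployment aside, the threshold selection protocol suggested in the authors' earlier response (setting the threshold to $mean(\boldsymbol{s}) + 2 \cdot std(\boldsymbol{s})$ over anomaly scores from normal training/validation data) is simple to implement. The sketch below is illustrative; the function names are ours.

```python
import numpy as np

def select_threshold(train_scores, n_std=2.0):
    """Threshold = mean(s) + n_std * std(s) over scores from normal data."""
    s = np.asarray(train_scores, dtype=float)
    return s.mean() + n_std * s.std()

def detect(test_scores, threshold):
    """Flag every test-time score that exceeds the selected threshold."""
    return np.asarray(test_scores, dtype=float) > threshold
```

Since the main evaluation uses threshold-independent AUC scores, this step only matters when turning the scores into concrete alarms.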
NeurIPS_2024_submissions_huggingface
2024
Metric Space Magnitude for Evaluating the Diversity of Latent Representations
Accept (poster)
Summary: This paper studies diversity in representation learning. The authors propose a novel family of diversity measures based on metric space magnitude, a mathematical invariant that captures numerous important multi-scale geometric characteristics of metric spaces. The main contributions are as follows: 1. The authors first introduce magnitude as a general tool for evaluating the diversity of latent representations, 2. The authors formalise a notion of difference between the magnitude of two spaces across multiple scales of similarity. Strengths: 1. The idea of introducing magnitude as a general tool for evaluating the diversity of latent representations is quite attractive and it seems it can be generalized. 2. The authors formalise a notion of difference between the magnitude of two spaces across multiple scales of similarity. 3. The authors discuss the theoretical properties a suitable diversity measure should satisfy. Weaknesses: 1. As mentioned in the paper, it is difficult to define diversity. The paper does not clearly explain how to link their method with diversity. Maybe I didn't get their point. But I think their method is more suited to evaluating which representation is better rather than measuring diversity. 2. The authors demonstrate that magnitude is stable and can detect curvature; however, this connection is somewhat confusing. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. 1. Can the authors address my confusion? I think the proposed approach is more suited to evaluating which representation is better rather than measuring diversity. 2. I did not clearly see the connection between magnitude and curvature. Can the authors explain it clearly? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their questions and the positive assessment of our work. **For revisions, we take this feedback as motivation to clarify and further explain the links between magnitude and diversity.** ___ > As mentioned in the paper, it is difficult to define the diversity. The paper does not clearly explain how to link their method with diversity. Maybe I didn't get their point. But I think their method is more like to evaluate the better representation instead of measuring the diversity? Thanks for raising this question about our current write-up! We will use this to revise our manuscript. To summarise our main points: In our paper, we define diversity by considering fundamental axioms which a sensible diversity measure should satisfy, namely monotonicity in distance (diversity should increase as dissimilarity between observations increases) and the twin property (diversity should not change when including duplicate observations). We state these main theoretical requirements, and demonstrate that our proposed measures fulfil them. In this way, we provide the theoretical link between diversity and our methods. Specifically, we use the magnitude of a latent representation to evaluate diversity. Magnitude measures diversity as the “effective number of points” at a distance scale or “zoom factor.” Given a distance between data points, and a zoom factor, magnitude answers the question: How many *distinct* points or clusters can be distinguished at this scale? Intuitively, diverse spaces have clearly separated observations and high magnitude. In comparison, less diverse spaces, whose observations are more clustered, have lower magnitude. We then define formalised measures of diversity that summarise trends in magnitude across multiple resolutions. This is further illustrated in Figure 1 in the paper. 
Here, the random pattern in $X_1$ is the most diverse example because its points are most clearly separated and more evenly spread across the entire area. Thus, it achieves the highest values of magnitude. In comparison, the clustered pattern in $X_2$ generated via a Hawkes process is less diverse because it shows a more uneven distribution with points being concentrated at specific areas. Overall, the magnitude of $X_2$ is thus lower than for $X_1$. Our diversity evaluation pipeline illustrated in Figure 1 then summarises these differences in diversity across multiple resolutions in a principled manner. In the context of representation learning, diversity then is a key concept on what it means for representations to be better. Our goal thus is to evaluate latent representations by measuring their diversity. **In revisions, we will revise the introduction of magnitude and diversity to better explain what diversity means in our context and how our method measures diversity. Further, we will add an extended case study to the appendix in order to illustrate how our method measures diversity via intuitive examples.** ___ > I didn't see clearly connections between magnitude and curvature. Can authors explain clearly? We thank the reviewer for this question. The motivation of the curvature experiment was to substantiate our argument that magnitude encodes important geometrical properties with experimental evidence. Our results then support and are motivated by the theoretical results linking magnitude and curvature ([Willerton 2010](https://arxiv.org/pdf/1005.4041)). **At the same time, this experiment shows that magnitude is more effective at this task than other, more complex geometrical summaries** (like persistent homology). For the experiment itself, we can intuitively explain the relationship between diversity i.e. magnitude and curvature as illustrated in Figure S.5. For unit disks of positive curvature, the higher the curvature the lower the value of MagArea. 
This indicates that points move closer and closer the more curved the surface is decreasing the diversity in Euclidean space. For surfaces with negative curvature we see the opposite trend. The more negatively curved the Poincaré disk the lower the value of MagArea. This is because Euclidean distances between points and thus diversity are decreasing. In revisions, we will clarify the connection between magnitude, diversity and curvature. To do so, we will reference the relationship to relevant theoretical results in the text. Further, we will add an introductory text earlier in the paper to motivate this experiment. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications and additional materials. It makes me understand it clearly. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive response. We are pleased to hear the additional materials and clarifications have provided a clearer understanding. Should you or any other reviewers have additional questions, we are happy to discuss and provide further clarifications. --- Rebuttal 2: Title: Response 1/3 Comment: We thank the AC and the reviewers for their further questions and are happy about the chance to elaborate more on the technical details regarding the link between the mathematical theory of diversity and our proposed method. Our previous response aimed to convey an intuitive understanding of diversity. > What are the formal definitions of the four axioms (Monotonicity, Twin property, Absence invariant, Multi-scale)? The first three are somewhat trivial, but how to define the multi-scale i.e., "encodes both local and global trends in the data manifold." mathematically does not seem trivial. > Does the proposed metric MAGAREA satisfy the four axioms? If so, where in the paper can we find the proof? Yes, MAGAREA fulfils all the desired axioms as we will detail below. Definitions of the diversity axioms are stated in Appendix C.3. 
and we are happy to elaborate on this discussion for revisions. First, we want to highlight what it means for a measure to be multi-scale. A **multi-scale measure** encodes both local and global trends in the data manifold by considering multiple levels of scale or resolution simultaneously. Formally, we can require that a diversity measure $m_t \in \mathbb{R}$ is a continuous function of the scale of dissimilarity $t$. A multi-scale measure, $m$, then summarises diversity across multiple scales i.e. $m = f(m_{t_1}, m_{t_2}, \dots, m_{t_n}) \in \mathbb{R}$ for $n>2$ and some summary function $f$. That is, rather than giving a snapshot of diversity at a fixed degree of (dis)similarity, multi-scale methods summarise diversity across varying scales of (dis)similarity. We reason that this property is advantageous to capture a more complete picture on how both coarse and more nuanced dissimilarities in observations affect diversity. Indeed, **being a multi-scale summary is a distinguishing characteristic of our proposed diversity measure, MAGAREA.** Alternative diversity measures, such as average similarity, the Vendi score or magnitude computed at one scale, do not fulfil this criterion as they are single resolution snapshots computed from a fixed similarity matrix. **For revisions, we will clarify the current statement in the main text and include a formal definition as well as an extended discussion regarding this property in the appendix.** Regarding the remaining properties, we can prove that monotonicity in observations, the twin property and absence invariance hold for the magnitude of a negative definite metric space. Then, because magnitude $Mag_X(t)$ fulfils these axioms for each scale of $t \in \mathbb{R}^+$, our measure $\text{MAGAREA} = \int_{T} Mag_X(t) dt$ does as well. We state this in Appendix C.3. 
and **will elaborate on the relevant proofs during revisions.** Briefly, we can sketch the relevant proofs as follows: - Magnitude is **monotone in observations** i.e. magnitude does not decrease when including a novel observation. This follows from [Corollary 2.4. in Leinster (2013)](https://arxiv.org/pdf/1012.5857). - Magnitude fulfils the **twin property** i.e. it remains unchanged when including a duplicate observation. This follows from the fact that a metric space cannot contain a duplicate element by definition. - A diversity measure is **absence-invariant** if it remains unchanged when restricting the empirical distribution to its support. That is, diversity does not change when removing elements or features that have not been observed or have zero probability. Magnitude is absence-invariant because the distance matrix $d$ itself is absence-invariant. **We will add this proof during revisions.** We thank the AC for their questions and will highlight the relevant diversity axioms, their definitions and proofs better in revisions. Please let us know whether additional clarifications are required. --- Rebuttal 3: Title: Response 2/3 Comment: > Is the design of MAGAREA unique? Cannot we replace $\exp$ in the definition of $\zeta$ with another increasing and convex function? The computation of MAGAREA is unique insofar as we require $\zeta(t)=f(-td)$ being invertible for all $t \in \mathbb{R}^+$, which, to the best of our understanding, won't necessarily hold for any increasing and convex function $f$. For example, take $f(x)=x^{1000}$, so that $\zeta_{ij}(t) = (t d_{ij})^{1000}$ which is increasing and convex for $td > 0$. Then, for some low values of $t$ close to zero, $\zeta$ will be singular and hence not invertible. We know, however, that any positive definite matrix is invertible so that we can define the magnitude of any such matrix $\zeta$ [(Leinster 2017)](https://arxiv.org/pdf/1606.00095). 
Given this motivation, $\exp$ is a somewhat “canonical” choice for defining the magnitude of a metric space because it is a prime example of a strictly positive definite kernel, which ensures invertibility of the similarity matrix for any negative definite metric $d$ [(Feragen et al. 2015)](https://ieeexplore.ieee.org/document/7298922). From a category-theoretic perspective, magnitude represents a generalised notion of size for a metric space, which is a special type of monoidal category. The exponential kernel has then been chosen for defining the magnitude of a metric space because of its multiplicative properties, i.e. it is necessary that $f(x+y)=f(x) \cdot f(y)$ [(Leinster 2021, p.212)](https://arxiv.org/pdf/2012.02113) to define a valid notion of size. This essentially forces $f(x) = c^{-x}$ for some constant $c$. Hence, the choice of $e$ as a base is arbitrary and any other positive constant could be used, which is equivalent to re-scaling the distances. We also want to note that the current definition of magnitude has revealed useful theoretical and geometric insights, such as proving the connection between magnitude and the curvature of a metric space. Hence, if we choose any $f$ that does not fulfil the multiplicative property above, we lose the existing knowledge relating magnitude to geometrical properties of the underlying space, for example. The standard formulation of $\zeta$ also has some appealing properties linking it to diversity. We have $\exp(-td_{ii})=\exp(0)=1$, so that the similarity of an observation to itself is always $1$. Further, we have $\lim_{t \to \infty} \exp(-t d_{ij}) = 0$, so that the similarity between distant points approaches zero asymptotically. Thus, $\zeta(t)$ gives a valid similarity matrix, whose entries are bounded by $[0,1]$, which is desirable for defining a (dis)similarity-based notion of diversity [(Leinster 2021, p.173)](https://arxiv.org/pdf/2012.02113). 
**In fact this behaviour is the practical reason why magnitude can be interpreted as the effective number of points.** Hence, we know that other choices of $\zeta$ are possible and we believe it is worth investigating the generalisation of our methods to other choices of kernels in future work. Nevertheless, we find that, for now, $\exp$ provides a useful “default” choice for defining diversity based on the considerations detailed above. --- Rebuttal Comment 3.1: Title: Response 3/3 Comment: > Also, as a relevant question, I would like to ask why we cannot use for authors' motivation (traditional) discrepancy measures in the low-discrepancy sequence area, like the star-discrepancy (e.g., [WS08]. We can find traditional discussions in e.g., [War72])? Thank you for pointing out these references! We agree that discrepancy measures, such as star-discrepancy, give an interesting set of tools for quantifying the difference between a latent space and a uniform distribution on a hypercube. In our context, these methods could be used to measure an important aspect of diversity, namely the evenness or uniformity of an empirical distribution. In our previous response, we explained the importance of evenness for understanding diversity via the comparison between the clustered patterns $X_2$ and the random point pattern $X_1$. Preliminary experiments with discrepancy measures on such data show that (a) measures are often not capable of reproducing the ground-truth ranking in diversity, i.e. $X_1, X_2, X_3, X_4$, or (b) fail to capture the degree of difference in diversity when comparing $X_1$ and $X_2$. **We will incorporate these measures in an extended experimental discussion in our revision.** Moreover, we find that “evenness” does not fully capture all relevant aspects of diversity. Another key aspect is measuring the absolute richness of a dataset. 
For example, we want to measure that a space with 100 uniformly distributed samples does not decrease in diversity when including additional 10 randomly sampled points. That is, we define diversity not just by the evenness of the empirical distribution, but also by the richness or the number of distinct observations or clusters. Linking back to the axioms of diversity, this behaviour is described by requiring monotonicity in observations as well as the twin property. We can show that both axioms do not hold for the star-discrepancy via counterexamples. For example, when including discrepancy measures into the simulation study conducted for the twin property as detailed in the additional PDF, we see that L2-star discrepancy changes under the inclusion of duplicates. In comparison, our proposed diversity measure evaluates both evenness and richness by summarising the effective number of distinct points across multiple resolutions. Therefore, our method gives a unique view on the diversity of latent spaces that is based on mathematical theory, but not yet addressed by existing diversity or discrepancy measures in ML. Regarding the original motivation of our work, we focused on linking our method to two applications that benefit from improved diversity evaluation, namely the evaluation of generative models and automated embedding-based diversity evaluation e.g. for assessing LLMs. To the best of our knowledge, discrepancy measures have not yet been included as standard benchmark methods for diversity evaluation in these fields. **We fully agree that this is worth further investigation and are excited to include discrepancy measures as alternative baselines during revisions.** We thank the AC for their questions as well as their further suggestions and are happy to clarify our answers.
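Concretely, the magnitude computation underlying MAGAREA can be sketched in a few lines of plain Python (our own illustrative sketch; the distance matrix, integration grid, and helper names are assumptions, not code from the paper):

```python
# Minimal sketch: Mag_X(t) is the sum of entries of the inverse similarity
# matrix zeta_ij(t) = exp(-t * d_ij), i.e. sum(w) where zeta @ w = 1.
# MagArea-style summaries then integrate Mag_X(t) over a range of scales t.
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def magnitude(dist, t):
    """Magnitude of a finite metric space with distance matrix `dist` at scale t."""
    n = len(dist)
    zeta = [[math.exp(-t * dist[i][j]) for j in range(n)] for i in range(n)]
    return sum(solve(zeta, [1.0] * n))  # sum of the weighting vector

def mag_area(dist, ts):
    """Trapezoidal approximation of the integral of Mag_X(t) over the grid ts."""
    mags = [magnitude(dist, t) for t in ts]
    return sum(0.5 * (mags[i] + mags[i + 1]) * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))
```

For a two-point space at distance $d$, this reproduces the closed form $\mathrm{Mag} = 2/(1+e^{-td})$, and the magnitude approaches the number of points as $t \to \infty$, matching the "effective number of points" interpretation.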
Summary: This paper focuses on evaluating the diversity of latent representations. The authors develop a family of magnitude-based measures of the intrinsic diversity of latent representations, formalizing a novel notion of dissimilarity between magnitude functions of finite metric spaces. Moreover, they demonstrate the practicality and performance of the proposed measures in different domains and tasks. Strengths: 1. The method proposed in this paper is innovative, and the writing is logical. 2. This paper conducts an in-depth theoretical analysis and sufficient experimental discussion and analysis. Weaknesses: 1. In Section 4.3, the reason for choosing the 5-NN classifier needs to be explained. In addition, what is the purpose of designing a comparative experiment between PCA pre-processing and no pre-processing? 2. This paper discusses several application scenarios of the proposed diversity evaluation, and we can further explore whether there are more scenarios worth exploring, such as measuring the representation ability of graph contrastive learning models. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Table 1, what is the reason for the apparent performance difference between MAGAREA with the piecewise linear and MAGAREA using quantile regression? 2. In section 4.3, what is the basis for the author to use the 5-NN classifier to predict the embedding model? 3. In section 4.3, what is the purpose of the author's comparison of the results of PCA pre-processing and no pre-processing? How do the authors view that these two methods have both positive and negative effects on different models? 4. In Section 4.5, the authors discussed magnitude evaluation graph generative models. Can the diversity evaluation proposed in this paper also be used to evaluate the representation learning ability of graph contrastive learning models? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. 
MAGDIFF is a reference-free measure of intrinsic diversity, but does not measure fidelity. 2. The paper does not investigate whether embedding-based similarities are outperformed by alternative task- or domain-specific similarities. Instead, the evaluation relies on the utility of embedding models and assumes that latent spaces encode useful/realistic relationships between samples. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and look forward to discussing the questions raised, as well as clarifying them in the text during revisions. ___ > In section 4.3, what is the basis for the author to use the 5-NN classifier to predict the embedding model? The 5-NN classifier is chosen as a very simple model, which somewhat surprisingly already manages to separate the embedding spaces of different models extremely well based on their difference in intrinsic diversity. Here, 5 neighbours are chosen as a default as implemented in the $\texttt{sklearn}$ package without any hyperparameter tuning. Motivated by your question, we conducted further sensitivity analysis to assess how much the reported results change under different parameter choices. Table 2 in the attached PDF then shows that classification accuracy hardly varies across different choices of $k$ neighbours. For revisions, we will add an explanation for the choice of model in the text. ___ > In section 4.3, what is the purpose of the author's comparison of the results of PCA pre-processing and no pre-processing? How do the authors view that these two methods have both positive and negative effects on different models? PCA preprocessing is applied under the assumption that the varying dimensionality between embedding spaces could be the main driver of the observed disparities in diversity. We thus wanted to test if dimensionality reduction to the same number of 384 dimensions could account for the differences in diversity and reduce the classifiers’ performance scores. However, our results show that this does not change the experimental results. There thus remain inherent differences in each model's signature as distinguished via diversity. We will include further explanations and motivations for these model choices in the revisions. 
**We also want to clarify that our method does not depend on such a specific pre-processing; we merely used it here to make the task of detecting a specific model harder.** ___ > In Table 1, what is the reason for the apparent performance difference between MAGAREA with the piecewise linear and MAGAREA using quantile regression? Thank you for pointing this out! We will clarify this in revisions: The input data to these models, that is the values of MagArea for different curvature values, is plotted in Figure S.5. Exploratory analysis of this relationship then informed our model choice. The piecewise linear model better fits the trend in Figure S.5, which is why it outperforms the quadratic relationship modelled via quantile regression. Both models were included to offer multiple proposals on how to interpolate between the MagArea scores for surfaces of negative and positive curvature. ___ > In Section 4.5, the authors discussed magnitude evaluation graph generative models. Can the diversity evaluation proposed in this paper also be used to evaluate the representation learning ability of graph contrastive learning models? We appreciate this suggestion and are excited to explore and discuss more scenarios for which our proposed method is of relevance. Evaluating representation learning models is an exciting question in itself that deserves further exploration. In the **context of graph contrastive learning**, we have reasons to believe that our method can be extended to the evaluation of self-supervised representations. As an intrinsic measure of the diversity of a space, magnitude measures the effective size of an embedding space. Conceptually, this can be likened to measuring the effective rank of a representation as computed by [RankMe (Garrido et al. 2023)](https://arxiv.org/pdf/2210.02885) for the evaluation of joint-embedding self-supervised learning. 
That is, we believe that problems such as **dimensional collapse** could also be assessed in an expressive manner by a representation’s multiscale magnitude. Further, it is of interest to explore the role of diversity in a self-supervised context and investigate how diversity measures can be effectively used to improve model performance, potentially via **maximising diversity during training**. We thus believe that using magnitude for evaluating representation models deserves its **own thorough investigation** as well as further theoretical links to learning theory in SSL, which we look forward to conducting in the future.
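As an aside on the 5-NN sensitivity analysis discussed above, the voting scheme can be sketched in plain Python (the two-dimensional feature vectors below are synthetic stand-ins for diversity features, not the paper's data, and this mirrors rather than reproduces the sklearn setup):

```python
# Toy k-nearest-neighbour classifier (majority vote), standing in for the
# sklearn 5-NN used to identify embedding models from diversity features.
# All feature vectors and labels below are synthetic, purely for illustration.
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    """Predict the label of x by majority vote among its k nearest neighbours."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(pt, x)), y)
        for pt, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated synthetic "model signatures".
train_X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.05, 0.15), (0.2, 0.2),
           (0.9, 0.8), (0.8, 0.9), (0.85, 0.75), (0.95, 0.85), (0.8, 0.8)]
train_y = ["model_A"] * 5 + ["model_B"] * 5
```

On separable synthetic data like this, the predicted label is stable across several choices of $k$, consistent with the observation that accuracy hardly varies with the number of neighbours.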
Summary: In this paper, the authors introduce the magnitude of a metric space to evaluate the diversity of the learned latent representation. Extensive experimental results are presented, which to some extent illustrate the effectiveness of the proposed criterion. Strengths: 1. The expression of the paper is easy to follow. 2. The problem being solved is important and could have a good impact on the community. 3. The proposed method has shown good theoretical and empirical performance to some extent, illustrating the effectiveness of the proposed measurement. 4. The experimental results are comprehensive. Weaknesses: 1. The theoretical analysis lacks comparison. 2. More discussion should be conducted over the circumstances in which the proposed algorithm does not perform as well as existing algorithms, to give more understanding of the pros and cons of the proposed criterion. 3. It would be welcome if a case study could be conducted to provide more insight into the proposed criterion. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. More theoretical comparison with existing methods will make the results more convincing. Is the proposed method superior to other counterparts theoretically? 2. Can the proposed criterion be combined with existing methods to provide a more comprehensive description of the learned representation and achieve better performance? Is it compatible with existing methods? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the questions and weaknesses of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The theoretical analysis lacks comparison. > More theoretical comparison with existing methods will make the results more convincing. Is the proposed method superior to other counterparts theoretically? We appreciate the interest in discussing theoretical analyses in comparison to existing algorithms and briefly summarise our results (we will further improve this in our revisions): Section 3.1 and Appendix C.3 discuss the theoretical properties and fundamental axioms of diversity in comparison to existing baseline measures of intrinsic diversity. Further, Appendix C.4 demonstrates which diversity axioms hold for which of these measures. For revisions, we extend this investigation of theoretical properties to a simulation study reported in Figure 1 of the attached PDF. Our magnitude-based diversity score is the **only approach** that fulfils all desired criteria, showing that our method is superior to alternative diversity measures from a theoretical perspective. We will highlight this theoretical discussion in revisions and emphasise the importance of fulfilling fundamental axioms of diversity via an extended simulation study. ___ > More discussion should be conducted over the circumstances in which the proposed algorithm does not perform as well as existing algorithms, to give more understanding of the pros and cons of the proposed criterion. **Throughout our experiments on state-of-the-art diversity evaluation benchmarks, we have not encountered a scenario where our methods perform worse at measuring diversity than existing methods.** In terms of the cons of our method, we noted the main limitations of our approach in Section 3.5: * Our method does not assess fidelity. * Scalability w.r.t. the size of embeddings. We note that practitioners can assess fidelity using specialised scores. Our method can then be interpreted **alongside** fidelity metrics, covering diversity aspects not yet addressed by existing measures. 
Further, scalability was not a limitation in our experiments. Relevant diversity evaluation tasks typically study small graph datasets, evaluate the response of text generation models for specific tasks, or study image embeddings in terms of meaningful subsets (like measuring intra-class diversity). However, in case scalability becomes an issue, we can work with efficient approximations of our score based on subsets. We believe that the advantages of our method by far outweigh its limitations. To sum up, the main benefits of our methods are: * Their agreement with fundamental axioms of diversity. * Their expressivity. * Their multiscale nature. * Their flexibility (w.r.t. choosing a dissimilarity). * Their connection to geometric properties. We thus believe that magnitude provides a hitherto-unaddressed perspective on diversity that complements additional evaluation measures. For revisions, we will include a broader discussion on the pros and cons. ___ > It would be welcome if a case study could be conducted to provide more insight into the proposed criterion. We agree it is important to build this intuition! For revisions, we provided an **extended case study in the attached PDF**, showing a comparison of our proposed magnitude-based approach and alternative measures. To link our investigation to the theoretical axioms of diversity, we examine the so-called twin property. This requirement asserts that diversity should not change when including duplicate observations in a given dataset. When evaluating generative models, diversity measures that satisfy the twin property are advantageous because they penalise models that merely repeat existing observations, as opposed to providing genuinely “novel” outputs. Results of this case study are reported in Figure 1 of the attached PDF, showing how the popular baseline measures **all fail to fulfil** the twin property, instead exhibiting highly inconsistent behaviour. 
Our proposed method, meanwhile, is the **only** diversity measure that respects the twin property and remains consistent, demonstrating another one of its practical advantages. To gain more insights, we further compare diversity metrics on the examples from Figure 1 and report them in Table 1 of the attached PDF. The results show that two of the baseline measures fail to capture notable differences in diversity on simple simulations, as they do not detect that the random pattern in $X_1$ is more diverse than the clustered pattern in $X_2$. ___ > Can the proposed criterion be combined with existing methods to provide a more comprehensive description of the learned representation and achieve better performance? Is it compatible with existing methods? In general, our method is compatible with existing scores. Our proposed diversity measure can be used to evaluate and improve the **performance of generative models**. This allows us to choose a generative model that best captures the diversity of a reference (achieving low MagDiff values) while simultaneously retaining high fidelity scores. Section 4.5 explores this scenario for evaluating graph generative models; we will focus on combinations of existing scores with our score in future work, thanks for the suggestion! Our measure is very versatile and can be extended to other settings to improve model performance via **incorporating it into the model training**, for instance 1) for optimisation purposes, as part of the loss function; 2) as an early stopping criterion for monitoring the diversity of the learnt representations; or 3) by improving how well the learnt representation preserves the ground-truth diversity of a known reference. We would be excited to further explore these scenarios in future work. --- Rebuttal Comment 1.1: Title: Comment Comment: Thanks for the detailed responses, especially the additional discussion, analysis and the example, which resolve my concerns and questions.
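As a concrete companion to this case study, the snippet below illustrates one measure that violates the twin property, using Warnock's closed-form expression for the L2-star discrepancy (discussed earlier in the thread). This is our own sketch with toy 1-D point sets, not code or data from the paper:

```python
# Warnock's closed-form expression for the (squared) L2-star discrepancy on
# [0,1]^d. Duplicating an existing point changes its value, so this classical
# discrepancy measure does NOT satisfy the twin property.
import math

def l2_star_discrepancy_sq(points):
    """Squared L2-star discrepancy of a point set in [0,1]^d (Warnock's formula)."""
    n, d = len(points), len(points[0])
    term1 = 3.0 ** (-d)
    term2 = 2.0 ** (1 - d) / n * sum(
        math.prod(1 - x * x for x in p) for p in points
    )
    term3 = sum(
        math.prod(min(1 - a, 1 - b) for a, b in zip(p, q))
        for p in points for q in points
    ) / n ** 2
    return term1 - term2 + term3
```

In this toy example, the set {0.2, 0.8} has squared discrepancy 7/300 ≈ 0.0233, while duplicating the point at 0.2 changes it to 0.04; a measure satisfying the twin property would be unaffected by the duplicate.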
Rebuttal 1: Rebuttal: We thank the reviewers for their unanimous support of our work and contributions, as well as for their interesting questions and suggestions for further clarifications. We are confident that we can implement the required changes mentioned below by making **small amendments** to our manuscript. Moreover, to complement these changes, we address reviewers’ specific questions in our responses to their individual reviews, and we are happy to provide further clarifications in the discussion phase. Pdf: /pdf/864801548ded9a6ba8cd3c44a6e89deb1eb490ec.pdf
NeurIPS_2024_submissions_huggingface
2024
Selective Generation for Controllable Language Models
Accept (spotlight)
Summary: This paper introduces Neuro-Selective Entailing-Generation (NSEGen), a novel approach to enhance the trustworthiness of generative language models. The method extends selective classification to language generation tasks, utilizing textual entailment to measure semantic correctness between generated and true answers. NSEGen employs a semi-supervised method that leverages both labeled and unlabeled data by learning an entailment set, addressing the challenge of expensive entailment labeling. The authors introduce neuro-selection functions to optimize feature selection for minimizing false discovery rate and provide theoretical guarantees on false discovery rate control. The approach aims to control the entailment-based false discovery rate while maximizing selection efficiency. Strengths: - The paper introduces a new framework for improving GLM trustworthiness that addresses limitations of existing methods. This is particularly important because it tackles the metric misalignment issue in GLMs, where conventional metrics like exact match fail to capture semantic correctness. By introducing selective generation and leveraging textual entailment, the authors provide a more nuanced and accurate way to evaluate language model outputs. - The approach is supported by thorough theoretical analysis and guarantees. The authors provide a detailed proof for the correctness guarantee of their algorithm (Theorem 1) and establish conditions for achieving monotonicity in precision (Lemma 3). Weaknesses: - The method introduces several new components and parameters, which may make it challenging to implement and tune. For instance, the algorithm involves learning an entailment set, designing neuro-selection functions, and tuning multiple parameters. This complexity could make it difficult for practitioners to adopt the method. - Experiments are conducted only on Natural Question with two models (GPT-3.5 and Alpaca-7B). 
While these experiments do demonstrate the method's effectiveness, they leave open questions about its generalizability, especially since the method claims to adapt uncertainty learning methods to generation tasks. It would be valuable to see how the method performs on a wider range of language generation tasks (e.g., summarization, translation, or open-ended text generation). Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The method introduces several new components and parameters, which may make it challenging to implement and tune. For instance, the algorithm involves learning an entailment set, designing neuro-selection functions, and tuning multiple parameters. This complexity could make it difficult for practitioners to adopt the method. - We thank you for valuable comments on practical use of our algorithm. Even though the algorithm may seem complex at first glance, we want to highlight that there are only a few user-specified parameters that the user should take care of to run our algorithm, all of which have intuitive interpretations. - Here, we provide (1) a detailed description of how to choose the user-specified parameters and (2) additional guidelines for the practical use of our method. First, we would like to start with (2). The following is a list of configurations that practitioners need to choose before running our algorithm, which may vary based on the language generation problem they aim to solve. Since we assume the configurations are easy for practitioners to choose, we distinguish them from what we call “user-specified parameters”. * Generator $G$: Target GLM that we aim to control the rate of hallucinations for the specific language generation task * $Z_E, Z_U$: Calibration data with and without labels on textual entailment * $f_{M_1}, f_{M_2}$: Language scoring functions to quantify the uncertainty of the generated sequence on a given problem * $f_E$: Entailment classifier to predict the textual entailment - The remainder provides a detailed description of choosing the user-specified parameters: $\epsilon_S, \epsilon_E$, and $\delta(=\delta_E+\delta_S)$. * For $\epsilon_E$, we newly propose an automatic $\epsilon_E$-searching algorithm for convenience. We hope to add this algorithm in our final manuscript, since we believe this would make our algorithm more accessible to users. 
* $\epsilon_E$: The target level of mislabeling error made in the pseudo-labeling process. Unlike $\epsilon_S$, which specifies the target rate of hallucinations (FDR-E), we thought that letting the user choose $\epsilon_E$ may not be easy. Hence, we propose an $\epsilon_E$-searching algorithm which automatically chooses $\epsilon_E$ among possible candidates. Specifically, we find the smallest $\epsilon_E$ that returns a non-vacuous entailment set among candidates. If a detailed algorithmic description or a modified version of the proof is needed, we will additionally provide them via an anonymized file. * The following are guidelines for choosing the remaining user-specified parameters. Although they may not seem straightforward to choose, they have intuitive interpretations that make them easy to specify. * $\epsilon_S$: As mentioned above, $\epsilon_S$ is the target rate of hallucinations (FDR-E) chosen by the user, which is straightforward. Based on the type of language generation problem or the user’s preference between the rate of hallucinations and the number of abstained answers, they can select $\epsilon_S$. * $\delta ( = \delta_S + \delta_E )$: $\delta$ is related to the confidence of the guarantee on FDR-E. Specifically, irrespective of which calibration set $(Z_E, Z_U)$ we use, our algorithm controls the FDR-E with probability $1 - \delta$ (Theorem 1). This can be chosen based on the degree of confidence that the user thinks appropriate (e.g., 90% confidence). > Experiments are conducted only on Natural Question with two models (GPT-3.5 and Alpaca-7B). While these experiments do demonstrate the method's effectiveness, they leave open questions about its generalizability, especially since the method claims to adapt uncertainty learning methods to generation tasks. It would be valuable to see how the method performs on a wider range of language generation tasks (e.g., summarization, translation, or open-ended text generation). 
- Thanks for raising concerns on the generalizability of our method. We think it would be great to verify and further generalize the applicability of our method to a wider range of language generation tasks, and these would be very interesting directions for future work. Although we have conducted an experiment only on the open-ended QA problem, we want to note that our problem and algorithm are designed in a general manner. - Regarding generalizability, one of the technical issues concerns the choice of the entailment classifier. Specifically, the success of our semi-supervised version, a label-efficient version which exploits the unlabeled data set via pseudo-labeling, depends on whether we have access to an accurate entailment set for the given language generation problem. Specifically, as can be seen in Algorithm 1, the accuracy of the entailment set depends on the performance of the textual entailment classifier. Therefore, the applicability of our algorithm depends on the choice of the entailment classifier and its quality on the specific type of language-generation problem we consider. In this paper, we consider the transformer-based textual entailment classifier which is originally trained on the natural language inference (NLI) dataset. Since the NLI dataset consists of pairs of premises and hypotheses in a declarative form of moderate length, the entailment classifier would not perform well on pairs of long sequences. For such cases where the pretrained entailment classifier provides inaccurate predictions, extra calibration data are needed to train an entailment classifier from scratch [R1], or to finetune the base model [R2]. * [R1] Yu Gui, Ying Jin, and Zhimei Ren. “Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees.” ArXiv, 2024. * [R2] Christopher Mohri, et al. “Learning to Reject with a Fixed Predictor: Application to Decontextualization.” ICLR, 2024.
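To illustrate the general shape of threshold-based selective generation, here is a deliberately simplified toy sketch; it uses a crude Hoeffding-style bound and synthetic scores, and is not the NSeGen algorithm, its entailment-set learning, or its PAC guarantee:

```python
# Toy selective generation: answer only when the scoring function exceeds a
# threshold chosen so that an upper confidence bound on the empirical FDR
# (fraction of non-entailed answers among non-abstained ones) stays <= eps.
# Simplified illustration with a Hoeffding-style bound, NOT the paper's method.
import math

def pick_threshold(scores, entailed, eps, delta):
    """Return the smallest threshold whose selected set has a bounded FDR."""
    for tau in sorted(set(scores)):  # smallest tau selects the most answers
        sel = [e for s, e in zip(scores, entailed) if s >= tau]
        if not sel:
            continue
        fdr_hat = sum(1 for e in sel if not e) / len(sel)
        bound = fdr_hat + math.sqrt(math.log(1 / delta) / (2 * len(sel)))
        if bound <= eps:
            return tau
    return None  # no threshold certifies the target FDR: always abstain

def selective_generate(score, tau, answer):
    """Answer if confident enough, otherwise abstain ("IDK")."""
    return answer if tau is not None and score >= tau else "IDK"
```

In practice the bound, the scoring functions $f_{M_1}, f_{M_2}$, and the calibration split would all follow the paper's actual construction; this sketch only conveys the answer-or-abstain mechanics and the trade-off between $\epsilon$ and the number of abstentions.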
Summary: The paper investigates the trustworthiness of generative language models in critical decision-making systems, identifying deficiencies in current uncertainty learning methods, such as selective classification and conformal prediction, which fail to address metric misalignment in GLMs. The authors propose an entailment-based false discovery rate by leveraging logical entailment to define the relationship between true and generated answers. They introduce a supervised selective generation approach using entailment labels to control the FDR and a semi-supervised method to mitigate the high cost of obtaining these labels. This method employs an entailment set for pseudo-labeling and neuro-selection functions to enhance data space learning, reducing FDR. The neuro-selective entailing-generation algorithm is theoretically validated, meeting the PAC guarantee on the desired FDR. NSeGen demonstrates improved efficiency and reliability in achieving a desired FDR level compared to baseline models, highlighting its practical applications. Strengths: The paper provides good grounding of both the problem and the proposed path to the solution. Along this path, the authors highlight the deficiencies both of current approaches and of their own chosen approach, and then incorporate fixes that make these deficiencies less of a problem. The work on pseudo-labelling and on handling entailment is important, as is showing the theoretical guarantees. Weaknesses: 1. There is a bit of repetitiveness in the Experiment section when describing the GLMs and Datasets. 2. It should be made clear that your approach deals with the generation results from the underlying model. For example, when reading Table 1 earlier I thought you were comparing Alpaca to your method, but later understood that you have different ways of handling the selective generation. 3. 
It would be great to get a practical understanding of how much data would be needed to train a good enough selective algorithm. This is especially important in low-resource scenarios (both languages and application areas), as an algorithm such as yours is highly likely to be used in places where there is not much data. So ablation studies on sizes of the training set would have been great. Technical Quality: 3 Clarity: 4 Questions for Authors: See my weaknesses, especially 3: ablation studies on sizes of the training set would have been great, particularly for low-resource scenarios. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The work covers its limitations well and this was very appreciated. A practical guide for researchers might have been great in terms of knowing how much data is needed to help the pseudo-labelling. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > There is a bit of repetitiveness in the Experiment section when describing the GLMs and Datasets. * Thank you for your suggestion. As you mentioned, there are redundancies in our descriptions of GLMs and datasets. The following is our revised version, concise but detailed, and we will update it in our final manuscript. * Models and Datasets. We use two large language models (LLMs), GPT-3.5-Turbo and Alpaca-7B, for language generation. Specifically, we consider the open-ended QA task, where the model is prompted to generate the answer in a declarative form given a question. To validate our method and its theoretical guarantee on controlling FDR-E, we create a dataset on textual entailment using the Natural Questions (NQ) dataset [43] for each GLM. Based on the transformation method by [44] that converts each question-answer pair in a QA dataset into a declarative form, we manually labeled textual entailment by taking the generated sequence as the premise and the reference answer in declarative form as the hypothesis. Approximately 7.3k (7,374) and 3k (3,000) samples are labeled for GPT-3.5-Turbo and Alpaca-7B, respectively, and both are split into calibration data at an 8:2 ratio. For semi-supervised learning algorithms that exploit unlabeled data, at most 27k and 10k unlabeled samples are used to train a selective generator, varying the size. Besides, semi-supervised learning algorithms use only 75% of the labeled calibration data compared to what is used by supervised methods. > It should be made clear that your approach deals with the generation results from the underlying model. For example, when reading table 1 earlier I thought you were comparing Alpaca to your method, but later got it as you have different ways to deal with the selective generation. * Thank you for your suggestion. 
We agree that there should be a clarification that our approach depends on the generation results of the underlying GLM we consider, even with the same dataset. We will update it in our final manuscript as part of the revised description of GLMs and datasets from the above question. > It would be great to get a practical understanding of how much data would be needed to train a good enough selective algorithm. This is important especially in low resource scenarios (both language and application areas) as the space of application of where such an algorithm such as yours might be used is highly likely in places where there is not much data. So ablation studies on sizes of the training set would have been great. * Thanks for your advice. We conducted the ablation study on the NQ dataset while maintaining the ratio of $|\mathbf{Z}_E|$ and $|\mathbf{Z}_U|$. We simply multiplied the sizes of $|\mathbf{Z}_E|$ and $|\mathbf{Z}_U|$ by 1, 0.7, 0.5, 0.3, and 0.1. To make the initial data sizes of each model similar in ratio, the sizes for GPT-3.5 were set to about (2.4k, 10k) and for Alpaca7B to (5.9k, 2.7k). $\epsilon$ was set to 0.25. Based on the representative NSEGen, the FDR of GPT-3.5 was reported as (0.1561, 0.1478, 0.1146, fail, fail), and the FDR of Alpaca7B as (0.0251, 0.1478, 0.1146, fail, fail), respectively. * The two models show different FDR-Es on the NQ dataset. Considering this fact together with the results: although Alpaca7B's $|\mathbf{Z}_U|$ and $|\mathbf{Z}_E|$ on the NQ dataset are larger, when the size is reduced, Alpaca7B fails to bound the FDR earlier than GPT-3.5; other methods show similar trends. * Thus, for a stable bound, a certain size of $|\mathbf{Z}_E|$ seems to be needed, proportional to the FDR (which depends on the model and data). * We will add these ablation studies as plots in the Appendix of our final manuscript.
Summary: This work presents a method for selective generation from language models for question answering. Their approach functions as a secondary decision function on top of an existing language model, determining whether to accept the language model's generation or to abstain. Their method is based on constructing an entailment set of correct answers to a given question. The entailment set is defined as all answers that imply or textually entail the ground truth answer. They then design their secondary decision function such that it satisfies bounds on the correctness of accepted, generated answers, based on whether the answer is within the constructed entailment set or outside the ground truth entailment set. The authors then demonstrate that their method satisfies correctness guarantees based on conformal prediction, and demonstrate the efficacy of using their entailment set approach to identify correctness empirically through experiments on open-domain question answering (NaturalQuestions) and with two language models (Alpaca-7B and GPT-3.5) Strengths: The goal of this work, selective generation for language models to prevent hallucinations and erroneous outputs, is sound and well motivated. This work provides theoretical guarantees on the efficacy of the selective generation approach, and builds upon insights from prior works on the efficacy of using textual entailment to aid in calibrating QA predictions. Weaknesses: Relying on single-directional textual entailment as a method for determining correctness of an answer is susceptible to accepting generations that produce an untrue fact or hallucination in addition to the correct answer. [1] is a very relevant work (that was not cited) that similarly uses entailment to determine equivalence between two answers; however, in their work they rely on bi-directional entailment to ensure answer equivalence. 
The baselines used in this work lack clear descriptions, and are not in line with current simple and effective methods for selective generation and calibration of language models [1, 2, 3]. [2] and [3] both do not require additional entailment training data, and [1] takes a related approach, using entailment to determine answer equivalence among a set of generated answers. [1] also does not require domain-specific entailment data. [1] Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation. Lorenz Kuhn, Yarin Gal, Sebastian Farquhar. ICLR 2023 [2] Self-consistency improves chain of thought reasoning in language models. X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou. ICLR 2023 [3] Language Models (Mostly) Know What They Know. Saurav Kadavath, Tom Conerly, Amanda Askell, et al. ArXiv 2022 Technical Quality: 3 Clarity: 2 Questions for Authors: The presentation of the experimental details could benefit from additional explanation. Clearly defining what the baselines are and how they are implemented (not just the algorithms in the appendix) would greatly improve readability. This applies to the experimental results and metrics as well. Evaluations could be designed in a way that (1) more clearly describes each method's efficacy and performance on the end task (QA in this case) and (2) allows for direct comparison against other methods also designed for calibration and selective generation. 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors address that they do not report statistical significance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Relying on single-directional textual entailment as a method for determining correctness of an answer is susceptible to accepting generations that produce that untrue fact or hallucinations in addition to the correct answer... - We agree that bi-directional entailment is the best choice in terms of evaluating the "equivalence" of two sequences. However, for the open-ended QA problem, the GLM is prompted to generate a sequence in a free form that often includes extra explanation. Therefore, considering single-directional entailment as a correctness metric may be preferred. - Furthermore, the problem set-up that our algorithm considers is general since it can take any type of entailment $R$ as a correctness metric (Eq. (1)). Taking paraphrase generation as an example, it is appropriate to let $R$ be the bi-directional entailment, and FDR-E is defined as $P\{G(x)\notin E_{\text{true}}(y)\wedge y\notin E_{\text{true}}(G(x))\}$. Then, the implementation of our algorithm is exactly the same as for the open-ended QA problem. - Additionally, we conducted experiments using bi-directional entailment and compared them with the results from single-directional entailment in the open-ended QA problem using proxy labels, which showed similar results. For GPT-3.5, NSEGen can bound FDR-E to 0.1659 (with efficiency 0.9804) when using single-directional entailment, and to 0.1594 (with efficiency 0.9653) using bi-directional entailment. Other baselines showed similar trends. > The baselines used in this work are not in line with current simple and effective methods ... [R1, R2, R3]. - We appreciate the constructive suggestions on related work. The suggested papers [R1-R3] mainly aim to design better uncertainty measures for the generated sequence (i.e., a good “scoring function”). This is a very important issue in the NLP literature, and the efficiency of our algorithm implicitly depends on the given scoring function. 
However, “given” a scoring function, what our algorithm does is learn a selective generator which refuses to generate the sequence based on the scoring function value. Therefore, designing a scoring function is out of our scope. [R1-R3] can be used to define the scoring function, and our algorithm provides a theoretical guarantee on FDR-E, irrespective of the choice of scoring function. > [R2] and [R3] both do not require additional entailment training data and [R1] takes a related approach using entailment to determine answer equivalence among a set of generated answers. [R1] also does not require domain-specific entailment data. - We agree. But we also want to highlight that an extra entailment-labeled training set is necessary to “guarantee” the rate of hallucinations (Theorem 1). - Besides, it is also true that [R1] does not require “domain-specific” entailment data, as the transformer-based entailment classifier that [R1] utilizes is trained on the NLI dataset. While the classifier can be applied to arbitrary pairs of sequences, it underperforms on some language generation problems where the pair comes from a different distribution. For such cases, extra calibration data are needed to train an entailment classifier from scratch [R4], or to finetune the base model [R5]. * [R4] Yu Gui, Ying Jin, and Zhimei Ren. “Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees.” ArXiv, 2024. * [R5] Christopher Mohri, et al. “Learning to Reject with a Fixed Predictor: Application to Decontextualization.” ICLR, 2024. > The baselines used in this work lack clear descriptions… > Clearly defining what the baselines are and how they are implemented ... would greatly improve readability. This applies to the experimental results and metrics as well. - We have clarified the descriptions of the baselines (SG-EM, SG-EL, SG-ES), along with extra baselines (SG-ES-Sup, SG-PL, SG-PFL), for clear comparisons. 
* Supervised Baselines * SG-EM: A supervised method that uses exact match metric * SG-EL: A supervised method that uses semantic correctness encoded in the entailment label instead of exact match * SG-ES-Sup: A method that uses same entailment set algorithm in SG-ES, where $Z_U = \emptyset$ * Semi-Supervised Baselines * SG-PL & SG-PFL: They exploit the unlabeled data by pseudo-labeling entailment based on a threshold. Both algorithms are heuristics, since they do not control the mislabeling error. * Metrics * FDR-E refers to the ratio of whether a generated answer does not entail the true answer for a test sample. * Efficiency refers to the ratio of data selected in the test set. > The presentation of the experimental details could benefit from additional explanation. - We employ two large language models (LLMs), GPT-3.5-Turbo and Alpaca7B. We create a dataset on textual entailment using the Natural Questions (NQ) dataset [43]. Based on the transformation method by [44], we convert the (q, a) pair into a declarative form for each model. Besides, semi-supervised learning algorithms use only 75% of the labeled data compared to what is used by supervised methods. - To control an FDR-E, we use two user-specified parameters $(\epsilon, \delta)$, where we use $(0.25, 0.02)$ unless specified. For our methods (i.e., SG-ES, NSEGen, and SG-ES-Sup), we have four parameters ($\epsilon_S, \epsilon_E, \delta_S, \delta_E$) which are mapped as follows: $\epsilon_S = \epsilon$, $\epsilon_E = 10^{-4}$, $\delta_S = \delta/2, \delta_E = \delta/2$. For other methods without using entailment sets, we use $\epsilon$ and $\delta$ accordingly. > Evaluations could be designed in a way that (1) more clearly describe each method's efficacy and performance in the end task (QA in this case) and (2) allow for direct comparison against other methods also designed for calibration and selective generation. - We have compared methods in attached files (Table 1, 2). 
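As a concrete reading of the two metrics defined above, here is a minimal sketch (hypothetical helper functions, not the paper's code), assuming per-example boolean flags for whether the generator answered and whether the generated answer entails the true answer:

```python
# Minimal sketch of the two reported metrics; `selected` and `entails` are
# hypothetical per-example boolean lists, not the paper's actual data format.
def fdr_e(selected, entails):
    # FDR-E: among selected (answered) examples, the fraction whose generated
    # answer does not entail the true answer.
    n_sel = sum(selected)
    if n_sel == 0:
        return 0.0
    return sum(s and not e for s, e in zip(selected, entails)) / n_sel

def efficiency(selected):
    # Efficiency: the ratio of test examples on which the generator answers
    # (i.e., does not abstain with IDK).
    return sum(selected) / len(selected)

selected = [True, True, False, True]   # IDK on the third example
entails  = [True, False, True, True]
print(fdr_e(selected, entails))   # 1/3: one wrong answer among three selected
print(efficiency(selected))       # 0.75
```

Note that FDR-E is computed only over the selected examples, which is why a conservative generator that mostly abstains can trivially achieve a low FDR-E at the cost of efficiency.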
> The authors address that they do not report statistical significance. - See Figure 1 in attached files. --- Rebuttal 2: Comment: Dear Reviewer abKP, We again appreciate your constructive feedback to improve our submission. As the discussion period ends soon, we'd like to hear more about your opinion on our submission to address your concerns. Thanks! Best, Authors
Summary: The paper looks at a selective generative language system, meaning one which can produce an I-Don't-Know label rather than an answer, and calibrating it such that some guarantees on the precision of the system can be made. The paper expands upon citation [1] on selective generation, to improve the efficiency by not requiring an exact match metric on the answer in order to be able to say whether it is correct or not. This is important for language generation tasks where there are endless possibly valid answers. It does this via the use of entailment measures, i.e. saying whether a candidate question+answer pair has positive/negative or neutral entailment. The paper reports theoretical analysis of when the proposed method can be learnt within given error margins, doing so under reasonable assumptions of the data being IID. This may not truly be the case for, say, actual question-answer datasets, but is a required assumption for the theoretical statements that are presented. The paper also presents empirical results on a question-answer dataset (Natural Questions), comparing against other selective generation methods with exact matching (rather than entailment), and with entailment sets (from labels or inferred). The proposed method which learns the selective threshold is shown to have the best false-discovery-rate (i.e. the complement of precision on the generated answers). Strengths: The requirement of having labels for answer correctness is clearly a large limiting factor in language generation tasks given the large number of possible correct answers. Relaxing this via entailment is therefore well motivated. Thorough theoretical analysis is given, albeit under the IID assumption. Weaknesses: * Say more about what language problems this could and could not be applied to. The key appears to be in being able to have an accurate entailment function. For how many types of language-generation problems is such obtainable? 
* Some data on rates of IDK generation on the test set would be useful to see. * Minor suggestion: an algorithm box showing the steps for the whole proposed approach would be helpful for the reader. * The paper would benefit from an editor / proofreading. It doesn't impact understanding, but there are several small typos (lines 34, 69, 367 as some examples) Technical Quality: 3 Clarity: 3 Questions for Authors: * In figure 2, are the SG-EM and SG-ES systems scoring 0? I'm not familiar with the prior literature, but with this paper being strongly influenced by [1] where SG-EM is from, it seems surprising these 2 baselines are so poor. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * The theoretical analysis relies on the IID assumption, which won't hold on real world language datasets. This is noted by the authors. * Not clear how many language problems this is applicable to. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Say more about what language problems this could and could not be applied to. The key appears to be in being able to have an accurate entailment function. For how many types of language-generation problems are such obtainable? Thanks for raising a nice point. We expect our method can be generalized to a variety of language generation problems, but we currently consider open-ended QA due to the limitation of the entailment classifier that we rely on. * In particular, as you have mentioned, the key point is whether we have access to an accurate entailment function for the given language generation problem. Specifically, as can be seen in Algorithm 1, the accuracy of the entailment function depends on the performance of the entailment classifier. Therefore, the applicability of our algorithm depends on the choice of the entailment classifier and its quality on the specific type of language-generation problem we consider. In this paper, we consider the transformer-based textual entailment classifier which is originally trained on the NLI dataset. Since the NLI dataset consists of pairs of premises and hypotheses in a declarative form of moderate length, the entailment classifier would not perform well on pairs of long sequences (e.g., summarization). For such cases where the pretrained entailment classifier provides inaccurate predictions, extra data are needed to train an entailment classifier from scratch [R1], or to finetune the base model [R2]. * [R1] Yu Gui, Ying Jin, and Zhimei Ren. “Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees.” ArXiv, 2024. * [R2] Christopher Mohri, et al. “Learning to Reject with a Fixed Predictor: Application to Decontextualization.” ICLR, 2024. > Some data on rates of IDK generation on the test set would be useful to see. * Rates of IDK generation correspond to 1 - efficiency, which refers to the ratio of data selected in the test set. 
Efficiency is reported in figures and tables in our response.pdf (and the paper). * If you want some data examples of IDK generation, we sampled three examples from GPT-3.5 for which NSEGen says IDK, in addition to Table 2 of the paper. * Question: who sang there she was walking down the street * Correct Answer: Manfred Mann * Generated Answer: The Beatles sang “There she was walking down the street” * Question: how long is the ferry ride from cape may to lewes * Correct Answer: 80 minutes * Generated Answer: The ferry ride from Cape May to Lewes is approximately 1 hour and 15 minutes. * Question: what is the first book in the hive series * Correct Answer: Institute of Villainous Education * Generated Answer: The first book in the Hive series is “The Hatching” by Ezekiel Boone. > Minor suggestion: an algorithm box showing the steps for the whole proposed approach would be helpful for the reader. * Thank you for your suggestion. We will add the algorithm box in the final manuscript. > The paper would benefit from an editor / proofreading. It doesn't impact understanding, but there are several small typos (lines 34, 69, 367 as some examples) * Thank you for your suggestion. We will proofread the draft and fix typos and errors for better readability. > In figure 2, are the SG-EM and SG-ES systems scoring 0? I'm not familiar with the prior literature, but with this paper being strongly influenced by [1] where SG-EM is from, it seems surprising these 2 baselines are so poor. * The main reason for the small FDR result of SG-EM is that it does not consider the semantic correctness of answers (i.e., it considers an answer correct only if it is exactly the same as the given true answer). Thus, to achieve a desired FDR rate, [1] finds a conservative selective generator (i.e., $\tau$ is close to one, thus mostly saying IDK), resulting in a low FDR. 
The main reason for the small FDR result of SG-ES is the underperforming scoring function, i.e., the scoring function $f_{M_1}$ usually does not assign high scores to correct answers. In this case, Algorithm 3 does not tightly achieve a desired FDR rate; it finds a conservative selective generator (i.e., $\tau$ is close to one, thus mostly saying IDK), resulting in a low FDR (as in the SG-EM case). * The reason why the score is exactly '0' is that we did not report the scores in the figure when the methods cannot be bounded under $\epsilon_S$, for the reasons above. (In this case, $\tau$ is the largest value in the calibration data, which is approximately 1.) * We will add this analysis to the paper and, to avoid such confusion, we will use a table to annotate the cases where the $\epsilon_S$ guarantees are not satisfied. --- Rebuttal Comment 1.1: Comment: BTW, we have a formatting issue in our first answer. Our response starts from "Thanks for raising..." in the question quote. Please consider this when you read our response. Thanks!
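The conservative behaviour discussed in this thread can be illustrated with a simplified, hypothetical threshold search (a sketch, not the paper's Algorithm 3, which additionally provides a high-probability correction): pick the most permissive threshold whose empirical FDR on calibration data stays below epsilon, and fall back to a near-maximal threshold (mostly IDK) when no threshold qualifies.

```python
# Hypothetical sketch: given calibration pairs (score, correct), pick the
# smallest tau whose empirical FDR among examples with score >= tau is
# at most epsilon. Purely illustrative; no confidence correction applied.
def pick_threshold(scores, correct, epsilon):
    for tau in sorted(set(scores)):  # try permissive thresholds first
        selected = [(s, c) for s, c in zip(scores, correct) if s >= tau]
        if not selected:
            break
        fdr = sum(1 for _, c in selected if not c) / len(selected)
        if fdr <= epsilon:
            return tau
    return max(scores)  # fall back to the largest score: mostly abstain

scores  = [0.2, 0.4, 0.6, 0.8, 0.9]
correct = [False, True, False, True, True]
tau = pick_threshold(scores, correct, epsilon=0.25)  # 0.4 on this toy data
```

When the scoring function separates correct from incorrect answers poorly, only thresholds near the top of the score range (or none at all) satisfy the constraint, reproducing the mostly-IDK behaviour described above.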
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable feedback and constructive comments. In this global response, we delineate the structure of our responses. * We provide individual responses to the questions of each reviewer. * We provide a PDF including the requested experiment results. * We also suggest an improved method in response to a request; we can provide a proof (a slight variation of the original proof) of the correctness of the improved method if required. Pdf: /pdf/8e4541516b07bdad77c61674c0506a90ce697394.pdf
NeurIPS_2024_submissions_huggingface
2024
Learning to Understand: Identifying Interactions via the Möbius Transform
Accept (poster)
Summary: The paper presents a way to compute high-order interactions using the Möbius transform. Strengths: The paper shows some interesting results on the sampling requirements for recovering the interactions among features, specifically tailored to explainability with boolean functions (i.e., the explanation game) Weaknesses: 1. The paper is difficult to follow and does not have a clear storyline; most importantly, it is not clear how the significant interactions can be recovered at the end after obtaining the required samples 2. Aside from the clarification, though having an innovative approach and some interesting results, the method seems to have little applicability in real-world applications; one reason is one of the main assumptions of the method, namely having only K nonzero interactions (= K nonzero Möbius coefficients). For explainability, as defined, for instance, in SHAP, it is typically the case that we have many nonzero interactions, since it is rare that the interactions among a feature subset go to zero (given the way we compute the interactions). This implies that in most cases we have a very large K of nonzero interactions, and this would make the sparse Möbius transform impractical. 3. That being said, it is not clear how exactly the interactions are obtained; as the authors stated in the paper, the naive computation of all Möbius coefficients involves exponentially many parameters; given the samples, what interactions should be included as significant? 4. Last but not least, in explainability we usually seek the most influential features and the most important interactions (rather than all feature importances or significant interactions); the paper seems to merely focus on computing all the significant interactions rather than the most important ones, and in explainability this is not essentially what is sought and seems impractical due to the many nonzero interactions. 
Minor: Some notations need more explanation as they are borrowed from other disciplines (as the authors also stated in the Introduction, the paper is multidisciplinary); for instance, the functional subsampling and aliasing that are the backbone of the sampling strategies require more explanation for the average reader of the paper, IMHO. Technical Quality: 2 Clarity: 1 Questions for Authors: How can the results of the paper be generalized to non-boolean functions? This would generalize the results from detecting interactions for explainability to the general case of interaction detection for any arbitrary dataset. Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: The assumption of having only K non-zero interactions, while in practice K is typically large for explainability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We believe that some aspects of this review may be based on misunderstandings, which may have led to some false or misleading statements in your review. In this rebuttal we will provide additional information and clarification that will hopefully resolve your concerns, and help underscore the significance of this work. **Points 1 and 3**: In your first point you said: "it is not clear how the significant interactions could be recovered at the end after obtaining the required samples". You also asked in your third point "That being said, it is not clear how exactly the interactions are obtained; as the authors stated in the paper, the naive computation of all Mobius coefficients includes exponentially many parameters; given the samples, what interactions should be included as significant?" A: The purpose of this manuscript is precisely to show how a small number of the interactions (which correspond to Möbius coefficients) can be recovered. A Möbius coefficient corresponds to an interaction because it represents the joint effect of a given set of inputs that occurs only when all of those inputs are active. Theorem 1 shows that our SMT algorithm recovers the $K$ non-zero Möbius coefficients. In this noise-free context, "significant" means non-zero. Theorem 2 considers the setting where we have $K$ large non-zero Möbius coefficients but there is also noise, which corresponds to many relatively small interactions between inputs. In this context, the SMT algorithm recovers the $K$ large interactions, which are the ones we call "significant". **Point 2**: You have also made the following claim. 
"Aside from the clarification, though having an innovative approach and some interesting results, the method seems to have little applicability in real-world applications; one reason for doing so is one of the main assumptions of the method of having only K nonzero interactions (= K nonzero Mobius coefficient). For explainability, as defined, for instance, in SHAP, it is typically the case that we have many nonzero interactions since it is rare that the interactions among a feature subset will go to zero (given the way we compute the interactions). This implies that in most cases, we have a very large K of nonzero interactions, and this would make the sparse Mobius transformation impractical." A: This is an important distinction. You are correct in your statement that in nearly all practical settings, there will be a large number of *nonzero* interactions (or Mobius coefficients). However, what has been discovered by ourselves and many others is that a relatively small number of Mobius coefficients are needed to faithfully represent the functions learned by well-trained machine learning algorithms. For reference on a diverse set of machine learning algorithms and applications, we refer the review to Figures 2 and 6 in our paper, Figure 3 in [1], and Figure 3 in [2]. Furthermore, as shown Figure 2 in our paper and Figures 4, 7-9 in [1], these significant interactions (high magnitude Mobius coefficients) tend to be low-order. Under the interpretation that insignificant interactions are treated as noise (an interpretation also made by [1] and [2]), we provided in this work a noise-robust algorithm for extracting the K significant interactions up to order t, with the precise statement of this result in Theorem 5.2. [1] Ren et al. Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in DNNs. ICLR, 2024. [2] Li et al. Does a Neural Network Really Encode Symbolic Concepts? ICML, 2023. 
**Point 4**: "Last but not least, in explainability, we usually seek the most influential features and the most important interactions (rather than all feature importance or significant interactions); the paper seems to merely focus on computing all the significant interactions rather than the most important ones, and in explainability, this is not essentially sought and seems impractical due to many nonzero interactions." A: We could not make sense of this comment. To be clear, this paper aims to accomplish exactly what you state is the essential goal of explainability. Please feel free to clarify. **Note on Clarity and Narrative**: You have mentioned that you felt that this work was "difficult to follow and does not have a clear storyline". This comment was rather surprising to us, as we have received contradictory feedback from others (including both of the other reviews). We believe the paper is organized in such a way as to naturally explain the different parts of the algorithm, and we have included many figures to aid in explanation. If you have any concrete suggestions on how to improve clarity and the narrative, we welcome them. **Extending beyond binary functions**: The Möbius transform can certainly be extended beyond binary inputs, but significant work remains to extend this algorithm to the $q$-ary case. Critically, we have used group testing designs and results from group testing, and results in non-binary group testing are very limited. In addition, it is unclear how one would extend other core parts of the algorithm, such as the sampling, to the $q$-ary case for $q > 2$. --- Rebuttal Comment 1.1: Title: raise score Comment: I appreciate the responses of the authors to all reviewers and adjust my score accordingly. Just to elaborate more on point 4: the proposed method finds all K significant interactions, which is very interesting since the search space for finding such interactions is exponential. 
However, the limitation of the method is when K is large (which is typical of the way we define explanation games in explainability), and many of them are probably not of use since we mainly seek the most significant interactions for explanations (e.g., k most significant interactions where k <<<< K). --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your reply. We wanted to ask: *What are the issues that still keep this paper from being a clear accept*? It would be great if we could resolve these issues and have a unanimous opinion on this paper. **Regarding Point 4**: Thank you for your clarification. As we now understand it, we believe we have a solution to this. In fact, this solution is *already present* in the paper at lines 516-521. The core idea is that we include a post-processing step that takes the $K$ coefficients, and further sparsifies them to our desired level. This is what we have done in Fig. 6 to produce the results for SMT. Thus you can extract a much smaller number $k << K$ of coefficients using the following idea in practice: 1. Choose $b$ (which serves as a proxy for $K$) and run the full SMT algorithm to get $\hat{F}$. $b$ should be set such that the computational budget allows. 2. Apply regression and LASSO steps to further process the transform $\hat{F}$ to reduce the number of coefficients. Empirically, we observe that as we increase the amount of regularization in the LASSO, some of the coefficients get set to zero, and their mass is moved to other coefficients. *If this is the main issue you still have, we could include this idea and discussion more prominently in the manuscript*. For your convenience, we have included the relevant appendix excerpt below --- We run SMT to obtain a sparse Möbius representation $\hat{F}$ with support $\mathrm{supp}(\hat{F})$. 
Then, we fine-tune the values of the coefficients by solving the following regression problem over a uniformly sampled set of points $\mathcal{D} \subseteq \mathbb{Z}\_2^n$: \begin{align*} & \min\_{\hat{f}, \boldsymbol{\alpha}} \; \sum\_{\mathbf{m} \in \mathcal{D}} (\hat{f}(\mathbf{m}) - f(\mathbf{m}))^2 \\\\ & \\;\\; \text{s.t.} \quad \hat{f}(\mathbf{m}) = \sum\_{\mathbf{k} \leq \mathbf{m}, \mathbf{k} \in \mathrm{supp}(\hat{F})} \alpha_{\mathbf{k}}, \forall \mathbf{m}. \end{align*} To measure the faithfulness captured by sparse approximations, we modify the regression problem by adding an $\ell_1$ penalty on the values of the Möbius coefficients. Then, we vary the penalty coefficient $\lambda$ to obtain different levels of sparsity: \begin{align*} & \min\_{\hat{f}, \boldsymbol{\alpha}} \; \sum\_{\mathbf{m} \in \mathcal{D}} (\hat{f}(\mathbf{m}) - f(\mathbf{m}))^2 + \lambda \sum\_{\mathbf{k} \in \mathrm{supp}(\hat{F})} |\alpha_{\mathbf{k}}|\\\\ & \\;\\; \text{s.t.} \quad \hat{f}(\mathbf{m}) = \sum_{\mathbf{k} \leq \mathbf{m}, \mathbf{k} \in \mathrm{supp}(\hat{F})} \alpha_{\mathbf{k}}, \forall \mathbf{m}. \end{align*}
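The refit-then-sparsify recipe in the excerpt above can be sketched in a few lines. This is an editor's minimal illustration, not the authors' implementation: the function names and the toy target function are assumptions, and the $\ell_1$-penalized variant simply adds `lam * sum(|alpha_k|)` to the same objective.

```python
import numpy as np

def subset_leq(k, m):
    """k <= m in the subset partial order: every set bit of k is also set in m."""
    return (k & ~m) == 0

def refit_coefficients(f, n, support, n_samples=64, seed=0):
    """Least-squares refit of Mobius coefficients on a recovered support.

    Solves min_alpha sum_{m in D} (f_hat(m) - f(m))^2 with
    f_hat(m) = sum_{k <= m, k in support} alpha_k, over a uniformly
    sampled set of bitmasks D, mirroring the regression step above.
    """
    rng = np.random.default_rng(seed)
    D = rng.integers(0, 2 ** n, size=n_samples)
    A = np.array([[1.0 if subset_leq(k, m) else 0.0 for k in support] for m in D])
    y = np.array([f(m) for m in D])
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha

# Toy function with exactly two nonzero Mobius coefficients.
true_F = {0b01: 2.0, 0b11: -1.0}
f = lambda m: sum(v for k, v in true_F.items() if subset_leq(k, m))
alpha = refit_coefficients(f, n=4, support=[0b01, 0b11])
```

Adding an $\ell_1$ penalty to the residual and re-solving (e.g., with coordinate descent) yields the sparsity-vs-faithfulness trade-off described above: as the penalty grows, some entries of `alpha` shrink to zero.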
Summary: This study proposes a new efficient method to compute the Möbius Transform. To understand the behavior of large complex machine learning models, various studies have used game theory concepts such as the Shapley value to measure the impact of input variables on the model's outputs. While Shapley values only focus on individual impacts of input variables, the Möbius Transform can score any subset of input variables, which can take into account higher-order interactions between input variables. One of the fundamental challenges of computing the Möbius Transform is its exponential computational cost (i.e., requiring $2^n$ inferences with $n$ input variables). This study reduces this cost; with a sparsity assumption, the computation runs with $O(n)$ samples, and with a low-degree assumption, it further reduces to $O(\log(n))$. The proposed method (Sparse Möbius Transformer; SMT) exploits a connection between the Möbius Transform in the original $n$-dimensional space (say, F-space) and that in a lower $b$-dimensional space (say, U-space). If the *aliasing set* is a singleton, then the Möbius Transform in F-space can be recovered from that in U-space. The authors proposed methods to detect singletons and turn multitons into singletons using graph peeling. The experimental results justify the efficiency and the high faithfulness recovered by SMT. Strengths: - This study offers an efficient computational method for the Möbius Transform, which encompasses widely used game-theoretic metrics such as Shapley values and Banzhaf values, and the acceleration over baselines is significant. - The proposed method is theoretically supported. - While the paper is densely theoretical, the paper writing is generally easy to follow. - While not strongly practical and large scale, the assumptions and claims are justified by numerical experiments at key points. 
Weaknesses: **Major comments** - The proposed algorithm works on a $b$-dimensional space, but I was not able to find how $b$ is set in the experiments, nor any ablation experiments on this hyper-parameter. While the theory suggests $b = O(\log(K))$, this is not very informative due to the missing constant factor. - The baselines in Fig. 6 are all first-order metrics. In machine learning and computer vision, second-order metrics (e.g., Interactions) are also popular. Why were those methods not included in the experiments? **Minor comments** - It'd be better to clarify several notations, such as $|\mathbf{k}|$ and $[K]$. - To better convince readers of the connection between the Shapley value and the Möbius basis, it'd be better to provide a derivation of the left equality of Eq. (2). - In the main text, $C$, the number of samplings (?), appears somewhat abruptly at line 193. Technical Quality: 3 Clarity: 3 Questions for Authors: I'd like the Authors to answer the weaknesses raised above. Plus, - At Limitations, the Authors mentioned that "we may be more interested in taking a sparse projection onto a subset of low-order terms", but I'm not fully following this. Can't we resolve it simply by zeroing out the coefficients of high-order terms (just as "a low-pass filter")? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are adequately presented. For the potential improvements, see Weakness and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
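To make the object under review concrete, here is a brute-force Möbius transform for tiny $n$ — an editor's reference sketch only (the whole point of SMT is to avoid this exponential computation). Bitmask $m$ encodes a subset of $\{1,\dots,n\}$, and $|\mathbf{k}|$ is its number of set bits.

```python
import numpy as np

def mobius_transform(f_vals):
    """Brute-force Mobius transform on {0,1}^n via inclusion-exclusion:
    F(k) = sum_{m <= k} (-1)^(|k| - |m|) f(m), which inverts
    f(m) = sum_{k <= m} F(k). Cost is exponential: len(f_vals) = 2^n.
    """
    N = len(f_vals)
    F = np.zeros(N)
    for k in range(N):
        for m in range(N):
            if (m & ~k) == 0:  # m is a subset of k
                sign = (-1) ** bin(k ^ m).count("1")  # = (-1)^(|k|-|m|)
                F[k] += sign * f_vals[m]
    return F

# A 3-variable function with only two nonzero Mobius coefficients.
true_F = {0b001: 1.0, 0b110: -0.5}
f_vals = [sum(v for k, v in true_F.items() if (k & ~m) == 0) for m in range(8)]
F = mobius_transform(np.array(f_vals))
```

Running the transform on `f_vals` recovers exactly the two planted coefficients, illustrating the sparsity that SMT exploits without ever enumerating all $2^n$ subsets.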
Rebuttal 1: Rebuttal: 1. $b = O(\log(K))$. In the appendix, we consider the constant $\eta = \frac{2^b}{K}$, which we refer to as the inverse load factor. Theoretically, if we carry through the density evolution analysis, it is possible to find the minimal $\eta$ to ensure convergence of the message passing asymptotically. This is a relatively small number. For instance, $\eta = 0.4073$ suffices for $C=3$ (see [1], Table 1 for a full analysis). In practice, for the real, finite-regime experiments we find $b = \mathrm{ceil}(\log(K))$ typically works well, though it is possible to tune the parameter and adjust $b$, which can be helpful, especially in noisy settings. If we don't know $K$ a priori, we can increase $b$ until the algorithm works. (In fact, it is possible to re-use data from smaller values of $b$ when re-running with larger $b$ to expedite this process.) [1] Li, Xiao, Joseph K. Bradley, Sameer Pawar, and Kannan Ramchandran. "SPRIGHT: A fast and robust framework for sparse Walsh-Hadamard transform." arXiv preprint arXiv:1508.06336 (2015). 2. Fig. 5 focuses on comparisons with second-order methods, and reveals the massive computational advantage of our method as compared to those baselines. In Fig. 6, we have chosen to focus on first-order comparisons because these experiments are run at a reasonably large scale, where running higher-order methods begins to get computationally difficult. It is likely that higher-order methods would perform similarly to our method, just with an increased computational burden. Question about projection: In this case truncation is not the same thing as a sparse projection onto low-order terms, though this is not obvious. The reason is that the Mobius transform is a representation in terms of an AND function basis. In fact, the Shapley value and other interaction indices are usually different projections into the Mobius AND basis. This is different from a Fourier transform, which uses an XOR basis. 
In the case of the Fourier transform, due to orthonormality, your assertion is correct, and simply "zeroing-out" higher-order terms is a projection. --- Rebuttal 2: Title: Response to rebuttal Comment: Thank you for the clarification. The authors addressed my concerns properly. About 2), while Figure 5 compares second-order methods, it does not show how faithfulness increases with the computational budget. Practically, we may be interested in the cost of reasonable reconstruction (e.g., 90%). I believe this kind of analysis and a second-order version of Fig. 6 (with a small dataset) will improve the completeness of this work (just a comment, not asking for an immediate response). Overall, I'll maintain my score for now. There seems to be a correction/improvement in the proofs pointed out by Reviewer jcxh, so my final score can be affected by the consequences of the discussion between the Authors and Reviewer jcxh. --- Rebuttal Comment 2.1: Comment: Thank you for your prompt response. Our discussions with reviewer jcxh are now concluded. If you have any further comments, please let us know. Regarding your comments about comparison, we will keep this in mind as we prepare the final manuscript if this paper is accepted.
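The rule of thumb from point 1 of the rebuttal above — pick $b$ from $K$ via the inverse load factor $\eta = 2^b / K$ — fits in one line. This is a hypothetical helper written for illustration, not code from the paper:

```python
import math

def choose_b(K, eta=1.0):
    """Smallest b with 2^b >= eta * K. eta = 1 recovers b = ceil(log2 K);
    the asymptotic analysis cited above allows eta as small as ~0.4073
    for C = 3 subsampling groups. The max() guards the tiny-K corner case.
    """
    return math.ceil(math.log2(max(2.0, eta * K)))
```

In practice, per the rebuttal, one would start from this value and increase `b` (reusing samples where possible) until the peeling decoder succeeds.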
Summary: This paper proposes an algorithm to efficiently compute the Mobius transform under the assumption that the function to be transformed is composed of sparse and low-degree interaction terms. The paper also provides an asymptotic analysis of the sample complexity, time complexity, and accuracy of the algorithm. Strengths: 1. This paper focuses on an important problem in Explainable AI, i.e., how to efficiently compute the Mobius transform (or equivalently, the Harsanyi dividend [cite1]) of a function to obtain the interaction between different input variables. The computation requires inferences on $2^n$ different subsets of input variables, which is intractable in general. The paper proposes an algorithm that leverages the sparse and low-degree structure of the interactions observed in experiments, thus reducing the computational cost. 2. The algorithm is presented in full detail, with straightforward examples to illustrate each step. Weaknesses: I have been studying explainable AI (XAI) for years, with a special interest in the Shapley value, the Shapley interaction index, and the Mobius transform. I am impressed by this work and greatly appreciate the progress in the efficient computation of the Mobius transform. The sparsity and low-degree structure of interactions and the proposed algorithm can be of great interest to the XAI community, because they help distill a neural network’s decision-making logic into sparse symbolic interactions. However, I find that the claim and corresponding proof on the singleton detection algorithm are incorrect, and the literature review is insufficient. Nevertheless, **I would like to significantly raise my score to acceptance if these concerns are appropriately addressed.** 1. The most important part of the proposed method is the singleton detection algorithm (in Section 4 and Appendix C.7). 
However, I find that the claim “singleton identification and detection can be performed without error in the noiseless setting” and the corresponding proof (in Appendix C.7.1) are incorrect. To be specific, the proposed method uses $U(j)$ to efficiently compute the Mobius transform if $U(j)$ passes the singleton detection, but I find that the singleton detection algorithm in the paper cannot guarantee that $U(j)$ *is indeed a singleton*, although the paper claims that it can. To this end, we can easily find counterexamples on which the singleton detection algorithm fails. In summary, we can prove that the algorithm will fail under the following conditions. Given the number of input variables $n$ and the subsampling parameter $b<n$, the interactions satisfy (1) $\forall k\in \\{0,1\\}^n$ s.t. the first $b$ elements of $k$ are all 1’s, $F(k)=-\delta$ if $|k|=n$, $F(k)=\frac{\delta}{n-1-b}$ if $|k|=n-1$, and $F(k)=0$ if $|k|<n-1$, where $\delta \in \mathbb{R}$ is an *arbitrary* scalar; (2) $\forall k\in \\{0,1\\}^n$ s.t. at least one of the first $b$ elements of $k$ is 0, the value of $F(k)$ can be *arbitrary*. **Counterexample:** Let us consider $n=6$ input variables and set $b=2$ for subsampling, which follows the setting in Figure 3. Suppose there are five non-zero interactions in total: $k_1=110111, k_2=111011, k_3=111101, k_4=111110, k_5=111111$, and $F(k_1)= F(k_2)= F(k_3)= F(k_4)=1/3$, $F(k_5)=-1$. In the $c$-th subsampling run, let the subsampling matrix $H_c=[[1,0,0,0,0,0],[0,1,0,0,0,0]]$. Then, we will obtain $u_c(00)=f(001111)$, $u_c(01)=f(011111)$, $u_c(10)=f(101111)$, $u_c(11)=f(111111)$, and also $U_c(00)=0$, $U_c(01)=0$, $U_c(10)=0$, $U_c(11)=F(k_1)+ F(k_2) + F(k_3) + F(k_4) + F(k_5)$. The next step in the proposed method is to detect if each $U_c(j)$ is a zeroton, a singleton, or a multiton. 
This is done by adding different unit vectors $d_{c,p}=e_p, p=1,…,n$, and obtain $u_{c,p}(l)=f\left(\overline{H_c^\top \bar{l} + d_{c,p}}\right) \rightarrow U_{c,p}(j)=\sum_{k\le \bar{d} \ s.t. Hk=j} F(k)$. We list the results of $U_{c,p}(j)$ for $j=11$, in the following table: | | p=0 | p=1 | p=2 | p=3 | p=4 | p=5 | p=6 | | ------------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | | $d_{c,p}$ | 000000 | 100000 | 010000 | 001000 | 000100 | 000010 | 000001 | | $U_{c,p}(11)$ | 1/3 | 0 | 0 | 1/3 | 1/3 | 1/3 | 1/3 | **The above counterexample passes the singleton detection algorithm, but $U_c(11)= F(k_1)+ F(k_2) + F(k_3) + F(k_4) + F(k_5)$ contains five terms and is apparently not a singleton.** To be more specific, we can see that the metric $y_{c,p}=1-\frac{U_{c,p}(11)}{U_{c,0}(11)}$ is either 0 or 1, for $p=1,…,n$. Therefore, according to the criteria in Equation (9), the multiton $U_c(11)= F(k_1)+ F(k_2) + F(k_3) + F(k_4) + F(k_5)$ will be mistakenly judged as a singleton by the algorithm. As a result, the algorithm will yield an incorrect output that contains only one interaction term, while the ground truth contains five interaction terms. I suggest the authors either revise the claim and the proof in Appendix C.7.1 to be in an asymptotic or probabilistic manner, or provide detailed discussion to the failure cases of the singleton detection algorithm. Even though the algorithm cannot handle all scenarios, I’m still willing to raise my score if proper discussion on the failure cases is added. --- 2. The literature review is insufficient. The core background behind the proposed method is the sparsity of Mobius transform. In fact, the sparsity of Mobius transform (or equivalently, the Harsanyi dividend) has been well studied by many previous works, but these works are not properly discussed in the paper. 
For example, [cite2, cite3] observed the sparsity of Mobius transform on a wide range of network architectures (e.g., MLPs, CNNs, transformers, PointNet) trained on various tasks (e.g., tabular data classification, image recognition, sentiment analysis, pointcloud classification). [cite4] explored the source of the sparsity and even proved the sparsity of Mobius transform under three sufficient conditions. The authors are expected to include these previous works in the Related Work section or in the discussion of the sparsity assumption in Section 2. --- 3. The paper reports the relationship between Mobius transform and the Shapley value in Equation (2) and Appendix A. However, this relationship has already been discussed in the original paper of Harsanyi dividend [cite1]. Therefore, I encourage the authors to cite paper [cite1] as a proper reference. [cite1] John C. Harsanyi. A simplified bargaining model for the n-person cooperative game. International Economic Review, 4(2):194–220, 1963. [cite2] Ren et al. Defining and Quantifying the Emergence of Sparse Concepts in DNNs. CVPR, 2023. [cite3] Li and Zhang. Does a Neural Network Really Encode Symbolic Concepts? ICML, 2023. [cite4] Ren et al. Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in DNNs. ICLR, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses above --- After rebuttal: The authors' response mostly addresses my concern about the singleton detection algorithm. The authors also commit to incorporating the relevant literature into their paper. Therefore, I have raised my score to acceptance. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
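The counterexample in the review above can be checked numerically. The snippet below is an editor's illustration using the reviewer's bitstring convention and exact arithmetic; it reproduces the $U_{c,p}(11)$ row of the table directly from the aggregate formula $U_{c,p}(\mathbf{j})=\sum_{\mathbf{k}\le\bar{\mathbf{d}}_p,\,\mathbf{H}_c\mathbf{k}=\mathbf{j}}F(\mathbf{k})$.

```python
from fractions import Fraction

# Nonzero Mobius coefficients from the counterexample (string index i = coordinate i+1).
coeffs = {
    "110111": Fraction(1, 3),
    "111011": Fraction(1, 3),
    "111101": Fraction(1, 3),
    "111110": Fraction(1, 3),
    "111111": Fraction(-1),
}

def U(p):
    """U_{c,p}(11): sum of F(k) over k with first two bits 11 and k <= d_p-bar,
    where d_p = e_p for p >= 1 and d_0 = 0 (no restriction)."""
    total = Fraction(0)
    for k, val in coeffs.items():
        if k[:2] != "11":
            continue  # H_c k != 11
        if p >= 1 and k[p - 1] == "1":
            continue  # k is not <= the complement of e_p
        total += val
    return total

table = [U(p) for p in range(7)]
# Every nonzero entry equals U_{c,0}(11), so y_p = 1 - U_p/U_0 is always 0 or 1:
# the multiton passes the singleton test, exactly as the reviewer argues.
```

Computing `table` yields [1/3, 0, 0, 1/3, 1/3, 1/3, 1/3], matching the reviewer's table, which confirms the misclassification numerically.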
Rebuttal 1: Rebuttal: Dear reviewer jcxh. Thank you for your thorough and helpful review. We are delighted that you are impressed with our contributions towards efficient Mobius Transform computation, and agree with our assumptions of low-degree interactions and sparsity. We believe that our work will in time be appreciated as a significant step forward for Mobius transform computation. **Proof Correction** We have corrected the technical issue you highlighted, and will include that information in a separate comment. **Literature Review** Regarding the literature review, we are very happy to see these other references, particularly the modern ones. It is exciting to see a community of people working on these types of problems, and these are excellent examples that highlight the practical points of our assumptions. These papers will bolster our current Section 2 and add credibility to our work. We will also certainly include the Harsanyi reference. **Final note** Reviewer vNmZ has raised concerns about the validity of our assumptions for practical settings. We have made the case that these are in fact interesting and important assumptions to consider. Given your expertise in this area, we would welcome your contribution to the discussion. --- Rebuttal 2: Title: Proof Revision (Assumption) Comment: **Updated Proof** We would like to thank the reviewer for identifying this issue. After a thorough analysis, we see that the previous proof in Appendix C.7.1 (which applies to the noiseless case only) held only for multitons of order $2$, and pathological examples as you described can be constructed for higher-order multitons. We have resolved this issue with the full proof included below. As the reviewer has suggested, we have introduced a very mild probabilistic assumption on the values of the non-zero coefficients, so that they may not be completely arbitrary. 
The proof for our noisy result, which uses a more complicated approach and requires additional samples (along with already restricting $F(\mathbf{k})$), is not impacted by this issue. **Assumption 2.3 (No Cancellation)** Suppose the non-zero values $F(\mathbf{k}_1), \dotsc, F(\mathbf{k}_K)$ are sampled from a joint distribution $\mathbb{P}$ that satisfies the following condition: \begin{equation} \sum_{i \in S}F(\mathbf{k}_i) \neq 0, \;\forall S \subseteq [K], \; S \neq \emptyset. \end{equation} Assumption 2.3 is quite mild, and there are many classes of $\mathbb{P}$ that satisfy this assumption. *For example, any absolutely continuous $\mathbb{P}$ satisfies Assumption 2.3 a.s.* **Proof.** For simplicity, let $\mathbf{F}$ represent a $K$-dimensional random vector containing $F(\mathbf{k}\_i)$ at index $i$. Let the set $\mathcal{R}(S, \alpha) = \\{\mathbf{F} : \sum_{i \in S}F(\mathbf{k}\_i) = \alpha\\}$. Since $\mathbb{P}$ is absolutely continuous, a density $p$ exists such that: \begin{equation} \mathbb{P}(\mathbf{F} \in \mathcal{R}(S,\alpha)) = \int_{\mathcal{R}(S,\alpha)} dp. \end{equation} However, for $S \neq \emptyset$, $\mathcal{R}(S,\alpha)$ is an affine subspace of dimension $K - 1 < K$, so it has Lebesgue measure zero. Thus \begin{equation} \mathrm{Pr}\left(\sum_{i \in S} F(\mathbf{k}_i) = 0\right) = \mathbb{P}(\mathbf{F} \in \mathcal{R}(S,0)) = \int\_{\mathcal{R}(S,0)} dp = 0. \end{equation} --- Rebuttal 3: Title: Proof Revision (Proof, Theorem 1) Comment: We separate the proof into three parts: (1) Prove $\mathrm{Detect}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_Z \implies \mathrm{Type}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_Z$. (2) Prove $ \mathrm{Detect}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_S \implies \mathrm{Type}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_S$. (3) Prove $\mathrm{Detect}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_M \implies \mathrm{Type}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_M$. 
Consider the subsampling group $\mathbf{U}_{c}(\mathbf{j})$ for some fixed $c, \mathbf{j}$. We first consider the case where $\left\lvert \mathbf{k}\right\rvert$ is not restricted, and denote the set of non-zero indices $\mathbf{k}_1, \dotsc, \mathbf{k}_K$ as $\mathcal{N}$. **Proof of (1)** Let $\mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_Z$, and for contradiction's sake assume $\mathrm{Type}(\mathbf{U}\_c(\mathbf{j})) \neq \mathcal{H}\_Z$. \begin{equation} \mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_Z \implies \sum_{\mathbf{k} \leq \bar{\mathbf{d}}\_p\; \text{ s.t. } \mathbf{H}\_c \mathbf{k} = \mathbf{j}} F(\mathbf{k}) = 0 \;\forall p, \end{equation} Since $\mathrm{Type}(\mathbf{U}_c(\mathbf{j})) \neq \mathcal{H}_Z$, we have that $\mathcal{N} \cap \\{ \mathbf{k} : \mathbf{H}_c \mathbf{k} = \mathbf{j}\\} \neq \emptyset$. Thus, if we consider the above implication for the case of $p=0$ and noting that $\mathbf{d}_0 = \boldsymbol{0}$, we have: \begin{equation} \mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_Z \implies \sum\_{\mathcal{N} \cap \\{ \mathbf{k} : \mathbf{H}\_c \mathbf{k} = \mathbf{j}\\}} F(\mathbf{k}) = 0 \end{equation} But considering the no cancellation condition, this is impossible, thus proving (1). **Proof of (2)** Note that the converse of (1) is true (the proof is immediate). Thus, proving $ \mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_S \implies \neg(\mathrm{Type}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_M$) is the same as proving (2). We will again use the method of contradiction, and assume $ \mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_S$ and $\mathrm{Type}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_M$. \begin{equation} \mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_S \implies U_{c,p}(\mathbf{j}) \in \\{0, U\_{c,0}(\mathbf{j})\\} \; \forall p > 1. \end{equation} Note that by our assumption, $U_{c,0}(\mathbf{j}) \neq 0$. 
By our assumption that $\mathrm{Type}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_M$, we must have $\left\lvert \mathcal{N} \cap \\{ \mathbf{k} : \mathbf{H}\_c \mathbf{k} = \mathbf{j}\\} \right\rvert \geq 2$. Choose $\mathbf{k}\_1, \mathbf{k}\_2 \in \mathcal{N} \cap \\{ \mathbf{k} : \mathbf{H}\_c \mathbf{k} = \mathbf{j}\\}$. Given our choice of $\mathbf{d}\_p$, $\exists p^* > 1$ s.t. only *one* of $\mathbf{k}\_1$ or $\mathbf{k}\_2 \leq \bar{\mathbf{d}}\_{p^*}$. Without loss of generality we will assume $\mathbf{k}\_2 \leq \bar{\mathbf{d}}\_{p^*}$ and $\mathbf{k}\_1 \nleq \bar{\mathbf{d}}\_{p^*}$. Now, define the following sets: \begin{eqnarray} \mathcal{J}\_0 &=& \mathcal{N} \cap \\{ \mathbf{k} : \mathbf{H}\_c \mathbf{k} = \mathbf{j}\\} \\\\ \mathcal{J}\_{p^*} &=& \mathcal{N} \cap \\{ \mathbf{k} : \mathbf{H}\_c \mathbf{k} = \mathbf{j}, \; \mathbf{k} \leq \bar{\mathbf{d}}\_{p^*} \\} \end{eqnarray} We know $\mathbf{k}\_2 \in \mathcal{J}\_{p^*}$ and $\mathbf{k}\_1 \in \mathcal{J}\_0\setminus \mathcal{J}\_{p^*}$, thus $\left\lvert \mathcal{J}\_{p^*}\right\rvert \geq 1$ and $\left\lvert \mathcal{J}\_0\setminus \mathcal{J}\_{p^*}\right\rvert \geq 1$. With this, we can show that the implication above cannot be satisfied. *Case 1:* $U_{c,p^*}(\mathbf{j}) = 0$. \begin{equation} U_{c,p^*}(\mathbf{j}) = 0 \implies \sum_{\mathbf{k} \in \mathcal{J}\_{p^*}} F(\mathbf{k}) = 0. \end{equation} Since $\mathcal{J}\_{p^*}$ is not empty, from our distributional assumption, the above sum cannot be $0$. *Case 2:* $U_{c,p^*}(\mathbf{j}) = U_{c,0}(\mathbf{j})$. \begin{equation} U_{c,0}(\mathbf{j}) - U_{c,p^*}(\mathbf{j}) = 0 \implies \sum_{\mathbf{k} \in \mathcal{J}\_0 \setminus \mathcal{J}\_{p^*}} F(\mathbf{k}) = 0. \end{equation} Since $\mathcal{J}\_0 \setminus \mathcal{J}\_{p^*}$ is not empty, from our distributional assumption, the above sum cannot be $0$. This implies that $\mathrm{Type}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_M$ must be false, thus proving (2). 
**Proof of (3):** Since we have a converse for (1), a converse for (2) suffices to prove (3). The converse follows below: \begin{eqnarray} \mathrm{Type}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_S \implies \exists \mathbf{k}^* \text{ s.t. } U\_{c,p}(\mathbf{j}) \in \\{0, F(\mathbf{k^*})\\},\;\forall p. \end{eqnarray} Since $F(\mathbf{k^*}) \neq 0$, we have $U_{c,0}(\mathbf{j}) \neq 0$ and all entries of $\mathbf{U}_{c}(\mathbf{j})$ are either $F(\mathbf{k^*})$ or $0$. Thus, $\mathrm{Detect}(\mathbf{U}_c(\mathbf{j})) = \mathcal{H}_S$. --- Rebuttal 4: Title: Proof Revision (Proof, Theorem 2) Comment: Here we include a sketch of the argument for the noiseless part of Theorem 2, which focuses on the case $\left\lvert \mathbf{k}\right\rvert \leq t$ (the rest of Theorem 2 is unchanged). (1) Prove $\mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_Z \implies \mathrm{Type}(\mathbf{U}\_c(\mathbf{j})) = \mathcal{H}\_Z$. (2+3) Prove $\mathrm{Pr}\left(\mathrm{Detect}(\mathbf{U}\_c(\mathbf{j})) = \mathrm{Type}(\mathbf{U}\_c(\mathbf{j})) \right) \rightarrow 1$. **Proof of (1)** The proof is identical to the above. **Proof of (2+3)** The proof is the same, with one notable exception. In the low-degree case, $p^*$ may not always exist. Using the results in citation [47] from the paper, and the union bound that already exists in the proof in Appendix C.7.2, we show that $p^*$ exists with probability approaching $1$. --- Rebuttal 5: Comment: Thank you for your response. The added assumption and proof address my concern for the singleton detection algorithm. I hope this assumption and proof, as well as the mentioned literature review for the sparsity of Mobius transform, will be properly stated in the final version of the paper if the paper is accepted. Furthermore, I have raised my score accordingly. 
--- Rebuttal Comment 5.1: Comment: Thank you for your diligence as a reviewer, we will be certain to correctly state the theorems with this additional assumption, and include this revised proof in the appendix.
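The no-cancellation assumption (Assumption 2.3) discussed in the thread above is easy to sanity-check empirically: for coefficients drawn from an absolutely continuous distribution, no nonempty subset sum lands exactly on zero. A small illustrative check, not part of the paper's code:

```python
import itertools
import numpy as np

def min_subset_sum_magnitude(F_vals):
    """Smallest |sum over a nonempty subset| of the coefficient vector.
    Exponential in K, so only meant for tiny sanity checks."""
    K = len(F_vals)
    sums = (abs(sum(F_vals[i] for i in S))
            for r in range(1, K + 1)
            for S in itertools.combinations(range(K), r))
    return min(sums)

rng = np.random.default_rng(0)
F_vals = rng.standard_normal(8)  # absolutely continuous => no cancellation a.s.
```

By contrast, a degenerate choice such as `[1.0, -1.0]` violates the assumption (the full subset sums to zero), which is exactly the kind of adversarial cancellation the revised proof rules out.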
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, ACs and Senior ACs. We have posted individual rebuttals to all three reviewers. We believe we have addressed the concerns of all reviewers. We would particularly like to thank jcxh as an outstanding reviewer. Taking into account the issues, which are now resolved, we are still confident in the candidacy of this manuscript, and look forward to the discussion period.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
CigTime: Corrective Instruction Generation Through Inverse Motion Editing
Accept (poster)
Summary: The paper introduces a novel approach to generating corrective instructional text based on motion editing and generation models. This work is particularly motivated by applications in sports coaching and motor skill learning. The authors propose a method that takes a user's current motion (source) and the desired motion (target) and generates text instructions to guide the user towards achieving the target motion. The approach leverages existing motion generation frameworks to compile datasets of triplets (source motion, target motion, and corrective text) and employs a large language model (LLM) fine-tuned on this data to produce corrective instructions. Strengths: 1. This paper properly uses LLMs and focuses on an important and novel application. 2. The authors formulated this problem as an inverse problem of motion editing and used the pretrained editing model to create a dataset. Weaknesses: 1. The inputs of this work are motions, which are hard to obtain in real life. Off-the-shelf motion estimation algorithms may introduce estimation errors, which limits the range of applications. 2. The data-generating pipeline heavily depends on the motion-editing model, which may lead to error accumulation and overfitting. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Could the authors try to compare with vision LLMs, like GPT-4o, InternVL [1], and MotionLLM [2]? 2. Using a pure-text LLM via in-context learning for comparison is not fair. How does an LLM understand the confusing motion index tokens without fine-tuning? 3. The method and dataset heavily depend on the generalization abilities of the motion editing model. How do the authors make sure that the pretrained motion editing model can take flexible instructions generated by ChatGPT as input and achieve satisfactory editing results? 4. The authors take discrete token indexes as special tokens, which makes it hard to capture detailed spatial and temporal information. 
Did the authors try to use continuous representations? Just like in LLaVA [3] and MotionLLM [2]. 5. The authors use a pretrained motion editing model to generate data, then finetune an LLM using the data, and finally evaluate using the same motion editing model. The data-generating pipeline and evaluation use the same motion editing model. This may lead to heavy overfitting, which can be seen in Figure 4. [1] Chen, Zhe, et al. "Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Chen, Ling-Hao, et al. "MotionLLM: Understanding Human Behaviors from Human Motions and Videos." arXiv preprint arXiv:2405.20340 (2024). [3] Liu, Haotian, et al. "Visual instruction tuning." Advances in neural information processing systems 36 (2024). Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comprehensive review and constructive comments. We are grateful for your recognition of the novelty of the proposed approach and the importance of our work. In the following, we address your concerns on the off-the-shelf motion estimation and motion editing models, as well as the questions related to experimental details. **The Limitation of Motion Representation** We acknowledge the difficulties of obtaining precise motion in real life. However, we found that existing motion estimation algorithms enable us to obtain usable motion sequences in most cases. To verify whether the current pipeline can be applied in real life, we conducted the following experiment. We invited two participants, one acting as a coach and the other as a trainee. The trainee first performed a source motion sequence. Then, the coach was tasked with generating a target motion sequence that differed from the source sequence. We utilized a pose estimation algorithm (WHAM [1]) to extract these motion sequences and used our method to generate corrective instructions. The trainee was then required to correct his motion based on the corrective instructions. We present an example in Figure 1 of the global response pdf. In this example, it is evident that existing motion estimation algorithms can accurately estimate the motions of both the trainee and the coach. Furthermore, our algorithm is capable of understanding these motion sequences to provide appropriate corrective instructions. [1] Shin, Soyong, et al. "Wham: Reconstructing world-grounded humans with accurate 3d motion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. **Comparison with VLMs** Using the above experiment, we also compare our method with Video-Llava and MotionLLM, which can be directly utilized to analyze the videos, and the result is shown in Figure 1 in the global pdf. 
The result demonstrates that our method is capable of understanding estimated motion sequences and generating corresponding corrective instructions. In contrast, Video-Llava and MotionLLM struggle to discern the differences in actions between two distinct videos, making it difficult for them to provide appropriate corrective instructions. **Comparison with Other LLMs** We appreciate your comment regarding in-context learning. To enable an in-depth comparison, we have fine-tuned various large language models (LLMs) using LoRA. We present the results in Table 1 in the global response PDF. As observed, our method still outperforms the other baselines. We will include it in the revision. **W2, Q3: Data Quality** Thanks for the comment and question. We share the concern about the capability of the Motion Diffusion Model (MDM); however, we found that the motion generated by MDM is generally good. Furthermore, to enhance its robustness, our proposed data generation pipeline enforces precise alignment between corrective instructions and the intended edits via a carefully designed prompt, e.g., by clearly specifying the range of joints to be modified according to the task and by tightly formatting the output that GPT-4 generates as corrective instructions. These techniques collectively facilitate the creation of natural motions, as evidenced by the figures and videos provided. **Continuous Representations** Thank you for your suggestion. Following the approach used in MotionLLM, we conducted the following experiment: First, we utilized a VQ-VAE to map the motion into a (T/4, 512)-dimensional feature, where T is the motion length. Then, we used a linear layer to map this feature to a (T/4, 4096)-dimensional feature. This new feature is directly injected into Llama 3 as the motion embedding. During the fine-tuning process, we update both the parameters of the linear layer and the original weights in the LLM. 
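The continuous-representation setup described above can be sketched as follows. This is a pure-Python illustration of the projection shapes only; the feature dimensions come from the rebuttal, while the function and variable names are ours, not the authors' code:

```python
# Sketch of the continuous-representation baseline: a VQ-VAE encoder
# output of shape (T/4, 512) is mapped by one linear layer to shape
# (T/4, 4096) so it can be injected into the LLM as a motion embedding.
# Dimensions follow the rebuttal; everything else is illustrative.

def linear_project(features, weight, bias):
    """out[t][j] = sum_i features[t][i] * weight[j][i] + bias[j]."""
    return [
        [sum(x * w for x, w in zip(row, w_row)) + b
         for w_row, b in zip(weight, bias)]
        for row in features
    ]

T = 8                    # motion length in frames
d_vq, d_llm = 512, 4096  # VQ-VAE feature dim -> LLM embedding dim

motion_feat = [[0.0] * d_vq for _ in range(T // 4)]   # (T/4, 512)
W = [[0.0] * d_vq for _ in range(d_llm)]              # (4096, 512)
b = [0.0] * d_llm

motion_emb = linear_project(motion_feat, W, b)        # (T/4, 4096)
print(len(motion_emb), len(motion_emb[0]))            # 2 4096
```

In the actual experiment, per the rebuttal, the linear layer's parameters are trained jointly with the LLM's original weights.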
The results are reported in Table 5 in the global response PDF. As observed, our current method still outperforms the continuous baseline across all metrics, confirming that, in the current setting, the features learned by the VQ-VAE are not well-suited for the corrective instruction generation task. **W2, Q5: Generalizable Ability** To verify the generalization capabilities of our method, we conducted two sets of experiments. First, we presented the results using PriorMdM for validation in Table 2. In this experiment, our algorithm achieved MPJPE and FID scores that are comparable to the Ground Truth and superior to other baselines. Additionally, we validated the generalization of our method on the Fit3D dataset, as shown in Table 3. The results demonstrate that our method's performance still holds after switching the dataset. This new evidence further verifies that the proposed data collection and training framework can help alleviate the potential bias of motion generation models. The phenomenon observed in Figure 4 is also discussed in the following. **Q5: Overfitting Phenomenon Shown in Figure 4** In Figure 4, we presented some corrective instructions and reconstructed motions for different methods. Although the reconstructed motion generated by our method closely resembles the ground truth, we do not consider this to be overfitting after a careful inspection. We briefly explain the reasoning below. First, our task objective is to ensure that the reconstructed motion closely approximates the ground-truth motion. This is important if the proposed method is applied to generate personalized corrective instructions (i.e., for a specific person or motion editor). Second, we observe that the instructions generated by our method do not exactly match the ground-truth corrective instructions, indicating that there is no severe overfitting in the generated text. Again, we are grateful for the constructive feedback, which helps refine our submission. 
Please feel free to let us know if you have more questions or need more information that will help improve the final rating of our work. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the authors' detailed response. I really appreciate the authors' effort to answer my questions. The authors resolved most of my concerns. The model architecture can be seen in other models, such as [1]. MotionLLM can still take motion as input instead of videos. However, this paper still introduces an interesting and useful task. Thus, I decided to increase my rating. [1] Chen, Ling-Hao, et al. "MotionLLM: Understanding Human Behaviors from Human Motions and Videos." arXiv preprint arXiv:2405.20340 (2024). --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to reevaluate our work and for your updated review. We appreciate your acknowledgment of our efforts in addressing your concerns and are glad to know that we have been able to resolve most of your questions. It is encouraging to see that our paper has contributed an interesting and useful task to the field, as you mentioned. We are committed to further refining our work (e.g., by incorporating a discussion regarding MotionLLM) based on your valuable input. Once again, thank you for your time and your decision to raise the score.
Summary: This paper introduces an innovative method designed to generate corrective instructional text, guiding users to achieve desired motions. The approach utilizes existing frameworks to create a dataset of motion pairs and corresponding corrective texts. A new motion-language model is introduced, efficiently generating corrective instructions based on the datasets. By fine-tuning large language models with these motion-text paired datasets, the method demonstrates precision in instructional generation. Both qualitative and quantitative analyses experimentally confirm the superiority of this approach over traditional methods, which often require extensive manual annotations and struggle with dynamic motions. The researchers’ contributions include a novel data collection pipeline and the application of advanced language models to effectively bridge the gap between motion understanding and natural language instruction generation. Strengths: 1. The paper introduces a novel task—generating motion corrective instructions—which is both interesting and potentially impactful. 2. The authors compile a corresponding dataset (it is unclear if the authors plan to make this dataset publicly available). 3. The writing is clear, and the method is easy to follow. Weaknesses: 1. If I understand correctly, the authors rely on GPT-4 to automatically generate instructions and use MDM to obtain modified motions. How is the quality of this collected data evaluated? Due to hallucination in large models, the generated instructions may not always be reasonable. Additionally, as a classical method, MDM may not produce natural and controllable motion in some corner cases. How do the authors ensure the generation quality of their pipeline? 2. Is the in-context method used by the authors to score the model fair and reasonable? Given the input context length limitation, the number of examples that can be provided to the large model is restricted. 3. 
Does the authors' division based on upper and lower body introduce any bias? For instance, the instruction "try to touch your toes with your fingertips" cannot be generated. 4. Can the authors provide detailed experimental settings for Llama-3-8B-LoRA? From the results in Table 1, the fine-tuning results based on LoRA are even less effective than the original Llama-3. Do the authors have any explanations or analyses? Technical Quality: 2 Clarity: 3 Questions for Authors: The task proposed by the authors is innovative. My primary concerns are based on the weaknesses. Please address the following in the rebuttal: the quality of the pipeline's generation, comparison strategies with baselines, prompt settings, and experimental setups. This will help illustrate the value of the authors' method. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors' method does not address social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your supportive acknowledgment of the writing quality, the originality of the proposed task with potential impact, the effectiveness of the proposed model, and the novelty in the data collection pipeline. Also, thanks for the summary of the main points that we need to address to help with your evaluation, i.e., the quality of the pipeline's generation, comparison strategies with baselines, prompt settings, and experimental setups. In the following, we provide more details and explanations regarding these points. **Dataset Access** Yes, we will release the dataset and the dataset generation pipeline for future research. **Dataset Quality** To ensure the quality of the generated corrective instructions by GPT-4, we conducted a thorough manual check of them before performing any experiments and verified that these generated instructions were not influenced by large models' hallucinations. Regarding the Motion Diffusion Model (MDM), in general, we observe that it exhibits robust motion generation capabilities during our experiment. To further enhance its robustness, when generating corrective instructions, the proposed data generation pipeline ensures that the corrective instructions are precisely aligned with the intended edits, e.g., by specifying the range of joints to be modified according to the task, as well as providing efficient formatting for the generated corrective instructions. Together, they facilitate the creation of natural motions, as evidenced in the provided figures and videos. **Data Bias** We would like to clarify that dividing instructions into upper and lower body parts does not limit the complexity of the motion to be generated. 
For example, the aforementioned target motion can be addressed by defining modification joints more specifically, i.e., we can instruct GPT-4 to generate instructions that modify both the right hand and left leg to produce a sequence that signifies "try to touch your toes with your fingertips." **Comparison with Other LLMs Using LoRA** We appreciate your point regarding in-context learning and fairness. To facilitate a more direct comparison, we fine-tuned various large language models (LLMs) based on LoRA. The results are presented below. | Method | BLEU ⬆ | ROUGE ⬆ | METEOR ⬆ | CLIPScore ⬆ | MPJPE ⬇ | FID ⬇ | | - | - | - | - | - | - | - | | Llama-3-8B-LoRA | 0.11 | 0.17 | 0.36 | 0.78 | 0.37 | 5.03 | | Mistral-7B-LoRA | 0.13 | 0.17 | 0.36 | 0.79 | 0.30 | 5.02 | | Ours | **0.14** | **0.27** | **0.47** | **0.80** | **0.21** | **4.52** | As observed, our method still outperforms the baselines that have been fine-tuned using LoRA, which we will add to the revised version for an improved understanding of its effectiveness. **Detailed Experimental Settings for Llama-3-8B-LoRA** We fine-tuned Llama-3 following the methodology described in [1]. Specifically, we converted motion tokens into text and combined them with the template as input for the network while conducting the fine-tuning using LoRA. During training, we set the learning rate to 1e-4, the LoRA rank to 16, and the batch size to 512. The entire fine-tuning process lasted for 6,000 training steps. We would also like to make some clarifications about the experimental results. After fine-tuning Llama-3 with LoRA, the generated corrective instructions can induce higher-quality reconstructed motions (FID: 2.09) compared to the original Llama-3 before fine-tuning (FID: 3.04). 
However, we observed that after fine-tuning using LoRA, the output of Llama-3 became more varied, e.g., the instructions generated on the test set exhibited greater variance compared to those generated by the in-context learning baseline with the original Llama-3. This suggests that simply fine-tuning Llama-3 on the generated data does not yield satisfactory corrective instruction generation, e.g., due to overfitting or catastrophic forgetting. Although the outputs can induce similar target motion sequences compared to the ground truth, the increased variance in the text can lead to a decrease in the overall NLP metrics such as BLEU, ROUGE, and METEOR. [1] Zhang, Yaqi, et al. "Motiongpt: Finetuned llms are general-purpose motion generators." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 7. 2024. We hope these clarifications have addressed the concerns raised and further justified our study’s approach and focus. Please feel free to let us know if you have more questions or need further information that will help improve the final rating of our work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their expert, comprehensive, and detailed analysis. The authors have thoroughly addressed the quality of their dataset and the rationale behind their experiments in the rebuttal. I also reviewed the comments from other reviewers and the authors’ corresponding responses. The authors’ motivation is both interesting and reasonable. Although their pipeline largely builds on previous methods, it still demonstrates potential application value, which is substantiated by the experimental analysis provided in the rebuttal. Therefore, I am inclined to raise my score. 
--- Reply to Comment 1.1.1: Title: Thanks for your response Comment: We would like to express our gratitude for your time, effort, and detailed evaluation of our paper. We appreciate your acknowledgment of the comprehensive analysis provided in our rebuttal, as well as your recognition of the potential application value of our research. We are glad that our clarifications and explanations have addressed your concerns, and we are thankful for your decision to raise the score.
Summary: This paper works on generating corrective instructional text from source and target human motions. Specifically, it utilizes an existing motion editing framework to collect a dataset for this task. Then, an LLM-based instruction generation method is proposed to generate text from source and target motions. The paper also shows better performance than baseline methods. Strengths: 1. This paper is well-written and easy to follow, and the supplementary video effectively enhances the content. 2. The motivation of generating corrective instructions from source and target human motions is very interesting and has potential in real-world applications. 3. The proposed method shows better performance than previous methods in the evaluation. Weaknesses: 1. The overall technical contribution is limited. The presented method seems to be an extended application of existing text-to-motion methods, and the adopted components are basically from previous methods. 2. Some recent works (e.g., [1]) are not discussed or compared. [1] AvatarGPT: All-in-One Framework for Motion Understanding Planning Generation and Beyond, CVPR 2024 3. MotionGPT used T5-770M as its LLM backbone, while the proposed method uses ChatGPT-4, so a direct comparison with MotionGPT using different LLMs doesn't seem to be fair. Technical Quality: 3 Clarity: 3 Questions for Authors: To resolve my concern regarding the technical contribution, I would like to know what the main challenges of the addressed task are compared to existing text-to-motion tasks. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations have been discussed in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive suggestions. We also appreciate your acknowledgment of the writing quality, the novelty and potential for real-world applications, and the effectiveness of the proposed method. However, it seems that the technical difficulty as well as the technical contributions of our work may not have been entirely clear. In the following, we address these concerns. **Main Challenges and Technical Contribution** Thanks for the questions. To alleviate the impression that the proposed method seems to be an extended application of text-to-motion methods, we summarize the primary challenges regarding corrective instruction generation given existing works: + Lack of Training Data Unlike text-to-motion or human pose correction (which can be annotated through simple pipelines [1]), human motion sequences involve temporal changes. Annotating the differences between these temporal changes (for corrective instruction generation) is challenging, and, to the best of our knowledge, no existing datasets provide such annotations to enable training. To resolve this issue, we contribute by proposing a novel prompting technique and a method that generates editor-aware instructions so that the generated motion sequences can faithfully align with the corrective instructions. This process is highly scalable and can be extended to different motion editors and motion editing tasks, significantly facilitating the training of corrective instruction generation models. + Learning Complexity Beyond Text-to-Motion/Motion-to-Text Annotation Previous methods for training text-to-motion models involve either using an existing vocabulary for motion tokens or assigning new learnable embeddings, followed by fine-tuning with techniques like LoRA. We tried both approaches, but the results of utilizing one of them alone were not satisfying. 
There are two main reasons: First, using a fixed vocabulary and embeddings prevents capturing the correlation between motion differences and corrective instructions, as the weights are trained on tasks with a large domain gap. Second, while new embeddings can be learned with LoRA, the distribution of the original vocabulary's embeddings imposes constraints, making the learned embeddings suboptimal, especially given the smaller scale of training data for corrective instructions. To address these challenges, we integrate the strengths of both. We use existing vocabulary tokens for their rich semantics and fine-tune all embeddings to maximize performance and reduce the domain gap. We also introduce an anchor loss to prevent the embeddings from diverging. These training techniques prove effective, resulting in superior performance. + Selective Expression of Differences in a Specific Activity In real-world applications, selectively expressing differences between motion sequences is crucial. For example, when kicking a ball, slight changes in the contact point between the foot and the ball can significantly impact performance, while arm swing amplitude may not. Capturing relevant details for effective corrective instruction generation is challenging. Therefore, we contribute by sharing the data generation and training pipeline to aid in investigating this important issue. These challenges highlight the differences between the proposed task and the text-to-motion and motion-to-text paradigms. The solutions presented also signify technical contributions to addressing these problems. Thank you for your feedback; we will revise our manuscript to provide a clearer contrast. [1] Delmas, Ginger, et al. "Posefix: correcting 3D human poses with natural language." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. 
**Comparison to AvatarGPT** Thanks for your suggestion; we will discuss AvatarGPT in our revision, and we apologize for not doing so earlier due to the delayed release of the CVPR 2024 accepted papers. We agree that AvatarGPT is compelling work that expands the range of text comprehensible by text-to-motion methods, enabling the generation of complex and long motion sequences from text. Additionally, like our work, AvatarGPT introduces an efficient data generation process (though not for our task), benefiting the development of text-to-motion models. In contrast to AvatarGPT, our focus is corrective instruction generation, where the inputs are two motion sequences (instead of text and motion), and the output is text describing the differences between the input motion sequences (not the edited motion, as in text-to-motion generation). The generated text should efficiently inform a text-based motion editing model to change the source motion sequence towards the target (reference) motion sequence, facilitating tasks like motion re-targeting and sports coaching. This fundamental difference in input-output dynamics makes it difficult to directly compare our work with AvatarGPT. We will include this discussion about AvatarGPT in our revision. **Comparison using T5** Thanks for the suggestion. First, we would like to correct a typo: in our experiments, the backbone we used is Llama-3, not GPT-4. We acknowledge your concern about the impact of different backbones on performance. To resolve this issue and test the robustness of our method, we have fine-tuned T5-770M, as in MotionGPT, to provide a more direct comparison. The results are reported in Table 4 in the global response pdf. Based on the new results, we observe that even when a smaller backbone (T5) is used as our LLM backbone, our method still outperforms MotionGPT, demonstrating its effectiveness and robustness across different backbones. 
We are grateful for the constructive feedback, which is instrumental in enhancing the quality and clarity of our submission. We hope that the discussions and proposed revisions can help address the current concerns. Please feel free to let us know if you have more questions that can help improve the final rating of our work. --- Rebuttal Comment 1.1: Comment: I appreciate the author's feedback and believe the rebuttal has addressed my concerns. I have also considered the issues raised by other reviewers and noted that the rebuttal provided positive responses and clear clarifications for those as well. As a result, I am inclined to increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to reevaluate our work and for providing your updated feedback. We are glad that our rebuttal has successfully addressed your concerns and clarified the key contributions of our research. Your acknowledgment is highly appreciated and serves as motivation for us to continue refining our manuscript. We will ensure that all necessary revisions are incorporated in the final version. Once again, we express our gratitude for your thorough review and your openness to adjusting your assessment. Your constructive feedback has been invaluable in enhancing the quality of our work.
Summary: This paper presents CigTime for generating motion corrective instructions. The key idea is to leverage motion editing to create datasets of motion triplets and use a fine-tuned language model to generate precise and actionable instructions. Strengths: (1) Introduces an approach for converting motion discrepancies into actionable textual guidance, with applications in sports coaching, rehabilitation, and motor skill learning. (2) Proposes a motion-editing-based pipeline to efficiently generate large datasets, reducing the dependency on manual annotations. (3) Demonstrates the effectiveness of the method through comprehensive evaluations, outperforming existing models. Weaknesses: (1) For prompting ChatGPT-4 for corrective instruction generation as in Fig. 2, I think a text description of upper/lower body motion is not enough; prompting ChatGPT-4 to give details on how to achieve the desired motion is needed for a "corrective instruction". (2) For corrective motion, blending source and target motion could generate motion that is not human-like; why not try to directly generate the corrective motion by masking the source motion with an editing text prompt? (3) Although corrective motion instruction generation is helpful for lots of applications, I believe the proposed method doesn't show its potential to enhance any application, appearing more like a toy-level verification. (4) For LLM finetuning, do you extend the text vocabulary, or just use a number (index) to represent each motion token and feed it directly into the LLM? (5) The comparison with ICL of other LLMs is not fair. The motion index list space is very large, and LLMs haven't seen a possible mapping between it and text during pretraining; an LLM can only succeed when you provide a similar index list and expect similar text, as in the few-shot examples. When you try a different index list, it is likely to output similar text. This is because the mapping between index lists and text is not established by few-shot examples. 
So maybe you should provide all these LLMs trained with a PEFT (e.g., LoRA) method. (6) What is the result on KIT? And to support finger motion, Motion-X is an optional dataset. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) For tokenizer training, the loss for the codebook is missing. (2) In line 445, an FID change from 1.44 to 0.24 is not subtle if you are familiar with the t2m task. Also, why is the FID for Fit3D better than for HumanML3D? It's quite different from the trend of the other metrics. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: As described in the questions and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your positive comments on the proposed approach and its application potential, data generation efficiency, and the effectiveness of the method. Below, we address your questions regarding corrective instruction details, motion generation, practical application demonstrations, fine-tuning, and additional results. **More Advanced Data Generation Prompts** Thanks for the suggestion. To achieve more precise and detailed instructions that illustrate "how", we have prompted GPT-4 to generate the following corrective instructions: [Raise the left hand 20cm higher by changing the elbow and shoulder joints]; [Turn your head to the right by 30 degrees more by changing the neck joint]. These prompts can provide more detailed (localized) information on how to achieve the target motion. However, due to limitations of the motion editor, such detailed instructions result in unnatural movements in the current format, compared to more global corrective instructions phrased in terms of the upper or lower body rather than prescriptive local descriptions. **Corrective Motion Generation** Our current data generation process aligns with the procedure you proposed. Specifically, during the target motion generation process with the Motion Diffusion Model (MDM), we mask the source motion and modify the unmasked parts corresponding to the lower or upper body according to the corrective instructions. We will clarify this point in our revised paper. **Application** To demonstrate the potential of our method to enhance applications, we conducted the following experiment. We invited two participants, one acting as a coach and the other as a trainee. The trainee first performed a source motion sequence. Then, the coach was tasked with generating a target motion sequence that differed from the source sequence. 
We utilized a pose estimation algorithm (WHAM [1]) to extract these motion sequences and used our method to generate corrective instructions. The trainee was required to correct his motion based on the corrective instructions. We compared our method with Video-Llava and MotionLLM, which can be directly applied to analyze videos. We present an example in Figure 1 of the global response. The result demonstrates that our method can understand low-quality (as compared to motion capture) motion derived from pose estimation algorithms and generate corrective instructions. In contrast, Video-Llava and MotionLLM struggle to discern the differences in actions between two distinct videos, making it difficult to provide appropriate corrective instructions. We will enhance our analysis demonstrating the practicality of our approach for real-world applications. **Extend Text Vocabulary** Currently, we use indices to represent motion tokens. However, we further conducted the following experiment to extend the text vocabulary. We present the results below. | Method | BLEU ⬆ | ROUGE ⬆ | METEOR ⬆ | CLIPScore ⬆ | MPJPE ⬇ | FID ⬇ | | - | - | - | - | - | - | - | | Llama-3-8B-Extended | 0.12 | 0.23 | 0.44 | 0.80 | 0.27 | 5.43 | | Mistral-7B-Extended | 0.18 | 0.27 | 0.42 | 0.81 | 0.19 | 1.45 | | Ours-Extended | **0.24** | **0.37** | **0.55** | **0.84** | 0.16 | 1.50 | | Ours | **0.24** | 0.35 | 0.52 | 0.82 | **0.13** | **1.44** | We find that employing an extended vocabulary can enhance the text-based metrics, which are calculated from the generated and ground-truth instructions. However, these instructions cause a decline in motion editing performance, reflected in worse (higher) MPJPE and FID. From the perspective of the task definition, we require a model that prioritizes high reconstruction quality over instruction quality. Therefore, extending the vocabulary is more detrimental than beneficial for our task, and we will discuss this in the revised version. 
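For reference, the MPJPE column in the tables above (mean per-joint position error, lower is better) is the mean Euclidean distance between corresponding joints of the reconstructed and ground-truth motions, averaged over frames. A minimal, self-contained sketch (our own illustration, not the paper's evaluation code, which would operate on (T, J, 3) arrays):

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error over all frames and joints.

    pred, gt: lists of frames; each frame is a list of (x, y, z) joints.
    """
    total, n = 0.0, 0
    for pred_frame, gt_frame in zip(pred, gt):
        for pred_joint, gt_joint in zip(pred_frame, gt_frame):
            total += math.dist(pred_joint, gt_joint)  # per-joint Euclidean distance
            n += 1
    return total / n

pred = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]  # one frame, two joints
gt   = [[(0.0, 0.0, 0.0), (1.0, 0.0, 1.0)]]
print(mpjpe(pred, gt))  # 0.5
```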
**Compare to Other LLMs with LoRA** We appreciate your observations regarding in-context learning. We have fine-tuned various large language models (LLMs) using LoRA. The results are presented in Table 1 in the global response pdf. As observed, our model still outperforms the other baselines, which demonstrates the effectiveness of the proposed training pipeline. **Experiments on KIT** We fine-tuned and evaluated our method and the baselines on the KIT dataset. We report the results in Table 3 in the global response pdf. On the KIT dataset, our method still outperforms other baselines across all metrics, demonstrating its generalization capability. Extension to fingers also requires effort in improving the motion editing models, which we leave to future work. **Loss for Codebook** Following [2], we apply exponential moving averages (EMA) to update the codebook and stabilize the training process, as mentioned in Line 185; this alleviates the need for an explicit codebook loss during tokenizer training. **FID Metrics** We greatly appreciate your detailed review and attention to the specifics of our manuscript. Upon re-checking our supplementary material, we discovered an error: the FID score on Fit3D should have been reported as 1.24, not 0.24 as previously stated. The commentary in line 445 regarding the subtlety of changes in CLIP score, MPJPE, and FID was indeed based on this correct result. The other numbers are correct after a double-check. We apologize for any confusion caused by this error and are grateful for your careful reading. Once again, we thank you for your constructive feedback and hope that our rebuttal addresses your questions. Please feel free to let us know if you have more questions that can help improve the final rating of our work. [1] Shin, Soyong, et al. "Wham: Reconstructing world-grounded humans with accurate 3d motion." IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Van Den Oord, Aaron, and Oriol Vinyals. 
"Neural discrete representation learning." Advances in neural information processing systems (2017).
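The EMA codebook update mentioned in the rebuttal above (the standard alternative to a codebook loss, following reference [2]) can be sketched as follows. This is our own minimal illustration; the names, decay value, and epsilon are assumptions, not the authors' implementation:

```python
# Illustrative sketch of the EMA codebook update used when training a
# VQ-VAE tokenizer: instead of a codebook loss, each code vector is moved
# toward the running mean of the encoder outputs assigned to it, via
# exponential moving averages of per-code counts and sums.

def ema_codebook_update(codebook, counts, sums, assignments, features,
                        decay=0.99, eps=1e-5):
    """Update `codebook` in place given this batch's nearest-code assignments."""
    K, D = len(codebook), len(codebook[0])
    # Per-code batch statistics: how many vectors hit each code, and their sum.
    batch_counts = [0.0] * K
    batch_sums = [[0.0] * D for _ in range(K)]
    for k, f in zip(assignments, features):
        batch_counts[k] += 1.0
        for d in range(D):
            batch_sums[k][d] += f[d]
    # EMA of counts and sums, then re-estimate each code vector as their ratio.
    for k in range(K):
        counts[k] = decay * counts[k] + (1 - decay) * batch_counts[k]
        for d in range(D):
            sums[k][d] = decay * sums[k][d] + (1 - decay) * batch_sums[k][d]
            codebook[k][d] = sums[k][d] / (counts[k] + eps)
    return codebook
```

With a decay close to 1, each code vector drifts slowly toward the mean of its assigned encoder outputs, which stabilizes training without an explicit codebook loss term.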
Rebuttal 1: Rebuttal: We would like to express our gratitude for the reviewers' careful evaluations and constructive feedback on our manuscript. In response to the reviewers' feedback, we appreciate their recognition of the novelty in the task/approach presented, for generating motion corrective instructions (R2, R3, R4). As noted, the key strength of our method lies in its innovative use of motion editing to convert motion discrepancies into precise, actionable textual guidance. This method, with potential applications in sports coaching, rehabilitation, and motor skill learning, is an important contribution to the field (R1, R2, R3, R4). The reviewers also acknowledged our efficient motion-editing-based pipeline for generating large datasets, thereby reducing dependency on manual annotations. This innovation has been well received for its effectiveness as demonstrated through comprehensive evaluations, where it outperformed existing models (R1, R2, R3). We are further encouraged by the positive feedback regarding the readability of our paper and the supplementary video (R2, R3), and the acknowledgment of our method's superior performance over previous methods in evaluations (R1, R2, R3). These help underscore the contributions our work aims to make, and we are delighted that they resonate with the reviewers. The reviewers have also raised concerns or questions that we will address in our rebuttal, which include: 1. Questions regarding the specifics of the generated corrective instructions. 2. Clarifications on the motion generation process and its efficiency. 3. Requests for additional examples and demonstrations of the method in real-world scenarios. 4. Clarification of the technical difficulty, as well as the fine-tuning process. 5. Requests for further results and comparisons with baselines. 6. Questions related to the use of existing motion estimation and editing models, and their impact on the proposed approach. 
In our rebuttal, we will provide more details and explanations to address these concerns and clarify any misunderstandings, ensuring that our work's technical contributions and the effectiveness of the proposed method are well understood. We are also committed to addressing each of them in the revision. Pdf: /pdf/4147e9fc2862a22b8354c5b2e9072b36bac006d2.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources
Accept (poster)
Summary: The use of LoRA in FL is challenged by the heterogeneity of downstream tasks and available resources among clients. Traditional FL methods often use the smallest viable LoRA rank for all clients for aggregation compatibility, which makes it hard to capture the full diversity of client contributions and fully utilize ample client resources. To fully leverage heterogeneous client resources for enhancing the global model's generalization ability, this paper proposes FlexLoRA, which enables the mixture of diverse LoRA weights across individual clients to account for local client resources and task differences by allocating the highest feasible rank given a client's resource budget to ensure all clients contribute effectively regardless of resource capability, with heterogeneous aggregation and redistribution of weights through SVD. Extensive experiments and theoretical analysis verify the effectiveness and scalability of the proposed FlexLoRA. Strengths: 1. Exploring heterogeneous rather than homogeneous LoRA ranks across clients to fully utilize their resources is an imperative research direction. 2. Numerous experiments and theoretical analyses are illustrated to support the effectiveness of the proposed method. Weaknesses: 1. Numerous typos remain in this paper, including but not limited to: 1) "${Client_j}$" should be corrected to "${Client&nbsp;j}$" in Figure 2; 2) "Effect" should be corrected to "effect" in line 249; 3) "r=200 2" should be corrected to "r=200" in Table 6. 2. In contrast to HETLORA [8], the proposed FlexLoRA needs to perform SVD on the aggregated global LoRA weight before redistributing it to clients, instead of redistributing it directly. This seems to be because it provides better initialization for the low-rank matrices B and A. However, the authors do not mention this point, and this practice has already been presented in FeDeRA [42], so this may not be considered one of this paper's contributions. 3. 
Some relevant work in this field is missing, including but not limited to: [1] Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu, Xiaoxiao Li. FedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning. arXiv preprint arXiv:2310.13283, 2023. [2] Shangchao Su, Bin Li, Xiangyang Xue. FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients. arXiv preprint arXiv:2310.13283, 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The method described in Section 3 omits too many preliminaries and algorithmic details, which is reader-unfriendly. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: 1. The proposed approach essentially falls into the area of federated fine-tuning of LLMs with parameter-efficient fine-tuning (PEFT) techniques. However, this paper does not discuss any limitations that may be imposed by PEFT, especially LoRA as used in this paper, e.g., PEFT methods sacrifice performance because the parameter update is limited to a much lower-dimensional subspace, especially when data across clients are non-IID. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the positive score and valuable feedback from the reviewer! We'd like to respond to the reviewer's questions and comments on the following aspects: - Paper presentation comments: W1, Question, Limitation - SVD contribution: W2 - Additional related work: W3 --- ## Response to Paper Presentation Comments (W1, Q1 & L1) We deeply thank the reviewer for taking the time to read our paper and provide suggestions on typos and paper presentation. We will fix the typo issues that the reviewer mentioned in our next version. For Section 3, we present our FlexLoRA algorithm pseudo-code and Theorem 1 proof in Appendix Sections B and C due to the page limit. We will consider moving the algorithm pseudo-code and any relevant details from the Appendix to the main content to make the paper more reader-friendly if our paper is selected and allowed one additional page. We'd also like to include more discussions regarding the potential performance sacrifice of PEFT methods, as shown in relevant works such as [1]. ## Response to W2 (SVD contribution) > SVD seems to be because it provides better initialization for the low-rank matrices B and A. However, the authors do not mention this point, and this practice has already been presented in FeDeRA - [**Algorithm Difference**] We'd like to clarify that **our method differs from FeDeRA**. FeDeRA's approach involves the following steps: 1. `SVD Decomposition`: Decomposing the model's pretrained weight, $W_0$, using SVD: $SVD(W_0)=U\Sigma V^T$ 2. `LoRA Initialization`: Initializing LoRA with the top $r$ singular values: $B=U[:, :r]\sqrt{\Sigma[:r]},\ A=\sqrt{\Sigma[:r]}V^T[:r, :]$ 3. `Freezing Weights`: Using the difference between pretrained weights and LoRA as frozen weights: $\text{freeze}(W_0-BA)$ 4. `Federated Averaging`: Apart from the initialization, the method follows standard federated averaging with LoRA. 
This is very different from FlexLoRA, as **FlexLoRA uses SVD for aggregation while FeDeRA uses SVD for LoRA weight initialization**. - [**Relationship between SVD and initialization**]: There exists research indicating that the initialization of LoRA can impact its performance (e.g., SLoRA [2]). FeDeRA's improved performance is attributed to better initialization, where the initial LoRA weights contain information from the pre-trained weights, not to the use of SVD. In contrast, our paper does not claim *SVD as a major contribution*. Our key contribution lies in **enabling clients to scale ranks based on local resources, thereby enhancing performance; SVD is only utilized for scaling ranks to local clients and does not itself contribute to improving model performance.** In our setting, we have demonstrated that when all clients use the same rank, aggregation with SVD is equivalent to FedAvg, underscoring that the improvement in model performance comes not from SVD but from the increase in rank. --- ## Response to W3 (Additional Related Works) > Some relevant work in this field is missing, including: pFedLoRA, FedRA We thank the reviewer for providing the relevant work. These works will be cited and discussed in our final version to provide a more comprehensive context. Here is an overview comparison between the works the reviewer suggests and FlexLoRA: 1. FedRA: FedRA uses a random allocation strategy for federated tuning to leverage heterogeneous clients. Compared with FedRA, which randomly selects several layers for LoRA tuning on each client, FlexLoRA tunes all the layers with LoRA. 2. pFedLoRA: pFedLoRA mainly addresses personalized FL with LoRA, which is a different aspect of federated learning. 
pFedLoRA utilizes both global LoRA and local LoRA weights to boost the local performance of each client, which provides a solution in the personalization area, while our method focuses on an orthogonal optimization perspective: scaling rank adjustment based on local resources. Both approaches have the potential to be combined to further improve the utility in FL. Besides, following the reviewer's suggestion to add more relevant works, we conducted a supplementary experiment comparing FlexLoRA with FedRA (Table 1 below). From it we can observe that **FlexLoRA is more suitable than FedRA for fine-tuning LLMs in cross-device FL settings**. | | Table 12 Result (from paper) | FedRA | | --- | --- | --- | | Homo Rank | 55.34 | 53.29 | | Heavy Tail Light | 58.39 | 56.37 | Table 1: Results of FlexLoRA vs. FedRA --- ## Mentioned Refs [1] "Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment." arXiv, 2023 [2] "SLoRA: Federated parameter efficient fine-tuning of language models." arXiv preprint arXiv:2308.06522 (2023). --- ## Closing Remark We believe the paper has been further improved with your helpful comments, and hope that these responses will convince you to lean more toward acceptance of the paper. We will include the new discussion and results in our final version. Thank you once again! --- Rebuttal Comment 1.1: Comment: I have read the author rebuttal and made any necessary changes to my review. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for your reply and for making the necessary changes! We noticed that the review modification timestamp in the system may be inconsistent. Please let us know if any changes were overlooked or if you have any further questions. We will do our best to address all your comments. Thank you once again for your time and effort!
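The heterogeneous aggregation described in the rebuttals above (reconstruct each client's full-size update $B_iA_i$ on the server, average with per-client weights, then SVD-truncate back to each client's feasible rank) can be sketched as follows; the function name and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def aggregate_and_redistribute(updates, weights, client_ranks):
    """Sketch of SVD-based heterogeneous-rank LoRA aggregation.

    updates: list of (B_i, A_i) pairs, B_i of shape (d_out, r_i), A_i of (r_i, d_in).
    weights: per-client aggregation weights (e.g. data fractions), summing to 1.
    client_ranks: target rank for each client's redistributed LoRA pair.
    """
    # Reconstruct each client's full-size update and average on the server.
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, updates))
    # Decompose the aggregated update once ...
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    # ... then truncate to each client's feasible rank and split the
    # singular values evenly between the two low-rank factors.
    redistributed = []
    for r in client_ranks:
        B_r = U[:, :r] * np.sqrt(S[:r])            # (d_out, r)
        A_r = np.sqrt(S[:r])[:, None] * Vt[:r, :]  # (r, d_in)
        redistributed.append((B_r, A_r))
    return delta, redistributed
```

If a client's rank is at least the rank of the aggregated update, the redistribution is lossless; smaller ranks give the best low-rank approximation, which is the information-loss tradeoff the paper analyzes in Section 3.4.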
Summary: Under the framework of federated fine-tuning large model, this paper proposes a simple and effective LoRA aggregation method with different ranks, which mainly focuses on the problem of client resource heterogeneity. Specifically, the full-size LoRA is first obtained, and then after normal aggregation, it is decomposed into a matrix BA with the corresponding rank of each client by SVD. Extensive experiments demonstrate the effectiveness of the proposed approach when client resources are inconsistent. Strengths: 1. Different rank-based LoRA is directly matched to client resource inconsistency. The method of first multiplying into full-size LoRA and then decomposing into different ranks based on SVD is simple and effective, and has certain rationality. 2. The information loss of the parameters after SVD decomposition compared with the original global parameters is given in Section 3.4, and this method is generally reasonable. Weaknesses: 1. Does it increase the communication cost to convert matrix BA to full-size LoRA before uploading? 2. It seems that the configurations of ranks in Table 1 are not very diverse. Can the proposed method adapt to very diverse and more flexible configurations of ranks? 3. Different ranks seem to be specific to client resource heterogeneity, but do not seem to be particularly optimized for task heterogeneity? 4. While there are performance comparisons with the very related HETLoRA which is used to solve the resource heterogeneity problem, should comparison results with the rest of the state of the art FL methods also be given. Technical Quality: 3 Clarity: 2 Questions for Authors: As shown in the weaknesses section, we won't add anything more here. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: This paper lacks a detailed discussion of the limitations and shortcomings of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive score and insightful feedback on our paper! In the following replies, we address all the reviewer's comments point-by-point. ## Response to W.1 > Does it increase the communication cost to convert matrix BA to full-size LoRA before uploading? - The current design of our method **does not introduce more communication costs** due to the conversion of matrix BA to full-size LoRA. The local clients directly send their trained B and A matrices to the server, which then handles the full-size computation. - **This design is a tradeoff between local and global resource availability.** Typically, in cross-device FL scenarios, local clients are numerous but have limited resources, whereas the server is a device with ample computation power. Therefore, by offloading the full-size computation to the server, we avoid burdening the local clients with the communication cost of transmitting full-size BA matrices. --- ## Response to W.2 > It seems that the configurations of ranks in Table 1 are not very diverse. Can the proposed method adapt to very diverse and more flexible configurations of ranks? - **Our method can indeed scale to more diverse and flexible configurations of LoRA ranks.** Our current design utilizes a few typical ranks to validate our method, and the assignment of rank also follows previous works: our baseline FedIT [1] uses rank 8 in their experiment and [2] experiments on both rank 30 and rank 200 in their investigation of LoRA performance. - Following the reviewer's suggestion, we conducted supplementary experiments with a more diverse set of rank configurations, similar to those used in HETLORA, to demonstrate FlexLoRA’s adaptability to a broader range of scenarios. We select ranks from the following set: $r^i \in \{1,5,10,20,50,100,150,200\}$ uniformly (i.e., each rank is selected with the same probability). 
We conducted experiments on Dolly 15K with the same experimental setting as in Section 4.6: partitioning data into 200 clients with Dirichlet distribution $\alpha=0.5$, and testing the resulting Data-Juicer-1.3B model. From Table 1 below, we can observe that **under a more diverse rank setting, FlexLoRA is still able to maintain its superior performance over regular FedAvg.** | | RougeL | | --- | --- | | Homo Rank | 55.34 (from paper Table 12) | | Uniform | 58.56 (from paper Table 12) | | Uniform (more diverse rank) | 58.81 | Table 1: Results of FlexLoRA under a more diverse rank choice. --- ## Response to W.3 > Different ranks seem to be specific to client resource heterogeneity, but do not seem to be particularly optimized for task heterogeneity? - We'd like to point out that **using task characteristics to define ranks is not always feasible**. Each client may not have a single task, and determining ranks based on the client’s data distribution could be complex, potentially requiring additional training phases. Although the data partition in our main result in Table 2 is split by task, such a data partition approach is only for mimicking an extreme case of data heterogeneity, and in the real world we anticipate more task overlaps among clients. - To find a rank suitable for a client's local distribution, one may need to consider pruning ranks on different weights or layers to optimize performance [3, 4], which introduces considerable extra computation. In this case, designing ranks for different tasks is not as meaningful as scaling ranks based on computation resources. - Our approach, FlexLoRA, was designed for a realistic scenario with both resource and task heterogeneity. 
**Although ranks are directly scaled by client resources, our method aims to address both resource and task heterogeneity.** --- ## Response to W.4 > While there are performance comparisons with the very related HETLoRA which is used to solve the resource heterogeneity problem, should comparison results with the rest of the state of the art FL methods also be given. - While we directly compared FlexLoRA with HETLORA, we also referenced SLoRA (2023) as another method addressing LoRA heterogeneity. We did not compare SLoRA with FlexLoRA as directly as we did HETLORA, because *SLoRA is used as one of our FL frameworks that can be incorporated into FlexLoRA*. We'd like to mention that one of FlexLoRA’s strengths lies in its **ability to integrate with various existing methods rather than competing directly.** - Following the reviewer's suggestion on adding more relevant works, we conducted new experiments to compare FlexLoRA with FedRA (2024), another SOTA method that employs a random allocation strategy to handle resource heterogeneity. In Table 2 below, we can observe that FlexLoRA is more suitable than FedRA for fine-tuning LLMs in cross-device FL settings. | | Table 12 Result (from paper) | FedRA | | --- | --- | --- | | Homo Rank | 55.34 | 53.29 | | Heavy Tail Light | 58.39 | 56.37 | Table 2: Results of FlexLoRA vs. FedRA --- ## Mentioned Refs [1] "Towards building the federatedGPT: Federated instruction tuning." ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. [2] "Towards a unified view of parameter-efficient transfer learning." arXiv preprint arXiv:2110.04366 (2021). [3] "Efficient personalized federated learning via sparse model-adaptation." International Conference on Machine Learning. PMLR, 2023 [4] "Think locally, act globally: Federated learning with local and global representations." 
arXiv preprint arXiv:2001.01523 (2020) --- ## Closing Remark We sincerely hope that our responses have adequately addressed your comments. If you feel they have, we would be grateful if you could consider adjusting the evaluation of our manuscript accordingly. Thanks again for your valuable suggestions! We look forward to any further guidance you may have. --- Rebuttal Comment 1.1: Comment: The author has addressed my concerns, and I agree with raising the score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thanks for your quick reply! We sincerely appreciate your support in raising the score. Your constructive feedback has been invaluable in enhancing the quality of our work.
Summary: This paper proposes FlexLora to tackle FL's heterogeneous resources and data distribution problem. FlexLora allows for dynamic adjustment of local Lora ranks by employing SVD for weight distribution, improving the global model’s generalization ability. Strengths: 1. The experiments are comprehensive. The experiments verified that the proposed method is efficient and improves the generalization. 2. The paper is well-written and easy to follow. Weaknesses: 1. This paper claims to follow a cross-device setting, but cross-device setting usually involves thousands of clients, while the client number in the experiment of this paper is much smaller. If the method can only be verified in a small-scale experiment, cross-device setting is not a very rigorous statement. Cross-silo setting might be more suitable for this paper. 2. One of the main contributions of this paper is allowing for dynamic adjustment of local Lora ranks in FL. Some dynamic/flexible Lora methods already exist in central learning. I wonder whether these methods can also be applied to tackle the heterogeneity of downstream tasks in FL. Related discussions or experiments are suggested here. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weakness. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations of the model and client scale. For the small client scale, claiming a cross-device setting is not so rigorous. Cross-silo setting might be more suitable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's acknowledgment of our contribution! We respond to the reviewer's suggestions in the following two replies. ## Response to W.1 > This paper claims to follow a cross-device setting, but cross-device setting usually involves thousands of clients, while the client number in the experiment of this paper is much smaller. ... Cross-silo setting might be more suitable for this paper. - While it is true that our experiments do not reach the scale of millions of clients for cross-device simulation, our study involves over 1,600 clients, which is comparable with existing cross-device FL work. For example, [1] implements cross-device FL benchmarks on the Landmarks dataset with 1,262 clients, and [2] uses more than 700 clients in their cross-device FL setting. - Besides, to account for FlexLoRA's scalability in FL settings with a larger client pool, we present Figure 5 in our paper, which demonstrates that **the efficacy of our method grows with the number of clients**, showcasing its potential for even larger cross-device federated learning (FL) settings. --- ## Response to W.2 > One of the main contributions of this paper is allowing for dynamic adjustment of local Lora ranks in FL. Some dynamic/flexible Lora methods already exist in central learning. I wonder whether these methods can also be applied to tackle the heterogeneity of downstream tasks in FL. - We thank the reviewer for their insightful comment. While dynamic/flexible LoRA methods do exist in centralized learning, their direct applicability to FL settings, especially in handling the heterogeneity of downstream tasks, is not guaranteed. - Per your suggestion, we conducted preliminary experiments by adapting concepts from one of the centralized dynamic LoRA works, ReLoRA [3], to federated settings. 
ReLoRA trains LoRA for several epochs, merges the LoRA weights into the pretrained weights, then initializes a new LoRA module and trains based on the new frozen weights, repeating this process for several LoRA modules. We create a baseline by incorporating the concept of ReLoRA, merging aggregated LoRA weights into the pretrained model after several communication rounds. - However, we observed a slower convergence speed than regular FlexLoRA. We show in Table 1 below that incorporating the step of "merging LoRA weights" does not lead to better performance. This suggests that **observations and improvements seen in centralized dynamic LoRA methods may not translate directly to federated learning due to the unique challenges posed by heterogeneity in FL**, which underlines the importance of tailored solutions like FlexLoRA for effectively managing heterogeneity in FL environments. | | Homo Rank (Baseline) | Uniform | | --- | --- | --- | | Table 2 Result (from paper) | 56.53 (from paper) | 58.07 (from paper) | | +ReLoRA Result | 53.9 | 56.56 | Table 1: Results with/without incorporating ReLoRA into either regular FedAvg or FlexLoRA. --- ## Mentioned Refs [1] "Motley: Benchmarking heterogeneity and personalization in federated learning." arXiv preprint arXiv:2206.09262 (2022). [2] "Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes." International Conference on Machine Learning. PMLR, 2023. [3] "Relora: High-rank training through low-rank updates." The Twelfth International Conference on Learning Representations. 2023. --- ## Closing Remark We hope our responses can address your comments. Again, we would like to thank you for your support of our work and detailed comments!
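The ReLoRA-style step adapted above can be sketched as follows: after some communication rounds, the aggregated LoRA update is folded into the frozen base weights and a fresh LoRA pair is initialized (B zeroed, A random, so the merged model's output is initially unchanged). The function name and initialization scale are illustrative assumptions, not the authors' code.

```python
import numpy as np

def merge_and_reset(W0, B, A, rank, rng):
    """Fold the current LoRA update into the frozen base weights and
    re-initialize a fresh LoRA pair (ReLoRA-style reset, sketched)."""
    W0 = W0 + B @ A                  # absorb the low-rank update
    d_out, d_in = W0.shape
    B_new = np.zeros((d_out, rank))  # standard LoRA init: B = 0 ...
    A_new = rng.normal(scale=0.02, size=(rank, d_in))  # ... A random, so B @ A = 0
    return W0, B_new, A_new
```

Right after the reset, the effective weights `W0 + B @ A` are identical to those before it; only the trainable low-rank subspace changes, which is what lets ReLoRA accumulate a high-rank update from a sequence of low-rank ones.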
Summary: This paper proposes a LoRA-based federated fine-tuning algorithm called FlexLoRA. FlexLoRA keeps full-size LoRA modules on the server, and decomposes them into heterogeneous rank LoRA modules according to the client task and capacity. The heterogeneous local LoRA modules will be transferred back to full-size and then aggregated on the server. FlexLoRA can make LoRA-based federated fine-tuning more flexible and easier to implement. Strengths: (1) This paper uses SVD to solve the heterogeneous LoRA rank problem in FL, which looks valid and straightforward. The explanation of the proposed method is clear and easy to follow. (2) The paper provides a sufficient scalability study. With slightly increased cost per round, FlexLoRA can converge much faster than baselines. Weaknesses: (1) The main concern is the privacy issue of FlexLoRA. Since the server needs to preset all the local LoRA ranks, the server needs to get access to some client information such as the amount of data, computation resources, ..., etc. Will this threaten the privacy of local data? (2) The experiment shows results of language model around 1B. Why larger models (e.g., llama) and benchmarks (e.g., mmlu, gsm8k) are not shown in the body part. There are some results in the appendix but may need more detailed discussion in the experiment section. Technical Quality: 3 Clarity: 3 Questions for Authors: please see the weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback and their recognition of our work's contribution to addressing heterogeneity problems in FL! In response to the raised comments, we summarize the two main questions that the reviewer pointed out in the weakness section. --- ## Response to W.1 > Since the server needs to preset all the local LoRA ranks, the server needs to get access to some client information such as the amount of data, computation resources, ..., etc. Will this threaten the privacy of local data? Thank you for your comments. The information disclosed from the clients to the server is minimal: **Compared with the vanilla FL method, only the preset LoRA ranks are required to be shared from the clients.** We discuss the influence of disclosure of LoRA rank as the following: - [**Computation resource**] The LoRA rank is primarily correlated with the computation resources of each client. If the information about computation resources is sensitive to the client, the client can choose to employ encryption techniques or rely on trusted third parties for secure communication. - [**Amount of data**] In Federated Learning, the amount of data is usually shared with the server for calculating weight aggregation. **FlexLoRA does not spill more information than vanilla FL methods.** If the amount of data becomes sensitive information in some FL training settings, we can encrypt such information with existing approaches like secure aggregation [1]. --- ## Response to W.2 > The experiment shows results of language model around 1B. Why larger models (e.g., llama) and benchmarks (e.g., mmlu, gsm8k) are not shown in the body part. We answer the reviewer's question about the experiment setting in the following two aspects: 1. 
**Model Size:** - We selected a 1B base model for the main experiments to accommodate the real-world cross-device FL setting where thousands of devices are present and a majority of clients are edge devices with limited computation power. - Following the reviewer's suggestion on accounting for FlexLoRA's performance under larger models, we present the results of Llama 3 (8B) on the Dolly-15K dataset with 1,000 clients as a supplementary experiment in Table 1. The client sample rate is 0.01 and we use FedIT as the base framework. We can see that FlexLoRA achieves better performance under uniform distribution and heavy tail light distribution compared with regular FedIT, which assigns a homogeneous rank to each client's base model. 2. **Benchmark:** - For the results under Natural Instructions (Table 2 in the paper), we followed the standard of the original Natural Instructions paper [2], which involves using Rouge-L scores and task-specific evaluations for evaluating model performance. [2] has shown that **Rouge-L is positively correlated with the human evaluation result and is a valid indicator of model language processing performance**. | **Resource Dist** | **RougeL** | | --- | --- | | Homo Rank (FedIT) | 68.88 | | Uniform | 69.49 | | Heavy tail light | 69.29 | Table 1: FlexLoRA's performance on Llama 3 (8B) under 1,000 clients. --- ## Mentioned Refs [1] "Practical secure aggregation for federated learning on user-held data." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017. [2] "Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks." arXiv preprint arXiv:2204.07705 (2022). --- ## Closing Remark We truly appreciate your valuable comments, which have contributed to the improvement of our paper. We hope that our responses demonstrate our commitment to addressing your comments and encourage you to favorably consider accepting the paper. 
We will ensure that the new discussion and results are incorporated into the final version. Thank you once again for your support! --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I very much appreciate the authors' response. My concerns have been well addressed, and hence I am willing to raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your response! We greatly appreciate your support throughout the review process. Your constructive feedback has been instrumental in helping us refine our work.
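Since Rouge-L is the metric reported throughout the tables in these rebuttals, a minimal sketch of the sentence-level, LCS-based Rouge-L F-measure may help; whitespace tokenization and the $\beta$ default are simplifying assumptions here, not the authors' exact evaluation setup.

```python
def lcs_len(xs, ys):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    """Sentence-level Rouge-L: an F-measure over LCS-based precision/recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)
```

Because it rewards the longest in-order overlap rather than exact n-gram matches, Rouge-L tolerates paraphrasing, which is part of why [2] found it correlates with human judgments.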
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity
Accept (spotlight)
Summary: Differently from other papers in the field of federated learning, the authors consider the problem of uplink compression - from the server to the workers. The main contributions of the paper are MARINA-P and its variants. These optimization schemes leverage PermK sparsification to reduce the communication complexity in the uplink. When combined with momentum, MARINA-P achieves a total communication complexity that, in the case of $n \gg 1$ and for close-to-homogeneous clients, is smaller than competing state-of-the-art alternatives. Strengths: The paper is original as it focuses on uplink compression, an aspect which is often overlooked in the federated learning community. The results are very interesting, as the authors not only show that under suitable conditions, MARINA-P has very low communication complexity, but they also demonstrate that without additional assumptions, there is an unavoidable lower bound on the communication complexity. The paper is very well-written; in fact, despite being very dense, it still introduces the problem and the proposed solution in a very clear manner. Weaknesses: - The main weakness of the paper is the assumption of data homogeneity. While I understand that this assumption is necessary, it would be interesting to include experimental evidence in the main text showing how MARINA-P performs with heterogeneous data. - The following might be erroneous, but it seems to me that the overall number of coordinates sent in the uplink is $d$. From the discussion in Section 4.5 it seems that the parameter vector is permuted and divided into non-overlapping chunks, and each chunk is sent to a different client. As such, it looks like the \emph{overall} number of coordinates sent to the server is $d$. Technical Quality: 3 Clarity: 4 Questions for Authors: - Related to my previous point, how many coordinates are sent into the uplink at each round from the server to all users? i.e. 
what is the support of $\sum^n_{i=1} \mathcal{C}_i(x)$? - Is MARINA-P compatible with client selection? Meaning, is it possible to apply MARINA-P if, at each round, the number of users varies or a different subset is chosen? - Would it be possible to compare MARINA-P to other state-of-the-art schemes that use masking to reduce communication (e.g., SPARSE RANDOM NETWORKS FOR COMMUNICATION-EFFICIENT FEDERATED LEARNING)?" Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are adequately discussed in the text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for acknowledging the contributions of our research. We will now address each of your comments and provide clarifications. __Weaknesses__ > The main weakness of the paper is the assumption of data homogeneity. While I understand that this assumption is necessary, it would be interesting to include experimental evidence in the main text showing how MARINA-P performs with heterogeneous data. The assumption of data homogeneity is not necessary, and we do not assume it! While it is true that MARINA-P and M3 perform exceptionally well in the close-to-homogeneous regime, all our main results hold for arbitrarily heterogeneous clients. The only requirement is that the functional $(L_A,L_B)$ inequality holds. Importantly, for this to be true, the local functions do not need to be homogeneous. In fact, the functional $(L_A,L_B)$ inequality is weaker than the $L_i$-smoothness of the local functions $f_i$, which can clearly hold in heterogeneous scenarios. > The following might be erroneous, but to me seems like that the overall number of coordinates sent in the uplink is $d$. From the discussion in subsection 4.5 it seems like that the parameter vector is permuted and divided into non-overlapping chunks, and each chunk is sent to a different client. As such, it looks like that the _overall_ number of coordinates sent to the server is $d$. Yes, it is true that each client receives $\approx d/n$ coordinates, so the total number sent by the server is $d$ (as is the case for any sparsifier with the same sparsification level). Notice that without compression, the server would send $d$ coordinates to each worker, resulting in a total of $n \times d$ instead of $d$ coordinates sent. __Questions__ > Related to my previous point, how many coordinates are sent into the uplink at each round from the server to all users? i.e. what is the support of $\sum \mathcal{C}_i(x)$? 
As mentioned previously, the server sends $K \approx d/n$ (distinct) coordinates to each client in the case of the permutation compressors, and $\sum \mathcal{C}_i(x) = x$. > Is MARINA-P compatible with client selection? Meaning, is it possible to apply MARINA-P if, at each round, the number of users varies or a different subset is chosen? Such behavior can be modeled using a specific compressor that satisfies Definition 1.4, and hence our theory applies. Specifically, let us define a compression operator $\mathcal{C}$ via $\mathcal{C}(x) = \mathcal{\bar{C}}(x)$ with probability $p$, and $0$ otherwise, where $\mathcal{\bar{C}} \in \mathbb{U}(\bar{\omega})$. It can easily be shown that $\mathcal{C} \in \mathbb{U}(p\bar{\omega}+1-p)$. In this model, the case when $\mathcal{C}(x) = 0$ can be interpreted as a worker not participating in this step. > Would it be possible to compare MARINA-P to other state-of-the-art schemes that use masking to reduce communication (e.g., SPARSE RANDOM NETWORKS FOR COMMUNICATION-EFFICIENT FEDERATED LEARNING)?" First, [5] focuses on training sparse networks, whereas our objective is the optimization of a general objective function. More importantly, the method in [5] does not address communication reduction in the downlink direction. The primary goal of the proposed FedPM algorithm is to train a probability mask to find the optimal sparse random network within the original random one, aiming to reduce the uplink communication cost. In contrast, this paper targets both downlink and uplink communication directions. MARINA-P specifically targets the downlink communication direction. Since these two algorithms address different goals, a direct comparison is not evident. [5] Isik, B., Pase, F., Gunduz, D., Weissman, T., & Zorzi, M. (2022). Sparse random networks for communication-efficient federated learning. arXiv preprint arXiv:2209.15328. --- We hope that the explanations provided adequately address the reviewer’s comments. 
Should any additional clarifications be required, we are happy to provide them. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! Regarding the number of bits sent by the server to the workers: The channel from the server to the workers is broadcast, meaning that whatever the server transmits is received by all workers. Therefore, whether the server sends the same vector to all devices (as normally done) or sends different chunks of the vector to different devices (as in MARINA-P), the number of bits and communication cost remain the same. Could you clarify under which communication model MARINA-P is more efficient? --- Reply to Comment 1.1.1: Comment: Thank you for the question. We agree that the number of bits that the server produces on the way out is the same in both cases. However, the number of bits that the server sends to *one worker* will be different. For MARINA-P with PermK, it is $d / n$ instead of $d.$ In Definition 1.3, we say "The server-to-worker (s2w) communication complexity of a method is the expected number of coordinates/floats the server sends to **a worker** to find an $\varepsilon$–solution." This definition is consistent with the fact that the downlink communication issue stems from the clients' download speed. Hence, the critical factor is the size of the message each client receives. To clarify further, imagine a tree (graph) with a central node (server) and $n$ child nodes (clients). Each worker is connected to the server by a unique edge. In this setup, the server-to-worker (s2w) communication complexity of a method is measured by the expected number of coordinates/floats the main node transmits through each edge to each of the connected workers. We hope this answers your question. If you have any more, please do not hesitate to ask.
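[Editor's note] The chunking scheme discussed in this thread can be illustrated with a minimal sketch (hypothetical code, not from the paper): the server applies a shared random permutation to the $d$ coordinates, assigns a disjoint chunk of roughly $d/n$ coordinates to each of the $n$ workers, and the chunks together cover the full vector, so each worker downloads only $d/n$ coordinates per round. The scaling factor used in the formal PermK definition to make the compressor unbiased is omitted here for simplicity.

```python
import numpy as np

def permk_chunks(x, n, rng):
    """Split a permuted coordinate set into n disjoint chunks.

    Returns one sparse vector per worker; each has ~d/n nonzeros,
    and the chunks sum back to x (unbiasedness scaling omitted).
    """
    d = x.size
    perm = rng.permutation(d)             # shared random permutation
    chunks = np.array_split(perm, n)      # n disjoint index sets
    out = []
    for idx in chunks:
        c = np.zeros(d)
        c[idx] = x[idx]                   # coordinates worker i receives
        out.append(c)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(12)
cs = permk_chunks(x, n=3, rng=rng)
# Each worker downloads d/n = 4 coordinates; together the chunks cover x.
assert all(np.count_nonzero(c) == 4 for c in cs)
assert np.allclose(sum(cs), x)
```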
Summary: This paper presents methods to improve communication efficiency in distributed optimization, particularly focusing on the server-to-worker (s2w) communication costs, which are often overlooked. The authors first present a lower bound on the required rounds of communication. They use it to argue that, with no assumption relating the individual functions stored at the distributed workers, the s2w communication load cannot be improved compared to the naive design. Then, the authors provide an algorithm and analysis showing that when the Hessians of the individual functions are similar or aligned, improvements in communication complexity can be achieved. Strengths: The paper introduces a formal quantification, called the functional $(L_A, L_B)$ inequality, to measure the similarity in the smoothness of the individual functions available to the workers. Novel algorithms and analysis are provided to show that the similarity can be used to improve the s2w communication load. The paper also includes an interesting theoretical analysis, providing lower bounds on the required rounds of communication in the worst case. Weaknesses: 1. The authors use the lower bound on the rounds of communication to claim a lower bound on the communication load. However, there is a gap in this argument: The authors state that since the rounds of communication are lower bounded by a certain function, the overall communication load is lower bounded by d times that function, given that the full gradient contains d entries. But if a compressor is used as assumed by the authors, where the server can deliver an unbiased function that has fewer entries, it is entirely possible that the communication load is smaller than d times the rounds of communication, violating this claimed bound. Therefore, I have to suggest a weak rejection. I'm happy to raise my score if this issue can be appropriately fixed. 2. 
The concept of the functional $(L_A, L_B)$ inequality can be simplified since only $L_A$ is used in the main results. It can be stated as follows: the functions satisfy an $L_A$ condition if we can find $L_B$ such that inequality (9) is satisfied. Technical Quality: 3 Clarity: 3 Questions for Authors: Is there any connection between $L_A$ and the Hessians of the individual functions, similar to the smoothness parameter $L$, which can be interpreted as the largest eigenvalue of the Hessian of $f$? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for appreciating the strengths of our work. We would like to address your concerns and provide additional explanations. __Weaknesses__ > The authors use the lower bound on the rounds of communication to claim a lower bound on the communication load.... As far as we understand, the reviewer is pointing to a possible gap in our first contribution from Section 3. We believe that there may have been a misunderstanding, and there are no gaps in our argument. First of all, we are careful with our claims and only provide the lower bound $\Omega\left( \frac{(\omega + 1) L \delta_0}{\varepsilon} \right)$ for the number of **iterations/rounds.** We have never provided _a lower bound on the communication load_, and never claimed that _the overall communication load is lower bounded by $d$ times that function, given that the full gradient contains $d$ entries_. Once again, we only proved the lower bound $\Omega\left( \frac{(\omega + 1) L \delta_0}{\varepsilon} \right)$ for the number of iterations/rounds, where $\omega$ is the variance from Definition 1.4. We are on the same page with the reviewer that in order to determine the communication complexity, we have to multiply the number of iterations/rounds by the number of entries/coordinates the compressor sends per round. For simplicity, let us consider the Rand$K$ compressor, which sends $K$ random coordinates and has variance $\omega = \frac{d}{K} - 1.$ Assuming that the server uses Rand$K$ in every iteration/round, the lower bound on the communication complexity is $$K \times \Omega\left( \frac{(\omega + 1) L \delta_0}{\varepsilon} \right) = K \times \Omega\left( \frac{d L \delta_0}{K \varepsilon} \right) = \Omega\left( \frac{d L \delta_0}{\varepsilon} \right),$$ because, as the reviewer noted, we have to multiply by $K$ (not $d$), a point on which we and the reviewer agree. Hence, we get the communication complexity $\Omega\left( \frac{d L \delta_0}{\varepsilon} \right)$ of GD. 
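[Editor's note] The Rand$K$ variance $\omega = d/K - 1$ used in this calculation can be checked exactly by enumerating all $K$-subsets for a small $d$ (illustrative code, not from the paper; the scaling by $d/K$ is what makes the compressor unbiased):

```python
from itertools import combinations
import numpy as np

def randk_moments(x, K):
    """Exact mean and variance of RandK over all K-subsets of [d].

    RandK sends K uniformly chosen coordinates scaled by d/K, which
    makes it unbiased with variance E||C(x)-x||^2 = (d/K - 1)||x||^2.
    """
    d = x.size
    outs = []
    for S in combinations(range(d), K):
        c = np.zeros(d)
        c[list(S)] = (d / K) * x[list(S)]   # unbiasedness scaling
        outs.append(c)
    mean = np.mean(outs, axis=0)
    var = np.mean([np.sum((c - x) ** 2) for c in outs])
    return mean, var

x = np.array([1.0, -2.0, 3.0, 0.5])
mean, var = randk_moments(x, K=2)
omega = x.size / 2 - 1                      # d/K - 1 = 1 here
assert np.allclose(mean, x)                 # E[C(x)] = x
assert np.isclose(var, omega * np.sum(x ** 2))
```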
This argument leaves no hope of improving the $\Omega\left( \frac{d L \delta_0}{\varepsilon} \right)$ communication complexity without an extra assumption. It is worth noting that we do not prove a lower bound for the communication complexity $\Omega\left( \frac{d L \delta_0}{\varepsilon} \right)$ in general. Instead, we establish a lower bound $\Omega\left( \frac{(\omega + 1) L \delta_0}{\varepsilon} \right)$ for the number of iterations/rounds, and then, assuming that the server uses, for instance, Rand$K$ in every iteration, we get the $\Omega\left( \frac{d L \delta_0}{\varepsilon} \right)$ communication complexity. Even this result provides strong evidence that we need an extra assumption to get further theoretical improvement. We believe that this clarification addresses the reviewer's concerns and explains that there are no mathematical gaps in our argument. We hope that the reviewer will reconsider the score. In case of any additional questions or if further clarification is needed, please let us know. > The concept of the functional ($L_A, L_B$) inequality can be simplified since only $L_A$ is used in the main results. It can be stated as follows: the functions satisfy an $L_A$ condition if we can find $L_B$ such that inequality (9) is satisfied. It is true that $L_B$ does not appear in the result from Theorem 4.6 and Corollary 4.7. This is a particular strength of MARINA-P with permutation compressors. Let us look at the general results in Theorem D.1 (together with Theorem D.2 and Corollary D.4). The upper bound on the stepsize depends on $L_B^2\theta$, where $\theta$ is a compression parameter from Definition A.3. The reason why this term does not appear in Theorem 4.6 and Corollary 4.7 is that for permutation compressors, one very conveniently obtains $\theta=0$, which ultimately translates into a superior complexity bound. However, the $L_B$ term is necessary for comparing this result with different compression schemes where $\theta>0$. 
Moreover, both $L_A$ and $L_B$ appear in the complexity bounds of M3 (Theorem 5.1), so their introduction in the main part of the paper is necessary. Overall, the functional $(L_A,L_B)$ inequality enables us to capture the intricate structure of the objective function and obtain superior complexity results for certain objective functions (those satisfying the assumption with small $L_A$). This would not be possible without introducing both the $L_A$ and $L_B$ terms. While the second term on the right-hand side of (9) (the $L_B$ term) can be bounded by the first term using Jensen's inequality, reallocating mass from $L_B$ to $L_A$ results in worse theoretical guarantees. Therefore, keeping $L_A$ as small as possible is beneficial, as smaller values yield better complexity results. Hence, the possibility that $L_B > 0$ is critical for obtaining the enhancements we present. On the other hand, since in general $L_A\neq0$, requiring that the $(L_A,L_B)$ inequality holds with $L_A=0$ would limit the applicability of our results. We hope this clarifies why both terms are needed. Please let us know if there are any further questions. __Questions__ > Is there any connection between $L_A$ and the Hessians of the individual functions, similar to the smoothness parameter $L$, which can be interpreted as the largest eigenvalue of the Hessian of $f$? Yes, there is such a connection! In fact, this result is already included in the paper. Please see Theorem 4.8, where we derive $L_A$ and $L_B$ based on the Hessians of the functions $f_i$. --- We hope that the explanations provided adequately address the reviewer's comments, and we kindly ask for a reconsideration of the score. Please let us know if more details or clarifications are needed. --- Rebuttal Comment 1.1: Comment: I have read the authors' response. Let me first clarify that my comment on the gap is due to the statements in the contributions in Section 2 rather than Section 3. 
The first contribution states: "This result gives no hope for improving the communication complexity (2) in the worst case.", but the worst case in equation (2) refers to the communication cost instead of the rounds of communication, i.e., $d\delta^0L/\epsilon$ vs the $(\omega+1)\delta^0L/\epsilon$ in the manuscript. This difference is precisely where the gap happens, and the only way to make sense of it is to have a lower bound on the communication cost rather than on the rounds of communication. Similarly, the second main contribution in Section 2 uses the $(\omega+1)\delta^0L/\epsilon$ lower bound as motivation to introduce extra assumptions, but the third main contribution again shows an upper bound on communication complexity, so the same gap happens. As explained by the authors, the attempt to bridge this gap is different from, but similar in spirit to, the example provided by the reviewer: it takes some "worst cases" over the designs rather than over the problem scenarios. I believe we can agree that this is not a convincing argument, as the authors' response clarified that this is not a proof either. However, the issue with the manuscript is that Section 2 hints at the opposite, as explained above. For clarity and readability, the reviewer recommends the authors revise related statements to ensure that all claims in the paper exactly reflect the technical contributions. --- Reply to Comment 1.1.1: Comment: First, we would like to thank the reviewer for a quick response! We still believe that there is no mathematical gap in our assertions, as the sentence _This result gives no hope for improving the communication complexity (2) in the worst case_ is not a strong mathematical statement. 
What we intended to say is that _This result **indicates** it is not possible to improve the communication complexity (2) in the worst case_, which means that while we cannot be certain the bound is impossible to break with *any technique*, it is definitely not possible with approaches like the Rand$K$ compressor due to the lower bound theorem. Nevertheless, we agree that this can lead to misunderstandings. Therefore, we propose to rewrite the contributions section in the following way (changes are in **bold**): 1. We start by proving the impossibility of devising a method where the server communicates with the workers using unbiased compressors $\mathbb{U}(\omega)$ (or biased compressors from Section B) and achieves an iteration rate faster than $\Omega\left(\frac{(\omega + 1) L \delta^0}{\varepsilon}\right)$ (Theorem 3.1) under Assumptions 1.1, 1.2, and 1.6. ~~**This result gives no hope for improving the communication complexity (2) in the worst case**~~. **This result provides a lower bound for any method that applies such compressors to vectors sent from the server to the workers in every iteration. Moreover, we prove a more general iteration lower bound of $\Omega\left(\frac{(\omega + 1)L \delta^0}{\varepsilon}\right)$ for all methods where the server zeroes out a coordinate with probability $1 / (\omega + 1)$ (see Remark 3.2).** 2. In view of this result, it is clear that an extra assumption is needed to break the lower bound $\Omega\left(\frac{(\omega + 1) L \delta^0}{\varepsilon}\right).$ In response, we introduce a novel assumption termed "Functional $(L_A, L_B)$ Inequality" (see Assumption 4.2). We prove that this assumption is relatively weak and holds, for instance, under the local smoothness of the functions $f_i$ (see Assumption 1.5). 3. 
We develop a new method for downlink compression, MARINA-P, and show that, under our new assumption, **along with Assumptions 1.1, 1.2 and 1.6, it can achieve the iteration rate of** \begin{align*} \textstyle O\left(\frac{\delta^0 L}{\varepsilon} + \frac{\delta^0 L_A (\omega + 1)}{\varepsilon} + \frac{\delta^0 L_B (\omega + 1)}{\varepsilon \sqrt{n}}\right). \end{align*} **(see Theorem D.1 with $p = 1 / (\omega + 1)$ + Lemma A.5)**. Notably, when $L_A$ is small and $n \gg 1,$ this **iteration** complexity is provably superior to **$\Theta(\frac{\delta^0 L (\omega + 1)}{\varepsilon})$** and the complexities of the previous compressed methods. In this context, $L_A$ serves as a measure of the similarity between the functions $f_i$, and can be bounded by the ``variance'' of the Hessians of the functions $f_i$ (see Theorem 4.8). Thus, MARINA-P is the first method whose **iteration** complexity can provably improve with the number of workers $n$. 4. **Moreover, MARINA-P can achieve the s2w communication complexity of** \begin{align*} \textstyle O\left(\frac{d \delta^0 L}{n \varepsilon} + \frac{d \delta^0 L_A}{\varepsilon}\right). \end{align*} **When $L_A$ is small and $n \gg 1,$ this communication complexity is provably superior to (2) and the communication complexities of the previous compressed methods.** 5. Our theoretical improvements can be combined with techniques enhancing the w2s communication complexities. In particular, by combining MARINA-P with MARINA (Gorbunov et al., 2021) and adding the crucial momentum step, we develop a new method, M3, that guarantees a total communication complexity (s2w + w2s) of \begin{align*} \textstyle O\left(\frac{d \delta^0 L_{\max}}{n^{1/3} \varepsilon} + \frac{d \delta^0 L_A}{\varepsilon}\right). \end{align*} When $n > 1$ and in the close-to-homogeneous regime, i.e., when $L_A$ is small, this **communication** complexity is better than (2) and the complexities of the previous bidirectionally compressed methods. 6. 
Our theoretical results are supported by numerical experiments (see Section F). Additionally, we will change Lines 122-123 to "Let us first investigate the possibility of improving the iteration complexity $O\left(\frac{(\omega + 1)\delta^0 L}{\varepsilon}\right)$ under Assumptions 1.1, 1.2, and 1.6." These extra 4-5 sentences fully resolve any potential misunderstandings and clarify our contributions. These are very minor adjustments that can be easily incorporated into the camera-ready version of the paper. We believe that we have resolved the concerns and will happily answer any additional questions.
Summary: This paper addresses optimizing server-to-worker communication in distributed optimization by introducing MARINA-P, a novel method using correlated compressors for downlink compression. The study identifies inefficiencies in current downlink compression approaches and shows that MARINA-P can reduce communication complexity as the number of workers increases. Additionally, MARINA-P serves as a foundation for bidirectional compression methods. The paper also presents M3, which combines MARINA-P with uplink compression and a momentum step, further improving total communication complexity. Both theoretical analyses and empirical experiments validate the efficiency of the proposed algorithms. Strengths: The paper is exceptionally well written, with technical results rigorously proved. One of the main novelties is the use of correlated compressors to achieve a server-to-worker communication complexity that improves with the number of workers. Weaknesses: The compressor model (Definition 1.4), while standard in the federated learning literature, is somewhat contrived. It is primarily defined in its current form to facilitate analysis. It remains unclear whether the main conclusions derived under this model can be directly translated to the information-theoretic setting (quantization plus entropy coding), where compression efficiency is measured in terms of bits per sample. Technical Quality: 4 Clarity: 4 Questions for Authors: The role of L_B is not completely clear, as it does not prominently affect communication complexities compared to L_A. Is it that by isolating L_B from L_A, one can effectively reduce L_A and, consequently, the communication complexities as well? Moreover, the class of functions for which L_B is known to be positive is quite restricted (Theorem 4.5). It would be beneficial to find more general examples. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper does not explicitly address its limitations. 
The only related point is that the proposed method does not consistently outperform the CORE method by Yue et al. (2023) in terms of performance (p. 8). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and are grateful for highlighting the positive aspects of our work. We will now proceed to address the concerns you raised and provide clarifications. __Weaknesses__ > The compressor model (Definition 1.4), while standard in the federated learning literature, is somewhat contrived. It is primarily defined in its current form to facilitate analysis. As in all mathematical theories, we need some assumptions about compressors to prove theorems and develop theory. Definition 1.4 is particularly useful because it not only facilitates analysis but also captures a wide variety of compressors. > It remains unclear whether the main conclusions derived under this model can be directly translated to the information-theoretic setting (quantization plus entropy coding), where compression efficiency is measured in terms of bits per sample. In Tables 1 and 2, we compare the compression efficiencies of the algorithms based on the total number of coordinates sent to find an $\varepsilon$-stationary point. The formulas we compare there do not involve parameters or notation from Definition 1.4. Following the convention in virtually all papers in computer science published at NeurIPS, we ignore the fact that all computations use float32/64 instead of real numbers. With this simplification, sending one coordinate is equivalent to sending 32/64 bits. We acknowledge that addressing the errors arising from floating-point arithmetic is an important problem. __Questions__ > The role of $L_B$ is not completely clear, as it does not prominently affect communication complexities compared to $L_A$. Is it that by isolating $L_B$ from $L_A$, one can effectively reduce $L_A$ and, consequently, the communication complexities as well? Moreover, the class of functions for which $L_B$ is known to be positive is quite restricted (Theorem 4.5). It would be beneficial to find more general examples. Thank you for your comment. 
We agree that a more detailed discussion of this point (currently in the appendix) would benefit the paper, and we are happy to include it in the revised version. Let us clarify the results. Theorem 4.5 serves as an illustration of a case when $L_A=0$. However, please note that we do have much more general results with $L_B>0$; see, for example, Theorem 4.8 (where we derive the constants $L_A$ and $L_B$ in terms of Hessians of the local functions $f_i$). It is true that $L_B$ does not appear in the result from Theorem 4.6 and Corollary 4.7. However, this is not a disadvantage but rather a particular strength of MARINA-P with permutation compressors. To explain this, let us refer to a more general result in Theorem D.1 (along with Theorem D.2 and Corollary D.4). The upper bound on the stepsize depends on $L_B^2\theta$, where $\theta$ is a compression parameter from Definition A.3. The reason why this term does not appear in Theorem 4.6 and Corollary 4.7 is that for permutation compressors, we very conveniently obtain $\theta=0$, which ultimately translates into a superior complexity bound. The functional $(L_A,L_B)$ inequality enables us to capture the intricate structure of the objective function and achieve superior complexity results for certain objective functions (particularly those satisfying the assumption with small $L_A$). This would not be possible without introducing both the $L_A$ and $L_B$ terms. While the second term on the right-hand side of (9) (the $L_B$ term) can be bounded by the first term using Jensen's inequality, reallocating mass from $L_B$ to $L_A$ results in worse theoretical guarantees. Therefore, keeping $L_A$ as small as possible is beneficial, as smaller values yield better complexity results, and hence the possibility that $L_B > 0$ is critical for obtaining the enhancements we present. Furthermore, in general $L_A\neq0$, so requiring that the $(L_A,L_B)$ inequality holds with $L_A=0$ would limit the applicability of our results. 
Lastly, both $L_A$ and $L_B$ appear in the complexity bounds of M3 (Theorem 5.1), making their introduction in the main part of the paper necessary. We hope this clarifies the topic. Please let us know if there are any further questions. --- We appreciate your constructive feedback and trust that our responses have satisfactorily addressed the questions. Should any additional clarifications be required, we are happy to provide them. --- Rebuttal Comment 1.1: Comment: I am happy with the authors' response and will keep my original score.
Summary: This paper deals with the downlink (server-to-client) communication cost in federated learning (FL). The main motivation of the paper is to provide a theoretical analysis of the total communication cost with downlink compression. This is achieved by considering unbiased compressors, or a certain class of biased lossy compression algorithms, at the server. First, the authors present an impossibility result (i.e., a converse theorem), showing that convergence faster than a certain rate is not possible if the server employs an unbiased compressor (or a biased compressor as specified in the paper). Then, they propose a downlink compression method called MARINA-P, which achieves a better iteration complexity than those in the literature. This method transmits an uncompressed model with a small probability p to all the clients, while the compressed model update is transmitted otherwise. The main twist in the proposed scheme is to transmit different compressed versions to the clients, rather than sending the same compression. In particular, an orthogonal sparse subset of the parameters is sent to each client, so that they together can recover all the parameters. This way, each parameter is considered by at least one of the workers. Here, the compressed versions transmitted to the users become correlated. The authors show that this correlated compression approach achieves the best convergence rate. Finally, they extend this framework by combining downlink compression with uplink compression using MARINA together with a momentum step. Strengths: The paper looks at an important yet underexplored aspect of communication in FL. The authors propose a novel server-to-client compression method, and provide upper and lower bounds on the convergence rate under various assumptions. Overall, the assumptions seem reasonable, and are shown to be satisfied by common compressors such as RandK (and the introduced PermK). 
The proposed compressor is rather simple, but the observation that the best convergence rate is achieved by this particular correlated compression method is interesting. Weaknesses: I would say that the lack of any empirical results is the main weakness of the paper. While it is interesting to obtain the theoretical convergence guarantees, one also wonders how these three different compression schemes would perform in practice (for scenarios that satisfy the assumptions, as well as those that do not). Also, only sparsification-type compression methods are considered. What about quantization? Can the arguments be easily extended to the case with quantization? Technical Quality: 3 Clarity: 3 Questions for Authors: - While the authors have covered most of the relevant literature, one missing paper that I am aware of is the following: M. Mohammadi Amiri, D. Gündüz, S. R. Kulkarni, and H. V. Poor, "Federated Learning With Quantized Global Model Updates" arXiv preprint arXiv:2006.10672, Jun 2020. Here, not only do the authors consider downlink communication, but they actually transmit different updates to different clients. Instead of compressing the difference with respect to the previous global model, here the authors compress the difference with respect to the model updated by the client. This way the gap to the new global model is potentially lower. I can't say how the convergence rates compare, but I would like the authors to comment on the pros/cons of this approach compared to their own. - How limiting is the assumption that the calls to the compressor are independent? There are some works in the literature that consider time-correlated sparsification, and show improved results (which also makes sense as the sparsity pattern should not change drastically from one iteration to the next). Please provide some comments in the paper if this may be a significant limitation on the practical performance of the considered class of compressors. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: As mentioned above: - lack of numerical comparisons, - one missing reference Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and for recognizing the strengths of our work. We will now address each of your comments in detail. __Weaknesses__ > The lack of any empirical results is the main weakness of the paper. We thank the reviewer for the suggestion. We would like to highlight that our empirical results are presented in Section F. Our primary focus in this paper was on the theoretical aspects, rather than extensive experimental validation, which is why we placed the empirical section in the appendix. The experiments included are intended to demonstrate that our theoretical results closely align with practical observations, highlighting the robustness of our theoretical framework. With the extra page available in the camera-ready version, we will be happy to include highlights of the experiments in the main part of the paper. > Only sparsification-type compression methods are considered. What about quantization? Can the arguments be easily extended to the case with quantization? Yes, of course, our theory already covers quantization. Our general theoretical results (Theorems D.1, D.2, E.1) rely on Definition 1.4, which encompasses a wide range of compression operators. These operators include not only sparsification, but also, for example, natural compression and natural dithering [1], general exponential dithering (a generalization of ternary quantization), general unbiased rounding, and unbiased exponential rounding [2]. Various combinations of these compression methods are also covered. Naturally, our theory also applies to quantization. More information can be found in the references in Section 1.1 of the paper. In some theoretical results, we specialize to sparsification to highlight the superiority of permutation compressors. However, the aforementioned theorems demonstrate that our approach is applicable to a variety of other compression schemes as well (all compression techniques satisfying Definition 1.4). 
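[Editor's note] As a toy illustration of how a quantizer fits the same unbiasedness framework (hypothetical code, not from the paper): a natural-compression-style rounding maps a positive value $x \in [2^k, 2^{k+1}]$ to one of the two bracketing powers of two, with the probability of rounding up chosen so that the expectation equals $x$ exactly. Only the exponent then needs to be transmitted, reducing the bits per coordinate.

```python
import math

def bracket_powers(x):
    """Return (lo, hi, p): the powers of two bracketing x > 0 and the
    probability of rounding up that makes the rounding unbiased."""
    assert x > 0
    k = math.floor(math.log2(x))
    lo, hi = 2.0 ** k, 2.0 ** (k + 1)
    p = (x - lo) / lo            # hi - lo = lo, so E = lo + p*lo = x
    return lo, hi, p

# Unbiasedness holds by construction: p*hi + (1-p)*lo == x.
for x in [0.3, 1.0, 2.5, 7.9]:
    lo, hi, p = bracket_powers(x)
    assert lo <= x <= hi and 0.0 <= p <= 1.0
    assert math.isclose(p * hi + (1 - p) * lo, x)
```

For example, `bracket_powers(2.5)` yields the bracket (2.0, 4.0) with round-up probability 0.25, so the expected output is exactly 2.5.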
We agree that we can clarify this point further in the paper and will make the necessary revisions to ensure this. [1] Horvath, S., Ho, C. Y., Horvath, L., Sahu, A. N., Canini, M., & Richtárik, P. (2022, September). Natural compression for distributed deep learning. In Mathematical and Scientific Machine Learning (pp. 129-141). PMLR. [2] Beznosikov, A., Horváth, S., Richtárik, P., & Safaryan, M. (2023). On biased compression for distributed learning. Journal of Machine Learning Research, 24(276), 1-50. __Questions__ > While the authors have covered most of the relevant literature, one missing paper that I am aware of is the following: M. Mohammadi Amiri, D. Gündüz, S. R. Kulkarni, and H. V. Poor, "Federated Learning With Quantized Global Model Updates" arXiv preprint arXiv:2006.10672, Jun 2020. Here, not only do the authors consider downlink communication, but they actually transmit different updates to different clients. Instead of compressing the difference with respect to the previous global model, here the authors compress the difference with respect to the model updated by the client. This way the gap to the new global model is potentially lower. I can't say how the convergence rates compare, but I would like the authors to comment on the pros/cons of this approach compared to their own. Thank you for pointing out this reference; we will add this work to the references. We would like to provide our comments on it. First, the statement that _they actually transmit different updates to different clients_ is incorrect. While it is true that the updates transmitted from the devices are different, the server sends the same message $Q(\theta(t) - \hat{\theta}(t-1), q_1)$ to all clients (line 2 of Algorithm 1). Regarding the theoretical comparison between the proposed LFL and our MARINA-P/M3, a crucial difference lies in the underlying assumptions. 
Most importantly, the results of [3] rely on the strong convexity of the local loss functions, whereas MARINA-P and M3 operate in the non-convex setting. Additionally, the theory in [3] assumes the boundedness of the stochastic gradients, which is a very restrictive assumption and, in fact, contradicts the strong convexity assumption (see, e.g., [4]): for a $\mu$-strongly convex $f$, $\|\nabla f(x)\| \geq \mu \|x - x^\star\|$, which is unbounded on $\mathbb{R}^d$. Therefore, the constant $G$ in Theorem 1 can be arbitrarily large, potentially making the upper bound infinite. We hope this clarifies the matter. [3] Mohammadi Amiri, M., Gündüz, D., Kulkarni, S. R., & Poor, H. V. (2020). Federated learning with quantized global model updates. arXiv preprint arXiv:2006.10672. [4] Nguyen, L., Nguyen, P. H., Dijk, M., Richtárik, P., Scheinberg, K., \& Takác, M. (2018, July). SGD and Hogwild! convergence without the bounded gradients assumption. In International Conference on Machine Learning (pp. 3750-3758). PMLR. > How limiting is the assumption that the calls to the compressor are independent? There are some works in the literature that consider time-correlated sparsification, and show improved results (which also makes sense as the sparsity pattern should not change drastically from one iteration to the next). Please provide some comments in the paper if this may be a significant limitation on the practical performance of the considered class of compressors. There exist various classes of compressors, each coming with benefits and potentially some drawbacks. The assumption of time-independence is not a limitation: the class of compressors we consider is very wide, and as a result the theoretical framework we developed covers a broad range of compression operators. Most importantly, the use of permutation compressors enables us to obtain superior complexity results, yielding the first downlink compression algorithms whose complexities can provably improve with the number of workers. We are not aware of any work with theoretical evidence that _the sparsity pattern should not change drastically from one iteration to the next_. We would be grateful if the reviewer could provide such references. 
___ We are grateful for the feedback and believe that we have resolved the issues identified. If further information or explanations are required, please do not hesitate to ask. --- Rebuttal Comment 1.1: Comment: I have read all the reviews and the authors' responses. I thank the authors for the detailed explanations. I believe it is a solid contribution on a relatively less explored aspect of distributed learning. The techniques are not necessarily new, but I think their application is rigorous and seems correct. I retain my original score.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes
Accept (poster)
Summary: The paper presents a novel approach namely MimicTalk to personalized talking face generation. Unlike previous methods that rely on individual neural radiance fields (NeRF) for each identity, the authors propose a more efficient and generalized framework using a person-agnostic generic model. MimicTalk introduces a static-dynamic-hybrid adaptation pipeline and an in-context stylized audio-to-motion model. The proposed method aims to achieve high-quality, expressive results while significantly reducing training time. Experimental results suggest that MimicTalk surpasses previous baselines in video quality, efficiency, and expressiveness. Strengths: 1. The paper introduces a novel hybrid approach that combines static and dynamic features for personalized talking face generation. The proposed in-context stylized audio-to-motion model enhances the expressiveness of generated videos by mimicking the talking style from reference videos. The method is reasonable and interesting, and could probably bring some insights to the community. 2. The proposed method significantly reduces the training time required for generating personalized talking faces. 3. By leveraging a person-agnostic model, the paper demonstrates improved robustness and efficiency in handling various identities and out-of-domain conditions. 4. The authors promised to release the code. Weaknesses: 1. The paper claims superior performance over existing methods, but the experimental results presented are not sufficiently comprehensive or detailed to fully substantiate these claims. Although some video results are provided in the URL, the results are not convincing enough. In particular, I cannot see the advantages of the proposed method in the style-control results. There are also not enough examples for comparison to state-of-the-art approaches. 2. The experiments are limited in scope, primarily focusing on a narrow set of scenarios and datasets. 
The evaluation lacks a diverse range of conditions such as varying lighting, occlusions, and different levels of head movements, which are crucial for real-world applicability. More comprehensive testing across varied and challenging scenarios would strengthen the validity of the claims. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper addressed the limitations in the appendices. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive review and valuable comments, and we hope our response fully resolves your concerns. > Q1: The experimental results presented are not sufficiently comprehensive or detailed to fully substantiate these claims. Although some video results are provided in the URL, the results are not convincing enough. In particular, I cannot see the advantages of the proposed method in the style-control results. - A1: Thanks for your helpful feedback. We have enriched the results of talking style control. Specifically, 1) we tried different classifier-free guidance (CFG) scales in the sampling process of the ICS-A2M model to further improve the talking style control. We provide a new demo video (please kindly refer to **Rebuttal Demo 4** on the demo page) with 6 additional style control examples to help the reader make a qualitative comparison. 2) We also perform user studies to evaluate the style control ability of our method and the baseline StyleTalk. Please refer to **Table 2 in the attached one-page PDF** for details. Both qualitative/quantitative evaluations show that our method performs better in talking style control. > Q2: There are also not enough examples for comparison to state-of-the-art approaches. - A2: We acknowledge that making comparisons on more identities with diverse languages and appearances could better prove the generalizability of our method. To this end, we provide an additional demo video (please kindly refer to **Rebuttal Demo 1** on the demo page) with 9 additional identities and the results prove that our method has better data efficiency and lip-sync quality than the most competitive baseline ER-NeRF (ICCV 2023). > Q3: The experiments are limited in scope. The evaluation lacks a diverse range of conditions such as different levels of head movements, which are crucial for real-world applicability. - A3: Thanks for your insightful comment! 
We admit that the limited evaluation scope is a common problem in the field of NeRF-based talking face generation. We have therefore performed an additional experiment that drives our method and the baseline with out-of-domain (OOD) head movements. We provide an additional demo video (please kindly refer to **Rebuttal Demo 3** on the demo page), and the results show that our method can handle the OOD head movements well while the baseline cannot. We suspect the prior knowledge in our generic backbone is the reason for our method's generalizability to OOD poses. We will add this experiment to the revised manuscript. Besides, as for other critical situations, as can be seen in **Rebuttal Demo 1** on our demo page, our method is more robust than the baseline on difficult training videos (such as identities with long hair or sided faces). We also plan to benchmark our method under more challenging conditions, such as varying lighting and occlusions, in the future. # Summary Following your comments, we have performed several additional experiments and found the experimental results to be more comprehensive and convincing. Again, we thank the reviewer for the insightful review and positive recommendation for our paper. --- Rebuttal 2: Title: Hoping that our response could address your concern Comment: Dear Reviewer dHKC, Thank you again for your time and effort in reviewing our work! We would appreciate it if you could let us know whether our response has addressed your concerns. As the end of the rebuttal phase is approaching, we look forward to hearing from you and remain at your disposal for any further clarification that you might require. --- Rebuttal 3: Title: Dear Reviewer Comment: Dear Reviewer dHKC, as the discussion period is closing in several hours, we would like to know if there are any additional questions. We are glad to answer them. Again, we sincerely appreciate your insightful review and positive recommendation for our paper.
Summary: The paper presents MimicTalk, an approach to improve the efficiency and robustness of personalized talking face generation. Instead of using separate NeRFs for each identity, MimicTalk adapts a person-agnostic NeRF-based model for specific individuals. They also propose an in-context stylized audio-to-motion (ICS-A2M) model to create facial movement that imitates the target person's speaking style. The adaptation process converges quickly for unseen identities. Experiments show that MimicTalk outperforms previous baselines in terms of video quality, efficiency, and expressiveness. Strengths: 1. This paper achieves fast adaptation from a generic to a person-specific model, and the overall reconstruction quality seems good compared to other methods. 2. Several of the proposed improvements have also been shown to be effective. Weaknesses: 1. The head pose of the talking face video is not generated; for a talking face video, the sync of the head pose and audio is very important. 2. The facial motion is represented by PNCC, which limits the expressiveness of the generated results, e.g. detailed emotions and eyeball movements. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The $L_{sync}$ provided in **GeneFace** is used to evaluate the synchronization between audio and 68 sparse landmarks. In this paper, the authors call it audio-expression synchronization loss, which is not adequate. Sparse landmarks cannot reflect the richness and expressiveness of facial motions. It seems the judgment of the motion expressiveness (FID/user study) is missing in this paper. 2. The results of talking style control are not sufficient. Only two examples are given. More qualitative/quantitative evaluations of the style controllability are needed. 3. Since this is about 3D talking face generation, it would be better to see some novel-view results. 
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations and broader impacts in their appendix. The authors should also discuss the limitations as I list in the **Weaknesses** section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and the positive remarks on our Soundness and Contribution. We acknowledge that your concerns are mainly about the experiment of this paper, and hope our response resolves your concerns fully. > Q1: The head pose of the talking face video is not generated. - A1: Thanks for pointing this out! In the original paper, we didn't consider predicting the head pose since we needed to calculate error-based metrics such as average pose distance (APD). We acknowledge that the sync of the head pose and audio is important for talking head generation. Therefore, we additionally train an **audio-to-pose model**, which follows the main structure of the ICS-A2M model proposed in the original paper. Please kindly refer to the **Rebuttal Demo Video 2** on our demo page, whose URL is given in the original paper (due to the rebuttal policy). The Rebuttal Demo 2 shows that our model can produce novel head poses that are coherent with the input audio. We will add the audio-to-pose model to the revised manuscript. Thanks for your helpful comment! > Q2: The motion representation PNCC may limit the expressiveness of the generated results, such as detailed emotions or eyeball movements. - A2: We acknowledge that the PNCC cannot represent subtle emotion and eyeball movement. This is a common problem for all talking face methods that use explicit motion representations (e.g. 3DMM exp code, landmark, PNCC). In the future, we will explore implicit representation such as audio-to-latent to improve the expressiveness of the model. We will add the above discussion to the limitation section in the revised version. Thanks for your insightful suggestion! > Q3: Sparse landmark-based $L_{sync}$ metric cannot reflect the richness and expressiveness of facial motions. It seems the judgment of motion expressiveness (FID/user study) is missing in this paper. 
- A3: We acknowledge that sparse landmarks (used in GeneFace) or the 3DMM expression code (used in this paper) can cause information loss in measuring the expressiveness of facial motion, so the audio-expression sync loss $L_{sync}$ is not a good choice for evaluation metrics in Table 5 of the original paper. We found it non-trivial to design an objective metric for audio-lip expressiveness. Following your suggestion, we turned to user study. Specifically, we adopt the Comparative Mean Opinion Score (CMOS) as the metric of lip-sync (CMOS-sync) and expressiveness (CMOS-expressive). Please kindly refer to **Table 1 in the attached one-page PDF** for details. We can see that our default setting performs best and the other 4 settings or the baseline ER-NeRF lead to lower CMOS scores. > Q4: The results of talking style control are not sufficient. More qualitative/quantitative evaluations of the style controllability are needed. - A4: Thanks for your helpful feedback. We have made amendments to the results of talking style control. Specifically, 1) we provide an additional demo video (please kindly refer to **Rebuttal Demo 4** on the demo page) with more style control examples to help the reader make a qualitative comparison. 2) We additionally perform user studies (the Comparative Mean Opinion Score test) to evaluate the style control ability of our method and the baseline StyleTalk. Please refer to **Table 2 in the attached one-page PDF** for details. Both qualitative/quantitative evaluations show that our method performs better in talking style control. > Q5: It would be better to see some novel-view results. - A5: Thanks for your suggestion! We provide an additional demo video (please kindly refer to **Rebuttal Demo 3** on the demo page), in which we drive our method and the baseline with several out-of-domain poses. We found that our method can well handle the OOD poses and generate high-quality results while the baseline cannot. 
Possibly, the prior knowledge in our generic backbone is the reason for the generalizability to OOD poses. # Summary In summary, following the given comments, we have performed in-depth analysis and several experiments, which we believe have enhanced the soundness of the paper and demonstrated the performance improvement of the proposed method. Again, we would like to thank the reviewer for the valuable review. We sincerely hope the reviewer will reconsider their rating in light of the rebuttal. --- Rebuttal Comment 1.1: Title: Missing details for rebuttal response Comment: Thank you for your response. However, I still have some questions and concerns as follows: 1. Why do you model audio-to-pose and audio-to-face motion separately? Since head pose should align with face motion, it would be better to model them together. I understand the rebuttal time is limited, but if you want to claim to handle head pose generation, you should include some evaluations in your revised version. 2. Could you provide more details about the user study? e.g., how many individuals participated and how many cases were evaluated? 3. I checked **Rebuttal Demo 4** for more style prompts. The results aren't good. A larger CFG sometimes enhances the style's intensity, but it also adds more artifacts. These results don't seem to support the claim of style mimicking. --- Rebuttal 2: Title: Author Response to Reviewer TKxi (Part 1/2) Comment: Dear Reviewer TKxi, Thanks for your fast response! Sorry for the late reply; we were running the additional experiments. We hope our response fully resolves your remaining concerns. > Q1: It would be better to model the audio-to-motion and audio-to-pose together. Please include some evaluations of the audio-to-pose in your revised version. - Thanks for your feedback. In the initial rebuttal, we trained a new audio-to-pose model from scratch for simplicity. 
Following your suggestion, we tried to **model the audio-to-pose and audio-to-face motion jointly**. Specifically, we load the pre-trained audio-to-motion model and change the output dimension of its final layer from 64 dimensions (3DMM expression) to 70 (64+6) dimensions, in which the 6 additional dimensions denote the predicted head pose, including the Euler angle (3 dim) and head translation (3 dim). We finetuned the audio-to-motion/pose model for 50,000 steps until convergence. - We then perform a **quantitative evaluation of audio-lip/pose synchronization and expressiveness** in different settings of the audio-to-motion/pose model. Specifically, we finetune 10 person-specific renderers on 10 ten-second-long training videos. Then we use the different settings of the generic audio-to-motion/pose model to predict the face motion or head pose. To be coherent with real-world applications, we randomly select 5 out-of-domain audio clips for driving each renderer, so there are 50 result videos for each setting. Since there is no ground truth for the result videos, we turn to a user study for evaluation. We included 20 attendees and asked them to evaluate the comparative mean opinion score between each setting and our original version. The attendees were required to rate in terms of audio-lip-sync, audio-pose-sync, and overall expressiveness. Please refer to **Q2 & A2 for more details of the user study**. The results are shown as follows. The error bars are 95\% confidence intervals. 
| settings | CMOS audio-lip-sync | CMOS audio-pose-sync | CMOS-expressive | | :--- | :----: | :----: | :----: | | Audio-to-Motion + poses extracted from other videos (original version) | 0.000 | 0.000 | 0.000 | | Audio-to-Motion + Audio-to-Pose, separately (initial rebuttal version) | $0.215 \pm 0.293$ | $1.341 \pm 0.284$ | $0.647 \pm 0.251$ | | Audio-to-Motion/Pose, jointly (current version) | $0.468 \pm 0.235$ | $1.653 \pm 0.316$ | $0.952 \pm 0.301$ | - In line 1 and line 2, we can see that predicting the pose from audio greatly improves the audio-pose-sync performance, and in line 3 we can see that jointly predicting the face motion and head pose could further improve the performance from audio-lip-sync, audio-pose-sync, and expressiveness. We suppose that it is because the head pose strictly aligns with face motion, and there is a high correlation between them (facial motion can be a strong hint for predicting the head pose and vice versa). Therefore, joint modeling of motion and pose could improve the model's ability to better predict them. We found the additional modeling of head pose greatly improves the naturalness of our method, and will add the above discussion in the revised manuscript. Thanks for your insightful suggestion! > Q2: Could you provide more details about the user study? e.g., how many individuals participated and how many cases were evaluated? A2: In the additional user study during the rebuttal, we follow the main setting as we did in the paper. Specifically, we first trained 10 person-specific renderers on 10 identities' ten-second-long videos. We randomly select 5 out-of-domain audio clips for driving each renderer. So there are **50 result videos for each setting**. We include **20 participants in the user study**. The biggest difference is that we test the **Comparative Mean Opinion Score (CMOS)** instead of the Mean Opinion Score (MOS) in the original paper to better evaluate the comparative performance between different settings. 
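For reference, CMOS means and 95% confidence intervals like those in the table above can be computed from raw ratings as follows (a generic sketch with synthetic ratings and a normal-approximation interval; the actual per-rating data is not reproduced here):

```python
import numpy as np

def cmos_with_ci(scores, z=1.96):
    """Mean comparative opinion score with an approximate 95% CI
    (normal approximation); `scores` are ratings on the -3..+3 scale."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    half_width = z * scores.std(ddof=1) / np.sqrt(scores.size)
    return mean, half_width

# Hypothetical ratings: 20 participants x 50 videos = 1000 scores per setting.
rng = np.random.default_rng(1)
ratings = rng.choice([-1, 0, 1, 2], size=1000, p=[0.1, 0.3, 0.4, 0.2])
m, h = cmos_with_ci(ratings)
print(f"CMOS = {m:.3f} +/- {h:.3f}")
```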
For CMOS, each tester is asked to evaluate the subjective score of two paired videos on a -3~+3 Likert scale (e.g., the first video is fixed at 0.0, and a score of +3 for the second video means the tester strongly prefers it). To examine each aspect (lip-sync, pose-sync, expressiveness), we tell the participants to *"only focus on the (lip-sync, pose-sync, expressiveness), and ignore the other two factors."* Due to space limitations, please refer to the next comment for the **reply to Q3**. --- Rebuttal 3: Title: Author Response to Reviewer TKxi (Part 2/2) Comment: > Q3: I checked Rebuttal Demo 4 for more style prompts. The results aren't good. A larger CFG sometimes enhances the style's intensity, but it also adds more artifacts. These results don't seem to support the claim of style mimicking. - We have provided an additional **Post-Rebuttal Demo 1**. Please kindly refer to our demo page for a quick view. - Thanks for your feedback. First, we want to **clarify the claim of style mimicking**. We acknowledge that the claim of style mimicking in the initial manuscript is somewhat misleading. Note that the task of this paper is personalized talking face generation (TFG), in which we have a short training video clip of the target identity, and we aim to train a TFG system that mimics not only its visual attributes but also its talking style. Therefore, in personalized TFG, we only need to run our model with the in-domain style prompt of the target identity (**in-domain style prompt** denotes using the training video clip as the style prompt). Indeed, in Lines 74-78 of the original manuscript, we state that the intention of proposing the ICS-A2M model is to let the model in-context (i.e., without the need for finetuning) mimic the target identity's talking style. 
By contrast, as for supporting out-of-domain style prompts (**OOD style prompt** denotes cross-identity videos that contain extreme expressions, like a duck face, that are unseen in the training video of the target identity), we regard it as an interesting feature of our method, rather than the key problem to be solved in this paper. As shown in **Post-Rebuttal Demo 1**, using the target identity's in-domain style prompt, our method could produce expressive results with good identity similarity and style similarity, and the performance is robust to a large CFG scale (e.g., cfg_scale=4). This result demonstrates that our method achieves the goal of personalized TFG (i.e., mimicking the target identity's visual attributes and talking style). We apologize for causing the misunderstanding, and **we will revise the claim of style mimicking** in the original manuscript by adopting the above discussion into the Introduction Part. We will emphasize that the major goal of the proposed ICS-A2M model is to better mimic the talking style of the target identity to achieve personalized TFG, and **we will introduce OOD talking style mimicking as an interesting feature** of our model in the Experiment Part. - Then we want to discuss **why the OOD style mimicking is not very robust**. In Rebuttal Demo 4, we used various OOD style prompts to drive the target identities (e.g., Obama never played a duck face in the training video clip, but we set Trump's duck face video clip as the style prompt of the Obama renderer). Considering the scarcity of motion-image pairs in the ten-second-long training video, it is hard to render OOD extreme expressions without artifacts (e.g., for a model trained on a video in which the speaker never says "oh", it is difficult to synthesize the speaker saying "oh" with good image quality). 
To analyze the reason for the artifact, we visualize the facial motion generated by our ICS-A2M model at various CFG scales and notice that the **ICS-A2M model could faithfully produce lip-sync facial motion of OOD talking styles at all tested CFG scales**. Now that the audio-to-motion stage works well, we suspect that the reason behind the visual artifact is that, when increasing the CFG scale, the generated facial motion is getting more similar to the OOD style prompt. However, the facial motion pattern in the OOD style prompt is quite different from the training motion condition of our person-specific renderer, hence resulting in visual artifacts. - To **improve the robustness of OOD style mimicking**, we found two possible solutions: **1)** the first is to **tune the CFG scale** as a hyper-parameter. For instance, in Rebuttal Demo 4, while CFG=4 occasionally leads to artifacts, CFG=2 is a stable choice that balances the style similarity and visual quality. **2)** the second direction is to improve the generalizability of the motion-conditioned renderer. We could adopt **data augmentation** to synthesize images of diverse facial motions through recent advanced face editing methods, which can improve OOD performance, and hence could facilitate more stable OOD style mimicking. Due to time limitations, we leave this as future work. Finally, we'd like to thank you for your precious time and valuable comments, which have improved the soundness and clarity of this manuscript. We will be very happy to clarify any remaining points (if any). --- Rebuttal 4: Comment: After reading your response, most of my questions have been addressed. However, I believe that compared to the submitted version, many changes are needed. For example, the user studies, the experiments on pose generation, and the claims regarding the contribution of style mimicking should be revised. I have mixed feelings about this. 
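For context, the CFG scale discussed above enters the sampler through the standard classifier-free guidance combination (a generic formulation; the exact sampler used by the ICS-A2M model may differ):

```python
import numpy as np

def cfg_combine(uncond, cond, scale):
    """Generic classifier-free guidance combination: extrapolate from the
    unconditional prediction toward (and past) the conditional one."""
    uncond = np.asarray(uncond, dtype=float)
    cond = np.asarray(cond, dtype=float)
    return uncond + scale * (cond - uncond)

u, c = np.zeros(3), np.ones(3)
print(cfg_combine(u, c, 1.0))  # scale 1 reproduces the conditional prediction
print(cfg_combine(u, c, 2.0))  # larger scales push further toward the style prompt
```

This makes the trade-off explicit: scales above 1 amplify the conditional signal (stronger style intensity) at the cost of moving the prediction further from the motion distribution the renderer was trained on.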
I have improved the rating, but I am more inclined to suggest resubmitting to another conference to refine these issues. The final decision can be left to the area chair. Title: Final Rating --- Rebuttal Comment 4.1: Title: Thanks for your acknowledgment of our rebuttal Comment: Dear Reviewer TKxi, Thanks for your engaged discussion and acknowledgment of our rebuttal. We'd like to especially thank you for the three suggestions (details of the user study, head pose prediction experiments, and clarifying the contribution of style mimicking). We have integrated the latest discussion and additional experiments into the revised manuscript, which we believe has improved the soundness and completeness of the paper. Again, thank you for your expertise and positive feedback!
Summary: This MimicTalk work aims to bridge the gap between the person-agnostic one-shot TFG setting and the person-dependent small-scale TFG setting. A carefully designed static-dynamic-hybrid adaptation pipeline is proposed to achieve expressive, generalized, and efficient personalized TFG. Furthermore, an in-context stylized audio-to-motion model is introduced to mimic the talking style from the reference video. Experiments demonstrate the superiority of MimicTalk compared to previous methods. Strengths: The introduction presents a good observation of identity-dependent and identity-agnostic methods. To overcome their limitations, a static-dynamic-hybrid adaptation pipeline is introduced to first build an initial 3D face model from a pretrained one and then finetune it with small-scale person-specific video data. To make the finetuning process more efficient and stable, the LoRA technique is adopted. The design motivation is clear and intuitive. To achieve audio-to-motion generation, an In-Context Stylized Audio-to-Motion module is introduced, built upon flow matching and trained via a mask-infilling task. The illustrated video results and quantitative results show the effectiveness and efficiency of the introduced pipeline. The paper is well-organized and easy to follow. The illustration and implementation details are well demonstrated. Weaknesses: There are several losses used in the pipeline. How these loss weights are chosen and how they affect the performance could be further discussed. Technical Quality: 4 Clarity: 3 Questions for Authors: The loss weight choice could be discussed. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive review and valuable comments, and we hope our response fully resolves your concerns. > Q1: There are several losses used in the pipeline. How to choose these loss weights and how these loss weights affect the performance could be further discussed. - A1: Thanks for your advice. Since the training objective of the renderer and the ICS-A2M consists of multiple terms, we acknowledge that there should be more discussion on choosing the appropriate loss weights to achieve a good overall quality. Specifically, - 1\) as shown in Equation (4) of the original paper, the loss of the renderer is: $$\mathcal{L}_\text{renderer} = \mathcal{L}_1 + \lambda_\text{LPIPS} \cdot \mathcal{L}_\text{LPIPS} + \lambda_\text{ID} \cdot \mathcal{L}_\text{ID},$$ - where $\mathcal{L}_1$ is the L1 loss between the predicted frame and the GT frame and is the main objective. However, using the L1 loss alone is known to cause over-smoothing in the generated sample, hence we additionally adopt the LPIPS loss to improve the high-frequency texture details in the generated result. Since the LPIPS loss is insensitive to tiny spatial offsets, we take it as an auxiliary loss to improve image quality and set a relatively small loss weight $\lambda_\text{LPIPS}=0.2$. The identity loss $\mathcal{L}_\text{ID}$ is similar to the LPIPS loss, except that it is computed with VGGFace, which focuses more on face similarity. We find that a large identity loss causes training instability, hence we set a small loss weight $\lambda_\text{ID}=0.1$. - 2\) As for the ICS-A2M model, as shown in Equation (6) of the original paper, the loss is: $$\mathcal{L}_\text{ICS-A2M} = \mathcal{L}_\text{CFM} + \lambda_\text{sync}\cdot\mathcal{L}_\text{sync},$$ - where $\mathcal{L}_\text{CFM}$ is the conditional flow matching (CFM) objective and $\mathcal{L}_\text{sync}$ is a discriminative loss that measures the synchronization between the input audio and the predicted expression. 
We found the sync loss necessary for improving lip-sync quality, as otherwise the generated motion tends to be over-smoothed. However, a large sync-loss weight causes flickering in the generated sample, and we found $\lambda_\text{sync}=0.05$ to be a good choice for balancing the temporal stability and lip-sync accuracy of the generated sample. - We plan to add the above discussion in Appendix B of the revised manuscript, which we believe will help the reader understand the design of the proposed multi-objective training loss. # Summary Following your comments, we did an in-depth analysis of the design of the proposed method, which we believe has improved the clarity of the paper. Again, we thank the reviewer for the insightful review and "Accept" recommendation for our paper. --- Rebuttal 2: Title: Thanks for the detailed rebuttal Comment: I have read the other reviews and the author responses. My concerns have been addressed. The additional experiments on OOD poses and the limitation discussion of PNCC (raised by Reviewer TKxi) are interesting points to explore. I suggest including these discussions and results in the final version. --- Rebuttal Comment 2.1: Title: Author Response to Reviewer iWSN Comment: Dear Reviewer iWSN, Thanks for your acknowledgment of our rebuttal. We will add the discussion on the design of the multi-objective loss functions of our model in the Method Section. We will also add the additional experiments on OOD poses and the limitation discussion of PNCC in the revised manuscript. Again, we'd like to thank you for your precious time, expertise, and deep understanding of our work.
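The weighted multi-objective losses discussed in the thread above (Eqs. (4) and (6) of the paper) can be sketched schematically; the component losses below are placeholder scalars, since the real terms require the trained networks:

```python
# Placeholder scalars stand in for the actual loss terms, which require
# the trained networks; only the weighting scheme is illustrated.
LAMBDA_LPIPS, LAMBDA_ID = 0.2, 0.1  # renderer auxiliary-loss weights
LAMBDA_SYNC = 0.05                  # ICS-A2M sync-loss weight

def renderer_loss(l1, lpips, ident):
    # Eq. (4): the L1 term dominates; LPIPS and identity losses are auxiliary.
    return l1 + LAMBDA_LPIPS * lpips + LAMBDA_ID * ident

def ics_a2m_loss(cfm, sync):
    # Eq. (6): conditional flow matching plus a lightly weighted sync term.
    return cfm + LAMBDA_SYNC * sync

print(renderer_loss(0.5, 0.3, 0.2))  # = 0.5 + 0.2*0.3 + 0.1*0.2
print(ics_a2m_loss(1.0, 0.4))        # = 1.0 + 0.05*0.4
```

The small weights keep the auxiliary terms from overpowering the main objectives, matching the stability arguments given in the rebuttal.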
Summary: This paper tackles efficient, realistic 3D talking face customization. Rather than learning an individual neural radiance field for each identity, this work exploits a person-agnostic model to improve the efficiency and robustness of personalized talking face generation. A static-dynamic-hybrid adaptation pipeline is proposed to adapt a person-agnostic base model to a specific identity. To achieve style control, an in-context audio-to-motion model is devised to mimic the talking style of a reference video. Strengths: 1. Overall, I like the idea of first learning a generic base talking face model, on top of which the model is adapted to a personalized identity. 2. The design of adapting the neural volume through LoRA with PNCC input is intuitive and interesting. 3. To generate personalized facial motion, an in-context style-mimicking audio-driven module is proposed to inject the dynamic motion style. Weaknesses: 1. According to the attached supplementary video, the improvement seems very marginal or even hard to notice. Also, only three personal identities are showcased. It would be more persuasive if more identities were demonstrated. I am concerned about whether the PNCC representation offers sufficient information. At the same time, the simultaneous optimization of a learnable reconstructed face and LoRA looks weird, making it confusing which component is really functioning. 2. The authors claim this approach is a fast adaptation method that significantly surpasses previous person-dependent approaches. Of the compared approaches, none of the three NeRF-based methods aims to improve efficiency. Is there any related work that also addresses generation efficiency, or is this the first such work? If not, the authors should include relevant approaches in this paper. Specifically, there might be other works targeted at efficiently fitting a specific identity rather than adaptation.
Would it be better to also take them into consideration instead of making the strong claim of a 47-times speedup? 3. The information in the workflow of Fig. 2 is minimal. The NeRF field is represented as a cube, which could be described more clearly. Meanwhile, each component of the devised network is drawn as a block diagram, which requires a clearer description of the specific architectural details (if not here, then in the appendix). Similarly, for the inference pipeline, the authors spent a lot of effort illustrating the flow-matching training paradigm, which makes this figure a little messy. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weakness section. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Author Response to Reviewer fWzj (Part 1/2) We thank the reviewer for the constructive feedback and the positive remarks on our proposed "generic-model-to-adaptation" framework. We acknowledge that your concerns are mainly about the qualitative results and some technical designs, and hope our response resolves your concerns fully. > Q1: The improvement seems very marginal or even hard to notice. Only three personal identities are showcased. It will be more persuasive if more identities are demonstrated. - A1: Thanks for your helpful feedback. To better compare the performance, we tested 9 additional identities (from an AAAI 2024 Kaggle competition on talking face generation) with diverse languages, appearances, and challenging conditions (such as large head poses or long hair). Please kindly refer to **Rebuttal Demo 1** on our demo page (please refer to the paper for the URL); the results show that our method has better data efficiency and lip-sync quality than the most competitive baseline, ER-NeRF (ICCV 2023). Besides, in **Rebuttal Demo 3** on our demo page, we show that our method handles OOD poses well while the baseline cannot. We suspect the prior knowledge in our generic backbone is the reason for our method's generalizability to OOD poses. > Q2: I am concerned whether the PNCC representation offers sufficient information or not. - A2: PNCC is a rasterized image obtained by projecting the 3DMM mesh of the target identity. It possesses fine-grained geometry information of the human head for the target expression. We therefore expect the PNCC's amount of information to be equal to (or slightly less than) that of the 3DMM mesh. Several previous works also adopt PNCC and achieve subtle expression control. We also provide an additional demo video (**Rebuttal Demo 4** on our demo page), in which we show that using the PNCC representation we can control various talking styles.
- However, we acknowledge that PNCC (like other explicit motion representations such as 3DMM expression codes or facial landmarks) makes it hard to represent subtle movements like eyeball motion, which is a common problem for all talking face methods that use explicit motion representations. We plan to explore implicit representations such as audio-to-latent to improve the expressiveness of the model. We will add the above discussion to the limitations and future work section in the revised version. Thanks for your insightful suggestion! > Q3: The simultaneous optimization of a learnable reconstructed face and LoRA looks weird, making it confusing about which component is really functioning. - A3: Thanks for your feedback. As shown in **lines 2 and 3 of Table 3 in the original paper**, optimizing only the learnable reconstructed face or only the LoRA leads to a significant performance drop. In fact, the simultaneous optimization of these two components is the reason for the name of our adaptation pipeline, "static-dynamic-hybrid adaptation", in Section 3.3. Specifically, learning the reconstructed face (i.e., the tri-plane representation) optimizes the target identity's static attributes (such as geometric shape or appearance), while optimizing the LoRA in the model backbone learns the personalized dynamic attributes (such as how the facial muscles move during a specific expression). **Empirically**, we find that optimizing only the learnable reconstructed face preserves static details like the teeth and hair of the target identity, but the talking avatar lacks the personalized dynamic attributes of the target person (e.g., Theresa May has deep wrinkles when smiling). On the other hand, optimizing only the LoRA produces videos with personalized subtle facial motions, but the static details are missing (e.g., high-frequency details in the hair or teeth regions).
Therefore, we find a joint optimization of the learnable reconstructed face and LoRA can simultaneously learn the static and dynamic attributes of the target identity. - We found our original discussion on the design of SD-hybrid adaptation was insufficient and may cause confusion. We will add the discussion above in Section 3.3 of the revised version. Thanks for your helpful comment! > Q4: Is there any related work that also handles the efficiency of generation, or is this the first work? If not, the authors should also include relevant approaches in this paper. Specifically, there might be other works targeted at efficiently fitting a specific identity rather than adaptation. Will it be better to also take them into consideration instead of making a strong claim of 47 times faster speed? - A4: Thanks for pointing this out! We notice that there has been progressive training acceleration from the earliest NeRF-based AD-NeRF (ICCV 2021, taking 40 hours to train an identity) to RAD-NeRF (arXiv 2022, taking 10 hours) to the recent ER-NeRF (ICCV 2023, taking 4 hours). However, this improvement in fitting efficiency comes from improvements in model structure (from AD-NeRF's MLP-based vanilla NeRF, to RAD-NeRF's grid-based Instant-NGP, to ER-NeRF's attention-enhanced Instant-NGP). To our knowledge, we are the first work that improves training efficiency by proposing a new training paradigm (fast adaptation from a pre-trained person-agnostic model), which is orthogonal to previous works that improve the network structure. By exploiting the rich prior knowledge in the generic model backbone, our method achieves not only faster convergence but also better video quality and generalizability. We hope the proposed "generic-model-to-adaptation" training paradigm can pave the way for a new generation of NeRF-based works. We will add the discussion above to the related works section of the revised version, which we find better highlights the novelty of this paper.
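To make the LoRA half of the static-dynamic-hybrid adaptation discussed in A3 concrete, here is a minimal, framework-free sketch of a low-rank update applied on top of a frozen weight. The shapes, helper, and numbers are illustrative only, not the paper's actual architecture.

```python
def matvec(x, M):
    """Multiply a row vector x by matrix M (given as a list of rows)."""
    return [sum(x[i] * M[i][j] for i in range(len(x))) for j in range(len(M[0]))]

def lora_forward(x, W, A, B, scale=1.0):
    """y = xW + scale * (xA)B: frozen weight W (d x d) plus a rank-r
    update AB, where only the small factors A (d x r) and B (r x d)
    are trained during adaptation."""
    base = matvec(x, W)            # frozen, person-agnostic path
    delta = matvec(matvec(x, A), B)  # trainable, person-specific path
    return [b + scale * d for b, d in zip(base, delta)]

# Frozen identity weight plus a rank-1 update:
y = lora_forward([1.0, 2.0], W=[[1, 0], [0, 1]], A=[[1], [1]], B=[[1, 0]])
print(y)  # [4.0, 2.0]
```

The design point mirrors the rebuttal's argument: the base path carries the generic prior, while the low-rank path captures the identity's dynamic attributes with very few trainable parameters.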
--- Rebuttal 2: Title: Rebuttal by Authors Comment: # Author Response to Reviewer fWzj (Part 2/2) > Q5: The representation of the NeRF field needs a clearer description. Meanwhile, each component of the devised network is drawn as a block diagram, which might require a clearer description of the specific architectural details. - A5: Thanks for your suggestion. We use a tri-plane representation to represent the 3D NeRF field. We have replaced the cube with a tri-plane image and added a label description to improve the clarity of Figure 2 in the original paper. Please refer to **Figure 1 in the attached one-page PDF** for details. As for the specific architectural details of each block diagram, we provide an additional figure that plots the architectural details of each component in our generic model. Please refer to **Figure 2 in the attached one-page PDF** for details. In the original paper, we omitted the structural details because we wanted to convey that our SD-Hybrid method can be applied to an arbitrary one-shot person-agnostic model, not only the model we used in the paper. However, we acknowledge that providing network details makes it easier for readers to understand the role of each component in the model. We will add Figure 2 of the attached one-page PDF to Appendix B.1 of the revised manuscript. > Q6: For the inference pipeline (Fig. 3 in the original paper), the authors spent a lot of effort to illustrate the flow-matching training paradigm, which makes this figure a little bit messy. - A6: Thanks for pointing it out. In Figure 3 of the original paper, we introduced the model input and the sampling process of the ICS-A2M model, respectively.
The intention is to highlight two novel points of our ICS-A2M model: the first is the masked input that concatenates the style prompt with the audio condition, which is the key contribution that enables in-context talking style control; the other is the flow-matching sampling process that predicts velocity. Note that we are the first to use flow matching for modeling the audio-to-motion mapping. However, we acknowledge that this diagram is a bit messy. We provide a revised version of this figure to improve clarity. Please refer to **Figure 3 in the attached one-page PDF**. # Summary In summary, following the given comments, we have performed several experiments and revised the manuscript in several aspects, which we find has enhanced the soundness and clarity of the paper. Again, we would like to thank the reviewer for the valuable review. We sincerely hope the reviewer will reconsider their rating in light of the rebuttal. --- Rebuttal 3: Title: Hoping that our response could address your concerns Comment: Dear Reviewer fWzj, Thank you again for your time and effort in reviewing our work! We would appreciate it if you could let us know whether our response has addressed your concerns. As the end of the rebuttal phase is approaching, we look forward to hearing from you and remain at your disposal for any further clarification you might require. --- Rebuttal 4: Title: Dear Reviewer Comment: Dear Reviewer fWzj, as the discussion period is closing in several hours, we would like to know your feedback on our rebuttal and whether there are any additional questions. We are glad to answer them.
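As an illustrative note on the flow-matching sampling process highlighted in A6: a common parameterization regresses a velocity field along a linear interpolant and then integrates it at inference. This sketch assumes that rectified-flow-style linear interpolant, which may differ from the paper's exact CFM formulation; all names are illustrative.

```python
def cfm_training_pair(x0, x1, t):
    """Training target: a point on the linear path from noise x0 to data x1,
    and the constant velocity the network is regressed onto."""
    xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    v_target = [b - a for a, b in zip(x0, x1)]
    return xt, v_target

def euler_sample(x0, velocity_fn, steps=10):
    """Inference: integrate the learned velocity field from t=0 to t=1."""
    x, dt = list(x0), 1.0 / steps
    for k in range(steps):
        t = k * dt
        x = [xi + dt * vi for xi, vi in zip(x, velocity_fn(x, t))]
    return x
```

With the ground-truth constant velocity toward the data point, Euler integration recovers it exactly; a trained network would stand in for `velocity_fn`, conditioned on the audio and style prompt.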
Rebuttal 1: Rebuttal: # General Response We would like to thank the reviewers for their constructive reviews! Following the reviewers' comments and suggestions, we have performed additional experiments and revised the manuscript. We have also uploaded **4 new demo videos on the demo page** (please kindly refer to our original paper for the URL, due to the rebuttal policy). Here we summarize the revisions as follows: - As suggested by Reviewers fWzj and dHKC, we provide a demo video (**Rebuttal Demo 1** on the demo page) that compares our method with the baseline on 9 additional identities; the results show that our method has better data efficiency and lip-sync quality. - To predict head pose from audio, as suggested by Reviewer TKxi, we additionally train an audio-to-pose model, which follows the main structure of the ICS-A2M model proposed in the original paper. As shown in **Rebuttal Demo 2** on the demo page, our model can produce novel head poses that are coherent with the input audio. - As suggested by Reviewers TKxi and dHKC, we provide a demo video (**Rebuttal Demo 3** on the demo page) that compares our method with the baseline when driven by various OOD head poses; the results show that our method handles OOD poses well while the baseline cannot. - As suggested by Reviewers dHKC and fWzj, we provide a demo video (**Rebuttal Demo 4** on the demo page) that tests the talking-style mimicking ability of our method. By tuning the classifier-free guidance (CFG) scale during the sampling process of the ICS-A2M model, we further improve the style similarity of our flow-matching-based model. The results show that our method handles various style references well (6 prompts in the video).
- As suggested by Reviewers iWSN and fWzj, we have enriched the technical content, including how to make tradeoffs among multiple losses and a discussion of previous works that also explore training-efficient NeRF-based talking face generation, and improved the clarity of Figures 2/3 in the original paper. Please refer to the **attached one-page PDF** for details. Thanks again for the reviewers' great efforts and valuable comments, which have improved the soundness of the manuscript. We have carefully addressed the main concerns and provided detailed responses to each reviewer. We hope you will find the responses satisfactory, and we would be grateful to hear your feedback regarding our answers to the reviews. Pdf: /pdf/dba6fb9768c451663ec1d6aab336491de2a99bde.pdf
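The CFG-scale tuning mentioned for Rebuttal Demo 4 follows the standard classifier-free guidance combination rule; the sketch below is that generic formulation (illustrative, not code from the paper).

```python
def cfg_combine(v_uncond, v_cond, scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one. scale > 1 strengthens the
    (style) condition; scale = 1 recovers plain conditional sampling."""
    return [u + scale * (c - u) for u, c in zip(v_uncond, v_cond)]

print(cfg_combine([0.0], [1.0], 2.0))  # [2.0]
```

In a flow-matching sampler, the two arguments would be the velocity predictions with and without the style prompt, combined at every integration step.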
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Conditional Synthesis of 3D Molecules with Time Correction Sampler
Accept (poster)
Summary: This paper focuses on diffusion-model-based molecular inverse design (i.e., conditional molecular generation) and proposes a novel approach to address the inconsistency between the target distribution and the distribution after online guidance. Specifically, a time predictor is trained to predict the time of the manifold that a sample lies on, and a correction is then applied via Tweedie's formula to ensure that the generated molecules remain on the correct manifold. The proposed method shows satisfactory results on the QM9 dataset. Strengths: - The motivation is clear and the idea is novel. The inconsistency is indeed a challenge in online guidance of diffusion models. TCS and TACS solve this problem elegantly. - The authors have introduced extensive related work to help the audience better understand the literature. - The authors provided the necessary ablation studies in the appendix. Weaknesses: - The experiments are not sufficient. More experiments are needed (e.g., designing molecules with given substructures, as in EEGSDE). - The idea of TCS is good, and an experiment beyond molecules would further demonstrate the generalizability of the proposed approach. For example, a comparison with [1] is recommended, which the authors have also mentioned when introducing the exposure bias in diffusion models. Besides, I think the accuracy of the time predictor is highly dependent on the data itself. It is suspected that this approach may not work in other fields. - The presentation of the paper needs to be improved. There are some obvious typos (e.g., line 63). This is just a kind reminder and does not impact my rating of this paper. [1] Ning, M., Li, M., Su, J., Salah, A.A. and Ertugrul, I.O., 2023. Elucidating the exposure bias in diffusion models. arXiv preprint arXiv:2308.15321. Technical Quality: 3 Clarity: 2 Questions for Authors: See the above weaknesses. Besides, - as this paper focuses on online guidance of diffusion models for continuous variables,
what about versions of TCS and TACS for discrete variables, if applicable? - EEGSDE and some other related works that optimize the properties of generated molecules (e.g., [1]) cannot optimize the number of atoms, which is also important in molecule design. Do the authors have any ideas about this? (This is only a related question and does not impact my rating.) [1] Zhou, Xiangxin, Liang Wang, and Yichi Zhou. "Stabilizing Policy Gradients for Stochastic Differential Equations via Consistency with Perturbation Process." arXiv preprint arXiv:2403.04154 (2024). Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the limitations in terms of how the guidance signal is computed. However, I wonder whether the use of computational quantum chemistry methods works as the authors mentioned. My concerns lie in the differentiability required by guidance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive and helpful comments. We have addressed your comments below. We appreciate your positive comments on the: - Clear motivation and novel idea for a challenging problem. - Extensive related works for the audience. - Necessary ablation studies. We address your concerns below. --- **W1** Additional experiments on other tasks. Thank you for the suggestion. In Table R2 of the uploaded pdf, we provide the results of our methods (TCS/TACS) on target substructure generation, following the setting of EEGSDE, and on another dataset, GEOM-DRUG (Table R3). TACS outperforms EEGSDE in similarity scores while maintaining high stability, and achieves higher stability and validity than the baselines even on the more complex GEOM-DRUG dataset. For a detailed analysis of these results, please refer to the discussion in [GR1]. --- **W2** Generalizability beyond molecules / comparison with [1]. We appreciate your insightful suggestion. As detailed in [GR1], we have applied TCS to image generation on the CIFAR-10 dataset, and our method improves FID scores. Our results indicate that TCS effectively mitigates exposure bias across different domains. For instance, on the CIFAR-10 dataset, applying TCS alone improved the FID score from 3.353 to 2.663, while ADM-ES+TCS further reduced it to 2.179. This consistent improvement across the molecular and image domains suggests that our approach addresses fundamental aspects of exposure bias in diffusion models, rather than being data-specific. Methodologically, our approach differs from Epsilon Scaling [1] in that we focus on correcting the timesteps based on the variance predicted by the time predictor, while [1] scales the network output directly. This allows for more precise adjustments at each step, potentially leading to better-quality outputs across various domains. For a more detailed analysis of these results and their implications, please refer to [GR1].
--- **W3** Corrections. We sincerely appreciate your careful reading and for bringing this to our attention. We will thoroughly review and correct any typographical errors in the final version of our manuscript. --- **Q1** Online guidance for discrete variables. While our work focuses on 3D molecules with continuous variables, adapting TACS to discrete variables using zeroth-order gradient estimation (similar to our approach with synthetic data and VQE) presents an intriguing direction for future research. We will include a discussion of this potential extension in the limitations and future work section of the manuscript. --- **Q2** Optimizing the number of atoms. Exploring variable atom numbers is indeed a promising direction for future research, which we will mention in the final manuscript. TACS could potentially be extended to handle variable atom numbers, possibly by incorporating a learnable atom-number prediction step in the generation process. [1] Ning, M., Li, M., Su, J., Salah, A.A. and Ertugrul, I.O., 2023. Elucidating the exposure bias in diffusion models. arXiv preprint arXiv:2308.15321. [2] Zhou, Xiangxin, Liang Wang, and Yichi Zhou. "Stabilizing Policy Gradients for Stochastic Differential Equations via Consistency with Perturbation Process." arXiv preprint arXiv:2403.04154 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my current score. It would be promising to introduce the optimization of the number of atoms and extend this framework to the case of discrete variables. --- Rebuttal 2: Comment: Dear Reviewer ByqK, We are pleased if our response has addressed all of your concerns. If you have any further questions, please let us know. Your careful review and insightful comments on our paper are greatly appreciated. Thank you once again for putting in your valuable time and effort.
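As an illustrative sketch of the Tweedie step that underlies the online guidance discussed above: under the standard DDPM parameterization $x_t=\sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,\epsilon$, the clean sample is estimated in one shot from the noise prediction. This is the generic formula, not necessarily the paper's exact equivariant formulation.

```python
import math

def tweedie_x0(x_t, eps_hat, alpha_bar_t):
    """One-shot estimate of the clean sample x0 from a noisy sample x_t,
    given the network's noise prediction eps_hat at noise level alpha_bar_t."""
    s, n = math.sqrt(alpha_bar_t), math.sqrt(1.0 - alpha_bar_t)
    return [(x - n * e) / s for x, e in zip(x_t, eps_hat)]

# If eps_hat equals the true noise, the true x0 is recovered:
x0, eps, abar = [1.0], [2.0], 0.25
x_t = [math.sqrt(abar) * a + math.sqrt(1 - abar) * b for a, b in zip(x0, eps)]
print(tweedie_x0(x_t, eps, abar))  # close to [1.0]
```

Guidance gradients (e.g., toward a target property) are then evaluated on this estimated clean sample rather than on the noisy one.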
Summary: This study utilizes a predicted-time estimator to correct the data manifold during the guided generation of diffusion models for molecules, mitigating the discrepancy between the forward and reverse distributions. The authors show that by adjusting the noised sample according to the predicted time, as opposed to relying on the pre-defined time schedule, the guided generative process remains on the correct manifold, leading to higher generation quality on several conditional generation tasks. Strengths: This study offers useful insights into reducing the exposure bias of diffusion models using time correction. It also performs a comprehensive analysis and ablation studies on the proposed framework. The results could help future works on diffusion models for 3D molecules in enhancing generative quality with property guidance. In addition, the authors demonstrate the possibility of using quantum computing (instead of a data-driven classifier, which is the common practice) for the online guidance. Weaknesses: Though the proposed method and most results are solid, some important results and experimental details seem to be missing or incorrect. See "Questions". I'm willing to adjust my score if the questions are properly addressed. Technical Quality: 3 Clarity: 2 Questions for Authors: Major: 1\. Line 173: the time window size is an important hyperparameter, but its value is not provided. Furthermore, the impact of the time window size should be evaluated in the ablation. 2\. Appendix C seems incomplete. How is the function incorporated into the guidance? 3\. Some highlighted values in Table 1 are not the best values, and some values seem abnormally high or low. Please confirm. 4\. A previous work [1] directly searches the time window for a better match of the time-dependent variance. How does the proposed method compare to it? [1] Li, Mingxiao, et al. "Alleviating exposure bias in diffusion models through sampling with shifted time steps."
arXiv preprint arXiv:2305.15583 (2023). Minor: 1\. Several references to the Appendix need to be fixed, e.g.: line 264 should be B.4; line 149: the said comparison is not in the Appendix; line 248: the reference is broken and the said result is not in the Appendix. 2\. According to Fig. 4, the corrected time becomes very close to the actual time after 400. Appendix B.4 shows the molecule generation quality is highest if the time correction and OG start at 400. Is there any possible relation between these observations? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have properly addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive and helpful comments. We have addressed your comments below. We appreciate your positive comments that our work: - Offers useful insights into reducing the exposure bias of diffusion models. - Provides a comprehensive analysis and ablation studies that could help future works. - Demonstrates the possibility of quantum computing. We address your concerns below. --- **Q1** Size of the time window. We appreciate your feedback and agree that a sensitivity analysis of the time window size is an important point. We used $\Delta = 10$ for all of the experimental results reported in our original manuscript; this value was chosen as the one yielding the best MAE values. Here, we provide the experimental results for the target property $\alpha$ in Table R6 of the uploaded pdf. The results indicate that performance is relatively stable for $\Delta$ between 2 and 16. We appreciate this constructive suggestion and will include a detailed ablation study on the window size in our final manuscript for a clearer understanding of our method's behavior. --- **Q2** Online guidance using VQE. In each denoising step of the diffusion model, we first apply Tweedie's formula to estimate the clean molecule, then use VQE to calculate the ground-state energy of this predicted molecule by updating $\theta$ in $E(x,\theta) = \langle\psi(\theta)|H(x)|\psi(\theta)\rangle$. We then use a zeroth-order method with respect to the atom positions $x$ to obtain the gradient $\nabla_x E(x, \theta)$. This gradient is used as online guidance in our synthetic experiment. We will add this information to the Appendix in the final version of the paper. --- **Q3** Table 1 correction. Thank you for the feedback. Our original intention was to highlight the best conditioning performance (MAE) among methods with molecular stability above 80%, aligning with our research goal of achieving desired quantum properties while maintaining molecular stability and validity.
This is because molecules with lower stability are unlikely to exist in the real world (please refer to Figure 3, second row, of our manuscript for an example). However, we admit that this can confuse readers, and we have therefore decided to change the notation accordingly. In the updated table, we use bold for the best overall performance and color highlighting for the best performance with stability above 80%. We believe this better represents the trade-offs between our method and the baselines, while emphasizing the importance of molecular stability in practical applications. --- **Q4** Comparison with the Time Shift Sampler in [1]. Regarding the comparison with [1], we would like to clarify the key differences between TS [1] and our TCS/TACS: - Our method directly selects timesteps based on a trained time predictor, which has demonstrated robustness in our experiments. In contrast, [1] selects timesteps by calculating the variance of image pixels at each step and matching the noise level from the predefined noise schedule. - In [1], the corrected timestep is used directly at the start of the next step, while our approach keeps the predefined timestep after accurately estimating the clean sample using the corrected timestep. By taking every single diffusion step while carefully using the predicted time, TCS can generate samples closer to the target distribution. To further validate our approach, we provide additional experimental results comparing TS-DDPM [1] and TACS on the QM9 dataset with step size 10 in Table A.3 below. The consistent improvements across various quantum properties underscore the robustness of our approach.

[Table A.3] Comparison between TS ([1]) and TACS (ours)

| Property | TS MAE | TS Stab. (%) | TACS (Ours) MAE | TACS (Ours) Stab. (%) |
|----------|--------|--------------|-----------------|-----------------------|
| Cv | 1.066 | 74.89 | 0.659 | 83.6 |
| μ | 1.166 | 73.55 | 0.387 | 83.3 |
| α | 2.777 | 75.2 | 1.44 | 86.0 |
| Δε | 665.3 | 82.72 | 332 | 88.8 |
| HOMO | 371.8 | 72.74 | 168 | 87.3 |
| LUMO | 607.6 | 74.98 | 289 | 82.7 |

We will include this comparison in our final manuscript. --- **Q5** Additions and corrections to the Appendix. We appreciate your careful review, which has helped improve the clarity and completeness of our paper. In the final version, we will fix the links and add the missing information accordingly. Please refer to [GR2] for more details on these additions. --- **Q6** Possible relationship between the start of OG and Figure 4. Thank you for the insightful comment. We also think there might be a connection between the starting time of online guidance and the convergence of the predicted time to the ground-truth values shown in Fig. 4 of the main paper. We suspect this happens because applying online guidance is especially effective after the sample has converged to some degree, as pointed out in [2]. Investigating the exact relationship between this convergence and where to start OG would be an interesting future direction. [1] Li et al., Alleviating exposure bias in diffusion models through sampling with shifted time steps, arXiv 2023. [2] Han, Xu, et al. "Training-free Multi-objective Diffusion Model for 3D Molecule Generation." The Twelfth International Conference on Learning Representations. 2023. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response and extensive experiments. I will raise my score to 7. --- Rebuttal 2: Comment: Dear Reviewer 1ecU, We are glad to hear that our response properly addressed your concerns. Your careful review and insightful comments on our paper are greatly appreciated. Thank you once again for putting in your valuable time and effort.
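To make the zeroth-order guidance gradient described in Q2 concrete (used when the energy oracle, e.g. VQE, cannot be differentiated through), here is a central finite-difference sketch. The quadratic toy energy merely stands in for $E(x,\theta)=\langle\psi(\theta)|H(x)|\psi(\theta)\rangle$; names and step size are illustrative.

```python
def zeroth_order_grad(energy, x, eps=1e-4):
    """Estimate the gradient of a black-box energy w.r.t. positions x
    via central finite differences: no autodiff through the oracle needed."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((energy(xp) - energy(xm)) / (2 * eps))
    return grad

toy_energy = lambda x: sum(v * v for v in x)  # stand-in for the VQE energy
print(zeroth_order_grad(toy_energy, [1.0, 2.0]))  # approx [2.0, 4.0]
```

Each estimated gradient costs two oracle evaluations per coordinate, which is why such guidance is typically applied on the Tweedie-estimated clean sample rather than at every raw noisy state.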
Summary: This paper presents a framework for generating 3D molecules called Time-Aware Conditional Synthesis (TACS). The proposed approach combines conditional generation with adaptively controlled plug-and-play online guidance in a diffusion model to drive samples toward the desired properties while maintaining validity and stability. To prevent generated samples from deviating from the data distribution during the conditional generation process, the authors introduce a Time Correction Sampler to control guidance and ensure that the generated molecules remain on the correct manifold at each reverse step of the diffusion process. The authors compare their TACS results with Equivariant Diffusion Models (EDM) and Equivariant Energy-Guided Stochastic Differential Equations (EEGSDE). Strengths: This paper presents a framework for generating 3D molecules called Time-Aware Conditional Synthesis (TACS). The proposed approach combines conditional generation with adaptively controlled plug-and-play online guidance in a diffusion model to drive samples toward the desired properties while maintaining validity and stability. To prevent generated samples from deviating from the data distribution during the conditional generation process, the authors introduce a Time Correction Sampler to control guidance and ensure that the generated molecules remain on the correct manifold at each reverse step of the diffusion process. The authors compare their TACS results with Equivariant Diffusion Models (EDM) and Equivariant Energy-Guided Stochastic Differential Equations (EEGSDE). Weaknesses: It is unclear how to efficiently use the Time Correction Sampler and whether this method improves performance in other domains, such as image generation. Technical Quality: 3 Clarity: 3 Questions for Authors: Have the authors experimented with the TACS approach on other datasets? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: In this study, the authors have used a trained neural network to estimate the chemical properties of each molecule.
Using an exact computational chemistry-based method could improve the guidance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive and helpful comments. We have addressed your comments below. **W1** Generalizability of TCS. As detailed in [GR1], we have applied TCS to image generation using the CIFAR-10 dataset, demonstrating significant improvements in FID scores. These results show that our method generalizes well beyond molecular data. We believe our algorithm is applicable in other domains as well, since the design of TCS and TACS does not include any domain-specific knowledge. Investigating our method in various domains would be an excellent direction for future work. **Q1** Experiments on other datasets. As mentioned in [GR1], we have conducted experiments on the GEOM-DRUG dataset for molecule generation. Our method shows improved performance in generating molecules with molecular stability compared to previous baselines such as EEGSDE. This further demonstrates TACS's ability to generalize to more complex molecular datasets and tasks. We will add these results to the final version of our manuscript.
Summary: This paper proposes Time-Aware Conditional Synthesis (TACS), a method that aims to improve the robustness of property-conditioned diffusion models for 3D molecule generation. The key idea is to mitigate the exposure bias of the conditional denoising process by training a time prediction model that matches samples to the most likely marginal distribution of the forward process before applying online guidance via Tweedie's formula. Experiments on a synthetic dataset and QM9 show that TACS generates valid samples that match the conditioning label more closely than alternative methods. Strengths: * The paper introduces a promising approach to keep the generated samples aligned with the marginal distributions of the forward process and address the problem of exposure drift in conditional molecule generation. * The paper demonstrates that TACS performs better than several well-established baselines on the QM9 dataset, showing improvements in generating molecules with desired quantum properties while maintaining stability and validity. * The paper is well-written and clearly outlines the methodology and motivation behind the method. Weaknesses: * The method is only compared to well-established baselines on a single dataset (QM9). It would be good to evaluate the model on at least one other benchmark, to ensure that the results indicate a general trend and are not specific to the very distinct data distribution of QM9. * None of the quantitative empirical results in Section 5 include error bars or measures of statistical significance. * The main text contains multiple references to Appendix A.1, which I assume are incorrect links. I could not find the comparison with relevant work in [2] (referenced in line 149) or any details on the MAE distribution of samples below and above an 80% stability threshold (referenced in lines 227-229). Furthermore, the model performance analysis for $m>1$ MCMC samples referred to in line 248 seems to be missing from the Appendix. 
* Minor Point: The results in Table 1 and Figure 4 show that online guidance is often able to generate samples with much lower property MAEs at the expense of stability and validity. This tradeoff is discussed in the Ablation Studies paragraph, but it would be good to calibrate the claim that TACS "outperforms competitive baseline methods across all metrics", since online guidance refers to a published baseline model [1]. Technical Quality: 3 Clarity: 2 Questions for Authors: * Do the stability values reported in Table 1 refer to atom or molecule stability? * The online guidance results in Table 1 are sometimes much worse than those reported in [1], especially for $\mu(D)$, $\epsilon_\text{HOMO}$ and $\epsilon_\text{LUMO}$. Do you know what could cause that? * The paper mentions having to use a classification rather than a regression model to estimate the time step $t_\text{pred}$ because $p(t\vert\mathbf{x})$ cannot be estimated from a point estimate (line 147). However, Algorithm 1 then discards the information about the full distribution by taking the maximum likelihood estimate (I assume that's what $\operatorname{argmax}\phi(\mathbf{x}')$ in line 6 means). Would it be better to use $\mathbb{E}_{p(t\vert\mathbf{x}')}[t]$ instead? * Is the time predictor only trained on samples from the forward process? --- [1] Han, Xu, et al. "Training-free Multi-objective Diffusion Model for 3D Molecule Generation." The Twelfth International Conference on Learning Representations. 2023. [2] Kim, Beomsu, and Jong Chul Ye. "Denoising mcmc for accelerating diffusion-based generative models." arXiv preprint arXiv:2209.14593 (2022). Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors list the reliance on a potentially flawed predictive model for sample ranking as the main limitation. It would be good to also discuss any limitations of the method itself. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive and helpful comments. We have addressed your comments below. We appreciate your positive comments that our work - Introduces a promising approach to the exposure bias - Shows improvements compared to baselines on desired quantum properties while maintaining stability and validity - Is well-written and clearly outlines the methodology and motivation. We address your individual concerns below. --- **W1** Evaluation on other benchmarks As detailed in General Response-[GR1], we provide additional experiments on Geom-Drug and CIFAR-10, which demonstrate TACS's ability to generalize beyond QM9, addressing molecules with more complex structures (Geom-Drug) and even extending to image generation (CIFAR-10).

[Table A.1] TCS on CIFAR-10

|     | ADM   | ADM-ES | ADM-TCS (Ours) | ADM-ES+TCS (Ours) |
|-----|-------|--------|----------------|-------------------|
| FID | 3.353 | 2.213  | 2.663          | 2.179             |

--- **W2** Error bars and statistical significance Thank you for pointing out the missing error bars. We have included them in Table R1 of the uploaded PDF and will add them to our final manuscript. --- **W3** Missing information in Appendix A. We appreciate your careful review. In our final version, we will expand on several points as follows: **R3-1** Comparison with DMCMC [3] DMCMC uses a classifier to predict noise levels for MCMC on the product space of data and noise. In contrast, our time predictor directly predicts timesteps for correction in diffusion sampling itself. Moreover, while the purpose of noise prediction in [3] is fast sampling, our work uses the time predictor to accurately produce samples from the desired data distribution. **R3-2** Impact of MC numbers We will include a detailed analysis of the number of MC samples (line 248), demonstrating TACS's robust performance across different sample sizes. 
Unlike [1], where increasing samples from 5 to 10 more than doubled performance at increased computational cost, our method shows robust performance across different sample numbers. For details on these additions, we kindly refer to [GR2]. **R3-3** MAE distribution given the threshold. We will also clarify our discussion on MAE and molecular stability (MS) thresholds. The revised text will emphasize TACS's lower MAE compared to methods achieving MS above 80%, and its performance even for samples below this threshold. --- **W4** Calibrate the claim that "TACS outperforms competitive baseline methods across all metrics", since online guidance refers to a published baseline model [1]. We will revise our claim to "TACS outperforms competitive baselines when considering MAE, stability, and validity collectively," which more accurately reflects our results. The updated Table R1 of the uploaded PDF in the supplementary material supports this claim. --- **Q1** Stability values reported in Table 1. Stability values denote molecule stability. **Q2** Online guidance results in Table 1. Online guidance yields higher MAE for some properties such as $\mu$ compared to that reported in [1], and we suspect this stems from the sensitivity of the online guidance strength. First, the distribution of $\mu$ in the QM9 training dataset is sharp, as can be seen in Figure 6(a) of [2]. This, in turn, can make online guidance sensitive to its guidance strength, and we sometimes observe diverging gradients when naively applying online guidance. Additionally, we provide additional experimental results on the impact of online guidance strength for $\mu$ to support our claim. We will include the above results in the final version of the paper to aid the reader's understanding. **Q3** How to obtain the predicted time from the time predictor Thank you for an interesting question. 
We provide experimental results on expectation-based ($\mathbb{E}[{\phi(\boldsymbol{x})}]$) and argmax-based time prediction in the tables below. The results show that the performance remains robust in general, with some trade-offs: argmax-based prediction achieves slightly better MAE, while the expectation-based approach maintains higher molecular stability.

[Table A.2] Different types of time prediction. The upper table reports the MAE for the target condition; the lower table reports the molecular stability of the generated samples.

| Method | Cv | $\mu$ | $\alpha$ | $\Delta(\epsilon)$ | $\epsilon_{HOMO}$ | $\epsilon_{LUMO}$ |
|------------|--------|--------|--------|-------|-------|-------|
| Expectation| 0.7032 | 0.4511 | 1.559 | 351.7 | 181.8 | 334.3 |
| Argmax | 0.659 | 0.387 | 1.44 | 332 | 168 | 289.0 |

| Method | Cv | $\mu$ | $\alpha$ | $\Delta(\epsilon)$ | $\epsilon_{HOMO}$ | $\epsilon_{LUMO}$ |
|------------|--------|--------|--------|-------|-------|-------|
| Expectation| 84.67 | 90.18 | 87.3 | 90.71 | 89.84 | 90.11 |
| Argmax | 83.3 | 83.3 | 86.0 | 88.8 | 87.3 | 82.7 |

--- **Q4** Is the time predictor only trained on samples from the forward process? Yes, we trained the time predictor on samples from the forward process, as described in Line 142. Our work uses the time predictor to reduce the gap between the marginal distributions of the forward and reverse processes; it therefore provides good guidance toward the desired data distribution only when trained on noisy samples from the forward process. --- [1] Han, Xu, et al. "Training-free Multi-objective Diffusion Model for 3D Molecule Generation." The Twelfth International Conference on Learning Representations. 2023. [2] Bao, Fan, et al. "Equivariant energy-guided sde for inverse molecular design." The Eleventh International Conference on Learning Representations. 2022. [3] Kim, Beomsu, and Jong Chul Ye. "Denoising mcmc for accelerating diffusion-based generative models." arXiv preprint arXiv:2209.14593 (2022). 
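The two prediction rules compared above can be written down directly. Below is a minimal sketch over a softmax distribution $p(t\mid\boldsymbol{x})$ produced by a time classifier; the logits are illustrative, and in practice the expectation would be rounded to an integer timestep:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([0.1, 0.3, 2.0, 1.8, 0.2])  # classifier scores over T = 5 timesteps
p = softmax(logits)                           # p(t | x)
timesteps = np.arange(len(p))

t_argmax = int(np.argmax(p))                  # maximum-likelihood timestep
t_expect = float(np.sum(timesteps * p))       # posterior-mean timestep E[t | x]
```

With these illustrative logits, the argmax picks timestep 2, while the posterior mean is pulled slightly toward the nearly-as-likely timestep 3.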
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response. I believe that the additional experimental results for the GEOM-DRUG and CIFAR-10 datasets corroborate the reported performance gains on the QM9 dataset and strengthen the experimental section of the paper. I will update my score accordingly. --- Reply to Comment 1.1.1: Comment: Dear Reviewer VutT, We are glad to hear that our response properly addressed your concerns. Your careful review and insightful comments on our paper are greatly appreciated. Thank you once again for putting in your valuable time and effort.
Rebuttal 1: Rebuttal: General response Dear reviewers and AC, We sincerely appreciate your valuable time and effort in reviewing our manuscript. Your insightful feedback has been instrumental in improving our work. Our paper introduces Time-Aware Conditional Synthesis (TACS), a novel approach for conditional 3D molecular generation using diffusion models. To the best of our knowledge, this is the first work to systematically achieve the generation of molecules that simultaneously meet desired conditions, stability, and validity. As highlighted by multiple reviewers, TACS offers a principled and effective method (VutT, qUzm, 9Yep, 1ecU), unlike existing works on conditional molecular generation that focus solely on desired properties. Our approach has been thoroughly validated through extensive quantitative and qualitative experiments (VutT, qUzm, 1ecU) and presented comprehensively (VutT, qUzm, ByqK). We have carefully considered the multiple concerns raised by reviewers and have addressed them comprehensively as follows. We kindly request that you review our responses below, as well as the attached supplementary PDF file. --- **[GR1] Generalizability Beyond QM9 Dataset (VutT, 9Yep, ByqK)** To demonstrate the broad applicability of TACS, we conducted additional experiments on: 1. Generating molecules with target structures on the QM9 dataset (different task). 2. The Geom-Drug dataset for molecule generation, demonstrating performance beyond QM9's distinct distribution (bigger dataset). 3. Image generation on the CIFAR-10 dataset, showcasing generalizability to a different domain (other domain). For the different task, we conducted additional experiments on target structure generation as in [1]. We tested our algorithm (TACS) on the QM9 dataset and showed that TACS achieves a Tanimoto similarity score of 0.792±0.077 with 90.42% molecular stability, outperforming the reported and reproduced values of [1] by a large margin. This demonstrates TACS's ability to generalize to different tasks. 
We kindly refer the reader to Table R2 in our uploaded PDF file for more details. We also tested the Time Correction Sampler (TCS) on the Geom-Drug dataset on unconditional generation of 3D molecules. As can be seen in Table R3 in the attached PDF file, the results show improved molecular stability of the generated molecules. Finally, we tested our algorithm on an image dataset. On the CIFAR-10 dataset, our Time Correction Sampler (TCS) achieves improved FID scores (3.353 to 2.661) without any fine-tuning of the hyperparameters. The improvement in FID score shows the effectiveness of our time correction approach in mitigating exposure bias, a problem common to diffusion models across various domains. We believe these results altogether demonstrate that our method generalizes well beyond molecular data and indicate possible applications in other tasks, datasets, and domains. --- **[GR2] Additional Analyses and Clarifications** (VutT, 1ecU): We appreciate the reviewers' thorough examination of our work. In our final version, we will expand on several points to offer a more comprehensive analysis: (Line 149) Regarding the DMCMC [2] comparison, we will add a more detailed discussion in the Appendix. This will include key differences such as: DMCMC uses a classifier to predict noise levels for MCMC on the product space of data and noise, focusing on earlier generation stages. In contrast, our time predictor directly predicts timesteps for correction in diffusion sampling, aiming to produce samples closer to the desired data distribution. (Line 248) We will also include our analysis of experiments conducted on the effect of MC sample numbers. Unlike [3], where increasing samples from 5 to 10 more than doubled performance at increased computational cost, our method shows robust performance across different sample numbers. As shown in Table R4 of the supplementary material:

| Num. Samples | MAE   | Stab. (%) |
|--------------|-------|-----------|
| 1            | 1.44  | 86.0      |
| 5            | 1.395 | 86.35     |
| 10           | 1.505 | 82.21     |
| 15           | 1.545 | 83.04     |
| 20           | 1.468 | 86.76     |

These results indicate that TACS maintains consistent performance in terms of both MAE and molecular stability across different numbers of MC samples. This stability eliminates the need for higher m values, avoiding extra computational costs while maintaining accuracy. All these modifications and additional analyses will be reflected in our final manuscript accordingly. --- **Reference** [1] Bao, Fan, et al. "Equivariant energy-guided sde for inverse molecular design." The Eleventh International Conference on Learning Representations. 2022. [2] Kim, Beomsu, and Jong Chul Ye. "Denoising mcmc for accelerating diffusion-based generative models." arXiv preprint arXiv:2209.14593 (2022). [3] Han, Xu, et al., "Training-free Multi-objective Diffusion Model for 3D Molecule Generation," ICLR 2023. Pdf: /pdf/439d68ef160cc9fb3e689484d2166bda18fd1260.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line
Accept (poster)
Summary: This paper tries to achieve reliable TTA by tackling three bottlenecks: performance evaluation without labeled data, certain distribution shifts, and hyperparameter selection for TTA methods. Strengths: 1. TTA is an important topic, and this work is a relevant and timely contribution. 2. The experiments are extensive, covering various tasks comprehensively and convincingly. 3. The presentation is clear and the motivation is well-presented. Weaknesses: 1. The novelty is relatively limited; it is an application of Baek et al. (2022) to the TTA setup, and the limited theoretical conclusion is from Miller et al. (2019). 2. It might be necessary to evaluate on more complex tasks, such as object detection and semantic segmentation. Technical Quality: 3 Clarity: 3 Questions for Authors: Overall, the work is an important study of TTA; however, there are still some questions. 1. About Algorithm 1: In TTA, does updating the model parameters require multiple iteration steps? If it does, according to the basic idea of online algorithms, Algorithm 1 does not seem to reflect the model parameters being trained with different batches of data at different time steps. Could you provide a detailed explanation of the difference between the online algorithm you provided and the traditional offline algorithm? 2. Why is the inverse of the cumulative density function of the standard Gaussian distribution applied to the axes of accuracy and agreement? Are there any intuitions or theoretical explanations? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 1nU3, We greatly appreciate your thoughtful reviews of our paper. To address the concerns you raised, we made changes and clarifications as follows: + Clarified our paper's novelty, + Clarified notations in Algorithm 1, + Added the theoretical intuition for using the inverse of the cumulative density function of the standard Gaussian distribution. We address each concern with an individual response below: **1. The novelty is relatively limited; it is an application of Baek et al.[2] to the TTA setup. And the limited theoretical conclusion is from Miller et al.[1].** We would like to note that our paper does not just apply AGL to the TTA setup, but is the first to find the significant restoration of AGL under various distribution shifts, including those where the linear trends did not hold previously. Previous studies[1-4], including Miller et al.[1] and Baek et al.[2], did not include such restorations, limiting the broader applicability of AGL. In addition, while Miller et al.[1] only considered a very simple Gaussian setup for the sufficient condition of Theorem 1, our paper shows that TTA involving more complex data with non-linear deep networks satisfies the condition. **2. It might be necessary to evaluate on more complex tasks, such as object detection and semantic segmentation.** We appreciate the Reviewer's suggestion. Since it remains ambiguous how to define agreement between models' outputs when there is more than a single label, e.g., in object detection and semantic segmentation, our paper mainly focuses on image classification, following Baek et al.[2]. However, extension to such tasks could further extend the understanding of the potential of TTA in stronger linear trends. We will discuss this in our final manuscript. **3. Algorithm 1 does not seem to reflect the model parameters being trained with different batches of data at different time steps. 
Could you provide a detailed explanation of the difference between the online algorithm you provided and the traditional offline algorithm?** We apologize for the confusion regarding Algorithm 1. As mentioned by the reviewer, TTA updates the model parameters over multiple iterations, i.e., at each iteration using a different batch of unlabeled OOD data, as described in L6 of Algorithm 1. We use the notation $x_{\text{OOD}}$ to denote the current batch of OOD test data, which differs at every iteration. We will clarify this by using different notations for data at different time steps in the final version. The online algorithm in Algorithm 1 describes the online _test_, which is different from traditional testing. Traditional testing evaluates a model whose pre-trained parameters are fixed, while the online "test" evaluates a model whose parameters are continuously updated during TTA. In other words, the model tested at iteration $t-1$ is different from that at iteration $t$. Specifically, in Algorithm 1, L6 makes the only difference between traditional and online testing: L6 updates the model parameters at every iteration before testing in L7 and L8. **4. Why is the inverse of the cumulative density function of the standard Gaussian distribution applied on the axes of accuracy and agreement? Is there any intuition or theoretical explanation?** Great question! Let us revisit the simple Gaussian distribution setup described in Eq. 2 of our original manuscript, where the distribution $Q$ shifts from $P$ only in the scale of the mean direction and covariance shape: \ $P(x\mid y)=\mathcal{N}(y\cdot \mu; \Sigma)$, $Q(x\mid y)=\mathcal{N}(y\cdot \alpha\mu;\gamma^2\Sigma)$, \ where the label $y \in \\{-1,1\\}$ and $\alpha,\gamma$ are constant scalars. 
When we consider a linear classifier $f_{\theta}:x \mapsto \theta^{\top}x$, its accuracies on distributions $P$ and $Q$ are defined as \ $\text{acc}_P(\theta)=\Phi(\frac{\theta^{\top} \mu}{\sqrt{\theta^{\top} \Sigma \theta}})$ ,\ $\text{acc}_Q(\theta)=\Phi(\frac{\theta^{\top} (\alpha\mu)}{\sqrt{\theta^{\top} \gamma^2\Sigma \theta}})$, \ where $\Phi$ is the cumulative density function (CDF) of the standard Gaussian distribution. After applying the inverse of $\Phi$ to both accuracies, the ratio between the two is \ $\frac{\Phi^{-1}(\text{acc}_Q(\theta))}{\Phi^{-1}(\text{acc}_P(\theta))} = \frac{\theta^{\top} (\alpha\mu)}{\sqrt{\theta^{\top} \gamma^2\Sigma \theta}} / \frac{\theta^{\top} \mu}{\sqrt{\theta^{\top} \Sigma \theta}} =\alpha / \gamma$, \ which is a constant independent of $\theta$. This means the linear relationship between $\Phi^{-1}(\text{acc}_P(\theta))$ and $\Phi^{-1}(\text{acc}_Q(\theta))$ holds across linear classifiers. Therefore, in theory, with the simple Gaussian setup, the inverse of the CDF is required to exactly derive the linear relationship between ID and OOD. Applying the inverse of the CDF surprisingly improves the linear fit beyond simple Gaussian setups, even in more complex real-world distribution shifts, as extensively observed in Miller et al.[1], Baek et al.[2], and our paper. \ We hope these responses address the concerns, and let us know if there is any further feedback. 
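The constant ratio in this derivation can be checked numerically. Below is a small sketch under the toy assumptions of the Gaussian setup (the random `mu`, `Sigma`, and classifiers are illustrative choices of ours); the probit-transformed accuracies $\Phi^{-1}(\text{acc})$ are computed in closed form:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(0)
d = 10
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + np.eye(d)      # ID covariance (positive definite)
alpha, gamma = 0.5, 2.0          # scale shift: mu_Q = alpha*mu, Sigma_Q = gamma^2*Sigma

ratios = []
for _ in range(100):
    theta = rng.normal(size=d)   # a random linear classifier
    s = sqrt(theta @ Sigma @ theta)
    z_P = (theta @ mu) / s                      # Phi^{-1}(acc_P), in closed form
    z_Q = (theta @ (alpha * mu)) / (gamma * s)  # Phi^{-1}(acc_Q), in closed form
    # the raw accuracies Phi(z_P) and Phi(z_Q) differ, but the probits are proportional
    ratios.append(z_Q / z_P)

# every classifier lands on the same line through the origin with slope alpha/gamma
assert np.allclose(ratios, alpha / gamma)
```

Note that the slope here is 0.25, far from the $y=x$ line, yet the probit-probit relationship is perfectly linear across classifiers.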
>**References** \ > [1] Miller et al., Accuracy on the Line: On the Strong Correlation between Out-of-Distribution and In-Distribution Generalization, ICML 2021 \ [2] Baek et al., Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift, NeurIPS 2022 \ [3] Wenzel et al., Assaying Out-Of-Distribution Generalization in Transfer Learning, NeurIPS 2022 \ [4] Teney et al., ID and OOD Performance are Sometimes Inversely Correlated on Real-world Datasets, NeurIPS 2023 --- Rebuttal 2: Comment: Dear Authors, Thank you for your response. I maintain my initial positive rating. --- Rebuttal Comment 2.1: Comment: Dear Reviewer 1nU3, Thank you for your positive score, and we are glad that the clarifications were helpful. We will make sure to include such clarifications in our final manuscript.
Summary: This paper shows that using test-time adaptation improves the accuracy of OOD performance estimation algorithms based on agreement-on-the-line and accuracy-on-the-line. This is shown through a series of experiments. Some justification has also been provided based on prior work. Strengths: - OOD performance estimation using unlabeled data is a very important problem, and methods based on agreement-on-the-line and accuracy-on-the-line are the state-of-the-art methods. - The experimental details are very well presented, and the paper is very well written and clear. Weaknesses: - Intuitively, if we assume that TTA is *perfect*, then we expect ID and OOD accuracy to be the same (i.e., y = x); the model has been fully adapted to the OOD setting and there is no difference between ID and OOD. Also, the ID and OOD agreements should intuitively become the same (i.e., y = x). Is this understanding correct? It seems that there is a very basic reason behind the main observation of the paper. Because of this, I think Table 2 does not show a fair comparison; TTA is making models more similar and shifts all the lines towards the y = x line. Is the message of the paper as simple as this? I think this answers the question "What specific mechanisms within these adaptation methods cause the observed phenomena?" asked in the conclusion section. - In practice, we use OOD performance estimation with unlabeled data in settings where compute, budget, etc. are limited; thus we cannot collect new labeled data. Given this, is a method that is computationally expensive (that involves test-time adaptation) feasible? Also, note that this method does not work if we only have black-box access to a model (which is, again, typical in potential applications of AGL and ACL). - What is the condition of Theorem 1? It is written that "... correlation if with a bias of zero and a slope ...". Also, the condition in Eq. (2) is very strong. 
See my comments below for other theoretical results on AGL and ACL that study the problem in other (I believe more realistic) settings where the covariance and mean can shift. - The literature review is missing many theoretical results on agreement-on-the-line and accuracy-on-the-line. In the past 1-2 years, there have been many theoretical results on these topics showing that these phenomena are features of high-dimensionality. For example: [1] N Tripuraneni, B Adlam, J Pennington, Overparameterization improves robustness to covariate shift in high dimensions, NeurIPS 2021. [2] D Lee, B Moniri, X Huang, E Dobriban, H Hassani, Demystifying Disagreement-on-the-Line in High Dimensions, ICML 2023. [3] D LeJeune, J Liu, R Heckel, Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization, JMLR, 2024. [4] A Sanyal, Y Hu, Y Yu, Y Ma, Y Wang, B Schölkopf, Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation, Arxiv 2024. - The papers above show that these phenomena are fragile; they show clear settings where accuracy-on-the-line and agreement-on-the-line break. - More important than the previous comment: Lee et al. (2023) show that the line for agreement and the line for accuracy can have different intercepts (at least in their toy model), although the lines have the same slope. This is seen when looking back at the plots of the original real-world experiments in Baek et al. This issue needs to be addressed before any real application of AGL in practice. There might be some fundamental issue with the AGL method. Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors should discuss the limitations I mentioned in the weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer ZucK, We greatly appreciate your thoughtful reviews of our paper. To address the concerns you raised, we made clarifications on: + The intuition behind the linear trend observations, + Our study's contribution under cost-limited scenarios, + The condition in Eq. 2 of our original manuscript, + The intuition behind the condition for the ACL guarantee and the potential failure of AGL. Detailed responses to each individual concern are below: **1. TTA is making models more similar and shift all the lines towards the y = x line. Is the message of the paper as simple as this?** We would like to clarify that these two phenomena are independent. A substantial gap between ID and OOD accuracies can remain after TTA, but models' performances may still be strongly correlated. For example, after applying TENT on ImageNet-C Shot Noise, there is still approximately a 40\% gap between ID and OOD accuracy, but strong AGL holds (see submission Fig. 1). In other words, strong AGL is not simply explained by the decreased gap between ID and OOD after TTA. In Section 4, we build a theoretical intuition of why TTA induces these strong ACL/AGL trends. **2. We use OOD performance estimation with unlabeled data in settings where compute, budget, etc. is limited; thus we cannot collect new labeled data. Given this, is a method that is computationally expensive feasible? Also, this method does not work if we only have black box access to a model.** Great question! TTA is generally very computationally cheap, and arguably much less resource-intensive than collecting labels, which usually requires expensive manual labor. Oftentimes only a very small number of parameters are adapted, making TTA far cheaper than methods like test-time training[5,6] or unsupervised domain adaptation[1-3] that require a separate pre-training procedure and heavy training. 
With a small amount of resources, our paper shows that TTA can significantly restore AGL, allowing TTA models to be safely deployed across various distribution shifts. While our method is not feasible with black-box access, non-black-box settings cover a wide variety of practical scenarios where our method is useful. Moreover, our method often significantly outperforms baselines that do work on black-box models, with estimation error smaller by a whole factor. **3. What is the condition of Theorem 1?** We apologize for the confusion. The conditions for Theorem 1 are detailed in Eq. 2. In particular, the class-conditioned distributions must be normally distributed and the distribution shift must only be a simple scale shift, i.e., the mean and covariance can change in magnitude but not their directions. Also, the classifiers $f_{\theta}$ must be linear with no bias. Under these conditions, ID versus OOD accuracy is perfectly linearly correlated with a slope of $\alpha / \gamma$. **4. The condition in Eq. (2) is very strong. Other theoretical results on AGL and ACL that study the problem in other settings where the covariance and mean can shift.** Thank you for the feedback! We emphasize that, while this assumption seems strong, our empirical study in Section 4 demonstrates that the “scale-shift” condition in Theorem 1 is almost perfectly satisfied via TTA. Indeed, this is precisely why our method induces strong linear trends in practice. Furthermore, the theoretical studies [7,8] that do allow the covariance shape to shift are asymptotic results that rely heavily on the dimensionality of the data and classifiers going to infinity. They explicitly note that their theory does not guarantee linear trends in finite dimensions (e.g., Figure 2 of [8]). Our paper also includes covariance shape shifts such as CIFAR10 vs. CIFAR10-C Gaussian Noise, where classifiers do not exhibit linear trends in finite dimensions. **5. 
The literature review is missing many theoretical results on AGL and ACL. The papers above show that these phenomena are fragile. This issue needs to be addressed before any real application of AGL in practice.** Thank you again for the feedback! We will make sure to include these works in our literature review. We do note that failure modes observed theoretically may not be consistent with common real-world data. For example, Lee et al.[9] show that in random feature linear regression, AGL does not hold on CIFAR10-C. However, for neural networks trained with logistic regression, both our paper and Baek et al. show strong AGL on CIFAR10-C. On the other hand, Sanyal et al.[10] point to AGL breaking under heavy label noise, which we don’t observe in many practical settings. Generally, we agree that AGL could be fragile. In fact, our paper directly addresses this problem, expanding, via TTA, the set of distribution shifts where AGL holds. We observe repeatedly across a broad range of distribution shifts that TTA improves ACL and AGL trends, and in all these scenarios the shifts reduce to “scale shifts”, thus satisfying the theoretically sufficient condition for ACL. \ We hope these responses address the concerns, and let us know if there is any further feedback. >**References** \ > [1] Ganin et al., Domain Adversarial Training of Neural Networks, JMLR 2016 \ [2] Sun and Saenko, Deep CORAL: Correlation alignment for deep domain adaptation, ECCV 2016 \ [3] Li et al., Domain generalization with adversarial feature learning. 
CVPR 2018 \ [5] Sun et al., Test-Time Training with Self-Supervision for Generalization under Distribution Shifts, ICML 2020 \ [6] Gandelsman et al., Test-Time Training with Masked Autoencoders, NeurIPS 2022 \ [7] Tripuraneni et al., Overparameterization improves robustness to covariate shift in high dimensions, NeurIPS 2021 \ [8] LeJeune et al., Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization, JMLR 2024 \ [9] Lee et al., Demystifying Disagreement-on-the-Line in High Dimensions, ICML 2023 \ [10] Sanyal et al., Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation, arXiv 2024 --- Rebuttal 2: Comment: I thank the authors for their detailed response. Below please find my response: 1. I'm still not convinced that these phenomena are independent. In the same experiment that you mention in the rebuttal, there is a significant improvement in the OOD errors after applying TENT. The OOD and ID accuracies are becoming closer. How do you argue that these phenomena are independent? There needs to be a systematic experiment that demonstrates that the observed phenomena cannot be simply explained by the decreased gap between ID and OOD by TTA. 2. The rebuttal resolves my concerns about the computational complexity, although I believe that this potential limitation should be further discussed in the paper. 3. Thanks for the clarification. 4. Thanks for the clarification. However, this response further strengthens my doubt that, most probably, the observed phenomena can be simply explained by the decreased gap between ID and OOD by TTA. As you discuss here, after TTA, the covariances and means align very well. 5. Thanks. I believe such discussion will strengthen the quality of the paper. 
--- Rebuttal Comment 2.1: Title: Further response to Reviewer ZucK's comments Comment: Dear Reviewer ZucK, We really appreciate your quick response to our rebuttal, and we will definitely include further theoretical discussion on ACL/AGL theory and the computational complexity of our method in our final manuscript. You were still concerned that the strong linear trend, which appears after TTA, is caused by model performances approaching the $y=x$ line. We clarify the subtle point below as to **why the closer ID and OOD gap is correlated, but not causal, to the strong linear trend**. As stated in Theorem 1, under distribution shifts that simply “scale” up or down the _norm_ of the mean and covariance, i.e., $\mu_{OOD} = \alpha \mu_{ID}$, $\Sigma_{OOD} = \gamma^2 \Sigma_{ID}$, classifiers can observe perfect linear trends with slope $\alpha / \gamma$. In Section 4, we showed that TTA satisfies this condition by aligning the “direction/shape” of the ID/OOD mean and covariance, while the scaling factor might still be far off, i.e., $\alpha \ll 1$, $\gamma \gg 1$. Note that the degree of “scale shift” can be large, and in fact, _the perfect trend can lie arbitrarily far from $y=x$_, i.e., $\alpha / \gamma \ll 1$. We see examples of this in our paper, such as TENT on ImageNet-C Shot Noise. On the other hand, any shift, however small, that moves the “direction/shape” of the mean or the covariance matrix breaks the perfect linear trend _in finite dimensions_ (Miller et al.[1]). As an intervention, let us imagine a “Scale TTA” which improves OOD performance by reducing “scale shifts” in covariance in the feature space, while the “shape” of the covariance matrices remains misaligned. We can synthetically simulate the effect of such TTA over toy Gaussian data, where the cosine similarity between the shape of the ID and OOD covariance matrices remains less than 1, and $\gamma^2$ decreases from 1 to 0.2 after Scale TTA. 
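Such a toy simulation could be sketched as follows (a minimal numpy sketch under our own illustrative parameters, not the exact setup of the rebuttal PDF). It uses the fact that, for Gaussian classes at $\pm\mu$ and a bias-free linear classifier $w$, the probit-transformed accuracy is exactly the signal-to-noise ratio $w^\top\mu/\sqrt{w^\top\Sigma w}$, so linearity can be checked analytically:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clf = 10, 500
mu = np.zeros(d); mu[0] = 2.0      # ID class means at +/- mu
Sigma = np.eye(d)                  # ID covariance
alpha, gamma = 0.5, 1.5            # "scale shift" factors as in Theorem 1

# A pool of random bias-free linear classifiers w.
W = rng.normal(size=(n_clf, d))

def probit_acc(W, mu, Sigma):
    # probit(accuracy) of each w on Gaussian classes at +/-mu, covariance Sigma
    signal = W @ mu
    noise = np.sqrt(np.einsum("ij,jk,ik->i", W, Sigma, W))
    return signal / noise

s_id = probit_acc(W, mu, Sigma)

# (a) pure scale shift: only the norms of the mean/covariance change
s_scale = probit_acc(W, alpha * mu, gamma**2 * Sigma)

# (b) shape shift: the covariance eigen-directions change, not just the norm
Sigma_shape = np.diag([4.0] + [0.25] * (d - 1))
s_shape = probit_acc(W, alpha * mu, Sigma_shape)

def fit(x, y):
    slope = np.polyfit(x, y, 1)[0]
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, r2

slope_scale, r2_scale = fit(s_id, s_scale)   # slope ~ alpha/gamma, near-perfect fit
slope_shape, r2_shape = fit(s_id, s_shape)   # strictly weaker fit
```

Under (a) the probit accuracies are proportional with slope exactly $\alpha/\gamma$, so the line is perfect however far it lies from $y=x$; under (b) the relation becomes nonlinear and $R^2$ drops.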
Empirically, we can see that the linear trend moves closer to $y=x$ (slope 0.66$\rightarrow$ 0.8), but the strength of the linear trend remains weak ($R^2$ 0.75$\rightarrow$ 0.63). This empirical evidence proves that improving OOD performance does not necessarily make the linear trend stronger. Overall, we see that ACL improves after TTA because TTA aligns the “shape” of the means/covariances, which is the true causal factor for _both_ stronger ACL and the linear trend moving closer to $y = x$. On the other hand, had TTA only reduced “scale shifts” and the “shapes” were unaligned, models would observe weak linear trends even though the slope moves closer to $y=x$. We hope this addresses your concern, and thus makes our paper’s observation and analysis more interesting to you. We will make sure to add the clarification in our final manuscript. Let us know if you have any questions! >**Reference** \ > [1] Miller et al., Accuracy on the Line: On the Strong Correlation between Out-of-Distribution and In-Distribution Generalization, ICML 2021 --- Rebuttal 3: Comment: I thank the authors for their response. My concerns about the paper are now resolved. However, I believe this discussion is necessary and adding it will significantly increase the quality of the paper. Under the condition that these discussion will be added to the paper, I increase my score to 7 and recommend acceptance. --- Rebuttal Comment 3.1: Comment: Dear Reviewer ZucK, Thank you for your updated score and we are glad that the clarifications were helpful. We will make sure to include the discussion in our final manuscript.
Summary: The paper presents a study on how test-time adaptation influences accuracy-on-the-line (ACL) and agreement-on-the-line (AGL). The authors empirically find that TTA methods significantly enhance ACL and AGL, enabling better OOD performance estimation. Extensive experiments support their findings. Strengths: 1. This paper investigates an under-explored but important observation about TTA: TTA can boost the AGL and ACL phenomena on out-of-distribution datasets. 2. The observed phenomenon is verified under diverse settings, including learning rates, the number of adaptation steps, batch sizes, and the early-stopped epoch for pretraining, etc. Weaknesses: 1. The scope should be revised. TTA encompasses more than just feature extractor adaptation; it also includes classifier adaptation such as T3A [1]. Additionally, it appears that the authors do not employ TTA methods with a memory bank [1-4]. Therefore, the paper should clearly specify which types of TTA methods support the observations. 2. The experimental setting is idealized. It seems that the paper only focuses on how TTA improves accuracy. However, TTA might fail under continuously changing distributions, especially in cases of small batch sizes (e.g., 1), or significant distribution shifts. The authors should provide a discussion of these scenarios. 3. The analysis in Section 4.2 is not surprising, since BN_Adapt and TENT only modify the $\gamma$ and $\beta$ in BN layers to make the output features similar to the target domain. It would be better to discuss whether other non-BN methods align with these results. 4. One key advantage of TTA is that it does not require source domain data. However, the AGL-based method requires it, which limits the contribution of this paper. [1] Iwasawa Y, Matsuo Y. Test-time classifier adjustment module for model-agnostic domain generalization. In NeurIPS. 2021. [2] Jang M, Chung S Y, Chung H W. Test-time adaptation via self-training with nearest neighbor information. 
In ICLR. 2023. [3] Yuan L, Xie B, Li S. Robust test-time adaptation in dynamic scenarios. In CVPR. 2023. [4] Wang S, Zhang D, Yan Z, et al. Feature alignment and uniformity for test time adaptation. In CVPR. 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Visualizing the change of the correlation coefficient as the model adapts to the test data flow could be more beneficial for understanding the contribution of the paper. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer JQG9, We greatly appreciate your thoughtful reviews on our paper. To address the concerns you raised, we included additional results and clarifications as follows: + Added T3A as a non-BN method, with its AGL visualizations and feature alignment analysis, + Clarified our paper's results on when TTA does not improve accuracy, + Clarified our method's robustness under an ID-data-limited setup, + Added visualizations of correlation coefficients during the progress of TTA. Detailed responses to each individual concern are below: **1, 3. TTA encompasses more than just feature extractor adaptation, but also classifier adaptation such as T3A. Also, it would be better to discuss whether other non-BN methods align with these results.** We appreciate the Reviewer's feedback! To address the concern, in rebuttal Figure 2 and Table 2, we include + T3A[1]'s AGL visualizations as well as feature alignment analysis on CIFAR10-C Gaussian Noise. Since T3A[1] is applied to non-BN networks, we train the networks without BN layers and apply T3A. In Fig. 2, we similarly observe that adapting only the last classifier with T3A (left) results in stronger ACL than models before adaptation (labeled "Vanilla w/o BN", right). Specifically, the correlation coefficient of ACL increases from 0.29 to 0.80. Still, T3A's correlation coefficients in both accuracy and agreement are relatively weak compared to BN-based methods (e.g., 0.80 and 0.46 in T3A vs. 1.0 and 1.0 in TENT). We also note that T3A's accuracy and agreement lines have different biases. However, while T3A does improve ACL trends, it does not seem to satisfy the same sufficient theoretical condition of ACL that BN-adaptation methods satisfy. As T3A only modifies the final classifier, the distribution shift from ID to OOD in the penultimate representation space remains more complicated than a simple scale shift. 
In Table 2 we reported the cosine similarity between the ID and OOD mean and covariance of Vanilla w/o BN, which is identical for T3A, since T3A does not update the feature extractor. The results in Table 2 show that their cosine similarities are approximately 0.78 (averaged over different architectures), with standard deviation exceeding 0.1. The cosine similarities are _much lower compared to those of BN-based TTA methods_ (e.g., TENT has 0.990$\pm$0.005 and 0.974$\pm$0.011 for mean and covariance). Overall, our method may also apply to non-BN adaptation methods such as T3A, but the theory in our work does not explain why stronger ACL trends appear in this setting. This is an important open question we hope to understand further in the future. Due to the limited timeline of the rebuttal, we could not examine other baselines mentioned by the Reviewer. **2. The paper only focuses on how TTA improves accuracy, but TTA might fail under continuously changing distributions, especially in cases of small batch sizes (e.g., 1), or significant distribution shifts.** We would like to clarify that our paper already explores scenarios where TTA degrades OOD performance, including: * Poor hyperparameter choices that lead to low ID and OOD accuracy, such as extremely small batch sizes (e.g., 1 on CIFAR10-C). However, these models still conform to the same linear ACL/AGL trends (Figs. 2 and 7 of the original submission). In fact, our theoretical explanation generalizes to such settings. In Table 2 of the original submission, we show that the features after TENT with a batch size of 1 also demonstrate near-perfect alignment (shown by near-zero standard deviation). * Real-world distribution shifts, e.g., CIFAR10.1, ImageNet-V2, FMoW-WILDS, where TTA hurts performance, but maintains linear trends (Fig. 3 of the original submission). 
Importantly, _AGL is restored regardless of whether TTA succeeds_ for any one hyperparameter choice; this is precisely why we can employ our method for hyperparameter optimization. **4. One key advantage of TTA is that it does not require source domain data. However, the AGL-based method requires it, which limits the contribution of this paper.** Our method works well at estimating OOD accuracy given just a small amount of ID data (i.e., 1-5% of the total), as demonstrated in Section D of our submission. In fact, even with limited data, our methods outperform other baselines, e.g., ATC and DoC-feat, which also require access to ID data for temperature scaling. The fact of the matter is that even recent TTA methods are highly sensitive to hyperparameters and their OOD performance is often unpredictable. Zhao et al.[2] recently pointed out this issue. Using a reasonably small amount of ID data, which is much easier to obtain than labeled OOD data, our method can greatly enhance the reliability of TTA methods in practical usage. **5. Visualizing the change of correlation coefficient as the model adapts to the test data flow could be more beneficial for understanding the contribution of the paper.** Great suggestions! In Figure 3 of our rebuttal PDF we've added a * Visualization of BN_Adapt, TENT, and ETA's correlation coefficients ($R^2$) of ACL as TTA progresses on CIFAR10-C Gaussian Noise and CIFAR100-C Glass Blur. For comparison, we also added the correlation coefficient of the Vanilla model at iteration=0. We observed that the strong correlation rapidly appears (e.g., 0.4 to 1.0 in TENT on CIFAR10-C Gaussian Noise) at the very beginning of adaptation (i.e., iteration=1) and remains high until the end of adaptation. Our finding shows that strong AGL induced by TTA appears in the very early stage of adaptation, and remains strong over time. \ We hope these responses address the concerns, and let us know if there is any further feedback. 
>**References** \ > [1] Iwasawa et al., Test-time classifier adjustment module for model-agnostic domain generalization. In NeurIPS 2021 \ [2] Zhao et al., On pitfalls of test-time adaptation, ICML 2023 --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your response. I maintain my positive rating. Best, Reviewer JQG9 --- Rebuttal 2: Comment: Dear Reviewer JQG9, Thank you for your positive score, and we are glad that the clarifications were helpful. We will make sure to include such clarifications in our final manuscript.
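As a rough sketch of the AGL-based estimation discussed in this thread (an ALine-S-style procedure in the spirit of Baek et al.'s agreement-on-the-line work; the function name and toy usage are our own assumptions, not the paper's actual implementation), the agreement line can be fit in probit scale without any OOD labels, and its slope and bias reused to map probit ID accuracy to a predicted OOD accuracy:

```python
import numpy as np
from statistics import NormalDist

probit = NormalDist().inv_cdf   # inverse CDF of the standard Gaussian
phi = NormalDist().cdf

def aline_estimate(id_accs, id_agrees, ood_agrees):
    """Hypothetical ALine-S-style sketch: agreement rates between model
    pairs need no OOD labels, so fit the agreement line in probit scale
    and reuse its slope/bias to predict OOD accuracy from ID accuracy."""
    x = np.array([probit(a) for a in id_agrees])
    y = np.array([probit(a) for a in ood_agrees])
    slope, bias = np.polyfit(x, y, 1)
    return [phi(slope * probit(a) + bias) for a in id_accs]
```

For instance, with hypothetical per-model ID accuracies and ID/OOD agreement rates, `aline_estimate([0.9, 0.95], [0.88, 0.93], [0.6, 0.7])` returns label-free OOD accuracy predictions for the two models; only labeled ID data is needed, matching the rebuttal's point that a small ID sample suffices.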
Summary: This paper presents observations that TTA models exhibit strong agreement-on-the-line (AGL) and accuracy-on-the-line (ACL) phenomena, which persist across a wide range of distribution shifts and models. Leveraging this observation, the authors apply methods to estimate OOD accuracy without labeled data and perform a hyperparameter selection task for a TTA model. The proposed methods are evaluated extensively, demonstrating their effectiveness in TTA settings. Strengths: 1. The identification of the AGL phenomenon in TTA models is interesting and novel. This observation is leveraged effectively to address significant challenges in the field, including performance evaluation and hyperparameter tuning. 2. The authors provide extensive experimental results to support their claims. The evaluation covers various TTA methods and datasets, demonstrating the robustness and generality of the proposed approach. Weaknesses: 1. The paper lacks a comprehensive comparison with the extensive body of literature on model selection methods under distribution shifts (e.g., i-iv below) that share the same goal of predicting OOD performance and thereby achieving better model selection. 2. While the paper examines the hyperparameter selection task for individual TTA methods, a more interesting and practical task would be to choose the best model among various TTA methods. The suggested metric might have a slope unique to each TTA method, potentially complicating the model selection task among different TTA methods. Addressing this issue would enhance the practical applicability of the proposed approach. 3. The paper is missing a core theoretical explanation of how TTA methods can result in a feature space that satisfies the sufficient condition in Theorem 1. Without this theoretical foundation, I have the impression that this paper could be interpreted as a simple application of Baek et al.'s and Miller et al.'s findings to the TTA setting. 
Providing a deeper theoretical insight would strengthen the contribution of the paper. i. Hu, D., Liang, J., Liew, J. H., Xue, C., Bai, S., & Wang, X. (2024). Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation. In NeurIPS. ii. Musgrave, K., Belongie, S., & Lim, S. N. (2022). Benchmarking validation methods for unsupervised domain adaptation. arXiv preprint arXiv:2208.07360, 2(6), 12. iii. Yang, J., Qian, H., Xu, Y., & Xie, L. (2023). Can we evaluate domain adaptation models without target-domain labels? a metric for unsupervised evaluation of domain adaptation. arXiv preprint arXiv:2305.18712. iv. Saito, K., Kim, D., Teterwak, P., Sclaroff, S., Darrell, T., & Saenko, K. (2021). Tune it the right way: Unsupervised validation of domain adaptation via soft neighborhood density. In ICCV. Technical Quality: 3 Clarity: 3 Questions for Authors: Kindly see the weaknesses part. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors partially discuss the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer puv1, We greatly appreciate your thoughtful reviews on our paper. To address the concerns you raised, we included additional results and clarifications as follows: + Comparisons with five existing model selection baselines, + Best TTA method selection results using our method, + Clarified our paper's novelty and the insight provided by the Section 4 analysis. We address each individual concern below: **1. The paper lacks a comprehensive comparison with the extensive body of literature on model selection methods under distribution shifts.** To address this concern, we add comparisons in Table 1 of the rebuttal PDF: + Hyperparameter selection with MixVal[1], Entropy[2], IM[3], Corr-C[4], and SND[5] on CIFAR10-C over all corruptions, ImageNet-R, and Camelyon17-WILDS. We evaluate methods across various hyperparameters including architecture, learning rates, early-stopped checkpoints, batch sizes, and adaptation steps. Our method consistently outperforms or is competitive against other baselines across datasets and hyperparameters, resulting in state-of-the-art performance on average. We noticed that current state-of-the-art UDA model selection methods, i.e., MixVal[1] or IM[3], perform well on CIFAR10-C and ImageNet-R, but they critically fail on Camelyon17-WILDS, e.g., validation error of 7.98\% in MixVal and 23.52\% in IM. In contrast, our method is consistently reliable across shifts, including those where other baselines fail, e.g., 0.62\% on Camelyon17-WILDS. The failure modes of existing baselines might come from their assumptions, e.g., low-density separation, that do not generalize to such distribution shifts. Overall, our method of inducing AGL using TTA is most reliable for model selection. **2. A more interesting and practical task would be to choose the best model among various TTA methods.** Great suggestions! Our method can be easily applied to select the best TTA strategy. 
Our unsupervised hyperparameter selection strategy not only effectively chooses the optimal hyperparameters for each TTA method, but, _using AGL-based estimators, can also closely predict the OOD accuracy of each method + hyperparameter choice pair_. These estimates can be used to select the overall best strategy. In Fig. 1 of the rebuttal PDF, we present a + Comparison of our OOD accuracy estimates for SHOT, BN, TENT, ConjPL, and ETA on CIFAR10-C over all corruptions. For each TTA method, we plot the true (GT) OOD accuracy on the x-axis, and our estimates on the y-axis. Plotted as "x" marks are TTA strategies paired with the optimal hyperparameters selected using our method. For each TTA strategy, we also report its OOD accuracy averaged over all hyperparameter choices and our average estimates of these accuracies, marked as "o". You can see that our method precisely estimates both the best and the averaged OOD performance of each TTA strategy (i.e., very close to the $y=x$ line). Notably, our estimates preserve the ranking of the TTA methods by their OOD accuracy, i.e., almost no reversed ordering on the $y$-axis compared to the $x$-axis. This indicates that our methods can be easily adapted for selecting the best TTA models overall. **3. Without this theoretical foundation, I have the impression that this paper could be interpreted as a simple application of Baek et al.'s and Miller et al.'s findings to the TTA setting.** Thank you for the feedback! Beyond our methodological contributions, we identify an interesting behavior where TTA methods reduce the distribution shift in each class distribution to just a scale shift in the representation space. This happens to be the theoretically sufficient condition for ACL as described in Miller et al. [6] and our Theorem 1. 
We take this even further in Table 1 of our original submission, where we show that if we measure the scale shift in the representation space and compute the theoretical slope, it closely matches the actual empirical slope of the ACL trend. We are the first to characterize this behavior of TTA methods, and this is a novel contribution on its own, unexplored by Miller et al. [6] or Baek et al. [7]. We provide rough intuition for why this may happen in BN_Adapt. In BN_Adapt, the data is standardized in each BN layer using the OOD 1st and 2nd moments calculated at test time instead of the ID statistics saved during training. Imagine that the final-layer features are the output of a BN layer. Before TTA, due to frozen BN stats, the ID features are roughly standardized to mean 0 and unit variance, while the OOD features have shifted mean and covariance. Then after TTA, the test-time BN stats also standardize the OOD features closer to mean 0 and unit variance. However, while BN_Adapt standardizes the _overall_ OOD distribution, BN may not be able to deal with scale shifts within each _class-conditioned_ distribution. We can follow up during the discussion period with a more precisely constructed argument if interested. We hope these responses address the concerns, and let us know if there is any further feedback. 
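The BN intuition above can be illustrated with a toy numpy sketch (synthetic Gaussian features standing in for network activations; all distributions and numbers below are made up for illustration): standardizing OOD features with their own test-time statistics aligns the overall moments, yet the class-conditional mean can remain scaled relative to ID.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 8

# Toy "ID features": two Gaussian classes at +/- mu with unit spread.
mu = np.zeros(d); mu[0] = 1.5
id_feats = np.concatenate([rng.normal(+mu, 1.0, (n, d)),
                           rng.normal(-mu, 1.0, (n, d))])

# Toy "OOD features": class means shrunk by alpha, spread grown by gamma.
alpha, gamma = 0.5, 2.0
ood_feats = np.concatenate([rng.normal(+alpha * mu, gamma, (n, d)),
                            rng.normal(-alpha * mu, gamma, (n, d))])
labels = np.array([0] * n + [1] * n)

def bn_standardize(x):
    # BN_Adapt-style: standardize with *test-time* batch statistics.
    return (x - x.mean(0)) / x.std(0)

ood_bn = bn_standardize(ood_feats)

# Overall OOD moments are now aligned to roughly mean 0 / unit variance ...
overall_mean_dev = np.abs(ood_bn.mean(0)).max()
overall_std = ood_bn.std(0).mean()

# ... but each *class-conditional* mean stays scaled down relative to ID.
id_class_norm = np.linalg.norm(id_feats[labels == 0].mean(0))
ood_class_norm = np.linalg.norm(ood_bn[labels == 0].mean(0))
```

In this toy setup the overall standardization succeeds, while the class-0 mean norm remains well below its ID counterpart: the residual shift is a scale shift within each class, which is exactly the sufficient condition of Theorem 1.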
>**References** \ > [1] Hu et al., Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation, NeurIPS 2023 \ [2] Morerio et al., Minimal-entropy correlation alignment for unsupervised deep domain adaptation, ICLR 2018 \ [3] Musgrave et al., Benchmarking validation methods for unsupervised domain adaptation, arXiv 2022 \ [4] Tu et al., Assessing model out-of-distribution generalization with softmax prediction probability baselines and a correlation method, 2023 \ [5] Saito et al., Tune it the right way: Unsupervised validation of domain adaptation via soft neighborhood density, ICCV 2021 \ [6] Miller et al., Accuracy on the Line: On the Strong Correlation between Out-of-Distribution and In-Distribution Generalization, ICML 2021 \ [7] Baek et al., Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift, NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in addressing my questions and conducting additional experiments. I believe the newly added experiments enhance the practical value of the findings presented in the paper. Considering the rebuttal and the comments from other reviewers, I have increased my score to 5. --- Rebuttal 2: Comment: Dear Reviewer puv1, Thank you once again for your thoughtful and detailed review. We kindly remind you of our rebuttal that include: * (**W1**) Adding a comparison to existing model selection baselines, * (**W2**) Demonstrating best TTA method selection, * (**W3**) Providing intuition on how TTA satisfies the condition in Theorem 1, and clarified novelty against Miller et al. and Baek et al., to address your concerns. We are happy to discuss with you further if you have any other questions or feedback! --- Rebuttal 3: Comment: Dear Reviewer puv1, Thank you for updating your score, and we are glad that the clarifications were helpful. We will make sure to include such results and clarifications in our final manuscript.
Rebuttal 1: Rebuttal: We greatly appreciate all four reviewers' valuable feedback and thoughtful suggestions. The reviewers highlighted the following strengths of our paper: * Our paper investigates the under-explored but novel and important observation of TTA inducing the AGL phenomenon (Reviewer puv1, Reviewer JQG9). * Our paper addresses significant and important challenges, such as OOD performance estimation and hyperparameter tuning, and demonstrates its effectiveness (Reviewer puv1, Reviewer ZucK, Reviewer 1nU3). * Our paper includes extensive experimental results to support our claims (Reviewer puv1, Reviewer JQG9, Reviewer 1nU3). An overall summary of the additional experiments, clarifications, and discussions is below: * We added clarifications on our paper's novelty compared to existing TTA studies and to Miller et al.[1] and Baek et al.[2] (Reviewer puv1, Reviewer 1nU3) * We additionally tested our method in comparison with other model selection baselines and in a best-TTA-method selection application (Reviewer puv1) * We discussed our theoretical insight on how TTA achieves features that satisfy the theoretical guarantee of ACL (Reviewer puv1) * We included additional analysis on how TTA induces strong AGL, by adding T3A as a new TTA baseline and tracking the correlation coefficient during TTA. 
(Reviewer JQG9) * We added clarifications on our observations in circumstances where TTA fails or access to ID data is limited (Reviewer JQG9) * We added clarifications on why the stronger AGL observed under TTA is interesting, and on TTA's feasibility in compute-limited scenarios for better OOD estimation (Reviewer ZucK) * We discussed our paper in relation to existing theoretical studies on ACL and AGL (Reviewer ZucK) * We added clarifications on missing details, including the conditions for the theoretical setup in Theorem 1, the concept of online testing in Algorithm 1, and the theoretical insight behind using the inverse CDF of the standard Gaussian distribution for a better linear fit (Reviewer ZucK, Reviewer 1nU3) To address the concerns and suggestions raised by the reviewers, we uploaded an additional PDF that includes figures and tables. PDF: /pdf/8a3a4d8711a5b85d49829b83fcace3222b8a2204.pdf
NeurIPS_2024_submissions_huggingface
2024
In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment
Accept (poster)
Summary: The paper introduces a novel approach termed "In-N-Out" for enhancing the performance of 3D object removal tasks by leveraging tuning-free latents alignment. The authors have addressed the challenges of geometric mismatches and color inconsistencies prevalent in existing methods. Strengths: The authors have conducted an extensive review of related work and clearly delineated the distinctions and connections between their work and existing literature. The explicit and implicit latents alignment proposed by the authors is an intriguing concept that could inspire work in related domains. Weaknesses: The paper exhibits signs of being finished in a rush, which affects the overall quality of academic writing, and is hard to follow. For example, 1. Line 172, Omega(·) is not defined 2. Formula (5), D_hat is not defined 3. Line 208, by replace -> by replacing The paper might have the potential to contribute to the field but requires significant revisions to improve the academic writing quality. The method involves several hyperparameters. The paper does not discuss the sensitivity analysis or the robustness of the method against variations in hyperparameters. Especially lambda_a, which plays an essential role in ILA. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. In the ELA module, latent z^T replaces color c for volume rendering. Will this compromise the view dependence in NeRF? In NeRF, the color depends on the position x and direction d, whereas in ELA, the z^T is merely reprojected to the image plane. 2. How do you choose lambda_a in ILA? Confidence: 5 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: The method involves several hyperparameters. The paper does not discuss the sensitivity analysis or the robustness of the method against variations in hyperparameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **1. Quality of academic writing.** Thanks for pointing out these typos. In line 172, Omega(·) is a decoder which is introduced in line 151, and $\hat{D}_{\phi}$ is the depth estimated by NeRF. We hope these typos do not affect the clarity of the ideas presented in this paper. - **2. Compromise of view-dependent effect in ELA.** Yes, we compromise the view-dependent effect of NeRF within the ELA module, using NeRF only as a geometry prior to propagate and aggregate initial latents. Due to the heuristic nature of diffusion models, incorporating view-dependent effects into their output remains elusive. - **3. Lack of sensitivity analysis.** Thanks for the constructive feedback. We have conducted four sensitivity analyses, regarding base view selection, $\lambda_a$ in ILA, subset selection, and $\lambda_\text{patch}$ for the patch loss. Due to the computational burden, we conduct the sensitivity analysis on six out of ten scenes with higher inpainting variability from the SPIn-NeRF dataset. ### (a) Base View Selection: To achieve better generalization, we propose to sample $n$ candidate views around the geometric centroid of the camera viewpoints and select the one with the highest similarity votes, automatically avoiding artifacts without human intervention. We used 5 candidate views, with similarity calculated using perceptual hashing. We tested our results under different settings (candidate numbers): 3, 5, 7, and 9. The selection algorithm proved to be robust, typically yielding the same base frame. However, another factor influencing this step is the random seed. Setting different seeds causes the 2D inpainting model to produce different results, leading to different base frames being selected. We tested our method under five different seeds, with final scores reported in the table below and additional qualitative results in Figure 2 of the rebuttal PDF. 
While different seeds affect the appearance of the masked region in the final NeRF, the consistency of multi-view inpainting remains robust, resulting in minimal variance in evaluation scores. We will explicitly clarify this point in the revision. >|Seed|LPIPS↓|MUSIQ↑|FID↓| >|------|---------|---------|-------| >| 1|0.46|46.61|264.91| >| 2|0.44|48.04|255.29| >| 3|0.44|46.47|262.09| >| 4|0.44|45.72|261.04| >| 5|0.46|48.65|258.50| >|*Avg*|*0.45*|*47.10*|*260.37*| >|*Std*|*0.01*|*1.21*|*3.657*| ### (b) $\lambda_a$ in ILA: To examine the effect of the hyperparameter $\lambda_a$ in ILA, we evaluated our method's rendering quality with varying $\lambda_a$ values. The metrics are reported in the table below. The results are consistent across different $\lambda_a$ values, indicating a relatively small effect. This conclusion is supported by qualitative results in Figure 3 of the rebuttal PDF, where larger $\lambda_a$ values produce slight variations in small regions, but the global structure and semantics are preserved. This stability is attributed to the significant role of the initial latent alignment in ELA, which effectively aligns the underlying inpainting structure, thereby maintaining low variability in appearance. Additionally, the self-attention layer, where cross-view attention is introduced, does not dominate the entire Stable Diffusion UNet. It is balanced by the presence of other (residual and linear) layers, ensuring cross-view attention does not override the signal during the denoising process. Hence we simply set $\lambda_a$ in our implementation. We will explicitly clarify this point in the revision. 
>|$\lambda_a$|LPIPS↓|MUSIQ↑|FID↓|
>|-------------|---------|---------|---------|
>|0.2|0.44|**47.11**|**261.62**|
>|0.4|**0.44**|46.76|264.91|
>|0.6|0.44|46.47|264.37|
>|0.8|0.45|46.33|265.10|
>|*Avg*|*0.44*|*46.67*|*264.00*|
>|*Std*|*0.01*|*0.35*|*1.62*|

### (c) Subset Selection:
We propose selecting a subset for stage 3 training based on the distribution of camera viewpoints. We evenly split the viewpoints into 12 groups based on the base view's camera space (evenly 2 on the x and y axes and 3 on the z axis) and select 50% of the views within each group according to perceptual hashing similarity to the base view. This approach avoids redundant views introducing supervision conflicts, while covering different viewpoints effectively. We evaluated our method with various selection percentages, as reported in the table below. The scores are close, indicating minimal differences for most scenes. For one complex scene with high frequencies, setting the percentage too low (0.2) yields artifacts due to insufficient viewpoint coverage, while setting it too high (0.8) introduces appearance conflicts due to variability in inpainted results. These results are visualized in Figure 4 of the rebuttal PDF.

>|Percentage|LPIPS↓|MUSIQ↑|FID↓|
>|------------|---------|---------|---------|
>|0.2|0.46|45.98|265.48|
>|0.4|*0.44*|46.32|264.9|
>|0.6|**0.44**|**47.11**|**261.62**|
>|0.8|0.45|*46.47*|*263.20*|
>|*Avg*|0.44|46.67|264.00|
>|*Std*|0.01|0.35|1.62|

### (d) $\lambda_{patch}$ in patch loss:
To assess the sensitivity of the patch loss multiplier $\lambda_{patch}$ in Eq. 10 of the main paper, we evaluated the method's performance using various $\lambda_{patch}$ values. The results in the table below show similar performance across different settings, with low standard deviation. However, setting $\lambda_{patch}$ too low or too high adversely affects performance.
$\lambda_{patch}$ is critical as it influences the extent of multi-view supervision on the NeRF; insufficient supervision leads to inadequate training, while excessive supervision causes conflicts. We set $\lambda_{patch}$ to 0.01 in our implementation for the best balance.

>|$\lambda_{patch}$|LPIPS↓|MUSIQ↑|FID↓|
>|-------------------|---------|---------|---------|
>|0.001|0.46|46.07|263.32|
>|0.005|0.45|47.08|262.43|
>|0.01|**0.44**|**47.11**|**261.62**|
>|0.05|0.47|44.93|265.31|
>|0.1|0.49|44.05|277.36|
>|*Avg*|*0.46*|*45.85*|*266.01*|
>|*Std*|*0.02*|*1.35*|*6.49*|

---
Rebuttal Comment 1.1: Comment: The author has conducted additional experiments, and the results now appear satisfactory. Therefore, I have decided to increase my score by one point.
---
Reply to Comment 1.1.1: Comment: Thank you for reconsidering the revised experiments and for acknowledging the improvements in the results. We appreciate your decision to increase the score and are grateful for your thorough feedback.
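For concreteness, the similarity-vote step described in (a) above could be sketched as follows. This is a toy illustration under our own assumptions: a simple 8×8 average hash stands in for whatever perceptual hash the authors actually use, and small grayscale arrays stand in for the candidate inpainted views.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Simple perceptual hash: block-average down to hash_size x hash_size,
    then threshold each cell at the global mean."""
    h, w = img.shape
    img = img[:h - h % hash_size, :w - w % hash_size]  # trim to a multiple of hash_size
    small = img.reshape(hash_size, img.shape[0] // hash_size,
                        hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def select_base_view(candidates):
    """Return the index of the candidate with the highest similarity votes,
    i.e. the largest number of matching hash bits summed over all candidates."""
    hashes = [average_hash(c) for c in candidates]
    votes = [sum((h == g).sum() for g in hashes) for h in hashes]
    return int(np.argmax(votes))
```

Because an outlier inpainting disagrees with every other candidate's hash, it collects few votes, which is how the voting automatically avoids artifact-laden base frames.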
Summary: The paper deals with the 3D inpainting problem in the NeRF setup with 2D diffusion models. To achieve multiview consistency, they take an "inpaint-outstretch" strategy: they first inpaint one key frame and, conditioned on it, generate a view-consistent inpainted image set. Finally, they use the multiview images to train the final NeRF model, together with the reconstruction losses. The method is compared to a few recent works (SPIn-NeRF, NeRFiller, InFusion) and achieves better results both qualitatively and quantitatively. Strengths: - The method is interesting and seems effective. They improve the consistency of multiview generation in two ways. The first is explicit latent alignment (ELA), which initializes the latent from the inpainted base frame. The second is ILA, which post hoc modifies the attention layers of Stable Diffusion models by replacing the key and value with the ones from the inpainted base frame. ILA does not need finetuning. - The method is evaluated on the SPIn-NeRF dataset and the performance gain over existing methods is quite large, both quantitatively and via a user study. - There is also an ablation study that shows the importance of ELA, ILA, and the patch-based losses applied to the generated image sets. - The paper is well written and easy to follow. Weaknesses: - My biggest confusion is about the explanation of ILA. - I don't fully understand why replacing the KV with the base-frame KV would encourage consistency. I would expect the style to be consistent but not the local geometry, where the latter is more important for 3D reconstruction. - What is the rationale of replacing KV with the "prior" p, but not Q? - The algorithm relies on a pre-defined base frame p. How is the base frame chosen, and how sensitive is the algorithm to the chosen frame? Technical Quality: 3 Clarity: 3 Questions for Authors: - Is Eq (10) applied to Stage 3? If so, what is the purpose of the prior loss L_prior?
Once the inpainted base image is used to generate a multiview training set, I would expect this information to be unnecessary? Can this term be merged into L_patch? - In L216, what is the subset? Why not inpaint the whole set of images and how much does it affect the results? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **1. Explanation of ILA** (a) The reviewer's intuition about ILA is entirely correct. ILA primarily contributes to appearance (color) consistency. Geometry consistency is achieved by ELA (the alignment of initial noise), since this element serves as a foundational (semantic) structure for the inpainting. This is verified in our ablation study in Figure 6 of the main paper. Removing ELA results in geometry mismatch (blurry boundaries), while dropping ILA achieves geometry alignment but causes color mismatch issues (blurry leaves). This is why we propose ELA and ILA together, to ensure both geometry and appearance consistency. (b) The rationale for replacing $KV$ with the "prior" $p$, but not $Q$, is that the appearance information (**$V$**) of the prior image should be considered when inpainting the other views, with the amount of information propagated weighted by the attention key (**$K$**). The attention query comes from the current inpainting view $i$, $Q_i$, representing the information the inpainting of view $i$ is searching for. Together with $K_p$, it decides how much attention the view-$i$ inpainter should place on the prior view, and finally incorporates the prior view information $V_p$ into view $i$. We will elaborate on these points in the revised version of the paper.
- **2. Discussion of selecting base frame.** To achieve better generalization, we propose sampling the base frame based on the geometrical centroid of the training camera poses. However, Stable Diffusion occasionally inpaints strange artifacts in the masked region. To mitigate this, we sample $n$ candidate views around the centroid of the camera viewpoints and select the one with the highest similarity votes, automatically avoiding artifacts without human intervention. In our implementation, we used five candidate views, with similarity calculated using perceptual hashing.
We tested our results under different settings (candidate numbers): 3, 5, 7, and 9. The base frame selection algorithm proved to be robust, typically yielding the same base frame. However, another factor influencing this step is the random seed. Setting different seeds causes the 2D inpainting model to produce different results, leading to different base frames being selected. We tested our method under five different seeds, with final scores reported in the table below and additional qualitative results in Figure 2 of the rebuttal PDF. While different seeds affect the appearance of the masked region in the final NeRF, the consistency of multi-view inpainting remains robust, resulting in minimal variance in evaluation scores. We will explicitly clarify this point in the revision.

>| Seed | LPIPS ↓ | MUSIQ ↑ | FID ↓ |
>|------|---------|---------|-------|
>| 1 | 0.46 | 46.61 | 264.91 |
>| 2 | 0.44 | 48.04 | 255.29 |
>| 3 | 0.44 | 46.47 | 262.09 |
>| 4 | 0.44 | 45.72 | 261.04 |
>| 5 | 0.46 | 48.65 | 258.50 |
>| *Avg* | *0.45* | *47.10* | *260.37* |
>| *Std* | *0.01* | *1.21* | *3.657* |

_Table 1: Sensitivity analysis on the prior inpainting results and prior view selection. Results are evaluated on the SPIn-NeRF dataset with different random seeds._

- **3. Eq (10) applied to Stage 3.** Yes, Eq. 10 is the final loss for Stage 3, designed to alleviate noise and inconsistency. We thank the reviewer for this insight and fully understand the confusion. The prior loss $L_\text{prior}$ is an L2 loss first proposed in Stage 2. We keep this term in Stage 3 because we believe the base view does not require patch-based optimization. As discussed in the limitations section, achieving perfect 3D consistency is challenging, so the remaining views may still have subtle inconsistencies. To preserve high-frequency details, we use the patch loss instead of the exact-match L2 loss.
We regard the base view as a baseline that does not exhibit inconsistency; thus, we do not apply the patch loss to it.
- **4. Discussion of Subset.** We found that for reconstruction tasks, more views can enhance quality; however, for generation tasks, using the entire set of images introduces unnecessary inconsistencies. Therefore, we propose selecting a subset based on the distribution of camera viewpoints. We evenly split the viewpoints into 12 groups based on the base view's camera space (evenly 2 on the x and y axes and 3 on the z axis) and select 50% of the frames within each group according to perceptual hashing similarity to the base view. This approach avoids redundant views introducing supervision conflicts, while covering different viewpoints effectively. We also evaluated our method with different selection percentages, as reported in the table below. The quantitative scores are quite close, indicating minimal differences for most scenes. For one complex scene with high frequencies, setting the percentage too low (0.2) yields artifacts due to insufficient viewpoint coverage, while setting it too high (0.8) introduces appearance conflicts due to variability in inpainted results. These results are visualized in Figure 4 of the rebuttal PDF.

>| Percentage | LPIPS ↓ | MUSIQ ↑ | FID ↓ |
>|------------|---------|---------|---------|
>| 0.2 | 0.46 | 45.98 | 265.48 |
>| 0.4 | *0.44* | 46.32 | 264.91 | <!-- Used italics to simulate underlining -->
>| 0.6 | **0.44** | **47.11** | **261.62** | <!-- Bold for emphasis -->
>| 0.8 | 0.45 | *46.47* | *263.20* | <!-- Used italics to simulate underlining -->

*Table 2: Sensitivity analysis on the proportion of images selected for the subset.*

Overall, for most scenes, the subset selection algorithm is robust due to the consideration of the viewpoint distribution. For extreme cases, careful selection of the percentage might be necessary. However, values between 0.5 and 0.7 remain a reliable choice.
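The grouping-and-selection scheme above could look roughly like this pure-NumPy sketch. The camera positions and base-view similarity scores here are toy placeholders, and the 2×2×3 binning is taken from the description; the real implementation details are the authors' own.

```python
import numpy as np

def select_subset(cam_pos, sim_to_base, keep_frac=0.5, bins=(2, 2, 3)):
    """Split viewpoints into axis-aligned bins (12 groups for bins=(2,2,3))
    and keep the top keep_frac most base-similar views inside each bin."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    sim_to_base = np.asarray(sim_to_base, dtype=float)
    # normalize each axis to [0, 1] and quantize into the requested bin counts
    lo, hi = cam_pos.min(axis=0), cam_pos.max(axis=0)
    unit = (cam_pos - lo) / np.where(hi > lo, hi - lo, 1.0)
    idx3 = np.minimum((unit * bins).astype(int), np.array(bins) - 1)
    group = idx3[:, 0] * bins[1] * bins[2] + idx3[:, 1] * bins[2] + idx3[:, 2]

    keep = []
    for g in np.unique(group):
        members = np.flatnonzero(group == g)
        k = max(1, int(round(keep_frac * len(members))))
        # rank members of this bin by similarity to the base view, descending
        best = members[np.argsort(-sim_to_base[members])[:k]]
        keep.extend(best.tolist())
    return sorted(keep)
```

Binning before ranking is what guarantees viewpoint coverage: a globally greedy top-50% pick could discard an entire region of the camera sphere, which the per-group quota prevents.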
We will carefully revise the manuscript to include these details.
---
Rebuttal Comment 1.1: Comment: Thanks for the rebuttal that clarifies my confusion. The additional ablations are helpful. I believe incorporating them will substantially strengthen the paper. Can you further clarify what are the two components in L169 and the prompt e in L199?
---
Rebuttal 2: Comment: Thank you very much for your feedback. Regarding L169, the two components described are the input prompt $e$ and the masked image $I_p^{\prime}$, as detailed in L170. Figure 2 illustrates the masked image $I_p^{\prime}$. The prompt $e$ is simply set as the description of the background we aim to inpaint, which is scene-dependent. In L199, the prompt $e$ is defined the same as in L169; we employ a unified prompt for each scene. It's important to note that we did not involve any prompt engineering in our work, and we use the same prompt for each scene across all the methods in the experiments. We will provide further clarification on this point in the revised version.
---
Rebuttal Comment 2.1: Comment: Thanks for the clarification. I'm willing to raise my score and recommend acceptance.
---
Reply to Comment 2.1.1: Comment: Thank you very much for your updated feedback and for considering an increase in your score. We greatly appreciate your support and recommendation for acceptance.
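A minimal sketch of the KV-replacement idea discussed in point 1 of the rebuttal above (queries from the view being inpainted, keys and values from the prior view) might look like the following. The projection matrices and feature tokens here are random placeholders, not Stable Diffusion's actual attention weights.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ila_attention(feat_i, feat_p, Wq, Wk, Wv):
    """Attention for view i with keys/values taken from the prior view p:
    Q_i encodes what view i is looking for, K_p weights the prior tokens,
    and V_p carries the prior view's appearance into view i."""
    Q = feat_i @ Wq   # queries from the view being inpainted
    K = feat_p @ Wk   # keys from the prior (base) view
    V = feat_p @ Wv   # values from the prior (base) view
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return attn @ V
```

Because each output row is a convex combination of the prior view's value rows, the inpainted view can only blend appearance information that already exists in the base frame, which matches the intuition that ILA enforces color consistency rather than local geometry.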
Summary: This paper presents how to tackle the challenge of 3D object removal by involving a 2D diffusion prior. The approach involves pretraining a NeRF with an inpainting prior and then jointly optimizing it with latent alignment to align feature priors. Strengths: 1. Introducing a 2D diffusion prior as a solution for the 3D inpainting task is an innovative and interesting approach. 2. The incorporation of latent alignment to align feature priors is a promising design. Weaknesses: 1. Although the generated results outperform the listed baseline methods, the paper lacks a comparison with other baselines, such as "Reference-guided controllable inpainting of neural radiance fields" (ICCV 2023), which achieves good results in novel view synthesis. 2. The presentation of the paper could be improved; for example, it is unclear how the patches in the patch-based loss (Eq. 9) are obtained. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In Eq. 5, does the first term indicate the use of ground truth images (with the object removed) for supervising the mask region? If so, are there any concerns regarding potential information leakage? 2. Regarding efficiency, does the multi-stage training approach in this paper have a comparison with baseline methods in terms of computational efficiency? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The author provides a comprehensive discussion of the broader impacts in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for providing these valuable comments.
- **1. Lacks a comparison with other baselines.** Thank you to the reviewer for pointing this out. Unfortunately, the method "Reference-guided Controllable Inpainting of Neural Radiance Fields" (ICCV 2023) is not open-sourced. However, this method, as stated in the introduction of our paper, operates similarly to the baseline method (InFusion), i.e., using a single reference view to serve the entire scene. These approaches assume that the geometry inferred by the radiance field (NeRF or Gaussian Splatting) is almost accurate, thus relying heavily on the geometric prior. From the results of InFusion in Figure 4 of the main paper, we can observe that the accuracy of the geometry cannot be guaranteed from scene to scene. For instance, InFusion excels in the first and third rows but exhibits some disconnection of the primitives in the second and fourth rows. Similar evidence can be found in our newly collected dataset as well, as in Figure 1 of the rebuttal PDF. These findings highlight the importance of multi-view supervision for 3D generation and inpainting tasks. Consistent multi-view supervision can mitigate the reliance on geometric priors and improve the robustness and accuracy of the inpainting results. By incorporating multi-view supervision, our method still works in circumstances where such depth/geometry is not accurate, achieving promising results.
- **2. How to get the patches.** Thank you to the reviewer for the constructive feedback. We will revise the paper to include more complete details in the revision. The patches are uniformly sampled within the bounding box of the mask, and the patch size we used is 256×256. Therefore, only the content within the bounding box (mostly the inpainted area) is being optimized by the patch loss.
- **3. Eq. 5 ground truth supervision.** Thank you to the reviewer for the comment.
There is no ground truth supervision for the masked region. We used the official train-test split provided by SPIn-NeRF. In the training set, the masked region contains the unwanted object, while the test set contains the ground truth background in the masked region. We only optimize our radiance field on the training set, ensuring no information leakage. We will explicitly clarify this point in the revision.
- **4. Computational efficiency.** Thank you to the reviewer for the valuable feedback. We provide a comparison of training time in the table below. InFusion ranks first in efficiency due to the high rendering efficiency of Gaussian splatting and the simplicity of single-view optimization. The three NeRF-based multi-view methods (SPIn-NeRF, NeRFiller, and ours) exhibit similar optimization efficiency. Our method has a slight advantage in efficiency due to the sampled prior image and the relatively consistent multi-view images, which facilitate faster convergence.

>| Method | Supervision | Training Time (min) |
>|------------|-------------|---------------------|
>| SPIn-NeRF | Multi-view | 71 |
>| NeRFiller | Multi-view | 65 |
>| InFusion | Single-view | **17** |
>| Ours | Multi-view | *49* |

_Table 1: A comparison of training time._

---
Rebuttal 2: Comment: Dear Reviewer uTYn, I hope this message finds you well. We are grateful for the time and attention you've already dedicated to reviewing our work. We have submitted a rebuttal addressing the concerns raised, and we would greatly appreciate any additional feedback or comments you might have. As the discussion period is drawing to a close soon, we are eager to know if our rebuttal addresses your concerns or if there is any additional feedback you might have. Thank you very much for your continued support and guidance. Best regards, Authors of Submission 4061
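The patch-sampling step described in point 2 above (uniform sampling within the mask's bounding box) could be sketched as follows. The 64-pixel patch size and the toy inputs are our own placeholders so the example runs on small arrays; the paper uses 256×256 patches.

```python
import numpy as np

def sample_patches(image, mask, num_patches=4, patch=64, rng=None):
    """Uniformly sample square patches whose centers lie inside the mask's
    bounding box, clamped so every patch stays within the image."""
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    H, W = image.shape[:2]
    out = []
    for _ in range(num_patches):
        cy = rng.integers(y0, y1 + 1)   # uniform center inside the bbox
        cx = rng.integers(x0, x1 + 1)
        ty = int(np.clip(cy - patch // 2, 0, H - patch))
        tx = int(np.clip(cx - patch // 2, 0, W - patch))
        out.append(image[ty:ty + patch, tx:tx + patch])
    return out
```

Since centers are drawn only from the bounding box, the patch loss concentrates its supervision on the inpainted region, as the rebuttal states.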
Summary: This paper proposes to solve the multi-view or 3D inpainting problem. Given multiple posed views and the masks in each view specifying an object to be removed, it first trains a NeRF on the unmasked regions. To inpaint the masked regions, a seed view is selected and inpainted with a pre-trained diffusion inpainting pipeline. Depth is also predicted for this seed view and, together with the RGB values, fused into the NeRF representation. To propagate the features from this view to other views for NeRF supervision, it proposes explicit and implicit feature alignment methods. For the explicit one, it propagates the initial latent noise to another view via a depth/density-based aggregation method. For the implicit one, it uses a method similar to reference-only control, i.e., extracting intermediate key/value pairs from the seed view and injecting them into other inpainting views via cross-attention. Strengths: (1) Experiments show that properly propagating the anchor view to other views with the proposed methods yields good multi-view consistency. More notably, propagating the initial noise to other views is novel and interesting. Weaknesses: (1) The proposed method seems much simpler (or even over-simplified) than prior works such as NeRFiller and InFusion. The former uses a synchronized multi-view inpainting technique which is similar to the ILA of this paper. The latter introduces new diffusion-based depth completion. Both prior methods have used a depth prediction model, and InFusion has shown that using a pre-trained depth model and aligning it with NeRF/Gaussian depth is sub-optimal. For qualitative comparisons, there is little or only marginal difference between this paper and InFusion, while InFusion/NeRFiller has shown many more results than only the scenes included in this paper. I suspect that the result difference is due to the choice of base components (i.e., inpainting pipeline, depth estimation model, etc.)
which cannot be considered as the novelty of this paper. (2) As I understand, the three stages run sequentially, i.e., train unmasked NeRF (Stage 1) -> update the NeRF with the seed view and depth (Stage 1) -> update the NeRF with novel inpainted views guided by the seed view (Stages 2 then 3), although Equation (10) shows a total loss function. - This way, the proposed ELA approach seems over-complicated. After stage 1, there is no NeRF information in the masked region, and there is only one view with depth provided to supervise the masked area. There seems to be no need to use the NeRF density to propagate the feature from the reference inpainted view to other views. An alternative could be using the depth prior and the epipolar line to aggregate features. (3) Prior works show both object insertion and removal tasks, while the proposed method seems applicable only to object removal tasks, as it heavily relies on the initial inpainting and depth estimation result. Moreover, the examples shown in this paper don't include complex structures and only show simple backgrounds; inconsistency caused by possible occlusion or partial observation has not been tested. Overall, this paper compares the results with previous generative scene completion methods in a very limited scenario, which I believe is unfair. (4) The artifacts are obvious in the result videos, which can be caused by the noise/artifacts in the depth estimation model. More evidence is needed to make this paper stronger. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations addressed; however, the author has not mentioned that the proposed method is only tested in a limited scene case. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback. Below we provide answers to the reviewer's concerns:
- **1. Method over-simplified, and marginal difference compared to prior works.** We would like to argue that while our motivation may appear straightforward, the solution we propose is both non-trivial and novel. The multi-view inpainter NeRFiller achieves consistent inpainting results by averaging noise predictions derived from multiple iterations. This approach tends to produce smooth results, as demonstrated in Figure 4 of the main paper and Figure 1 of the rebuttal PDF. In contrast, our method explicitly and implicitly aligns latents without sacrificing high-frequency details. InFusion relies on a single reference view and its depth to represent the entire scene. This reliance becomes problematic when the assumption of accurate depth is violated. As shown in Figure 4 of the main paper, misaligned geometry leads to disconnected primitives (second and fourth rows). Similar evidence can also be seen in Figure 1 of the rebuttal PDF. By incorporating consistent multi-view supervision, our method remains effective even when depth or geometry is inaccurate, achieving robust and promising results. **This explains why our method shows little difference from InFusion when the geometry is accurate** (first and third rows of Figure 4 in the main paper), but excels when the depth is inaccurate. We further collected nine scenes with manually annotated masks to evaluate the effectiveness of our method. This dataset includes four indoor scenes and five outdoor scenes. We conducted the experiment and evaluation under the same protocol as in the main paper, using the same inpainting pipeline for all methods as mentioned in the paper. The quantitative results are reported in the table below and the qualitative results are shown in Figure 1 of the rebuttal PDF. We will include this evaluation with more details in the revised manuscript.
>| Method | LPIPS ↓ | MUSIQ ↑ | FID ↓ |
>|-----------|---------|---------|--------|
>| NeRFiller | 0.68 | 19.43 | 399.19 |
>| InFusion | 0.47 | 31.35 | 319.59 |
>| Ours | **0.35** | **37.22** | **250.63** |

_Table 1: Quantitative results on the newly collected dataset._

In summary, although our motivation is simple, i.e., using consistently inpainted multi-view images to optimize the radiance field, our approach to achieving consistent results is both novel and effective without relying on any geometric assumptions.
- **2. ELA approach seems over-complicated.** Thank you to the reviewer for providing valuable comments. First, the reviewer's understanding of the three stages is totally correct. However, there are two key reasons why we propose fine-tuning the NeRF and using it as a geometric prior for ELA:
>(a) After fine-tuning the NeRF, the geometry is represented by the NeRF as a sharp (low-variance) unimodal distribution along the ray. Consequently, the aggregated feature remains sharp, preserving the variations in the initial latents.
>(b) As we discussed in the paper, we empirically found that the depth prior inferred by the monocular depth estimator is not perfectly aligned with the NeRF. Fine-tuning the NeRF can also benefit this depth prior. Since NeRF learns relatively certain geometry in the known (unmasked) areas, this geometry constraint can improve the geometry of neighboring inpainted (masked) areas due to their geometric proximity.

We will explain these points in more detail in the revised version of the paper.
- **3. Limited scenario.** While prior works have been proposed for more general cases, our motivation focuses on a relatively smaller scope, specifically "object removal." As outlined in the paper, we have found that this task is extremely challenging due to the requirement for scene-level generation, which demands consistent multi-view supervision of a scene rather than just an object. Current approaches still exhibit some limitations in this regard.
In contrast, there are many works focusing on 3D object-level generation using multi-view diffusion models. Therefore, we believe that tasks such as object insertion and completion should follow this more promising direction, which we have decided to leave as future work. However, in response to the reviewer's comments, we have further tested our method on three 3D object completion scenes, which are already included in the nine scenes mentioned in our first response (marginal difference compared to prior works). The results, shown in the rebuttal PDF file Figure 1 (first two rows), indicate that our method still performs promisingly on this complex task. Despite this, we continue to believe that inpainting tasks for objects might be more effectively addressed using multi-view diffusion or video diffusion techniques. We will elaborate on these points in the revised version of the paper. - **4. Artifacts are obvious in the result video.** We have carefully examined our implementation of the method and identified that the artifacts are caused by the depth loss used. Initially, we borrowed the loss function proposed in "Depth-supervised NeRF: Fewer Views and Faster Training for Free (CVPR 2022)", which applies KL divergence to the rays' density distribution for sparse view reconstruction. We found that this loss is relatively sensitive and tends to produce more noise in under-constrained areas (with no multi-view supervision). To address this issue, we have replaced our depth loss with a more stable L2 loss, which has largely eliminated the artifacts. The revised implementation now produces significantly cleaner results. --- Rebuttal 2: Comment: Dear Reviewer S5oa, I hope this message finds you well. We are grateful for the time and attention you've already dedicated to reviewing our work. We have submitted a rebuttal addressing the concerns raised, and we would greatly appreciate any additional feedback or comments you might have. 
As the discussion period is drawing to a close soon, we are eager to know if our rebuttal addresses your concerns or if there is any additional feedback you might have. Thank you very much for your continued support and guidance. Best regards, Authors of Submission 4061
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their thoughtful and constructive feedback on our manuscript. Below, we summarize the modifications made to the manuscript based on your comments.
- To Reviewer S5oa
  - We have clarified the motivation and contribution compared to prior works.
  - We have explained the rationale behind our ELA approach.
  - We added experiments to further demonstrate the effectiveness of our method, showing promising results for complex tasks like object completion, as shown in Figure 1 of the attachment.
  - The artifacts in the videos have been discussed and resolved.
- To Reviewer uTYn
  - We have clarified the missing comparison with other baselines.
  - We have detailed the technical process of obtaining patches.
  - We have addressed the concern of potential information leakage.
  - We have provided a comparison of computational efficiency.
- To Reviewer f6e3
  - We have elaborated on the intuition behind ILA in more detail.
  - We have discussed our algorithm for selecting the base frame, including a sensitivity analysis and a following discussion.
  - We have explained the motivation for retaining the loss on the prior image in Eq. 10.
  - We have discussed our motivation and methodology for selecting subsets, including a sensitivity analysis and a following discussion.
- To Reviewer P7tn
  - We have clarified the notation confusion in the paper.
  - We have clarified the compromise of the view-dependent effect in ELA.
  - We have conducted four sensitivity analyses on different aspects of our method, providing deeper insights into our approach.

We appreciate the reviewers' insights, which have been invaluable in refining our work. Pdf: /pdf/ab4ee96e9ad7f90521fc969fdb8476636c4be25b.pdf
NeurIPS_2024_submissions_huggingface
2024
Achieving Optimal Clustering in Gaussian Mixture Models with Anisotropic Covariance Structures
Accept (oral)
Summary: This paper states a hard-association EM algorithm for Gaussian mixture estimation, where the Gaussian components can be different and anisotropic. They also state theoretical bounds on the misclustering error rate. Strengths: Originality: The theoretical association bound for different and anisotropic clustering is novel. Quality: The contribution is technically sound. Claims are very well supported by theoretical analysis and also some experimental results. Methods used are appropriate. It is a complete piece of work. Clarity: The submission is clearly written and well organized. It adequately informs the reader and states the proposed algorithms in concise form to be easily reproduced. Significance: Key contributions to the community are the theorems and very detailed and elaborate proofs about the error rate that can theoretically be achieved in anisotropic clustering. Overcoming the constraint of isotropic components (or anisotropic but identical components) is a major step. Weaknesses: Originality: A related work is "Frisch, Hanebeck, Gaussian Mixture Estimation from Weighted Samples" (not telling you to include this, just check if it's relevant; its soft-EM algorithm may be similar to your hard-EM). Suggestions: - in line 108 give some intuition about the loss function, e.g. "ratio of mis-associations". - mention that Model 1 is equivalent to isotropic components in a linearly transformed space with the root of the covariance (if that's correct) Typos: - line 102 displayed as follow*s* - line 136: "Figure 1" should be in the same line, use \sim or the cleveref package - line 150 computational*ly* feasible - Theorem (and elsewhere): too big whitespace in "exp (". Use "exp\!(" or \mathopen{}, \mathclose{} instead. - SNR has a non-math font in formulas and is sometimes italic and sometimes not - References: check capitalization, e.g. [11] PCM etc.
I recommend: \usepackage[style=numeric-verb, maxbibnames=20, sorting=none, eprint=false]{biblatex} Technical Quality: 4 Clarity: 3 Questions for Authors: What about individual weights (mixing proportions) of the Gaussian components? E.g. from line 271, must the 30 clusters each have 1200/30=40 samples? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Limitations are stated but maybe should be summarized in a dedicated section. - "decent" initial guess required. What does that mean? Were you able to reliably get the rate-optimal result without taking into account prior knowledge about the ground truth? At least state the qualitative dependencies, like "decent" initial guesses are more restricted for lower SNR / overlapping components, fewer samples, higher dimensions, more components etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable comments and remarks. > Originality: A related work is Thank you for pointing this out. Our hard-EM algorithm indeed shares similarities with the soft one. Specifically, our algorithm modifies the E-step of the soft-EM by implementing a maximization step that hard assigns data points to clusters instead of calculating probabilities. We will make this clearer in the final version of the paper. > Suggestions: Thank you for pointing these out. We will surely address them in the final version. > What about individual weights (mixing proportions) of the Gaussian components? In our numerical study, we initially opted for equal cluster sizes for simplicity, but our model and analysis are fully capable of accommodating variable cluster sizes. To better demonstrate the flexibility and applicability of our approach, we will have a variety of cluster sizes in the numerical settings in the final version of the paper. > "decent" initial guess required Thank you for the question. In our manuscript, the term 'decent initial guess' refers to the need for initial cluster centers to be sufficiently close to the ground truth so that our algorithm achieves the rate-optimal result. This is because our theoretical analysis requires the initialization to be within a specific proximity to the true parameters. In the final version of our paper, following your suggestion, we will explicitly detail the dependencies to provide a clearer understanding of when and how our algorithm can be expected to perform optimally.
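The hard-assignment E-step described in this reply (assign each point to the cluster maximizing its Gaussian log-density under that cluster's own covariance, then re-estimate each cluster's mean and covariance) can be sketched in a few lines. This is a minimal pure-Python illustration on toy 2-D data with anisotropic clusters, not the paper's Algorithm 2; all names are ours:

```python
import math
import random

def inv2(S):
    # Inverse and determinant of a 2x2 matrix.
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    return inv, det

def log_gauss(x, mu, S):
    # Log-density of N(mu, S) at x, up to the additive constant -log(2*pi).
    inv, det = inv2(S)
    d = [x[0] - mu[0], x[1] - mu[1]]
    q = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
         + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return -0.5 * (q + math.log(det))

def hard_em(X, mus, covs, iters=20):
    # Hard-assignment EM: the E-step is a maximization (hard assignment),
    # the M-step re-estimates per-cluster means and covariances.
    k = len(mus)
    z = [0] * len(X)
    for _ in range(iters):
        for j, x in enumerate(X):
            z[j] = max(range(k), key=lambda a: log_gauss(x, mus[a], covs[a]))
        for a in range(k):
            pts = [X[j] for j in range(len(X)) if z[j] == a]
            if not pts:
                continue
            m = [sum(p[0] for p in pts) / len(pts),
                 sum(p[1] for p in pts) / len(pts)]
            c = [[0.0, 0.0], [0.0, 0.0]]
            for p in pts:
                d = [p[0] - m[0], p[1] - m[1]]
                for r in range(2):
                    for s in range(2):
                        c[r][s] += d[r] * d[s] / len(pts)
            c[0][0] += 1e-6  # keep the covariance well-conditioned
            c[1][1] += 1e-6
            mus[a], covs[a] = m, c
    return z, mus, covs

# Toy data: two well-separated clusters with different anisotropic shapes.
random.seed(0)
X = ([(random.gauss(0, 1.5), random.gauss(0, 0.3)) for _ in range(200)]
     + [(random.gauss(6, 0.3), random.gauss(6, 1.5)) for _ in range(200)])
z, mus, covs = hard_em(X, mus=[[0.5, 0.5], [5.0, 5.0]],
                       covs=[[[1.0, 0.0], [0.0, 1.0]],
                             [[1.0, 0.0], [0.0, 1.0]]])
truth = [0] * 200 + [1] * 200
err = min(sum(zi != t for zi, t in zip(z, truth)),
          sum(zi == t for zi, t in zip(z, truth))) / len(X)  # label-permutation error
```

The "decent initial guess" assumption discussed above shows up here as the centers `[0.5, 0.5]` and `[5.0, 5.0]` being started near the true means.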
Summary: This paper analyzes the minimax error rate in clustering anisotropic Gaussian mixture models. The authors establish a universal lower bound in two different models: different means with a covariance matrix shared by every cluster (Model 1), and different means with a different covariance matrix for every cluster (Model 2). For both models, they prove that a simple iterative algorithm (a variant of Lloyd's) attains the minimax rate. Strengths: Strong paper. The technical proofs are impressive and the results quite general, with a minimal set of assumptions (clusters all not too small, dimension $O(\sqrt{n})$, covariances well conditioned; the only strong assumption is on the initialisation). Weaknesses: * The paper misses some high-level intuition, and several proofs are tough to parse. For example, the error decomposition with the terms F, G, H etc. appears mysterious and a bit out of nowhere. While the ideal error has a natural explanation (at least when delta = 0), I did not find it properly mentioned. For example, line 435 embodies exactly my feeling while reading the proofs: "We next define a quantity that refers to as the ideal error"; yes, this quantity is important, yet the authors never explicitly say why, leaving the reader to understand by himself why this is indeed an ideal error. * The interpretation of the minimax rate for model 2 is quite complex. Because it involves hypothesis testing, one should naively expect to find the Chernoff information. Quick computations show that the Chernoff information between N( $\theta_1, \Sigma$ ) and N( $\theta_2, \Sigma$ ) equals $\frac{1}{8} \| \theta_1 - \theta_2 \|^2_{\Sigma}$ and indeed recovers the minimax rate of Model 1. The expression is more complex for Model 2. That could help interpret the quantity SNR' (which is not truly a signal-to-noise ratio, as it is hard from the expression of SNR'_{ab} to tell where the signal, the noise and the ratio are...).
* The assumption on the loss of the initialisation is quite strong (\hat z^{(0)} should already be a consistent estimator). Notably, the authors consider spectral clustering, which does not scale as well as iterative methods such as Lloyd's. Moreover, spectral clustering is rate-optimal in an isotropic mixture model. A natural question is: what happens if one takes the best over 10 runs over random initialisation \hat z^{(0)} (or something more clever such as k-means++)? Numerical simulations could show whether a strong condition on the initialisation is indeed needed or not. In particular, Paper [1] had weaker conditions on the initialisation (but stronger conditions on everything else). * In large dimensions (say d >> n), the lower bound derived here cannot be attained by algorithms agnostic of the model parameters. While this is obviously out of the scope of the paper, the authors should mention it more clearly (key reference 14 is a bit diluted in the intro; other recent work such as [2], albeit posterior to the authors' submission, could also appear). [1] Lu, Y., & Zhou, H. H. (2016). Statistical and computational guarantees of Lloyd's algorithm and its variants. arXiv preprint arXiv:1612.02099. [2] Even, B., Giraud, C., & Verzelen, N. (2024). Computation-information gap in high-dimensional clustering. arXiv preprint arXiv:2402.18378. Minor comments: * line 377: "where we use k = O(1)". No, you use SNR / log k >> 1. * Typo in line 189: the "with probability 1-exp(-0.08)" should be 1-exp(-0.08n) according to [12, Proposition D.1]. This typo appears several times (Corollary 2.1, 3.1 at least) and is important to correct. * line 290: I assume the log is in base 10. * Line 641: typo in the ref. * Line 661: I don't think the SE(a,b) is properly defined (namely, a proper definition to explain what the a and b in SE(a,b) exactly stand for). Technical Quality: 4 Clarity: 4 Questions for Authors: * Is the assumption SNR / log k \gg 1 needed?
It seems to appear in the lower bound because of the choice of the space \mathcal{Z}. Would a more refined analysis remove this assumption? (By this comment, I am specifically referring to the paragraph "If our goal is only to obtain a lower bound ... there exists a much simpler way" in [1].) I am not sure if this assumption is needed in the upper bound (for example Lemma A.5 for the ideal error does not seem to require it?). * Line 42 the sentence: 'From a minimax perspective, the least favorable scenario among all sub-Gaussian distributions with variance proxy \sigma^2—and thus the most challenging for clustering—is when the errors are distributed as N (0, \sigma^2 I)". Does it mean that the SNR obtained in Model 2 is always larger than the SNR of Model 1 (with the same centres \theta_a but all covariances equal to, say, \Sigma_1)? * I am not sure how to obtain the second-to-last inequality of the chain of inequalities starting at line 473. I get that Cauchy-Schwarz is applied, but it leads to something slightly different (more precisely, I don't know how to obtain the $\| \sum_{a} 1(z_j = a) \epsilon_j \epsilon_j^T \|$, as Cauchy-Schwarz gives $\| \epsilon_j \|$). I may be missing a simple one-liner showing that $\sum_{a} 1(z_j = a) \| \epsilon_j \| = \| \sum_{a} 1(z_j = a) \epsilon_j \epsilon_j^T \|$. * I do not understand the last statement "Future work will explore broader covariance structures and refine these methods to further bridge theoretical robustness with computational feasibility in complex clustering scenarios." For the first part of the sentence, I believe Model 2 already encompasses the broadest covariance structure (besides the assumption of \Sigma^* being well-conditioned). For the second part, if the authors mean robustness to noise, they should cite recent work addressing this in isotropic Gaussian mixture models (for example [1,2]). Ref: [1] Anderson Y. Zhang. Harrison H. Zhou. "Minimax rates of community detection in stochastic block models." Ann.
Statist. 44 (5) 2252 - 2280, October 2016. https://doi.org/10.1214/15-AOS1428 [2] Liu, Allen, and Ankur Moitra. "Robustly learning general mixtures of Gaussians." Journal of the ACM 70.3 (2023): 1-53. [3] Patel, D., Shen, H., Bhamidi, S., Liu, Y., & Pipiras, V. (2023). Consistency of Lloyd's Algorithm Under Perturbations. arXiv preprint arXiv:2309.00578. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: * Albeit this is already a long and dense paper, I would have enjoyed a section motivating anisotropic mixture models in real-data examples. For example: standard Lloyd versus Algorithm 1 versus Algorithm 2. I believe Algorithm 2 obtains a higher accuracy in large data sets, but may not in smaller data sets (where the estimation of the covariance may lead to less reliable predictions). Having (even heuristic) guidelines on which algorithm one should use depending on the data set size is important. * While I went quite far down the proofs, I found them hard to read. Appendix C is just a dump of technical Lemmas over 17 (!!) pages with no structure. Come on, this is not really serious... Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
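The reviewer's quick Chernoff computation is easy to reproduce numerically. The sketch below (our own, with hypothetical helper names) integrates $\int\sqrt{p\,q}$ for two equal-variance univariate Gaussians and compares $-\log$ of the integral with the claimed $\frac{1}{8}\|\theta_1-\theta_2\|^2_{\Sigma}$, which in one dimension is $(\mu_1-\mu_2)^2/(8\sigma^2)$:

```python
import math

def npdf(x, mu, sigma):
    # Univariate Gaussian density.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bhattacharyya(mu1, mu2, sigma, lo=-30.0, hi=30.0, n=20000):
    # -log \int sqrt(p q) dx by the trapezoid rule; for equal variances this
    # is the Chernoff information (the optimal exponent is lambda = 1/2).
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.sqrt(npdf(x, mu1, sigma) * npdf(x, mu2, sigma))
    return -math.log(s * h)

mu1, mu2, sigma = 0.0, 2.0, 1.0
numeric = bhattacharyya(mu1, mu2, sigma)
closed_form = (mu2 - mu1) ** 2 / (8 * sigma ** 2)  # (1/8) ||theta_1 - theta_2||_Sigma^2
```

For these values the closed form is 0.5, and the numerical integral agrees to high precision.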
Rebuttal 1: Rebuttal: We thank you for your valuable comments and remarks. > The paper misses some high-level intuition, and several proofs are tough to parse. Thanks for pointing this out. In the final version, we will enhance our discussion to better articulate the derivation and significance of key terms. The "ideal error" is defined as the error remaining after one algorithm iteration with known ground truth $z^*$. It represents the minimum error under ideal conditions. The actual error emerges when iterating from an estimation $z$, with the difference between actual and ideal errors expressed through the terms $F$, $G$, and $H$: $F$ includes terms related to noise $\epsilon_j$, illustrating the impact of measurement noise. $G$ covers estimation errors for cluster centers ($\hat{\theta}_a(z) - \hat{\theta}_a(z^*)$) and covariance matrices ($\hat{\Sigma}(z) - \hat{\Sigma}(z^*)$), showing the effect of parameter estimation inaccuracies. $H$ contains all other terms from additional error sources. > The interpretation of the minimax rate for model 2 is quite complex. Thanks for the suggestion. We will include connections with Chernoff information in the final version to clarify the understanding of the minimax rate. Regarding SNR', although it diverges from the traditional signal-to-noise ratio, we use this term as it extends the classical definition $ \frac{||\theta_1^* - \theta_2^*||^2}{\sigma^2}$ in the context of isotropic Gaussian noise. > The assumption on the loss of the initialization is quite strong We acknowledge that this assumption appears strong; it is primarily driven by technical challenges encountered in our analytical framework. > In large dimensions (say d >> n), the lower bound derived here cannot be attained You are correct that our paper does not cover the large dimensional case. In the final version, we will add more comments and references to make it clearer. > Minor comments Thank you and we will correct them in the final version. 
> Is the assumption SNR / log k \gg 1 needed? Thank you for the question. On one hand, this condition helps simplify the complexity for establishing the lower bound. As noted in the literature you referenced, a more refined analysis might allow us to relax this assumption to simply $\text{SNR} \gg 1$. On the other hand, this condition is essential to establish a matching upper bound. The ideal error analysis does not explicitly require this assumption; however, it becomes necessary when considering the aggregate impact of errors $F$, $G$, and $H$. These components introduce multiple factors of $k$ that appear before the desired exponential term, and the assumption ensures that these factors remain negligible. > Line 42 the sentence: Thank you for the question. To address it, consider a simpler case with two Gaussian distributions: whether the SNR calculated from $N(\theta^*_1, \Sigma^*_1)$ and $N(\theta^*_2, \Sigma^*_2)$ is always larger than that from $N(\theta^*_1, \Sigma^*_1)$ and $N(\theta^*_2, \Sigma^*_1)$. In this case, the answer to your question is not necessarily yes, as demonstrated by the following counterexample. When $\theta^*_1=(0,0)$, $\theta^*_2=(1,0)$, $\Sigma^*_1 = I_2$ and $\Sigma^*_2$ is a diagonal matrix with 2 in its (1,1) entry and 1 in its (2,2) entry, one can verify SNR is not larger. In fact, the effect of replacing $\Sigma^*_2$ with $\Sigma^*_1$ on SNR depends on the shapes of $\Sigma^*_1$ and $\Sigma^*_2$ and the direction of $\theta^*_2 - \theta^*_1$. This is different from the sub-Gaussian setting. The rationale why $N(0, \sigma^2 I_d)$ leads to the smallest SNR among sub-Gaussian distributions with variance proxy $\sigma^2$ is that it is flatter in all directions compared to any other sub-Gaussian distribution. > I am not sure how to obtain the second to last inequality of the inequations starting line 473. The inequality does not result directly from Cauchy-Schwarz. 
Instead, it first formulates a quadratic form and then upper bounds it by the operator norm of the matrix. Let $x$ be a vector. The inequality is essentially about showing $\sum_j |\epsilon_j^Tx|^2$ can be upper bounded by $||x||^2 ||\sum_j \epsilon_j \epsilon_j^T||$. This can be proved by $\sum_j |\epsilon_j^Tx|^2 = \sum_j (x^T\epsilon_j)(\epsilon_j^T x) = \sum_j x^T(\epsilon_j \epsilon_j^T)x = x^T(\sum_j \epsilon_j \epsilon_j^T)x \leq ||x||^2 ||\sum_j \epsilon_j \epsilon_j^T||$. In the final version, we will add this intermediate argument. > I do not understand the last statement Thanks for the feedback. We agree it needs clarification. Our intent was to indicate a potential relaxation of the well-conditioned assumption on the covariance matrices' condition numbers. In the final version we will make it clearer. > Albeit this is already a long and dense paper, I would have enjoyed a section Thank you for your suggestion. In the final version of the manuscript, we will incorporate a new section dedicated to evaluating the performance of the algorithms on real datasets. This section will also include practical guidelines for practitioners on selecting the most suitable algorithm. > While I went quite far down the proof, I found them hard to read. Thanks for the comment. In the final version, we will add more details to make the proof more accessible. For example, in Appendix C, we will give an overview of lemmas to be proved. --- Rebuttal Comment 1.1: Comment: Thank you for your answers! I updated my grade as I am quite sure this paper is a strong accept. In particular, the addition of some numerical experiments on real data sets makes it valuable for the NeurIPS community. With small cleaning, the proofs in the Appendix can be made easier to read (but even if it takes time, I do encourage you to add some details at key points in the proof, as it makes a difference from a reader's perspective). Reading your answer about SNRs, another observation came to my mind. 
If we consider the problem of clustering an instance of an anisotropic GMM, Algo 2 is rate-optimal, while vanilla Lloyd's should be sub-optimal. In contrast, studying the problem in a worst-case scenario (minimax over all sub-Gaussian mixture models), vanilla Lloyd's is rate-optimal (albeit I believe Algo 2 should also be rate-optimal). --- Reply to Comment 1.1.1: Comment: Thank you! Your observation about SNR is correct. We will follow your suggestions to enhance our paper in the final version. Thank you once again for your constructive and detailed feedback.
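The quadratic-form bound spelled out in this rebuttal, $\sum_j |\epsilon_j^T x|^2 = x^T(\sum_j \epsilon_j\epsilon_j^T)x \leq \|x\|^2\,\|\sum_j \epsilon_j\epsilon_j^T\|$, can be checked numerically. A small pure-Python sketch (helper names are ours; the operator norm of the symmetric PSD matrix is computed by power iteration):

```python
import math
import random

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def opnorm(M, iters=200):
    # Largest eigenvalue of a symmetric PSD matrix via power iteration.
    v = [1.0] * len(M)
    for _ in range(iters):
        w = matvec(M, v)
        nrm = math.sqrt(sum(t * t for t in w))
        v = [t / nrm for t in w]
    w = matvec(M, v)
    return sum(a * b for a, b in zip(v, w))  # Rayleigh quotient at convergence

random.seed(1)
d, n = 3, 50
eps = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]  # noise vectors
x = [random.gauss(0, 1) for _ in range(d)]

# Left side: sum_j (eps_j . x)^2, which equals the quadratic form x^T M x.
lhs = sum(sum(e_i * x_i for e_i, x_i in zip(e, x)) ** 2 for e in eps)
M = [[sum(e[r] * e[c] for e in eps) for c in range(d)] for r in range(d)]
rhs = sum(t * t for t in x) * opnorm(M)  # ||x||^2 * ||M||_op
```

The identity `lhs == x^T M x` holds exactly, and the operator-norm bound holds with slack whenever `x` is not aligned with the top eigenvector of `M`.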
Summary: The paper provides a minimax lower bound on the misclustering error rate for clustering under the anisotropic GMM. Then, the paper designs an adjusted Lloyd's algorithm which can attain the minimax lower bound within log(n) iterations. The paper also conducts some experiments to show the performance of the proposed method. Strengths: 1. The paper tackles a more difficult setting of GMM, i.e., the anisotropic GMM. The paper provides the bound of the misclustering error rate in this setting and designs a rate-optimal algorithm, the adjusted Lloyd's algorithm. 2. The paper is theoretically sound and solid. Weaknesses: 1. My major concern is about its experiments. The paper only conducts experiments on synthetic data sets. In the machine learning community, we often care more about performance on real-world data sets. 2. In Algorithm 2, it needs to compute the inverse and determinant of the covariance matrices, which seems time-consuming. Could you give a time complexity analysis and experiments on how much the overhead has increased compared to the vanilla Lloyd's algorithm? Technical Quality: 4 Clarity: 3 Questions for Authors: Please see above. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The paper states its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable comments and remarks. > My major concern is about its experiments. Thank you for your comment. We plan to include a new experiment in the final version of our paper using a real dataset from the Fashion-MNIST collection. This experiment will focus on the clustering of two classes, T-shirt/top and Trouser, each comprising 6000 images. Numerical results show that Algorithm 2 achieves a misclustering error rate of 5.7%, which outperforms the vanilla Lloyd’s algorithm (8%). This addition will provide a more comprehensive evaluation of our methods, highlighting their effectiveness in practical scenarios. > In Algorithm 2, it needs to compute the inverse and determinant of the covariance matrices Thanks for the comment. You are correct in noting that it incurs additional computational overhead due to the necessity of computing the inverse and determinant of covariance matrices. The time complexity of Algorithm 2 is $O(nkd^3T)$, where $n$ is the number of points, $d$ is the dimensionality of each data point, $k$ is the number of clusters, and $T$ is the number of iterations. This contrasts with the vanilla Lloyd's algorithm, which has a lower time complexity of $O(nkdT)$. The increased complexity is primarily due to the matrix operations in $d$ dimensions, which scale as $O(d^3)$ for matrix inversion and determinant computation. To provide a clearer perspective on the performance impact, our experiments show that at a dimensionality of 5, Algorithm 2 requires approximately twice the computation time of the vanilla Lloyd’s algorithm. This ratio increases to approximately 14 when the dimensionality is increased to 100. In the final version of the paper, we will include a detailed time complexity analysis and experimental results to illustrate how the overhead scales with increased dimensionality. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses which addressed my concerns. I've no more comments.
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their detailed and insightful comments. Each reviewer has provided valuable feedback from different perspectives. In recognition of this, we respond to each reviewer's comments individually to ensure that all concerns and suggestions are thoroughly addressed.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Quasi-Bayes meets Vines
Accept (poster)
Summary: Quasi-Bayes, in the sense of building a model as a consistent sequence of predictive distributions, has recently received a lot of interest. The initial construction in [25] used products of bivariate copulas to build their predictives; the authors propose here to extend the construction to vine copulas, a flexible model for multivariate distributions. Strengths: * Quasi-Bayes in the wake of [25] is indeed a hot topic in Bayesian statistics, and the paper is potentially impactful. * The initial copula constructions in [25] felt a bit constrained, and more flexibility would intuitively help. * I overall enjoyed reading the paper, though I have many (hopefully constructive) comments below. Weaknesses: ## Major 1. p1 L1 I would avoid flattering formulations like "[25] initiated a new era of Bayesian computation" and L33 "[25] heralded a revolution in Bayesian methods", "liberating Bayes". While you are free to think it and predict a revolution, such statements about a 2023 paper are necessarily statements of opinion or personal predictions at best, and we should let time tell whether a new era or a revolution happened. You can say, e.g., that [25] proposed a stimulating change of paradigm; that is a fact. More pragmatically, starting a paper with opinion statements puts the reader in a suspicious mood. 2. Abstract: the sentence "But extensions [...] used" is unclear. What assumptions? What kernel? 3. Figure 1 is not particularly useful, I'd say, and the fonts in the decomposition are a bit strange. I know ICLR papers, for example, tend to have a small summary figure at the beginning, but I believe that it doesn't especially help here. The space gained could be used to give e.g. a more formal introduction to vine copulas in the main text. 4. p2 L42 "DPMM kernel structure". Maybe remind the reader on p1, when you first mention the DPMM, of what it is and what you call the kernel. It will not be obvious to many. 5.
The introduction should contain a sentence defining vine copulas, or "vines". It is likely not a standard notion in the NeurIPS community, and the introduction should make it clear what the title of the paper means. Actually, the definition is somehow informally given in the caption of Figure 1, but I had missed it at first reading. I would avoid putting anything else in a caption than the description of what the figure shows. Important details should go to the main text. 6. p2 L56 "under the assumption of well-identified simplified vine copula model". It is unclear at this stage what you mean here. 7. Eqn 2: as far as I understand, going from the first to the second line requires Sklar's formula, at least if I follow the derivation of [25]. If you also use it, this should be mentioned. There is a reference to Appendix A, where copula tools are introduced, such as Sklar's theorem, but I'm not sure the argument is explained. 8. p3 L 82-82, in what sense does $c^{(n)}(\cdot,\cdot)$ converge to $1$? How does it guarantee a convergence for $p^{(n)}$? Why is the copula symmetric? 9. The paragraph L102--L113 is a bit dense and can gain in clarity. In what sense and under what assumptions does the "univariate R-BP model converge to a limiting distribution ... with density..."? What do you mean by "Newton's algorithm"? Also, there are steps to explain if you want to talk about how quasi-Bayes "approximates a posterior density"; so far, we have only talked about prediction. What parameter do you consider? How do you approximate the posterior density? You could refer to the corresponding passages in [25] and explain them in a few words. Or avoid discussing parameter inference if it is not central to your contribution. 10. p4 L128 what do you mean by "the predictives are unconditional marginal densities"? The sentence is puzzling. 11.
Importantly, the authors of [25] restrict their designs of predictive distributions so that they satisfy their martingale condition (Eqn (4.1) in the Arxiv v2 version). They use their Corollary 1 for univariate copulas, and note in Section 4.3 that a multivariate extension of their Corollary 1 is not easy, which prompts them to take inspiration from factorized kernels in DPMMs, for which they can guarantee (4.1). So the obvious question is: how do you guarantee that your vine predictives satisfy Fong el al.'s martingale condition? Or, alternatively, how do you guarantee the existence of the limiting martingale posterior $P^{(\infty)}$? It is possible that I am missing an obvious argument. 12. Lemma 3.2 uses $P^{(\infty)}$, which should be formally defined. Or at least, a reference should be given to the formal definition in [25]. But I think that giving a formal definition would help, since it would entail checking the martingale assumption of [25]; see bullet 11 above. 13. Overall, guarantees like Lemma 3.2 or Theorem 3.3 are guarantees in probability, under the distribution formed by the product of your predictives, am I right? In that case, these guarantees are meaningful only if I believe that the product of predictives is actually a good model for the data generating process. Is this realistic? Even under the classical simplifying assumption on conditional copulas that you are making? 14. p5 L146-173 the introduction to vine copulas is a bit confusing and informal. This section should be more formal, as understanding vine copulas is key to the paper, and I think not standard to many NeurIPS readers. I suggest having a subsection of Section 2 that formally introduces vine copulas, independently of their application to quasi-Bayes, and gives your example (23). Then Section 3 can focus on how you use vine copulas to define your joint predictive distributions.
Also, I would use pseudocode at the end of Section 3, to summarize all the steps in your procedure: hyperparameter tuning, estimation of the pair copulas, etc. I am not 100% confident that I can write this pseudocode from the text only. 15. Relatedly, having pseudocode would allow you to comment on the computational cost of the different steps. p9 L335 you mention "a search over a vast model space during estimation", but this has not been commented on before. 16. Theorem 3.3: what do you mean by "Assuming a correctly identified simplified vine structure for $c^{(\infty)}(\mathbf{u})$?". If I understand well, a more standard phrasing would be to formally introduce your model, say as $P$, and then state that under the distribution $P$, such and such is true. 17. The caption of Table 1 mentions error bars of two standard deviations averaged over 5 runs. I am uncomfortable with the number of runs, which is intuitively too low for a $2\hat{\sigma}$ interval to make sense, be it through a CLT or a Chebyshev argument. ## Minor * p1 L29 the notation $\mathbf{p}^{n}(\mathbf{x})$ has not been defined yet, so I would simply avoid using the notation here. * p2 L52 while you are not the only ones to do so, I don't see the need to boldface "main contributions". * Theorem A.1: I would define what a copula distribution is. * p3 L87 "intractable": following [25], I would even say it is not particularly desirable, as they try to bypass the need to specify a prior-likelihood pair. * In Eqn (8), the densities should appear with their arguments. * Section 5: don't you miss a log in your expressions of the log score? Technical Quality: 3 Clarity: 2 Questions for Authors: Q1. Can you give pseudocode for your method applied to a particular task, including all hyperparameter tuning steps like pair-copula estimation? Q2. Can you then discuss the computational cost of each step, and identify the bottlenecks? Q3.
A convincing answer to items 11-12-13 in my list of major comments would also make me reconsider my mark. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This is fundamental work, and no immediate negative societal consequence is foreseen. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
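On the role of Sklar's theorem raised in item 7 of the review: for a Gaussian pair copula the defining identity can be verified directly, since the copula density times the two standard-normal marginal densities must reproduce the bivariate normal density. A small self-contained check, in our own notation (the copula density is written on the normal-score scale to avoid inverting the normal CDF):

```python
import math

SQ2PI = math.sqrt(2 * math.pi)

def phi(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / SQ2PI

def phi2(z1, z2, rho):
    # Bivariate standard normal density with correlation rho.
    q = (z1 * z1 - 2 * rho * z1 * z2 + z2 * z2) / (1 - rho * rho)
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(1 - rho * rho))

def gauss_copula_density(z1, z2, rho):
    # Gaussian copula density c(u, v) expressed through the normal scores
    # z1 = Phi^{-1}(u), z2 = Phi^{-1}(v).
    e = (2 * rho * z1 * z2 - rho * rho * (z1 * z1 + z2 * z2)) / (2 * (1 - rho * rho))
    return math.exp(e) / math.sqrt(1 - rho * rho)

rho = 0.6
pts = [(-1.3, 0.4), (0.0, 0.0), (2.1, -0.7)]
# Sklar: c(Phi(z1), Phi(z2)) * phi(z1) * phi(z2) == phi2(z1, z2; rho).
errs = [abs(gauss_copula_density(a, b, rho) * phi(a) * phi(b) - phi2(a, b, rho))
        for a, b in pts]
```

The same factorization, applied pair by pair, is what a vine decomposition chains together in higher dimensions.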
Rebuttal 1: Rebuttal: # Reply 1: We did not mean to alienate readers, and will modify the text according to your suggestions. For the camera ready, we will use your phrasing for the abstract and use more neutral expressions in the introduction, citing papers emanating from [25] as evidence of an active area of research instead. Reply 2: Please see point 1 in the main reply. # Reply 3,5,6,14: We appreciate your judgement and removed the figure. We use the space to describe vines briefly in the introduction and thoroughly as a subsection of Section 2, accordingly correcting the relevant passages in the rest of the text. # Reply 4: We thank the reviewer for this suggestion and have implemented it in the text in the style of point 1 of the main reply. # Reply 7: We have added a sentence below the equation explaining the use of Sklar's theorem in the derivation by the definition of a copula. # Reply 8: In Corollary 1 of [25], for univariate predictives, it is shown that the existence of a copula sequence for predictive recursion is equivalent to the sequence of predictive densities satisfying the martingale condition, in turn implying that $p^{(n)}(x)\rightarrow p^{(\infty)}$ almost surely for each $x\in\mathbb{R}$ by the martingale convergence theorem. Due to the recursion of Equation (2), this implies $\lim_{n\rightarrow\infty}c^{(n)}(x,x^{n})=1\, a.s. \forall x$, meaning that the copula does not interfere with the convergence, and eventually the predictive does not change anymore. We have made the almost sure convergence explicit in writing and have justified it by saying "...$\lim_{n\rightarrow\infty}c^{(n)}(x,x^{n})=1\, a.s. \forall x$ as a consequence of the almost sure convergence of $ p^{(n)}$ with $n$ [25]". Lastly, the copula is symmetric as one can freely exchange the components of the copula in equation (2). 
Namely, exchange $f(x|\theta)$ with $f(x^n|\theta)$ for the expression of the joint density in the numerator and $p^{(n-1)}(x^n|x^{1:n-1})$ with $p^{(n-1)}(x|x^{1:n-1})$ for the marginals in the denominator, thus obtaining $c(x_1,x_2):=\frac{p(x_1,x_2)}{p_1(x_1)p_2(x_2)}=\frac{p(x_2,x_1)}{p_2(x_2)p_1(x_1)}=:c(x_2,x_1)$. # Reply 9: We are quoting the result from Theorem 5 in [25], which says that the R-BP distribution $P^{(n)}$ converges in total variation to $P^{\infty}$, which is absolutely continuous with respect to the Lebesgue measure with an associated density $p^{\infty}$. The theorem needs the R-BP density $p^{(n)}$ to be continuous, $\int_K {p^{(n)}}^2(x) d x<\infty$ for $K$ a compact subset of $\mathbb{R}$ with finite Lebesgue measure, the weight sequence to have the form $\alpha_i=\left(2-\frac{1}{i}\right) \frac{1}{i+1}$, and the parameter of the copula to satisfy $\rho<1 / \sqrt{3}$. Then, in [39] the R-BP density is shown to converge to the true density in a Kullback-Leibler sense, under similar assumptions. "Newton's algorithm" refers to the original work that initiated the study of recursive density estimates for the DPMM. The two cited papers [68,69] focus on establishing a recursion for the mixing distribution of a DPMM. However, at every step of the recursion their approach needs to solve an integral with dimensionality scaling with the dimension of the data, thereby making it impractical in higher dimensions. A thorough review of the original and ensuing works has been provided in [59]. We comment on this in Section 4. The "posterior density" was a typo, as we meant the predictive density, which is the object of the R-BP recursion. Parameter inference is indeed not central to our work, so we do not discuss it. We thank the reviewer for commenting on this and have improved the clarity of the paragraph by referring to the corresponding results explicitly and fixing the typo about the posterior.
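The exchangeability argument in Reply 8 can be illustrated numerically: the joint in recursion (2) has the form $\int f(x\mid\theta)f(x^n\mid\theta)\,dG(\theta)$, which is symmetric in its two arguments, so the implied copula density is symmetric as well. A toy check with a two-component discrete mixing distribution (all names and numbers below are ours, not from the paper):

```python
import math

def npdf(x, mu, s):
    # Univariate Gaussian kernel f(x | theta) with location mu and scale s.
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Two-component mixing distribution G over the kernel location.
weights, locs, scale = [0.3, 0.7], [-1.0, 2.0], 1.0

def joint(x, y):
    # p(x, y) = sum_k w_k f(x | theta_k) f(y | theta_k): exchangeable in (x, y).
    return sum(w * npdf(x, m, scale) * npdf(y, m, scale)
               for w, m in zip(weights, locs))

def marginal(x):
    return sum(w * npdf(x, m, scale) for w, m in zip(weights, locs))

def copula_density(x, y):
    # c(x, y) = p(x, y) / (p(x) p(y)); symmetric because p(x, y) = p(y, x).
    return joint(x, y) / (marginal(x) * marginal(y))

diffs = [abs(copula_density(a, b) - copula_density(b, a))
         for a, b in [(0.2, 1.5), (-0.8, 0.3), (1.1, -2.0)]]
```

The copula is symmetric yet non-trivial (it differs from 1), reflecting the dependence induced by mixing over the shared parameter.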
# Reply 10: The goal was to emphasise that the univariate densities are not conditional as in [25,39], but are independent of each other, allowing them to be modelled separately (i.e. in parallel). We appreciate this comment on clarity and have rephrased the sentence into "the predictives are simple univariate densities". # Reply 11: In the interest of space, we include this as a comment below. # Reply 13: You are correct, our proofs are statements about the distribution formed by our model of the joint predictive, being marginal R-BPs multiplied with the vine copula. We do believe this is indeed a useful model with a rich complexity. Firstly, the marginal R-BP minimises the KL to the true data generating process with $n$ [39], and converges in total variation [25], leading to precise pseudo-observations $\{u_i=P^{(n)}(x_i)\}_{i=1}^d$ (Lemma 3.2). Then, we employ copulas to recover the joint predictive, knowing that such a copula exists and is unique (Theorem 3.1). We believe the best copula model currently available in the literature is a simplified vine copula, as it is both computationally feasible and offers large model flexibility with non-parametric pair copulas (see e.g. Czado, 2019, "Analyzing Dependent Data with Vine Copulas"). Further, the convergence rates of the R-BP (Lemma 3.2) and the vine copula (Theorem 3.3) make them appealing density estimators. # Reply 14: Please see the pseudo code in the pdf, and our point 2 in the main reply. # Reply 15: We will include a discussion on the computational cost of our approach (see our reply to the other reviewer and the experiment on computing time). The model space we refer to is the specific decomposition of the vine copula, or equivalently its tree structure. We discuss this in L185-187. Following your comment, we will rephrase the sentence in Section 6 to be coherent with our discussion on computation costs mentioned above. Further replies and the proof are in the comment.
--- Rebuttal 2: Title: Rebuttal by Authors part 2 - proof Comment: # Reply 11: We do not claim to converge to a multivariate $P^\infty(\mathbf{x})$. The $P^\infty(\mathbf{x})$ is only relevant when one wants to do predictive resampling to impute the unobserved data, as done in the martingale posterior for parameter inference. We do not investigate predictive resampling and do not advertise our work as such, nor focus on parameter inference. We need the marginals to converge (lemma 3.2). Then, for our theorem 3.3, this is purely a statement on the density as done in the vine copula literature, not on the limiting $P^\infty(\mathbf{x})$ martingale posterior in multiple dimensions. Similarly, theorem 3.1 adapts Sklar's Theorem to multivariate densities, holding for any $n$. For experiments, we show that our model approximates well all the datasets, doing density estimation, for which a $P^\infty(\mathbf{x})$ is not required. Our paper focuses on modelling predictive densities in an efficient way, which is a worthwhile pursuit as classical density estimators such as KDE are known to scale poorly with dimension. However, we were able to prove the martingale condition, see point 4 in the main reply, and the proof below. (We could not get it to format properly, we express our deepest apologies and hope it is still readable.) 
We write $\mathbf{c}^{(i)}= \mathbf{v}^{(i)}$ for clarity and denote inputs to copulas as vectors $(P_1(x_1),\ldots,P_d(x_d))=[P_i(x_i)]$:
$$
\begin{aligned}
&\int \mathbf{p}^{(n)}(\mathbf{x})\cdot\mathbf{p}^{(n-1)}(\mathbf{x}^n)\,d\mathbf{x}^n\\
=&\ \int \prod_{i=1}^d \left( p^{(n-1)}(x_i)\cdot c^{n}(P^{(n-1)}(x_i),P^{(n-1)}(x_i^n)) \right) \cdot \mathbf{v}^{(n)}([P_i^{(n)}(x_i)]) \cdot \mathbf{p}^{(n-1)}(\mathbf{x}^n)\, d\mathbf{x}^n\\
=&\ \prod_{i=1}^d\left(p^{(n-1)}(x_i)\right)\cdot\mathbf{v}^{(n-1)}([P_i^{(n-1)}(x_i)]) \cdot \int\prod_{i=1}^d\left( p^{(n-1)}(x_i^n)\cdot c^{n}(P^{(n-1)}(x_i),P^{(n-1)}(x_i^n))\right)\cdot\frac{\mathbf{v}^{(n)}([P_i^{(n)}(x_i)])}{\mathbf{v}^{(n-1)}([P_i^{(n-1)}(x_i)])}\cdot\mathbf{v}^{(n-1)}([P_i^{(n-1)}(x_i^n)])\,d\mathbf{x}^n\\
=&\ \mathbf{p}^{(n-1)}(\mathbf{x})\,\frac{1}{\mathbf{v}^{(n-1)}([P_i^{(n-1)}(x_i)])}\int\prod_{i=1}^d\left(c^{n}(P^{(n-1)}(x_i),u_i^n)\right)\cdot\mathbf{v}^{(n)}([P_i^{(n)}(x_i)])\cdot\mathbf{v}^{(n-1)}([u_i^n])\, d\mathbf{u}^n\\
=&\ \mathbf{p}^{(n-1)}(\mathbf{x})\,\frac{1}{\mathbf{v}^{(n-1)}([P_i^{(n-1)}(x_i)])}\int\prod_{i=1}^d\left(c^{n}(P^{(n-1)}(x_i),u_i^n)\right)\cdot\mathbf{v}^{(n-1)}([u_i^n])\cdot\mathbf{v}^{(n)}([(1-\alpha_n)\cdot P_i^{(n-1)}(x_i)+\alpha_n\cdot H(P_i^{(n-1)}(x_i)\mid u_i^n)])\,d\mathbf{u}^n\\
=&\ \mathbf{p}^{(n-1)}(\mathbf{x})\,\frac{\mathbf{v}^{(n)}([P_i^{(n-1)}(x_i)])}{\mathbf{v}^{(n-1)}([P_i^{(n-1)}(x_i)])}=\mathbf{p}^{(n-1)}(\mathbf{x}).
\end{aligned}
$$
The first equality applies Sklar's theorem to $\mathbf{p}^{(n)}(\mathbf{x})$ and uses recursion (4) on the ensuing marginal densities $p_i^{(n)}(x_i)$. The second equality is Sklar on $\mathbf{p}^{(n-1)}(\mathbf{x}^n)$ together with writing out the recursive ratio of copulas for $\mathbf{v}^{(n)}$ (Equation (8) of the main text). The third step is obtained by the substitution $du_i^n=p_i^{(n-1)}(x_i^n)\,dx_i^n$. Lastly, we use equation (4) for the cdf inside the copula.
Then, the result follows by noticing that the bivariate copulas and the copula $\mathbf{v}^{(n-1)}([u_i^n])$ integrate to 1 by copula properties (see e.g. the proof of theorem 6 in [9]), and $[(1-\alpha_n)\cdot P_i^{(n-1)}(x_i) + \alpha_n \cdot H(P_i^{(n-1)}(x_i)\mid u_i^n)]$ integrates into $[P_i^{(n-1)}(x_i)]$ due to it being a martingale marginally for each $i$. Consequently, predictive resampling is also possible with our approach and is as simple as sampling from the fitted copula and marginally updating each univariate R-BP. This can be done in parallel across dimensions, instead of sequentially as in [25,31], which is computationally much more appealing and opens an interesting avenue for future work. Thank you for your perceptive remark, which has guided us to new insights. --- Rebuttal 3: Title: Rebuttal by Authors - part 3 Comment: # Reply 12: We note that $P^\infty(x)$ used in lemma 3.2 is univariate, with the martingale condition being established for the univariate R-BP in [25] already. Following your advice, in the camera ready, we will dedicate a paragraph to the introduction of the martingale condition and the proof that the QB-Vine satisfies it. # Reply 16: We thank you for this comment, and have rewritten the statement with that structure. # Reply 17: Please see our reply to Q2 of reviewer iVqh, explaining our limitation due to [31]. With their code unavailable, we assume the best-case scenario for other methods, believing those intervals are adequate there, but study the intervals of the QB-Vine in more detail, running 15 more runs for each point in Figure 2 (see pdf). # Reply minor points: We incorporated the changes in our manuscript. # Reply Q1: Please find the pseudo-code in Figure 2 of the pdf, including hyperparameter estimation. Some brief clarification: the training data can be permuted in full or partially depending on how much data is available, and this is captured by the variable $M$ giving the number of permutations. We keep $M=10$. 
Other hyperparameters are the number of points $B_1,\ldots,B_l$ used to select the bandwidth of the vine. Our experiments were run with a grid of size $50$ between $2$ and $4$ for the UCI datasets and between $0.5$ and $3$ for the rest. $V=0.8$ gives $5$-fold cross-validation, and $J=100$. $\rho$ can be optimised with gradients and converged very quickly in our experiments, requiring less than five evaluations on average. # Reply Q2: The computational cost of each step comes from the existing method, R-BP or vines [65], with the exception that we halve the time of the R-BP [25,39] due to the optimisation of the Energy score. Due to space constraints, please see our reply to weakness 3 of reviewer iVqh. Finally, we thank the reviewer many times for their great suggestions and critical comments. We truly believe the paper is stronger as a result, and express our greatest thanks for dedicating your time to our work. We hope we addressed your concerns, and remain available to discuss any further points you would wish to raise. --- Rebuttal 4: Comment: Thanks a lot for the detailed clarifications. Trusting the authors to include the proposed changes, I will increase my score to a 5. I am reluctant to increase more, because if I draw a parallel with a journal submission, for the latter I would have liked to review and carefully proofread a revised version of the paper.
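The Energy score referred to in the replies above, computed from $J$ model samples, can be sketched with the standard sample-based estimator $\mathrm{ES}(P,y) \approx \frac{1}{J}\sum_j \lVert x_j - y\rVert - \frac{1}{2J^2}\sum_{j,k} \lVert x_j - x_k\rVert$; the toy distributions below are our own illustration, not the paper's setup:

```python
# Minimal sample-based sketch of the energy score, a strictly proper scoring
# rule: a well-calibrated model (samples centred on the observation) scores
# lower than a badly mis-centred one. J = 100 samples, as in the rebuttal.
import numpy as np

def energy_score(samples, y):
    """samples: (J, d) draws from the model; y: (d,) observation."""
    term1 = np.linalg.norm(samples - y, axis=1).mean()
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.linalg.norm(diffs, axis=-1).mean()
    return term1 - term2

rng = np.random.default_rng(1)
y = np.zeros(3)                                       # toy observation
close = energy_score(rng.normal(0.0, 1.0, (100, 3)), y)
far = energy_score(rng.normal(5.0, 1.0, (100, 3)), y)
print(close, far)  # the model centred on y scores lower (better)
```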
Summary: This paper develops the recently proposed quasi-Bayesian methods by applying vine copula (hence named QB-Vine) to the recursive Bayesian predictive distributions and bypassing the need for expensive posterior integration. The proposed method consists of two parts: independent recursion of marginals by bivariate Gaussian copulas; and estimating the simplified vine copula to capture data dependence, hence relaxing the kernel assumption for DPMM. Error bounds for both the distribution functions and the copulas are provided to justify the proposed QB-Vine. Numerical examples including density estimation and regression/classification are used to showcase the advantage compared with the state-of-the-art alternatives. Strengths: The paper proposes a novel Bayesian method to compute the predictive distribution. The proposed method, QB-Vine, demonstrates numerical advantage over alternatives. Theoretic characterization on the errors is provided. Weaknesses: The dimension of the problems is relatively low (up to 64). Technical Quality: 3 Clarity: 3 Questions for Authors: Should there be $|\mathcal S_{ij}$ after the second conditional distribution in equation (10)? Did you have dimension specific bandwidth $b$ and correlation $\rho$ for all the numerical experiments? Did it increase the overall computation time significantly by having them different for each dimension? Line 223: "convergence" should be "converges" Figure 2: why LPS increases from 500 samples to 900 samples for QB-Vine? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort in reviewing and offer answers to their comments below. # Reply to weaknesses: To showcase the scalability of our approach in higher dimensions, we have expanded the experiments from Table 6 in the paper with a study in $d=400,500,600$ dimensions on Gaussian mixture models (GMMs) with 20 random means (drawn uniformly from $[-100,100]^d$) and non-isotropic covariances drawn from a Wishart distribution, with $n=20000$ observations, and using a $50/50$ split for training and testing. We compare the QB-Vine against the RQ-NSF, taken to be a benchmark for high-dimensional modelling, with the same hyperparameters from the experiments of Table 6. We repeated this study 5 times with different seeds, leading to different GMM models. In the additional uploaded material, we included Figure 3 showing the LPS over the 5 runs for each method and dimension. Further, as the generating distribution is known, we sample from the fitted models and evaluate their samples under the true GMM density (known as the reverse KL divergence, lower is better), reported in Figure 1. Finally, we compute the MMD between samples and observations, reported in the table below. 
| Dimension | Method | GMM 1 | GMM 2 | GMM 3 | GMM 4 | GMM 5 |
|---|---|---|---|---|---|---|
| 400 | QB-Vine | 29.2213 | 29.7847 | 29.2775 | 29.7731 | 29.0477 |
| 400 | RQ-NSF | 29.7247 | 29.2993 | 29.7835 | 29.3515 | 29.8447 |
| 500 | QB-Vine | 32.7893 | 33.1789 | 32.8354 | 33.4401 | 32.5520 |
| 500 | RQ-NSF | 33.2011 | 32.6044 | 33.4355 | 32.7249 | 33.4143 |
| 600 | QB-Vine | 35.7948 | 36.5586 | 35.8328 | 36.6095 | 35.9390 |
| 600 | RQ-NSF | 36.4700 | 35.6756 | 36.4400 | 35.7731 | 36.7090 |

Table: Comparison of the MMD (lower is better) computed on samples from the QB-Vine and RQ-NSF models across different dimensions and GMMs.

# Reply Q1: Yes, this is a consequence of the vine decomposition following Equation (9), meaning the copula density $c_{i,j}$ is itself conditional on $S_{ij}$. The vine copula of equation (10) is still a full vine copula, with no simplifying assumption. 
The simplifying assumption of a simplified vine copula then ignores that last conditioning in favour of a more parsimonious model. Additionally, the simplifying assumption makes the vine model easier to estimate, as then all the pairs $(P_{i|S_{ij}}(x_{i|S_{ij}}),P_{j|S_{ij}}(x_{j|S_{ij}}))$ can be used in the estimation of $c_{ij}$, instead of having to also take into account the values of the conditioning set $S_{ij}$. We take your comment as a potential confusion induced by our sentence on L154, "rewriting any conditional densities as copulas", and have therefore removed it for clarity. # Reply Q2: We use a dimension-dependent correlation $\rho$ (L176), but use the same bandwidth for all KDE pair copulas (L181). We have modified L176 to state this more clearly. # Reply Q3: The computational time is the same as estimating a common correlation parameter across all dimensions, since each recursion still has to be computed to select $\rho$. In fact, choosing a separate $\rho_i$ per dimension is preferred, since it allows the parameters to be estimated in parallel without each dimension needing to interact with the others, for example to pool gradients as would be the case with a common bandwidth. Therefore, with parallelisation, selecting a different $\rho_i$ for each dimension takes the same time as selecting the correlation for a single dimension. # Reply Q4: We thank the reviewer for spotting this typo and have fixed it. # Reply Q5: We believe this was a consequence of not taking enough runs in the experiment. We have now updated our figure to include 15 runs, showing a smooth decrease of the LPS with training size, as one would expect. --- Rebuttal Comment 1.1: Title: I have raised my score Comment: Thanks to the authors for addressing my concerns. I appreciate the added results which make the numerics more convincing.
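For completeness, the MMD reported in the table above can be estimated generically as follows; the RBF kernel and the median-heuristic bandwidth are our assumptions, as the rebuttal does not state which kernel was used:

```python
# Generic sketch of an MMD estimate between two sample sets. The biased
# (V-statistic) form used here is always non-negative; a larger value means
# the two sample sets look more different under the kernel embedding.
import numpy as np

def mmd2(x, y):
    """Biased estimate of squared MMD with an RBF kernel,
    bandwidth from the median heuristic (our assumption)."""
    z = np.vstack([x, y])
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    bw = np.median(d2[d2 > 0])
    k = np.exp(-d2 / bw)
    n = len(x)
    return k[:n, :n].mean() + k[n:, n:].mean() - 2.0 * k[:n, n:].mean()

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, (200, 5))
b = rng.normal(0.0, 1.0, (200, 5))   # same distribution as a
c = rng.normal(1.5, 1.0, (200, 5))   # shifted distribution
print(mmd2(a, b), mmd2(a, c))  # the shifted pair gives the larger value
```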
Summary: The authors propose a novel method for modeling high-dimensional distributions (for density estimation and supervised learning), where they break the estimation task into estimation of univariate marginals and estimation of a multivariate copula. To expedite the univariate estimation tasks they utilize the novel quasi-Bayesian (QB) estimation method, and they use a simplified vine copula to approximate the multivariate copula. They present empirical results that compare their method to others in density estimation and supervised learning. Strengths: - The authors present a well-defined problem that is of potential interest to conference readership, place it within the existing literature, and clearly demarcate their innovations. - They break down the estimation problem into two sub tasks (estimation of the univariate marginals and the copula), and offer tangible contributions for both tasks, and provide convergence results for their estimator. - Empirical results are convincing regarding authors' method's practical usefulness. Weaknesses: Although mostly outside of my expertise, this is a well-written paper overall. However, there are a number of potential changes that can improve the readability and accessibility of the paper. I list these in the section below. Technical Quality: 3 Clarity: 3 Questions for Authors: - L29: The paper makes a confusing start regarding its notation. $n$ is used here without being introduced, on L69 it's referred to as $K$, which leads to further confusion. This is likely the most important variable in the paper, so a proper introduction is required - especially because the recursive nature of the approach in question may not be as obvious for a reader from a different subfield. - L163: Potential implications of using simplified vine copula can be discussed here, with a reference to forthcoming Thm 3.3. - L174: Any particular reasons for the choice of Cauchy as the initial distribution? 
- L188: Please use the additional space afforded after reviews to make this discussion more explicit. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors provide a satisfactory discussion of the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your effort in reviewing our paper and are glad you found it well-written - thank you. We provide answers to your suggestions below. # Reply Q1: We thank the reviewer for pointing out this important oversight and have accordingly removed all $K$ and replaced them with $(n)$. We also removed the notation in the introduction and changed the indexing to be more standard, with dimensions as superscripts and recursive numbering as a subscript. # Reply Q2: We thank the reviewer for their comment. Following other reviewers' suggestions, for clarity's sake, we have moved the introduction of vine copulas to Section 2 as a subsection. We will incorporate your remarks there, detailing the implications of a simplified vine copula. # Reply Q3: The choice of initial predictive has 2 implications. Firstly, it is an implicit statement about your beliefs on observables, in a similar way to a prior. Secondly, it contributes to the efficiency of the recursion in fitting data. We discuss both of these aspects in Appendix E. In particular, a Cauchy distribution is effective at minimising numerical overflow when computing the cdf of data in the tails, making our algorithm more robust to different types of data. Further, taking a heavy tailed distribution such as the Cauchy coincides with the theoretical work on similar recursive density estimators, where it is common to assume that the tails of the true data generating density are not too heavy compared to the tails of the predictive, see e.g. condition (3.2) in [59]. # Reply Q4: We thank you for this comment. We will expand upon this point in the camera ready. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I believe the modifications they propose will improve the paper.
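The numerical-robustness point in Reply Q3 above (why a heavy-tailed Cauchy initial predictive helps) can be illustrated by comparing tail cdf values; this is our own toy comparison, not taken from the paper:

```python
# Far in the tail the Gaussian cdf underflows to exactly 0.0 in double
# precision, while the heavy-tailed Cauchy cdf keeps a usable value, so
# pseudo-observations u = P(x) stay away from the boundary {0, 1}.
from scipy.stats import norm, cauchy

x = -40.0
print(norm.cdf(x))    # 0.0 (underflow)
print(cauchy.cdf(x))  # roughly 0.008, still well-representable
```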
Summary: Inspired by previous quasi-Bayesian (QB) methods, the recursive decomposition of Bayesian predictive posterior distributions, and vine copulas, the work introduces a new adaptation of QB to higher dimensions. The driving idea, introduced in Section 3, consists of adapting the previous decomposition for univariate densities to a higher number of dimensions, which ends up being a product of marginals times a vine copula with a certain design. Such a vine copula models the conditioning among dimensions, since the marginals do not. It works via a decomposition into $d(d-1)/2$ elements that capture the dependency structure. Experiments on UCI datasets with a log-predictive metric show improvement w.r.t. the selected SOTA methods on datasets of small N. Strengths: I would like to add some points of strength that I think are worth mentioning: - The work introduces well enough the problem of higher dimension d in certain Bayesian inference problems and how some SOTA methods and frameworks might struggle when this is larger than a few dozens. Additionally, the reference to previous works and advances is important and well pointed out in my opinion, giving the right credit for the technical parts to each one of the previously published works. - The paper is in general concise, and I could follow the technical details -- so I don't think there are details missing in the math part, despite some lack of analysis in certain directions that I'd have loved to see. - In general, the idea of breaking the conditioning in this way, exploiting both recursive predictives and an additional element that captures correlations in a moderately-scalable way (i.e. vine copulas) is interesting to me and for the paper. Weaknesses: Some comments that I would like to add in terms of weak parts or ideas that I consider kind of a problem (or at least I'm concerned about): - I think the technical derivations, and in general the notation, could be improved. 
From the usual Bayesian perspective and for readers familiar with probabilistic methods, it is not really orthodox, and the subscript-superscript system could be confusing at times. The work sometimes introduces too many things instead of focusing on clarity and highlighting the actual contribution and strong ideas proposed in the manuscript. At least, that's what I get from reading it. - The contribution and novelty are a big concern to me. Honestly, I feel that if Eq. (7) is the big contribution or the point of novelty of the paper, it would not be enough, as it is just an adaptation of the previous derivations in Section 2 / Background to a vine copula. Additionally, the ratio in Eq. (8) is trivial, right? Nothing really to derive there, or am I wrong here? - The experiments are very limited, on small UCI datasets, and only show one empirical result, which is the improvement with the dimensions; however, no discussion about complexity/computational cost, run times, sensitivity to the choice of the vine copula system, etc. is made. In that regard, I fear a bit how scalable the vine copula really is with such a $d(d-1)/2$ mechanism to capture conditioning. Technical Quality: 2 Clarity: 3 Questions for Authors: **Q1** -- What is the scalability of the simplified vine copula in L146-L152? Could the authors add some extra info about the computational cost per iteration n? **Q2** -- Why so much focus on the log-predictive metric? Wasn't there an additional informative metric that could give the reader more insights about the performance compared with other inference methods? Is there a log() operator missing in the equation on L271? **After rebuttal comment:** Dear authors, thanks for your rebuttal and the clarifications made regarding my points of weakness and concerns. Some things are clearer now to me, so I am happy to increase my score to 5 now. 
However, I still feel that some issues raised in my review have not been fixed or are difficult to fix at the moment, and that there are not yet strong reasons to clearly accept the paper (i.e. clarity/readability and empirical results are still limited, and I still have doubts on the novelty side). I don't object to acceptance, but won't be a supporter of it either. Best of luck. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reply to weakness 1 We appreciate your recommendation on this. Following your and other reviewers' comments, we have (1) interchanged subscripts and superscripts between dimension and predictive step, (2) added an introduction of vines as a subsection in Section 2, providing a clearer overview there and subsequently focusing more on our contribution in Section 3, along with minor fixes. # Reply to weakness 2 The big contribution of our paper is the construction of a recursive QB method that is not restricted by assumptions on the DPMM kernel. As explained in point 1 of our main reply, the main drawback of previous methods is their necessary assumptions on the multivariate structure to obtain their multivariate recursion, leading to sub-optimal fits on multivariate data. Our tool to avoid restrictive assumptions on the multivariate structure is Sklar's theorem, which in itself is not new, but its application to bypass kernel assumptions in a DPMM model is novel. Previous work [31] made incremental improvements to the recursive construction, whereas here we take a wholly different approach, overcoming the limitations of the existing literature in the field, and outperforming them. Additional points of novelty are the construction of a recursive estimator that can be computed in parallel instead of sequentially like the auto-regressive decomposition of [25,31], allowing multiple-fold computational savings, which is the primary focus of this line of research for the DPMM. This is further assisted by optimising the energy score, halving our training time compared to existing work. Equation (8) is included there to make the potential for parallelisation and unrestricted dependence structure apparent. Our approach can further benefit from improvements to the marginal recursion (as e.g. 
in [29] or future works, since this is an active area of research as pointed out by reviewer qqFA) as well as improvements to copula models, since both the R-BP and vines can be replaced with upgraded methods. This places our work at a unique intersection of recursive computations and multivariate copula models, connecting these up to now mostly disjoint subfields and opening a flow of cross-fertilisation between them. We thank you for your comment and take this as our cue to better motivate our contribution in the introduction. For the camera ready, we will add more context about the DPMM assumption (following point 1 in the main reply) in the text to clarify our innovation. # Reply to weakness 3: We appreciate your concern about scalability. We have included an example on Gaussian mixture models in high dimensions addressing this (see main reply). Further, to assess sensitivity, we expanded our study on the digits dataset, totalling 15 runs per training size. We report that the change of copula structure does not negatively affect the final predictions, demonstrating robustness to the vine structure. We included a pseudo-code of our proposed algorithm in the pdf page. The computational cost of each step mirrors that of the existing methods, R-BP and vines [65], with the exception that we halve the time of the R-BP [25,39] due to the optimisation of the Energy score. For the R-BP, our decomposition into marginals yields the same cost as the cheapest initial update step of [25] or [31] across all our recursive steps. Through parallelising, we obtain a constant cost with $d$, and scale as in [25,39], meaning $\mathcal{O}(n^2)$ for initialising the recursion (the first bullet point of the algorithm in the pdf), and $\mathcal{O}(n)$ to compute the pdf or cdf at a point. 
For the vine, the number of pair copulas grows quadratically with the dimension, and the number of tree structures is exponential in the dimension, leading to greedy algorithms being used for the tree selection, see [66]. To simulate from a vine, one can efficiently compute the inverse probability transformations (see e.g. Chapter 6 in [16]). Further, one can "truncate" the tree by assuming that pair copulas past a certain degree of conditioning are uniform, reducing the complexity significantly. One can also set a threshold on a simple association measure like Kendall's tau, computable without model fitting, to decide a priori which pair copulas to model and which to omit. The drawbacks of the vine can be mitigated with truncation or thresholding as discussed above, but this inevitably loses some modelling power. The flexibility of the vine in that regard has been studied in the literature; we refer to [16,17,65,66,67,87] among others. Vines are generally accepted as the best copula model for high dimensions, hence our use of them. # Reply Q1: Thank you for this important point. Following your question, we included an algorithmic description of our method (see reply to all reviewers) as well as a discussion on computational times and complexity. # Reply Q2: The LPS is a strictly proper scoring rule, meaning that the unique minimiser of the LPS is the true data-generating mechanism. Further, the LPS is equivalent to the negative log-likelihood as commonly used to evaluate density estimators in the literature. Notably, the LPS is the only metric used in the previous QB literature [25,31]. As we compare with the results of [31], and their code has been removed from their website, we are unfortunately unable to replicate their experiments and evaluate other metrics. 
Following your insight, in our additional high-dimensional experiment on Gaussian Mixture models, we also compare the sampling quality of the model by computing the density of the samples under the model density (known as the reverse KL divergence) as well as the MMD, observing similar performance. There was indeed a missing log() on L271, we thank you for spotting it and have corrected it.
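The Kendall's-tau pre-screening mentioned in the reply on vine scalability above, which decides which pair copulas to model without fitting any copula, can be sketched as follows; the 0.1 cutoff and the toy data are illustrative choices of ours:

```python
# Sketch of Kendall's-tau thresholding for pair-copula selection: compute
# pairwise taus (no copula fitting required) and keep only the pairs whose
# association exceeds an illustrative cutoff.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
n, d = 500, 4
z = rng.normal(size=(n, d))
z[:, 1] += 0.9 * z[:, 0]          # make dimensions 0 and 1 associated

keep = []
for i in range(d):
    for j in range(i + 1, d):
        tau, _ = kendalltau(z[:, i], z[:, j])
        if abs(tau) > 0.1:        # illustrative threshold
            keep.append((i, j))
print(keep)  # pairs whose copulas would actually be modelled
```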
Rebuttal 1: Rebuttal: #### We thank all the reviewers for dedicating their time and effort to the review of our paper. Below, we outline the main points raised by reviewers and how we address them. We also uploaded a pdf file with extra results and a figure showing the algorithm for the QB-Vine. Further, we have posted an individual reply to each reviewer discussing their specific comments. # Introduction to DPMM and main novelty of our approach: Reviewers iVqh and qqFA have asked about the contribution of our work, and clarifications about statements related to previous work. We provide details here and have reformulated relevant parts of the abstract and introduction to clarify the novelty further. Our paper addresses the recursive modelling of the DPMM's predictive density, also studied by previous work [25,31]. The multivariate DPMM is formulated as $$f(\mathbf{x} \mid G)=\int_{\Theta} K(\mathbf{x} \mid \boldsymbol{\theta}) d G(\boldsymbol{\theta}), \text { with } G \sim \operatorname{DP}\left(c, G_0\right)$$ where $K$ is a kernel for the observables $\mathbf{x}\in\mathbb{R}^d$ parameterised by $\theta$, similarly to the kernel in kernel density estimation, and $G$ is called the mixing distribution, upon which a Dirichlet process prior is placed with base measure $G_0$ and concentration parameter $c>0$. The specification of $K$ and $G_0$ are thus important as they regulate the expressivity of the model, but their computation in a Bayesian model requires significant effort. To address this shortcoming, [25] provide a copula-based recursion obtained by assuming $K(\mathbf{x}|\theta)=\prod_{j=1}^d\mathcal{N}(x^j|\theta^j,1)$ and $G_0(\boldsymbol{\theta})=\prod_{j=1}^d\mathcal{N}(\theta^j|0,\tau^{-1})$, $\tau>0$, meaning both the kernel and base measure are assumed independent across dimensions, lacking the expressivity required to capture dependencies in the data. 
In [31], the form of the kernel is relaxed to be autoregressive with $K(\mathbf{x}|\theta)=\prod_{j=1}^d\mathcal{N}(x^j|\theta^j(\mathbf{x}^{1:j-1}),1)$ where the kernel mean $\theta^j:\mathbb{R}^{j-1}\mapsto\mathbb{R}$ depends on previous dimensions, and the base measure of [31] is a product of Gaussian process priors $G_0(\boldsymbol{\theta})=\prod_{j=1}^d GP({\theta}^j|0,\tau^{-1}k)$ for $k:\mathbb{R}^{j-1}\times\mathbb{R}^{j-1}\mapsto\mathbb{R}$ a covariance function. In our work, we posit that the existing forms of $K$ and $G_0$, necessary for their recursions, lack the flexibility to model multivariate data accurately. As a result, we introduce the QB-Vine as a more general approach circumventing assumptions on the recursive form. Instead of deriving an autoregressive recursion through assumptions on $K$ and $G_0$, we show that by applying Sklar's theorem on the joint density, one only needs to specify the recursive form of the marginals, overcoming the limitations of existing work and leading to better performance in experiments. Lastly, our proposed method is inherently parallelizable over dimensions instead of sequential, leading to significant computational gains. # Scalability to high dimensional problems: To showcase the scalability of our approach in higher dimensions, we have expanded the experiments from Table 6 in the paper with a study in $d=400,500,600$ dimensions on Gaussian mixture models (GMMs). We compare the QB-Vine against the RQ-NSF as a benchmark, repeating the experiment 5 times, and showing our competitive performance. In the uploaded pdf page, we show: the LPS (Figure 3), the reverse KL under the true model (Figure 1) and the MMD on samples (Table 1, see the reply to reviewer 2HTp). Further details are in the reply to weakness 1 of reviewer 2HTp. # Computational Complexity: We added pseudo-code to the pdf for the QB-Vine. 
$M$ is the number of permutations, $B_1,...,B_l$ the bandwidth grid, $V$ the cross validation percentage and $J$ the number of samples to compute the Energy score in the vine. Please see our reply to Q1 of reviewer qqFA for more details. We further ran 10 additional runs on the digits dataset in Figure 4 of the pdf, to assess sensitivity. We discuss computational costs in detail in the reply to weakness 3 of reviewer iVqh. # Checking the Martingale condition: Reviewer qqFA asked to prove the martingale condition (4.1) in [25] for the QB-Vine, please see the proof in our reply 11 to them. The only assumption is the ratio of two consecutive vines must be 1, i.e., the dependence structure is constant between predictive steps. We note that these derivations hold for any marginal recursive construction of the form (2) and any copula density used for $\mathbf{c}^{(i)}$, but we focus here on the QB-Vine. We interpret the condition of having the same dependence structure between steps as natural when data is supposed to come from the same data-generating process, which is indeed the circumstance in which we apply the QB-Vine. Further, given $n$ observations, the best guess of the multivariate copula is given by fitting it at the last iteration, which is our approach in experiments. Pdf: /pdf/80cb9e668bbba175cd6094a6a5a980720c3c1874.pdf
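To make the marginal recursion discussed above concrete, here is a grid-based sketch of the univariate copula update of [25] that each QB-Vine marginal follows, using the Cauchy initialisation and the weight sequence $\alpha_i=(2-1/i)/(i+1)$ quoted in the replies; the value of $\rho$, the grid, and the toy observations are ours, and this is an illustration rather than the released implementation:

```python
# One marginal R-BP pass on a grid: p_n = p_{n-1} * [(1-a_n) + a_n * c(P_{n-1}(x), P_{n-1}(x_n))]
# and P_n = (1-a_n) * P_{n-1} + a_n * H(P_{n-1}(x) | P_{n-1}(x_n)), with a
# bivariate Gaussian copula c and its conditional cdf H.
import numpy as np
from scipy.stats import norm, cauchy

rho = 0.5                                        # illustrative copula parameter
grid = np.linspace(-15.0, 15.0, 4001)
pdf, cdf = cauchy.pdf(grid), cauchy.cdf(grid)    # Cauchy initial predictive

def copula_pdf(u, v):
    """Bivariate Gaussian copula density c_rho(u, v)."""
    a, b = norm.ppf(u), norm.ppf(v)
    det = 1.0 - rho ** 2
    return np.exp(-(rho ** 2 * (a ** 2 + b ** 2) - 2.0 * rho * a * b)
                  / (2.0 * det)) / np.sqrt(det)

def h(u, v):
    """Conditional cdf H_rho(u | v) of the Gaussian copula."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1.0 - rho ** 2))

for i, x_obs in enumerate([0.3, -1.2, 0.7], start=1):   # toy observations
    alpha = (2.0 - 1.0 / i) / (i + 1.0)                 # weight sequence
    u_obs = np.interp(x_obs, grid, cdf)                 # previous cdf at x_obs
    pdf = (1.0 - alpha) * pdf + alpha * copula_pdf(cdf, u_obs) * pdf
    cdf = (1.0 - alpha) * cdf + alpha * h(cdf, u_obs)

mass = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))
print(mass)  # close to 1 on this truncated grid: the update stays a density
```

Each dimension runs this loop independently, which is what makes the marginal stage embarrassingly parallel.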
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight
Accept (poster)
Summary: The paper introduces TALoS, a novel test-time adaptation method for Semantic Scene Completion (SSC) that leverages information from driving environments. The key idea is to use observations and predictions from different moments as self-supervision signals to adapt the pretrained model. Specifically, LiDAR point clouds collected at various timestamps are used to create binary occupancy labels, and semantic predictions from these timestamps are fused based on their confidence levels. Additionally, a dual optimization scheme is proposed to exploit observations from both past and future frames. Experimental results on the SemanticKITTI dataset validate the effectiveness of the proposed method. Strengths: 1. The key idea of exploring the LiDAR observations and model predictions across frames during inference is novel and interesting. The Line of Sight binary occupancy and confidence-aware prediction fusion are effective and interesting. 2. The proposed dual optimization scheme is also useful and interesting. Such a strategy can make sure to exploit both historical and future frames for test-time adaptation. 3. The method shows improvements in both geometric and semantic segmentation tasks on the SemanticKITTI dataset, demonstrating its effectiveness. The paper includes extensive experiments, ablation studies, and cross-dataset evaluations, providing a thorough analysis of TALoS's performance and robustness. Weaknesses: 1. My major concern lies in the evaluation. SemanticKITTI is a small-scale dataset and does not adequately address dynamic objects in the creation of ground truth SSC labels. The authors are suggested to conduct experiments on larger and more diverse datasets, such as KITTI-360, nuScenes, and Waymo, to better validate their approach. The authors are suggested to consider using SSCBench [1], which employs the same format as SemanticKITTI. Cross-dataset evaluation on SSCBench would significantly strengthen the paper. 
Furthermore, Occ3D [2] could also be utilized for evaluation, though it uses a different format than SemanticKITTI. 2. My second concern pertains to the computational overhead introduced by the proposed method. The test-time adaptation involves multiple modules, such as creating binary occupancy labels and training two different models at each timestamp, which can significantly increase computational demands. I recommend that the authors conduct a detailed computational analysis, including an evaluation of the computation-performance tradeoff and the frames per second (FPS) achieved. 3. The literature survey is notably incomplete. Several important SSC works are missing from the related work section, including both datasets and methods [1-7]. A more comprehensive review of the existing literature is necessary to provide proper context and background for your study. Meanwhile, the idea of Line of Sight has been studied in prior works, such as 3d object detection [8] and point cloud registration [9,10]. The authors are suggested to add the missing references. 4. The current approach is tailored to LiDAR data, which may limit its applicability to other sensor modalities or combined sensor data. The authors are suggested to add more details about how to use their method in the camera-based SSC methods. [1] Li, Y., Li, S., Liu, X., Gong, M., Li, K., Chen, N., Wang, Z., Li, Z., Jiang, T., Yu, F. and Wang, Y., 2023. Sscbench: A large-scale 3d semantic scene completion benchmark for autonomous driving. arXiv preprint arXiv:2306.09001. [2] Tian, X., Jiang, T., Yun, L., Mao, Y., Yang, H., Wang, Y., Wang, Y. and Zhao, H., 2024. Occ3d: A large-scale 3d occupancy prediction benchmark for autonomous driving. Advances in Neural Information Processing Systems, 36. [3] Cao, A.Q. and De Charette, R., 2022. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3991-4001). 
[4] Shi, Y., Li, J., Jiang, K., Wang, K., Wang, Y., Yang, M. and Yang, D., 2024, March. PanoSSC: Exploring Monocular Panoptic 3D Scene Reconstruction for Autonomous Driving. In 2024 International Conference on 3D Vision (3DV) (pp. 1219-1228). IEEE. [5] Li, Y., Yu, Z., Choy, C., Xiao, C., Alvarez, J.M., Fidler, S., Feng, C. and Anandkumar, A., 2023. Voxformer: Sparse voxel transformer for camera-based 3d semantic scene completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9087-9098). [6] Zhang, Y., Zhu, Z. and Du, D., 2023. Occformer: Dual-path transformer for vision-based 3d semantic occupancy prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 9433-9443). [7] Huang, Y., Zheng, W., Zhang, Y., Zhou, J. and Lu, J., 2023. Tri-perspective view for vision-based 3d semantic occupancy prediction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9223-9232). [8] Hu, P., Ziglar, J., Held, D. and Ramanan, D., 2020. What you see is what you get: Exploiting visibility for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11001-11009). [9] Ding, L. and Feng, C., 2019. DeepMapping: Unsupervised map estimation from multiple point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8650-8659). [10] Chen, C., Liu, X., Li, Y., Ding, L. and Feng, C., 2023. Deepmapping2: Self-supervised large-scale lidar map optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9306-9316). Technical Quality: 3 Clarity: 3 Questions for Authors: Can the authors discuss potential extensions of their method to other sensor modalities, such as camera data, and how their approach could be adapted to handle combined sensor inputs? 
Can the authors provide additional details and clarification on the implementation and functioning of the dual optimization scheme (such as a pseudo algorithm)? Can the authors discuss the scalability and practicality of their approach in more detail, including potential challenges and solutions for real-world implementation? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed several limitations in their work, but there are areas where additional detail and clarity would be beneficial. Here are some suggestions for improvement: ### 1. Computational Overhead **Current Limitation:** The method involves significant computational overhead due to the creation of binary occupancy labels and the training of two models at each timestamp. **Suggestion:** Provide a detailed analysis of the computational demands, including the computation-performance tradeoff and the impact on frames per second (FPS). This would help to understand the practical feasibility of the approach. ### 2. Dataset and Dynamic Objects **Current Limitation:** The evaluation primarily relies on the SemanticKITTI dataset, which is small-scale and does not adequately handle dynamic objects. **Suggestion:** Extending the evaluation to larger and more diverse datasets, such as KITTI-360, nuScenes, and Waymo in SSCBench, would provide a more comprehensive validation of the method. ### 3. Applicability to Different Sensor Modalities **Current Limitation:** The current approach is tailored to LiDAR data, which may limit its applicability to other sensor modalities or combined sensor data. **Suggestion:** Discuss potential extensions to other sensor modalities, such as camera or radar data, and how the approach could be adapted to handle combined sensor inputs. This would enhance the method's applicability and robustness in real-world scenarios. ### 4. Literature Survey **Current Limitation:** The literature survey is incomplete, missing several important SSC works. 
**Suggestion:** Expand the related work section to include a more comprehensive review of existing SSC literature, including both datasets and methods [1-7]. This would provide better context and background for the study. ### 5. Potential Negative Societal Impact **Current Limitation:** The potential negative societal impacts of the work have not been explicitly addressed. **Suggestion:** Discuss any potential negative societal impacts, such as privacy concerns or misuse of the technology, and suggest mitigation strategies. Being upfront about these aspects would align with the NeurIPS guidelines on broader societal impacts and contribute to responsible AI research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### 1. Evaluation on the other datasets | SSC method | Train dataset | Validation dataset | Baseline | TALoS | |--------------|--------------|---------------|:----------:|:-------:| | SSCNet | KITTI-360 | KITTI-360 | 17.0 | **17.4** | | SCPNet | SemanticKITTI | KITTI-360 | 14.0 | **14.7** | We deeply appreciate the reviewer's constructive comment and thank them for informing us about the excellent benchmark. Unfortunately, we found that the official repository of SSCBench does not provide evaluation GT for the nuScenes and Waymo datasets. Therefore, we conducted an experiment on the KITTI-360 dataset instead. Specifically, we used the SSCNet weights pre-trained on the KITTI-360 training set as our baseline. We adapted the baseline model to the KITTI-360 test sequence using TALoS. The second row of the table above shows that TALoS provides a meaningful mIoU gain even on the KITTI-360 dataset. We also performed an additional cross-dataset evaluation from SemanticKITTI to KITTI-360. In this experiment, we used the SCPNet pre-trained on SemanticKITTI as our baseline. Then, we applied TALoS to adapt the baseline model to the KITTI-360 test sequence. The third row of the table above demonstrates a reasonable performance gain in mIoU, supporting the usability of the proposed TALoS. - - - ### 2. Computational overhead | Method | Time per step (s) | Overhead (%) | mIoU (%) | |--------------|:----------:|:-------:|:-------:| | Baseline | 2.26 | - | 37.56 | | TENT | 4.09 | +81 | 37.92 | | CoTTA | 6.14 | +171 | 36.55 | | TALoS | 6.65 | +194 | **39.29** | The table above provides the required time per step of the baseline (SCPNet), conventional TTA methods, and TALoS. Given the significant performance gains of TALoS, we believe this additional burden can be justified. - - - ### 3. Incomplete literature survey We apologize for the incomplete literature survey. 
We promise to revise the related works section to include the mentioned SSC studies and various methods based on the LoS concept. Thank you for the constructive feedback. - - - ### 4. Extension to camera-based setting Since this paper's primary focus is LiDAR-based SSC, we did not extensively explore camera-based settings. However, we agree that extending our framework to camera-based or multi-modal settings would be both interesting and valuable. In response, we devised and evaluated the proposed TALoS under a camera-only setting. Specifically, we employed MonoScene, an SSC method using a monocular camera, as our baseline. Unfortunately, our main idea of using Line of Sight (LoS) cannot be extended to the camera-only setting, as it assumes LiDAR input. Therefore, for this camera-based TALoS, we exclusively used our entropy-based pseudo-labeling and a dual optimization scheme. The experiments were conducted on the SemanticKITTI validation set, with the baseline MonoScene model pre-trained on the training set. Interestingly, TALoS still enhances the mIoU of the baseline model from 11.3% to **11.8%**, even in the camera-only setting. This further supports the practicality of TALoS as a viable option for the SSC field. Although we could not conduct experiments under a multi-modal setting due to limited time, combining our original LiDAR version with the camera-based attempt could be even more promising. Thank you for suggesting constructive research directions. - - - ### 5. Additional details about the dual optimization scheme We provide an algorithm table of the proposed dual optimization scheme in the form of the pdf file attached to the global rebuttal above. For better understanding, we set the number of iterations per sample as 1 in the algorithm table. - - - ### 6. The various limitations * Computational overhead: Please refer to the 2nd response. 
* Dataset and dynamic objects: Dynamic objects present a challenge for the proposed TALoS, as simply transforming such objects from one timestep to another cannot ensure their actual position. Therefore, in TALoS, we exclude these unreliable points while sampling the line of sight (refer to lines 119-121). We believe that the integration of the trajectory prediction model could be an interesting research direction. Besides, please refer to the 1st response for experiments on the KITTI-360 dataset. * Applicability to different sensor modalities: Please refer to the 4th response. * Literature survey: Please refer to the 3rd response. * Potential negative societal impact: One possible issue is that TALoS will sense surrounding data and use it to enhance the model in an online manner. As private data might be gathered and exploited through this process without agreement, it could present a potential problem in practice. This may be mitigated by private filtering. While this issue is beyond our current research interest, we will mention it as a societal impact in the final version if necessary. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' feedback. The camera-based results and the additional dataset should be added to the main paper. I am happy to raise my rating to 6. --- Reply to Comment 1.1.1: Comment: We deeply appreciate the reviewer's positive feedback. We promise to add the experiments conducted during the rebuttal to our revision.
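For readers unfamiliar with the Line-of-Sight supervision discussed in this thread, a minimal sketch of binary occupancy labelling from LiDAR rays is given below. All names are hypothetical and this is not the authors' implementation; it only illustrates the core idea of marking ray-traversed voxels as free and hit voxels as occupied (with dynamic-object points assumed to be excluded upstream, per the rebuttal).

```python
import numpy as np

def los_occupancy_labels(origin, points, grid_shape, voxel_size, n_samples=50):
    """Binary occupancy pseudo-labels from lines of sight (hypothetical sketch).

    Voxels traversed by a ray from `origin` to each LiDAR hit are labelled
    free (0), the voxel containing the hit is labelled occupied (1), and
    everything else stays unknown (-1, ignored by any downstream loss).
    """
    labels = -np.ones(grid_shape, dtype=np.int8)  # -1 = unknown
    bounds = np.array(grid_shape)
    for p in points:
        # Sample positions along the ray, excluding the endpoint itself.
        ts = np.linspace(0.0, 1.0, n_samples, endpoint=False)
        ray = origin[None, :] + ts[:, None] * (p - origin)[None, :]
        idx = np.floor(ray / voxel_size).astype(int)
        valid = np.all((idx >= 0) & (idx < bounds), axis=1)
        labels[tuple(idx[valid].T)] = 0           # free space along the ray
        hit = np.floor(p / voxel_size).astype(int)
        if np.all((hit >= 0) & (hit < bounds)):
            labels[tuple(hit)] = 1                # occupied at the hit point
    return labels
```

A real implementation would additionally protect occupied voxels from being relabelled free by later rays; the sketch keeps the last write for simplicity.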
Summary: This work proposes a test-time adaptation method for 3D semantic scene completion from point clouds. Using the point cloud and predictions of past and future frames in a sequence, the authors propose a self-supervision strategy to adjust the model weights during test time. They propose a completion loss based on the line of sight of each measured point and a semantic loss using pseudo labels from predictions at different points in time. Furthermore, they propose to use two versions of the model: one that updates the model based on the information at one moment in time and another that gets updated continuously throughout the sequence of test images. Strengths: The authors propose a novel test-time adaptation (TTA) framework for semantic scene completion based on temporal line-of-sight consistency. The presentation of the method is clear. Weaknesses: The instant adaptation of F_M(oment) seems like an indirect fusion of previous point clouds. The method should then also be compared to using the fused point cloud as input (as for the SCPNet teacher) and to fusing the predictions at i and j directly. Related to this, it is also unclear what distance is used between i and j, how this affects the method, or whether using multiple frames is even better. The fusion of p_m and p_G is not explained in detail; while the intuition of using only the static parts from p_G makes sense, the actual implementation remains unclear. It is only mentioned that the last "few" layers of the network are updated at test time. For reproducibility it would be good to provide the details. The authors write "Considering that SCPNet also involves knowledge distillation using future frames during training, this performance gain confirms that our TTA-based method is more effective for leveraging future information". Could you elaborate on this argument? 
From my point of view, SCPNet uses samples from the training data for the teacher model, whereas here the samples could actually overlap the same spatial area and provide additional measurements. Technical Quality: 2 Clarity: 3 Questions for Authors: The main question is how much of the TTA is actually related to temporal fusion and how much is statistical fine-tuning of the model. It seems that for F_M it can only be fusion, as the model is optimized for each frame from the original model, but for F_G it would be interesting to use a different sequence instead of playing back the same sequence, to see if it also helps statistically. As it seems to be more of a fusion method, it would be good to compare against other temporal/multi-view scene completion baselines, e.g., simply fusing measurements or predictions. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: I think the limitations were addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
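The reviewer's distinction between per-frame fusion (F_M) and statistical fine-tuning (F_G) can be made concrete with a toy sketch of one dual test-time-adaptation step. This is a hypothetical numpy illustration, not the paper's implementation: the "moment" weights are cloned from the "global" weights and adapted aggressively for the current frame only, while the global weights accumulate a smaller update across the sequence.

```python
import numpy as np

def dual_tta_step(w_global, frame, grad_fn, lr=0.1, global_scale=0.1):
    """One dual test-time-adaptation step (toy sketch, hypothetical names).

    w_global : continually adapted weights, updated in place each frame.
    grad_fn(w, frame) : stands in for the gradient of the self-supervised
        TTA objectives (e.g. occupancy and pseudo-label losses).
    Returns the per-frame 'moment' weights used for this frame's prediction.
    """
    # Instant branch: throw-away copy, adapted only to the current frame.
    w_moment = w_global.copy()
    w_moment -= lr * grad_fn(w_moment, frame)
    # Gradual branch: a smaller update accumulated over the whole sequence.
    w_global -= (lr * global_scale) * grad_fn(w_global, frame)
    return w_moment
```

With `grad_fn` the gradient of 0.5 * ||w - frame||^2, the moment weights move toward the current frame ten times faster than the global ones, mirroring the instant-vs-gradual split.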
Rebuttal 1: Rebuttal: ### 1. Comparison with temporal fusion We deeply appreciate the reviewer's valuable feedback. As a response, we would like to clarify that TALoS indeed performs "something more" than a naive temporal fusion. Besides, we also experimentally verified that this statement is still true even when using the $F^M$ only (without statistical adaptation of $F^G$). For this, we devised temporal fusion methods at the level of either measurements (raw LiDAR point cloud) or predictions (SSC result of the baseline model). Please refer to the global rebuttal above, which discusses temporal fusion. - - - ### 2. Distance between $i$ and $j$ | Frame difference | mIoU (%) | cIoU (%)| |:--------------:|:---------------:|:----------:| |0| 38.66| 54.81| |1| **39.29**| **56.09**| |2| 39.21| 55.94| |3| 39.14| 55.92| |4| 39.09| 55.91| In the main paper, we set the distance between $i$ and $j$ to 1 by default. The experimental results regarding this distance are provided in Section C.3 of the supp. material (refer to pages 12-13 of the main paper). For convenience, we have replicated Table C.5 here as the table above. Although not mentioned in the paper, we also tested using multiple frames. However, considering the increased computational burdens, this approach did not yield meaningful gains. - - - ### 3. Missing details for implementation We apologize for missing some details and promise to add them to the final version of our paper. **Details about categories for fusion**: We make $F^G$ be responsible for static and large-scale categories, which are relatively stable to learn from the sequence gradually. On the other hand, more confusing movable or small objects are handled by $F^M$ via instant adaptation. For this, we consider the SemanticKITTI's official category hierarchy. 
Specifically: * $F^G$: ground (road, sidewalk, parking, and other-ground), structure (building), and nature (vegetation, trunk, and terrain) * $F^M$: vehicle (car, truck, bicycle, motorcycle, and other-vehicle), human (person, bicyclist, and motorcyclist), and object (fence, pole, and traffic sign) **Details about architecture**: In the main paper, we use the pre-trained SCPNet as our baseline. It comprises 3 modules: 1) a stack of MLPs for extracting point features, 2) a completion sub-network, and 3) a segmentation sub-network. In TALoS, we update 3) only while simply freezing 1) and 2). We have also tested updating the entire model, but the gains were trivial, considering the increased computational burdens. - - - ### 4. Statement about SCPNet and future information As pointed out by the reviewer, SCPNet uses future information to train the teacher model and distills this knowledge to the student model during the training phase. In contrast, the goal of TALoS is to leverage information from different timesteps during test time. Since we verified that TALoS achieves meaningful gains over the SCPNet baseline, we intended to convey that our approach provides a more explicit and advantageous method of utilizing future information than SCPNet's implicit approach, particularly for test-time adaptation. Nevertheless, we acknowledge that the statement in lines 237-239 is confusing and not adequately justified. Thank you for the feedback, and we will revise this statement in the final version. - - - ### 5. Playback on the different sequence We find that this is an insightful question for understanding TALoS. We deeply appreciate this feedback and the opportunity to explain this in more detail. As pointed out, we intend for $F^M$ to be responsible for temporal fusion while employing $F^G$ to perform statistical adaptation to the sequence. Here, as we explained in the first response, $F^M$ achieves more than naive temporal fusion. 
The question now is, does $F^G$ indeed perform a continual adaptation of the scene over the sequence? We provided the playback experiment as an indirect answer to this. However, we acknowledge that performing playback on different sequences instead of the same sequence could be an interesting experiment. To this end, we thoroughly compared the three different settings below. Here, sequence A is the validation sequence (08) of the SemanticKITTI dataset, and sequence B is the test sequence (19). 1. Baseline (SCPNet, just inference on sequence A) 2. TALoS (first round on sequence A with $F^G$ only; refer to Exp. D in Table 1 of the main paper) 3. Playback TALoS (first round on sequence B and playback inference on sequence A) For setting (3), we obtained the adapted $F^G$ after the first round, as in the playback experiment in the main paper. Then, during the playback, we performed inference while fixing $F^G$. Note that we do not use $F^M$ in these experiments. We provide the results as a **cumulative** mIoU plot of each setting in the attached PDF, where the x-axis is the frame number of sequence A, from 0 to 814. The results suggest two interesting insights: * The gain from the gradual adaptation of $F^G$ remains even on a different sequence to some extent. For example, at the early stage of the playback on sequence A, setting (3), which adapted to sequence B, significantly outperforms both (1) and (2). This indicates that setting (3)'s first round on sequence B indeed makes $F^G$ learn the statistics of the general driving scene, and this knowledge is still useful for performing SSC on the different sequence A. Consequently, setting (3)'s performance is meaningfully better than the baseline, even without any adaptation to sequence A at all. * Despite the generalizability of the knowledge learned by $F^G$, ultimately, setting (2) outperforms setting (3). This implies that sequence-wise variance exists. 
Although sequences A and B share some common distributions about the driving scene, their statistics cannot be completely identical. Therefore, from the perspective of sequence A, even if $F^G$ adapted to sequence B is better than the baseline, it cannot exceed the $F^G$ adapted to sequence A itself. --- Rebuttal Comment 1.1: Title: Rebuttal Answer Comment: Thank you for providing the additional information and evaluations. I think the comparison with the temporal fusion baselines and the evaluation on two different sequences provide interesting insights. The results showing that the simple temporal fusion actually hurts the performance are somehow unexpected. What do you think could be the reason for this? Might it be related to the use of only the previous frame which maybe does not cover so much additional volume, but rather creates a domain gap due to the different point cloud density? Because intuitively in the extremum when combining all measurements at least for the static parts (assuming good sensor pose estimation) it should also improve the semantic metrics, or not? However, the insight that the TTA temporal fusion outperforms the other two baselines is actually very interesting. What do you think could be the underlying mechanisms enabling this? Thank you. --- Reply to Comment 1.1.1: Comment: |Method|Static and large-scale|Movable or small-scale| |------|:---:|:---:| |Baseline|44.98|32.16| |Late fusion (single previous frame)|45.46|31.05| |Late fusion (historical)|44.66|30.21| |Exp. C|46.01|32.85| |TALoS|**47.81**|**33.08**| Thank you for your prompt reply and for allowing us to discuss your concerns in detail. Firstly, regarding early fusion, we agree with the reviewer’s insight. We observe that early fusion results in a denser fused point cloud, which leads to a domain gap from the perspective of the pre-trained baseline model. 
However, we would like to emphasize that late fusion, while seemingly promising, is actually more vulnerable to errors than it appears. As the reviewer pointed out, the performance of static and large-scale categories increases (at least in late fusion with a single previous frame), as shown in the above table. However, these gains are still lower than those achieved by TALoS. **Actually, during our rebuttal process, we realized that further improving performance using late fusion is extremely challenging!** For example, there is a trade-off between accuracy and the number of newly fused voxels during fusion. Since the SSC results from another timestep include mispredictions, we have applied confidence-based thresholding (refer to Section 3.3) to decide whether to use the predicted voxels for fusion. Here, lowering the threshold increases the number of newly fused voxels but decreases their accuracy, significantly hurting performance. Conversely, raising the threshold results in highly accurate but fewer newly fused voxels, making the gain less meaningful. Finding an optimal balance in this trade-off is not straightforward. Moreover, even assuming an extremely accurate sensor pose estimation (which is already a pretty strong assumption) cannot ensure error-free transformation between time steps. Standard SSC model predictions are in quantized voxel format, which has a low resolution (voxel side length is 0.2m). Since late fusion occurs at this voxel level, errors due to transformation are almost inevitable and worsen with error accumulation when combining multiple timestep results. While this may seem like an implementation issue, the SSC community currently lacks an online remedy for it, as far as we know. Our experimental results also support this. As the reviewer suggested, we performed historical late fusion with multiple previous frames, not just a single frame. 
Surprisingly, this resulted in even lower performance, including for static and large-scale categories, as shown in the table above (refer to the 4th row). - - - Finally, we would like to explain why our proposed TALoS surpasses fusion-based methods. In our view, all the issues with temporal fusion stem from the "direct fusion" of predictions from different timesteps into the final prediction. This approach is inherently vulnerable to mispredictions and has no chance to learn something new from the fused results. In contrast, TALoS leverages predictions from different timesteps as "supervision" to enhance the model. One primary advantage of our TTA-based method over naive temporal fusion is its robustness to errors, thanks to the model's additional learning potential. For example, deep learning models are known to reject noise in supervision to some extent, allowing them to learn patterns robustly [1,2]. Within this context, TALoS can be a more promising way to utilize previous information than temporal fusion. Furthermore, we believe that the advantage of a gradually adapted model ($F^G$) is unique and cannot be achieved by naive temporal fusion. We deeply appreciate the reviewer's consideration of this long, detailed response and will include this discussion in our final version. We sincerely hope that our explanation clarifies the contributions of our paper. - - - [1] Early-Learning Regularization Prevents Memorization of Noisy Labels, NeurIPS 2020 [2] Deep Learning is Robust to Massive Label Noise, Arxiv 2017 --- Rebuttal 2: Title: Author Comment Response Comment: Thank you for the detailed and extensive reply and the additional evaluations. From my perspective, the additional observations and discussions given by the authors are very valuable and worth adding to the paper or supplementary material. 
While I still think that the counter-intuitive decrease of performance with more temporal information might be related to the SSC setup and a potential domain gap, this question is still not answered sufficiently. However, this is also not the core of the paper, and the evaluation has clearly shown an interesting property of the proposed TTA approach that surpasses other TTA approaches and, in my opinion, is of interest to the community. As my other concerns have been addressed as well, I am considering raising my rating to a weak accept, if there are no further concerns raised by the other reviewers. --- Rebuttal Comment 2.1: Comment: We sincerely appreciate the reviewer's warm and positive comments. We promise to add our observations and discussions to the final version.
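The quantization issue debated in this thread (warping voxelized predictions between timesteps and re-quantizing them) can be illustrated with a toy late-fusion sketch. All names are hypothetical and this is not the authors' or any baseline's code; the confidence threshold `tau` plays the role of the accuracy-vs-coverage trade-off discussed in the rebuttal.

```python
import numpy as np

def late_fuse(pred_t, pred_prev, conf_prev, T_prev_to_t, voxel_size, tau=0.9):
    """Toy voxel-level late fusion of two SSC label grids (hypothetical).

    Occupied voxels of the previous prediction are warped into the current
    frame with the 4x4 relative pose T_prev_to_t and pasted only where the
    current grid is empty (label 0) and the source confidence exceeds tau.
    The floor() re-quantization of warped voxel centres is where the
    discretization error discussed above enters.
    """
    fused = pred_t.copy()
    occ = np.argwhere(pred_prev > 0)                     # occupied voxel indices
    centers = (occ + 0.5) * voxel_size                   # voxel centres in metres
    homo = np.c_[centers, np.ones(len(centers))]
    warped = (T_prev_to_t @ homo.T).T[:, :3]
    new_idx = np.floor(warped / voxel_size).astype(int)  # re-quantize
    for src, dst in zip(occ, new_idx):
        if np.all(dst >= 0) and np.all(dst < np.array(fused.shape)):
            if fused[tuple(dst)] == 0 and conf_prev[tuple(src)] > tau:
                fused[tuple(dst)] = pred_prev[tuple(src)]
    return fused
```

Because warped centres are floored back onto a coarse grid (0.2 m voxels in SemanticKITTI), small pose errors can shift a voxel by a full cell, and chaining several timesteps compounds the effect, consistent with the rebuttal's observation that historical fusion performed worse.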
Summary: The manuscript proposes a method for test-time adaptation of semantic scene completion models. During test time the method updates two models: one causally based only on previous times and one (in a delayed way) on future times. The models are updated through harvesting freespace and occupied annotations from lidar point clouds, as well as confident semantic predictions as annotations from the other timestamps. This method improves the base pretrained model without TTA on the in-domain test set and can also be used to improve the base model on out-of-domain sequences to some degree. Strengths: - The test-time losses for TALoS are well motivated and sensible. Leveraging not only point but also the freespace information from LiDAR rays makes use of all available information from it. The Shannon entropy-based uncertainty measure for selecting which semantic voxels to use for TTA is well motivated in Fig 2. The idea of leveraging pose tracking to enable leveraging information from previous timesteps makes sense and seems to help (it is a bit questionable how much of a completion problem this then still is; see weaknesses) - Overall the paper is well written and well illustrated. The figures (Fig 1 and 2 for example) are very helpful in conveying the message and ideas in the paper. The qualitative and quantitative experiments are comprehensive and clarify the choices made in designing TALoS as well as the improvement over the pretrained base model. It is good to see that other more generic TTA approaches are less performant than the proposed one (but the communication could have been cleaner with a table; see weaknesses) - It is great to see that TALoS only needs 1-3 optimization iterations to adapt to a new timestep. I do wonder how long that optimization takes per step? 
(Table 5) Weaknesses: - The approach is only shown to improve one method (SCPNet) in the experiments, which weakens the (implicit) claim that TALoS is a universally useful approach for any SSC model. - The approaches that TALoS was compared against are not communicated well. I am not sure what I should compare the performance of the TENT and CoTTA adapted models to. Please put those into a table. Similarly, the OccFiner results are just mentioned in text but not properly reported in the table. To make the OccFiner result comparable to TALoS one could simply "replay" TALoS on the same sequence after TTA was performed, no? (Like in Sec 4.6) - Leveraging additional information from previous timestamps makes SSC less of a completion problem, since it is quite possible that surfaces that are not observable at the current time are actually observed in the previous timesteps. Technical Quality: 3 Clarity: 3 Questions for Authors: I do wonder how long that optimization takes per step? (Table 5) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation that the method needs point cloud information is addressed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
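The Shannon-entropy voxel selection this review refers to can be sketched in a few lines. This is an illustrative stand-in with hypothetical names and threshold, not the paper's code: voxels whose normalized predictive entropy falls below a threshold keep their argmax class as a pseudo-label, and the rest are masked out.

```python
import numpy as np

def entropy_pseudo_labels(probs, tau=0.5):
    """Entropy-gated pseudo-labels (illustrative sketch).

    probs : array of shape (..., C) with per-voxel class probabilities.
    Voxels with normalized Shannon entropy below tau keep their argmax
    class; uncertain voxels are set to -1 (ignored by the TTA loss).
    """
    eps = 1e-12
    ent = -np.sum(probs * np.log(probs + eps), axis=-1)
    ent = ent / np.log(probs.shape[-1])   # normalize entropy to [0, 1]
    labels = np.argmax(probs, axis=-1)
    labels[ent >= tau] = -1               # mask out uncertain voxels
    return labels
```

A near-one-hot voxel (low entropy) yields a pseudo-label, while a near-uniform voxel (entropy close to 1 after normalization) is ignored.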
Rebuttal 1: Rebuttal: ### 1. Experiments using other SSC models | SSC method | Dataset | Baseline | TALoS | |--------------|---------------|:----------:|:-------:| | SSA-SC | SemanticKITTI | 24.5 | **25.3** | | SSCNet | KITTI-360 | 17.0 | **17.4** | Thank you for the constructive comment, which prompted us to further clarify the general usefulness of TALoS. In response, we conducted two additional experiments using different architectures and datasets: 1) SSA-SC with SemanticKITTI and 2) SSCNet with KITTI-360. For both, we utilized the officially released code, with baseline weights of SSA-SC and SSCNet pre-trained on the SemanticKITTI and KITTI-360 training sets, respectively. The evaluation is conducted on the validation set of each dataset. As shown in the table above, TALoS meaningfully enhances SSC performance (mIoU) across different architectures and datasets. The results imply that TALoS can be a promising solution for SSC in various settings. - - - ### 2. Comparisons with TENT/CoTTA | Method | mIoU (%) | cIoU (%) | |----------|:-------:|:------:| | Baseline | 37.56 | 50.24 | | TENT ($F^M$ only) | 37.92 | 49.86 | | Exp. C ($F^M$ only) | **38.38** | **52.95** | | CoTTA ($F^M$ and $F^G$) | 36.55 | 50.61 | | Exp. E ($F^M$ and $F^G$) | **39.29** | **56.09** | We conducted experiments using conventional TTA methods to elucidate the benefits of TALoS. However, we acknowledge that it may have been unclear which settings of our method should be compared with these techniques. We apologize for this ambiguity and would like to clarify the comparisons. All of the experiments were conducted on the SemanticKITTI validation set. **TENT**: TENT focuses on test-time adaptation of individual samples rather than continual adaptation over a sequence. Therefore, as mentioned in lines 255-256, we exclusively used $F^M$ for this setting (without $F^G$). This setting is the same as that of **Exp.
C** in Table 1 of the main paper, except using entropy minimization as its optimization goal instead of $\mathcal{L}^{comp}$ or $\mathcal{L}^{sem}$. The 3rd row of the table above presents the performance results of this TENT experiment. We have also replicated the performance of the Exp. C in the 4th row for convenient comparison. The results demonstrate that the optimization goal of TALoS is significantly more beneficial than TENT's entropy minimization. **CoTTA**: CoTTA aims for continual adaptation to sequential data. Therefore, as mentioned in lines 261-262, we used both $F^M$ and $F^G$ for CoTTA. However, unlike in TALoS, which is optimized by the proposed dual optimization scheme (refer to Section 3.4), in this setting, we performed a standard teacher($F^G$)-student($F^M$) optimization following CoTTA. Here, we use the prediction of the student model as the final SSC output. Also, similar to TENT, we use entropy minimization as the CoTTA setting's optimization goal. The 5th row of the table above provides the performance results of this CoTTA experiment. For a fair comparison, we have replicated the performance of **Exp. E**, which uses both $F^M$ and $F^G$, in the 6th row. Interestingly, we find that CoTTA fails to learn the continual distribution of the sequence, resulting in decreased mIoU. Conversely, TALoS effectively learns the gradual sequence, achieving significant gains in both cIoU and mIoU. This strongly supports the benefit of the TALoS optimization goal in fully leveraging information from sequential LiDAR data. - - - ### 3. Comparison with OccFiner We would like to clarify that a direct comparison between TALoS and OccFiner may not be pertinent, as our key objectives differ significantly. OccFiner is designed to refine the results of existing SSC methods in an "offline" manner. It first generates predictions for a LiDAR sequence using an SSC method and then fuses these predictions post-driving to refine the results. 
In contrast, TALoS aims to perform test-time adaptation instantly in an "online" manner. We assume the sequential sensing of LiDAR data during driving, and TALoS gradually enhances the model as the test-time adaptation progresses. In our main paper and this rebuttal, we do not assert that one approach is superior to the other, as both methods have advantages in their respective practical settings. This is why we simply mention OccFiner in the text rather than directly comparing ours with it in Table 2 of the main paper. Additionally, the purpose of the playback experiment is solely to verify whether the proposed dual optimization achieves gradual adaptation. We did not claim that the playback of TALoS is superior to OccFiner, nor did we intend to compare them directly, given that our primary goals differ. We hope this explanation helps clarify the research scope of our paper, and promise to include this discussion in the final version in more detail. - - - ### 4. About leveraging additional information from previous timestamps We understand this concern, as using observations from the previous timestep might seem to simplify the completion problem of the current timestep. As a response, we aim to demonstrate the unique contribution of this paper by showing that naively incorporating information from the previous timestep is actually less effective than TALoS. Please refer to the global rebuttal above, which discusses the comparisons with the methods based on temporal fusion. - - - ### 5. Computational overhead | Method | Time per step (s) | Overhead (%) | mIoU (%) | |--------------|:----------:|:-------:|:-------:| | Baseline | 2.26 | - | 37.56 | | TENT | 4.09 | +81 | 37.92 | | CoTTA | 6.14 | +171 | 36.55 | | TALoS | 6.65 | +194 | **39.29** | The table above provides the required time per step of the baseline (SCPNet), conventional TTA methods, and TALoS. Given the significant performance gains of TALoS, we believe this additional burden can be justified. 
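As context for the TENT comparisons in this rebuttal: TENT's optimization goal is Shannon-entropy minimization of the model's predictions. A self-contained numpy sketch of one such descent step on raw logits, using the closed-form softmax-entropy gradient dH/dz_k = -p_k(log p_k + H); all names are illustrative, and this is not the authors' or TENT's actual implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def shannon_entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def entropy_minimization_step(logits, lr=0.2):
    """One gradient-descent step that sharpens predictions by minimizing
    the Shannon entropy of softmax(logits).  Uses the closed-form
    gradient dH/dz_k = -p_k * (log p_k + H)."""
    p = softmax(logits)
    H = shannon_entropy(p)[..., None]
    grad = -p * (np.log(p + 1e-12) + H)
    return logits - lr * grad
```

Iterating this step on non-uniform logits sharpens the prediction toward its dominant class, which mirrors the "entropy minimization as optimization goal" setting used in the TENT/CoTTA comparisons above.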
--- Rebuttal Comment 1.1: Comment: I have read the rebuttal and the other reviews. I appreciate the author responses to clarify my questions and concerns. Tables in responses 1, 2, 5 are very valuable and should be added to the final manuscript if accepted. The table in response 1 helps support that the proposed approach can be used on other methods, but the lift in performance seems much smaller. I wonder why? The computational overhead is similar to the related CoTTA, but the proposed method does improve performance more. That is good to see. I do like the comparison with the early and late fusion approaches. I agree with the discussion in response to xDnF. It is surprising that the performance from late fusion does not increase. From my own experience, getting late fusion to work correctly can be tricky. --- Reply to Comment 1.1.1: Comment: We deeply appreciate the reviewer's consideration of our rebuttal. We promise to include the tables from responses 1, 2, and 5 in the revised version of our paper. This constructive feedback will greatly strengthen our work. We are also grateful to the reviewer for positive comments on our cost efficiency. Additionally, in our revision, we will incorporate a section discussing temporal fusion, along with extensive experiments conducted during the rebuttal process. - - - |SSC method|Dataset|Baseline|TENT|TALoS| |------|--|:--:|:---:|:---:| |SSCNet|KITTI-360|17.0|17.0 (+0.0)|**17.4 (+0.4)**| |SSA-SC|SemanticKITTI| 24.5 | 24.8 (+0.2) | **25.3 (+0.8)**| |SCPNet|SemanticKITTI| 37.6 | 37.9 (+0.3) | **39.3 (+1.7)**| Regarding the table in response 1, we humbly acknowledge that the performance improvement achieved by TALoS is smaller in SSA-SC and SSCNet compared to SCPNet. However, we would like to carefully emphasize that the baseline performance of each model is empirically a critical factor in TTA's effectiveness. While SSA-SC and SSCNet are both excellent methodologies, their overall performance is significantly lower than that of SCPNet.
For example, SSA-SC struggles to achieve meaningful semantic completion in certain categories, such as "motorcyclist" or "person." Although TALoS effectively enhances the performance of pre-trained models through test-time adaptation, it is still challenging to fully overcome such limitations with only a few iterations of online optimization. The table above shows that TENT encounters similar issues when using SSA-SC or SSCNet as its baseline model. TENT with SSCNet achieved an improvement of less than a tenth of a point, which is practically negligible. Interestingly, we observe that TALoS consistently achieves higher gains over TENT. This suggests that the lower gains of TALoS on SSA-SC or SSCNet are probably due to the inherent limitations of these models rather than TALoS itself. Based on these observations, we hypothesize that SSA-SC and SSCNet may not be sufficient to fully benefit from TTA-based approaches, unlike the relatively stronger SCPNet. The results indicate that as the baseline model's performance improves, the performance gains achieved by TALoS also tend to increase. For instance, SSA-SC, which has relatively higher performance than SSCNet, exhibited a greater improvement when using TALoS (as well as TENT). Of course, we acknowledge that our experiments are not extensive enough to generalize this trend confidently. However, we believe it is at least reasonable to conclude that the extent of TTA gains depends on the original model's performance and capacity. In summary, at this current stage, where SSC models are still actively researched and continuously improved, TALoS has the potential to further demonstrate its effectiveness. Please note that TALoS has significantly improved performance with SCPNet, one of the current state-of-the-art SSC models. In this context, we believe TALoS makes a meaningful contribution to the SSC field.
We deeply thank the reviewer for reading this long response and hope it successfully addresses the reviewer's concerns.
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's valuable feedback and will incorporate the suggestions into our paper revision. We also sincerely thank the reviewer for acknowledging the following strengths of our paper: * Novelty and soundness of TALoS (MQ39, xDnF, and 9T82) * Comprehensive qualitative and quantitative experiments (MQ39 and 9T82) * Good writing quality and helpful illustrations in conveying the ideas (MQ39). In the reviewer-wise rebuttal below, we address the weaknesses and questions raised by each reviewer point by point. We hope this clarifies the contribution of the proposed TALoS. Finally, the attached PDF file includes the following items: * Experimental results of the playback on the different sequence (please refer to the 5th response for reviewer xDnF) * Algorithm table of our dual optimization scheme (please refer to the 5th response for reviewer 9T82) - - - Meanwhile, in this global rebuttal, we address the concern commonly raised by two reviewers (MQ39 and 9T82). This concern is mainly about the difference between the proposed TALoS, especially $F^M$, and the temporal fusion. Is the role of $F^M$ identical with a simple temporal fusion of the observations from multiple timesteps or something more? To answer this question and demonstrate this paper's unique contribution, we experimentally verified that naively incorporating information from the previous timestep is actually less effective than TALoS. Specifically, we devised naive temporal fusion methods for the previous and current timesteps in two different ways, named early and late fusion. In **early fusion**, we merge the raw point clouds of both the previous and current timesteps and use the fused point cloud as input for our baseline (the pre-trained SCPNet). On the other hand, in **late fusion**, we separately obtain the predictions of the baseline at each timestep and aggregate the results of different timesteps at the voxel level. 
Here, when the predicted classes differ, we trust the one with lower entropy (which means higher confidence). | Method | mIoU (%) | cIoU (%) | |--------------|:----------:|:-------:| | Baseline | 37.56 | 50.24 | | Early fusion | 35.86 | 52.85 | | Late fusion | 37.11 | 52.49 | | Exp. C (with $F^M$ only) | 38.38 | 52.95 | | TALoS | **39.29** | **56.09** | The table above compares the performance of these fusion-based approaches with that of TALoS. All of the experiments are conducted on the SemanticKITTI validation set. For a fair comparison, we also replicated the performance of Exp. C in Table 1 of the main paper in the 5th row. Note that this setting exclusively uses $F^M$. The results show that the cIoU gain of TALoS exceeds that of the naive temporal fusions, both early and late. Moreover, the fusion-based methods even harm mIoU, while TALoS achieves a significant gain. Finally, it is noteworthy that Exp. C, with $F^M$ only, still outperforms all the other fusion-based methods. This is strong evidence that $F^M$ indeed does something more, and better, than naive temporal fusion. To sum up, naive temporal fusion struggles to identify semantically correct predictions. On the contrary, TALoS effectively aggregates the reliable observations from multiple timesteps and leverages them to enhance the model instantly. We believe that this advantage is the unique contribution of our TALoS, a TTA-based approach for the SSC task. Pdf: /pdf/6446ba8a9d521af17f2983402f505c72f148ae45.pdf
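The late-fusion baseline described in this global rebuttal (keep the lower-entropy prediction when the argmax classes of the two timesteps disagree) could be sketched as follows; a minimal numpy illustration with made-up names, not the authors' code:

```python
import numpy as np

def late_fusion(probs_prev, probs_curr):
    """Voxel-level late fusion of two timesteps' class probabilities:
    where the predicted (argmax) classes differ, trust the prediction
    with lower Shannon entropy (higher confidence); otherwise keep the
    current timestep.  probs_*: (N, C) arrays of class probabilities."""
    eps = 1e-12
    ent_prev = -(probs_prev * np.log(probs_prev + eps)).sum(axis=1)
    ent_curr = -(probs_curr * np.log(probs_curr + eps)).sum(axis=1)
    disagree = probs_prev.argmax(axis=1) != probs_curr.argmax(axis=1)
    take_prev = disagree & (ent_prev < ent_curr)
    return np.where(take_prev[:, None], probs_prev, probs_curr)
```

For a disagreeing voxel, a sharp previous prediction like [0.9, 0.1] wins over a flatter current one like [0.2, 0.8]; voxels with matching argmax simply keep the current prediction.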
NeurIPS_2024_submissions_huggingface
2024
Diffeomorphic interpolation for efficient persistence-based topological optimization
Accept (poster)
Summary: The paper introduces a novel approach to topological optimization using diffeomorphic interpolation to tackle the sparsity and inefficiency of gradient descent in topological data analysis (TDA). The authors focus on optimizing point clouds by transforming sparse gradients into smooth vector fields. They achieve this by assigning exponentially decaying weights to each point with a gradient, allowing the influence to span the entire space. This approach enables a larger set of points to be optimized in each iteration, thereby accelerating convergence. The effectiveness of the method is demonstrated through experiments that showcase its fast convergence. Additionally, the method proves to be particularly efficient when combined with subsampling techniques, facilitating large-scale topological optimization. Strengths: 1. In practical applications, optimizing topological loss is often slow. The proposed method significantly accelerates this process. 2. Diffeomorphic interpolation is proven to provide a descent direction for the topological loss, with a provided smoothness bound. 3. The proposed method is especially beneficial for subsampling techniques, leading to both faster convergence and reduced computation time. Weaknesses: The authors describe three types of topological loss where the proposed method can be applied. However, the application to topological registration losses is not demonstrated in their experiments. My concern is that, in topological registration loss, the match between the output diagram and the target diagram is often unstable. Any mismatch could result in an increased registration loss. With diffeomorphic interpolation, this increment could be even larger, potentially making the overall optimization more unstable. Technical Quality: 3 Clarity: 3 Questions for Authors: The experiments on convergence rate are only designed for simple topological situations. When the topological complexity increases, the optimization becomes unstable. 
Will the proposed method introduce additional instability to the optimization? In other words, could a convergence theorem for the proposed method be derived? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > topological registration losses is not demonstrated See Global Rebuttal and companion pdf (Figure 2(b) and Table 1), where we showcased our methodology on a topological registration loss on a real-world dataset of single cells. While we do observe oscillations, we still note a global decrease of the loss. As for the correlation score, the improvement is very marginal, which we believe is due to the fact that correlation score only measures the correlation between latent space angles and ground truth. Thus, while the optimized latent space does exhibit a better delineated circular shape with the registration loss (see Figure 2(b)), the size of that circle is prescribed by the coordinates of the target PD point $[-3.5, 3.5]$ (whereas this size is made as large as possible with the augmentation loss), which in turn does not change the latent space angles very much (as well as the correlation score overall) compared to the augmentation loss. > The experiments on convergence rate are only designed for simple topological situations. When the topological complexity increases, the optimization becomes unstable. Will the proposed method introduce additional instability to the optimization? In other words, could a convergence theorem for the proposed method be derived? Thank you for the suggestion. Proving a convergence theorem is indeed one of the future directions we aim at pursuing. We believe that such convergence results should be accessible provided that existing results in this vein (such as, e.g, [_Optimizing persistent homology based functions_, Carrière et al., **ICML**, 2021]) can be thought of as particular instances of our setting when $\sigma \to 0$. Hence, convergence should also occur for $\sigma$ and learning rates that are sufficiently small, in order to ensure that they do not interfere too much with the different terms involved by the high topological complexity. 
Guaranteeing the convergence for fixed bandwidth $\sigma$ and learning rate $\lambda$ in the greatest generality seems too demanding: we designed in Global Rebuttal and the companion pdf some cases where convergence seems to fail numerically because $\sigma$ is much too large with respect to the diameter of the point cloud. As commented in Global Rebuttal, a good heuristic in our experiments is to take $\sigma$ as a fraction of the median distance in the point cloud. Therefore, an appealing generalization suggested by reviewer v7wD would be to use in practice an adaptive / locally varying $\sigma$ that would in some sense, adapt to small vs large topological features. Note that one should do it in a way such that $K(x,y)$ remains positive semi-definite for any $x,y$ though, so that the theory of diffeomorphic interpolations presented in Section 2.2 still applies. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It’s encouraging to hear that global convergence has been observed in experiments. I look forward to seeing a convergence theorem for diffeomorphic interpolation. Overall, this is a strong paper.
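For readers following this thread, the core construction (extending a gradient supported on a few points to a smooth vector field via a Gaussian kernel) can be sketched as below. This is a simplified, componentwise scalar-kernel version with illustrative names; the paper's actual construction uses a matrix-valued RKHS formulation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated pairwise."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def smooth_gradient_field(anchors, grads, sigma, reg=1e-8):
    """Given a sparse gradient `grads` supported on `anchors` (the
    points identified by persistence computation), solve
    (K + reg*I) A = grads and return the smooth field
    v(x) = sum_i k(x, anchor_i) A_i, defined on the whole space."""
    K = gaussian_kernel(anchors, anchors, sigma)
    A = np.linalg.solve(K + reg * np.eye(len(anchors)), grads)
    return lambda X: gaussian_kernel(X, anchors, sigma) @ A
```

The returned field matches the prescribed gradients at the anchor points (up to the small regularizer) and decays smoothly away from them, so every point of the cloud, not just the critical ones, receives an update direction.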
Summary: In "Diffeomorphic interpolation for efficient persistence-based topological optimization", the authors provide a novel method to compute interpolations of gradients of persistence diagrams. This approach makes optimisation of point clouds w.r.t. loss functions defined on the associated persistence diagrams feasible, as it provides generalised gradients which smoothly vary over the underlying space. In summary, the authors use some form of Gaussian kernels and ideas from stochastic gradient descent to extend the standard gradients only defined on a few points to the entire base space. They compare the running time of their algorithm favourably to other existing topological gradients, and perform qualitative experiments on synthetic data and the latent space of an autoencoder. Strengths: 1. The paper presents a novel combination of topological gradients and diffeomorphic interpolation. The authors do a good job of embedding this approach into the existing literature and highlighting its differences and relevancy. 2. The submission is technically sound and provides theoretical guarantees which are proven. The application to the latent space of variational autoencoders provides a nice and simple application adding to the theoretical contributions. 3. The proposed method significantly enhances the applicability of optimisation of point clouds w.r.t. topological loss functions. The idea to solve some of the problems of topological gradients using diffeomorphic interpolation is new and very good. (4. The gif is very nice :) ) Weaknesses: 1. There is no intuition provided behind theoretical result number 2. One can either just believe it, or read the very brief and technical derivation in the appendix; a brief summary and intuition in the main text would be great! 2.
While there are theoretical guarantees for the derivative of the loss at a single time point (and thus in the step-size -> 0 case), any guarantees which also apply in the case of discrete steps are missing. To the reviewer's understanding, the diffeomorphic interpolation of the loss function does not need to be continuous in t, even in the case of infinitesimally small step sizes. Thus the relevancy of the theoretical guarantees in practice is not clear. 3. The choice of the bandwidth of the Gaussian kernel used for the diffeomorphic interpolation, a seemingly very important hyperparameter, is not discussed in the paper. The impact of different choices of sigma is not looked at in the experiments. (See the question section.) 4. The experiments presented are either toy examples or very artificial. An application on a down-stream task relevant in some field of practice or real-world application would significantly enhance the strength of the paper. (This is a caveat applicable to much of the TDA literature in general though.) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the intuition behind the proof of Theorem 2? (Cf. Weakness 1) 2. What do the theoretical guarantees tell us in practice? Do they tell us something? Can you provide any guarantees on the evolution of the loss over a number of steps with a discrete step size? Or even in the continuous case? If not, what are the theoretical reasons these cases are so much harder? 3. How did you choose the bandwidth in practice? If someone knows nothing about their dataset, what would be a good heuristic to choose sigma? Do we need to fix a global sigma or can we vary sigma locally? 4. What is the impact of different choices of sigma on the results of the experiments? How sensitive are the experiments to varying sigma? 5. In the bottom part of Figure 4, why is the loss of Oineus significantly larger than the loss of the other methods for small x values? 6.
In Figure 3, how did you pick the hyperparameters? Did you tune them for the diffeomorphic interpolation, or for the vanilla case? It might be the case that different hyperparameter regimes are optimal. 7. In Figure 9, 3rd from the left: what are the purple points? 4th from the left: How can the large loss spikes for the diffeo algorithm be explained? How do the hyperparameter choices influence the loss stability? I would like to thank the authors in advance for their answers! Comments: * Line 20: "can be done once and then re-applied to new data in linear time" -> What does that even mean? * ll 33: "active line OR research" * ll 37: "yielding to the problem" * ll 84: "allows to process datasets whose sizes are currently out of reach in TDA," This claim is a bit bold: Subsampling and taking landmarks was a thing even before this paper. Maybe this applies to optimisation of point clouds in TDA: * ll 100: "machinary" * section 2.2: The defintion of RKHS is really confusing. If I didn't know the definition beforehand, I would not have understood it there * ll 181: where does v live? can X and Y be any arbitrary point clouds? With which definitions? How can X be the argument on the left side of the definition, and still be referenced as all X in the set builder notation on the RHS? * ll 193: "i-th line" * ll 214: "s-points" * ll 476: The grammar of this sentence is confusing. * page 13, footnote 5: only non-empty simplices. footnote 8: "deteriorate" Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As detailed above, a discussion on hyperparameter choices, the stability of the algorithm, and the relevance of the guarantees in practice is missing. For possible negative societal impact: In Figure 5 the authors blow up a synthetic bunny and make the forgivable mistake to fail to mention that this experiment should not be performed on real-world bunnies... Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. What is the intuition behind the proof of Theorem 2? Our goal with this result was to quantify the smoothness of our proposed diffeomorphic interpolation (as characterized with its Lipschitz constant) based on the kernel it is computed from. In the case of the Gaussian kernel, we found that the Lipschitz constant depends on the kernel bandwidth $\sigma$: indeed, the more spread the Gaussian is, the more critical points (identified during the computation of PH) can influence other data points potentially far from them, introducing unwanted distortions and larger Lipschitz constant. Similarly, if the condition number is large, inverting the kernel matrix might introduce instabilities in the $\alpha_i$ in Equation (4), and thus a larger Lipschitz constant as well. Finally, the total persistence also appears, as the more PD points one has to optimize, the more critical pairs will appear, and thus the more constrained the minimization problem in section 2.2 is, leading to more complex diffeomorphic interpolation solutions with larger Lipschitz constants. > 2. What do the theoretical guarantees tell us in practice? (…) evolution of loss with discrete step-size? (…) what makes it much harder? We agree that the theoretical guarantees remain of limited use in practice: they only tell us that our diffeomorphic interpolations also provide meaningful descent directions for the loss; and thus that for a sufficiently small step-size, the loss must decrease. Empirically, we showcase that overall, the loss decreases substantially faster using our method. Proving this formally in some reasonable settings (continuous-time, etc.) 
remains quite hard though, for various reasons: - the losses considered are typically not convex (with respect to the input point cloud $X$), and thus it is hard to quantify how large the learning rate can be taken and how much the loss decreases after a discrete-time step, - our $\tilde{v}$ that dictates the dynamic of the flow is not a gradient (at least for Euclidean geometry), so we cannot faithfully apply the gradient flow/descent literature to get theoretical guarantees, - the dependence of the Vietoris--Rips filtration on the point cloud $X$ is cumbersome. We can derive theoretical guarantees as long as the ordering of the pairwise distances $\|x_i - x_j\|$ remains unchanged (i.e., as long as the points are moved by less than $\min_{ij,kl} |\|x_i - x_j\| - \|x_k - x_l\| |$), but this is fairly restrictive. It is clear that studying this model is of interest, but we believe that it may be postponed for future work. > 3. and 4. How did you choose the bandwidth in practice? (…) heuristic? (…) vary sigma locally? (…) different choices of sigma? We chose $\sigma$ with no parameter tuning (see Global Rebuttal and companion pdf, which show that the results of our method in Figure 3 can actually be improved if we pick $\sigma = 0.3$ instead of $0.1$); the heuristic being that $\sigma$ should be a fraction of the median distance between input points (this heuristic is common when calibrating Gaussian kernels in other statistical settings). It may be possible to consider a space-dependent $\sigma$ or other variants: as long as $(x,y) \mapsto K(x,y)$ remains valued in positive semidefinite matrices, the theory presented in Section 2.2 adapts faithfully (see also line 154 in the main paper). We included in Global Rebuttal and companion pdf (Figures 1 and 2(a)) some experiments for varying $\sigma$, showcasing the robustness of our approach w.r.t. $\sigma$. > 5. Oineus loss larger Good catch, thanks!
We forgot a normalization factor in the way the oineus loss is displayed. Fortunately, as our stopping criterion is that a loss of exactly $0$ should be reached (see line 248 in the main paper), this does not change the number of iterations (e.g., 49 iterations for oineus alone) nor the running times, and thus the conclusions of this proof-of-concept experiment remain unchanged. We will correct this in the next version. > 6. In Figure 3, how did you pick the hyperparameters? (…) It might be the case that different hyperparameter regimes are optimal. We actually pick $\lambda = 0.1$ for every experiment, with no specific tuning. Given that our approach is directly extrapolating the vanilla topological gradient, we believe that picking the same learning rate for both models is the natural choice. > Figure 9, (a) 3rd from left (…) purple points? (b) 4th (…) loss spikes for the diffeo algorithm be explained? (a) For the 3rd figure, transparent blue points simply represent the initial point cloud as a reference. Orange points represent the final state of the point cloud. (b) We report the loss on the subsample batch (because evaluating the loss on the whole point cloud is computationally prohibitive, see line 227). Because of the stochastic nature of our approach, it may happen that this estimated loss is, on a few occasions, quite high because the subsample does not reflect the expected topological structure. This does not happen with vanilla gradient descent for a simple reason: the initial point cloud is broadly uniform, so are the subsamples (with high probability), yielding a loss around $-0.03$, and because _vanilla gradient descent fails to perform any significant update of the point cloud_, this situation remains through iterations. In contrast, because our diffeomorphic interpolations do change the point cloud, they reach configurations where some subsamples may yield a high loss, though the loss decreases overall.
In a nutshell, these spikes are not related to the optimization technique involved (if the vanilla gradient descent was initialized with, say, the epoch 600 reached by our approach based on diffeomorphisms, one would observe similar spikes), but to the “topological variance” in the subsample for a given configuration, the point being that vanilla gradient descent does not reach such configurations. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed rebuttal! I believe your changes and proposed changes to the manuscript are very valuable. I still have some brief comments: 1. I believe that adding the explanations and intuition you gave me in the rebuttal would be a valuable addition to the (appendix of the) paper. For example, sometimes it is very relevant and interesting information why a theorem takes the form it has, and why it is hard to prove something more difficult. 2. I appreciate your study on the effect of the bandwidth $\sigma$ on the performance of your algorithm. I also value the heuristic for choosing $\sigma$. However, to fully appreciate this I think you should use __(a)__ the $\sigma$ computed by this heuristic in your experiments or __(b)__ at least report the suggested $\sigma$ in comparison to the $\sigma$ used in your experiments. 3. I am not convinced by the additional experiments on the VAE. (Or I might have misunderstood.) I know that you have added a loss-term to optimise for the embedding of the VAE having a circular structure. However, did you check whether this particular circle, and the individual location of the points on the circle correspond to any meaningful real-world features? I know that, because of the setup of the experiments and the studied cell-cycle, it is not unreasonable to expect some circle to form. I just don't understand how to quantitatively evaluate the meaningfulness of your result. (Maybe using circular coordinates, for example from [1]) 4. 
I know that this is a complicated question: What would you need to do to generalise your topological optimisation to work with DTM filtrations? It seems that many topological features in noisy real-world settings are much more likely to be uncovered by DTM filtrations, for example in the scHi-C embedding. [2] Despite all the above remarks, I believe that this is a very cool paper even in its current state. I am looking forward to exciting applications of this in the world of ML. I will thus be raising my score and recommending the paper for acceptance. [1] Perea, Jose A. "Sparse circular coordinates via principal ℤ-bundles." Topological Data Analysis: The Abel Symposium 2018. Cham: Springer International Publishing, 2020. [2] Anai, H., Chazal, F., Glisse, M., Ike, Y., Inakoshi, H., Tinarrage, R., & Umeda, Y. (2020). DTM-based filtrations. In Topological Data Analysis: The Abel Symposium 2018 (pp. 33-66). Springer International Publishing. --- Reply to Comment 1.1.1: Comment: _I believe that adding the explanations and intuition you gave me in the rebuttal would be a valuable addition to the (appendix of the) paper (...)_ _I appreciate your study on the effect of the bandwidth on the performance of your algorithm (...) to fully appreciate this I think you should use (a) the $\sigma$ computed by this heuristic in your experiments or (b) at least report the suggested $\sigma$ in comparison to the $\sigma$ used in your experiments._ Thank you for your suggestions. We will include both our theorem's discussion and highlight our heuristic and the corresponding $\sigma$ value in the experiment section. _I am not convinced by the additional experiments on the VAE (...) did you check whether this particular circle, and the individual location of the points on the circle correspond to any meaningful real-world features?_ This is a rightful question, and we apologize for not getting into much detail about it in the rebuttal (mostly due to lack of space). 
In order to make sure that the circular shape that we end up with (after topological optimization) is biologically relevant and not artifactual, we measured the correlation between the latent angles (computed with the same formula as the one in the main paper) and a quantity called the **repli-score**, which is computed out of the copy numbers of genome regions associated with the early phases of the cell cycle. It was introduced and provided in [_Cell-cycle dynamics of chromosomal organization at single-cell resolution_. Nagano et al. **Nature**, 2017] as an efficient, computable, and real-valued proxy for the cell cycle. Thus, a large correlation indicates that the circular shape in the optimized latent space is indeed representative of the cell cycle itself. _What would you need to do to generalise your topological optimisation to work with DTM filtrations? (...)_ Thank you for this suggestion. As far as our method is concerned, it only requires a sparse gradient on a point cloud in order to run. Hence, the main question is how to compute such gradients for DTM-based filtrations. For instance, DTM-Rips filtrations should be possible, as the equation for computing the DTM is easy to differentiate, and the gradient for Rips is already known, but DTM-Alpha filtrations should be more difficult, as it is less clear what the gradient of the Alpha filtration is. See also our answer to the last question of reviewer fiZD in the context of multiparameter persistence (which can also be used for dealing with noisy data). --- Rebuttal 2: Comment: Dear reviewer v7wD, Thank you for your review. The authors have tried to address your concerns in the rebuttal. Please carefully read their rebuttal, let them know what you think, and whether there is any more clarification you require. Note that author-reviewer discussion ends on August 13th. Thanks! the AC
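The correlation check between latent angles and the repli-score described in the reply above can be illustrated with a small synthetic sketch. Everything here is hypothetical: the angle formula is a plain `arctan2` on centered 2-D latent codes (an illustrative stand-in for the formula in the paper), and the cyclic covariate `proxy` is a stand-in for the repli-score, not real single-cell data:

```python
import numpy as np

def latent_angles(Z):
    # angle of each centered 2-D latent code (illustrative stand-in
    # for the angle formula used in the paper)
    Zc = Z - Z.mean(axis=0)
    return np.arctan2(Zc[:, 1], Zc[:, 0])

# synthetic latent space lying on a circle, parameterized by a cyclic
# covariate t playing the role of cell-cycle progression
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 300)
Z = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(300, 2))

# repli-score-like quantity varying smoothly along the cycle
proxy = np.cos(t)
r = np.corrcoef(np.cos(latent_angles(Z)), proxy)[0, 1]
```

A value of `r` close to 1 would indicate, as argued in the reply, that the circular latent structure tracks the underlying cycle rather than being artifactual.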
Summary: The paper proposes a novel way based on diffeomorphic interpolation to deal with the problem of sparse gradients when performing topological optimisation, which is relevant in the intersection between Topological Data Analysis (TDA) and ML. Strengths: (S1) The paper addresses a relevant problem in the TDA field (S2) The paper is comprehensive and well-written (S3) The paper proposes interesting applications of the proposed method Weaknesses: (W1) The experimental setup could be strengthened by providing a real-world application Technical Quality: 4 Clarity: 3 Questions for Authors: (Q1) In Figure 7, the initial LS is in blue and the final is in orange, while in Figure 6, the initial LS is in orange and the final is in blue; I find this confusing, is there a reason for it? (Q2) Can you please elaborate more on the importance of your suggested application to black-box autoencoder models, and how does your method compare to other topological optimisation techniques such as Oineus (as in the table in this section you have only compared your approach to the Vanilla, and you reported Oineus for the rest of the experiments) (Q3) The performance between Vanilla and Diffeo is very similar in the table in section Black-box autoencoder models (apart from Pig); have you run your experiments multiple times, and what are the mean and standard deviation? Also, do you have an explanation as to why your method outperforms Vanilla by a large margin for Pig? (Q4) Figure 3 is difficult to interpret. What is considered to be the expected end result of topological optimisation? The final shape after 100 iterations of your algorithm is very distorted; can you please add more details on how that is good? (Q5) What are some real-world uses of topological optimisation? Can you please elaborate on when one should care about this, and what possible applications there are (Q6) Do you see real-world applications of your method, and how big datasets can your approach handle? 
(Q7) Can you please elaborate on how exactly your method offers better interpretability for the black-box autoencoder application? (Q8) How robust is your method to taking different samples and to different sampling strategies? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > (Q1) color swap in Figures 6 and 7 This was accidental. Thank you for catching this error! > (Q2) elaborate on (…) black-box AE models, (…) compare to other topological optimisation techniques Sorry: we made a misleading mistake by accidentally naming “Vanilla” the competitor of “Diffeo” (ours) in the table on page 9. Here, “Vanilla” is supposed to mean _no topological optimization on the latent space_ (like “Vanilla VAE”, not “Vanilla topological gradient descent”). We will change this in the revised version and are deeply sorry for the confusion it caused. We do not compare our method with existing topological optimization methods (Oineus, Vanilla, etc.) in this experiment. That is for a good reason: the experiment we propose can only be made using our method. Indeed, given a pre-trained VAE $D \circ E$, we design $\varphi : \mathbb{R}^{d’} \to \mathbb{R}^{d’}$ such that $\varphi \circ E$ induces a “better” latent representation (in terms of topological properties) by concatenating our diffeomorphic interpolations $\tilde{v}_k, k=1,\dots,T$. In contrast, **existing methods do not provide maps defined on the latent space**. Their gradients are only defined on training observations. Given (say) a new latent representation $z$, there is no way to “correct it” using vanilla or oineus gradients. Aside from computational efficiency, this is the biggest edge of our method over pre-existing topological optimization techniques: having access to maps defined on the whole space opens the way for new applications. > (Q3) run experiment multiple times? See Global Rebuttal and companion pdf, Table 1. As for the Pig result, the large margin is because the initial space (from the VAE) was particularly bad (topologically), see Figure 2(b). > (Q4) Figure 3 (…) expected end result of topological optimisation? The final shape (…) is very distorted, (…) how is that good? 
The loss that we minimize in this PoC experiment is proportional to the topological information contained in the input point cloud. That is, we want to destroy the topological structure (here, the presence of a loop). A good output (low loss) is a point cloud with no underlying loop. The take-home messages of this experiment are: - The vanilla gradient only updates a few points (sparse gradient) of the (small) subsample. This yields a small decrease in the loss: after $t=100$ steps, the point cloud has barely changed. - In sharp contrast, our diffeomorphic interpolation updates _the whole point cloud_ at each step. It yields a much faster optimization (showcased also in Figure 4), and the whole point cloud is deformed smoothly from the input one. - In terms of raw topological loss, there is no “better” final output: both methods manage to eventually provide a point cloud with trivial topology. Nonetheless, having a smooth, invertible deformation between the input and final objects can be useful and more interpretable; this is for instance leveraged in the black-box AE experiment. > (W1, real-world experiment) + (Q5) real-world uses of topological optimisation? See Global Rebuttal and the companion pdf (Figure 2(b) and Table 1) for a real-world application on single-cell data. In a nutshell, topological optimization can be useful if you have a topological prior (e.g., "my model should exhibit a periodic / cyclic structure"). One can also consider the references given in lines 40--42 in the main paper for other possible applications of topological optimization. > (Q6) how big datasets can your approach handle? Diffeomorphic interpolation updates computed from a subsample of the dataset modify the whole input point cloud, a sharp difference with the vanilla gradients that only update a fraction of the original point cloud at each step. 
Our PoC experiments illustrate this (in particular, Figure 5 shows that our approach can seamlessly perform topological optimization on a point cloud with $\sim 36,000$ points, a scale completely out of reach of previous methods), and we confirmed this on our new real-world application, which involves Vietoris-Rips PH on $1,171$ single cells. While this is already hardly tractable for vanilla gradients, subsampling combined with our diffeomorphic interpolations allows the task to be performed quickly and efficiently. In terms of computational complexity, the overhead induced by our method compared to vanilla gradients is: - inverting the kernel matrix $\mathbf{K}\in \mathbb{R}^{|I| \times |I|}$, where $I$ is the set of indices for which the vanilla gradient is non-zero. Since the vanilla gradient is usually sparse, $I$ is small and this matrix inversion is basically negligible. - Computing $\tilde{v}$ on the whole input point cloud $X$, which scales _linearly in the number of points_. > (Q7) elaborate on (…) interpretability for black-box autoencoder? The idea showcased in this experiment is the following: each dataset is a collection of images representing a rotating object, hence it is natural to assume that there must be a 2-dimensional underlying circle (topological prior). When training a VAE on such a dataset, we obtain an encoder $E$ and a decoder $D$ that enable the generation of new data. However, the latent space does not exhibit the expected circle structure (it has no reason to do so). We change the latent space representation using the diffeomorphism $\varphi$ optimized so that $(\varphi \circ E(x_i), i=1,\dots,n )$ satisfies our topological prior ($(x_i)_{i}$ is our training set). Since $\varphi$ is invertible, we can generate new observations using $D \circ \varphi^{-1}$. This increases interpretability: after topological correction, the latent representation is organized on a circle, and angles in the latent space reflect angles between rotated images. 
You can now sample “an image rotated by 30°” (with respect to some reference image) by simply taking an angle of 30° in the latent space. > (Q8) How robust (…) to (…) different samples See Global Rebuttal and companion pdf, Figure 1. --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts and the elaborate response. I have read the response and would like to stick with my score.
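The mechanics discussed in this exchange — invert the $|I|\times|I|$ kernel matrix on the sparse-gradient support, evaluate $\tilde v$ on the whole cloud in linear time, then apply the update $X_k = X_{k-1} - \lambda \tilde v(X_{k-1})$ — can be sketched as follows. This is a minimal NumPy sketch assuming a Gaussian kernel; the tiny ridge term is added for numerical stability, and all names and numbers are illustrative:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def diffeo_interpolation(X, grad, sigma):
    """Extend a sparse gradient (non-zero on a few rows of `grad`) to a
    smooth vector field evaluated at every point of the cloud X."""
    I = np.flatnonzero(np.abs(grad).sum(axis=1) > 0)  # sparse support
    K = gaussian_kernel(X[I], X[I], sigma)            # |I| x |I|: cheap, since I is small
    alpha = np.linalg.solve(K + 1e-9 * np.eye(len(I)), grad[I])
    # evaluating the field on all n points scales linearly in n
    return gaussian_kernel(X, X[I], sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
g = np.zeros_like(X)
g[:5] = rng.normal(size=(5, 2))           # sparse "topological" gradient
v = diffeo_interpolation(X, g, sigma=0.5)
X_new = X - 0.1 * v                       # X_k = X_{k-1} - lambda * v~(X_{k-1})
```

By the kernel-interpolation property, `v` agrees with `g` on the support rows while also moving every other point smoothly — which is the point of the method as described in the rebuttal.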
Summary: This paper proposes a new topological optimization scheme, mainly motivated by the sparsity of topological gradients. The authors introduce the notion of diffeomorphic interpolation and use this to create a smooth vector field over the whole space, which gives a gradient descent algorithm. The authors show some theoretical results which guarantee that the smoothness of the interpolation can be controlled. Further, the authors show numerical experimental evidence for the quick convergence of their method. The authors show, using experiments, that this scheme can be used to interpolate the gradient on the whole space by considering a gradient on a small sample, yielding expected results. The authors show that this framework can be used to infuse topological information into a pre-trained autoencoder model. Strengths: 1. Overall, the paper is well-written and organized. 2. The proposed method converges faster than its competitors, as shown in Figure 4. 3. The idea of diffeomorphically interpolating the gradient on a subsample to the entire sample is, to the best of my knowledge, novel and quite useful in practice. 4. The concept of inducing topological information into the latent space of a pre-trained autoencoder is also new to the best of my knowledge. Weaknesses: 1. Section 3 is notation intensive. For example, in Proposition 3.2, concepts like the condition number are not defined earlier; hence, the expression can be moved to the appendix, with the main text containing a line which says that the Lipschitz constant can be bounded, which guarantees the smoothness. 2. The proof of Proposition 3.1 can be moved to the appendix, as it is not hard to see given the way $\tilde v$ is defined/constructed, to improve the readability. 3. For the subsampling experiment, the comparison with the vanilla topological gradient seems slightly unfair because there is no way for the network to update the gradient over the whole space when a small percentage of points are being sampled for the gradient update. 
Updating the gradient over the whole space is not feasible, but is there a threshold on the number of points up to which the vanilla gradient update is feasible, and if so, is the final output in that case comparable to the output from diffeomorphic gradient descent? Minor: In Algorithm 1, should it be “Set $X_k \coloneqq X_{k-1} - \lambda \tilde v(X_{k-1})$"? Technical Quality: 3 Clarity: 3 Questions for Authors: Can this method work for differentiable multiparameter vectorization methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Weaknesses 1. and 2. (Section 3 is notation intensive. // Proof of Proposition 3.1 can be moved to the appendix) Thank you for the suggestion. We will lighten Section 3, stating Proposition 3.2 more informally and deferring the complete, technical statement to the appendix. As far as the proof of Proposition 3.1 is concerned, as it is very short, we consider keeping it in the main paper: it shows in a simple way how $\tilde v$ interplays with the loss, which is helpful for developing intuition. > Weakness 3. (For the subsampling experiment, the comparison with vanilla topological gradient seems slightly unfair because there is no way for the network to update gradient over the whole space) We agree. One could say that this is precisely the purpose of the experiment: since the vanilla topological gradients are not defined on the whole space (not even on the whole input point cloud), and because of the inherent computational limitation of (Vietoris-Rips) persistent homology, it is expected that vanilla gradient descent will perform poorly on this task. So yes, to some extent, the comparison is indeed unfair, but this is exactly the point we wanted to emphasize: extending gradients via diffeomorphic interpolations has a crucial impact in terms of computational efficiency, opening the way for new experiments involving topological optimization that are not currently accessible with vanilla gradient descent. > Minor (Should $X_t$ be $X_k$?) Indeed, that's a typo. Thank you for catching it! > Question: Can this method work for differentiable multiparameter vectorization methods? Our approach may be generalized easily: as long as one is given a (sparse) gradient on a point cloud $X \in \mathbb{R}^{n \times d}$, one can build its diffeomorphic interpolation and use the resulting $\tilde{v}$ to update $X$ instead. Therefore, any method that returns such gradients can benefit from our approach. 
This includes recent pipelines involving differentiable multi-parameter PH such as D-GRIL [_D-GRIL: End-to-end topological learning with 2-parameter persistence._ Mukherjee et al. arXiv:2406.07100, 2024] and differentiable signed measures [_Differentiability and convergence of filtration learning with multiparameter persistence._ Scoccola et al. **ICML**, 2024]. In both methods, gradients can be obtained by computing free resolutions of multi-parameter persistence modules, and are thus also very sparse (similar to the 1-parameter case studied in our work) as they depend only on the multigraded Betti numbers of the module. --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts and the response. I have read the response and would like to stick with my score.
Rebuttal 1: Rebuttal: # Global rebuttal We thank the reviewers for their constructive reviews and overall positive comments on our work. We will take into account all comments on clarity, typos, etc., that have been reported by the reviewers in the revised version of our work. In addition to individual responses, we provide complementary experimental results hopefully addressing remaining concerns in our attached companion pdf: - 1. A proof-of-concept study showcasing the robustness of our method to (i) the choice of $\sigma$ as suggested by Reviewer v7wD and (ii) the randomness induced by the subsampling as suggested by Reviewer GJy4 (Figure 1). The influence of $\sigma$ over the correlation scores is also provided (Figure 2(a)). - 2. A real-world experiment (suggested by Reviewers GJy4, v7wD) on single-cell genomics that involves minimizing a _topological registration loss_ as suggested by Reviewer uH7P (Figure 2(b)). - 3. The results of the black-box VAE experiments averaged over 100 runs as suggested by Reviewer GJy4 (Table 1), in which we compute the mean scores and standard deviations after adding, for each run, random uniform noise (with amplitude $0.05$ for COIL and $0.01$ for the single-cell dataset). We describe below the experiments we ran (for 1. and 2.; 3. is exactly the one in the paper but averaged over 100 runs). **1. Influence of $\sigma$ and subsampling (PoC).** We reproduce the same experimental setting as in Figure 3 of the main paper, i.e., we sample points uniformly on a circle of radius $1$ plus additional noise $\sim \mathcal{N}(0,0.05 I_2)$, and consider minimizing the total persistence of the point cloud. We take $\sigma \in \{ 0, 0.1, 0.2, 0.3, 0.5, 0.7, 1, 2, 3, 4, 5 \}$ (with the convention that $\sigma = 0$ corresponds to the vanilla topological gradients) and learning rate $\lambda = 0.1$ (as in the paper). We rely on a subsampling with batch size $s=50$ (as in the paper). 
To quantify the variability of the scheme (suggestion by reviewer GJy4) with respect to the randomness induced by the subsampling step, we run each gradient descent $50$ times with a fixed initialization $X_0$, up to a maximum of $200$ iterations, stopped earlier if a loss of $0$ (no topology left, global minimum has been reached) is measured. Figure 1 in the companion pdf displays the results of this illustrative experiment. We report the median of both running time and number of iterations to reach convergence (or reach the $200$-iteration limit), along with the $10\%$ and $90\%$ percentiles. The conclusions are: - For $\sigma = 0$ (vanilla) and $\sigma \geq 3$, the gradient descent _never converges_ in less than 200 steps. Since the diameter of the circle is $2$, it is not surprising that taking a bandwidth larger than that hinders convergence. - For $\sigma \in (0.1,1]$, the convergence occurs within the same order of magnitude (between $0.49$ and $1.74$s), the best performance being reached at $\sigma = 0.3$ (recall that we used $\sigma = 0.1$ in the paper, testifying that we did not rely on hyperparameter tuning). This suggests that, on regular structures, the approach is stable with respect to $\sigma$. Empirically, we observe that a good proxy is to take $\sigma < \mathrm{med}((\|x_i - x_j\|)_{ij})$. Note that even though, in theory, $\sigma \to 0$ should recover the vanilla topological gradients, one is limited by numerical accuracy when evaluating the Gaussian kernel. - The variation around the median over 50 runs is very small: the randomness of the samples at each iteration (hence of the trajectory) barely impacts the decrease of the loss and thus the convergence time. As for correlation scores in Figure 2(a), we observe oscillations for $\sigma$ values that are roughly on the sides (very small or very large), and more stable scores for middle-range values. 
Note that these oscillations could also come from how the correlation score itself is computed. **2. Real-world application on single-cell data.** We also designed an experiment on single-cell Hi-C data inspired from [_A Gradient Sampling Algorithm for Stratified Maps with Applications to Topological Data Analysis_. Leygonie et al. **Math. Prog.**, 2023]. The dataset consists of single cells characterized by chromatin folding, that is, each cell is encoded by the distance matrix of its DNA fragments. The dataset we focus on is taken from [_Cell-cycle dynamics of chromosomal organization at single-cell resolution_. Nagano et al. **Nature**, 2017], in which it was shown that cells are sampled at different phases of the cell cycle. Thus, we expect latent embeddings of this dataset to exhibit a circular shape, which we can enforce with diffeomorphic topological optimization. Specifically, we processed this single-cell dataset of $1,171$ cells with the stratum-adjusted correlation coefficient (SCC) with $500$kb resolution and convolution parameter $h=1$ on chromosome $10$. Then, we ran kernel PCA on the SCC matrix to obtain a preprocessed dataset of dimension $100$, on which we applied the same VAE architecture as the one used for COIL data in the main paper. Finally, we optimized the same augmentation loss as for the COIL data, as well as the following registration loss: $$L : X\mapsto W_{2}^2({\rm Dgm}^1(X), {\rm Dgm}^{1, t}),$$ where ${\rm Dgm}^1(X)$ contains the PD points of $X$ with distance-to-diagonal at least $\tau=1$, ${\rm Dgm}^{1,t}$ is a target PD with only one point $p^*=[-3.5,3.5]$, and $W_{2}$ is the 2-Wasserstein distance. We used $\sigma=0.25$ (aug.) 
and $\sigma=0.025$ (reg.; $\sigma$ is smaller to mitigate the effects of matching instability), $\lambda=0.1$ for $500$ epochs, with subsampling of size $s=300$ for computational efficiency (as computing Vietoris-Rips PH without radius thresholding on $1,171$ points already takes a few minutes on a laptop CPU, which becomes hardly tractable if done repeatedly as in gradient descent), and we measured the correlation between latent-space angles and repli-scores in Table 1. Pdf: /pdf/ef632739293e8bc1b2f21ae5bf2dfe5a101c660a.pdf
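For a one-point target diagram, the registration loss $L(X) = W_2^2({\rm Dgm}^1(X), {\rm Dgm}^{1,t})$ defined above admits a closed form: either exactly one diagram point is matched to $p^*$ and the rest go to the diagonal, or everything (including $p^*$) goes to the diagonal. A minimal sketch, assuming a Euclidean ground metric (the metric is not specified in the text, so this is an assumption):

```python
import numpy as np

def registration_loss(dgm, target):
    """Squared 2-Wasserstein distance from a persistence diagram
    (n x 2 array of (birth, death) pairs) to a one-point target diagram,
    with unmatched points sent to the diagonal (Euclidean ground metric)."""
    b, d = target
    target_diag = (d - b) ** 2 / 2.0  # cost of sending p* to the diagonal
    if len(dgm) == 0:
        return target_diag
    diag_cost = (dgm[:, 1] - dgm[:, 0]) ** 2 / 2.0           # each point -> diagonal
    to_target = ((dgm - np.array([b, d])) ** 2).sum(axis=1)  # each point -> p*
    total_diag = diag_cost.sum()
    # option 1: one point matches p*, everything else goes to the diagonal
    match_one = (total_diag - diag_cost + to_target).min()
    # option 2: p* and all diagram points go to the diagonal
    all_diag = total_diag + target_diag
    return min(match_one, all_diag)

# target p* = [-3.5, 3.5] as above; a diagram already containing p* has zero loss
loss_hit = registration_loss(np.array([[-3.5, 3.5]]), (-3.5, 3.5))
loss_off = registration_loss(np.array([[0.0, 1.0]]), (0.0, 2.0))
```

Here `loss_hit` is $0$, and `loss_off` is $1$: matching $(0,1)$ to $(0,2)$ costs $1$, which beats sending both to the diagonal at cost $0.5 + 2 = 2.5$.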
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Towards Unsupervised Model Selection for Domain Adaptive Object Detection
Accept (poster)
Summary: The paper introduces the Detection Adaptation Score (DAS), an unsupervised approach for selecting optimal models in Domain Adaptive Object Detection (DAOD) without target domain labels. DAS is based on the flat minima principle and includes the Flatness Index Score (FIS) and Prototypical Distance Ratio (PDR) to evaluate model generalization. Experiments on various benchmarks demonstrate DAS's effectiveness in checkpoint selection and hyperparameter tuning, outperforming existing methods. The perspective of enhancing model selection for DAOD is interesting and novel. It attempts to tackle an overlooked yet important limitation of existing DAOD methods and shows potential for other computer vision tasks. Strengths: 1. The paper writing is clear and easy to follow. The motivation is well described. 2. The perspective about model selection for DAOD is interesting because it seems to be the first attempt in this field. Besides, this can be extended to other communities since most tasks need to decide which checkpoint is optimal for practical usage without labels. 3. The proposed method can be deployed for different DAOD methods and give moderate gains without sacrificing inference speed. Weaknesses: 1. It seems that the proposed method is only tested on the Faster RCNN baseline. 2. The performance gains on some detectors under some settings are limited. 3. The method is only tested on object detection, lacking an analysis for more fine-grained tasks, e.g., instance segmentation. Technical Quality: 3 Clarity: 3 Questions for Authors: I didn't find major issues with the paper, and I only have some minor confusion, as listed below. 1. Can the proposed method be extended to other baselines for DAOD, such as the FCOS-based method [1] and the DETR-based method [2]? It will be great to enhance the adaptability of the method further. 2. Could the authors explain why DAS does not give obvious gains in some settings? 
For example, DAS gives no gains for CMT on Real-to-Art Adaptation in Table 1. 3. Can the method be used for fine-grained tasks, such as segmentation? 4. Consider giving a more comprehensive review of some advanced cross-domain object detection works [1-4]. [1] Deng, J., Xu, D., Li, W., & Duan, L. (2023). Harmonious teacher for cross-domain object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 23829-23838). [2] Huang, W. J., Lu, Y. L., Lin, S. Y., Xie, Y., & Lin, Y. Y. (2022, July). AQT: Adversarial Query Transformers for Domain Adaptive Object Detection. In IJCAI (pp. 972-979). [3] Zhou, W., Fan, H., Luo, T., & Zhang, L. (2023). Unsupervised domain adaptive detection with network stability analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6986-6995). [4] Li, W., Liu, X., & Yuan, Y. (2023). Sigma++: Improved semantic-complete graph matching for domain adaptive object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7), 9022-9040. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer XtC2 for the constructive comments and valuable feedback. We appreciate the recognition of our work's motivation and perspective. We answer the questions raised by the reviewer as follows. **Q1**: It seems that the proposed method is only tested on the Faster RCNN baseline. **A1**: We thank the reviewer for the valuable comments. To further verify the effectiveness of the proposed DAS method on other detectors, we conduct experiments on SIGMA++[4] (FCOS-based) and AQT[2] (DETR-based). The results on Cityscapes to Foggy Cityscapes are shown in the table below. They show that the proposed method also works with FCOS-based and DETR-based methods. In particular, our method chooses a SIGMA++ checkpoint with 41.7% mAP while the last checkpoint achieves 39.5% mAP, demonstrating the adaptability of the proposed method to other detectors. Although DAS gives a moderate gain compared with the last checkpoint of AQT, our method selected the optimal checkpoint. We will evaluate our method on other detectors in future research. | Method | Last | Ours | Imp.$\uparrow$ | Oracle | | ------- | ---- | ---- | -------------- | ------ | | SIGMA++ | 39.5 | 41.7 | 2.2 | 41.7 | | AQT | 45.5 | 45.7 | 0.2 | 45.7 | **Q2**: Could the authors explain why DAS does not give obvious gains in some settings? For example, DAS gives no gains for CMT on Real-to-Art Adaptation in Table 1. **A2**: The main reason is that the performance gap between the last checkpoint and the Oracle result is small, leaving our method relatively little room for improvement. For example, the average performance gap of AT over three settings is $8.68\%$, while that of MT over three settings is $1.28\%$. Nevertheless, the proposed method selects better checkpoints in most cases and at least matches the last checkpoint in the few remaining cases. **Q3**: Can the method be used for fine-grained tasks, such as segmentation? **A3**: Yes, our method can be used for fine-grained tasks. 
To verify this, we conduct an experiment on semantic segmentation. In particular, we evaluate the well-known domain adaptive semantic segmentation approach DAFormer[5]. The quantitative results on GTA5->Cityscapes are shown below. They show that our DAS method can choose a better checkpoint of the model with an mIoU of 64.2%, while the last checkpoint of the model only achieves 60.9%. This demonstrates that our DAS method still works effectively on domain adaptive semantic segmentation tasks. | Method | Last | Ours | Imp.$\uparrow$ | Oracle | | -------- | ---- | ---- | -------------- | ------ | | DAFormer | 60.9 | 64.2 | 3.3 | 65.9 | **Q4**: Consider giving a more comprehensive review of some advanced cross-domain object detection works [1-4]. **A4**: Thanks for the valuable feedback. We will enhance the final paper by providing a more comprehensive review of some advanced cross-domain object detection works. [1] Deng, J., Xu, D., Li, W., & Duan, L. (2023). Harmonious teacher for cross-domain object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 23829-23838). [2] Huang, W. J., Lu, Y. L., Lin, S. Y., Xie, Y., & Lin, Y. Y. (2022, July). AQT: Adversarial Query Transformers for Domain Adaptive Object Detection. In IJCAI (pp. 972-979). [3] Zhou, W., Fan, H., Luo, T., & Zhang, L. (2023). Unsupervised domain adaptive detection with network stability analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6986-6995). [4] Li, W., Liu, X., & Yuan, Y. (2023). Sigma++: Improved semantic-complete graph matching for domain adaptive object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7), 9022-9040. [5] Hoyer, L., Dai, D., & Van Gool, L. (2022). DAFormer: Improving network architectures and training strategies for domain-adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9924-9935). 
--- Rebuttal Comment 1.1: Comment: Thank you for the clarification and effort. I have read all the author's responses and other reviews, which addressed my concerns and highlighted additional strengths and potential in tasks beyond DAOD, such as segmentation and DG. Therefore, I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: Thanks for your timely responses. We sincerely appreciate your constructive comments and valuable feedback, which enhance the quality of our paper. We will revise our paper in the final paper as you suggested.
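The Last / Ours / Oracle comparison used in the tables above can be made concrete with a small hypothetical sketch (the scores and mAP values below are illustrative, not from the paper): each saved checkpoint gets an unsupervised DAS-style score, "Ours" picks the argmax of that score, and "Oracle" is the label-dependent best.

```python
def compare_selection(checkpoints):
    """checkpoints: list of (unsupervised_score, target_mAP) per saved epoch.
    Returns the Last / Ours / Oracle columns of the rebuttal tables."""
    last = checkpoints[-1][1]                       # just take the final epoch
    ours = max(checkpoints, key=lambda c: c[0])[1]  # argmax of the unsupervised score
    oracle = max(c[1] for c in checkpoints)         # needs target-domain labels
    return last, ours, oracle

# hypothetical run where the score roughly tracks target mAP
run = [(0.30, 38.0), (0.45, 41.7), (0.40, 40.2), (0.35, 39.5)]
last, ours, oracle = compare_selection(run)  # -> (39.5, 41.7, 41.7)
```

The gap between `ours` and `oracle` is exactly what the "Imp." column of the rebuttal tables is tracking relative to `last`.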
Summary: This submission provides a strategy to select an appropriate domain adaptive object detection model without access to labels of the target domain. The basic premise is that flat minima imply better generalization, which is used to guide the selection. Experiments indicate the effectiveness of the design based on their detection adaptation score. Strengths: 1. This paper is easy to read and the idea is clearly presented. 2. The proposed strategy is based on flat minima and is therefore rational. 3. Experiments show the effectiveness of the proposed strategy. Weaknesses: 1. This paper may neglect the fact that in real applications, target labels are often known for testing the provided model, and in fact the training labels may not be accessible. So, I do not think this strategy is really necessary. 2. This paper essentially addresses a domain generalization (DG) task, where the target domain is unseen. So we can only select (train) a better model by using existing source domain data. Flat minima have also been used in DG methods to instruct how to train a domain generalization model. 3. The proposed selection strategy is orthogonal to the high-level task, such as object detection. Therefore, why focus only on the domain adaptive object detection task? More tasks should also be tested, such as image classification, semantic segmentation and others. 4. In the experimental comparison, the authors compare against the last checkpoint, which may not be fair. 5. Lacking comparisons to many SOTA DAOD models. 6. What is the computational overhead? Technical Quality: 2 Clarity: 3 Questions for Authors: 1. It is similar to DG and therefore the proposed strategy is not novel, although the motivation sounds good. 2. Necessity of such motivation. 3. More experiments on other related tasks. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. The motivation is interesting but does not well match real-world applications. Test labels should be available, even if only a few. 
Otherwise, it is just a domain generalization scenario, which means the novelty of flat minima is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 9Lzv for the constructive comments. We appreciate the positive comments regarding the clear presentation, rational strategy, and effective experiments. Our responses to the reviewer's concerns follow. **Q1**: This paper may neglect the fact that in real applications, target labels are often known for testing the provided model, and in fact the training labels may not be accessible. So, I do not think this strategy is really necessary. **A1**: The goal of unsupervised domain adaptation (UDA) is to address the scenario where target domain labels are inaccessible, which aligns with real-world applications. However, how to evaluate a UDA model is a long-standing problem in the UDA research community. Researchers commonly use target domain labels to select models, which inevitably overfits the test set and violates the assumption that target domain labels are unavailable. Unsupervised model selection for UDA is also discussed in many previous works [16,51]. Our paper is the first attempt to address unsupervised model selection for domain adaptive object detection. Moreover, because object detection annotations are hard to obtain, our method is meaningful for real-world applications. We believe our work will inspire researchers in the community and push the real-world application of DAOD models forward. **Q2**: This paper is essentially a DG (domain generalization) task, where the target domain is unseen. So we can only select (train) a better model by using existing source domain data. Flat minima have also been used in DG methods to guide how to train a domain generalization model. **A2**: Our paper is intrinsically different from the DG task. Domain generalization addresses the setting where the target domain is not accessible, while our paper addresses model selection for DAOD.
Flat minima are a general concept in machine learning and have been used in many fields, with different estimation methods in different fields. In domain generalization, flat minima are estimated using the source domain labels. However, for domain adaptation the target domain labels are not available. Therefore, we propose a novel method, via a generalization bound, to find flat minima without using target labels, addressing the unsupervised model selection problem for DAOD. This is essentially different from the DG task. **Q3**: Experiments on other related tasks. **A3**: We thank the reviewer for the valuable feedback. To further verify the effectiveness of the proposed method, we conduct experiments on semantic segmentation. In particular, we evaluate the well-known domain adaptive semantic segmentation approach DAFormer. The quantitative results are shown below. Our DAS method chooses a better checkpoint of the model with an mIoU of 64.2%, while the last checkpoint achieves only 60.9%. This demonstrates that our DAS method also works effectively on the domain adaptive semantic segmentation task. We will evaluate our method on more tasks in future research. | Method | Last | Ours | Imp.$\uparrow$ | Oracle | | -------- | ---- | ---- | -------------- | ------ | | DAFormer | 60.9 | 64.2 | 3.3 | 65.9 | **Q4**: In the experimental comparison, the authors compare against the last checkpoint, which may not be fair. **A4**: Due to the unavailability of target labels, choosing the last checkpoint without a specifically designed model selection method is a reasonable default. We therefore choose the last checkpoint as a baseline. We also compare with other model selection methods in Table 2 of the manuscript to demonstrate the effectiveness of the proposed method. **Q5**: Lacking comparisons to many SOTA DAOD models. **A5**: We thank the reviewer for the comments. Our method is orthogonal to existing DAOD models.
To evaluate the effectiveness of the proposed method, we evaluate our method on classic DAOD methods. To further verify its generalization to SOTA DAOD models, we evaluate it on SIGMA++. The results are shown in the table below. The proposed method also works with SIGMA++, choosing a checkpoint with $41.7\%$ AP, while the last checkpoint achieves $39.5\%$. We will include the new results in our final paper and evaluate our method on more SOTA DAOD models in future research. | Method | Last | Ours | Imp.$\uparrow$ | Oracle | | ------- | ---- | ---- | -------------- | ------ | | SIGMA++ | 39.5 | 41.7 | 2.2 | 41.7 | **Q6**: What is the computational overhead? **A6**: In the implementation, calculating DAS needs only two inferences for each target domain sample and one inference for each source domain sample, so one checkpoint requires just a few minutes to obtain its DAS. For example, evaluating one model like AT takes about 2 minutes (on an RTX 3090), while training the model usually takes around 30 hours. Therefore, the computational overhead is small compared with model training. Once a model is selected by our method, its computational overhead is the same as the original detector. --- Rebuttal Comment 1.1: Title: impractical and insufficient evaluation on the confidence Comment: Thanks for the authors' feedback. Although the authors would like to present a strategy for model selection, I do think this is impractical in real applications and the evaluation is also insufficient to support the conclusion. (1) In real applications, for selecting or testing a model, some labeled target data from one or more target domains should be tested to guarantee the reliability of the deployed model. From this view, the proposed work may be useless.
(2) From another view, if you think in real applications the selected model "must be" reliable without needing any labeled target domain data, I think the current evaluation experiments are very insufficient. At least, you should test your selected model on more target domain data to make sure your selected model is really reliable in real applications even without using any labeled target data. --- Reply to Comment 1.1.1: Comment: Thank you for your comments; we would like to further clarify a few points as follows. First, it is true that testing with labeled data is the ideal way to evaluate a model. However, annotating target domain data is time-consuming and labor-intensive, and the target domain labels are often unavailable, especially when the target domain changes rapidly, as pointed out by Reviewer 4YNt. Therefore, effectively evaluating domain adaptation models in an unsupervised way is important for real-world applications. In our work, we address the UMS problem for DAOD through the flat minima principle. Unlike traditional flat minima methods that require labeled data to evaluate how well a model is located in a flat minima region, we show via a generalization bound that the flatness can be deemed as model variance, while the minima depends on the domain distribution distance for the DAOD task. We believe our work can inspire other researchers in the community and make DAOD methods more practical in real-world applications. Second, regarding the experiments, we evaluate the proposed method on widely-used DAOD benchmark datasets covering multiple domain adaptation scenarios, including real-to-art adaptation, weather adaptation, and synthetic-to-real adaptation, over several representative DAOD methods. We also compare against many model selection baselines, and the proposed method outperforms them, which demonstrates its effectiveness for unsupervised model selection.
We appreciate your comments for further experiments on more target domain data and sincerely look forward to your specific suggestions for the experimental design. We would like to include these results in the final version to strengthen our work. We thank the reviewer for the opportunity to further clarify the motivation behind our work. We sincerely hope that our response can address the concerns from the reviewer and the reviewer can re-evaluate this paper based on our response. --- Rebuttal 2: Comment: > (1) In real application, for selecting or testing a model, some labeled target data from one or more target domains should be tested to guarantee the reliability of the deployed model. From this view, the proposed work may be useless. I agree with you that target domain labels are necessary to give any kind of guarantees on the target domain. However, this might simply not be feasible, for example if the target domain is continuously shifting faster than labels can reasonably be obtained. In this setting, this method would select a best-effort model without any kind of guarantees, which is still useful for many real applications. I don't feel like the authors overclaim by suggesting guaranteed reliability on the target dataset. > (2) From another view, if you think in real application the selected model "must be" reliable without need of any labeled target domain data, I think the current evaluation experiments are very insufficient. At least, you should test your selected model on more target domain data to make sure if your selected model is really reliable in real application even without using any labeled target data. I'm not sure I understand how one would test the selected model on more target domain data. I feel like evaluating mAP on the target datasets, like the authors did, is the best one can do given the limited scope of existing UDA benchmark datasets. 
--- Rebuttal Comment 2.1: Title: Response to Reviewer 4YNt Comment: We thank the reviewer for the constructive comments. We agree with the comments that the target domain labels might be infeasible to obtain in many real-world domain adaptation cases and our method is useful for many real applications. Thank you again, we are happy to receive your comments to further improve the quality of our paper. --- Rebuttal Comment 2.2: Title: "Rating 3: Reject" may be biased Comment: **I agree with Reviewer 4YNt and believe the "Rating 3: Reject " may be biased.** The criteria for "Reject" include "technical flaws, weak evaluation, inadequate reproducibility, and/or incompletely addressed ethical considerations." However, this paper is technically sound, with no apparent flaws. Extensive experiments validate its effectiveness on standard benchmarks, and they provide sufficient reproducibility information without any potential ethical issues. **Therefore, the paper does not meet any of the "Reject" criteria.** Additionally, I am puzzled by the reviewer's concerns about the "uselessness of the DAOD setting," as **there is ample literature supporting the practicality and validity, which has also been recognized by the ML community [1-4]**. I also believe that extending to DG is beyond the scope of this paper and should be considered for the journal-extended version as future work. [1] Learning Domain-Aware Detection Head with Prompt Tuning NeurIPS 2023 [2] Learning Domain Adaptive Object Detection with Probabilistic Teacher ICML 2022 [3] Decoupled Adaptation for Cross-Domain Object Detection ICLR 2022 [4] Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection NeurIPS 2021 --- Reply to Comment 2.2.1: Comment: We sincerely appreciate the comments and the recognition of our work from reviewer XtC2. We are also committed to continuously advancing DAOD towards more practical applications.
Summary: This paper tackles the problem of model selection in unsupervised domain adaptation for object detection (DAOD). In DAOD, existing methods choose the models (checkpoints) using ground truth data on the target domain, which is impractical in real-world settings. To solve the problem, this paper proposes a model selection method called Detection Adaptation Score (DAS) based on the relationship between flat minima and generalization ability. The proposed DAS consists of two scores: Flatness Index Score (FIS) and Prototypical Distance Ratio (PDR). The FIS calculates the distance between the predictions before and after the perturbation of the model parameters. The PDR calculates the distance between the class prototypes on source and target domains. The experimental results demonstrate the positive correlation between the DAS and mAP. Also, the models chosen by the DAS outperform those chosen by other unsupervised model selection methods. Strengths: i) This paper is well-written and easy to follow. The motivation for tackling the task is discussed in Sec. 1, and related works are well-summarized in Sec. 2. The ideas and details of the proposed method are clearly described in Sec. 3. ii) The proposed method is simple yet effective. In addition, it is practically beneficial for not only model selection but also hyper-parameter tuning on DAOD. iii) The performance of the proposed method is good. The proposed method shows better performance than other unsupervised model selection methods in terms of the mAP of the chosen models. The ablation studies show that each of PDR and FIS contributes to better model selection. Weaknesses: i) The experiments in this paper show that the proposed method works well on model selection in DAOD. However, I think the proposed method is not limited to DAOD; that is, it can be directly applied to other tasks such as UDA for image classification and semantic segmentation (except IoU calculation in Eq. 2). 
I would like to see whether the proposed method works well on those tasks as well. I have some concerns about the relationships between the flat minima principle and the proposed method, as follows: ii) The first term of the right-hand side in Eq. (1) is the difference between the risks (i.e., losses) of $h$ and its neighbor $h'$. In contrast, the proposed method calculates the variance in the outputs, not the risks. They are different measures, and the relationships between them are not addressed in this paper, although there might be a correlation between them. This paper will be more convincing if mathematical relations between the difference in the risks and the variance in the outputs are derived. iii) I understand that the PDR evaluates the transferability and discriminability because high PDR means that the prototypes of the same classes are well-aligned between the source and target domains. However, the relationship between better transferability (and discriminability) and flat minima is not clear in this paper. Why do better transferability and discriminability lead to flat minima? iv) Although it was confirmed that each of FIS and PDR contributes to better model selection in terms of mAP, the evaluation of the correlations between each of FIS and PDR and flatness is missing. Similar to [7], the flatness can be evaluated as the change in risk using GT labels. This paper will be better if it provides experiments to evaluate the correlations between each of FIS and PDR and flatness. The experiments can address my concerns ii) and iii) indirectly. Technical Quality: 2 Clarity: 3 Questions for Authors: See i), ii), and iii) in Weakness. i) Does the proposed method work well in other UDA tasks as well? ii) Is it possible to derive the mathematical relationship between the difference in risks and the variance in outputs? iii) Why do better transferability and discriminability lead to flat minima?
Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Limitations are adequately addressed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer jxDH for the insightful comments. We appreciate the positive comments regarding the writing, method, and performance. We now answer the questions raised by the reviewer as follows. **Q1**: Does the proposed method work well in other UDA tasks as well? **A1**: Yes. Our method can also work in other UDA tasks. To verify this, we conduct experiments on semantic segmentation and evaluate the well-known DAFormer method on GTA-5 to Cityscapes. The experimental results in terms of mIoU are shown below. Our DAS score chooses a better checkpoint of the model with an mIoU of 64.2%, while the last checkpoint achieves only 60.9%. This demonstrates that our DAS method can also work effectively on other UDA tasks. | Method | Last | Ours | Imp.$\uparrow$ | Oracle | | -------- | ---- | ---- | -------------- | ------ | | DAFormer | 60.9 | 64.2 | 3.3 | 65.9 | **Q2**: Is it possible to derive the mathematical relationship between the difference in risks and the variance in outputs? **A2**: The variance in outputs serves as an upper bound for the difference in risks. To establish this relationship, let us define the error as $\mathcal{E}_{h}(o_{h}, g) = \left| o_{h}-g \right|$, where $o_{h}$ is the output of the network $h$ and $g$ is the ground truth. Similarly, the error of network $h'$ is denoted as $\mathcal{E}_{h'}(o_{h'}, g) = \left| o_{h'}-g \right|$. By applying the reverse triangle inequality, we derive: $$ \left|\mathcal{E}_{h}(o_{h}, g) - \mathcal{E}_{h'}(o_{h'}, g) \right| = \left|\left| o_{h} - g \right| - \left| o_{h'} - g \right| \right| \leq \left| o_{h} - o_{h'} \right| $$ Consequently, minimizing the variance in outputs effectively constrains the difference in risks, providing a mechanism to bound the variability in predictions across networks.
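A quick numerical check of this output-variance bound on toy scalar outputs (an illustrative sketch added by the editor, not part of the paper or rebuttal):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy outputs of two networks h, h' and a ground truth g.
o_h = rng.normal(size=1000)
o_hp = rng.normal(size=1000)
g = rng.normal(size=1000)

# Per-sample risk difference |E_h - E_h'| with E = |output - g|.
risk_diff = np.abs(np.abs(o_h - g) - np.abs(o_hp - g))

# Reverse triangle inequality: |E_h - E_h'| <= |o_h - o_h'| holds everywhere.
assert np.all(risk_diff <= np.abs(o_h - o_hp) + 1e-12)
```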
We appreciate your insightful comments and will delve further into these mathematical underpinnings in our revised manuscript to illustrate the implications of output variance for the risk difference. **Q3**: Why do better transferability and discriminability lead to flat minima? **A3**: This is a good question. We would like to gently point out that variance in outputs primarily indicates **flatness**. If labels were available, we could use them to calculate the error on the target domain so as to ensure the property of **minima**. Without target labels, we instead estimate the generalization error on the target domain through domain adaptation theory, where the target error is bounded by the source error, the domain distance, and a constant term. Transferability and discriminability are widely used to improve performance on the target domain. With both properties, flatness and minima on the target domain, we can obtain flat minima without using target labels. Thanks for your valuable feedback; we will add more explanation to make this clearer in the final paper. **Q4**: The correlations between each of FIS and PDR and flatness with GT labels. **A4**: We thank the reviewer for pointing this out. We provide an experiment to verify the correlation between our FIS and PDR and flatness measured with GT labels. In particular, we report the correlation coefficients on AT weather adaptation. The proposed FIS correlates well with GT flatness, with a coefficient of 0.64, demonstrating that FIS is an appropriate estimate of flatness without GT labels. However, PDR represents the target error by assessing transferability and discriminability, so it does not correlate as well with GT-label flatness (coefficient 0.45). It is by combining FIS and PDR that we can find flat minima without accessing target labels while achieving accurate model selection for DAOD models.
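As a rough illustration of the kind of flatness probe discussed in this thread (an editor's independent sketch on a toy linear classifier, not the paper's FIS, which also matches perturbed detections with an IoU term for the box-regression branch):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_kl(p, q, eps=1e-12):
    # Mean row-wise KL divergence KL(p || q).
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))

def flatness_probe(W, X, sigma=0.05, n_perturb=8, seed=0):
    """Average prediction shift of a linear classifier under Gaussian
    weight perturbations. A smaller value hints at a flatter region of
    parameter space (the spirit of FIS in this toy setting)."""
    rng = np.random.default_rng(seed)
    p = softmax(X @ W)
    shifts = [mean_kl(p, softmax(X @ (W + sigma * rng.standard_normal(W.shape))))
              for _ in range(n_perturb)]
    return float(np.mean(shifts))
```

With `sigma=0` the probe is exactly zero; larger values indicate predictions that are more sensitive to parameter noise, i.e., a sharper region.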
--- Rebuttal 2: Comment: Dear Reviewer jxDH, We hope this message finds you well. As the discussion period is ending soon, we would like to bring to your attention that we have submitted our response to your questions. We carefully consider your valuable feedback and suggestions, and hope the response well addressed your concerns. We are truly thankful for the comments on our work. If you find our responses to be satisfactory, we would greatly appreciate it if you could take this into account when considering the final score. Best regards, Authors of Submission 7161 --- Rebuttal Comment 2.1: Comment: Reviewer jxDH, do you have any additional questions or feedback?
Summary: The paper introduces a new metric ("DAS") for unsupervised model selection in domain-adaptive object detection. DAS consists of two components: a flatness index score that approximates the flatness of the loss landscape in the target domain by measuring prediction agreement across perturbed model parameters, and a prototypical distance ratio that measures the ratio of inter-class and intra-class feature dissimilarity. DAS is evaluated for model selection and hyperparameter optimization and correlates well with true model performance. Strengths: The paper presents a significant step forward in the underexplored area of unsupervised model selection in domain adaptive object detection. This is evidenced by considerable gains in object detection performance on the target domain, compared to baseline approaches. Although concepts similar to the two main contributions (flatness index score & prototypical distance ratio) have previously been explored in the literature, their combination and application to unsupervised model selection in domain adaptive object detection is novel. The overall structure of the paper is really clear, and the evaluation is extensive and informative. Weaknesses: The writing quality should be improved (e.g. general grammar, typos such as "Faster RCCN", sometimes inconsistent terminology such as "Prototype Distance Ratio" / "Prototypical Distance Ratio"). Apart from writing quality, my only significant concern is the apparent invariance of DAS performance towards lambda, which raises the question of whether the flatness index score and prototypical distance ratio measure different aspects in practice and the degree to which they are complementary. Technical Quality: 3 Clarity: 3 Questions for Authors: How is PDR or d_inter defined for single-class benchmarks such as Sim10k? DAS seems to work consistently very well for AT, while performance with other frameworks is less consistent (very apparent in Figure 4).
Do you know why this might be the case? It might make sense to include the oracle checkpoint in the main evaluation, especially tables 1-3. This would make it easier to see how much of the gap to the oracle is closed. Please clarify what is meant by "ES" in table 2. It would be interesting to see analogues for Figures 2 and 5 for Cityscapes and Sim10k in the appendix. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and societal impact are adequately discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
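For intuition, the inter-/intra-class prototype ratio described in the summary above could look roughly like the following (a hypothetical editor's sketch; in the unsupervised setting the target labels would be model pseudo-labels, and the paper's exact formulation may differ):

```python
import numpy as np

def prototypes(feats, labels, n_classes):
    # Mean feature vector per class.
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def pdr(src_feats, src_labels, tgt_feats, tgt_labels, n_classes):
    """Inter-class / intra-class prototype distance ratio (a sketch).

    intra: distance between same-class prototypes across domains;
    inter: distance between different-class prototypes across domains.
    Higher values suggest cross-domain alignment (low intra) together
    with class discriminability (high inter)."""
    ps = prototypes(src_feats, src_labels, n_classes)
    pt = prototypes(tgt_feats, tgt_labels, n_classes)
    intra = np.mean([np.linalg.norm(ps[c] - pt[c]) for c in range(n_classes)])
    inter = np.mean([np.linalg.norm(ps[i] - pt[j])
                     for i in range(n_classes) for j in range(n_classes) if i != j])
    return float(inter / (intra + 1e-12))
```

On well-aligned, well-separated features this ratio is large; when same-class prototypes drift apart across domains it shrinks.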
Rebuttal 1: Rebuttal: We thank Reviewer 4YNt for the constructive comments and valuable feedback. We appreciate the recognition of our work, including the support for its novelty, structure, and evaluation. We now address the reviewer's concerns as follows. **Q1**: The writing quality should be improved. **A1**: We sincerely thank you for the constructive comments. We have corrected the grammar issues, typos, and inconsistent terminology, and carefully proofread the paper. **Q2**: Apart from writing quality, my only significant concern is the apparent invariance of DAS performance towards lambda, which raises the question of whether the flatness index score and prototypical distance ratio measure different aspects in practice and the degree to which they are complementary. **A2**: From the generalization bound theory for flat minima presented in the paper, FIS and PDR measure the flatness and the minimum, respectively, so they theoretically evaluate models in two different aspects. In Table 5, the mAP of the selected model does not vary much across different $\lambda$ settings. This might be because, once the model converges to some degree, the two terms that FIS and PDR represent have both been optimized well, so both show relatively stable mAP results. This is why we use the PCC between DAS and model performance to measure the subtle differences. On the other hand, the PCC changes noticeably with $\lambda$, indicating that $\lambda$ indeed changes the DAS and influences the correlation between the proposed DAS and ground-truth performance. To examine the complementarity, we conducted an ablation study in Table 4 of the main paper. FIS alone achieves only a $0.48$ PCC; with the help of PDR, our DAS improves the PCC from $0.48$ to $0.67$, demonstrating the complementarity between FIS and PDR. **Q3**: How is PDR or d_inter defined for single-class benchmarks such as Sim10k?
**A3**: Although Sim10k has only a single class, the object detection model also considers the background class, so PDR and d_inter can still be calculated. We will add more description in the final paper to reduce the confusion. **Q4**: DAS seems to work consistently very well for AT, while performance with other frameworks is less consistent (very apparent in Figure 4). Do you know why this might be the case? **A4**: The main reason is that, for the other frameworks, the performance gap between the last checkpoint and the Oracle result is narrow, leaving our method relatively little room for improvement. For example, the average performance gap of AT over three settings is 8.68%, while that of MT over three settings is 1.28%. A narrow performance gap leaves less tolerance for potential uncertainty in the DAS estimation, so improvements with the other frameworks are less consistent than with AT. Nevertheless, the proposed method can still select better checkpoints in most cases, and checkpoints that are at least comparable with the last checkpoints in the few remaining cases. **Q5**: It might make sense to include the Oracle checkpoint in the main evaluation, especially tables 1-3. This would make it easier to see how much of the gap to the oracle is closed. **A5**: We thank you for the constructive comments. We will add the oracle checkpoint to Tables 1-3. Tables A1 and A2 in the PDF attached to the rebuttal show the extended versions of Tables 1 and 2 from the main manuscript. **Q6**: Please clarify what is meant by "ES" in table 2. **A6**: ES [40] denotes the entropy score, i.e., using the entropy of the prediction from the classification branch to select checkpoints. We have clarified this in the experiment section. **Q7**: It would be interesting to see analogues for Figures 2 and 5 for Cityscapes and Sim10k in the appendix.
**A7**: We provide more examples on the Sim10k to Cityscapes and Cityscapes to Foggy Cityscapes adaptations in Figures A1 and A2 of the attached PDF. Similar to Figure 2 in the main paper, the proposed DAS score correlates well with the performance of DAOD checkpoints compared with other baselines. --- Rebuttal Comment 1.1: Comment: Based on the authors' responses to my questions and the additional information provided in the rebuttal, I'm raising my score to a strong accept. --- Reply to Comment 1.1.1: Comment: Thank you for your timely and valuable feedback. We are grateful for your recognition of our work. We will carefully revise the final version based on your valuable feedback.
Rebuttal 1: Rebuttal: Dear reviewers, We would like to thank all the reviewers for their insightful comments and constructive feedback, which have significantly enhanced the quality of our work. We appreciate that the reviewers acknowledge the advantages of our work: "**This paper is well motivated. The proposed method is effective and reasonable**" (Reviewer ood2), "**The paper presents a significant step forward in the underexplored area of unsupervised model selection in domain adaptive object detection**" (Reviewer 4YNt), "**The proposed method is simple yet effective**" (Reviewer jxDH), "**The proposed strategy is based on flat minima and is therefore rational**" (Reviewer 9Lzv), "**The perspective about model selection for DAOD is interesting**" (Reviewer XtC2). We have also diligently addressed all the concerns raised by the reviewers. In the attached PDF file, we provide tables with Oracle results to make it easier to see how much of the gap to the Oracle is closed, along with more comparisons of different unsupervised model evaluation methods for DAOD. Best Regards, Authors of Submission 7161. Pdf: /pdf/4ce7a149c4d42b8b29e3e7402f7567b85109a614.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper delves into an unsupervised evaluation problem in Domain Adaptive Object Detection (DAOD). To solve this problem, this paper proposes a method based on the flat minima principle, named Detection Adaptation Score (DAS), which can measure flat minima without using target labels. Specifically, the proposed method is composed of a flatness index score (FIS) and a prototypical distance ratio (PDR) to assess the flatness and to measure the transferability and discriminability of the models. Experiments validate the effectiveness of the proposed method. Strengths: - This paper is well motivated, and it is the first work to evaluate DAOD without using target labels. - The proposed method is effective and reasonable. Weaknesses: - In line 49, the authors should explain why these methods fail to evaluate the object detection model. - In Eq. 2, since the KL term and IoU term are two metrics evaluating the matching costs of two different tasks, object classification and box regression, why is there no balance coefficient, such as $d = \mathrm{KL} - \lambda \mathrm{IoU}$? Is $\lambda = 1$ the best? - The proposed FIS, which uses parameter perturbation to evaluate flat minima without target labels, is easy to come up with. And the proposed PDR, a prototype-based domain alignment measure, is also used as a training method in existing prototype-based domain alignment for DAOD, such as [1]. - The proposed FIS and PDR are used for unsupervised evaluation. However, from another perspective, these methods can also be used as unsupervised training methods. If these kinds of methods are used for unsupervised training, can they be used as unsupervised evaluation methods at the same time? For example, if the benchmark DAOD framework is a prototype-based domain alignment method, can PDR, a prototype-based domain alignment evaluation method, serve as an unsupervised evaluation?
As I can see in line 261, this paper only uses adversarial training or self-training methods for unsupervised domain adaptive training. More kinds of benchmark DAOD frameworks are necessary to evaluate the scope of the proposed evaluation method. [1] Cross-domain detection via graph-induced prototype alignment. CVPR 2020. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses, especially the fourth question. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer ood2 for the valuable feedback and insightful comments. We appreciate the reviewer's positive comments regarding the motivation and method. We now clarify the reviewer's concerns as follows. **Q1**: Explain why these methods fail to evaluate the object detection model (line 49). **A1**: The methods referenced in line 49 rely on classifier-specific properties like predicted confidence and entropy, which are tailored for classification tasks. In object detection, however, the evaluation involves not only classification but also the precise localization of objects within an image. This crucial distinction renders these methods ineffective for fully assessing an OD model. Following your suggestion, we will explicitly clarify this difference between classification-centric evaluation methods and the multifaceted demands of OD evaluation in our paper. **Q2**: In Eq. 2, since the KL term and IoU term are two metrics evaluating the matching costs of two different tasks, object classification and box regression, why is there no balance coefficient, such as $d = \mathrm{KL} - \lambda \mathrm{IoU}$? Is $\lambda = 1$ the best? **A2**: Thank you for the question regarding the potential absence of a balance coefficient in Eq. (2) to weigh the KL divergence and IoU metrics. Following previous works (e.g., DETR) that combine classification and localization costs, we do not deliberately add a balance coefficient. As suggested by the reviewer, we conducted experiments on AT weather adaptation to assess the impact of tuning the balance coefficient. The results are shown in the table below. Our findings suggest that adjusting the parameter can influence performance, while setting the balance coefficient to 1 is a fairly good choice, indicating a balanced consideration of classification and localization within our framework.
| $\lambda$ | 0 | 0.1 | 1 | 10 | 100 |
|-|-|-|-|-|-|
| mAP | 48.2 | 49.3 | 49.3 | 48.7 | 48.5 |

**Q3**: The proposed FIS easily comes to mind, as it uses parameter perturbation to evaluate flat minima without using target labels. The proposed PDR, which uses prototype-based domain alignment, is also used as a training method in existing prototype-based domain alignment works for DAOD, such as [1]. **A3**: We would like to highlight that our paper is essentially different from previous works. We address the unsupervised model selection (UMS) problem for DAOD. Previous works [1] design effective methods for addressing the domain gap in object detection, while we address the UMS problem for DAOD through a flat minima principle, i.e., models located in a flat minima region of the parameter space usually exhibit excellent generalization ability. Different from traditional flat minima methods, which require labeled data to evaluate how well a model is located in a flat minima region, we show via the generalization bound that the flatness can be deemed model variance, while the minima depend on the domain distribution distance for the DAOD task. Therefore, although some strategies in our paper may seem similar to those in existing works, the task and motivation are distinct. **Q4**: If these kinds of methods are used for unsupervised training, can they be used as unsupervised evaluation methods at the same time? For example, if the benchmark DAOD framework is a prototype-based domain alignment method, can PDR, a prototype-based domain alignment evaluation method, serve as an unsupervised evaluation? **A4**: To effectively evaluate DAOD models, we designed a DAS including FIS and PDR, which is derived from a generalization bound for flat minima. Thus, they evaluate DAOD models in different aspects. Additionally, existing work [1] uses the prototype-based alignment method to minimize the gap between domains.
The prototype estimation in existing work only utilizes the samples in a mini-batch during model training. In contrast, we use the entire dataset to estimate the prototypes, which is more robust and has better generalization ability. To verify this, we conducted experiments on Sim10k to Cityscapes (S2C) with [1]. The experimental results are presented below. They show that the proposed method still works when the DAOD framework also optimizes a prototype-based distance, i.e., our PDR chooses a checkpoint with 44.8% mAP while the last checkpoint reaches 43.1% mAP. Moreover, our DAS with FIS and PDR can further improve the performance, demonstrating the effectiveness of FIS. FIS is proposed by us in this paper, and we have not found any DAOD methods using it as a training method.

| Method | Setting | Last | PDR | DAS | Oracle |
|-|-|-|-|-|-|
| GPA | S2C | 43.1 | 44.8 (+1.7) | 45.8 (+2.7) | 47.0 |

**Q5**: As far as I can see in line 261, this paper only uses adversarial training or self-training methods for unsupervised domain adaptive training? **A5**: We evaluate DAOD methods with adversarial training or self-training because they are classical paradigms and achieve promising results in DAOD scenarios. To demonstrate the generalization of the proposed DAS method to other DAOD frameworks, we evaluate our method on GPA [1] and SIGMA++ [2], which minimize the domain gap by prototype-based domain alignment and graph matching, respectively. The results are shown in the table below; our method is able to select better checkpoints for those two DAOD methods (2.7% and 2.2% improvements, respectively) beyond the adversarial learning and self-training paradigms.

| Method | Setting | Last | Ours | Imp.$\uparrow$ | Oracle |
|-|-|-|-|-|-|
| GPA | S2C | 43.1 | 45.8 | 2.7 | 47.0 |
| SIGMA++ | C2F | 39.5 | 41.7 | 2.2 | 41.7 |

* We attempted but could not reproduce the GPA results on C2F using the released code; this problem has also been reported by other researchers in the repository's issues. Thus we chose the S2C setting.
[1] Cross-domain detection via graph-induced prototype alignment. CVPR 2020. [2] SIGMA++: Improved semantic-complete graph matching for domain adaptive object detection. TPAMI 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttals from the authors, which address my concerns to some extent. I tend to keep my original score, which already leans positive. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and your recognition of our work. We greatly appreciate your time and effort in reviewing our submission. We will carefully incorporate your feedback to further enhance the quality of our final work. --- Rebuttal 2: Comment: Dear Reviewer ood2, We hope this message finds you well. As the discussion period is ending soon, we would like to bring to your attention that we have submitted our response to your questions. We carefully considered your valuable feedback and suggestions, and hope the response addresses your concerns well. We are truly thankful for the comments on our work. If you find our responses satisfactory, we would greatly appreciate it if you could take this into account when considering the final score. Best regards, Authors of Submission 7161
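To make the label-free flatness evaluation discussed in A3 concrete: the FIS idea is to perturb the model parameters and measure the variance of the outputs on unlabeled target data, so that no ground-truth labels are needed. Below is a minimal NumPy sketch of this general idea; all names and the toy linear "model" are our own hypothetical illustration, not the authors' implementation.

```python
import numpy as np

def flatness_score(weights, predict, inputs, n_perturb=8, sigma=0.05, seed=0):
    """Perturb the weights with isotropic Gaussian noise and measure the
    resulting output deviation on unlabeled inputs. A lower score suggests
    the checkpoint sits in a flatter region of the loss landscape."""
    rng = np.random.default_rng(seed)
    base = predict(weights, inputs)  # unperturbed outputs
    deviations = []
    for _ in range(n_perturb):
        noisy = [w + sigma * rng.standard_normal(w.shape) for w in weights]
        deviations.append(np.mean((predict(noisy, inputs) - base) ** 2))
    return float(np.mean(deviations))

# Toy usage: a linear "detector head" scored on random unlabeled features.
X = np.random.default_rng(1).standard_normal((32, 16))
W = [np.zeros((16, 4))]
predict = lambda ws, x: x @ ws[0]
score = flatness_score(W, predict, X)
```

In a real DAOD checkpoint-selection setting, `predict` would be the detector's forward pass and the score would be combined with a domain-distance term, as in the authors' DAS.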
Rough Transformers: Lightweight and Continuous Time Series Modelling through Signature Patching
Accept (poster)
Summary: This paper introduces the Rough Transformer, a variant of the original Transformer that allows the processing of discrete time series as continuous-time signals through the use of multi-view signature attention. Empirical comparisons show that Rough Transformers outperform vanilla Transformers and continuous-time models on a variety of time-series tasks and are robust to the sampling rate of the signal. Strengths: - Overall the paper is well written and is easy to follow - The idea of using the signature transform within an attention mechanism is interesting Weaknesses: - Empirical evaluations are limited, casting doubt on the true potential of the proposed architecture - The tasks considered are rather simple, and it is not clear whether the proposed architecture will give favorable tradeoffs between accuracy and efficiency in more challenging tasks (see below) - Missing evaluations on time series forecasting tasks (only classification and regression tasks are considered) - Missing comparisons with recent RNN models (such as https://arxiv.org/abs/2110.04744, https://arxiv.org/abs/2212.00228), Transformer models (e.g., those studied in https://arxiv.org/abs/2011.04006), State Space models (https://arxiv.org/abs/2111.00396 and the more recent variants such as Mamba: https://arxiv.org/abs/2312.00752) and other sequence models (https://arxiv.org/abs/2305.01638, https://arxiv.org/abs/2209.10655) - Missing ablation studies on the components of the proposed architecture, particularly the role of the global and local components in the multi-view signature, truncation level n, etc.
- Missing related work; e.g., the papers mentioned above, https://link.springer.com/article/10.1007/s40304-017-0103-z and https://arxiv.org/abs/1710.10121 for continuous-time DL models, https://arxiv.org/abs/2006.12070, https://arxiv.org/abs/2102.04877 for continuous-time RNN models, https://dl.acm.org/doi/abs/10.5555/3546258.3546305 for using path signatures to understand continuous-time RNNs Technical Quality: 2 Clarity: 2 Questions for Authors: - I was wondering how effective the proposed model is in autoregressive generative tasks and time series forecasting (see https://arxiv.org/abs/2311.04147)? These are natural tasks in the domain of sequence modeling - While the proposed method can improve the modified transformer model in sequence modeling, why would the method be attractive for practitioners when they could just use more sophisticated models like SSMs for sequence modeling? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We are slightly surprised by some of their comments, which we believe are already addressed in the manuscript. We hope that this response will help to clarify any misunderstandings. In line with the reviewer's comments, we have carried out an extensive set of new experiments which we believe address all of the reviewer's concerns. We hope the reviewer will take this into account and consider raising the score accordingly. **[W1, W4] Empirical evaluations are limited / Missing comparisons with recent models.** In line with the reviewer's feedback, we have added comparisons to LRU, S5, and MAMBA on 5 new long-range temporal modeling datasets, where we find RFormer to perform very competitively. These results are included in Table 5 of the PDF attached to this rebuttal. The reason we focus on these tasks is that our method is tailored to time-series processing, and other image- or text-based benchmark datasets (such as LRA/sMNIST, etc.), which typically test model memory, are not as suitable in this setting. We will add a comment along these lines in the paper as well. In terms of performance on random dropping, we have added experiments on 15 new datasets, comparing against SoTA models for continuous time-series processing. We report the performance and average rank of our model in Table 4 (PDF), where our model consistently outperforms the baselines. We find that our model performs very competitively, as well as being orders of magnitude faster than the baselines. Regarding the limited empirical evaluation, we would like to stress that we have now tested our model on a wide variety of datasets. We have also run additional experiments, whose results can be found in Tables 1, 2, and 3 of the PDF comparing RFormer with CRU [1]. Additionally, Table 4 (PDF) reports the performance of 8 extra baselines on 15 new datasets with different random dropping rates.
We consider this extensive evaluation very much in line with the experiments carried out by other sequence modeling architectures (e.g. NCDE, NRDE, coRNN, noisy RNN). In terms of the tasks considered, we also find that they are consistent with the experiments in these papers, which focused on classification and regression tasks. Nonetheless, we have extended our results into the forecasting scenario through a step-ahead forecasting task, see response to [W2,W3] and the General Response. **[W5] Missing ablation studies on the components of the proposed architecture, particularly the role of the global and local components in the multi-view signature, truncation level \(n\), etc.** We would like to clarify that these results are already in the paper. In particular, they can be found in Appendix E: Ablation Studies. We have also carried out a set of experiments on the sensitivity of the model to hyperparameters, which can be found in Figure 2 of the associated PDF document. **[W6] Missing related work.** We would like to clarify that we have a dedicated related work section in Appendix B, where we have already included citations to many of the works that the reviewer mentions. In particular, we include a discussion of Unitary RNNs, Orthogonal RNNs, expRNNs, chronoLSTM, antisymmetric RNNs, Lipschitz RNNs, coRNNs, unicoRNNs, LEMs, waveRNN, Linear Recurrent Units, and Structured State Space Models (S4 and Mamba) on lines L678 - L690. Furthermore, we discuss efficient attention variants such as Sparse Transformer, Longformer, Linear Transformers, BigBird, Performer, and Diffuser in lines L691 - L702. We will add the remaining citations the reviewer suggests. 
**[W2, W3] The tasks considered are rather simple / Missing evaluations on time series forecasting tasks.** We would like to echo our previous statement on how similar models (NCDE, NRDE, coRNN, noisy RNN) have been tested exclusively on classification and regression tasks and how these models have become widely accepted by the community. In terms of forecasting, we would like to highlight that time series forecasting pipelines using Transformers typically train by masking the input representation iteratively to predict the next time step. In our setting, since we are compressing the input representation, these pipelines cannot be straightforwardly implemented for our model. We intend to work towards a model that uses these principles specifically for time series forecasting, but note this would require a completely new training pipeline as well as heavy tuning. That said, we reiterate that the focus of this work is to showcase the benefits of using Rough Path Signatures within the attention mechanism for efficient and continuous time-series processing, as demonstrated through the widely accepted experimental settings of time-series classification and regression. We have, however, included a step-ahead forecasting task on the Apple Limit Order Book volatility to extend our results into the forecasting scenario. The results can be found in the General Response. **[Q2] While the proposed method can improve the modified transformer model in sequence modeling, why would the method be attractive for practitioners when they could use more sophisticated models like SSMs for sequence modeling?** We thank the reviewer for raising this point. We believe that showing the significant benefits of RFormer when compared to the traditional Transformer is beneficial to the community. There seems to be a wider adoption of hybrid models (SSM + Transformer) by the community given their superior performance [2].
In tasks that require temporal processing, the use of RFormer could potentially give these models improved inductive biases, allow for continuous-time processing, and increase efficiency and inference speed. [1] Schirmer, Mona, et al. "Modeling irregular time series with continuous recurrent units." International conference on machine learning. 2022. [2] Lieber, Opher, et al. "Jamba: A hybrid transformer-mamba language model." arXiv preprint (2024).
Summary: The paper proposes Rough Transformers, an attention-based model for long continuous-time signals. The model utilizes ideas from rough path theory to extract path signatures from the continuous-time signal (obtained by interpolation of the original signal). Two types of signatures are extracted: global and local. The global signature extracts long-term information from the signal whereas the local signature extracts local information. Self-attention is then used on this "multi-view" signature. Experiments on classification and regression tasks show improved performance over existing model architectures. Strengths: - The paper views the problem of modeling long continuous-time signals through the lens of rough path theory. While this in itself is not novel, the combination of rough path signatures with attention is a novel combination. - Experiments on classification and regression tasks show that RFormer improves over existing models in terms of accuracy and significantly more compute efficient. (I have some concerns and questions about the empirical analysis, please see Weaknesses) Weaknesses: - The technical contribution is limited. In such scenarios, the empirical analysis needs to be sufficiently strong. - The empirical analysis has been conducted on a few toyish datasets. While the results are definitely promising, more experimental support is needed to validate the model. - Although the model is motivated from a continuous-time and irregularly-sampled data perspective, the actual investigation of these settings is limited. As per my understanding, all experiments under 4.1 have been conducted on regularly sampled time series (please correct me, if I am wrong). I find it surprising that simple RNN-based methods do not perform well in these settings. If this is indeed the case, some simple CNN-based method should be studied. Recent models based on state space layers (e.g., S4, Mamba) can also be explored as baselines. 
- OOM for important baselines is not really helpful for drawing conclusions. To highlight the efficiency of RFormer, please conduct experiments where you increase the context length or other parameters to show where baselines run OOM and how they perform before that. - More experiments are needed, particularly for the forecasting task, to understand how well the model captures the dynamics. - I took a brief look at the code and it looks like hyperparameter tuning was conducted for RFormer. Was such tuning also performed for the baselines? If not (which is a valid response), how did you select the baseline parameters? How sensitive is the model to different hyperparameters? - The discussion on related work needs to be moved to the main text and improved. Please contrast RFormer with the related works, particularly the ones that are closely related such as NRDE and ContiFormer. Discussion on some closely related works [1, 2, 3] is missing. Ideally there should also be a comparison with at least one of these methods (e.g., CRU). [1] Schirmer, Mona, et al. "Modeling irregular time series with continuous recurrent units." International conference on machine learning. PMLR, 2022. [2] Ansari, Abdul Fatir, et al. "Neural continuous-discrete state space models for irregularly-sampled time series." International Conference on Machine Learning. PMLR, 2023. [3] Oh, YongKyung, Dongyoung Lim, and Sungil Kim. "Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data." arXiv preprint arXiv:2402.14989 (2024). I am happy to update my score if my concerns are adequately addressed. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors clarify what they mean by "input sequences must be sampled at the same times, (ii) the sequence length must be fixed"? These do not seem to be limitations of transformers, especially (ii). See weaknesses for other questions.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There is a brief discussion on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the engaging review and valuable feedback. We believe we have incorporated the reviewer's suggestions into our manuscript and hope our changes warrant an increase in the score. **[W1] Need for stronger empirical analysis.** We have added 19 new datasets, 8 extra baselines, and new synthetic experiments (Tables 4-5/1-3 in the PDF). We hope these additions address the reviewer's concerns and provide a stronger empirical analysis. **[W2.1] Investigation of continuous-time and irregularly-sampled data limited.** Most tasks are indeed regularly sampled, except for the Limit Order Book data. However, we performed random drop experiments in Section 4.3 and Appendix F, expanding Section 4.1 to irregularly sampled settings. We conducted additional experiments to strengthen the empirical evidence for our model. For regularly sampled data, our model was tested against SOTA models (S5, Mamba, LRU) across 5 datasets, showing very competitive performance (Table 5, PDF). For irregularly sampled settings, we compared our model against 4 additional baselines, including [1, 3] as suggested by the reviewer, across 15 datasets from the UCR time-series archive. Table 4 (PDF) shows our model's performance and average rank (due to space constraints), consistently outperforming the baselines. Further, Table 2 (PDF) compares CRU and RFormer in an irregularly sampled synthetic data setting (with shorter sinusoids and a smaller number of classes than the experiments in Section 4.1), supported by CRU's hyperparameter validation in Table 1 and a training time analysis in Table 3. These experiments demonstrate that recurrent models perform well with short sequences. Note that despite its superior performance, our model is significantly faster than other continuous-time models, as shown in Appendix F.2 and Table 3 (PDF). **[W2.2] OOM results and efficiency plots.** We thank the reviewer for raising this point.
Regarding OOM errors for ContiFormer, Appendix E of the ContiFormer paper demonstrates that its memory requirements scale exponentially with sequence length, making it impossible to run on the long time-series datasets considered in our paper. We felt this was worth noting because we think ContiFormer is most similar in spirit to RFormer in that it augments the attention mechanism in a continuous-time fashion. We hope you find that with the additional baselines, this point is made clearer. We want to highlight that efficiency experiments can already be found in the paper in Figure 4 (Section 4.2, Training efficiency), showing seconds per epoch for all models. When a line stops, it indicates an OOM error. Appendix F.2 contains more information on computational times, showing RFormer can be used on context lengths of 250k time steps. We apologize if this was not clear and will modify the text accordingly. **[W2.3] More experiments, particularly for forecasting.** We have significantly increased the number of experiments during this rebuttal, adding 19 new datasets and 8 new baselines (see responses to [W1, W2.1]) and new synthetic experiments. Regarding forecasting, we should remark that forecasting pipelines using Transformers typically involve iteratively masking the input representation to predict the next time step. In our setting, since we compress the input representation, these pipelines cannot be directly applied and would require extensive modifications, which is not feasible within the rebuttal timeframe. However, we include a step-ahead forecasting task on the Apple Limit Order Book volatility (see General Response). Further, we point the reviewer to the HR task, which contains classical temporal dynamics (e.g., cyclicality). We would like to highlight that the main points of our paper on efficiency, inductive bias, and continuous processing hold across our experiments.
Furthermore, we would like to point out that other models in the area (such as [1,3]) have also been applied in the same tasks and have not been tested for forecasting, which requires extensive tuning. **[W2.4] Hyperparameter tuning for baselines.** In short, yes, we did. We thoroughly validated the step and depth parameters used to compute the log-signatures for the NRDE model (Appendix D). We did this as these are the only hyperparameters for which we performed tuning in our model. For the ODE-RNN and NCDE/NRDE models, we validated the architectural choices. Due to occasional sub-optimal performance in our replications, we included the original results for shared datasets (see Tables 1,3). For ContiFormer, we used hyperparameters from the official repository, testing with both 1 and 4 heads (our model uses 1 head), as detailed in Appendix F.3. For the new datasets, we used optimal hyperparameters from manuscripts/repositories, except for CRU, which we validated ourselves. Regarding the sensitivity of the model to hyperparameters, we direct the reviewer to Appendix E and have included new sensitivity experiments in Figure 2 of the PDF. **[W3] Improving related work section.** For the final version, we will move the discussion on related work to the main text as suggested and we will contrast our work with other models, highlighting (i) the recurrent vs. attention-based structure, (ii) computational efficiency, and (iii) the treatment of the signature with local vs. global windows in the case of NRDEs. **[Q1] Clarifications.** What we mean here is that the input data should be regularly sampled (to retain decent performance), and each input sequence must be sampled the same number of times. In many practical scenarios (e.g., medical data), time-series may vary wildly in their length. For Transformers, which have a fixed context window, this poses a non-trivial challenge of how to standardize the input data such that it may be encoded by the model. 
As suggested by reviewer wNkv, this can be improved through padding. We will add some discussion along these lines and tone down some of our wording. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough responses. Many of my concerns have been addressed, so I am raising my score to 6. Looking forward to the final version of the paper. :) --- Reply to Comment 1.1.1: Title: Reply to Reviewer Comment: Thank you again for your thorough and constructive review, recommending acceptance, and raising the score!
Summary: The paper proposes Rough Transformer (RFormer), an extension of the Transformer architecture towards operating on continuous-time representations of time-series data. RFormer employs a novel technique called multi-view signature attention, which performs attention on path signatures pre-computed from input data offline, thereby capturing both local and global dependencies across observations. Experiments on various real-world time-series datasets show that RFormer enjoys superior predictive performance as well as computational efficiency compared to previous methods, while being robust to changes in sequence length and irregular sampling. Strengths: - [S1] **Good novelty.** To the best of my knowledge, incorporating rough path theory into time-series representation learning is a novel approach, and would be of great interest to the machine learning community. - [S2] **Great empirical performance.** Experiments on a wide variety of real-world datasets show large improvements in both accuracy and efficiency, demonstrating strong utility of the proposed multi-view signatures in time-series modeling. Weaknesses: - [W1] **Questionable motivation of synthetic frequency classification experiments.** The first experimental section tests RFormer on two synthetic datasets with which the task is to classify input time-series based on their ground-truth frequencies. While L233-234 mentions that the second setup in particular is designed to test the long-range reasoning capabilities of RFormer, it is unclear whether this is indeed the case. For the second synthetic dataset, in particular, how can identifying the frequency be a proxy for long-range reasoning when the frequency $\omega_0$ is used for $t < t_0$ only?
The results in Figure 2 showing that methods that are "tailor-made for long-range time series modeling" (L254), such as Neural-CDE and Neural-RDE, underperform significantly also indicate that the designed task is not really representative of long-range reasoning. \ \ More interesting questions to ask could be: What makes RFormer sample-efficient vs. the vanilla Transformer particularly on the Sinusoidal dataset and not so much on the Long Sinusoidal dataset? What makes RFormer more robust to changes in sampling frequency compared to Neural-CDE and Neural-RDE? Table 1 of [A] shows Neural-CDE is also quite robust to dropped data, but is this characteristic not emergent for the Sinusoidal and Long Sinusoidal datasets? - [W2] **Missing analysis on interpolation methods.** By default, RFormer uses piecewise-linear paths for computing path signatures, but as mentioned in L140, it seems any continuous-time interpolation can be used. As such, it would be interesting to discuss (1) whether any other interpolation techniques can be deployed as efficiently as piecewise-linear paths and (2) whether they lead to any boosts in predictive accuracy, but these discussions are missing in the current draft (i.e., is the currently used piecewise-linear interpolation "pareto-optimal" under performance-efficiency trade-offs?). [A] Kidger et al., Neural Controlled Differential Equations for Irregular Time Series. NeurIPS 2020. Technical Quality: 2 Clarity: 3 Questions for Authors: - [Q1] **Large overlaps in representation space.** Based on the multi-view signature formulation in Figure 1 and Equation 8, it seems the earlier observations would be covered by a large number of input tokens. As shown in the right plots in Figure 3, this would result in large "representational overlap", similar to the oversmoothing phenomenon in graph representation learning [B]. Considering this, could restricting the global view to a few previous points rather than all previous points be a viable option?
or does the theoretical and empirical robustness of RFormer to variable lengths and irregular sampling require that all previous points be covered in the global view? - [Q2] **Discussion on input length of MLP and Transformer.** L108 states that the input length $L$ of the MLP and Transformer is fixed by assumption, but is this true? Sequences with different lengths can be processed in a single batch via padding. [B] Rusch et al., A Survey on Oversmoothing in Graph Neural Networks. arXiv 2023. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations in Appendix B. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback, as it has been very relevant in updating our manuscript (especially the comment on oversmoothing). We hope that the reviewer will be satisfied with the additional experiments carried out and will be inclined to raise the score. **[W1.1] Motivation of synthetic frequency classification.** The idea of this experiment was to have the relevant information for the classification experiment at the beginning of the time series, as it would have to be propagated forward in order to correctly perform the classification. This is the type of task where we expect traditional recurrent models to fail due to vanishing gradients if the sequence is relatively long (2000 points in this case). This is likely the reason for the poor performance of Neural CDEs, which are a continuous-time analog of RNNs. However, we highlight that this was useful to test if RFormer was able to retain the long-range capabilities of the Transformer despite operating on a very compressed representation. This was further evaluated in a number of very long datasets (most notably EigenWorms), where RFormer was not only the fastest but also the most performant. If the reviewer would suggest an alternative that might be more suitable in their view, we would be more than happy to run additional experiments for this task. **[W1.2] Clarification on RFormer’s sample efficiency on Sinusoidal vs. Long Sinusoidal datasets.** Upon inspection, we found a bug in our code for this particular experiment, where we were not adding the time information to the dataset before passing it to RFormer (this was only the case in this experiment, due to an incorrect local version of the code). After re-running the experiment, we see similar sample efficiency gains from RFormer in this task for the same hyperparameters. 
Furthermore, we found that some additional tuning of the signature level in this new setting led to even greater sample efficiency gains (thank you for this!). The results can be found in Figure 2 of the associated PDF. We have also included an experiment investigating the effect of the signature hyperparameters in Figure 2 of the PDF. **[W1.3] RFormer's robustness to sampling frequency changes vs. Neural-CDE/RDE.** The theoretical reason why RFormer can process irregularly sampled sequences is shown in Proposition 3.1 (L175). In terms of the performance of Neural CDE and Neural RDE, we note that they experience a similar drop for this task as in the other tasks considered. To better understand how our model compares to NRDEs and NCDEs in settings in which these baselines perform well, we ran an additional random-dropping experiment on 15 datasets, shown in Table 4 of the PDF. We find that RFormer is not only better performing, but also several orders of magnitude faster than the rest of the SoTA benchmarks considered. **[W2] Missing analysis on interpolation methods.** We agree that investigating the impact of different interpolation methods in the multi-view signature computation is interesting. Our choice to use linear interpolation of the data was due to the efficient signature computation of paths of this form. This efficiency is due to 1) linear interpolation being a *local* operation and 2) as noted in the Appendix (L644), the signature of piecewise-linear functions can be computed explicitly in terms of the data points. Continuous interpolations such as splines are non-local, meaning the computational burden is high for long time series. Deriving a convenient, computationally efficient method for computing the signature is also non-trivial. As such, we could consider this approach to be "pareto-optimal", as the reviewer suggests. Since we focused our experiments on long time series, we decided against investigating this in our paper.
However, based on your review, we will add a small section detailing this choice. **[Q1] Large overlaps in representation space.** We thank the reviewer for this excellent question. We have investigated this matter further, and this has led to some very interesting new insights that we feel should be brought to the attention of all the reviewers. For this reason, we have included our response to this question in our General Response. Additionally, we note that the multi-view signature transform is only an instance of a broader idea of presenting the model with information at different scales, which we found to improve performance (see Ablation in Appendix E.1). There are many other ways of providing this multi-scale information (such as restricting the global view to a few previous points rather than all previous points, as suggested by the reviewer), but this would be dataset- and task-dependent. In the limit, we hope that these parameters can be tuned with the downstream task loss, but we leave this for future work. We have included a discussion of our hypothesis of why RFormer achieves substantial efficiency gains in our manuscript, as well as a related citation [1]. We thank the reviewer again for the very helpful comment on oversmoothing. **[Q2] Discussion on input length of MLP and Transformer.** Thank you for bringing this point to our attention. We agree with the reviewer that padding can be used as an ad-hoc solution to guarantee the same sequence length in the batch, even though it has been found that other models (recurrent and convolutional) sometimes struggle with this solution, see [2]. We wanted to highlight this as a weakness of the Transformer model when comparing it to RFormer, which will always provide the model with a fixed-size representation due to the flexibility of the signature. However, we will tone down our wording in accordance with the reviewer's suggestion. [1] Rusch, T. Konstantin, et al. "Graph-coupled oscillator networks." 
International Conference on Machine Learning. PMLR, 2022. [2] Dwarampudi, Mahidhar et al. "Effects of padding on LSTMs and CNNs." arXiv preprint arXiv:1903.07288 (2019). --- Rebuttal Comment 1.1: Comment: Thank you authors for your time and commitment in preparing the rebuttal. All of my concerns have been addressed, and thus I increase my rating to 7. --- Reply to Comment 1.1.1: Title: Thank you for raising the overall and confidence scores Comment: Thank you for your response and your very helpful comments, which have greatly improved our paper. We are glad that we could address your concerns and appreciate your decision to raise both the overall and confidence scores. Thank you again!
null
null
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their valuable feedback. We have carefully reviewed their comments and incorporated their suggestions into a new version of the manuscript. Additionally, we are encouraged by the positive feedback provided by the reviewers on the novelty, efficiency, and performance of the proposed method, as evidenced by comments such as: - Good novelty. [...] a novel approach [...] of great interest to the machine learning community. - Great empirical performance. Experiments on a wide variety of real-world datasets show large improvements in both accuracy and efficiency, demonstrating strong utility of proposed multi-view signatures in time-series modeling. - [...] the combination of rough path signatures with attention is a novel combination. - Experiments on classification and regression tasks show that RFormer improves over existing models in terms of accuracy and is significantly more compute efficient. - Overall the paper is well written and is easy to follow. - The idea of using signature transform within an attention mechanism is interesting. We believe RFormer constitutes a prime example of the general effort in combining effective modelling techniques from different research fields for addressing practical machine learning tasks, which has the potential to inspire further similar approaches. In this general response, we aim to address some of the common questions among reviewers, as well as new insights gained during the rebuttal process as a result of excellent observations from the reviewers. - **New Datasets and Baselines** A common sentiment noted among reviewers was the need for additional datasets and baselines. To address this, we have evaluated RFormer on 19 new datasets from the UCR time-series archive, consisting of 15 shorter time-series datasets and 4 long time-series datasets. 
We benchmarked against a number of new methods including SoTA methods for irregular temporal processing such as Neural {SDE, LSDE, LNSDE, GSDE} [1] and CRU [2], as well as SoTA state-space models (S5, LRU, and Mamba). For the irregular sampling experiments, we randomly dropped datapoints at rates of 30%, 50%, and 70%. The results of these experiments can be found in Tables 4 and 5 in the accompanying PDF. RFormer consistently outperforms the Neural ODE-based methods on the 15 short time-series datasets and the state-space models on the long time-series datasets. We also include an additional comparison between CRU and RFormer in an irregularly sampled synthetic data setting (supported by CRU's hyperparameter validation in Table 1 and an analysis of training times in Table 3, both in the PDF). We present the results in Table 2 of the PDF. Note that shorter sinusoids with a smaller number of classes than in the experiments of Section 4.1 are used here, which demonstrates that recurrent models perform well with short sequences in this synthetic example. In addition to demonstrating superior performance, our model is significantly faster than other continuous-time models, as shown in Appendix F.2 and Table 3 of the PDF. - **Large Overlaps in Representation Space** We thank Reviewer wNkv for raising the excellent connection between the representational overlap of the global signature transform and the oversmoothing phenomenon present in graph representation learning. We investigated this question deeply and obtained new results which we feel give new insights into the performance of our model and should be brought to the attention of all reviewers. In our experiments, we found that Transformers work better with coarser representations of input data, and we believe that this is the reason behind some of the performance gains that we observe in RFormer. By coarsening the representation offered to the Transformer backbone, the model is able to learn better and faster. 
However, as Reviewer wNkv points out, this only occurs if the intervals at which windows are taken are sufficiently large. Otherwise, the signatures may exhibit some form of "representational collapse". We note that in our experiments on very long time series, we took the signature over windows that were widely spaced apart, which prevented global representations from being too similar and seemed to yield better performance. To offer a more quantitative evaluation of oversmoothing in this setting, we measured the Dirichlet energy by interpreting the temporal signal as a directed path graph. We compared different numbers of windows (from 2 to ~18k) of the global signature in the EigenWorms dataset. The results are shown in Figure 1 of the attached PDF. Interestingly, we found that the "elbow" of the Dirichlet energy corresponded to 30 windows in this dataset, which we found empirically to be the most performant setting. This hints at the idea of the Dirichlet energy being used for signature hyperparameter tuning as well. We thank Reviewer wNkv again for this very interesting suggestion. We will add a dedicated section discussing these conclusions in the paper, as we believe they could shed light on the internal mechanisms of Transformers as well as motivate our method more strongly. We have also added some citations to works associated with oversmoothing. - **Time-Series Forecasting** To assess the model's ability to understand the dynamics of the time series, we have carried out an additional experiment forecasting the next-step intraday volatility of an Apple Limit Order Book. The results are shown below: | | RFormer | Transformer | NRDE | |----------|---------|-------------|-------| | **RMSE** | 32.33 | 33.45 | 37.22 | [1] Oh, Y., et al. Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data. Twelfth International Conference on Learning Representations. [2] Schirmer, Mona, et al. 
"Modeling irregular time series with continuous recurrent units." International conference on machine learning. PMLR, 2022. Pdf: /pdf/5064204069679d0f4857e476bfaea6eaf64cc8fc.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Gene-Gene Relationship Modeling Based on Genetic Evidence for Single-Cell RNA-Seq Data Imputation
Accept (poster)
Summary: This paper introduces a new scRNA-seq imputation method, scCR, utilizing both associating and dissociating gene-gene relationships to improve the accuracy of scRNA-seq imputation, especially on noisy datasets. The method constructs comprehensive k-NN graphs for both cell-cell and gene-gene relationships, and standardizes the value distribution of each gene to enhance gene-gene relationship modeling. Comprehensive analysis on multiple datasets demonstrates the advantage of scCR compared with state-of-the-art methods in two tasks, gene expression recovery and cell clustering, especially on rare cell types. Strengths: This paper tackles an important problem, scRNA-seq data imputation, with extra information from negatively correlated gene relationships. The idea itself is straightforward yet effective. The paper includes extensive and solid experiments on multiple datasets. The experiments show solid performance improvements compared to multiple baseline methods. The paper explores variance across dataset selection and dropout rates. This paper also includes a scalability study to demonstrate its advantage in runtime. Weaknesses: Though the paper explores variance across dataset selection and dropout rates, the analysis mainly focuses on datasets. It would be helpful to include a detailed sensitivity analysis of key hyperparameters, such as $k$ for $k$-NN, $\alpha$, $\beta$, and $\gamma$. The paper includes a runtime analysis, but not a memory usage analysis. The paper could be more comprehensive by including a memory usage analysis to show its advantage in scalability. Technical Quality: 3 Clarity: 3 Questions for Authors: The overall paper is quite complete and comprehensive. Only a few questions: 1. On the impact of hyperparameter selection, an additional set of experiments would be perfect; but given the time limit, a theoretical analysis would also be very helpful to understand how sensitive the model is and to make the method easier to use. 2. 
Reporting the memory usage of this method at different scales would give potential users a better sense of its applicability and resource requirements. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It would be helpful to include a detailed sensitivity analysis of key hyperparameters, such as k for k-NN, \alpha, \beta, and \gamma. The impact of hyperparameter selection, additional set of experiments would be perfect. But due to the time limit, theoretical analysis would also be very helpful to understand how sensitive the model is and make the method easier to use. We conduct additional experiments to address the reviewer’s concerns and **provide a comprehensive analysis of the impact of different hyperparameters, including $\alpha$, $\beta$, $\gamma$ and $k$**, on the performance of scCR. We report ARI in cell clustering on three datasets by varying $\alpha$, $\beta$, $\gamma$, and $k$ in the ranges of \{$ 0.01, 0.05, 0.1, 0.5, 0.9$\}, \{$ 0.1, 0.5, 0.9, 0.95, 0.99, 0.999$\}, \{$ 0.001, 0.01, 0.05, 0.1, 0.5, 0.9$\}, and \{$ 1, 2, 3, 5, 10, 15$\}, respectively. When varying a target parameter, we fix the other hyperparameters to their default settings. Table 4, Table 5, Table 6, and Table 7 in the PDF of the global response demonstrate how the choice of hyperparameters impacts the performance of scCR. As shown in the tables, $\alpha=0.05$, $\beta=0.99$, $\gamma=0.01$, $k=2$, which are the values used in this work, generally show good performance. In terms of sensitivity, given that the runner-up’s ARI is $0.660\pm0.00$, $0.848\pm0.00$, and $0.677\pm0.00$ on Baron Mouse, Zeisel, and Baron Human, respectively, **scCR shows its robustness against hyperparameter variations.** Specifically, $\alpha \in $\{$ 0.05, 0.1, 0.5 $\}, $\beta \in$ \{$ 0.99, 0.999 $\}, and $\gamma \in $ \{$ 0.001, 0.01 $\} result in state-of-the-art performance across the datasets. In the case of $k$, except for Baron Human, scCR shows outstanding performance regardless of the value of $k$. We can also observe that varying a hyperparameter can sometimes yield performance superior to that of the default setting. 
Nevertheless, considering the unsupervised nature of the single-cell analysis, we set the hyperparameters of scCR to default settings by fixing the hyperparameters that generally work well. We will add this discussion and the experimental results regarding the hyperparameter sensitivity of scCR to our final version. > The paper includes a runtime analysis, but not a memory usage analysis. The paper can be more comprehensive to include memory usage analysis to show its advantage on scalability. The memory usage of this method on different scales gives potential users a better sense of its applicability and resource requirements. **We investigate the memory complexity of all methods used in this paper** and conduct additional experiments to analyze the memory usage of our scCR. Table 8 in the PDF of the general response compares the inputs and memory complexity of scCR with other state-of-the-art methods. To mitigate the heavy memory usage during $k$-NN graph construction, we utilize a batch-wise $k$-NN graph construction strategy. When constructing $k$-NN graphs among genes, we divide genes into batches with batch size $B$, and compute the $k$ nearest neighbors for each batch. Similarly, we apply the same batch-wise strategy when constructing $k$-NN graphs among cells. This strategy reduces the memory requirement because it avoids the need to store distances between all points in the entire dataset at once. Specifically, in the memory complexity of scCR, batch-wise $k$-NN graph construction changes $O(G^2)$ and $O(C^2)$ to $O(BG)$ and $O(BC)$, respectively. Thus, batch-wise $k$-NN graph construction can handle large datasets that would otherwise be infeasible to process due to memory constraints. Additionally, unlike other deep-learning-based models, scCR does not require any trainable parameters. **We further measure the memory usage of scCR across various datasets**, as shown in Table 9 in the PDF of the general response. 
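The batch-wise construction described above can be sketched as follows (a minimal numpy illustration with cosine similarity; the function name, defaults, and details are our own simplifications, not the authors' implementation):

```python
import numpy as np

def batched_knn(X, k=2, batch_size=1024):
    """Batch-wise k-NN by cosine similarity.

    Only one (batch_size x G) similarity block is held in memory at a time,
    giving O(B*G) memory instead of the O(G^2) of a full similarity matrix.
    X: (G, C) matrix whose rows we want neighbors for.
    Returns a (G, k) array of neighbor indices (self excluded).
    """
    # Normalize rows so dot products equal cosine similarities.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.maximum(norms, 1e-12)
    G = Xn.shape[0]
    nbrs = np.empty((G, k), dtype=np.int64)
    for start in range(0, G, batch_size):
        stop = min(start + batch_size, G)
        sims = Xn[start:stop] @ Xn.T            # one batch of similarities
        for i, row in enumerate(sims):
            row[start + i] = -np.inf            # exclude self-similarity
            nbrs[start + i] = np.argpartition(-row, k)[:k]
    return nbrs
```

The same routine applies unchanged to cell-cell graphs by passing the transposed matrix; the batch size trades memory for the number of matrix products.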
The results in the table indicate that the advantages of scCR extend beyond its superior performance and time efficiency, showcasing its scalability as well. We will include this detailed memory usage analysis in the final version of our manuscript to provide a more comprehensive evaluation of scCR. --- Rebuttal Comment 1.1: Title: Eager for Your Feedback on Our Rebuttal Comment: Dear Reviewer wZRo, We sincerely thank you for dedicating your time to review our work and for your constructive feedback. We particularly appreciate the positive feedback on the significance of our approach to scRNA-seq data imputation, especially your recognition of the completeness of our work and the thoroughness of our experiments. **With only about a day remaining in the discussion period**, we are eager to engage further and understand whether our responses have satisfactorily addressed your concerns. In our rebuttal, **we provided point-by-point responses to all your questions and concerns regarding the sensitivity analysis of key hyperparameters and memory usage analysis**. In summary: * We provided a **comprehensive analysis of the impact of different hyperparameters**, including $\alpha$, $\beta$, $\gamma$, and $k$. * We investigated the **memory complexity of all methods used in this paper**. * We measured the **memory usage of scCR** across various datasets. We would greatly appreciate it if you could kindly review our responses. We welcome any further questions and are happy to provide additional clarifications if needed. Thank you for your consideration. Sincerely,\ The Authors --- Rebuttal 2: Title: [Gentle Reminder] Eager for Your Feedback on Our Rebuttal Comment: Dear Reviewer **wZRo**, In our rebuttal, we provided point-by-point responses to all the concerns and questions you raised. Given that we have only **six hours remaining** before the **deadline**, we are very eager to confirm whether our responses have adequately addressed your concerns. 
We kindly ask you to take a moment to review our rebuttal and provide any further feedback. If there are any remaining questions or concerns, please be assured that we are ready to respond promptly. Sincerely,\ The Authors
Summary: This paper introduces a novel imputation method for scRNA-seq data that accounts for both associating and dissociating gene relationships by using a k-NN graph and negating the cell-gene matrix. The method standardizes gene value distributions and shows significant performance improvements in cell clustering and gene expression recovery across six datasets, outperforming existing methods. Strengths: 1. This paper introduces a new imputation method named Single-Cell Complete Relationship (scCR) that addresses the limitations of current propagation-based approaches for scRNA-seq data by modeling both associating and dissociating gene-gene relationships. 2. The extensive experiments conducted by the authors show that scCR significantly outperforms existing methods in gene expression recovery and cell clustering, highlighting its effectiveness in capturing complete gene-gene relationships and improving the quality of scRNA-seq data analysis. 3. The paper is well-structured and clearly written. Weaknesses: 1. The proposed method appears too simple and lacks significant innovation. 2. The paper does not clearly articulate the motivation behind the proposed method. Specifically, it does not adequately explain what dissociating relationships among genes are or why identifying these relationships is effective for addressing the research problem. 3. The experiments lack biological validation, and relevant case studies are needed to support the findings. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. There are now many methods using large models for cell clustering, such as scGPT and Geneformer. How do these methods compare in terms of effectiveness? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > How do large models for cell clustering, such as SCGPT and GeneFormer, compare in terms of effectiveness? Our method and large-scale models have **clearly different objectives; while our method tackles denoising scRNA-seq data, large-scale models (e.g., scGPT and Geneformer) target learning gene and cell embeddings using neural networks pre-trained on large-scale datasets for transfer learning.** Therefore, our method and large-scale models are **not in a relationship where their effectiveness can be compared, but rather in a relationship where they can collaborate to create synergy.** Our scCR can provide denoised scRNA-seq data to large-scale models. To confirm that scCR can improve the performance of large-scale models, we conduct an additional experiment using scGPT [1]. We measure the cell type annotation performance of an scGPT model on the Multiple Sclerosis dataset [2] when we apply our scCR to the input data of scGPT compared to when we do not apply it (i.e., when using raw data). Table 3 in the PDF of the global response shows the cell type annotation performance of the scGPT model fine-tuned on the dataset, averaged across three independent runs. As shown in the table, **our scCR improves the cell type classification performance of the scGPT model.** This result demonstrates that our scCR effectively addresses noise contained in scRNA-seq data and can assist the large-scale models. > The experiments lack biological validation and relevant case studies. To verify whether scCR can provide biological insights, we confirm that **scCR enriches relevant genes in lupus, a chronic autoimmune disease.** Specifically, we conduct an in-depth analysis on the PBMCs dataset [3] obtained from lupus patients. We perform GSEA enrichment tests [4] that identify pathways related to specific conditions. In this case, the condition corresponds to interferon-stimulated CD16 monocytes. 
We use both raw data and data imputed by scCR, and compare the results from them. When comparing the top 20 most significantly enriched pathways, **scCR newly identifies four SARS_COV-related pathways in interferon-stimulated CD16 monocytes.** Since lupus is an autoimmune disease characterized by an overactive immune system that attacks the body's own tissues, the activation of SARS_COV-related pathways may indicate an overactive or abnormally activated immune response in lupus patients, reacting excessively to viral infections. This suggests that the pathways identified through **scCR's denoising process provide new and important biological insights that were previously obscured by noise**, highlighting the utility of scCR in revealing relevant gene interactions and pathways. We will include this experimental result regarding biological validation in our final version. > Too simple and lacks significant innovation. We believe that **utilizing biomedical evidence and domain-specific knowledge is crucial for the development of machine learning for healthcare.** Furthermore, we expect that **our work will lead subsequent frameworks to take into account the existence of two types of gene-gene relationships** in scRNA-seq data. While **designing sophisticated and complex methods is also important**, we believe that **biomedical evidence can drive significant progress in machine learning for healthcare**, and our work can serve as a good example. > Motivation and adequate explanation of what dissociating relationships among genes are or why identifying these relationships is effective for addressing the research problem. **Associating genes refer to genes that co-occur**, while **dissociating genes refer to genes that avoid each other** [6]. Mathematically speaking, associating relationships and dissociating relationships correspond to positive and negative correlation coefficients, respectively. 
The core idea of previous work, scBFP [7] is to impute false zeros (dropouts) in a gene based solely on associating genes with high cosine similarity. **Although scRNA-seq data imputation is a very challenging task due to severe noise, scBFP overlooks the presence of dissociating genes.** Within a cell, when considering the value to be imputed for gene Q, the value for its associating gene can assist in inferring the value for gene Q. However, its dissociating gene can also provide crucial information. **If its dissociating gene has a high value, the value for gene Q may be low because they avoid each other.** Unlike scBFP, our scCR can leverage dissociating gene-gene relationships via the negation of a cell-gene matrix. Despite its simplicity, our scCR successfully models dissociating gene-gene relationships as shown in Figure 8 in the manuscript. Furthermore, scCR significantly outperforms state-of-the-art methods in various downstream tasks, as shown in Table 1, Figure 5, Figure 6, and Figure 7 in the manuscript. We will add this detailed explanation regarding associating and dissociating relationships, as well as the clear motivation behind scCR, to our final version. [1] H. Cui et al., “scGPT: toward building a foundation model for single-cell multi-omics using generative AI,” Nature Methods, 2024. \ [2] L. Schirmer et al., “Neuronal vulnerability and multilineage diversity in multiple sclerosis,” Nature, 2019. \ [3] H. M. Kang et al., “Multiplexed droplet single-cell RNA-sequencing using natural genetic variation,” Nature biotechnology, 2018. \ [4] A. Subramanian et al., “Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles,” PNAS, 2005.\ [5] E. Rossi et al., “On the unreasonable effectiveness of feature propagation in learning on graphs with missing node features,” LoG, 2021. \ [6] F. J. 
Whelan et al., “Coinfinder: detecting significant associations and dissociations in pangenomes,” Microbial genomics, 2020. \ [7] J. Lee et al., “Single-cell RNA sequencing data imputation using bi-level feature propagation,” Briefings in Bioinformatics, 2024. --- Rebuttal 2: Title: Eager for Your Feedback on Our Rebuttal Comment: Dear Reviewer 49oE, We sincerely thank you for dedicating your time to review our work and for your thorough feedback. **With only about a day remaining in the discussion period**, we are eager to engage further and understand whether our responses have satisfactorily addressed your concerns. In our rebuttal, **we provided point-by-point responses to all your questions and concerns regarding (1) large-scale models (e.g., scGPT and Geneformer), (2) the lack of biological validation, (3) the lack of innovation, and (4) insufficient explanation**. In summary: * For (1), we clarified that our method and large-scale models have **clearly different objectives**. * For (1), we demonstrated that **our scCR improves the cell type classification performance of scGPT**. * For (2), we confirmed that **scCR enriches relevant genes in lupus**, a chronic autoimmune disease, **by newly identifying four SARS_COV-related pathways in interferon-stimulated CD16 monocytes**. * For (3), we assert that **utilizing biomedical evidence and domain-specific knowledge is crucial for the development of machine learning in healthcare**. * For (4), we provided a **clear motivation** behind our method, along with a **detailed explanation** of dissociating relationships and why identifying them is effective. We would greatly appreciate it if you could kindly review our responses. We welcome any further questions and are happy to provide additional clarifications if needed. Thank you for your consideration. 
Sincerely,\ The Authors --- Rebuttal 3: Title: [Gentle Reminder] Eager for Your Feedback on Our Rebuttal Comment: Dear Reviewer **49oE**, In our rebuttal, we provided point-by-point responses to all the concerns and questions you raised. Given that we have only **six hours remaining** before the **deadline**, we are very eager to confirm whether our responses have adequately addressed your concerns. We kindly ask you to take a moment to review our rebuttal and provide any further feedback. If there are any remaining questions or concerns, please be assured that we are ready to respond promptly. Sincerely,\ The Authors
Summary: The paper proposes an approach for single-cell RNA-seq data imputation. The data comes as a matrix capturing the relationship between cells and genes. Zero values in that matrix represent unobserved gene expression that can result from technical omissions (known as dropouts) and true biological absence. The non-zero values also suffer from noise such as cell and batch effects. The goal is to impute and de-noise observed single-cell RNA-seq data. An effective prior approach is based on kNN graphs, where one first builds an adjacency matrix between cells or genes using cosine similarity and kNN neighborhoods relative to RNA-seq data. In contrast to prior work that focuses on adjacency matrices informed by “association” links, the proposed approach aims at modelling “dissociation” links that have negative cosine similarity. The approach proceeds in multiple stages: - Pre-imputation stage where a kNN graph is built on the input matrix and, similar to Markov chains, starting from a random state (i.e., feature matrix of dimension $N_{cell} \times N_{gene}$) one “diffuses” to the stationary distribution (Appendix A for details). - The second stage appends the negated pre-imputed matrix, making the feature matrix $N_{cell} \times 2N_{gene}$, which allows for kNN graphs accounting for both “association” and “disassociation” relationships to be reflected in the adjacency matrix. This is followed by gene-to-gene and cell-to-cell propagation (i.e., stationary distribution of the Markov chain given by these matrices) with the resulting matrices convexly combined into an imputed “complete relationship”. - The final step is de-noising, where propagation of information now involves a convex combination with the original feature matrix and an adjacency matrix built using a kNN graph on the back of the “complete relationship” from step ii). The final output is a convex combination of steps ii) and iii). 
Experiments involve cell clustering, recovery of dropout rates, robustness relative to dropouts, identification of rare cell types, modelling disassociation rates, etc. Strengths: This is an interesting problem that relates nicely to link prediction in graph neural networks. It has been clearly presented with strong motivation by noise and dropouts. The paper is also clearly written and well-organized. I would hope that there will be follow up from the community focusing on link prediction. The approach is not straightforward and involves several “propagation” steps and, contrary to past work, incorporates the disassociation and negative correlations into kNN adjacency matrices. Empirical performance in some cell clustering tasks is not incrementally but significantly improved. The method also appears to be more robust relative to dropouts than the alternatives. Weaknesses: An ablation is missing relative to different steps (pre-imputation, complete relationship, de-noising stage). Hence, it is unclear if they are needed and to what extent they contribute to the performance improvement. Imputation metrics might be dependent on dropout strategy and it would be good to discuss what kind of “random” strategies have been used and how likely they are to reflect the corruptions specific to single-cell RNA-seq. Is there any way to generate “challenging” splits that are better at reflecting the “generalization” and "robustness"? Technical Quality: 2 Clarity: 3 Questions for Authors: M is of dimension N x F (line 112)? How do you decide that cell row is “unknown” from observed data? Dropout rates? How was this done exactly? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Imputation metrics might be dependent on dropout strategy and it would be good to discuss what kind of “random” strategies have been used and how likely they are to reflect the corruptions specific to single-cell RNA-seq. Is there any way to generate “challenging” splits that are better at reflecting the “generalization” and "robustness"? **Yes, there is a specific pattern of dropouts in real scRNA-seq data, and we perform additional experiments applying a realistic dropout strategy** reflecting this pattern. Existing studies [1, 2] simulate dropout by randomly sampling non-zero values in a cell-gene matrix from a uniform distribution and setting them to zero (i.e., missing completely at random (MCAR)). However, in real scRNA-seq data, dropouts occur more frequently in genes with low expression levels rather than those with high variance [3]. This is because the probability of capturing RNA transcripts of low-expression-level genes during sequencing is lower. Based on this pattern of dropouts, we select the 1000 genes with the lowest expression levels and simulate dropout only in these genes. We randomly sample non-zero values of these genes from a uniform distribution and replace the sampled values with zero (i.e., missing not at random (MNAR)). Table 1 in the PDF of the global response shows the performance comparison under the aforementioned MNAR settings in terms of data recovery, measured by RMSE. We compare our scCR to the two most competitive baselines, scFP [1] and scBFP [2]. The number of dropouts is set to $20\%$ of the total number of values in a cell-gene matrix. As shown in the table, **scCR outperforms the compared methods by significant margins in the realistic dropout settings, demonstrating the robustness of scCR in realistic scenarios.** We believe the reviewer has highlighted a very important aspect of dropout recovery in scRNA-seq data research. 
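The MNAR simulation described above can be sketched as follows (a hedged illustration; `simulate_mnar_dropout`, its defaults, and the convention of taking the dropout fraction over the non-zero entries of the selected genes are our own simplifications of the protocol):

```python
import numpy as np

def simulate_mnar_dropout(X, n_low_genes=1000, dropout_frac=0.2, seed=0):
    """Introduce false zeros preferentially in low-expression genes (MNAR),
    rather than uniformly over the whole matrix (MCAR).

    X: (cells, genes) expression matrix.
    Returns (corrupted copy, boolean mask marking the zeroed-out entries).
    """
    rng = np.random.default_rng(seed)
    Xc = X.copy()
    # Dropouts occur more often in genes with low total expression.
    low_genes = np.argsort(X.sum(axis=0))[:n_low_genes]
    # Candidate entries: non-zero values within the low-expression genes only.
    rows, cols = np.nonzero(X[:, low_genes])
    n_drop = int(dropout_frac * rows.size)
    pick = rng.choice(rows.size, size=n_drop, replace=False)
    mask = np.zeros(X.shape, dtype=bool)
    mask[rows[pick], low_genes[cols[pick]]] = True
    Xc[mask] = 0.0
    return Xc, mask
```

Because only the lowest-expressed genes can lose entries, the corrupted matrix mimics the biased dropout pattern of real sequencing rather than uniform corruption.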
The consideration of realistic dropout simulation can help pre-assess the generalizability of techniques in real-world scRNA-seq applications. We will include this important discussion and the experimental results in our final version. > An ablation is missing relative to different steps (pre-imputation, complete relationship, de-noising stage). Although we have conducted an ablation study analyzing the effectiveness of concatenation and standardization processes in the complete relation stage of scCR as shown in Table 2 in the manuscript, **we conduct an additional ablation study to explore the effectiveness of each stage of scCR.** Table 2 in the PDF of the global response shows the results of the ablation study in terms of cell clustering, measured by ARI. As shown in the table, the addition of the complete relation stage and the denoising stage notably enhances the performance compared to when the pre-imputation stage is used alone. We can confirm that **the complete relation stage and the denoising stage significantly contribute to the outstanding performance of scCR.** This ablation study emphasizes the well-founded design of our scCR. > How do you decide that cell row is “unknown” from observed data? **We do not decide known or unknown values from a given observed cell-gene matrix when using our scCR.** In this paper, the terms known and unknown are used solely to explain the process of Feature Propagation (FP) [4], which addresses missing feature imputation on graph-structured data. FP assumes that the locations of both observed (known) and unobserved (unknown) values in a feature matrix are given. FP imputes unobserved values by diffusing observed values while preserving these observed values. In contrast, in scRNA-seq data recovery, all values are observed (i.e., known) in a given cell-gene matrix. Thus, to apply FP to scRNA-seq data, FP-based imputation methods treat zero values as unknown values to be imputed via features diffused from non-zero values. 
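As an illustration of the FP scheme just described, here is a minimal NumPy sketch of one common formulation (our own simplification of the Rossi et al. method: the symmetric normalization and the fixed iteration count are assumptions, and zeros are treated as the unknown entries as in the FP-based imputation setting):

```python
import numpy as np

def feature_propagation(X, A, n_iters=40):
    """Impute zeros in X by diffusing features over the graph A.
    Zero entries are treated as unknown; non-zero entries are clamped
    back to their observed values after every diffusion step."""
    known = X != 0
    # Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}.
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.clip(deg, 1e-12, None))
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = X.copy()
    for _ in range(n_iters):
        H = A_norm @ H        # diffuse features along edges
        H[known] = X[known]   # preserve observed (non-zero) values
    return H
```

The clamping step is what makes FP preserve observed values while unknown entries converge to a diffusion of their neighbors' features.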
For clarity, we will add this explanation to our final version. > Dropout rates? How was this done exactly? Following conventional dropout recovery research, given a cell-gene matrix, we randomly sampled non-zero values from a uniform distribution at dropout rates of $ \{ 0.2, 0.4, 0.8 \} $. We then set these sampled non-zero values to zero, creating false zeros. > M is of dimension N x F (line 112)? Yes, we thank you for pointing out the typo. As the reviewer mentioned, $\mathbf{M}$ has the same dimension as a given feature matrix $\mathbf{X}\in \mathbb{R}^{N \times F}$, where $N$ is the number of nodes and $F$ is the number of feature channels. [1] S. Yun et al., "Single-cell RNA-seq data imputation using feature propagation," arXiv preprint arXiv:2307.10037, 2023. \ [2] J. Lee et al., "Single-cell RNA sequencing data imputation using bi-level feature propagation," Briefings in Bioinformatics, 2024. \ [3] Y. Liu et al., "iDESC: identifying differential expression in single-cell RNA sequencing data with multiple subjects," BMC Bioinformatics, 2023. \ [4] E. Rossi et al., "On the unreasonable effectiveness of feature propagation in learning on graphs with missing node features," Learning on Graphs Conference, PMLR, 2022. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the detailed response and additional experiments. I am satisfied with the author's response and will be increasing my score. --- Rebuttal 2: Title: Response to Reviewer NnmD’s Feedback and Acknowledgment Comment: We appreciate your decision to increase the score and your recognition of our efforts in addressing your concerns. We are pleased that **our detailed response and the additional experiments provided were satisfactory**. We **recognize the depth of expertise you bring to the review process**, which has helped establish a dropout evaluation setting that is more realistic than conventional ones. 
Thanks to this setting, we were able to **demonstrate the superiority of our method even in more realistic scenarios**. We will include this important discussion and the experimental results in the final version. **Your insightful reviews and efforts have significantly improved our manuscript**. We welcome any further questions and are happy to provide additional clarifications.
Rebuttal 1: Rebuttal: We 1) propose a novel imputation method that newly employs dissociating relationships in addition to associating relationships, 2) standardize the value distribution of each gene to have standard distributions regardless of the gene, and 3) demonstrate that the proposed method achieves exceptional performance gains in both cell clustering and gene expression recovery. We appreciate the reviewers’ thoughtful comments on our work. In particular, we thank the reviewers for the positive feedback about the "clear and strong motivation", "straightforward yet effective approach", and "well-organized paper". Pdf: /pdf/389e33859a0ccc5d62d70993cb793adfbdb7a96b.pdf
NeurIPS_2024_submissions_huggingface
2024
Amortizing intractable inference in diffusion models for vision, language, and control
Accept (poster)
Summary: This paper studies the problem of training diffusion models to sample from an intractable posterior distribution, defined by a prior diffusion model and an arbitrary likelihood function. The contributions are summarized as follows: - This paper proposes relative trajectory balance (RTB) for training diffusion-based models to sample from a posterior distribution. RTB is derived from the perspective of continuous GFlowNets, which thus enables off-policy training. In contrast to related literature, the proposed approach performs posterior inference by fine-tuning a prior diffusion model in a parameter-efficient way. - The effectiveness of the proposed approach is validated through experiments in vision, language modeling, and continuous control benchmarks. Strengths: - This paper proposes to train diffusion-based models to sample from an intractable posterior distribution. In particular, the proposed approach performs posterior inference by fine-tuning a prior diffusion model $p_{\theta}$ in a parameter-efficient way. - The paper is well-written and well-organized. Weaknesses: We use a posterior sampler, parameterized by a prior diffusion model with a learnable drift term $u^{post}$, to simulate trajectories and obtain samples $x_{1}$. Then, we compute the likelihood function $r(x_{1})$ and the prior $p_{\theta}$. Using these as an unnormalized density, the model parameters can be updated using GFlowNet-based loss. For fine-tuning a diffusion model, do we need to ensure that the initial state is an empty set (discrete) or a point mass (continuous)? In general, the initial state for DDPM follows a Gaussian distribution. Does this violate the GFlowNet assumption? Technical Quality: 3 Clarity: 3 Questions for Authors: Some typos: - Line 158: $x_{1} \leftarrow x_{0}$ --> $x_{1} \rightarrow x_{0}$? - Eq11: $NN_{2}(x_{t}, t)$ --> $NN_{2}(t)$? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Limitations were included. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and positive assessment of the paper! We answer your question below: ### Does Gaussian initial state violate assumptions? Assuming the initial distribution is Gaussian does not violate the GFlowNet assumption, as it amounts to assuming the first generation step transitions from an abstract initial state to a sample from that Gaussian distribution. When training, e.g., with the trajectory balance objective, the likelihood of this first transition -- from the abstract initial state to the Gaussian noise sample -- appears in the decomposition of the sampling trajectory's forward likelihood. With RTB, the likelihood of this initial-to-noise step is the same for the prior and posterior models and thus cancels in the loss. ### Typos - The direction of the arrows in $x_1\leftarrow\dots\leftarrow x_0$ is intentional: the arrows always show the transitions that exist in the MDP, but the sampling of the noising trajectory happens in reverse (starting from data $x_1$ and ending at noise $x_0$). We can revise if you think this causes confusion. - Equation (11): Yes, ${\rm NN}_2$ isn't taking $x_t$ as input. We will fix this. Thanks for reading carefully! **Please let us know if there are any other questions we can answer during the discussion period.** --- Rebuttal Comment 1.1: Comment: Thank you for your reply, and I will maintain my score.
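The cancellation argument above can be made concrete with a toy sketch (our own illustration, not the paper's code): a trajectory-balance-style squared residual depends only on the log-ratio of posterior to prior transition likelihoods, so any step whose kernel is shared by both models, such as the initial abstract-state-to-Gaussian-noise step, contributes exactly zero.

```python
import math

def rtb_loss(log_Z, log_p_post_steps, log_p_prior_steps, log_r):
    """Squared RTB residual:
    (log Z + sum_i log p_post(step i) - sum_i log p_prior(step i) - log r(x1))^2."""
    log_ratio = sum(p - q for p, q in zip(log_p_post_steps, log_p_prior_steps))
    return (log_Z + log_ratio - log_r) ** 2

# Toy numbers: the first (noise-sampling) step uses the same Gaussian
# kernel in both models, so including it leaves the loss unchanged.
post = [-1.4, -0.2, -0.5]    # first entry is the shared initial step
prior = [-1.4, -0.7, -0.9]
assert math.isclose(rtb_loss(0.3, post, prior, 0.8),
                    rtb_loss(0.3, post[1:], prior[1:], 0.8))
```

This is why assuming a Gaussian initial distribution costs nothing in the RTB objective: the initial-to-noise log-likelihood terms cancel in the posterior/prior ratio.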
Summary: The paper looks at the problem of finetuning/training a generative model to sample from a desired posterior distribution when given access to a diffusion model prior. The experiments validated the generation capability across different tasks that diffusion models (and conditional generation) can be applied to - text infilling, text-to-image generation and training policies through offline RL. The work proposes a fine-tuning objective to train the model that will sample from the posterior (which in this case is instantiated from the prior’s checkpoint), which can be computed using off-policy data/trajectories, i.e., trajectories that are not all sampled from the posterior model. Strengths: Conditional generation is an active field of study and being able to use a foundational diffusion/generative model as a prior that can be employed/modified to get a posterior distribution of interest is of interest in many applications. The paper is mostly written clearly, and presents experiments on three different generation problems. Developing an off-policy objective for RL training/finetuning of a generative prior is very useful, however it’s somewhat unclear to me how off-policy the data actually is - I believe the data used is mostly on-policy with small amounts of noise added - could the authors add a small discussion making it very clear how the training data is related to the on/off-policy distributions of the prior/posterior models and how it is different from the data used to train the baselines, as well as how realistic these data collection assumptions are? Weaknesses: 1. Speaking very qualitatively, the provided generations in Fig 4 and H.1 don’t look very different from each other and DPOK (and often even DDPO) seems to be generating samples of the same quality. This leads me to think that marginal improvements in the metrics provided (log likelihood etc) are not strongly tied to the generation quality. Also how were these samples chosen? 
To avoid cherry-picking and encourage reproducibility, I'd suggest mentioning the seed they were sampled with in the corresponding code readme files. 2. Why isn’t there a std dev/error reported for baseline numbers in Table 4? The numbers look close enough for many methods and aggregating metrics over seeds can significantly change results in RL. 3. A major concern is that the authors should ablate each stabilization/implementation trick they used when it is applicable to other baselines that they compare to (Clipping, using the pretrained ckpt from the prior for the finetuning, using parameter efficient training, any tweaks made to noising schedules, any special tricks used in data collection etc) since there are so many that have been used. Overall, I'm only moderately familiar with this area, and therefore will assign a lower confidence to my review. However, my current belief is that the paper has not demonstrated the advantages of the proposed method satisfactorily (or maybe this just has not been presented clearly) so far (either in terms of showing that it can really work well with very off-policy data that is easily available, or showing impressive gains in generation quality/diversity). Technical Quality: 2 Clarity: 3 Questions for Authors: 1. On the text infilling task, am I correct in assuming that none of the other baselines had access to both x and y (the first 3 and the fifth sentences) since they are autoregressive and can’t condition on y? If yes, is the comparison to diffusion models that can condition on y fair? Should the comparison instead be with encoder-decoder or non-causal language models? 2. Is there a typo in Line 112? It says “... would sample the product distribution $p^{\rm post}(x_1) \propto p_\theta(x_1) r(x_1)$. If $r(x_1) = p(y \mid x_1)$ is a conditional distribution over another variable $y$, then $p^{\rm post}$ is the Bayesian posterior $p_\theta(x_1 \mid y)$.” Shouldn’t the product distribution be equal to the probability of the intersection of x and y and not y given x? 
Or is there a term which has implicitly been assumed to be constant/irrelevant? Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The paper includes a very brief discussion on limitations which could be improved. I can provide more suggestions for this discussion after engaging with the authors and understanding their assumptions better. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We are happy to see that you appreciated the breadth of applications presented in our paper and the value of the contribution. Below we have attempted to answer your questions and to clarify a few misunderstandings in the review. We note that most of the listed weaknesses and questions pertain to specifics of individual experiments, not to the main contribution of the paper: a general-purpose method that can be applied to a wide range of tasks *simultaneously*, achieving results comparable to specialized methods that had been developed for each problem. We hope that our responses will help to answer your questions and concerns. ### Improvements in image quality Measuring posterior sampling quality is difficult for text-to-image generation models, since we do not have ground truth samples from the posterior. We acknowledge that differences in visual quality compared to DPOK are not apparent, and the visual results (Figure 4) should be seen as complementing Figure 3 in showing comparable quality and diversity to baselines. RTB training facilitates asymptotically unbiased posterior sampling. This may not necessarily translate to improvements in auxiliary fidelity metrics such as those we must resort to when ground truth posterior samples are unavailable (as in the text-to-image experiments). However, in the experiments where we do have access to unbiased samples from the target classes (MNIST/CIFAR-10), it does translate to better FID scores (Table 2). In practice, text-to-image generation systems may use a combination of techniques to improve samples (e.g., collecting data from human raters, imposing auxiliary aesthetic constraints, etc.). Our work adds a novel, general, and probabilistically principled technique to this toolbox. ### Fair comparisons We answer a few questions related to comparisons between other methods and ours, hoping to clarify why we believe they are fair and informative. 
#### Infilling baselines seeing the right context Good question; no, it is **not** the case that baselines see only $x$ (left context) and not $y$ (right context). The fact that this was not clear suggests we should write it explicitly in the paper; thank you for directing our attention to it. In fact, all the baselines with the exception of "Prompt (**x**)" do condition on both $x$ and $y$, making the methods comparable. For example, for the prompting and SFT autoregressive baselines, the model receives a prompt of the form "Beginning: {x}, End: {y}, Middle: ", following prior work (see Appendix F). The difference between "Prompt (**x**)" and "Prompt (**x**,**y**)" shows the importance of conditioning on $y$. #### Algorithm components To clarify, we apply the same collection of techniques with RTB and with methods from prior work wherever applicable. For example, all methods used the same noising schedules. Gradient clipping, fine-tuning the pretrained prior checkpoint, and parameter-efficient training were done for RL baselines as well. On the other hand, some techniques used with RTB training are *not applicable* to other methods. Data collection methods used for off-policy training cannot be applied to manifestly on-policy finetuning algorithms such as DPOK. The inference-time classifier guidance baselines, such as those in Table 2, do not require any training, so training hyperparameter choices do not apply to them. Altogether, while we believe the comparisons we make are fair, we agree that ablations on the techniques used in the paper (while not being applicable to baseline methods) would add to the understanding of the proposed algorithm. We will include ablation studies about this in the final version. 
#### Prior/posterior training data It is important to note that the newly proposed algorithms **and** most baselines are "data-free" -- that is, the prior model is pretrained and we receive no ground truth samples from the posterior, but rather must discover its modes through exploration and can query for the reward of a generated sample. #### How were samples for figures chosen? In the image generation figures (Figures 3 and 4 and Appendix H.1), the samples are not cherry-picked. In Appendix H.1, in each row the images are generated from seeds 0-9. #### RL baselines The reporting of baseline numbers and highlighting in Table 4 exactly matches the form in a line of previous offline RL papers (see, e.g., [35,43,36]), where std is reported only for the newly proposed algorithm alongside baseline means. However, we will update the paper to include std for comparisons, and have attached an updated table with std bars from baselines which report this metric in the global rebuttal document (Table 4). ### Line 112 In that line, $p^{\rm post}$ is a distribution over $x_1$, where $y$ is fixed. For a fixed $y$, the product is proportional *as a function of $x_1$* to the posterior $p(x_1\mid y)$. **Thank you again for your feedback. We hope these answers and clarifications help; if they do, please consider updating your score. Let us know if there are any other questions we can answer during the discussion phase.** --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your response. I had the following clarification questions/thoughts: 1. "a general-purpose method that can be applied to a wide range of tasks simultaneously, achieving results comparable to specialized methods that had been developed for each problem." - is this true for all baselines considered in the work (that they are developed for specific problems)? Could the DPOK/DDPO baseline not have been used on all the tasks? 
Could you explain why it was not applicable to the other two setups (offline RL, text infilling)? 2. On offline RL the method is slightly worse than both QGPO and D-QL (if we count the number of setups where a method is not within the top 5%) - and so I would suggest removing the claim from the abstract that it achieves state-of-the-art results - instead saying that it almost matches the state-of-the-art (if the other two methods are that). --- Rebuttal 2: Comment: Dear reviewer, Thanks for your response. We are happy to clarify these points: 1. The training-free baselines (DPS, LGD-MC) cannot be used with discrete diffusion models for the text infilling task and are only derived for the particular case of inverse problems in continuous space. DPOK/DDPO are RL finetuning baselines which can in principle be used for all the tasks, but do require significant changes to be adapted for the case of discrete diffusion (which was not considered in the original work). Note that **all the baselines are also biased samplers**, and hence expected to give worse performance for tasks requiring unbiased posterior inference such as classifier guidance (as we show in the paper) and other important scientific tasks (for example [1]). In the context of offline RL, the D-QL baseline trains the policy with an objective very similar to DPOK: the policy is trained to maximize the Q function with a per-step KL regularization term against the behavior policy. We can add some notes in the appendix to make these connections clearer in the final version. 2. We agree that the claim "matches state-of-the-art" is more accurate, and will update the abstract in the revision. **We hope we've addressed your questions and concerns. Feel free to reach out with any additional questions.** [1] Adam, Alexandre, et al. "Posterior samples of source galaxies in strong gravitational lenses with score-based priors." arXiv preprint arXiv:2211.03812 (2022). 
--- Rebuttal Comment 2.1: Comment: Dear Authors, Thank you for your response. From what I understand, empirically, the results of RTB look similar to those of DPOK (or DPOK-like methods, i.e., D-QL, like the authors said) on 2 of the 3 tasks: offline RL and text-to-image generation, and it was not tried out on the text-infilling task since that would require changes beyond what the DPOK paper had. Additionally, the qualitative examples/infills are not shown in the appendix for the GFN baseline in the autoregressive prior category of models which seems to get the next best scores in Table 3, so it is hard to say if the generated infills are much better than the next best performing method or not. The experimental results do not demonstrate an advantage to using this approach, but the method is more principled compared to alternative methods (I have not validated the proofs in the appendix rigorously and will consult the other reviewers in the discussion phase about this) and I will update my score to reflect this.
Summary: This paper proposes a method to train a posterior $p^{post}(x)$ given a prior distribution $p(x)$ and some additional (possibly unnormalized) constraint function $r(x)$, when the prior is a pretrained diffusion model. Through the choice over $r(x)$, this setup can capture a wide range of tasks. The authors propose the *relative trajectory balance* (RTB) constraint, which, if satisfied by the posterior under training, guarantees that $p^{post}(x) \propto p(x)r(x)$. By training $p^{post}(x)$ to satisfy the RTB constraint, one can recover the desired posterior. Importantly, this training objective allows for efficient computation of the loss and off-policy training. The paper provides convincing empirical results on a variety of tasks ranging from conditional generation to offline RL. Strengths: **Motivation and Theoretical Justification** * The paper provides a theoretical justification (Proposition 1) for the proposed method, which is a natural extension of the trajectory balance constraint for the setting where we cannot easily sample from $p^{post}$. This is a realistic constraint that one often encounters when trying to perform posterior inference given a pretrained prior. **Empirical Analysis** * While it's clear that the formulation does capture a wide range of tasks, it is nice to see experimental evidence of that. The generality of the proposed method is emphasized through the results on four different tasks. * I appreciate that the authors took time to discuss some of the implementation details (Sec 2.3). **Writing Quality** * The presentation is very well-done. The problem being tackled is clear, and the technical aspects of the solution are intuitive and easy to follow. Weaknesses: **Scalability** * One main concern I have is how scalable the method is due to trajectory matching, especially as the prior becomes larger. 
The main benefit of this technique relies on the assumption that one aims to repurpose an existing diffusion prior, which is presumably a large foundation model -- so it's important that the method scales desirably to the prior's size, which is not clear from the results in the paper. * Section H discusses some techniques employed to reduce memory usage, but even in this relatively small-scale experiment (50 steps on latent diffusion with LoRA), the authors report that they could fit only 8 timesteps on an A100. * It'd be very useful to get some idea of how sensitive the method is to these hyperparameters. One extreme case to consider would be to use single-step gradient subsampling for a much larger T (e.g. 1000 steps). **Experimental Results** * Sec 2.3 claims that the proposed method should easily generalize to conditional constraints (which I believe), but there doesn't seem to be any experiment that tests this claim. For example, it'd strengthen the paper to have a result where RTB training successfully extracts a class-conditional model (that takes a conditioning input) from an unconditional one (perhaps for the CIFAR experiment). * The text-conditional image synthesis experiment seems to be done for only four hardcoded text prompts, which is pretty limited. * While the text in-filling results are great to have, a more practical and interesting task would be fine-tuning a pretrained LLM. As hinted in Sec. 2.4, I'm curious if RTB can be used to efficiently customize an LLM, where we deviate from the Gaussian transition kernel. * Having a data-dependent reference as an upper bound on the performance of the proposed method would be useful (e.g., classifier guidance with a classifier trained on noisy data, or directly fine-tuning the prior itself on a specific task). This can tell us more about how much of the potential performance the model captures. 
Technical Quality: 3 Clarity: 4 Questions for Authors: * For the MNIST even-digit experiment, why was the reward function chosen to be $\max_{i \textrm{ even}} p(c=i \mid x)$? Wouldn't $\sum_{i \textrm{ even}} p(c=i \mid x)$ be a more reasonable reward? I'm curious if this particular choice of reward makes it more likely for certain methods to mode collapse (e.g. by maximizing the probability for a single even class), and whether the results in Table 2 would change for MNIST with a different $r(x)$. * Could you share some more details of how LoRA was used for the MNIST/CIFAR setup (e.g. which weights were LoRAed)? Also I'm curious what happens if you don't use LoRA and train the posterior on the full set of weights (from the prior weights or from scratch), as LoRA could act as a regularizer. * What is the ratio between prior vs. posterior training in terms of flops or wall-clock time? How quickly the posterior can be fine-tuned could limit the applicability of this method to certain real-world tasks. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. For concerns and potential limitations, see the comments and questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and good questions. We are glad you found the paper well-written and the empirical results compelling. We believe that the diversity of domains on which we showed the effectiveness of our approach makes this work both a valuable illustration of off-policy finetuning for intractable diffusion posteriors and a starting point for a number of possible domain-specific applications. We have addressed each of your questions below and added three new experiments in response to your comments (please see the response to all reviewers). ### Text infilling and LLM fine-tuning We believe there may be a misunderstanding present. The infilling results are for the setting of *fine-tuning a pretrained discrete diffusion language model*, which does not use a Gaussian transition kernel. In discrete diffusion language models, the kernel is categorical at each position. The text infilling experiment (Table 3 in the paper) thus supports two claims: - that the proposed diffusion fine-tuning algorithm is applicable outside the Gaussian setting (and even outside the continuous setting); - that fine-tuning of LLMs for intractable posteriors, such as infilling, using off-policy RL can be generalized beyond autoregressive language models (as done in [28]). ### On scalability and cost We answer a few questions related to scaling and fine-tuning cost comparisons. #### Scalability RTB actually scales surprisingly well to larger models and more diffusion steps (relative to simulation-based methods) despite being a trajectory-level objective. An important observation about the RTB objective -- not included in our original submission -- is that computing the RTB gradient does not require storing the computation graph of all timesteps to be updated. 
We observe that the gradient of the RTB objective for a single trajectory is just the sum of per-step log-likelihood gradients scaled by the RTB residual: $$ \nabla_{\phi}L_{RTB} = 2\left(\log\frac{Z_{\phi}}{r(x_1)} + \sum_{i=1}^T\log\frac{p_{\phi}^{post}(x_{i\Delta t} \mid x_{(i-1)\Delta t})}{p_{\theta}(x_{i\Delta t} \mid x_{(i-1)\Delta t})}\right) \cdot \nabla_{\phi}\sum_{i=1}^T\log p_{\phi}^{post}(x_{i\Delta t} \mid x_{(i-1)\Delta t}).$$ Because the likelihood gradients can be accumulated during the forward pass, this allows for a batched gradient accumulation version of the update. For trajectory length (number of diffusion steps) $T$ and accumulation batch size (number of time steps receiving a gradient signal in each backward pass) $B$, the number of batched forward passes required scales as $T/B$. Only the accumulation batch size $B$, not the trajectory length $T$, is constrained by the memory budget. This means we can easily scale training with large number of diffusion steps $T$ without increasing the variance of the gradient through stochastic subsampling, with training time growing only linearly with number of time steps under a fixed memory budget. The batched update implementation was not included in our original codebase, but we have performed experiments to validate it and will include them in the revision. Finally, we highlight that this is in contrast to other diffusion samplers (e.g., PIS/DDS/DIS) which differentiate through the sampling (SDE integration) process and therefore need to store the entire computation graph. For these methods, memory necessarily scales with the length of the trajectory. #### Cost of training posterior vs. 
prior For our image experiments (Table 2 and Figure 2), a very small number of finetuning steps is necessary for convergence when finetuning a posterior with RTB (usually between 100 and 300 are sufficient, although we run for 1500), where each step performs a gradient update on a batch of trajectories (from 16 to 32, depending on the experiment). In contrast, training a prior diffusion model even on MNIST requires tens of thousands of training iterations. #### LoRA in MNIST/CIFAR setup The experiments on discrete diffusion and offline RL (Sections 3.3 and 3.4) do not use LoRA, while the image experiments (Sections 3.1 and 3.2) do. Note that parameter-efficient fine-tuning is standard for large models such as Stable Diffusion and also adopted by baseline methods (e.g., DPOK). For example, for MNIST and CIFAR we use LoRA (rank=32) on the key, query and value vectors of the denoiser’s attention mechanism of a UNet model. RTB can be applied without LoRA parameterization on these tasks too; please see the response to all reviewers and Table 1 in the attached pdf. ### Conditional constraints Using conditional constraints requires a more detailed study of architectures and conditioning strategies, since the unconditional prior cannot directly be finetuned. However, we have run a preliminary experiment on the MNIST even/odd digits task (Table 2); please see the response to all reviewers. ### Data-dependent upper bound Please see the response to all reviewers for such an upper bound on the text infilling task, where we have a larger ground truth dataset. ### Max or sum in classifier reward Thank you for the question. The choice of the reward function for the MNIST finetuning task (L220) was to discourage generation of ambiguous digits: for example, $\sum_{i\text{ even}} p(c=i\mid x)$ is high when $p(c=0\mid x)\approx p(c=2\mid x)\approx\dots\approx p(c=8\mid x)$. 
In the current formulation, high reward can only be achieved by generating an image whose likelihood of at least one class is high. We observed that replacing max by sum resulted in more such ambiguous digits, which would appear to be a weakness of the pretrained prior model. ### Prompts for ImageReward task For the results in Figures 3 and 4, we use a similar setup to DPOK [15]. The prompts are copied verbatim from that paper. **We hope we have answered your questions satisfactorily; if so, please consider updating your score. Let us know if there are any other questions we can answer during the discussion phase.** --- Rebuttal Comment 1.1: Comment: Dear Reviewer 4zDq, Thank you again for your initial feedback. We have addressed your concerns in the above comment and added three new experiments based on your suggestions in the global response. Would you kindly let us know if these have affected your evaluation of the paper and if there is any further clarification we can provide? The authors.
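As a numerical sanity check of the factored RTB gradient identity discussed in this rebuttal (the gradient equals twice the residual times the gradient of the summed posterior log-likelihoods), here is a minimal, purely illustrative sketch on a 1-D Gaussian chain; the scalar mean-shift parameterization and all names are our own, not from the paper's codebase:

```python
import numpy as np

# Toy check: L(phi) = (log Z - log r + sum_i log p_post/p_prior)^2,
# with a posterior whose per-step mean is shifted by a scalar phi.
# The factored gradient 2 * residual * d/dphi(sum_i log p_post)
# should match a finite-difference gradient of L.

def log_gauss(x, mu, sigma=1.0):
    return -0.5 * ((x - mu) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2)

rng = np.random.default_rng(0)
T = 8
traj = rng.normal(size=T + 1)   # a fixed trajectory x_0 ... x_T
log_Z, log_r = 0.3, -1.0        # stand-ins for log Z_phi and log r(x_1)

def residual_and_loglik_grad(phi):
    lp_post = sum(log_gauss(traj[i], traj[i - 1] + phi) for i in range(1, T + 1))
    lp_prior = sum(log_gauss(traj[i], traj[i - 1]) for i in range(1, T + 1))
    residual = log_Z - log_r + lp_post - lp_prior
    # d/dphi log N(x; mu + phi, 1) = x - mu - phi, accumulated per step
    grad_lp_post = sum(traj[i] - traj[i - 1] - phi for i in range(1, T + 1))
    return residual, grad_lp_post

phi = 0.2
residual, grad_lp = residual_and_loglik_grad(phi)
grad_factored = 2.0 * residual * grad_lp   # residual times summed log-lik gradient

# finite-difference gradient of the squared residual for comparison
eps = 1e-6
rp, _ = residual_and_loglik_grad(phi + eps)
rm, _ = residual_and_loglik_grad(phi - eps)
grad_fd = (rp ** 2 - rm ** 2) / (2 * eps)
```

Because the per-step gradients only need to be summed, they can be accumulated in chunks of $B$ steps at a time, which is what makes the batched update memory-bounded by $B$ rather than $T$.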
null
null
Rebuttal 1: Rebuttal: **We would like to thank all the reviewers for their comments. The reviewers all noted that the paper is well-written (4zDq, VfXT, Z1kk) and pointed out the broad utility of the proposed approach (4zDq, VfXT, Z1kk), the theoretical justification for the proposed method (4zDq), its efficiency (Z1kk), and the quality of the empirical analysis (4zDq).** The attached PDF file contains the following: - **Table 1:** RTB on CIFAR without LoRA (suggestion by Reviewer 4zDq, ablation for Reviewer VfXT). - While LoRA was used in the original experiments, full-model finetuning can be achieved effectively, using a very small (~1e-5) learning rate to avoid instabilities. We have included results for CIFAR-10 in Table 1. LoRA, however, provides significant memory and speed advantages, including reduced sensitivity to the learning rate (with LoRA we can train effectively with lr=5e-4 or even lr=1e-3). We will include all details and results in the final version. - **Table 2:** Conditional constraints experiment for even/odd MNIST (suggestion by Reviewer 4zDq). - We use a naive parametrization of training the prior with an extra input channel, which we populate with 0 or 1 for even or odd conditioning during RTB posterior fine-tuning. We look forward to future work which develops more specialized architectures for handling conditional constraints. - **Table 3:** Upper bound for text-infilling task (suggestion by Reviewer 4zDq). - We (supervised-)fine-tune the prior diffusion language model on the entire Stories dataset (which consists of 50k examples compared to the 1k pairs of left and right context used for training with RTB and other baselines). This serves as an upper bound on the performance of all the data-free methods, including RTB. - **Table 4:** Standard deviation bars for offline RL baselines that report them in their papers (suggestion by Reviewer VfXT). Pdf: /pdf/b37fbd2daf06bd7c8137f20fea62bcac7849fc1f.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Generalizable and Animatable Gaussian Head Avatar
Accept (poster)
Summary: This paper presents a method for animatable facial avatar reconstruction from a single RGB image, and the reconstructed avatar is based on 3D Gaussian splatting (3DGS) to support real-time rendering. To this end, the proposed method generates pixel-aligned Gaussian point clouds to reconstruct the identity, and uses a 3DMM to construct an additional set of Gaussian points to control the facial expression. The network is trained on a large-scale corpus of human talking videos. After training, the network can produce reenactment results at a real-time framerate for any unseen identities. Experiments show that the proposed method is able to produce plausible reenactment results. Strengths: * This paper proposes the first generalizable 3D Gaussian-based avatar modeling method (if not taking ECCV 2024 papers into account). * The authors compare the proposed method with the most recent state-of-the-art baselines (such as Portrait4D-v2). The comparison is comprehensive and convincing. * Results in Table 1 and Table 2 show that the proposed method consistently outperforms existing approaches under different metrics and different settings. Weaknesses: * My major concern is about the method design. Despite the plausible results, the proposed method is confusing and doesn't make sense to me. If I understand correctly, the reconstruction branch produces Gaussian points that are aligned with the input image, and these points remain static and unchanged when performing reenactment. Instead, the expressions are modeled through the additional Gaussian points that are attached to the 3DMM surface (Line 187-188), which means these expression Gaussian points should play a major role in expression control. However, according to Figure 7, the Gaussian points produced by the reconstruction branch actually play a major role in facial rendering, while the "expression Gaussians" seem to be much less important.
That doesn't make sense to me: how could the static points produced by the reconstruction branch model the dynamic expressions? When overlapping the reconstruction Gaussian points with the expression ones, how does the proposed method resolve the potential conflicts of these two point sets, such as different lip positions? * The so-called dual-lifting technique looks straightforward and trivial to me. According to Sec.3.1, this module produces two pixel-aligned Gaussian point sets, one for the visible surface and the other for the invisible surface. However, using two sets of points (visible+invisible, or front+back) to model a complete shape is not a new idea; it has been used in 3D human reconstruction works like [a] many years ago. Although such an idea has not been proposed in the form of 3DGS, I don't think it is novel enough to be claimed as a technical contribution. * It is unclear whether the baseline methods are trained on the same dataset as the proposed method or not. In Line 244 of the main paper, the authors mention that they "use the official implementation", but it is still unclear whether the authors use the pretrained network weights or retrain these baselines using the same dataset. [a] Gabeur et al. Moulding Humans: Non-parametric 3D Human Shape Estimation from Single Images. ICCV 2019. Technical Quality: 3 Clarity: 2 Questions for Authors: One minor suggestion: it would be better if the authors could provide some qualitative comparison against state-of-the-art methods in the form of dynamic reenactment videos and highlight the difference/advantages of the results. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations and potential social impact in Sec.5 of the main paper and Sec.E & F of the supplemental document. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and helpful comments, which triggered deeper thinking about our approach. We would like to address your concerns in the following sections: **How could the static points produced by the reconstruction branch model the dynamic expressions?** * Your understanding is mostly correct. In our method, the reconstruction branch generates static Gaussian points, while the expression branch generates dynamic Gaussian points. * However, it's important to note that our Gaussians are not purely RGB or spherical-harmonics Gaussians. Instead, our Gaussians include 32-D features (as described in Sec. 3.3). In Fig. 7 of the paper, we visualize the first 3 dimensions of these features (i.e., the RGB values of the Gaussians) without the neural rendering module. This visualization is intended to intuitively display the functionality of each part; the importance of each branch should not be judged based on RGB values alone. Their importance is determined by the entire 32-D features and the neural renderer module. In fact, the expressions are modeled solely by the expression branch; without expression Gaussians, the lifted Gaussians from the reconstruction branch are static. **How does the proposed method resolve the potential conflicts of these two point sets?** * Thank you for pointing out some missing parts of the paper. We will include this discussion on how to resolve conflicts in the revised version. * Although we attempt to bring the two point sets closer together, there are inherent conflicts since one set is static and the other is dynamic. We address these conflicts through the neural rendering module. Our Gaussian points have 32-D features, which contain more information than just RGB values. Thus, the neural rendering module can leverage these features to integrate the two point sets. We show some results with conflicts in Fig. 4 on the new supplementary page.
It can be seen that when there are significant expression differences, the RGB values of the Gaussian points conflict, but these conflicts are well-resolved after neural rendering. We believe this demonstrates that the neural rendering module acts as a filter, using features from the expression Gaussians in areas where conflicts may arise. **About the novelty of the dual-lifting method.** * We believe our method is novel. The work of [Gabeur et al.] uses two sets of points to model a static human body and then obtains the body mesh. Our method uses two sets of lifted 3D Gaussians to model the human head, which is the first one-shot 3DGS method for head avatars. Additionally, we incorporate expression Gaussians, enabling dynamic expression modeling, which further develops this method. Our method also achieves real-time re-driving, which is a first for one-shot head avatars and is crucial for practical applications. Finally, our work provides a comprehensive evaluation and comparison for one-shot dynamic head avatar reconstruction, which builds a solid baseline for future research. We will include and discuss the missing citations and related papers in the revised version. **The baseline methods and evaluation method.** * We used pre-trained network weights and the official code implementations for evaluation. Since each method has different data requirements, it is challenging to train every method on a unified dataset. For instance, Portrait4D uses single-view head images, Portrait4DV2 uses generated multi-view data, GOHA needs video and single-view head images, Real3DPortrait needs EG3D-generated multi-view head images and a pre-trained image-to-plane model, and ROME did not release its training code. * Since these methods emphasize their generalization ability, using pre-trained weights for evaluation is acceptable, and all compared baseline works follow the same protocol in their evaluations.
* Additionally, the HDTF and VFHQ test sets are commonly used benchmark datasets. OTAvatar, GOHA, and GPAvatar have reported results on the HDTF test set, while GPAvatar, Portrait4D, and Portrait4DV2 have reported results on the VFHQ test set, therefore we chose to test on these two datasets. **Side-by-side video comparison with baselines.** * Due to rebuttal restrictions, we are unable to provide additional videos for rebuttal. Instead, we have included some consecutive frames in the new supplementary page and highlighted the areas of interest. We will add side-by-side video comparisons in our future open-source code and demo website.
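To make the dual-lifting idea discussed in this exchange concrete, here is a minimal, purely illustrative sketch: each pixel is lifted twice along the viewing direction, once for the visible (front) surface and once for the invisible (back) surface, yielding two pixel-aligned sets of Gaussian centers. The shapes, distance ranges, and random stand-ins for network outputs below are hypothetical, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical dual-lifting sketch: for every pixel, predict two lifting
# distances along the camera ray (front/visible and back/invisible),
# producing two pixel-aligned sets of Gaussian centers.
H, W = 4, 4
rng = np.random.default_rng(0)
u, v = np.meshgrid(np.arange(W), np.arange(H))
uv = np.stack([u, v], axis=-1).reshape(-1, 2).astype(float)

d_front = rng.uniform(0.5, 1.0, size=len(uv))           # stand-in network output
d_back = d_front + rng.uniform(0.1, 0.5, size=len(uv))  # back surface lies behind

pts_front = np.concatenate([uv, d_front[:, None]], axis=1)
pts_back = np.concatenate([uv, d_back[:, None]], axis=1)
gaussian_centers = np.concatenate([pts_front, pts_back], axis=0)  # 2*H*W centers
```

The point of the sketch is only the data layout: every pixel contributes exactly two Gaussian centers, so the resulting point set remains pixel-aligned while still covering occluded geometry.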
Summary: The paper proposes a method to achieve one-shot head avatar animation with 3D Gaussian Splatting (3DGS). With 3DGS, the papers show high-fidelity animation with fast inference speed. To solve one-shot 3DGS reconstruction, the paper proposes a dual-lifting method with 3DMM regularization. Experiments justify the model designs and show the proposed approach achieves SOTA performance. Strengths: 1. The paper proposes a technically sound approach for solving the one-shot animatable head avatar problem using 3D Gaussian Splatting (3DGS), fully utilizing the pre-trained 2D network to obtain cues for lifting. 2. The quantitative results are promising, and the qualitative results are sufficient to demonstrate the robustness of the approach. Weaknesses: 1. The visual results of the method seem to be overly smoothed to some extent, making the results less realistic compared to some baselines (e.g., Portrait4Dv2). The authors may need to provide additional explanation for this and discuss why the method achieves better qualitative results than other methods. 2. The approach is highly dependent on 3DMM but lacks a discussion about its impact. What if the estimation fails? How does the estimation accuracy affect the final performance (both training and inference)? If using other designs without 3DMM, what would be the method's performance? 3. There should be some failure cases to show the limitations of the work. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The current ablation study may not be sufficient to validate the dual-plane lifting. Is it possible to conduct a study by removing the reconstruction branch to better demonstrate the importance of dual-plane lifting? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors discuss their "Limitations" and provide an "In-Depth Ethical Discussion" proposing several measures to prevent these ethical risks. 
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and helpful comments. We would like to address your concerns in the following sections: **Is it possible to conduct a study by removing the reconstruction branch to better demonstrate the importance of dual-plane lifting?** * Removing the reconstruction branch is possible, but it would result in a lack of detail. Our method integrates global identity features when predicting the expression Gaussians, which provides some identity information to the expression Gaussians. However, the 1D global features are insufficient to reconstruct detailed identity features. We provide qualitative results of the reconstruction branch in Fig. 2 on the new supplementary page and quantitative results here. The qualitative results show a severe lack of identity detail, demonstrating the necessity of the reconstruction branch, and the quantitative results also support this conclusion (the CSIM metric).

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | CSIM↑ | AED↓ | APD↓ | AKD↓ | CSIM↑ | AED↓ | APD↓ |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| w/o Recons | 18.006 | 0.756 | 0.261 | 0.454 | 0.203 | 0.223 | 5.324 | 0.230 | 0.246 | 0.279 |
| Ours (full) | **21.83** | **0.818** | **0.122** | **0.816** | **0.111** | **0.135** | **3.349** | **0.633** | **0.253** | **0.247** |

**The visual results of the method seem to be overly smoothed, making the results less realistic compared to some baselines. The authors may need to provide additional explanation for this and discuss why the method achieves better qualitative results than other methods.** * Currently used quantitative metrics focus more on the correctness of results than on their realism. For example, AED measures whether subtle expressions are correctly modeled, and PSNR and SSIM measure whether identity details are authentically reproduced. Sometimes visually realistic results may be incorrect.
To verify the realism of our results, we have provided our quantitative evaluation with FID scores (self-reenactment on VFHQ), which is a commonly used metric for realism. Our model demonstrates competitive performance compared to state-of-the-art methods.

| Method | StyleHeat | ROME | OTAvatar | HideNeRF | GOHA | CVTHead | GPAvatar | Real3D | P4D | P4D-v2 | Ours |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| **FID↓** | 72.138 | 49.516 | 70.692 | 51.930 | 39.638 | 109.054 | 37.610 | 38.999 | 46.965 | 30.573 | **28.938** |

* Our method tends to over-smooth the hair regions. We believe this is because our method does not include control of these regions. During training, the hair and upper body always look like the input image instead of the target image, and this inconsistency leads to over-smoothing. This problem is also observed in the other baseline methods, since none of the baselines support hair control. Among them, GOHA, Real3DPortrait, and Portrait4D use a GAN loss to enhance realism, but this leads to inaccurate expression control and a false illusion of detail, while Portrait4DV2 improves hair realism by using data generation to obtain correct ground truth for uncontrolled regions. It is worth noting, however, that the primary contribution of Portrait4DV2 is a new learning paradigm, which is orthogonal to our main contribution. Our method can also benefit from the data generation method of Portrait4DV2. **The approach is highly dependent on 3DMM but lacks a discussion about its impact.** * Thank you for pointing out some missing parts of the paper. We will include this discussion on 3DMM in the revised version. * 3DMM estimation is important in our method. We use the 3DMM estimation methods provided by GPAvatar (based on EMOCA and MICA) to process our training and inference data.
While these methods introduce some errors during our training process, their robustness ensures that our model can still be effectively trained. In the future, our method will also benefit from advancements in 3DMM estimation; the more accurate the estimation, the better our model is expected to perform. * During inference, if the 3DMM model fails to accurately predict the target expression from the target image, our model will also reproduce these inaccuracies in the driving result. For example, some subtle expressions (e.g., subtle frowns) are not captured by the 3DMM, or the expression is incorrect due to incomplete decoupling of identity and expression. Mitigating this issue depends on further developments in the 3DMM estimation field. * Control without the 3DMM is possible but would require significant modifications to the method. We hope to leave this for future work. **There should be some failure cases to show the limitations of the work.** * We show some limitations in Fig. 3 on the new supplementary page. These include the tongue, which is not modeled by the 3DMM, and the model's lack of detail in areas invisible in the input image. These examples will also be integrated into future revisions. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. After carefully reviewing the comments from the other reviewers and the authors' responses, I have decided to maintain my rating of borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for taking the time to review our rebuttal and engage with us. Your comments are important to us. We are pleased we could address your concerns. We are happy to address any further concerns.
Summary: The paper introduces a novel framework, GAGA, for one-shot animatable head avatar reconstruction from a single image. The key innovation is the dual-lifting method, which generates high-fidelity 3D Gaussians that capture identity and facial details by predicting lifting distances from the image plane to 3D space. This method leverages global image features and a 3D morphable model (3DMM) to ensure accurate expression control and real-time reenactment speeds. The model can generalize to unseen identities without specific optimizations. Experimental results demonstrate that GAGA outperforms previous methods in terms of reconstruction quality and expression accuracy. The main contributions include the introduction of the dual-lifting method, the use of 3DMM priors to constrain the lifting process, and the combination of 3DMM priors with 3D Gaussians for efficient expression transfer, allowing high-quality real-time rendering. Strengths: - The proposed dual-lifting approach is interesting; it enables the hallucination of unseen areas and resolves the ambiguity of front/back 3D Gaussians. - The combination of expression Gaussians and dual-lifted Gaussians allows flexible expression control. - The method outperforms various baselines in a wide range of metrics. The rendering speed also significantly outperforms prior methods. - The paper is clearly written and easy to follow. Weaknesses: - The Gaussians from the expression branch only use the vertices of the 3DMM model, which can be inaccurate and unable to model fine facial features. - It would be nice to show more side-by-side video comparisons with baselines. Many baselines look similar qualitatively and it’s hard to tell the improvement without highlighting the difference. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why not also predict position offsets for the expression Gaussians? To keep expression structures, regularization can be used.
- Would the two sets of Gaussians (expression & dual-lifted) contract each other during expression? - Why not bind the lifted Gaussians to the 3DMM vertices and animate together? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper acknowledges several limitations. First, the model may generate less detailed reconstructions for unseen areas of the face, such as the back of the head or the interior of the mouth, which are not visible in the input image. Second, the 3DMM-based expression branch cannot control parts of the head that are not modeled by 3DMM, such as hair and the tongue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and insightful comments. We are happy to address your questions in the following: **Why not also predict position offsets for the expression Gaussians?** * Our method emphasizes the efficiency of inference rendering. Predicting expression Gaussian offsets for each frame would impact the inference efficiency. Our expression Gaussians are currently controlled by 3DMM vertices, which can be performed asynchronously with rendering and is very efficient. * In addition, the offset of a point is also related to the shape and expression; these two parameters determine the position of the point before the offset. At the same time, the number of points is large (5,023), which makes it difficult to directly learn the offset of each expression point. * Nevertheless, we also experimented with adding additional offsets to the expression Gaussians. Our setup is as follows: we extract global features from the driving image, then use the initial positions given by the 3DMM and trainable point features to predict point offsets, and we apply regularization to prevent excessive displacement. The results are shown below.

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | CSIM↑ | AED↓ | APD↓ | AKD↓ | CSIM↑ | AED↓ | APD↓ |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| with offsets | **21.93** | 0.814 | 0.138 | 0.797 | 0.121 | 0.168 | 3.732 | 0.584 | 0.263 | 0.294 |
| Ours | 21.83 | **0.818** | **0.122** | **0.816** | **0.111** | **0.135** | **3.349** | **0.633** | **0.253** | **0.247** |

* It can be seen that in this setting, adding offset predictions has little effect on most metrics, or even a negative one. **Would the two sets of Gaussians contract each other during expression?** * During the expression (inference) process, the two sets of Gaussians do not contract towards each other. However, during training, we use an MSE loss to bring the two sets of Gaussians closer together to fully leverage the priors of the 3DMM.
So the two sets of Gaussians are close to each other after training. **Why not bind the lifted Gaussians to the 3DMM vertices and animate them together?** * Our primary consideration is efficiency. In our setup, we obtain 175,232 lifted Gaussians and 5,023 expression Gaussians, which would require a large matrix to compute their binding relationship. Unlike full-body reconstruction, which involves significant deformations (requiring points to be bound to a few bones), facial reconstruction doesn't demand such extensive deformations. Therefore, we first attempted to merge the expression Gaussians and enhancement Gaussians directly without binding. The results indicate that this approach is effective for capturing expressions and jaw movements. * We have further discussed how we resolved the conflict between the lifted Gaussians and the expression Gaussians caused by not binding in our response to reviewer d3t7. **Why do we only use the vertices of the 3DMM model?** * The vertices are naturally compatible with the positions of the Gaussians in 3DGS. Additionally, the 3DMM (FLAME) has 5,023 points, providing sufficient information for head expression control. To integrate the 3DMM more effectively into 3DGS, we chose not to use edge and face information. * Moreover, the points in the 3DMM contain richer information beyond just the shape and expression parameters. Therefore, we opted not to redundantly input the shape and expression parameters of the 3DMM into the model. **Side-by-side video comparison with baselines.** * Due to rebuttal restrictions, we are unable to provide additional videos for the rebuttal. Instead, we have included some consecutive frames on the new supplementary page and highlighted the areas of interest. We will add side-by-side video comparisons in our future open-source code and demo website. **About the limitations of 3DMM.** * We further discussed the impact of 3DMM in our rebuttal to reviewer **65B2**.
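As a back-of-the-envelope illustration of the efficiency point about binding made above, here is a short sketch (point counts taken from the rebuttal; the float32 weight assumption is ours) estimating the memory a dense binding matrix between the two point sets would need:

```python
# Rough memory estimate for a dense binding matrix between the lifted
# Gaussians and the 3DMM expression Gaussians.
n_lifted, n_expression = 175_232, 5_023  # counts quoted in the rebuttal
bytes_per_weight = 4                     # assuming float32 binding weights

dense_binding_bytes = n_lifted * n_expression * bytes_per_weight
dense_binding_gb = dense_binding_bytes / 1e9  # roughly 3.5 GB for one matrix
```

Even before any rendering, a single dense binding matrix of this size consumes several gigabytes, which motivates merging the point sets directly rather than learning per-pair binding weights.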
--- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the detailed response and additional experiments. I will maintain my initial score of weak accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful feedback and for taking the time to review our rebuttal and engage with us. Your comments are important to us. We are pleased we could address your concerns. We are happy to address any further concerns.
Summary: This paper presents "Generalizable and Animatable Gaussian Head Avatar" (GAGA), a method for one-shot animatable head avatar reconstruction using 3D Gaussians. Unlike existing methods that depend on neural radiance fields and require extensive rendering time, GAGA employs a dual-lifting method to generate high-fidelity 3D Gaussians from a single image in a single forward pass. The approach integrates global image features and 3D morphable models to control expressions and achieve real-time reenactment speeds. Experiments demonstrate that GAGA outperforms previous methods in reconstruction quality and expression accuracy. Strengths: 1. The dual-lifting method for reconstructing 3D Gaussians from a single image is novel and effective, allowing for high-fidelity reconstruction. 2. Efficient resource utilization during training and inference makes it practical for real-time applications, which is a significant improvement over existing methods that are often slow. 3. The model can generalize to unseen identities without specific optimizations, broadening its applicability. 4. Experimental results show superior reconstruction quality and expression accuracy compared to state-of-the-art methods. Weaknesses: 1. The reliance on 3D morphable models for expression control may limit the model's flexibility. 2. The 3DMM identity and expression parameters are not entirely decoupled, potentially impacting cross-identity reenactment identity consistency. 3. The method may generate less detail for unseen areas and has limitations in controlling regions not modeled by 3DMM, such as hair and tongue. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How sensitive is the model to variations in the input image quality and lighting conditions? 2. How robust is the model when applied to images with significant occlusions or extreme facial expressions? 3. What specific measures were taken to ensure the diversity and quality of the training data?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The model may produce less detail for unseen areas, which can affect the realism of the generated avatars in dynamic scenes. 2. The expression control is limited by the constraints of the 3DMM, which does not model elements like hair and tongue, leading to potential inaccuracies. 3. There is a slight compromise in identity consistency of cross-identity reenactment due to the partial decoupling of identity and expression parameters in the 3DMM. 4. While the method achieves real-time speeds, the dual-lifting and neural rendering processes add complexity to the implementation and may require significant computational resources for training. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and valuable comments. We are pleased to address your concerns in the following sections: **How sensitive is the model to image quality and lighting conditions?** * We present more qualitative results with low-quality images or challenging lighting conditions in Fig. 1 of the new supplementary page. The reconstructed avatars are inevitably affected by image quality and lighting conditions. For example, avatars reconstructed from blurred images lack details, while those from images with challenging lighting conditions have a fixed lighting condition, such as some shadows on the nose. However, these features also demonstrate that our model can faithfully restore details from the input image and handle scenarios with varying image quality and challenging lighting. **How robust is the model to significant occlusions or extreme facial expressions?** * In Fig. 1, we also show the results for inputs with significant occlusions and extreme expressions. For common occlusions such as sunglasses, the model can handle them well. For some uncommon occlusions such as hands, the reconstructed avatar will be affected by the occlusion. For extreme inputs or driving expressions, our method shows good robustness. This shows that our method can produce reasonable results even in extreme cases. **What specific measures were taken to ensure the diversity and quality of the training data?** * We constructed our training data from the VFHQ dataset, a talking head video dataset containing 15,204 video clips. Despite the repeated identities, the dataset still has a large number of unique identities, ensuring sufficient identity diversity in our training data. To ensure diversity in expressions and head poses between frames, we extracted frames from the videos. As described in Sections 4.1 and A.1, we uniformly sampled 25-75 frames from each video based on its length. 
This temporally sparse sampling ensures that different frames exhibit as much variation in expressions and poses as possible. * For the ground-truth 3DMM parameters in our dataset, we adopted the implementation from GPAvatar [Chu et al., 2024]. This implementation integrates state-of-the-art 3DMM estimators (MICA, EMOCA), resulting in high-quality 3DMM labels. **Implementation complexity and computational resources for training.** * Our method is also highly efficient in training. As shown in the table below, our training costs are lower than those of the baseline methods. In addition, as a generalizable method, our model only needs to be trained once and does not require fine-tuning during inference. In the future, we plan to open-source our code, including implementations of all components, as well as our pre-trained models. We hope this will provide an easy-to-use and solid baseline for future research.

| Method | StyleHeat | ROME | OTAvatar | HideNeRF | GOHA | CVTHead | GPAvatar | Real3D | P4D | P4D-v2 | Ours |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| **Time (GPU hours)** | 166 | - | 192 | 1667 | 768 | 1200 | 50 | 1424 | 672 | 1536 | **46** |

* Among them, ROME did not provide training details and code. StyleHeat, HideNeRF, and CVTHead did not release official training times, so we conducted limited training and estimated the total time for these methods. All times were tested on, reported for, or converted to A100 GPUs. **About the limitations of 3DMM.** * We further discussed the impact of 3DMM in our rebuttal to reviewer **65B2**. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: Thank you for the rebuttal. After reading the comments from other reviewers and the responses from the authors, I would keep my rating as borderline accept.
--- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for taking the time to review our rebuttal and engage with us. Your comments are important for us. We are pleased we could address your concerns. We are happy to address any further concerns.
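The temporally sparse frame sampling described in the rebuttal above (25-75 uniformly spaced frames per clip, depending on its length) can be sketched as follows. The exact rule mapping clip length to the sample count is our assumption, as the rebuttal does not specify it; the function name is illustrative.

```python
def sample_frame_indices(num_frames: int, min_k: int = 25, max_k: int = 75) -> list[int]:
    """Pick k evenly spaced frame indices from a clip of num_frames frames.

    The rebuttal states that 25-75 frames are sampled uniformly per video
    based on its length; the length-to-k rule below (one sample per ~10
    frames, clamped to [25, 75]) is a hypothetical placeholder.
    """
    k = max(min_k, min(max_k, num_frames // 10))
    k = min(k, num_frames)  # very short clips: take every frame
    if k == 1:
        return [0]
    step = (num_frames - 1) / (k - 1)  # even spacing across the whole clip
    return [round(i * step) for i in range(k)]
```

Even spacing across the full clip maximizes expression and pose variation between sampled frames, which is the stated goal of the sparse sampling.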
Rebuttal 1: Rebuttal: Firstly, we would like to thank all the reviewers for their thorough review and valuable suggestions. We summarize the issues raised and indicate where we address them. Additionally, we provide some new visual results in the supplementary page. We answered the following questions in our response to **PdVU**: * Robustness of the model to image quality, lighting condition, occlusion, and extreme expression. (**PdVU**) * How we ensure the diversity of training data. (**PdVU**) * The complexity of model implementation and the resource consumption during the training process. (**PdVU**) We answered the following questions in our response to **3QbB**: * Why don’t we predict offsets for expression Gaussians? (**3QbB**) * Do the lifted Gaussians and expression Gaussians conflict with each other during expression? (**3QbB**) * Why don’t we bind the lifted Gaussians to the 3DMM expression Gaussians? (**3QbB**) * Why do we use only 3DMM vertices? (**3QbB**) We answered the following questions in our response to **65B2**: * Ablation study of removing the reconstruction branches. (**65B2**) * Explanation of quantitative results and over-smoothed qualitative results. (**65B2**) * Discussion on the impact of 3DMM accuracy on model performance. (**65B2**, **PdVU**) * There should be some failure cases to show the limitations of the work. (**65B2**) We answered the following questions in our response to **d3t7**: * Further explanation of the model design, including how we model dynamic expressions. (**d3t7**) * How we resolve conflicts between the two sets of Gaussian points. (**d3t7**, **3QbB**) * Further discussion of the novelty of our method. (**d3t7**) * Details on how we conduct our evaluation and comparison. (**d3t7**) We also show more results in the new supplementary page: * Side-by-side video frames comparison results and qualitative results with highlights. 
(**3QbB**, **d3t7**) Due to page limitations, we selected several of the most competitive baseline methods to show video frames. Pdf: /pdf/598722a4827c76dc5e3b33c1334681e696410570.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Alias-Free Mamba Neural Operator
Accept (poster)
Summary: This paper introduces a novel neural operator called MambaNO (Mamba Neural Operator) for solving PDEs. The key contributions, at least to me, are: 1. A new integral form called "mamba integration" with O(N) computational complexity that captures global function information. This is so cool! I give props to the authors for coming up with this! 2. An alias-free architecture that combines mamba integration with convolution to capture both global and local function features. 3. Theoretical analysis proving MambaNO is a representation-equivalent neural operator (ReNO) and can approximate continuous operators for a large class of PDEs. 4. Extensive empirical evaluation demonstrating state-of-the-art performance across various PDE benchmarks. Strengths: 1. Novelty: The paper introduces a new approach to neural operators by adapting the Mamba architecture, which has shown promise in other domains. The combination of global (mamba integration) and local (convolution) operators is innovative in this context. 2. Theoretical foundation: The authors provide a solid theoretical analysis, including proofs of representation equivalence and approximation capabilities. This adds depth to the empirical results and helps understand why the proposed method works. 3. Comprehensive experiments: The evaluation is thorough, covering a wide range of PDE types and comparing against multiple state-of-the-art baselines. The inclusion of both in-distribution and out-of-distribution tests strengthens the claims of generalization ability. 4. Performance improvements: The reported improvements in accuracy and efficiency are substantial across different PDE types, which is impressive given the diversity of the benchmarks. 5. Alias-free framework: By adhering to an alias-free framework, the authors address an important issue in neural operators, potentially improving the model's stability and generalization capabilities. 6. 
Efficiency: The O(N) complexity of the mamba integration is a significant advantage, especially for high-dimensional problems. Weaknesses: 1. Hyperparameter sensitivity: The paper doesn't provide a comprehensive analysis of how sensitive the model is to various hyperparameters, such as the number of layers or the dimensionality of the state space. 2. Potential limitations in handling multi-scale phenomena: To me at least, there are many PDEs that exhibit behavior across multiple scales. While the U-shaped architecture with up- and downsampling operations addresses this to some extent, the paper doesn't deeply analyze how effectively MambaNO can handle problems with significantly different scales of behavior. 3. Lack of discussion on boundary condition handling: The paper doesn't provide a detailed explanation of how different types of boundary conditions are handled within the MambaNO architecture, which is a critical aspect of PDE solving. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How do you envision extending the cross-scan operation to 3D or higher-dimensional PDEs? What challenges do you anticipate? Have you conducted any preliminary experiments on 3D PDEs to assess the scalability of MambaNO? 2. Can you provide more insight into the relationship between the state space model used in Mamba and the specific requirements of PDE solving? 3. While MambaNO achieves O(N) complexity, how does its practical runtime and memory usage compare to other methods, especially for large-scale problems? 4. How well does MambaNO generalize to PDEs with very different characteristics from those in the training set? Are there certain types of PDEs where it struggles? Have you explored transfer learning approaches, where a model trained on one class of PDEs is fine-tuned for another? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: They have presented the limitations! 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
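Question 1 in the review above asks about extending the cross-scan operation to 3D. For reference, a minimal sketch of the 2D VMamba-style cross-scan that Mamba-based vision models build on is below; the `(H, W, C)` layout and function name are illustrative (not the paper's implementation), and a 3D extension would add z-axis traversals analogously.

```python
import numpy as np

def cross_scan_2d(grid: np.ndarray) -> list[np.ndarray]:
    """VMamba-style cross-scan: flatten an (H, W, C) feature grid into
    four 1D sequences of length H*W, one per scan direction. Each
    sequence can then be processed independently by an O(N) selective
    scan, and the four outputs merged back onto the grid."""
    h, w, c = grid.shape
    rows = grid.reshape(h * w, c)                     # row-major: left-to-right, top-to-bottom
    cols = grid.transpose(1, 0, 2).reshape(h * w, c)  # column-major: top-to-bottom, left-to-right
    return [rows, rows[::-1], cols, cols[::-1]]       # forward and backward along each axis
```

Reducing the number of scan directions shrinks this list, which is exactly the ablation the rebuttal below refers to.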
Rebuttal 1: Rebuttal: 1. Most of the hyperparameters, including encoder layer, upsampling factor, scanning direction, and integration depth, have been ablated, with results and practical settings given in subsection E of the supplemental material. Note that for most NO applications, the dimension of the state space is simply set to 16 or 32; hence we omitted its ablation in our previous submission. As suggested, the concerned experiments are provided in the uploaded PDF. The practical selections depend on the balance of efficacy and efficiency. 2. Yes, handling different scales is a key issue in solving PDEs. MambaNO addresses the problem of different behavioral scales not only through the up- and down-sampling of its U-shaped architecture. Additionally, MambaNO combines global (Mamba integration) and local (convolution integration) treatments to learn holistic features, thereby better handling multi-scale phenomena. Besides, the ablations on U-net layers and the replacement of Mamba or convolution integration with naive modules are also given in subsection E of the supplemental material. However, the discussion (deep analysis) of how effectively MambaNO can handle problems with significantly different scales is indeed insufficient, and will be expanded in later versions. In addition, we conjecture that introducing adaptive mesh refinement or incorporating physics-guided priors could further enhance its capabilities in multi-scale applications. 3. Boundary conditions play a crucial role in solving PDEs because they define the behavior of the solution at the boundaries of the computational domain. We believe MambaNO's unique scanning mechanism can better handle the initial and boundary conditions of PDEs, allowing the model to utilize this information during the forward inference process. Reducing the scanning directions would also decrease the model's effectiveness, as demonstrated in our ablation study in the supplemental material. 
In fact, techniques such as data augmentation, inputting more boundary information, adaptive boundary conditions, and multi-task learning can significantly enhance the model's ability to handle different boundary conditions. We will continue to explore the limits of MambaNO's capability to handle various boundary conditions in the near future. 4. To extend the cross-scan operation to 3D or higher-dimensional PDEs, we need to generalize the scanning process to handle additional dimensions. This involves iterating along multiple spatial axes and potentially managing increased computational complexity. In the 3D case, this means scanning along the x, y, and z axes while ensuring the consistency and accuracy of the numerical solution. Anticipated challenges include: (i) computational complexity, as the increase in dimensions significantly raises computation and memory demands; (ii) data handling, since managing large datasets in higher dimensions can be complex, including efficient storage, retrieval, and manipulation of data; and (iii) accuracy and stability, as increased dimensions would heighten the potential for numerical errors and instabilities. Unfortunately, we have not yet conducted experiments related to 3D PDEs. Inspired by GINO [1], which combines GNO and FNO for 3D PDEs, we may next consider integrating GNO and MambaNO to address the 3D challenges. 5. The state-space model is a mathematical framework that represents a physical system in terms of input, output, and state variables, which are related through first-order differential equations. PDEs typically need to be solved within specific spatial and temporal domains, requiring precise handling of boundary and initial conditions. The state-space model is particularly well-suited for managing these conditions. Specifically, the state-space model uses a scanning-like mechanism to handle information from initial and boundary conditions, integrating this information into the state equations to determine the system's state at the next time point. 
This approach is logical and intuitive because the state-space model systematically tracks the time evolution of the system's states, ensuring that initial and boundary conditions are correctly processed at each time step. 6. As suggested, we have conducted experiments on a larger dataset, 2D CFD from PDEBench [2], and compared MambaNO with CNO[3] and two other O(N) competitors, GNOT[4] and OFormer[5]. The numerical and visual results are given in Table 3 and Fig. 1 of the uploaded PDF, respectively. Evidently, the performance of MambaNO is ahead of the other three competitors. In terms of memory usage, stopping epoch, and training time, our MambaNO ranks third, second, and second, respectively. Note that all four methods have O(N) complexity. 7. We believe that generalizing to very different PDEs would be a tremendous advance in this field. In the original text, we provide both in-distribution and out-of-distribution test results for the same PDE, verifying that MambaNO has optimal generalization across different distributions. However, we have not yet conducted experiments on significantly different PDEs. We consider that the generalization capability for irregular or higher-dimensional PDEs still needs improvement. In the future, we will further explore transfer learning to enhance the algorithm's ability to handle different PDEs. [1]. Geometry-informed neural operator for large-scale 3d pdes. in NeurIPS 2024. [2]. Pdebench: An extensive benchmark for scientific machine learning. in NeurIPS 2022. [3]. Convolutional neural operators for robust and accurate learning of PDEs. in NeurIPS 2024. [4]. Gnot: A general neural operator transformer for operator learning. in ICML 2023. [5]. Transformer for Partial Differential Equations’ Operator Learning. Transactions on Machine Learning Research, 2022. --- Rebuttal 2: Title: Response to the Authors Comment: Your additional explanations and experiments have addressed many of the concerns raised. 
The ablation studies on hyperparameters and the new results on larger datasets like 2D CFD in PDEBench are particularly valuable, demonstrating MambaNO's competitive performance against other O(N) complexity models. Your insights on handling multi-scale phenomena and boundary conditions are appreciated, though further exploration would be beneficial. The explanation of extending the cross-scan operation to higher dimensions is clear, and the potential integration with GNO for 3D PDEs is a natural next step, and I would be excited to see your follow-up work: )! Overall, thanks for the detailed explanation! I will however keep my score: )! Thank you --- Rebuttal 3: Comment: Thank you very much for your acknowledgment!
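For reference, the state-space model appealed to in points 3 and 5 of the rebuttal above is, in its standard S4/Mamba form:

```latex
h'(t) = A\,h(t) + B\,x(t), \qquad y(t) = C\,h(t),
```

which, discretized with step $\Delta$ via zero-order hold, $\bar{A} = \exp(\Delta A)$ and $\bar{B} = (\Delta A)^{-1}\left(\exp(\Delta A) - I\right)\Delta B$, yields the linear-time recurrence

```latex
h_k = \bar{A}\,h_{k-1} + \bar{B}\,x_k, \qquad y_k = C\,h_k.
```

A single scan over the sequence evaluates this recurrence, which is the source of the O(N) cost discussed in the thread above.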
Summary: This paper proposes a novel neural operator structure that applies the Vision Mamba architecture to neural operators. Additionally, it introduces an activation operator to mitigate the impact of standard neural network activation functions on bandlimited functions, thus reducing aliasing error. The method shows promising results when compared with several neural operator methods on both in-distribution and out-of-distribution benchmarks. Strengths: The strengths of this paper are as follows: 1. The writing is exceptionally clear, making it easy to follow and effectively explaining the complexity reduction benefits brought by the Mamba structure. 2. The method achieves impressive results, demonstrating the advantages of the Mamba structure. The paper also evaluates the method on datasets with varying resolutions, showing that the proposed approach maintains consistent performance across different resolutions. Weaknesses: The main drawback of this paper is the lack of experiments and benchmarks. The selected datasets and baselines for the experiments are not sufficiently representative. The authors should consider referring to datasets from PDEBench [1], or at least citing them. Since the primary advantage of the Mamba NO is complexity reduction, the paper should also compare it with other low-complexity models in transformers, such as linear attention models (e.g., OFormer, GNOT). **References** 1. PDEBENCH: An Extensive Benchmark for Scientific Machine Learning (https://arxiv.org/abs/2210.07182) Technical Quality: 4 Clarity: 4 Questions for Authors: No Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: In the original text, we presented the results of eight representative two-dimensional partial differential equations, demonstrating that MambaNO enjoys better accuracy with only O(N) complexity. Thank you for pointing us to the new datasets and the two outstanding models. As suggested, we have added several experiments on the 2DCFD dataset from PDEBench[1], comparing with the mentioned OFormer[2] and GNOT[3], as well as one original competing method, CNO[4]. The numerical and visualized results are given respectively in Table 3 and Fig. 1 of the uploaded PDF file. It is shown that MambaNO still leads the performance among the four O(N) competitors, while ranking second in terms of used epochs and training time, as well as third in terms of GPU memory usage (close to CNO). Note that the main contribution of MambaNO is not simply complexity reduction. In fact, we propose a novel form of kernel integral that conducts holistic feature learning both locally and globally. On that basis, theoretical analysis of the adherence to discretization invariance and continuous-discrete equivalence is also provided. Moreover, this performance is again achieved with only O(N) complexity. Finally, we will cite the mentioned dataset and models in a later version. The experimental results on the complete PDEBench dataset will also be added, given the chance of revision. Thanks for your valuable suggestions! [1]. Pdebench: An extensive benchmark for scientific machine learning. in NeurIPS 2022. [2]. Transformer for Partial Differential Equations’ Operator Learning. Transactions on Machine Learning Research, 2022. [3]. Gnot: A general neural operator transformer for operator learning. in ICML 2023. [4]. Convolutional neural operators for robust and accurate learning of PDEs. in NeurIPS 2024. --- Rebuttal 2: Comment: Due to the imminent closure of the discussion period, we kindly request the reviewer to provide us with their valuable feedback on our rebuttal. 
We are at their disposal to answer any further questions in this regard.
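On the complexity comparison requested above: Mamba-style integration is O(N) because, like linear attention, it reduces to a single linear recurrence over the flattened input. A minimal diagonal-SSM scan sketch (illustrative only, not the paper's implementation; names are ours):

```python
import numpy as np

def ssm_scan(u, A_bar, B_bar, C):
    """O(N) scan of a discretized state-space model with diagonal A_bar:
        h_k = A_bar * h_{k-1} + B_bar * u_k,   y_k = C . h_k
    One pass over the N inputs with a constant-size state: linear time,
    in contrast to the O(N^2) pairwise interactions of full attention."""
    h = np.zeros_like(A_bar)
    ys = []
    for u_k in u:                    # single sweep over the sequence: O(N)
        h = A_bar * h + B_bar * u_k  # elementwise update (diagonal A_bar)
        ys.append(float(C @ h))      # readout
    return np.array(ys)
```

In Mamba proper, `A_bar` and `B_bar` are additionally input-dependent ("selective"), but the single-pass, linear-time structure is unchanged.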
Summary: The authors present a new operator architecture based on Mamba and convolutional integration. They show theoretically and empirically that this neural operator is discretization invariant and alias-free. The proposed architecture outperforms baselines across a variety of 2D benchmarks. Strengths: The theoretical justification for the architecture is strong. - The empirical results show good performance across a variety of PDEs. - The proposed computational complexity of O(N) is great. - There are good ablation studies and discretization invariance results. - There is a good description of the dataset and methods, as well as the computational complexity analysis. Weaknesses: The lack of statistical significance of the results makes the benchmark comparisons somewhat less convincing. - I could not find hyperparameters, model details, or training details of the baseline models, which makes it difficult for me to evaluate if those baselines are comparable to the default configuration of the MambaNO. - The architecture may be less scalable than others; some of the ablation studies present decreased performance with increasing model capacity. If the authors could comment on whether this is caused by overfitting or model limitations, that would be great. - The proposed computational complexity is great; however, it would be great if it were supported by empirical timing experiments or comments on the efficiency of the model. Although faster, does the model need more epochs/training time compared to baselines? If the authors could comment on the training details of their different models, that would help. Overall, the proposed architecture seems to perform well and is based on theoretically sound analysis. However, the lack of reproducibility and transparency makes the results less convincing. Technical Quality: 3 Clarity: 3 Questions for Authors: How could you see adapting this model to larger, irregular systems? 
Resampling to a regular grid is likely not a feasible strategy with complex geometries or 3D problems. - I am curious about the model’s capacity to learn across multiple PDEs or physics scenarios. Most large PDE models use some sort of attention mechanism or transformer architecture. [1, 2, 3] How do you see your model fitting into this ecosystem, where a pretrained model can be fine-tuned? 1. Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu, DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training, https://arxiv.org/abs/ 2403.03542 2. Michael McCabe, Bruno Régaldo-Saint Blancard, Liam Holden Parker, Ruben Ohana, Miles Cranmer, Alberto Bietti, Michael Eickenberg, Siavash Golkar, Geraud Krawezik, Francois Lanusse, Mariel Pettee, Tiberiu Tesileanu, Kyunghyun Cho, Shirley Ho, Multiple Physics Pretraining for Physical Surrogate Models, https://arxiv.org/abs/2310.02994 3. Maximilian Herde, Bogdan Raonić, Tobias Rohner, Roger Käppeli, Roberto Molinaro, Emmanuel de Bézenac, Siddhartha Mishra, Poseidon: Efficient Foundation Models for PDEs, https://arxiv.org/abs/2405.19101 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The convolutional integration assumes a fixed grid, which makes this neural operator incompatible with irregularly discretized meshes. For practical physical applications, this could be a significant limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. As suggested, we have provided the experimental data and their p-values in the uploaded PDF, showing that the differences between the experimental results are statistically significant. In other words, these differences are not due to any random fluctuations. Note that such tables are rather redundant and often contain many repetitions since most p-values are <0.001, so we omitted the presentation of p-values in the original submission. For all competing methods, we used the best parameters suggested in their original papers. All the selected parameters and the trained models will be released on GitHub for comparisons and evaluations. For the proposed MambaNO, most of the hyperparameters, including encoder depth, upsampling factor, scanning direction, and integration layers, have been ablated, with results and practical settings given in subsection E of the supplemental material. However, we omitted the training parameters in our previous submission. As suggested, they are now provided in Table 2 of the uploaded PDF. Across the different PDEs, most training parameters are fixed in all the experiments, with some of them fine-tuned for better performance. 2. The phenomenon of occasional performance decline with increasing model capacity can be attributed to underfitting (insufficient data samples). Thanks for the reminder; we have re-conducted comparative experiments on a larger dataset (10,000 samples, with 7,000 used for the training set) among four algorithms of the same O(N) complexity, i.e., OFormer[1], GNOT[2], CNO[3], and our MambaNO, the first two of which were suggested by another reviewer. The experimental results are provided in Table 3 of the uploaded PDF. As seen, the accuracy of our model has been further boosted. Note that the efficiency metrics are also provided for a deeper comparison. 3. As suggested, we have provided the training loss curves for FNO[4], CNO, and MambaNO in Fig. 2 of the uploaded PDF. 
As seen, FNO suffers from poorer performance that results from early convergence. The convergence trends of MambaNO and CNO, both with a time complexity of O(N), are basically consistent. On deeper inspection, we can see that the performance fluctuation of MambaNO is more stable than that of CNO, with ultimately lower errors given the same number of epochs. Therefore, neither more epochs nor more training time is required for MambaNO. As suggested, more training details are given in Table 2 of the uploaded PDF file. As for better reproducibility and transparency, we indeed only provided the trained model for evaluation. At the current stage, the attachment cannot be updated, nor are we allowed to provide any link to a website. However, we promise to release the complete code on GitHub in the coming days. 4. For irregular geometries, GNOT [2] encodes shape features with an encoder into K and V, which are then combined via cross-attention. RecFNO [5] uses an MLP to learn irregular or sparse observations and reshapes the obtained features into a regular shape. These approaches provide insights into how our model can handle irregular problems. We need to use an encoder to integrate irregular shape features and transform them into a form that can be used for subsequent processing. GINO [6] combines GNO and FNO to solve 3D problems. We believe that our MambaNO can also compensate for shortcomings in irregular shapes and 3D problems in this way. Additionally, for 3D problems, designing an efficient integral scanning scheme is also an issue to be discussed. So far, we have not attempted to pre-train the model on multiple PDEs and then fine-tune it. However, these three fine-tuning related papers have broadened our perspective. Thanks. Perhaps pre-training on different PDEs would be more beneficial for our model's performance on downstream tasks. We believe this is an interesting and worthwhile direction to explore. Thanks again for your valuable insights! 5. 
Yes, fixed-grid convolutions may not be suitable for irregular discrete grids. However, as mentioned in the response to point 4, there are already related works that address irregular shapes [2][5][6]. We believe that MambaNO also has potential for handling irregular shapes, which will be a focus of our future research. [1]. Transformer for Partial Differential Equations’ Operator Learning. Transactions on Machine Learning Research, 2022. [2]. Gnot: A general neural operator transformer for operator learning. In ICML 2023. [3]. Convolutional neural operators for robust and accurate learning of PDEs. in NeurIPS 2024. [4]. Fourier Neural Operator for Parametric Partial Differential Equations. in ICLR 2021. [5]. RecFNO: A resolution-invariant flow and heat field reconstruction method from sparse observations via Fourier neural operator. International Journal of Thermal Sciences, 2024. [6]. Geometry-informed neural operator for large-scale 3d pdes. in NeurIPS 2024. --- Rebuttal 2: Comment: Due to the imminent closure of the discussion period, we kindly request the reviewer to provide us with their valuable feedback on our rebuttal. We are at their disposal to answer any further questions in this regard.
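The statistical-significance check described in point 1 of the rebuttal above can be reproduced with a paired test over per-sample test errors of two models. The rebuttal does not name the exact test, so the t-statistic below is an assumption (the p-value then comes from the t distribution with n-1 degrees of freedom):

```python
import math

def paired_t_statistic(errors_a: list[float], errors_b: list[float]) -> float:
    """Paired t-statistic over per-sample test errors of two models.
    A |t| that is large relative to the t_{n-1} distribution (p < 0.001
    in the rebuttal's tables) means the accuracy gap between the models
    is unlikely to be a random fluctuation."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

Pairing on the same test samples removes per-sample difficulty from the comparison, which is why it is the natural test when two models are evaluated on an identical test set.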
Summary: The paper introduces Mamba Neural Operator (MambaNO) for solving Partial Differential Equations (PDEs) efficiently. Unlike existing methods, which are computationally expensive and often neglect global and local feature integration, MambaNO offers O(N) complexity. It balances global and local integration through a state-space model and an alias-free architecture. MambaNO demonstrates accurate approximation of operators for PDEs and achieves state-of-the-art performance on benchmarks with fewer parameters and better efficiency. Strengths: 1. The paper is in general well-motivated and provides a promising MAMBA-based alias-free framework for solving PDEs. 2. The paper is well-written with nice visualizations and some experimental verification. Weaknesses: 1. Since alias-free is an important property of the framework, it should be moved from the appendix to the main body with a succinct summarization. Also, the paper should clarify the meaning of alias-free in the main body. Does it mean zero alias error? Can we view it as the reconstruction error for function approximation? 2. The plots (figures 1, 2, 3) need a better explanation of the setting. It is unclear from the plots what tasks they refer to and what those colorful patches/separations mean. 3. Since MAMBA is a work that mainly addresses system concerns, like memory and computation savings via kernel computation, the paper should make a very clear comparison with the original MAMBA to explain their design choices. How do the MAMBA techniques improve the neural operator setting, and why can other methods not achieve it? E.g., are there memory benefits? Furthermore, the paper makes several extensions to the original MAMBA work in Figure 1, like down/up sampling, conv integration, and the autoencoder-style architecture. The paper should better justify why they adopt this way of stacking modules. 4. 
Since the integration of MAMBA as the neural operator is not unique, the authors should perform an ablation study on alternative ways of stacking MAMBA layers. Further ablation studies, such as varying the MAMBA modules' inherent parameters, would be helpful. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Could the authors clarify the main applications of this type of neural operator? And do they face similar constraints to LLM models? In addition, why are autoregressive models like MAMBA more helpful than standard ML models? What are the benefits of causal dependency in these applications? 2. Is the main application about the function approximation problem or generic PDE solving? If it includes generic PDE solving, the authors should also provide experiments on that. Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: 1. In general, for a NeurIPS submission, the authors should frame the paper toward generic computer scientists rather than specialists in this field. So, they should give a clear definition, application, and motivation of the problem. The current intro and background are written in either a very vague (related work) or technical (mamba operator setting) manner. 2. I am not an expert in neural operator learning. I would leave the judgment of effectiveness (e.g. figure 3) and novelty to other reviewers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Alias-free is truly an important property of the framework. However, two reasons led us to move it into the appendix. First, our innovation lies in proposing MambaNO, which follows the alias-free property. The alias-free framework was introduced by other work [1], not ours. Therefore, we put more effort into clarifying our own ideas, with most of the detailed discussions on “alias-free” left in the appendix. Note that a brief introduction of this framework was already given in the main text. Second, an alias-free NO means it is a representation-equivalent neural operator (ReNO), enjoying discrete-continuous equivalence. Therefore, we have also provided an explicit proposition claiming that MambaNO is actually a ReNO. This implies that the discrete implementations within the neural network are equivalent to the underlying continuous operations, minimizing aliasing errors as much as possible. As you stated, it does mean zero alias error and can be viewed as the reconstruction error for function approximation. 2. Most previous works [2][3][4] have used fluid-like patterns to visualize the variables of PDEs. In this work, we simply follow this typical scheme to demonstrate the performance of our methods. As suggested, more explanations will be given in a later version. 3. Note that a new Mamba variant is not our core effort. Instead, we attempt to propose a new neural operator for PDEs. Along this line, we found that the kernel integral in traditional NOs bears certain similarities to state-space models, leading to the proposal of MambaNO. Compared with the original NOs, such as FNO[2] and CNO[3], rather than MAMBA, the proposal enjoys both local and global dependencies. The advantages of memory efficiency and linear computation are also inherited. 4. In fact, we have provided ablation experiments on the number of stacked MAMBA layers, as shown in Table 6 of the supplementary material. 
Besides, the ablations of varying inherent parameters by using different scanning mechanisms are also given, as shown in Table 5 of the supplementary material. As suggested, we will move this content into the main text for clarity. 5. As in most previous works [1][2][3][4], the primary application of NOs is solving PDEs. The constraints that need to be considered during application include, but are not limited to, the initial and boundary conditions of PDEs [5], continuous-discrete equivalence in band-limited function spaces [1], and discretization invariance [2]. We believe these constraints are more theoretically grounded, different from LLMs, which are more data-driven. To the best of our knowledge, LLMs have never been used in solving PDEs. The performance of MambaNO stems from multi-scale information and a scanning mechanism that handles PDE boundary and initial conditions. We believe that causal dependency is essential for the interaction between boundary and initial conditions. 6. The primary application is the generalized solution of PDEs. Considering that 1D PDEs are relatively simple, we provide experimental data and visualization results for eight representative 2D PDEs in the original text to demonstrate MambaNO's potential for solving PDEs. Due to time and computational resource constraints, irregular shapes and higher-dimensional problems remain to be explored. [1]. Representation equivalent neural operators: a framework for alias-free operator learning. in NeurIPS 2024. [2]. Fourier Neural Operator for Parametric Partial Differential Equations. in ICLR 2021. [3]. Convolutional neural operators for robust and accurate learning of PDEs. in NeurIPS 2024. [4]. Transformer for Partial Differential Equations’ Operator Learning. Transactions on Machine Learning Research, 2022. [5]. Pdebench: An extensive benchmark for scientific machine learning. in NeurIPS 2022. --- Rebuttal Comment 1.1: Comment: I am not an expert in Neural ODE. 
AC could downweight my opinion on this work. Since the author sufficiently addresses my concerns, I will raise my score. --- Rebuttal 2: Comment: We sincerely thank the reviewer for acknowledging our response and for raising our score.
Rebuttal 1: Rebuttal: At the outset, we would like to thank all four reviewers for their thorough and patient reading of our article. Overall, four reviewers have all complimented some aspects of our study. Specifically, Reviewer 1: The paper is well-motivated and presents a promising MAMBA-based alias-free framework for solving PDEs, and it is well-written with effective visualizations and experimental verifications. Reviewer 2: The paper provides a strong theoretical justification for the architecture, with empirical results showing good performance across a variety of PDEs. The proposed computational complexity of O(N) is impressive, and the paper includes good ablation studies and discretization invariance results, as well as a detailed description of the dataset, methods, and computational complexity analysis. Reviewer 3: The paper is exceptionally clear, making it easy to follow and effectively explaining the complexity reduction benefits brought by the Mamba structure. It achieves impressive results, demonstrating the advantages of the Mamba structure and evaluates the method on datasets with varying resolutions, showing that the proposed approach maintains consistent performance across different resolutions. Reviewer 4: The paper introduces a new approach to neural operators by adapting the Mamba architecture, which has shown promise in other domains. The combination of global (Mamba integration) and local (Convolution integration) operators is innovative in this context. Firstly, the authors provide a solid theoretical analysis, including proofs of representation equivalence and approximation capabilities, adding depth to the empirical results and helping to understand why the proposed method works. Secondly, the evaluation is thorough, covering a wide range of PDE types and comparing against multiple state-of-the-art baselines. The inclusion of both in-distribution and out-of-distribution tests strengthens the claims of generalization ability. 
The reported improvements in accuracy and efficiency are substantial across different PDE types, which is impressive given the diversity of the benchmarks. Thirdly, by adhering to an alias-free framework, the authors address an important issue in neural operators, potentially improving the model's stability and generalization capabilities. Additionally, the O(N) complexity of the Mamba integration is a significant advantage, especially for high-dimensional problems. Moreover, the reviewers' criticisms and constructive suggestions will enable us to improve the quality of our article. If our paper is finally accepted, we will incorporate all the changes that we outline below in a camera-ready version (CRV) of our article. As allowed by the conference, we are uploading a one-page PDF that contains figures and tables on numerical experiments which support our arguments below. With this context, we proceed to answer the points raised by each reviewer individually below. Yours sincerely, Authors of "Alias-Free Mamba Neural Operator". Pdf: /pdf/8bf280422f5d3fe8ed2bc0b937f3ff10671eeedf.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Automated Label Unification for Multi-Dataset Semantic Segmentation with GNNs
Accept (poster)
Summary: The paper proposes a multi-dataset training approach that works on top of a unified output taxonomy mapped onto individual dataset-specific taxonomies. First, a multi-head model is trained. Then, a unified taxonomy is automatically constructed by merging semantically identical classes, determined based on segmentation performance on individual classes when a connection is made. After that, GNNs are used to further refine the mapping from the individual dataset-specific classes to the unified taxonomy. The input nodes of the GNN are dataset-specific labels and unified classes. Dataset-specific labels are first enriched with a textual description provided by ChatGPT, which is then encoded with LLaMA-2 and combined with a trainable dataset embedding to create node input features. The addition of the trainable dataset embedding enables the model to learn the different meanings of classes with the same name in different datasets. Finally, the training process alternates between fine-tuning the segmentation network and refining the mappings with the GNN. Strengths: 1. Well-written and clear. 2. Achieved state-of-the-art results on the WildDash 2 benchmark, demonstrating improvements over previous work. 3. Introduces a novel approach for label unification. Weaknesses: 1. The paper primarily focuses on individual benchmark performance rather than the quality of the recovered taxonomy. In practical scenarios, it's crucial to reason within a unified label space. Mistakes such as equating MPL:tunnel to ADE:fireplace could mean the difference between advancing or halting in a real-world scenario. 2. The model achieving state-of-the-art performance on WildDash is trained exclusively on driving datasets, unlike other methods submitted that were trained on the full RVC collection (including ADE20K and COCO), potentially impacting performance in road driving contexts. 3. 
There is significant variation in training design choices in Table 2, making it challenging to isolate the impact of each individual choice (e.g., backbone, selection of dataset-specific classes, etc.). 4. The initialization of the adjacency matrix appears to be a crucial step for model performance, yet it is not discussed in the main paper. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What is the size of the created taxonomy in different tables (e.g. in Table 5)? Furthermore, Table 5 could also include the performance after initializing the adjacency matrix, without additional GNN training. 2. How does the quality of the created taxonomy compare to the manual taxonomy provided by [5]? Are all relationships recovered? For example, IDD:tunnel is not connected to MPL:tunnel in Figure 5. 3. How are unseen datasets mapped to the unified taxonomy in Table 3? Is it done manually or automatically? 4. Is there any advantage to using the formulation in Eq. 6 as opposed to NLL+ [4]? 5. How much does the trained taxonomy differ from the initialized taxonomy? Do all unified classes "survive" after training, or do some become obsolete? 6. What exactly is the zero-shot model on WildDash 2 in Table 4? 7. How long does it take to initialize the adjacency matrix? What exactly is the difference compared to [52]? Are there similarities to [5], such as pairwise merging? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: 1. Focus on individual benchmark performance, rather than the process and the quality of the produced taxonomy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful comments regarding our manuscript. Below are our responses to your questions and concerns. **W1: Focus on benchmark performance vs. taxonomy quality** We acknowledge the importance of reasoning within a unified label space. Our focus on multi-dataset training aims to automatically integrate labels across datasets. Evaluation on these datasets is a straightforward approach. To assess the quality of our unified label space, we conducted an indirect evaluation using WD2, demonstrating its effectiveness in unseen scenarios. **W2: Impact of training exclusively on driving datasets** We recognize the potential concern regarding dataset bias. It's worth noting that other methods on the WD2 leaderboard also use datasets beyond RVC. In the revised version, we will include a model trained on the full RVC collection for a more comprehensive comparison. **W3: Difficulty isolating individual training design choices** We apologize for the lack of clarity in Table 2. Due to long training times and limited access to open-source methods, comprehensive comparative experiments are challenging. Our primary comparison highlights the significant improvement our model offers over the single dataset and multi-SegHead baselines. **W4: Lack of discussion on adjacency matrix initialization** We regret this omission and have conducted ongoing experiments to address it. Results indicate that our proposed initialization method (Algorithm 2) provides a slight improvement over randomized initialization on a 3ds setting, suggesting a relatively minor impact on overall performance.

| Method | \|L\| | CS | SUN | CV | Mean |
|--|--|--|--|--|--|
| Randomized adjacency matrix | 54 | 78.0 | 43.1 | 82.4 | 67.8 |
| Without GNN training | Pending | Pending | Pending | Pending | Pending |
| Ours | 54 | 78.4 | 43.3 | 82.6 | 68.1 |

--- **Q1: Size of the created taxonomy in different tables and performance after initializing the adjacency matrix.** Thank you for your suggestion.
We agree that including the initialization of the adjacency matrix in Table 5 would enhance reader understanding. However, due to time constraints, we cannot currently provide results for all seven datasets. These findings will be included in the revised version of the paper. Comparative experiments in a smaller setting were conducted, as addressed in our response to Weakness 4. Below is a summary of label space sizes for the methods, which we will incorporate into the revised version.

| Variants | \|L\| |
|--|--|
| 1 | 448 |
| 2 | 329 |
| 3 | 226 |
| Ours | 217 |

**Q2: Comparison of created taxonomy with the manual taxonomy.** To compare our constructed taxonomy with the manual taxonomy provided by [5], we adopted [5] as the ground-truth standard, assessing whether categories from all datasets were appropriately linked. Categories were grouped into Merged, Single, and Split classes, each evaluated as Correct, Partially Correct, or Wrong. We introduced *Wrong but reasonable* to accommodate justified connections that were nonetheless classified as wrong. For instance, while the Cityscapes dataset marked the Traffic sign (back) as ignored, [5] connected it based on expert knowledge. The summary of our 7ds learned taxonomy is as follows:

| | Correct | Partial Correct | Wrong | Total Num | Wrong but reasonable |
|--|--|--|--|--|--|
| Merged | 137 | 108 | 47 | 292 | 17 |
| Single | 38 | 12 | 16 | 66 | 6 |
| Split | 1 | 3 | 86 | 90 | 23 |
| Total Num | 176 | 123 | 149 | 448 | 46 |

An initial taxonomy calculated using Algorithm 2 yields:

| | Correct | Partial Correct | Wrong | Total Num | Wrong but reasonable |
|--|--|--|--|--|--|
| Merged | 131 | 115 | 46 | 292 | 6 |
| Single | 44 | 7 | 15 | 66 | 3 |
| Split | 0 | 0 | 90 | 90 | 20 |
| Total Num | 175 | 122 | 151 | 448 | 29 |

Notably, our constructed unified label space contains only 217 categories, representing a 6% reduction from the 231 categories formed by Algorithm 2's initial adjacency matrix, while achieving nearly consistent quality in taxonomy recovery.
**Q3: Mapping of unseen datasets to the unified taxonomy.** As indicated in Lines 212-214 of our paper, the mapping process is conducted automatically. We identify the optimal mapping by comparing the unified label categories predicted by our model on the training set of the unseen datasets against the ground truth labels. **Q4: Advantages of using the formulation in Eq. 6 compared to NLL+ [4].** We regret that, because [4] is not open-sourced, our reproduced version of NLL+ does not converge under our training framework, prompting us to utilize Eq. 6 for training instead. **Q5: Differences between trained taxonomy and initialized taxonomy.** For the differences in the label space, please refer to Q2 and the global rebuttal. The initial unified label space had 231 categories, but some became obsolete during training, resulting in 217 categories in the final 7ds-trained model. Our paper (L487-480) explains the method used to eliminate inactive connections. Specifically, during the final training phase, we evaluate the model on the training dataset to remove inactive connections and unlinked nodes. **Q6: Explanation of the zero-shot model on WildDash 2.** The zero-shot model on WildDash 2, referenced in Table 4, is the model trained using our 7 datasets (i.e., the "ours" model in Table 2). We evaluated this model on the WildDash 2 dataset without any additional fine-tuning. **Q7: Time required to initialize the adjacency matrix and comparison with [52].** Using four 80G A100 GPUs, initializing the adjacency matrix takes approximately two days to train the Multi-SegHead model, followed by half a day of cross-evaluation across multiple datasets. Obtaining the initial adjacency matrix using Algorithm 2 takes around one hour. In comparison to [52], we modified the cost calculation to utilize the IoU metric. This approach is only applicable to category merging and shares similarities with the partial merge method in [5].
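To make the IoU-based matching described in Q3 and Q7 concrete, here is a minimal, hypothetical sketch (not the authors' code; the function name, signature, and toy arrays are illustrative assumptions): each dataset-specific ground-truth class is assigned to the unified class with which it achieves the highest IoU over the training predictions.

```python
import numpy as np

def automatic_label_mapping(pred_unified, gt_labels, n_unified, n_gt):
    """Map each dataset-specific GT class to the unified class with the
    highest IoU. Illustrative sketch only; the real pipeline operates
    over whole training sets, not a single flattened label array."""
    mapping = {}
    for g in range(n_gt):
        gt_mask = gt_labels == g
        if not gt_mask.any():
            continue  # class absent from this (toy) sample
        best_u, best_iou = None, 0.0
        for u in range(n_unified):
            pred_mask = pred_unified == u
            union = np.logical_or(gt_mask, pred_mask).sum()
            iou = np.logical_and(gt_mask, pred_mask).sum() / union if union else 0.0
            if iou > best_iou:
                best_u, best_iou = u, iou
        mapping[g] = best_u
    return mapping
```

For example, with `pred_unified = np.array([0, 0, 1, 2, 2, 2])` and `gt_labels = np.array([0, 0, 0, 1, 1, 1])`, dataset classes 0 and 1 map to unified classes 0 and 2 respectively.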
We hope this clarifies your questions and appreciate your feedback. --- Rebuttal Comment 1.1: Title: Feedback on the rebuttal Comment: I would like to thank the authors for the thorough feedback and additional experiments. Still, even after reading it, I am keeping my original score. The new results show that even though the proposed method improves the results on individual benchmarks, it does so at the expense of relation discovery quality (Q2, W4, and corresponding answers by the authors). This is connected to my main concern (W1), and that is that individual benchmark performance is not a good proxy for evaluating the task of label unification. Furthermore, the WD2 performance is named as one of the strengths of this approach, but at this moment it is not clear if the improvements are due to the methodological contributions of this paper or the choice of training data. I am inclined to believe that it is due to the omission of COCO and ADE20K classes in the final taxonomy. This is also suggested by the significantly worse performance of the zero-shot model trained on the Vistas dataset, which should be enough for a good result on the WD benchmark. With regards to W3, previous work comes to similar conclusions, so that is not enough of a contribution. --- Rebuttal 2: Comment: We are pleased to inform you that we have obtained the complete experimental results regarding the initialization, answering W4 & Q1. Our experiments were conducted on three datasets (Cityscapes, SUN RGB-D, and CamVid) using two 32GB V100 GPUs. After completing 50,000 multi-SegHead training iterations, we performed 50,000 segmentation network training iterations for each group using the same multi-head model parameters. For the experiments involving GNN, an additional 30,000 GNN training iterations were included (without updating the segmentation network parameters).
The results indicate that our GNN training shows a performance improvement compared to the results obtained using Algorithm 2 to initialize the adjacency matrix without GNN training. Algorithm 2 provides a good starting point for GNN training and contributes to the overall performance enhancement.

| Method | \|L\| | CS | SUN | CV | Mean |
|--|--|--|--|--|--|
| Randomized adjacency matrix with GNN training | 54 | 78.0 | 43.1 | 82.4 | 67.8 |
| Initialized adjacency matrix without GNN training | 56 | 78.0 | 42.0 | 82.6 | 67.5 |
| Ours (initialized adjacency matrix with GNN training) | 54 | 78.4 | 43.3 | 82.6 | 68.1 |

--- Rebuttal 3: Comment: Dear Reviewer, Thank you for your valuable feedback. We agree that evaluating a universal taxonomy, rather than merely the benchmark performance of downstream tasks, is a crucial aspect of this research, particularly in terms of relation discovery quality. However, we currently face a challenge due to the lack of a dedicated benchmark specifically designed for assessing taxonomy quality. While the manually curated taxonomy provided by [5] serves as a helpful reference, it also incorporates expert knowledge that might not be well aligned with the visual features present in the dataset images. Furthermore, there is currently an absence of well-established metrics for this type of evaluation. Given these limitations, this work has primarily focused on evaluating model performance across different datasets. Moving forward, we are committed to exploring ways to better evaluate relation discovery quality and to address the concerns you have raised. We appreciate your insights and believe they are instrumental in guiding our future research endeavors. Regarding the WD2 performance, our zero-shot model achieved SOTA results compared to other zero-shot models on the leaderboard, which we believe highlights the contributions of our approach. It's important to note that the Vistas dataset lacks annotations for the pickup, van, and autorickshaw categories present in WD2.
Additionally, we did not utilize the relabeled data provided by WD2 for these categories in the MVD, Cityscapes, and IDD datasets during training. Therefore, given the dataset bias, it is expected that the zero-shot model would perform lower on WD2 compared to models specifically trained on the WD2 dataset. Thanks!
Summary: This paper introduces a method using GNN to automatically construct a unified label space across multiple datasets, addressing the issue of label space conflicts in multi-dataset semantic segmentation. The method eliminates the need for manual re-annotation or iterative training, significantly enhancing the efficiency and effectiveness of model training. Experiments show that the method is effective to some extent. Strengths: 1. The motivation behind this approach is clear, and automatically constructing a unified label space across multiple datasets in multi-dataset semantic segmentation is meaningful. 2. The authors use numerous illustrations and provide a detailed description of the experimental parameters, making it easy to follow and reproduce. Weaknesses: 1. This paper proposes a new method, but in my view, it is merely a collection of tricks. For example, $d_i$ is introduced in Equation (1), but the authors neither explain its role through ablation experiments nor from a methodological perspective. It is recommended that the authors provide a more detailed explanation of the motivation for each component of the method. 2. Although the motivation for this work is multi-dataset semantic segmentation, the experimental comparison methods do not include the latest methods from the multi-dataset semantic segmentation community, such as [1,2], etc. 3. This paper only uses HRNet-W48 as the backbone. The authors are encouraged to further explore the scalability of the proposed method on transformer-based networks. 4. From Table 5, it appears that the improvement in results due to label description is not significant. However, using GPT to complete this step inevitably incurs a substantial consumption of time and resources, which raises questions about whether it is worth it. Additionally, the authors should discuss the impact of the proposed method on time and space costs. [1] Gu X, Cui Y, Huang J, et al.
Dataseg: Taming a universal multi-dataset multi-task segmentation model. NeurIPS 2023. [2] Wang L, Li D, Liu H, et al. Cross-dataset collaborative learning for semantic segmentation in autonomous driving. AAAI 2022. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed in the Conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our work. Below, we summarize each of your questions and provide detailed responses. **Q1 & Q2: Concerns about the lack of detailed explanation and the omission of the latest methods** A: We respectfully disagree with your assessment. Our approach is fundamentally different from existing methods, including the two methods you mention. While existing methods require manual re-labeling (e.g., MSeg [23]), expert knowledge (e.g., NLL+ [4]), or are limited to only two datasets (e.g., Auto Univ. [6]), our method **automatically** constructs **a unified label space** across multiple datasets without human intervention or iterative processes. Furthermore, we are the first to leverage Graph Neural Networks for this task. The papers you mentioned do not construct a unified label space. Instead, they utilize different segmentation heads (with different weights) for evaluation across various datasets. For instance, [1] employs a text encoder to encode label categories into corresponding embedding spaces for each dataset, predicting within dataset-specific embedding spaces. This method also struggles with handling categories that share the same name but have different annotation standards across datasets (e.g., IDD "curb" vs. MPL "curb"). This challenge is precisely why we introduced $d_i$ in Equation (1), allowing nodes with the same name from different datasets to obtain distinct node features, thereby differentiating these nodes. On the other hand, [2] uses dataset-specific batch normalization and heads, which can be heavily influenced by dataset biases. In real-world inference scenarios, the outputs from different segmentation heads may conflict, complicating practical application. Our method, however, consistently predicts across different datasets using a unified label space (with the same weights), employing different boolean label mapping matrices solely for performance evaluation.
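As a toy illustration of the boolean label-mapping evaluation just described (hypothetical code, not the authors' implementation; all names and shapes are assumptions): a boolean matrix `M` with `M[d, u] = True` when unified class `u` maps to dataset class `d` lets one score a single unified prediction in any dataset's label space by summing the corresponding unified probabilities.

```python
import numpy as np

def to_dataset_space(unified_prob, M):
    """Project unified-class probabilities into a dataset-specific
    label space. unified_prob: (n_pixels, n_unified); M: boolean
    (n_dataset_classes, n_unified). Illustrative names only."""
    return unified_prob @ M.T.astype(unified_prob.dtype)

# One pixel, three unified classes; this toy dataset merges unified
# classes 0 and 1 into its first class and keeps class 2 separate.
unified_prob = np.array([[0.1, 0.6, 0.3]])
M = np.array([[True, True, False],
              [False, False, True]])
dataset_prob = to_dataset_space(unified_prob, M)  # ≈ [[0.7, 0.3]]
```

Taking the argmax of `dataset_prob` then yields the dataset-specific label for evaluation, while the same unified weights serve every dataset; at deployment the unified prediction is used directly and no mapping is needed.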
In practical inference, this label mapping is unnecessary. In contrast, methods [1-2] require prior knowledge of the target label space for predictions, which does not align with our task setup and makes them unsuitable for comparison. **Q3: Suggestion to explore scalability on transformer-based networks.** A: Thank you for your valuable suggestion. We acknowledge the lack of experiments exploring the effectiveness of our method across different models. Training on seven datasets requires approximately one week on four 80G A100 GPUs, which limits our ability to provide results using transformer-based networks at this time. However, we plan to investigate the scalability of our method with transformer-based architectures in future work. **Q4: Concerns about the significance of improvements from label descriptions and resource consumption.** A: We appreciate your feedback regarding the use of GPT for generating text descriptions. It is important to clarify that the process of generating these descriptions using the text encoder is not conducted in real-time; it involves a one-time inference step. The time cost for this step is less than one minute, and the space requirement for preserving text features is only 3.5MB, which is negligible compared to the overall training costs. Furthermore, after training, the GNN component is discarded, meaning it does not introduce any additional overhead during the inference phase. We hope these clarifications address your concerns effectively. --- Rebuttal Comment 1.1: Comment: I acknowledge the authors' efforts in the rebuttal and have read it. The paper itself is interesting and adequate from a technical point of view. However, my main concern is the lack of comparisons and discussion with previous SOTA in the semantic segmentation community, which hinders the evaluation of this paper. I believe that high time complexity should not be a reason to avoid comparisons.
On the contrary, it could introduce new challenges for practical applications. I suggest the authors include Transformer-based comparison methods and optimize the time complexity. Therefore, I will maintain my score. --- Rebuttal 2: Comment: Dear Reviewer, Thank you for your thoughtful feedback and for acknowledging our efforts in the rebuttal. To the best of our knowledge, the SOTA method we refer to is the one presented in IJCV 2024 [5], over which our approach demonstrated significant performance improvements on both 7ds and WD2 (51.3 vs. 56.4 on 7ds and 46.9 vs. 50.2 on WD2). Could you please clarify if the SOTA methods you are referring to are those mentioned in your initial review? If so, we have already emphasized that these methods are not directly comparable within our *dataset-agnostic* setting, as they only provide *dataset-aware* predictions and are unable to provide predictions in a unified label space. If you are referring to other methods, we would appreciate it if you could provide some examples for our reference. We would also like to emphasize that we did not intend to avoid comparisons. Rather, it was challenging to produce the comparison results within the limited time frame. We plan to include additional experimental data in the revised paper, such as results on WD2 using the full RVC collection for training and on 7ds with the initialized adjacency matrix without GNN training. As for your suggestion to include Transformer-based methods, we greatly appreciate this valuable input. We will certainly consider exploring and discussing these approaches in a future version of the paper. Thanks! We look forward to any further feedback. --- Rebuttal Comment 2.1: Comment: What I mean by SOTA methods is that they are transformer-based approaches. The proposed method brings considerable improvements with HRNet or SNp as the backbone. However, my main concern is that HRNet and SNp are simple backbones proposed years ago.
Therefore, I remain skeptical about whether the improvements still hold when the backbone is replaced by more complex and effective ones, such as a Transformer. The authors are suggested to replace HRNet with other SOTA backbones, such as a Transformer, to validate the generalization and effectiveness of their proposed method. --- Reply to Comment 2.1.1: Comment: Dear Reviewer, Thank you for highlighting this important consideration. We fully agree that exploring Transformer-based methods could further validate the generalization and effectiveness of our proposed approach. However, we have currently chosen to use CNN models for the following two reasons, with plans to explore Transformer-based models in future work: * For consistency and fairness in comparison with other multi-dataset semantic segmentation training methods, we have utilized CNN models, as these are also employed by the methods we compared against. The consistency of using CNNs allowed us to establish a solid baseline and ensure that our comparisons were on equal footing. * As you mentioned, Transformer-based methods have demonstrated significant improvements over traditional CNN backbones like HRNet and SNp. However, we would like to highlight that, even with a CNN backbone, our current method has achieved state-of-the-art results on the WD2 benchmark, surpassing other approaches on the leaderboard, including those that employ Transformer-based methods [26][44]. We anticipate that incorporating a Transformer backbone could likely yield even greater improvements. We are committed to further enhancing our approach by exploring Transformer-based models in future research, and we appreciate your valuable feedback on this matter.
Summary: This paper introduces a method to automatically match and unify different label spaces for semantic segmentation. This allows training a single model on multiple, differently annotated datasets. The authors show that this can yield benefits in overall model performance, also compared to other multi-label approaches. The method holds the current best result on the public WildDash2 benchmark. Strengths: - This method is the current SOTA on the public leaderboard of WildDash2, which means the proposed method holds up against a very rigorous testing setup. - With the current knowledge that data scale is one of the key ingredients in well-generalizing models, progress on multi-dataset training has high potential for impact. For example, the current leader of all ScanNet 3D segmentation leaderboards is also a method based on multi-dataset training. - The presentation is very clear and structured for most parts of the paper. Weaknesses: - It is currently unclear to me based on what information the graph connectivity is learned. As the method is described, the name of the annotated label is the only information that is put into the nodes, and all matching is based on that plus a hallucinated description from GPT. However, the authors mention in lines 199-200 that the method would leverage visual similarity between annotations. It is not clear to me how this is facilitated. - All results are based on an architecture from 2019. It is lucky that WildDash2 has no stronger competitors on the leaderboard, but it remains unclear whether the method would yield the same improvements for a more competitive base architecture (e.g., mIoU of 77% on Cityscapes is a very bad baseline performance in Table 2. The achieved “improved” 80.7% is on par with DeepLabv3, which was SOTA in 2018). - The method underperforms on more general indoor/outdoor datasets.
Both on ADE and COCO, MSeg is better (even when predicting fewer classes, and in contrast to the bolded numbers, which somehow ignore MSeg) - I don't think it is well discussed how hallucinating label descriptions through ChatGPT can introduce mistakes into the methodology. I don't know about all of the datasets, but for example Cityscapes provides its own descriptions of the labels, which are the descriptions that label workers also use to label the data (see https://www.cityscapes-dataset.com/dataset-overview/#labeling-policy ). LLMs that are simply prompted with the label name can easily make up wrong descriptions that are not calibrated to the labeling policy of the annotation. Technical Quality: 3 Clarity: 3 Questions for Authors: see above. The points that can hopefully easily be clarified by the authors are the question of how visual information is used in the matching, what to do about GPT outputting wrong definitions, and why the method underperforms on more general datasets with larger label spaces. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The broader impact section does not at all discuss the potential societal impact of the work. While it is true that the method reduces the re-labeling effort, it still requires segmentation labels, which are a much higher cost compared to re-labeling existing segmentation labels. What should rather be discussed is the societal impact of deploying a method with an automatically generated label space, and what the implications of this are for safety and certification when trying to deploy this method, e.g., in automated driving. - The discussed limitations are OK. It would be interesting if the authors could further comment on ways to potentially get around the scaling problem of currently requiring all datasets to be loaded on one node, and every dataset being sampled for every batch. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for highlighting the areas that require clarification in our paper. We appreciate your insights, which allow us to improve our manuscript. Below, we summarize each of your questions and provide detailed responses. **Q1: Clarification on how graph connectivity is learned and visual similarity is utilized.** A: Thank you for your question. The node features in our approach contain only textual characteristics. The visual information is provided by the segmentation model during training. Specifically, the graph neural network outputs embeddings and an adjacency matrix that are utilized in the segmentation network's inference process. For the image training samples, the segmentation network encodes visual embedding features. The GNN makes predictions by classifying and mapping these visual features to labels. The loss function is computed using formulas (4-6), and the loss value is backpropagated through the GNN model to update its parameters. **Q2: Concerns about using an older architecture and its competitive performance.** A: We appreciate your observations regarding the architecture used in our studies. To ensure a fair comparison, we selected this architecture based on the configuration outlined in MSeg [23]. Training on seven datasets is indeed time-consuming (approximately one week on four 80G A100 GPUs), which limits our capability to evaluate more competitive architectures at this stage. However, our results (Table 11) from five datasets show potential for improvement, as we have enhanced Cityscapes' mIoU to 82.2%. This indicates that there is room for further optimization in our method. We will explore more competitive base architectures, such as transformer-based models, in our future research. **Q3: Performance evaluation relative to the MSeg approach on indoor/outdoor datasets.** A: Thank you for raising this question.
It's worth noting that MSeg merges categories and omits some difficult classes, which simplifies the learning task. Many of these difficult classes have IoU values lower than the overall mean IoU (e.g., for ADE, the mIoU of omitted/merged classes is 37.5 vs. 42.0 over all categories; for COCO, it's 37.1 vs. 46.7). MSeg's merged categories also take label alignment across different datasets into account, which eases the learning process and thus may not represent a fair comparison to our method. We have provided examples of merged categories and their IoUs to help illustrate this point:

| MSeg Label | ADE Label | IoU |
|---|---|---|
| **table** | coffee table | 55.9 |
| | table | 50.6 |
| **terrain** | land | 0.4 |
| | earth | 31.1 |
| | field | 22.1 |
| | grass | 64.7 |
| | sand | 36.4 |
| | dirt track | 6.4 |
| **mountain_hill** | mountain | 56.2 |
| | hill | 8.9 |
| **car** | car | 84.3 |
| | van | 35.9 |
| **stairs** | stairs | 28.5 |
| | stairway | 28.9 |
| **railing_banister** | railing | 26.4 |
| | bannister | 10.0 |
| **unlabeled** | computer | 66.3 |
| | signboard | 36.3 |
| | monitor | 45.5 |
| | crt screen | 5.0 |
| | screen | 63.9 |
| | canopy | 8.8 |
| | plant | 44.0 |
| | bar | 23.8 |
| | step | 11.7 |

| MSeg Label | COCO Label | IoU |
|---|---|---|
| **building** | house | 30.0 |
| | roof | 52.7 |
| | building-other-merged | 46.6 |
| **floor** | floor-wood | 36.0 |
| | floor-other-merged | 47.8 |
| **table** | table-merged | 38.3 |
| | dining table | 75.8 |
| **terrain** | grass-merged | 37.6 |
| | sand | 14.5 |
| | dirt-merged | 22.5 |
| **wall** | wall-brick | 24.9 |
| | wall-stone | 54.9 |
| | wall-tile | 30.3 |
| | wall-wood | 16.0 |
| | wall-other-merged | 43.1 |
| **vegetation** | flower | 10.9 |
| | tree-merged | 49.5 |

**Q4: Potential issues with hallucinated label descriptions through ChatGPT.** A: Thank you for your invaluable input regarding the potential inaccuracies in generated label descriptions. We agree that using official label descriptions from datasets like Cityscapes greatly enhances the reliability of our methodology.
Given that some datasets may lack corresponding label descriptions or have inconsistent language styles, this can pose challenges for the model. Therefore, in the future, we plan to use the label descriptions provided by official datasets as prompts, employing GPT models to generate more accurate and stylistically consistent label descriptions to help the model learn better. This adjustment may help improve the model's learning process by reducing the risks associated with hallucinated descriptions. **Limitation: Discussion on societal impacts and scalability challenges.** A: Thank you for the addition to the limitations section. Indeed, compared to unsupervised or weakly supervised methods, we still require complete annotated data. However, our approach leverages the available annotation information in existing datasets to reduce the reliance on annotated data. Errors in the fully automated construction of a unified label space do present safety risks for autonomous driving tasks, so we recommend incorporating a manual review mechanism for generated labels to ensure accuracy and mitigate these concerns. Regarding scalability, we concur that it's not necessary to load all datasets on a single node. Instead, as long as the computations for weight gradient updates include representations from all datasets, we can distribute this across multiple nodes, allowing more efficient processing. This approach can help facilitate scaling while avoiding bottlenecks in performance. We will include additional descriptions of these limitations in the final version of the paper, particularly focusing on safety considerations for autonomous driving scenarios. We hope that these responses provide the clarity you were seeking and address your concerns adequately.
We sincerely appreciate your constructive feedback, which is invaluable in improving our manuscript. --- Rebuttal Comment 1.1: Title: read rebuttal Comment: Dear authors, thank you for your rebuttal, which I have read. Ultimately my view on this paper does not change. There seems to be a limitation of hallucinating definitions for labels and matching them purely on textual information, where visual information for matching is only propagated during training, meaning unseen datasets are aligned purely on GPT-generated definitions. I don't fully understand the comment about MSeg. Are the authors suggesting mIoU values are compared between methods that are measured over different sets of labels? That would be strange. In any case, I can accept that MSeg does not follow a standard protocol for these datasets. Overall I lean towards keeping my original score of accept, pending the discussion with other reviewers. --- Rebuttal 2: Comment: Dear Reviewer, Thank you for your thoughtful feedback. We acknowledge the limitation of our current approach, where label descriptions generated by GPT are used to construct node features purely based on textual information. In future work, we plan to address this limitation by incorporating official label descriptions and introducing mask image features corresponding to the label categories to enhance the node features. Regarding your question about MSeg, we would like to clarify that MSeg performs label merging or omission for certain datasets (e.g., ADE, COCO), while for others (e.g., CS, SUN), it uses the same evaluation categories as we do. For the datasets where a different evaluation protocol was used, direct comparisons are not possible, but this does explain the higher mIoU values observed in their results. However, we believe that on datasets where the same evaluation protocol is used, a fair comparison can be made. Thank you again for your detailed feedback and for maintaining your positive view of our work.
Summary: The paper presents an approach to automatically train a semantic segmentation network using multiple datasets with individually different class label policies. A unified label space emerges automatically during training, without direct manual label mapping definitions, by using a graph neural network (GNN) which is guided by textual descriptions of each label in unison with a multi-head segmentation head. The training alternates between GNN and segmentation network training to jointly optimize the label mapping for multiple datasets as well as the actual image segmentation task itself. Multiple experiments evaluate the approach to unify 7 mixed datasets (CS, MPL, SUN, BDD, IDD, ADE, COCO) as well as 5 road-scene datasets (CS, IDD, BDD, MPL, WD2). Performance is compared both by training single heads and by using the multi-seg. head, as well as by benchmarking on WD2. Additional ablation experiments showcase the value of textual label descriptions for the unification process. In summary, this approach allows for joint training with multiple mixed segmentation datasets without additional manual intervention. Strengths: *) Generation of segmentation label space mappings to automatically create a unified label space. *) Great overall performance for each tested dataset *) Record benchmark results for Wilddash2 *) Many experiments to showcase the approach on multiple driving-scene datasets as well as some more generic datasets. *) Supplemental contains the full training code to recreate the experiments Weaknesses: *) Some details are omitted (see Questions), making it hard to judge the extent/validity of some claims (e.g. robustness/generalization). *) Multi-head training and evaluations favor the accumulation of per-dataset bias. The cross-validations are a good start but there is no evaluation of "new"/out-of-distribution labels from one dataset within the other.
This amplifies dataset-specific training (each head is optimized to work for one specific dataset); thus models are trained to "solve" datasets rather than the task at hand. L281 (tunnel == fireplace) is a great observation but also shows the weakness of the approach. *) "Splitting" of source labels is not supported; the multi-heads are trained in isolation, so if a label in one dataset requires splitting in another dataset, the respective heads are isolated (one has the split, the other doesn't). During inference of an unseen image this can result in only a single merged representation instead of the more detailed split. This effect is underrepresented during evaluation as dataset bias further separates the heads to work in favor of the overlapping test data. *) Both WD1 and WD2 are used. WD2 supersedes WD1 in every way (it also includes all WD1 frames with the extended label policy) and crucially also uses a unified label policy combining IDD, MPL, CS & WD1 (80 label cat.; the benchmark uses the reduced WD2_eval 25 cat. policy). So contrary to L223, WD2 does contain the labels "lane marking", "crosswalk" and "manhole". WD2 has a different publication associated with it which is unmentioned: CVPR22 "Unifying Panoptic Segmentation for Autonomous Driving". The paper/experiments would be easier to understand with only WD2. *) For some figures/descriptions in the paper it is unclear which dataset combinations were used during training. Introduction of fixed names (e.g. 5ds vs. 7ds for 5/7 datasets; or explicitly mentioning "domain-general"/"domain-specific" wording) could help. Technical Quality: 2 Clarity: 3 Questions for Authors: *) The creation of textual descriptions using ChatGPT is not described at all; which version of ChatGPT was used? How were per-dataset label spaces described as textual input? Were images/masks/masked images used with a multi-modal version of ChatGPT (e.g. GPT4V; GPT4o; ...)? Using masks together with images as inputs for ChatGPT is non-trivial.
L110 contains no details; the wording in Figure 2 suggests either a multi-modal model, as "image of" is part of the dataset label text, or simply the "blind" pure-text completion of prompts in the style "An image of <label> from the dataset <dataset>...". Please clarify; the text label descriptions are an important aspect of the paper's novelty! *) Where is the final resulting unified label mapping? Figure 5 shows a small part but the full joined label set would give a good indication of generalization vs. specialization. *) Supplem. B. (target number of unified labels) is a crucial component as it directly steers the separation/merging of similar concepts (e.g. separate "Cityscapes_road"/"MPL_road" labels vs. a unified "road" label). How is the hyperparameter Lambda chosen, and what number of unified labels |L| did the experiments end up with? *) What pre-training was used for the initialization of the multi-heads? *) WD2 negative examples are a defined subset within the benchmarking dataset. However, the official rules of the benchmark state that both regular/"positive" and negative frames/subsets have to be treated equally. The comparability within the benchmark is only upheld as long as this rule is observed. L246/L247 can be interpreted otherwise: "We map non-evaluated classes to a void label"; is this applied to both positive and negative frames alike? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: *) Limitations relating to dataset bias could be mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful inquiries and your pointing out crucial elements of our methodology. Your feedback is instrumental in refining our paper and addressing any weaknesses. Below, we summarize your questions and provide detailed responses. **Q1: About the creation of textual descriptions using ChatGPT.** A: Thank you for pointing this out. We apologize for not providing specific details in our initial submission. In the revised version of the paper, we will clarify that we used ChatGPT 3.5 to generate label descriptions without any image input. Each dataset label category was formatted into the text input as “An image of <label> from the dataset <dataset>.” This prompt was used to encourage ChatGPT to complete the label descriptions. The resulting complete label descriptions were then fed into a text encoder to generate the corresponding textual features for the label nodes. **Q2: Where is the final resulting unified label mapping?** A: Please refer to our global rebuttal for more examples. We plan to present our complete unified label mapping in the form of open-source code. **Q3: The importance of Supplement B regarding unified labels and the choice of the hyperparameter Lambda and |L|.** A: We appreciate you highlighting the importance of this aspect. Based on experimental insights and referencing paper [52], we selected the hyperparameter $\lambda = 0.5$, as this setting provided a good initial value. In our experiments with seven training datasets (CS, MPL, SUN, BDD, IDD, ADE, COCO), we had a total of 448 label categories. With the selected parameter, the resulting unified label space $|L|$ comprised 231 labels. We will include these details in the revised version of the paper. **Q4: What pre-training was used for the initialization of the multi-head models?** A: In the Multi-SegHead Training Stage section of Supplementary A, we describe the process of training our multi-head model.
Specifically, we construct a separate segmentation head for each dataset, where each head contains only a linear layer to map the embedded feature channels to the specific label categories of that dataset. The training was conducted using four 80G A100 GPUs for 100K iterations. Image preprocessing and hyperparameter settings were consistent with those used in other training stages. **Q5: Treatment of WD2 negative examples in benchmarking datasets.** A: We apologize for the lack of clarity in our description, which may have caused confusion. We adhere strictly to the WD2 evaluation guidelines, applying uniform label mapping to all samples. This process is indeed applied to both the positive and negative frames. We will explicitly include this description in the revised version of the paper. --- **Weakness 1: Multi-head training and dataset bias concerns.** A: Thank you for highlighting the issue of dataset bias in multi-head training. We acknowledge this limitation. After completing cross-validation, we discard the multi-heads, and the model utilizes a unified label embedding space for predicting class probabilities during subsequent training phases, which should help mitigate biases introduced by the initial training setup. We believe adding more datasets may help alleviate out-of-distribution label concerns, as combining datasets may reduce errors from mislabeling during cross-validation. We also recognize the importance of distinct semantic gaps, such as between "tunnel" and "fireplace". We plan further research to leverage label descriptions from official datasets and utilize advanced large models to provide accurate label mappings while constraining the model’s output based on textual features to minimize incorrect connections between semantically different classes. **Weakness 2: Splitting of source labels with respect to multi-head training.** A: We apologize for any confusion regarding the isolation of multi-head training.
After the initial multi-seghead training stage, we discard the multi-head structure, and thus only a single UniSeghead is maintained for predictions. During inference on unseen images, we solely output results from the unified label space, eliminating the complications related to the isolation of respective heads. This will be clarified in the revised version of the paper. Additionally, our visualization results (in Supplement F) show that the final model generates fine-grained segmentation results. **Weakness 3 & 4: The use of both WD1 and WD2, and unclear figures or descriptions.** A: Thank you for noting the potential confusion arising from using both WD1 and WD2. To clarify, our model did not utilize WD1 during training, and we plan to remove references to testing on WD1 in the final version of the paper for better understanding. We also appreciate your reminder regarding the citation of the WD2 paper, which we will include in the revised version of the paper. Additionally, thank you for pointing out the lack of model descriptions in our visual results that caused confusion. The L223 experiment in the paper refers to the 7ds training model, which does not include WD2, resulting in only MPL having *lane markings*, *crosswalks*, and *manholes*. We will add more detailed descriptions to the images (Fig. 3 and Fig. 5) and revise 'Ours' to 'Our 7ds model'. We hope these responses provide clarity and address your concerns effectively. Thank you once again for your constructive feedback.
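As a minimal illustration of how predictions in a unified label space can be projected back to one dataset's label space via a boolean label mapping matrix, as the rebuttals describe, here is a hedged sketch; all label names, shapes, and probability values are hypothetical and chosen purely for illustration, not taken from the paper:

```python
# Hypothetical unified label space with 4 labels and a dataset D with 3
# labels. M[i][j] = 1 if unified label j maps to dataset-D label i (a
# boolean label mapping matrix; every entry here is made up).
M = [
    [1, 0, 0, 0],  # D:"road"    <- unified:"road"
    [0, 1, 1, 0],  # D:"terrain" <- unified:{"grass", "sand"}
    [0, 0, 0, 1],  # D:"car"     <- unified:"car"
]

# Per-pixel class probabilities over the unified label space (one pixel).
p_unified = [0.1, 0.5, 0.2, 0.2]

# Project to dataset D's label space by summing the merged unified labels.
p_dataset = [sum(m * p for m, p in zip(row, p_unified)) for row in M]
best = max(range(len(p_dataset)), key=lambda i: p_dataset[i])
print(p_dataset, best)  # the "terrain" row accumulates grass + sand
```

Because each unified label maps to at most one dataset label per row, the projected scores remain a valid probability distribution, which is what makes such a mapping matrix reusable for training or evaluating other models.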
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their valuable comments and suggestions. The automatic construction of a unified label space using GNNs is the main innovation and contribution of our method. However, due to the limitations of our presentation format, we apologize for not being able to provide a comprehensive demonstration. We plan to open-source the constructed unified label space along with our code. In the supplementary material, we also present more examples of the generated graphs. Our supplementary material is divided into four parts: 1. **Fig. 1**: Shows well-constructed examples across the 7 datasets. 2. **Fig. 2**: Illustrates examples of the label space initialized using Algorithm 2 and subsequently refined through GNN training. 3. **Fig. 3**: Displays examples of some incorrectly constructed links. 4. **Fig. 4**: Shows the label space constructed on 5 datasets. Since our method constructs the unified label space through sample-based learning, the constructed label space depends on the samples included in the datasets. For instance, in **Fig. 2**, Traffic sign (back) is labeled as an ignored category in CS, BDD, and IDD, making it difficult to learn a link with MPL: Traffic sign (back). Additionally, the model's learning relies on visual features. For example, COCO: Window-other often includes vegetation outside the window, leading to an incorrect link in **Fig. 3**. Similarly, the tunnel == fireplace issue mentioned in the paper arises because these categories have similar visual features. The model merges them into one category to save space for predictions. However, these categories have significant semantic differences. We believe that using label descriptions provided by official datasets to generate more accurate text features, combined with incorporating knowledge from large models, can help constrain the model’s label links and reduce incorrect semantic links. This is an area worth further research. 
We thank all the reviewers for your insightful questions and feedback. If there are any more issues or concerns to discuss, we welcome further inquiries and are happy to provide additional clarifications. Pdf: /pdf/fda16092b2148707cb75f9f6f630bb34eb50d366.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a novel approach for automated label unification across multiple datasets in semantic segmentation using Graph Neural Networks (GNNs). The proposed method aims to create a unified label space that allows semantic segmentation models to be trained on multiple datasets simultaneously without the need for manual re-annotation or taxonomy reconciliation. The results demonstrate significant performance improvements over existing methods, achieving state-of-the-art performance on the WildDash 2 benchmark. Strengths: + Novel Approach: The use of GNNs to automatically construct a unified label space is innovative and addresses the challenge of conflicting label spaces in multi-dataset training. + Performance: The method shows impressive performance improvements, achieving state-of-the-art results on the WildDash 2 benchmark and outperforming other multi-dataset training methods. + Efficiency: The approach eliminates the need for manual reannotation and iterative training procedures, significantly enhancing the efficiency of multi-dataset segmentation model training. + Robustness: The method demonstrates good generalization to unseen datasets, indicating its robustness and applicability across various scenarios. Weaknesses: This indeed seems an interesting problem. I have several questions: 1. How do you select the unified label size N? Is it adaptive to different dataset labels, or a preset hyper-parameter? 2. Can you show more results of the bipartite graph generated by the GNN? E.g., what labels are hard to link by text features but are linked by the GNN. 3. Is the graph generated by the GNN influenced by the number of datasets in training or the models? For example, if I use different models for the training, will the graph be the same? If I use two datasets in multi-dataset training, will the nodes of these two datasets' labels have the same edges as when I use three datasets? 4.
Can I apply the generated graph to other models' training without the need for doing GNN training again? If not, how much training cost will GNN bring? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper is interesting and the experiments seem to prove the effectiveness. But I need more evidence to prove the robustness of the GNN results when using different datasets and models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful questions. Below are our responses to the specific questions raised: **Q1: How do you select the unified label size N? Is it adaptive to different dataset labels, or preset hyper-parameters?** A: We apologize for not clearly explaining the selection of the unified label size N in the main text, which might have caused some confusion. In Supplementary Material B, we clarify that N is determined by solving Algorithm 2, rather than being a preset hyper-parameter. Specifically, we construct the label co-occurrence relationships through cross-validation of the multi-seghead model across different datasets. This automatic solving method allows us to adapt to different dataset labels and enhances scalability. We will add this explanation to Section 3.2 of the main text in the revised version of the paper. **Q2: Can you show more results of the bipartite graph generated by the GNN? Like what labels are hard to link by text features, but linked by the GNN.** A: Please refer to our global rebuttal for more examples. For instance, ADE: Signboard is annotated as both Traffic sign (front) and Traffic sign (back), while CS annotates Traffic sign (back) as an ignore category. We believe that such knowledge should be learned through samples. **Q3: Is the generated graph by GNN influenced by the number of datasets in training or the models? For example, if I use different models for the training, will the graph be the same? If I use two datasets in multi-dataset training, will the nodes of these two dataset labels have the same edges as when I use three datasets?** A: Yes, the selection of different models and datasets affects the constructed nodes. The GNN relies on the model’s segmentation predictions to learn the label mapping matrix. If the model cannot accurately recognize fine-grained categories, the GNN will not be able to construct the label mappings for these fine-grained categories.
The composition of the training datasets likely also influences the generated graph. For instance, in 5ds training, IDD: Non-drivable fallback or rail track is merged with Terrain. However, in 7ds training, after removing WD2: Terrain and adding COCO: Railroad, IDD: Non-drivable fallback or rail track is merged with the fine-grained categories MPL: Rail Track and COCO: Railroad. We speculate this is because the inclusion of COCO: Railroad in 7ds training improves the model’s learning of fine-grained categories, affecting the prediction results for Non-drivable fallback or rail track in the IDD dataset and leading to different outcomes. **Q4: Can I apply the generated graph to other models' training without the need for doing GNN training again? If not, how much training cost will GNN bring?** A: Yes, similar to other multi-dataset training methods that apply manually designed unified label spaces to different models, our GNN model constructs a unified label space as a boolean label mapping matrix that can be separated and used for training other models. We will open-source the constructed unified label space along with the revised version of our code to facilitate training with other models. If there are any further questions or concerns, please feel free to reach out to us. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The rebuttal is clear and solves my concern. I would like to raise my rating to 6.
null
null
null
null
null
null
Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning
Accept (spotlight)
Summary: This paper approaches the challenge of pluralistic alignment, which arises when the preferences among humans diverge across a population. The work identifies that current preference modeling assumptions do not account for multi-modal reward distributions, which is often the case for pluralistic preferences. Thus, they end up averaging modes and generating inaccurate reward models. To address that, the paper proposes a latent variable model that explicitly models different users and applies variational inference techniques to infer the user latent that conditions the reward model and the optimized policy. The paper also proposes scaling rewards following the likelihood from the model, as a way to ensure all latents follow the same scale, which is necessary for latent-conditioned policy optimization. Finally, the paper evaluates the proposed model in a set of control and language environments, demonstrating gains over prior preference modeling choices. Strengths: - The paper approaches pluralistic alignment, a challenge that is mostly always overlooked by the RLHF community, which implicitly assumes that all humans present the same preferences and values. Therefore, the work is very relevant; - The main merit of the paper is to identify that current modeling assumptions, while assuming the same preference over humans, fall short when modeling diverse preferences. The paper brings very didactic cases that explicitly show such failure and the consequences in the optimized policy (e.g., Figure 3). - The introduction of the latent variable modeling for preference modeling in RLHF is interesting and novel, and a natural extension of the prior work [1]. Weaknesses: - The major concern is on the evaluation setup. Although the paper brings a diverse set of experiments, they are all setups where the hidden context is composed of very few variables with simple distributions. 
This is acceptable to show the failure of previous methods, but not sufficient to claim scalability in the LLM setup for the proposed method. In more detail: - The “Pets” dataset is too simple and does not even require language. From the example in Appendix B.2, it is possible to ignore the prompt and extract the pet variable from the responses and context. The problem then boils down to fitting a categorical classifier over four classes, where the input is the pet in response A and B, and the same for the context. Given the number of variables and the dataset size, it is questionable whether variational inference is required here - perhaps a simple linear model or MLP can learn directly from the context. - The “Ultrafeedback-P” indeed contains much richer prompts and responses, and the hidden variables (helpfulness, honesty) are not straightforward to extract from the responses. However, the experiment assumes only two users, and there are only two hidden variables as well. Again, this raises concerns about whether variational inference is required here - perhaps a base LLM can already separate these users in the feature space, but there is no such baseline in the work. - Interestingly, the paper highlights the requirement of filtering out the contexts where the users agree, which simplifies the task, as it is easier to identify users with extreme disagreement. It would be important to justify why that is necessary and whether it is not a limitation of the proposed method. - The paper is motivated by personalized RLHF and provides modeling for latent-conditioned policies, but in the LLM experiments there is only preference modeling and no RLHF/alignment. This type of experiment is necessary to claim that VPL scales up for pluralistic alignment.
- Some methodology details are missing: how the prior is learned (the training objective, data); how humans are simulated in the control environments (L279); how the accuracy of the reward model in Table 1 is defined; the description and details of the environment used in Appendix A.3; - Another crucial concern is that the proposed reward scaling technique is not equivalent to the learned latent reward in the sense of resulting in the same optimal policies (it is not a policy-invariant reward shaping [2]). A minimal example that illustrates this is the following MDP: - Imagine an MDP with 11 states (s_0 to s_10). Initial state s_0. There are two actions, a_1, a_2. If the policy takes a_1, it goes to s_1 and receives r_1 = 1000. If it takes a_2, it goes to s_2 and receives r_2 = 100. From s_1, regardless of the action taken, the agent follows s_1 -> s_3 -> s_5 -> s_7 -> s_9 (terminal) and does not receive rewards in any of these states. Similarly, from s_2, it follows s_2 -> s_4 -> s_6 -> s_8 -> s_10 (terminal), but receives reward +100 at each state. Assuming no discount in the return, we have V(s_1) = 1000 and V(s_2) = 500, thus the optimal policy should choose a_1. However, if you consider the reward scaling proposed in this work and assume a uniform distribution over states to compute the expectation in L211, then V(s_1) ~= 17/9 and V(s_2) ~= 30/9, which yields a different optimal policy. - The reward shaping seems to work empirically at least for the Maze Navigation (Ravens-Manipulation is slightly worse, and it is unclear in the LLM experiments if the VPL version leverages the scaling or not). It is important to state this caveat, since reward scaling implicitly assumes policy invariance under reward transformation. - One minor concern is regarding the data requirements during test-time inference. During deployment, VPL would require interacting with a user to collect a few labels before performing inference.
In the LLM experiment with GPT2, it looks like it requires 100 samples, so it is questionable if this is sample-efficient enough for interacting with humans in the real world. Is this experiment using the active inference from Section 4.2? **Summary**: I believe the paper brings a contribution by presenting the failure of previous methods in pluralistic alignment and showing how they fail. Furthermore, the proposed methodology is interesting and seems to work well in small cases. Nevertheless, the presented empirical evidence does not support the scalability claims in LLMs, as the datasets are too simple and even raise questions about whether very simple models could solve them. Furthermore, the reward scaling technique proposed is not well grounded theoretically and leads to different optimal policies. There are also some potential societal consequences that are not discussed. Therefore, I do not support acceptance in the current form of the paper, but I am open to changing my score in case the raised concerns are addressed or any misunderstanding is clarified. Technical Quality: 3 Clarity: 3 Questions for Authors: - How does VPL handle irrational users? (i.e., users whose preferences are inconsistent across context/responses) - Do the LLM experiments leverage the active inference from Section 4.2? - Why isn’t VPL-SPO shown in Table 1? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - I believe the experimental setup does not support any claims of scalability with LLMs, and it is unclear whether the proposed method would work in a scenario with multiple users and several variables affecting preferences, which is often the case in preference datasets.
A well-performing latent model results in a very personalized language policy, which can potentially lead to two main problems: first, the model being biased or presenting sycophancy [3] to satisfy user beliefs; second, the model reinforcing users with antisocial/unethical/criminal preferences, who often hold preferences very far from those of the rest of society. Both scenarios are potentially harmful, and it would be interesting to discuss ways to mitigate them. **References**: [1] Siththaranjan et al. Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF, 2023. [2] Ng et al. Policy Invariance under Reward Transformations: Theory and Applications to Reward Shaping. ICML, 1999. [3] Sharma et al. Towards Understanding Sycophancy in Language Models, 2023. # Post-Rebuttal Please refer to my comment under the rebuttal message. Based on that, I am increasing my scores and recommending acceptance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate that you found the method interesting and agree that pluralistic alignment is a challenge that is mostly overlooked by the RLHF community. We have conducted 5 additional experiments to demonstrate scalability and include the results in the rebuttal PDF, and address your concerns below. > **“The major concern is on the evaluation setup. [...] not sufficient to claim scalability in the LLM setup for the proposed method.”** In our LLM experiments, we adopt the widely used UltraFeedback dataset to study pluralistic alignment. The original dataset has 4 attributes along which fine-grained preferences are generated, and in the original submission, we used two of these attributes to indicate diverse users (similar to prior work [2]). **We present additional experiments in Table AM:1 where we compare VPL and the baselines, in a scaled and more diverse setting where all four attributes are present in the preference dataset.** Effectively, in these experiments, **we go beyond the prior work** [2] that includes two users (harmlessness and helpfulness), and include all four attributes in the UltraFeedback dataset. This dataset presents a standard benchmark, which is similar to past works in preference modeling with diverse users [2, 3], as well as state-of-the-art preference modeling works that use either UF or HH-RLHF [4, 5, 6]. By using all available attributes in UF as distinct users, **we have created a challenging benchmark for pluralistic alignment from the best available real RLHF language datasets.** The reviewer suggested: “Although the paper brings a diverse [...] hidden context is composed of very few variables with simple distributions”. So, to demonstrate VPL in the presence of more users, we include new experiments with a large number of users and hidden variables, in Figure AM:2. 
We supplement the Maze and Habitat experiments, which use 10 and 5 users (Figures 8 and 4, respectively), **with new Habitat-Rearrange experiments, where we have ~ 100 users**. Each user has a ranking over five different locations for storing objects in their home. **Here, the space of users and variables is combinatorial (there are 5! possible orderings)**. In Figure AM:4, we observe that VPL infers the user distribution and adapts the downstream model, **demonstrating its effectiveness with a large user population**. Additionally, in Figure AM:3, we test the generalization of VPL to unseen users at test time and show that it can effectively infer and follow unseen goals at test time, outperforming the baselines. > **“Another crucial concern is that the proposed reward scaling technique is not equivalent to the learned latent reward [...]”** We thank the reviewer for taking the time to craft this example. To test the given MDP under the scaling from SPO [1] and the standard Bradley-Terry-Luce (BTL) model of preference learning [13], we simulated the MDP you suggested and report the value function of states $s_1, s_2$. | | $V(s_1)$ | $V(s_2)$ | |-------------------|----------|----------| | Ground Truth | 1000 | 500 | | Markovian BTL | -18.87 | 22.70 | | Markovian SPO | 17/9 | 30/9 | | Non-Markovian BTL | 36.46 | -33.36 | | Non-Markovian SPO | 35/9 | 10/9 | **Under a Markovian oracle, both methods fail to generate the correct value function, i.e. they give $V(s_1) < V(s_2)$.** This points to a larger issue in preference learning, which all RLHF methods face and is not unique to our scaling. Essentially, in preference-based learning, we only have binary labels and no notion of reward scale; so we must assume how the scale of rewards relates to the preferences (e.g. BTL or SPO). Consequently, under the assumed scale, a cumulative reward estimate (i.e. the value function) might not have the same ordering as the ground truth, which is what we observe in the above table. 
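As a sanity check on the ground-truth row of the table, the reviewer's MDP can be rolled out in a few lines (an illustrative sketch only; the BTL/SPO rows depend on the fitted reward scales and are not reproduced here):

```python
# The reviewer's MDP: a_1 from s_0 gives reward 1000 and enters the chain
# s_1 -> s_3 -> s_5 -> s_7 -> s_9 (terminal) with no further reward; a_2 gives
# reward 100 and enters s_2 -> s_4 -> s_6 -> s_8 -> s_10 (terminal) with +100
# per subsequent state. Transitions are (next_state, reward); gamma = 1.
step = {
    "s_1": ("s_3", 0), "s_3": ("s_5", 0), "s_5": ("s_7", 0), "s_7": ("s_9", 0),
    "s_2": ("s_4", 100), "s_4": ("s_6", 100), "s_6": ("s_8", 100), "s_8": ("s_10", 100),
}

def downstream_return(state):
    """Undiscounted sum of rewards collected after entering `state`."""
    total = 0
    while state in step:
        state, r = step[state]
        total += r
    return total

# Following the review's convention, V(s_i) includes the reward for entering s_i.
V_s1 = 1000 + downstream_return("s_1")  # 1000
V_s2 = 100 + downstream_return("s_2")   # 500
assert V_s1 > V_s2  # the ground-truth optimal policy takes a_1
```

Under a non-Markovian oracle that compares whole-trajectory returns, every state on the a_1 branch is labeled as preferred over every state on the a_2 branch, which is the assumption discussed next.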
We adopt a common assumption in preference learning, particularly in [1] which inspired the scaling: the oracle providing preference labels is non-Markovian (i.e. it prefers all states in a trajectory with a higher return over those in a lower-return one). **As a result, in the given MDP, the states $s_1, s_3, s_5, s_7, s_9$ would be preferred over the states $s_2, s_4, s_6, s_8, s_{10}$. This yields the correct value ordering, i.e. $V(s_1) > V(s_2)$, and the optimal policy is invariant under this scaling.** > **“data requirements during test-time inference [...] it looks like it requires 100 samples, so it is questionable if this is sample-efficient enough for interacting with humans in the real world”** We would like to clarify the training and evaluation setup here: **VPL uses only a random sample of 2-8 questions from the subset of 100 questions in the LM experiments to predict the latent distribution**, which provides us with a sample-efficient way to infer the user distribution at test time, as opposed to using all 100 samples. > **“How does VPL handle irrational users? (i.e., users whose preferences are inconsistent across context/responses)”** In Figure AM:2 of the rebuttal PDF, we include experiments with a varying number of labels queried from each user at test time. We also add noise to the context, i.e. we flip the labels with a certain probability to simulate inconsistent and noisy users. **The performance degrades with increasing noise, but VPL still outperforms the baseline at the 25% noise level. With more labels, VPL performs better at higher noise levels,** but at 50% noise, the context provides equal information for all user types. Consequently, it fails to identify the particular user and its performance coincides with the baseline (which has no mechanism for personalization). This shows that VPL is able to handle irrational users, i.e. users whose preferences are inconsistent across context/responses. 
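The label-flipping noise model used in this robustness test can be illustrated with a short simulation (a hypothetical sketch, not the experiment's actual code): with flip probability p, an observed label matches the user's true preference with probability 1 - p, so at p = 0.5 the context carries no information about the user.

```python
import random

random.seed(0)

def noisy_labels(true_labels, p_flip):
    """Simulate an inconsistent user: flip each binary preference label w.p. p_flip."""
    return [1 - y if random.random() < p_flip else y for y in true_labels]

# A user who always prefers the first response; measure how much signal survives.
true = [1] * 10_000
for p in (0.0, 0.25, 0.5):
    observed = noisy_labels(true, p)
    agreement = sum(o == t for o, t in zip(observed, true)) / len(true)
    print(f"p_flip={p:.2f}: labels match the true preference {agreement:.2%} of the time")
```

At p = 0.5 the agreement is around 50%, the same as a coin flip, consistent with VPL's performance coinciding with the non-personalized baseline at that noise level.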
--- Rebuttal 2: Title: Continued Rebuttal Comment: > **the model being biased/presenting sycophancy [12] to satisfy user beliefs; second, the model reinforcing users with antisocial/unethical/criminal preferences, who often hold preferences that are very far from the rest of the society.** Thank you for bringing up this insight, and we will include the following discussion in the social impact statement. In pluralistic alignment, we assume that some differences in user preferences reflect divergent but equally valid perspectives on which moral or cultural values an LLM should align to; for example, individuals from one culture may hold collectivist moral values, while another culture is more individualist. Both value systems should be respected, and as such LLMs should be able to recognize and adapt to the values held by a particular user. However, the personalized model could potentially either be sycophantic or align with adversarial users, which is undesirable. This raises very interesting questions, such as: At what point should the LLM embrace a more universal set of values? How can we detect when such a point has occurred in a particular conversation? The probabilistic framework of the user distribution could allow us to identify low-probability or rare behavior, and the distributional nature of reward functions can help us point out responses where the users are divergent (maybe signifying disagreement). Additionally, a model could flexibly switch between adhering to the user's personal preferences and conforming to a more universal perspective on topics where it could be biased, or is sensitive to jailbreaks [14]. Taking inspiration from Constitutional AI [14], we can allow a system designer to specify the topics for which the LLM should not engage in user personalization. Overall, this presents an exciting future research direction toward building safe personalized LLMs. 
> **Pets dataset is simple** We acknowledge that **the Pets dataset is a simple synthetic dataset**. Therefore, we present the improved capabilities of our method with additional experiments on the larger and more diverse UltraFeedback dataset (UF-P-4), with four different users (in Table AM:1). The Pets dataset provides a **sanity check for VPL** to show that the model is able to adapt to multi-modal preferences in imbalanced language datasets. > **“this raises concerns if variational inference is required here - perhaps a base LLM can already separate these users in the feature space, but there is no such a baseline in the work.”** We include an additional baseline in Table AM:1, "VPL+Ground Truth User", where we replace the predicted latent user vector with the ground-truth one-hot user vector to adapt the model to different users at training and test time. **We show that VPL provides comparable performance to this condition (63% vs 66%)**. Here, a key advantage of VPL is that **it does not assume access to explicit user types** but learns to encode and cluster users directly from preference labels, while achieving similar accuracy. > **there is only preference modeling and no RLHF/alignment** We present experiments on learning policies using VPL reward modeling in simulated control domains. We acknowledge that our language experiments are focused on preference modeling, but we primarily follow and compare to prior work, which also focuses on preference modeling from diverse datasets alone [2, 3], without including experiments for LLM fine-tuning. We believe that preference modeling alone is interesting, as modeling diverse users may also provide increased interpretability in highlighting potential reasons for preferences [7, Sec. 2]. We believe that further exploring how to best apply VPL to downstream tasks and larger, noisier datasets is an interesting and exciting avenue for future research. 
Additionally, prior work has shown that improving RM performance can yield improved downstream performance, both when used in RLHF training [8, 9, 11] and when used in best-of-N settings [9, 10]. > **“the requirement of filtering out the context where the users agree..”** In the additional experiments with four users (Table AM:1 of the rebuttal PDF), we filter out instances only where all users agree. So, the context can still contain queries where at least two users overlap. Thus, **VPL works in cases where different users agree on some responses, but not all of them.** We will add the following to the limitations section of the paper: “In LLM preference modeling, VPL assumes that the queries used to generate context inputs for posterior inference contain some useful information to identify the users. In our work, we filter out the instances where all users agree, i.e. avoiding degenerate contexts that provide no information about the users.” --- Rebuttal 3: Comment: > **"Methodology details"** Thank you for pointing out the missing details. We will update them clearly in the manuscript, as follows: 1. We assume that our prior is a multi-variate Gaussian with mean $\mu$ and covariance $\Sigma=\text{diag}(\sigma\sigma^T)$, where $\mu, \sigma \in \mathrm{R}^d$. In all experiments, they are initialized from a standard Gaussian. However, in our control experiments, we observed that using a learned Gaussian, i.e. setting $\mu$ and $\sigma$ to learnable parameters under the ELBO objective (Eq. 3), improved performance and stability during training. 2. The humans are simulated using oracle reward functions that are included in Appendix B.1. We randomly sample a user type and query a batch of annotations from the given user to generate a single data point. 3. The accuracy of the reward model is "1" if the LM reward model assigns a higher reward to the response preferred by the given user over the alternative response, and "0" otherwise. 
We report the average accuracy of the model over the eval set consisting of multiple prompt and response pairs, labeled by diverse users. 4. VPL-SPO is required as part of policy optimization, while we currently focus on preference modeling for LLMs in this work. VPL and VPL-SPO will model the preferences with similar accuracy. The only difference would be the scale of rewards that affects policy optimization. -- [1] Swamy et al. (2024). A Minimaximalist Approach to Reinforcement Learning from Human Feedback. [2] Siththaranjan et al. (2023). Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF [3] Zhao et al. (2023). Group Preference Optimization: Few-Shot Alignment of Large Language Models. [4] Ivison et al. (2023). Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. [5] Rafailov et al. (2023). Direct Preference Optimization: Your Language Model is Secretly a Reward Model. [6] Cui et al. (2023). UltraFeedback: Boosting Language Models with Scaled AI Feedback. [7] Sorensen et al. (2024). A Roadmap to Pluralistic Alignment. [8] Shen et al. (2023). The Trickle-down Impact of Reward (In-)consistency on RLHF. [9] Gao et al. (2022). Scaling Laws for Reward Model Overoptimization. [10] Ivison et al. (2024). Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback. [11] Meng et al. (2024). SimPO: Simple Preference Optimization with a Reference-Free Reward. [12] Sharma et al. (2023). Towards Understanding Sycophancy in Language Models. [13] Ouyang et al. (2022). Training language models to follow instructions with human feedback. [14] Bai et al. (2022). Constitutional AI: Harmlessness from AI Feedback. Title: Continued Rebuttal --- Rebuttal Comment 3.1: Comment: I appreciate the authors’ efforts in addressing questions and bringing new empirical evidence for the work. 
My major concern (about the empirical evidence not supporting the scalability claims) was properly addressed with the rebuttal experiments. There are now new experiments showing the method works for >100 users in the Habitat environment and an extension in the UltraFeedback benchmark. The missing methodology details were also clarified in the rebuttal. I strongly recommend that the authors add these experiments and the new clarifying text to the camera-ready version, as it considerably improves the paper. My third concern was also clarified. In terms of the second concern (reward scaling), the rebuttal acknowledges the limitation and links it to an underlying assumption on the oracle’s preference labeling process (non-Markovian). I believe this assumption is strong and somewhat simplistic (if the ground-truth labeler only looks at the trajectory’s return and not at particular states, then the problem modeling could be simplified to a bandit setting instead of the full sequential decision-making setup, as is often the case in traditional LLM-based RLHF). I believe the camera-ready version should also make this limitation/underlying assumption explicit as a clarification. Overall, most of my crucial concerns were addressed and the paper largely benefited from the new content. Therefore, I am raising my score accordingly. Other points (questions/limitations) were properly discussed as well. Again, I strongly recommend that the authors incorporate this into the camera-ready version, particularly the discussion regarding the societal impact. (For the next time, **please make sure to adhere to the rebuttal length restriction**).
Summary: The primary objective of the paper is to design a multi-modal RLHF strategy to align diverse preferences with a latent variable model. The latent variable represents the users/topics, and a reward model conditioned on the latent variable is learned for each user preference. The empirical results support the hypothesis on simulated control problems and pluralistic language datasets. Strengths: 1. Aligning to diverse preferences with variational inference is one of the most natural ideas, and the work provides an interesting step in that direction. 2. The approach also provides a method to actively learn user preferences leveraging the posterior uncertainty. 3. The empirical performance and ablations show that learning under the probabilistic framework yields a precise multimodal reward model. Weaknesses: 1. The setup is not extremely practical. For example: suppose I am an LLM company, and a new user comes to the system and asks a question via a prompt. Then, instead of answering it, we will be doing some active learning to efficiently identify the user, right? Since the posterior q(z|y_1, y_2) needs to be estimated for that. Is there any other way to do that, or do we need to incorporate the active learning strategy every time for each new user/group? 2. The experimental setup is restricted to simulated environments and tasks. It is crucial to understand the efficacy of the approach when scaled to more realistic environments and tasks [1, 2, 3]. 3. The work lacks comparison with several baselines on multi-objective RL or aligning with diverse preferences. Please provide a detailed comparison with them; not necessarily through experiments, but at least a detailed discussion would be crucial. [1, 2, 3, 4, 5] References: 1. MaxMin-RLHF: Towards an equitable alignment of large language models with diverse human preferences 2. RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation 3. 
Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration 4. Pareto-Optimal Learning from Preferences with Hidden Context 5. Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can the authors motivate a practical scenario where this setup makes sense? 2. The specific nature of the prior and posterior used for each of the tasks is not very clear. 3. Also, the evaluation is done mostly on simulated tasks and environments. Hence, it is important to show the performance in more realistic environments and benchmarks. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Check above Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing useful feedback regarding our work. We appreciate that the reviewer believes that our approach is a natural step towards aligning models to diverse preferences. The reviewer raises useful concerns regarding the scale of the experiments, demonstrating the effectiveness of VPL in realistic benchmarks, and drawing bridges to practical scenarios for this problem and solution. We address these questions by providing additional experiments, and outline those and other responses below. > **“The experimental setup is restricted to a simulated environment and tasks.”** Thank you for suggesting that we scale our experiments to more realistic tasks and environments. Following this, we include five additional experiments in the rebuttal PDF: 1. In Table AM:1, **we scale the language experiments to the UltraFeedback dataset with four users,** which is comparable in scale and diversity to state-of-the-art prior work on preference modeling [6]. In these new experiments, we go beyond the prior work [6] that includes two users (harmlessness and helpfulness), and include all four attributes in the UltraFeedback dataset. By using all available attributes in UF as distinct users, we have created a challenging benchmark for pluralistic alignment from the best available RLHF language datasets. 2. In Figure AM:1, **we expand our robotics experiments to additional tasks in the Habitat simulator.** We include a new Habitat-Tidy task inspired by TidyBot [7], which presents **a realistic setting for diverse users in the real world.** 3. To test VPL with a large number of users, in Figure AM:4, we demonstrate that it outperforms the baselines in the Habitat-Rearrange task (when scaled to ~ 100 users). 4. We also show that VPL is able to generalize to new users at test time, i.e. adapt the model to align with unseen goals, in Figure AM:3. 5. 
In Figure AM:2, we show that this learned model is robust to noise or inconsistency in user context labels when modeling diverse user preferences. Overall, the additional experiments and analysis, along with the existing results, cover a diverse set of realistic tasks with increasing scale and diversity. > **Can the authors motivate a practical scenario of the setup, where this setup can make sense?** Consider real-world tasks such as autonomous household robotics. **Each person has strong individual preferences for things like how they arrange things in their home [7], which must be modeled by a robot to effectively assist in cleaning the home (for example, one person may prefer storing shirts in the drawer, while another may prefer them on the shelf). In assistive robotics for the disabled** [10], different users have different physical constraints and preferences regarding feeding methods and food choices. To succeed at these tasks, the robot must efficiently infer and adapt to each user's unique preferences. Our simulated control experiments, as well as the additional experiments shown in Figure AM:1 in the rebuttal PDF, test whether VPL is a promising method for this type of real-world robotics task. LLMs must interact with a large, diverse, global population of users. Within this population, individuals from one culture may hold collectivist moral values, while another culture is more individualist [8]. Or, in certain cultures, it is morally acceptable to eat a certain type of fish while pregnant, but in some others, it is taboo. **Both of these value systems should be respected, and as such LLMs should be able to recognize and adapt to the values held by a particular user.** In our experiments, the UltraFeedback dataset contains multiple attributes that users may prefer, such as harmlessness and helpfulness [9]. E.g., if the prompt is “teach me how to dive into a river”, users can prefer the models to be either helpful (i.e. 
provide the steps to learn diving) or harmless (i.e. warn the user that it is dangerous and provide no instructions). Here, the LLMs should infer and personalize to the values and preferences of each user, and VPL presents a step in that direction. In summary, these directions present some practical cases where an AI model has to personalize and cater to multiple users, and **our method presents a practical approach to this problem, where each user answers a few questions to personalize the downstream model.** > **“The setup is not extremely practical. For ex: If I am the LLM company when a new user [...] any other way to do that or we need to incorporate the active learning strategy every time for each new user/group?”** We thank the reviewer for bringing up this concern. We would like to present an important clarification that the **active learning procedure needs to be performed only once after the model has been trained** to generate the most discriminative queries, i.e. using active learning we obtain a set of $N$ query pairs. Beyond active learning, we can **instead select random queries** (potentially leading to inefficient performance). For each new user, we get labels over the $N$ pairs, which enables us to estimate the posterior $q(z | (s^i_1, s^i_2, y^i)_{i=1}^N)$. **Our experiments show we get accurate performance in as few as 2 test-time queries** (see Figure AM:2 in the rebuttal PDF and Figure 5 in the original submission). However, if we do not have access to any test-time queries for a new user, we have access to the prior $p(z) = \mathcal{N}(\mu, \text{diag}(\sigma\sigma^T))$, and **can sample from the mode of the prior,** where the model would respond according to the majority group (as in existing preference learning methods). Thus, with no additional information, VPL is as performant as a vanilla RLHF model. --- Rebuttal Comment 1.1: Title: Response to Rebuttal by Authors Comment: Thanks for providing clarifications to my concerns. 
I agree with the motivation for robotics, although for LLMs it is still not very clear. So, to be clear: once you have the N pairs, you perform the active learning with them. But what it means that the active learning is done only once is not clear. Can you explain this in more detail? Also, can you please specify the prior and posterior used for the robotics and LLM cases? --- Reply to Comment 1.1.1: Comment: Thank you for engaging in discussion with us. We provide additional clarification regarding the motivation and practicality of VPL with LLMs. We also include a detailed explanation of the active learning approach and the prior/posterior structure (in addition to the outline in Section 4). ### ***LLM Motivation*** As we show in our new experiment for reviewer PuPi, by default our method gives the same reward modeling performance as the current standard BTL model if there is a single user in the dataset or if we have no additional information from the user. So, if we have no additional preference labels from a user, our technique will not hurt performance. **However, if we can obtain 2-8 annotations from a user about which response they prefer, we can personalize our reward model to their specific preferences and values, unlocking the benefits of pluralistic alignment of LLMs.** ### ***Active Learning*** We would like to clarify the active learning workflow in detail: 1. We have a set of queries $(s^i_1, s^i_2)^K_{i=1}$, each of which is a pair of states (or responses to a given prompt, in the case of LLMs). To create the training set, we sample a batch of queries of size $N$, and randomly ask one of the users to annotate it with their preferences, i.e. we get $\mathbf{S_j} = [(s^i_1, s^i_2, y^i)_{i=1}^N]_j$, where $N \ll K$. 2. We get multiple annotated batches from the diverse users to form the training set for the reward model, i.e. $D = (\mathbf{S_1}, \mathbf{S_2} \dots)$. 3. 
We train the reward model as indicated in Algorithm 1, and obtain the encoder $q(z | \mathbf{S_j})$ (i.e. it takes as input a subset of queries $\mathbf{S_j}$) and the reward model $R(s, z)$. The encoder output is $q(z | \mathbf{S_j})$, which is a multivariate Gaussian distribution approximating the distribution over user preferences / groups or types. 4. ***Here, we start the active learning process***. Given $q$ and all the queries $(s^i_1, s^i_2)_{i=1}^K$, we generate multiple subsets of size $N$ (there are $\binom{K}{N}$ possible subsets). For each given subset, we can compute the information gain over the posterior from Step 3 and Eq. 4. 5. We choose the subset of $N$ pairs $\textbf{S}_{active} = (s^i_1, s^i_2)_{i=1}^N$ with the maximum information gain. This is the set of questions that are most informative about the user type or distribution. 6. Finally, **at evaluation, for all incoming users, we ask them to annotate the same set of questions in $\textbf{S}_{active}$, and then provide it as input to the encoder to obtain the posterior over this user's preferences**. **Therefore, the entire process of active learning has to be done only once after training to obtain $\mathbf{S}_{active}$. At eval time, we just need a user to provide labels for the same $N$ pairs to predict the posterior distribution for reward / policy conditioning.** ### ***Posterior and Prior:*** 1. We use a Gaussian prior and an MLP-based posterior, similar to the approach in a standard VAE [1]. 2. **The prior is a standard multivariate Gaussian of dimension** $d$, where the mean $\mu=[0 \dots 0]^T$ and the covariance $\text{diag}(\sigma \sigma^T), \text{ where } \sigma=[1 \dots 1]^T$. Here, $\mu, \sigma \in \mathrm{R}^d$; $d$ is the size of the latent dimension. In the robotics experiments, we set them to be learnable parameters by setting the requires_grad property to True in PyTorch. 3. 
For **the posterior in robotics and LLMs, we predict two vectors** $\hat{\mu}, \hat{\sigma} \in \mathrm{R}^d$ **using the encoder $q$ and the annotated pairs $\textbf{S}_j$ as input**. This generates the posterior, which is also a multivariate Gaussian with mean $\hat{\mu}$ and covariance $\text{diag}(\hat{\sigma} \hat{\sigma}^T)$. 4. The encoder architecture in the robotics case is a simple MLP that takes in the annotated pairs $\mathbf{S_j} = [(s^i_1, s^i_2, y^i)_{i=1}^N]_j$, i.e. the input is of dimension $(2S+1) \times N$, where $S$ is the state dimension, and $N$ is the number of queries. 5. For the LLM encoder architecture, we lay it out in detail in Section 5 and Figure 2 of the paper. It is similar to the robotics case, with certain modifications to handle the high-dimensional and complex LLM embeddings. 6. For robotics, we sweep over possible values $d \in \{8, 16, 32\}$, and for LLMs, $d=512$. We include the detailed hyperparameters in Appendix B.4. 7. Finally, to predict the reward or condition the policy on the user type/preference, we sample a vector $z$ from the predicted posterior, and augment the input to the reward model or the policy to generate personalized rewards / behavior. [1] Kingma et al. (2013). Auto-Encoding Variational Bayes. Thank you for the suggestions, and we will make this workflow more clear in the paper. --- Rebuttal 2: Title: Continued Rebuttal Comment: > **“comparison with several baselines on multi-objective RL or aligning with diverse preferences. ”** We thank the reviewer for providing additional references for our work. We will incorporate the additional references in the related works section as follows: [1, 4] aim at trading off conflicting alignment objectives among diverse users through techniques like Pareto-optimal optimization or multi-objective RL. The goal of such methods is to optimize the reward model to maximize worst-case performance over the different groups. 
In contrast, **our work does not aim to optimize against the diversity but rather to resolve the model misspecification** and learn reward models that can infer the context and specialize to a particular user. This ensures the model can align to all user groups, rather than trade off among them. [2] introduces an approach that uses explicit clustering of human groups and learns individual reward functions for the different clusters. Our work instead relies on variational inference and latent-conditioned rewards to infer and model diverse humans directly from the preference data. [2] further introduces an additional method that assumes a single reward function for all the clusters, but **adopts a probabilistic approximation to the reward model, similar to DPL [6]. We include DPL as a baseline in all the LLM experiments** in Table 1 in the original submission and Table AM:1 in the rebuttal PDF, and show that our method outperforms DPL across multiple datasets. We included a reference to [5] in L112 of the original submission. [3] proposes a pluralistic alignment framework, **using smaller LMs that are trained on community or user-specific data**. Further, it uses responses from the smaller community LMs to adapt a larger LLM to provide responses covering all or just one specific user. Meanwhile, **our approach adopts an unsupervised method (no access to explicit user distributions)** to identify the latent variable and condition the preference model towards the specific user's preferences. > **prior and posterior structure** Here, we provide additional details about the structure of the prior and posterior of our model. We assume that our prior is a multi-variate Gaussian with mean $\mu$ and covariance $\Sigma=\text{diag}(\sigma\sigma^T)$, where $\mu, \sigma \in \mathrm{R}^d$. In all experiments, they are initialized from a standard Gaussian. However, in our control experiments, we observed that using a learned Gaussian, i.e. 
setting $\mu$ and $\sigma$ to learnable parameters under the ELBO objective, improved performance and stability during training. The posterior is an MLP that takes in the annotated samples $\textbf{S} = (s^i_1, s^i_2, y^i)_{i=1}^N$ and predicts the latent distribution $\mathcal{N}(f(\textbf{S}), g(\textbf{S}))$. > **Also it is done on majorly simulated tasks [...]** Our work presents **an algorithmic solution to the problem of personalization** in RLHF. We present extensive experiments across simulated control and language settings, which we believe present realistic setups and benchmarks. So, we believe real-robot experiments to be beyond the scope of this algorithmic paper and leave to future work the challenge of deploying this method to real-world robot systems. -- [1] Chakraborty et al. (2024). MaxMin-RLHF: Towards an equitable alignment of large language models with diverse human preferences. [2] Park et al. (2024). RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation. [3] Feng et al. (2024). Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration. [4] Boldi et al. (2024). Pareto-Optimal Learning from Preferences with Hidden Context. [5] Jang et al. (2024). Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging. [6] Siththaranjan et al. (2023). Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF. [7] Wu et al. (2023). TidyBot: Personalized Robot Assistance with Large Language Models. [8] Graham et al. (2013). Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism. [9] Cui et al. (2023). UltraFeedback: Boosting Language Models with Scaled AI Feedback. [10] Bhattacharjee et al. (2020). Is More Autonomy Always Better? Exploring Preferences of Users with Mobility Impairments in Robot-assisted Feeding.
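For concreteness, the prior/posterior structure and sampling step described in this thread can be condensed into a small sketch. This is an illustrative numpy mock-up, not the paper's implementation: the dimensions, the one-hidden-layer encoder, and the random weights are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: state dim S, N annotated query pairs per user, latent dim d.
S, N, d = 4, 8, 16
in_dim = (2 * S + 1) * N  # each annotated pair (s1, s2, y) is flattened

# Placeholder one-hidden-layer MLP standing in for the posterior encoder q.
W1, b1 = rng.normal(0, 0.1, (in_dim, 64)), np.zeros(64)
W2, b2 = rng.normal(0, 0.1, (64, 2 * d)), np.zeros(2 * d)

def encode(pairs_flat):
    # Predict the two vectors (mu_hat, sigma_hat) from the annotated pairs.
    h = np.tanh(pairs_flat @ W1 + b1)
    out = h @ W2 + b2
    return out[:d], np.exp(out[d:])  # exp keeps sigma_hat positive

# Gaussian prior N(mu_p, diag(sigma_p^2)); initialized as a standard Gaussian.
mu_p, sigma_p = np.zeros(d), np.ones(d)

def kl_diag_gauss(mu_q, sig_q, mu_p, sig_p):
    # KL(q || p) between diagonal Gaussians, the regularizer in the ELBO.
    return 0.5 * np.sum(
        (sig_q / sig_p) ** 2 + ((mu_p - mu_q) / sig_p) ** 2 - 1.0
        + 2.0 * (np.log(sig_p) - np.log(sig_q))
    )

# One user's flattened annotated pairs -> posterior -> reparameterized sample z,
# which would then be appended to the reward model / policy input.
pairs = rng.normal(size=in_dim)
mu_hat, sigma_hat = encode(pairs)
z = mu_hat + sigma_hat * rng.normal(size=d)
```

A learned prior, as in the control experiments, would simply make `mu_p` and `sigma_p` trainable parameters under the same ELBO objective.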
Summary: Instead of learning a unimodal reward model as in standard RLHF, this work aims to learn a reward that covers a diverse range of preferences. It assumes that user preferences are not explicitly given, such as through verbal descriptions in the prompt/instruction. Instead, preferences are implicitly provided through rankings among candidate responses. Technically, it uses variational inference to learn an encoding that characterizes any user’s preferences. Accordingly, the policy model is conditioned on the learned preference latent code, making the generation steerable. The authors also discuss how to select the most representative set of response pairs for each user. Strengths: - The paper is well-written and easy to follow. - The study is addressing a crucial problem. The setup is realistic, as users may not always want to explicitly state their preferences. Weaknesses: 1. It is unclear how many labels are needed to accurately learn a user preference or profile encoder. There should have been experiments evaluating (a) how encoder’s performance improves as the # of user samples increases; and (b) how well the encoding generalizes — can it encode unseen user profiles with high fidelity, and can it extrapolate and interpolate? 2. The claim of being the first work that learns latent-code conditioned reward is not correct. [1] and [2] below also learn multi-modal reward. - [1] Guan, Lin, Karthik Valmeekam, and Subbarao Kambhampati. "Relative behavioral attributes: Filling the gap between symbolic goal specification and reward learning from human preferences." ICLR 2023 - [2] Wu, Zeqiu, et al. "Fine-grained human feedback gives better rewards for language model training." NeurIPS 2023 3. The setup of the LLM experiment is quite simple -- the number of attributes or dimensions of preferences is very limited. 
While the feasibility of attribute-conditioned reward modeling has been demonstrated in previous works [1,2], this work doesn't significantly extend beyond them in terms of scalability. 4. One missing piece in the experiment is an analysis of the latent code's negative impact on the language model. An important feature of language models, especially LLMs, is their versatility. They are not supposed to only answer questions related to pets. One question that needs to be answered is whether conditioning on a pets-related latent code would lead to catastrophic forgetting or distortions in responses to other tasks/questions unrelated to pets. 5. I understand that Section 4.2 discusses the strategy for selecting the most representative state-pair sets during deployment. However, this process seems quite costly as it requires multiple full passes through the dataset. A complexity analysis should have been included. 6. Real-world user profiles and preference data are often unbalanced. The imbalance may affect the approach's effectiveness. Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weakness section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the Weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the useful feedback. We appreciate that the reviewer recognizes the importance of the problem, and the realistic setup introduced to study it. The reviewer raised useful questions about the effectiveness of the user encoder with varying labels, the generalization performance, and the scale of experiments. Overall, the concerns raised are very interesting and we provide additional experiments to demonstrate the usefulness of our method in real-world settings. We respond to all the concerns individually below: > **[1] and [2] below also learn multi-modal reward** Thank you for pointing us to these references. While these methods present conditional reward models, we highlight the specific differences between our work and the references. We will update our related work section to contrast our contribution with these works as follows: [1] presents an approach to **learning a reward model conditioned on a feature vector** that represents the preferences for the desired behavior. Further, in [1] the users provide feedback on the individual features of a trajectory. However, in our work, we do not assume access to such explicit features. Instead, we propose a method to encode and predict latent distributions directly from binary preferences in an unsupervised setting. **Moreover, [1] does not include experiments on language models and only validates the method on simulated locomotion experiments, while we scale our approach to state-of-the-art preference datasets for LLMs, along with diverse and realistic simulated experiments.** [2] introduces a method to use dense rewards over multiple specific attributes to align LLMs. **This treats the problem of pluralistic alignment as a multi-objective RL problem**, i.e., learning different reward models for each user or objective individually. In our work, by contrast, we aim to encode and model the multi-modal user preferences, and then condition the same model, i.e., steer it to align with the particular user.
> **“this work doesn't significantly extend beyond them in terms of scalability.”** In addition to the differences highlighted above, we also scaled our experiments to demonstrate VPL. Following this, we include five additional experiments in the rebuttal PDF: 1. In Table AM:1, **we scale the language experiments to the UltraFeedback dataset with four users.** In these new experiments, we go beyond the prior work [5] that includes two users (harmlessness and helpfulness), and include all four attributes in the UltraFeedback dataset. By using all available attributes in UF as distinct users, we have created a challenging benchmark for pluralistic alignment from existing real RLHF language datasets. 2. Our method is more general and focuses on inferring the (potentially unknown number of) different preferences without explicit user information and learning a latent conditioned model to adapt to the particular end user. Meanwhile, [1] uses explicit features for modeling different users or goals, and [2] learns different reward models for the different features. **The probabilistic nature of VPL enables us to do active learning as well (Figure 5 in original submission), which makes this a more efficient model for diverse preferences.** 3. In Figure AM: 1, we expand our robotics experiments to additional tasks in the Habitat simulator. We include a new Habitat-Tidy task inspired by TidyBot [7], that presents a realistic setting for diverse users in the real world. 4. To test VPL with a large number of users, in Figure AM: 4, we demonstrate **that it outperforms the baselines in the Habitat-Rearrange task (when scaled to ~ 100 users)**. 5. We also show that VPL can **generalize to new users** at test-time i.e. adapt the model to align with unseen goals in Figure AM: 3. Overall, the additional experiments and key differences highlight that our proposed approach presents a scalable and general method for pluralistic alignment over the included references. 
In summary, [1,2] focus on learning personalized policies or reward models given the explicit classes or users. Further, these methods are tested only on language or control experiments. In our work, **we present an initial step towards learning a probabilistic framework to predict the latent user distribution directly from binary comparisons for language models and learn personalized policies that are tested in dynamic and realistic control settings.** > **“how encoder’s performance improves as the # of user samples increases”** We present additional experiments, where we vary the number of labels at test-time, for LLMs (in Figure AM:2) and control experiments (in Figure 5 in the original submission). **They show that the performance of the model improves with the number of labels, reaching peak performance at 8 labels**. However, **actively selecting the queries could allow the model to perform similarly with fewer queries** (see Section 4.3 and Figure 5). Furthermore, using more labels makes the model robust to small noise levels, as observed in Figure AM:2. > **“how well the encoding generalizes”** To test the generalization of VPL to new users, we include experiments in Figure AM:3, where the reward model and policy are trained on users preferring 10 different goals in the maze, while at test-time the agent interacts with **users preferring 5 unseen goals that are sampled in-distribution**. We compare the performance to an oracle baseline that uses the exact goal information during training and test time, to analyze how well the encoder can infer the distribution over unseen users.
**In Figure AM:3, we observe that VPL significantly outperforms the baselines, and has performance on par with the oracle baseline.** This demonstrates that VPL can interpolate between different users, even slightly outperforming the oracle that is trained on only the set of 10 goals, while VPL uses the prior to optimize the policy over the set of all possible in-distribution goals. --- Rebuttal 2: Title: Continued Rebuttal Comment: > **“This process seems quite costly as it requires multiple full passes through the dataset. A complexity analysis should have been included.”** Thank you for this comment; we would like to clarify **that our approach doesn't require multiple passes over the entire dataset for each new user**. In our active inference techniques, we use a sampling-based method inspired by [3] to actively query the model. Given a dataset $D$ of queries $(s^i_A, s^i_B)_{i=1}^{|D|}$, we sample $S$ query batches of size $Q$, where $Q$ is the number of annotated samples per batch we get from a user. Here, $Q \in [2,8]$, **so we need to perform $O(S \cdot Q)$** passes over the model with batch size $2^Q \sim [4, 256]$. Furthermore, this process only needs to be performed once after the model is trained to obtain the most discriminative set of queries for the given model. Finally, whenever a new user interacts with the system, we need to get labels on the actively inferred queries (usually 2-4) but do not require any additional passes over the query dataset. > **“The imbalance may affect the approach's effectiveness.”** Thank you for your attention to this detail. In our control and language experiments, the control experiments and the pets dataset in particular are imbalanced, i.e., the preference dataset contains an unequal number of preferences from individual groups or users. As a result, the baselines can achieve > 50% accuracy, converging to the preferences of the majority user groups.
So, **VPL works in the presence of imbalanced datasets.** While recent works [7, 8] focus on achieving Pareto-optimal performance across the groups, VPL treats each user individually via latent conditioning and does not suffer from this problem. VPL is able to personalize to the minority groups as well (see Figure 11 in Appendix A). > **One missing piece in the experiment is an analysis of the latent code's negative impact on the language model. An important feature of language models, especially LLMs, is their versatility. They are not supposed to only answer questions related to pets. One question that needs to be answered is whether conditioning on a pets-related latent code would lead to catastrophic forgetting or distortions in responses to other tasks/questions unrelated to pets.** Thank you for raising this insightful question. VPL introduces a latent bottleneck during the reward learning process, but since the base architecture of the policy and the reward model does not change otherwise, we do not believe this should majorly affect the performance of the model. We will leave a thorough investigation of this question to future work, but will note this as a potential limitation in the limitations section. -- [1] Guan et al. (2023). Relative behavioral attributes: Filling the gap between symbolic goal specification and reward learning from human preferences. [2] Wu et al. (2023). Fine-grained human feedback gives better rewards for language model training. [3] Sadigh et al. (2017). Active Preference-Based Learning of Reward Functions. [4] Peng et al. (2024). Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input. [5] Siththaranjan et al. (2023). Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF. [6] Ivison et al. (2023). Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. [7] Boldi et al. (2024). 
Pareto-Optimal Learning from Preferences with Hidden Context [8] Chakraborty et al. (2024). MaxMin-RLHF: Towards an equitable alignment of large language models with diverse human preferences. --- Rebuttal Comment 2.1: Comment: I thank the authors for their detailed response. First, I would like to point out that the rebuttal exceeded the character limit by being posted as an Official Comment instead of as a Rebuttal. I hope the authors can better adhere to the conference policy and respect the time of reviewers. While I don't think the rebuttal adequately addresses my concerns, I still find that the upsides of this work outweigh the downsides. Therefore, I would like to maintain my current positive rating. --- Reply to Comment 2.1.1: Comment: We would like to thank the reviewer for their response. We apologize for the response length and will be considerate of that in the future. Could the reviewer please point us to specific questions to expand upon? This would help us address their concerns more effectively.
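The active query-selection procedure described earlier in this thread (sample $S$ candidate batches of $Q$ queries, evaluate each batch over its $2^Q$ possible labelings, keep the most discriminative batch) might be sketched as below. The toy "encoder" and the variance-based discriminativeness score are hypothetical stand-ins for illustration only; the actual criterion follows the sampling-based method of [3].

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a pool of query pairs and a toy "encoder" that maps an
# annotated batch to a posterior mean (the real encoder is the trained MLP).
num_pool, state_dim, d = 50, 4, 8
pool = rng.normal(size=(num_pool, 2 * state_dim))  # each row: (s_A, s_B)
proj = rng.normal(size=(2 * state_dim + 1, d))

def posterior_mean(batch, labels):
    # Toy encoder: average the projected annotated pairs.
    annotated = np.hstack([batch, labels[:, None]])
    return np.tanh(annotated @ proj).mean(axis=0)

def select_queries(pool, S=20, Q=3, rng=rng):
    # Sample S candidate batches of Q queries; score each by how spread out the
    # posteriors of its 2^Q possible labelings are (a discriminativeness proxy).
    best_score, best_batch = -np.inf, None
    for _ in range(S):
        idx = rng.choice(len(pool), size=Q, replace=False)
        batch = pool[idx]
        means = np.array([
            posterior_mean(batch, np.array(lab, dtype=float))
            for lab in itertools.product([0, 1], repeat=Q)  # 2^Q labelings
        ])
        score = np.var(means, axis=0).sum()
        if score > best_score:
            best_score, best_batch = score, idx
    return best_batch

queries = select_queries(pool)
```

This makes the cost structure visible: $S$ batches, each requiring $2^Q$ encoder passes, done once after training rather than per user.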
Summary: This paper introduces a new framework for preference learning which tailors to user preferences. Variational Preference Learning (VPL) learns a latent reward / preference for each user from human feedback at test time. They further show potential applications of techniques from active learning and uncertainty estimation to the framework. Strengths: - Paper is generally well-written and clear - The research problem is very well motivated -- personalization to preferences that go beyond a universal notion of a single preference function. - Method itself is novel and well-formulated to match the problem. Weaknesses: - My main concern lies in evaluation outside of designed control environments. To truly test the quality of a Reward Model (RM), one needs to show the downstream policy performance benefiting from improvements in the RM. These results are not present currently in Section 7. The issue of reward variance, as discussed in Section 4.1, may require further design decisions coupled with the optimization algorithm (PPO, REINFORCE, RLOO, online contrastive losses like DPO, etc.). Previous work has studied the general issue of gradient variance in RLHF (which is directly related to reward variance through the REINFORCE estimator) which suggests that this may not be an issue [1]. [1] Ahmadian et al. "Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs" Technical Quality: 4 Clarity: 3 Questions for Authors: Suggestions: - Having a single-modal dataset, such as summarization, would help further ground the work and increase the experimental depth. The expectation is that we shouldn't see many benefits when using VPL compared to traditional RLHF. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations have been specified by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s feedback and are glad that they agree with the motivation of the problem and the novelty of the approach. We address the concerns raised as follows: > **“Having a single-modal dataset [...] would help further ground the work and increase the experimental depth.”** This is an interesting analysis and we thank the reviewer for suggesting this. We ran additional experiments on the UltraFeedback dataset, where we considered the preferences of a single user (preferring the model to be “honest” over all other attributes) to analyze the single-modal case as suggested. **The standard BTL model gives a 77.04% eval accuracy while our VPL model gives a 77.16% eval accuracy. Our model matches the baseline performance, indicating that there is no drop in performance when using VPL compared to traditional RLHF over an unimodal dataset.** > **“evaluation outside of designed control environments.”** We present experiments on learning policies using VPL reward modeling in simulated control domains, which we have expanded and made more realistic in the rebuttal PDF. We acknowledge that our language experiments are focused on preference modeling, but we primarily follow and compare to prior work, which also focuses on preference modeling from diverse datasets alone [1, 2], without including experiments for LLM fine-tuning. We believe that preference modeling alone is interesting, as modeling diverse users may also provide increased interpretability in highlighting potential reasons for preferences [3, Sec. 2]. We believe that further exploring how to best apply VPL to downstream tasks and larger, noisier datasets is an interesting and exciting avenue for future research. Additionally, prior work has shown that improving RM performance can yield improved downstream performance, both when used in RLHF training [4, 5, 7] and when used in best-of-N settings [5, 6]. -- [1] Siththaranjan et al. (2023).
Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF. [2] Zhao et al. (2023). Group Preference Optimization: Few-Shot Alignment of Large Language Models. [3] Sorensen et al. (2024). A Roadmap to Pluralistic Alignment. [4] Shen et al. (2023). The Trickle-down Impact of Reward (In-)consistency on RLHF. [5] Gao et al. (2022). Scaling Laws for Reward Model Overoptimization. [6] Ivison et al. (2024). Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback. [7] Meng et al. (2024). SimPO: Simple Preference Optimization with a Reference-Free Reward.
Rebuttal 1: Rebuttal: [Disclaimer: All Figures and Tables referred to as **AM:X** are in the **A**dditional **M**aterial.] We thank the reviewers for their careful reading and constructive feedback. We appreciate that all reviewers recognize the importance and relevance of the problem of pluralistic alignment in preference learning. All reviewers agree that our realistic setup and experiments show the failure of the baselines, and our proposed method is an interesting and novel approach to the described problem. Reviewers (hkQw, Z9tQ, KQbQ) had concerns about the scale of experiments for LM preference modeling and evaluation of our method in diverse environments. To address these concerns, we have conducted 7 additional experiments; the results of 5 are included in the rebuttal PDF, and additional results requested by reviewers hKQW and PuPi are included in the response to those reviewers. Below, we provide further context on the 5 experiments in the rebuttal PDF. > **Scaling language model results to state-of-the-art preference datasets. (Reviewers hKQW, Z9tQ, KQbQ)** In Table AM:1, we present additional results while scaling VPL to a version of the UltraFeedback dataset, where we consider users preferring all four diverse attributes. Here, as in prior work [1], we treat the rating attributes (harmless, helpful, etc.) as distinct users. **Going beyond prior work([1] considers only two users), we scale to 4 users, and test whether we can steer the reward model to personalize to the particular user.** As reviewer hkQw mentioned, this further presents a **larger and more complicated dataset** (that is comparable in scale to prior works[1,2,3,4]) to infer the user type and predict the downstream rewards. At the time of writing, the standard and best available RLHF datasets used in prior work on preference modeling are HH-RLHF [5] and UF [6]. 
By using all available attributes in UF as distinct users, we have created a challenging and scaled benchmark for pluralistic alignment from existing real RLHF language datasets. In Table AM:1, we observe that VPL outperforms all the baselines in terms of accuracy, leading to a 10% improvement. VPL performs comparably to an Oracle baseline with explicit user information. **This highlights the ability of VPL to scale to larger datasets with more users, and richer prompts and responses.** > **Scaling to dynamic and realistic home environments. (Reviewers Z9tQ, KQbQ)** To apply VPL to a set of complex control environments, and practical settings for robots interacting with diverse users, **we include additional experiments in the Meta Habitat Simulator [7].** Inspired by TidyBot [8], we include experiments in a realistic home environment, where the robot has to pick and place objects around the house, personalizing the cleanup based on user preferences (particularly, storing kitchen goods based on their attributes, i.e., putting plastic items together and metal items together, or doing so for tableware/kitchenware). VPL is able to infer the user’s preferences and steer the robot to the desired behavior with a 30% higher accuracy. Meanwhile, the baselines collapse to a single solution that doesn't cater to the diverse users, as shown in Figure AM:1. **We believe this additional experiment illustrates both the need for personalized preference learning for applications like autonomous household robotics and the fact that VPL is a promising method for achieving it at scale.** > **Scaling to many (∼100) users. (Reviewers hKQW, KQbQ)** Currently, the UltraFeedback dataset with four users provides us with one of the largest and most diverse datasets for aligning models, to the best of our knowledge.
So, to test the effectiveness of our approach in the presence of a larger number of users (not bounded by datasets), in Figure AM:4, **we include simulated experiments on a new Habitat-Rearrange task, where we have ~ 100 users.** Each user has a ranking over five different locations preferred for storing objects in their home. Here, the space of users and variables is combinatorial (as the total possible orderings are 5!). **This presents a challenging benchmark with a much larger number of users and hidden variables, and in Figure AM:4, we show that VPL consistently outperforms the baselines by ~ 30%** in aligning to user-preferred locations for the rearrangement task. > **Effect of noise and number of queries from users at test-time. (Reviewers hKQW, KQbQ)** In Figure AM: 2, we evaluate VPL in the presence of irrational users i.e. how does VPL perform when the users are noisy or inconsistent across their preferences? To simulate this, we progressively add more noise to the context i.e. flip the preference labels in the context with increasing probability. We also test the effect of the number of labels provided by a user, on the downstream performance. We make the following observations: 1. The performance of **the model degrades with increasing noise, however, VPL still significantly outperforms the baselines** with noisy and inconsistent preference labels from the users. 2. At 50% noise the context labels provide equal information to the encoder about all users. Consequently, it fails to identify the particular user and the performance coincides with the baseline (that has no mechanism for personalization). 3. **Eliciting more preference labels from users generates better latent estimates and improves the accuracy of the preference model.** 4. Additionally, it also makes the performance robust at high levels of noise. Overall, **VPL provides reasonable performance with as low as ~ 2 queries,** and we report the best performance at ~ 8 test-time queries. 
Overall, these results show that VPL is able to "handle irrational users (i.e., users whose preferences are inconsistent across context/responses)”, as requested by reviewer hKQW. Further, this also shows that VPL is test-time data-efficient (i.e., it shows reasonable performance with 2 user queries). Pdf: /pdf/5a53cd69550bc0ee1e9eaaf31606897661ee36f7.pdf
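The noise model used in the robustness experiments above (flipping context preference labels with increasing probability) is easy to reproduce; the sketch below is illustrative, with an arbitrary dataset size and seed.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_labels(y, flip_prob, rng):
    # Flip each binary preference label independently with probability flip_prob.
    flips = rng.random(len(y)) < flip_prob
    return np.where(flips, 1 - y, y)

y = rng.integers(0, 2, size=10_000)  # a simulated user's context labels
for p in (0.0, 0.25, 0.5):
    agreement = np.mean(corrupt_labels(y, p, rng) == y)
    # In expectation, agreement = 1 - p; at p = 0.5 the context labels carry no
    # information about the user, matching the observation above.
    print(p, round(float(agreement), 3))
```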
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Road Network Representation Learning with the Third Law of Geography
Accept (poster)
Summary: This paper proposes a novel framework for learning road network representations. Different from previous approaches, the proposed framework leverages a particularly designed graph contrastive learning method to integrate the Third Law of Geography into the process of road network representation learning, which can significantly alleviate the current shortcomings. In both classical and newly identified downstream tasks of road representation learning, the proposed framework achieves significant improvements compared with other state-of-the-art approaches. Strengths: S1. Novelty: The idea of integrating the Third Law of Geography in learning road network representations for urban computing tasks is novel. S2. Technical solidity: The geographical law considered by the authors is mathematically modeled in the forms of graph augmentation and negative sampling, which is then integrated into the learning process of road network representation through a particularly designed graph contrastive learning framework. The proposal of this computational method rests on a non-trivial theoretical background. S3. Comprehensiveness: The empirical validations of the proposed approach are comprehensive. Besides verifying the proposed approach with classical tasks of urban computing, new learning tasks closely related to real scenarios are further identified by the authors and used in the experiments. S4. Effectiveness: Compared with existing approaches, the proposed method achieves significant improvements on all downstream tasks. Weaknesses: W1. This paper lacks detailed explanations of using the Third Law of Geography in road network representation learning. The authors are suggested to give more explanations or examples to show the significance of using the Third Law. W2. The spectral negative sampling is effective but may not be easy to understand for non-experts. The authors are encouraged to provide more explanations of this technique. W3.
In experiments, this paper only presents improvements in metrics, without more intuitive results. The authors are encouraged to give more discussions or empirical analysis of the implication of the Third Law on representations. W4. The presentation can be improved. Also, there are some typos in the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: As stated in “Weaknesses,” I have the following questions. Q1. Are there more detailed explanations or examples that can show the significance of the Third Law of Geography in road network representation learning? Q2. Are there more easy-to-understand descriptions/explanations of the proposed spectral negative sampling method? Q3. Are there case studies or discussions in the experiments that can show the implication of the Third Law of Geography in road network representation learning? Q4. Since this work is based on the Third Law of Geography, it seems it is similar to context-aware spatiotemporal learning. The authors are encouraged to discuss the correlations and distinctions between context-based ST learning and Third Law of Geography based learning. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the potential limitations but not sufficiently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions. We apologize for any confusion. We are pleased to inform you that all concerns have been addressed. Below are our responses to each comment. Responses to Weaknesses: 1. To illustrate the Third Law of Geography, consider two roads surrounded by residential buildings. Even if the two roads are located very far apart, as long as they have very similar geographic configurations, they should still have similar representations according to the Third Law. We will include these discussions in the final version of the paper to provide readers with an easier understanding of the proposed method. 2. In the "Spectral Negative Sampling" section, we design a negative sample, as opposed to the positive sample (the graph augmented according to the Third Law of Geography), for better contrastive learning. The positive sample connects nodes with very similar geographic configurations. Thus, if we design a negative sample which mainly connects roads with dissimilar geographic configurations, then by discriminating the anchors (the original graph) from the negative sample, we can enhance the representations and achieve that "roads with dissimilar geographic configurations should have dissimilar representations". We then design the specific form of the negative sample, which is inspired by the sparsest cut problem (Equation 8). In particular, it could be a complete graph with a proper graph encoder (as in Section 4.5). However, leveraging the complete graph as the negative sample is very time- and space-consuming (even with subgraph sampling). We therefore apply spectral graph sparsification to effectively approximate the complete graph, which results in a d-regular graph. Finally, we design a d-regular graph as the negative sample, with a properly designed graph encoder. 3. Following the reviewer's suggestion, we conducted a case study comparing the First and Third Laws of Geography.
The case study is included in the attached PDF under "Author Rebuttal by Authors" and will be added to the final version of the paper. - We randomly selected an anchor road, computed its representation similarity with other roads (according to cosine similarity), and displayed the top 10 most similar roads. The anchor road is red, the top 10 similar roads by the First Law are blue, and those by both laws are orange. - With only the First Law, similar roads are much closer to the anchor. With both laws, similar roads are farther but have similar geographic configurations, as shown by comparing street view images. This demonstrates that the Third Law ensures similar representations for roads with similar configurations, regardless of distance. 4. Thanks for your suggestion. We will polish and double-check the writing. Response to the Questions: 1. (Refer to response to weakness 1) 2. (Refer to response to weakness 2) 3. (Refer to response to weakness 3) 4. Following the suggestion of the reviewer, we discuss the correlations and differences between: (1) context and geographic configuration; and (2) context-aware spatiotemporal learning and the Third Law of Geography. - (1) **Context** is usually defined as some contents or attributes of a spatial entity, such as the spatial context of a road segment in [1] and the context of a POI [3], where the spatial context is still some attributes of a road. "**Geographic configuration**" is defined as "the makeup and the structure of geographic variables over some spatial **neighborhood** around a point," in A-Xing Zhu's paper [2] on the Third Law of Geography. In our problem, the geographic configuration of a road segment includes both its features and the features of its **neighborhood**, e.g., its surrounding buildings, natural environments, regions, etc. Street view images are very good proxies to describe those.
In a word, the major difference is that the term context, in the literature on spatiotemporal learning, focuses only on a spatial entity itself, while **geographic configuration** concerns both the attributes of the spatial entity itself and its spatial neighborhood. - (2) **Context-aware learning**, inspired by word2vec [4], encourages entities within a context to have similar representations, which is closer to the First Law of Geography, "everything is related to everything else, but near things are more related than distant things". In contrast, the **Third Law of Geography** states that "The more similar geographic configurations of two points (areas), the more similar the values (processes) of the target variable at these two points (areas)." It maps the similarity relationship of geographic configurations to the similarity relationship of the target variable (i.e., road representations in our paper). We will include the above discussions in the final version of the paper. --- [1] Robust Road Network Representation Learning: When Traffic Patterns Meet Traveling Semantics. In CIKM '21. [2] Spatial prediction based on Third Law of Geography. Annals of GIS, 2018. [3] How is the Third Law of Geography different? Annals of GIS, 2022. [4] Distributed representations of words and phrases and their compositionality. In NeurIPS 2013. --- Rebuttal 2: Title: Additional Questions Comment: Dear Reviewer cFij, Thanks very much for providing the constructive and motivating feedback! Can you please let us know whether we have addressed all your questions and whether you have any additional feedback? Thank you! --- Rebuttal Comment 2.1: Title: Thanks for rebuttal Comment: I would like to thank the authors for their detailed responses, and my concerns are well addressed.
Summary: This paper introduces a novel framework for road network representation learning, with the key innovation being the incorporation of the Third Law of Geography. This concept is implemented through a tailored graph contrastive learning objective, featuring geographic configuration-aware graph augmentation and spectral negative sampling. To further preserve geographic proximity (the First Law of Geography), which has been proven crucial in existing literature, the authors further propose a dual contrastive learning strategy to accommodate both modeling perspectives. Experimental results on datasets from Singapore and New York City demonstrate significant improvements over several baseline methods in road function prediction, road traffic inference, and visual road retrieval tasks. Strengths: 1. The paper addresses a significant problem in road network representation learning, which has important implications for real-world applications such as traffic management and urban planning. 2. The core idea of the paper is grounded in established geographical theory, and the incorporation of street view images as a supplementary data source enhances the implementation of this concept. 3. The authors demonstrate awareness of scalability issues and incorporate sampling techniques in their method design, enabling efficacy in large-scale road networks. 4. The paper presents comprehensive experimental validations of the model's superiority over three downstream tasks. Weaknesses: 1. The research challenges identified by the paper lack depth. In particular, the first challenge is insufficiently articulated, equivalent to stating that "generating road representations that preserve the Third Law of Geography is challenging". 2. A significant weakness of the paper lies in the limited novelty of its ideas and methodological design: a. 
While existing literature may not explicitly introduce the concept of the Third Law of Geography, several works have already considered the similarity of road segments in regions with similar functionalities [1,2,3]. Moreover, in the broader field of spatio-temporal data mining, several studies have considered the hierarchical structure of urban road networks [4,5]. b. The simple concatenation of street view image representations with other road features lacks sophistication. Images and other road attributes exist in different modalities, potentially introducing multi-modal data fusion issues that may impede the effective encoding of street view images into road representations. c. The Simple Graph Convolution (SGC), its theoretical interpretation, and graph contrastive learning have already been well-developed in previous research. d. The fusion of the third law and the first law is presented as one of the two major challenges the paper aims to address. However, the proposed solution merely combines two contrastive learning tasks without a thorough discussion of potential conflicts between them. e. Although the paper considers scalability issues, the proposed sub-sampling solution is relatively straightforward. 3. The paper lacks detailed discussions on the universality of the third law and the first law, particularly in large-scale road networks, raising concerns about the practical applicability of the method. Besides, merely mentioning this in the limitations section is insufficient. 4. The second paragraph at the beginning of section 4 contains confusing notations that are not previously introduced. 5. Experimental design: a. As the paper's solution to improve scalability is straightforward, the baseline methods should also be equipped with the same graph sampling techniques to ensure a fair comparison. b. The paper lacks case studies that demonstrate the effectiveness of integrating street view images. 
Relying solely on ablation studies is inadequate for this purpose. c. The paper does not use weighted average of the two contrastive learning tasks, which is important as the first law and the third law might conflict in certain scenarios. Along this line, experiments on the trade-off between the two contrastive learning tasks should also be conducted. [1] Pre-training Context and Time Aware Location Embeddings from Spatial-Temporal Trajectories for User Next Location Prediction. In AAAI 2021. [2] Pre-training local and non-local geographical influences with contrastive learning. Knowledge-Based Systems 2023. [3] Pre-training Contextual Location Embeddings in Personal Trajectories via Efficient Hierarchical Location Representations. In ECML-PKDD 2023. [4] Semi-Supervised Hierarchical Recurrent Graph Neural Network for City-Wide Parking Availability Prediction. In AAAI 2020. [5] GPT-ST- Generative Pre-Training of Spatio-Temporal Graph Neural Networks. In NIPS 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why is the spectral negative sampling strategy only applied to the contrastive learning task corresponding to the third law? How does the derivation of this strategy in section 4.5 relate to the mutual information maximization in equation (5)? Would it not be more intuitive to design an additional contrastive learning task to focus on encouraging the node representations of the augmented graph to satisfy both the third law and its inverse version? 2. Why does the inverse version of the third law in section 4.5 hold? For instance, two roads with dissimilar geographic configurations might still exhibit spatial proximity, suggesting that their representations should remain similar. 3. What proportion of roads in the dataset are associated with street view images? How does the model perform in geographic regions where (a) street view images are limited or of poor quality, or (b) the two geographic laws do not hold? 4. 
Compared to widely used Points of Interest (POI) data, what are the unique advantages of street view images in this context? 5. What are the specific experimental settings for the road traffic prediction task? If the objective is to predict future traffic variations, how do static road network representations contribute to this task without considering the dynamic correlations among road segments? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed comments and suggestions. We apologize for any confusion. We are pleased to inform you that all concerns have been addressed. Below are our responses to each comment. As the reviewer has many concerns, we provide extended discussions to address them. Response to Weaknesses: 1. We analyze the meaning of the Third Law and point out that incorporating it into road network representation learning is very beneficial. This challenge stands in contrast with the First Law, which emphasizes the importance of the distance effect. The Third Law focuses on mapping relationships between the geographic configurations of geospatial entities (road segments) to relationships between target variables (representations). It is more challenging to learn a mapping of relationships than a mapping of values. More explanations can be found in our response to "Weakness 2" below. 2. We argue for the **novelty** of our idea and method as follows. In general, the novelty of our paper is twofold: (1) we introduce the Third Law of Geography for road network representation learning; (2) we design novel methods to learn the Third Law in graph neural networks (GNNs). In particular, our **novel designs** include geographic configuration-aware graph augmentation and spectral negative sampling. The responses to the detailed questions are listed as follows. - a. The term "geographic configuration" does not refer to the functionalities of regions, and the Third Law of Geography is not related to hierarchical structures in cities. In particular, the definition of "geographic configuration" in [1] is "the makeup and the structure of geographic variables over some spatial **neighborhood** around a point", which goes far beyond the functionalities of regions. The street view images (SVIs), which provide the visual features of a road, also include important descriptions of a road.
For example, the surface condition of a road and the number of lanes (which are not available for the majority of OSM data). Also, the geographic configuration includes descriptions of a point itself and its neighborhood. Street view images describe both the visual features of a road and its **neighborhood**, e.g., its surrounding buildings, regions, etc. Thus we think street view images are a good proxy, among currently available data, for describing the geographic configuration of a road. Besides, our work is **not** related to hierarchical structures in a city. However, we would also like to add another paragraph to the "Related Work" section to discuss the literature that the reviewer mentions. - b. For multi-view / multi-modality fusion, our design is not a direct concatenation of different data. Our designs include: (1) projecting different data into another space with learnable parameters and then concatenating them (as illustrated in Section 4.1); (2) fusing information from multiple views with mutual information maximization. The first design (joint representation) is widely adopted in multi-modality fusion [8], while the second design is inspired by [9]. Both [8] and [9] demonstrate that information from different views can be fused properly in our method. - c. Our main contributions, as listed at the end of the Introduction of the paper, are: (1) the introduction of the **Third Law of Geography** to road network representation; (2) **geographic configuration-aware graph augmentation** and **spectral negative sampling** in the graph contrastive learning framework. These are our **original** designs and have never appeared in previous literature. As our proposed method is based on graph contrastive learning and SGC, which are preliminaries of our method, we include this material in the method section for readers who are not familiar with these topics.
Besides, the theoretical interpretation of SGC is closely related to the implementation of the Third Law of Geography, and thus we discuss why SGC can achieve our goal in Section 4.3. We believe that bridging existing theories to real-world applications is also very important and can extend the scope of existing theories. - d. We have made two efforts to fuse the First and the Third Law (the weight-tuning results are listed in the comments): - The two losses have some shared parameters (module $g_{\theta_0}$), which can learn the consensus, while the modules that do not share parameters ($g_{\theta_1}$ and $g_{\theta_2}$) learn the discrepancy (potential conflicts). The final representation is the aggregation of the outputs from the three modules. - We train the two losses jointly. For joint training, in our preliminary experiments, we tried some multi-objective optimization techniques to balance the two terms. Specifically, we tried the Pareto optimum [1] in neural networks to adaptively choose proper weights to balance the two terms. However, we found that (1) with this technique, the results do not improve significantly; and (2) the optimal weights for the two terms vary from (0.4, 0.6) to (0.6, 0.4), very close to (0.5, 0.5). These are the reasons why we do not tune the weights and state in the last paragraph of Section 4.6 that "Therefore, we do not introduce additional hyper-parameters to adjust their weights." - We also report some empirical results on tuning the weights of the two losses; the results on the road function prediction task are listed as follows. We introduce another hyper-parameter $\beta$ to balance the loss of the Third Law ($\mathcal{L}_1$) and the First Law ($\mathcal{L}_2$): $\mathcal{L} = \beta \mathcal{L}_1 + (1 - \beta) \mathcal{L}_2$. The results show that the learning performance is not sensitive to the weight $\beta$. - e.
Our paper does not focus on scalability issues in road network representation learning, nor do we claim the scalability issue as a contribution. We provide a sub-sampling trick in our implementation, which works well within our framework. --- Rebuttal 2: Title: [Rebuttal by Authors - 2] Results of Tuning the Weights of Losses Comment: **Road Function Prediction on Singapore**

| $\beta$ | Micro-F1 (%) $\uparrow$ | Macro-F1 (%) $\uparrow$ | AUROC (%) $\uparrow$ |
| ------- | ----------------------- | ----------------------- | -------------------- |
| 0.1 | 81.23 $\pm$ 0.36 | 62.03 $\pm$ 0.84 | 92.90 $\pm$ 0.28 |
| 0.2 | 81.48 $\pm$ 0.37 | 62.61 $\pm$ 0.85 | 93.01 $\pm$ 0.24 |
| 0.3 | 81.22 $\pm$ 0.34 | 62.11 $\pm$ 0.62 | 92.91 $\pm$ 0.25 |
| 0.4 | 80.99 $\pm$ 0.30 | 61.50 $\pm$ 0.70 | 92.66 $\pm$ 0.26 |
| 0.5 | 81.40 $\pm$ 0.30 | 62.45 $\pm$ 0.64 | 93.27 $\pm$ 0.22 |
| 0.6 | 81.02 $\pm$ 0.44 | 60.94 $\pm$ 0.85 | 92.69 $\pm$ 0.21 |
| 0.7 | 81.19 $\pm$ 0.37 | 62.26 $\pm$ 0.88 | 92.82 $\pm$ 0.21 |
| 0.8 | 81.26 $\pm$ 0.33 | 61.91 $\pm$ 0.83 | 92.81 $\pm$ 0.23 |
| 0.9 | 81.40 $\pm$ 0.40 | 62.54 $\pm$ 0.72 | 93.05 $\pm$ 0.24 |

**Road Function Prediction on NYC**

| $\beta$ | Micro-F1 (%) $\uparrow$ | Macro-F1 (%) $\uparrow$ | AUROC (%) $\uparrow$ |
| ------- | ----------------------- | ----------------------- | -------------------- |
| 0.1 | 82.83 $\pm$ 0.21 | 46.70 $\pm$ 0.45 | 89.18 $\pm$ 0.21 |
| 0.2 | 82.93 $\pm$ 0.19 | 47.17 $\pm$ 0.37 | 89.24 $\pm$ 0.17 |
| 0.3 | 82.91 $\pm$ 0.20 | 47.25 $\pm$ 0.57 | 89.17 $\pm$ 0.20 |
| 0.4 | 82.86 $\pm$ 0.20 | 47.08 $\pm$ 0.49 | 89.14 $\pm$ 0.19 |
| 0.5 | 82.97 $\pm$ 0.16 | 47.22 $\pm$ 0.42 | 89.30 $\pm$ 0.21 |
| 0.6 | 82.97 $\pm$ 0.23 | 46.80 $\pm$ 0.56 | 89.22 $\pm$ 0.15 |
| 0.7 | 83.04 $\pm$ 0.15 | 47.57 $\pm$ 0.52 | 89.21 $\pm$ 0.20 |
| 0.8 | 82.89 $\pm$ 0.21 | 46.76 $\pm$ 0.47 | 89.03 $\pm$ 0.15 |
| 0.9 | 82.85 $\pm$ 0.23 | 46.49 $\pm$ 0.47 | 88.97 $\pm$ 0.20 |

--- Rebuttal 3: Title: [Rebuttal by Authors - 3]
Response to Weaknesses - Continued Comment: 3. The effectiveness of the Third and First Laws, and the conditions under which they are expected to hold, have been extensively discussed in previous literature [10, 11] in the geographical sciences. Our paper applies the Third Law of Geography to road network representation rather than examining the Third Law itself. We recommend that readers refer to [11] for more details. 4. Sorry for this confusion. The meaning of those three notations is explained here. $\boldsymbol{Z}$ is the output of some GNN. $\boldsymbol{Z}^{[0]}$, $\boldsymbol{Z}^{[1]}$ and $\boldsymbol{Z}^{[2]}$ are three outputs from different GNNs as illustrated in Fig. 1, where the number $i$ in the superscript $\cdot^{[i]}$ indicates that $\boldsymbol{Z}^{[i]}$ is the output of the $i$-th GNN. 5. About the experimental design: - a. Thanks for your suggestion. First, most baselines have their own strategies for tackling scalability issues. These strategies are proposed according to the peculiarities of those methods, so it is **not realistic** to equip each baseline with our sub-sampling technique. Second, other baselines, such as SRN2Vec, have model structures unrelated to sampling, which prevents the sub-sampling trick from being applied to their methods. Third, to explore the best performance of the other approaches, we did not integrate our method with them and used their recommended settings in the experiments. - b. Following the reviewer's suggestion, we conducted a case study comparing the First and Third Laws of Geography. The case study is included in the attached PDF under "Author Rebuttal by Authors" and will be added to the final version of the paper. (1) We randomly selected an anchor road, computed its representation similarity with other roads (using cosine similarity), and displayed the top 10 most similar roads. The anchor road is red, the top 10 similar roads by the First Law are blue, and those by both laws are orange.
(2) With only the First Law, similar roads are much closer to the anchor. With both laws, similar roads are farther away but have similar geographic configurations, as shown by comparing street view images. This demonstrates that the Third Law ensures similar representations for roads with similar configurations, regardless of distance. - c. We have discussed this issue in "2.d." --- Rebuttal 4: Title: [Rebuttal by Authors - 4] Response to Questions Comment: 1. Sorry for the confusion. We indeed use one contrastive learning objective to incorporate the Third Law of Geography. Contrastive learning, with its mathematical support from mutual information maximization [2], requires both positive samples and **negative samples** to learn its loss function (Eqs. (4) and (5)) [3, 4]. Section 4.5 demonstrates how to generate a negative sample for graph contrastive learning. Positive samples provide information that is positive to the anchor (the original graph), while negative samples provide negative information for contrast. By contrasting with both positive and negative samples, models can learn good representations without supervision [5]. Following this principle, we design a positive sample where roads with very similar geographic configurations are connected, and also design a negative sample, in Section 4.5 Spectral Negative Sampling, where roads with very dissimilar geographic configurations are connected. The detailed design of the negative sample is inspired by the sparsest cut problem (in spectral graph theory) and further optimized by spectral graph sparsification. This is why the subtitle of this subsection is "Spectral Negative Sampling". As the spectral negative sampling in this paper is particularly designed to learn the Third Law, we only prepare it for the Third Law. According to the principle of contrastive learning, it is also intuitive to incorporate the inverse version of the Third Law via negative samples. 2.
If the First Law (spatial proximity) and the Third Law (geographic configuration) give different results on similarity, the proposed model can make some trade-offs between the two laws when learning road representations. Possible scenarios include spatially distant roads with similar geographic configurations and spatially proximal roads with different geographic configurations. This is why we need to "jointly consider the Third Law of Geography and the First Law of Geography", as mentioned by Reviewer UWD1. 3. We sample street view images (SVIs) on each road. In practice, more than 95% of roads are associated with SVIs. For those roads without SVIs associated with them, we leverage SVIs within several meters (i.e., SVIs located in the buffer zone of a road) as proxies to represent their geographic configurations. Finally, every road has several SVIs associated with it. Although the scenarios mentioned by the reviewer are not the critical issues tackled by the current work, if the street view data are limited or the dataset contains some samples that do not follow the two geographic laws, the proposed method might face challenges in performing robustly, as other related approaches would. However, these issues can potentially be addressed by designing a new version of Garner endowed with the capability of learning from incomplete or contaminated road network data. In the future, we will keep exploring the effectiveness of the proposed Garner by considering the valuable suggestions raised by the reviewer. We will also include the above discussions in the final version of the paper to provide readers who are interested in the proposed Garner with more possibilities to improve it. 4. The unique advantages of street view images (SVIs) are listed as follows: - **Distribution and Coverage of POIs**: A major issue of POIs in our problem is that they are unevenly distributed in cities, and a large portion of road segments do not have nearby POIs.
In contrast, SVIs cover almost every road in a city, and we can sample SVIs for each road segment. - **Commercial Bias of POIs**: Another issue is that POIs are provided by Internet web maps for map search, and thus their contents are mainly for commercial functions (e.g., restaurants), leading to poor performance in non-commercial regions such as industrial or residential areas [12, 13]. However, there are many roads outside commercial regions, where POIs provide little information. In contrast, SVIs can provide meaningful information across diverse regions. 5. Following previous work [6, 7], "Road Traffic Inference", a standard downstream task to evaluate the effectiveness of road representations, is to predict the average speed of each road, not to predict future traffic. For static representation learning, none of the existing studies on road networks considers the setting of predicting future traffic variations. Developing an advanced model that can effectively capture the dynamic correlations, building on this paper, will be our future work. --- Rebuttal 5: Title: [Rebuttal by Authors - 5] References of Author Rebuttal and Response to Reviewer Comments Comment: [1] Multi-Task Learning as Multi-Objective Optimization. In NeurIPS 2018. [2] MINE: Mutual Information Neural Estimation. In ICML 2018. [3] Representation Learning with Contrastive Predictive Coding. [4] Deep Graph Infomax. In ICLR 2019. [5] Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. In ICML 2020. [6] Robust Road Network Representation Learning: When Traffic Patterns Meet Traveling Semantics. In CIKM 2021. [7] Relational Fusion Networks: Graph Convolutional Networks for Road Networks. IEEE Trans. Intell. Transport. Syst., 2021. [8] Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. [9] Contrastive multi-view representation learning on graphs. In ICML 2020.
[10] Spatial prediction based on Third Law of Geography. Annals of GIS, 2018. [11] How is the Third Law of Geography different? Annals of GIS, 2022. [12] Urban2vec: Incorporating street view imagery and pois for multi-modal urban neighborhood embedding. In AAAI 2020. [13] Knowledge-infused Contrastive Learning for Urban Imagery-based Socioeconomic Prediction. In WWW 2023. --- Rebuttal 6: Title: Additional Questions Comment: Dear Reviewer u7JA, Thanks very much for providing the constructive and motivating feedback! Can you please let us know whether we have addressed all your questions and whether you have any additional feedback? Thank you! --- Rebuttal Comment 6.1: Title: Acknowledgement Comment: I thank the authors for their detailed explanation. However, the responses on the novelty of the method and the argument on scalability are not very convincing. I would like to keep my score. --- Rebuttal 7: Comment: Thanks for your reply. Could you please kindly let us know whether we have addressed some of your concerns? We would like to provide some further clarification on the novelty. In summary, this work is the first attempt to consider the **Third Law of Geography** for the **road network representation learning task**. 1. The meaning of geographic configuration in the **Third Law of Geography** goes far beyond the functionalities of regions. The definition of "geographic configuration" in [6] is "the makeup and the structure of geographic variables over some spatial neighborhood around a point". The makeup and structure of a road contain a lot of information, such as its surrounding buildings, its width, and its surrounding region. The richness of this information cannot be described with a simple "functionality of a region". For example, the height of the buildings surrounding a road influences its traffic [9], but building height is not a functionality of a region. 2.
Existing literature [1, 2, 3, 4, 5] mentions neither the **Third Law of Geography** nor its incorporation into neural networks. The literature listed by the reviewer still only considers the **First Law of Geography**. As illustrated in the Introduction of our paper, the current literature makes the geospatial entities within a context (spatial neighborhoods) similar. In contrast, the Third Law does not consider any spatial distance; it argues that spatial entities with similar geographic configurations should be similar, even if they are very far apart. - [1] considers spatial context conditioned on specific functionality. It still focuses on (conditional) spatial proximity. - [2] considers "incorporat[ing] geospatial proximity as a local geographical influence and relative distance differences as a non-local geographical influence" [2], which still depends on spatial proximity and distance. - [3] considers context, which includes a grid at a larger scale (spatial regions) and sequences (temporal sequences). But it still considers spatial proximity. - [4, 5] consider hierarchical structures in cities, which are not related to our work. - Spatial proximity and distance, as demonstrated in the Introduction of our paper, are depicted by the **First Law of Geography**. In contrast, the **Third Law of Geography** does NOT consider any spatial proximity (including distance) between two target entities; it depends solely on geographic configurations. 3. Our method designs for incorporating the Third Law, **geographic configuration-aware graph augmentation** and **spectral negative sampling**, do NOT consider any spatial proximity or spatial distance between two road segments. This is why our method is able to generate similar representations for roads with very similar geographic configurations, even though they are very far apart. A case study to illustrate this has been included in the attached PDF under "Author Rebuttal by Authors". 4.
The papers that you list do not address the same **research problem** as ours. The research problem in our paper, as stated in the title, is **road network representation learning**. In particular, - [1] solves "next location prediction". - [2] solves the "next POI recommendation" task. - [3] learns "location embeddings" to solve "next location prediction" and "trajectory classification". "Locations", as defined in [3], are grids (or regions) in cities. Grids are polygons. In contrast, a road is a polyline, which does not enclose an area. - [4] solves "parking availability prediction". - [5] solves traffic prediction on sensor networks [7], not road networks. - Our paper solves "road network representation learning", and the downstream tasks include road function prediction, road traffic inference and visual road retrieval. Both our pre-training task and downstream tasks are totally different from the papers you list. - The word "road" does not appear in the main body of [1, 3, 5]. The word "road" hardly appears in [2, 4]. --- [1] Pre-training Context and Time Aware Location Embeddings from Spatial-Temporal Trajectories for User Next Location Prediction. In AAAI 2021. [2] Pre-training local and non-local geographical influences with contrastive learning. Knowledge-Based Systems, 2023. [3] Pre-training Contextual Location Embeddings in Personal Trajectories via Efficient Hierarchical Location Representations. In ECML-PKDD 2023. [4] Semi-Supervised Hierarchical Recurrent Graph Neural Network for City-Wide Parking Availability Prediction. In AAAI 2020. [5] GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks. In NeurIPS 2023. [6] Spatial prediction based on Third Law of Geography. Annals of GIS, 2018. [7] Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In ICLR 2018. [9] Street view imagery in urban analytics and GIS: A review. Landscape and Urban Planning.
Title: Further Clarification of Novelty - 1 --- Rebuttal Comment 7.1: Title: Further Clarification of Novelty - 2 Comment: For the scalability and efficiency issue, we have two designs: subgraph sampling and spectral graph sparsification. The spectral graph sparsification for our spectral negative sampling is our special design and cannot be applied to other methods. 1. The subgraph sampling is to sample a much smaller graph from the huge road network. 2. Even with subgraph sampling, the negative sampling to encourage "the reverse implication of the Third Law: roads with dissimilar configurations should have dissimilar representations" is still very time- and space-consuming. It requires sampling a complete graph, with time and space complexity of $O(|V^{\prime}|^2)$, where $|V^{\prime}|$ is the number of nodes in the sampled subgraph. To improve the efficiency and scalability, we follow the theoretical results of spectral graph sparsification [8] to approximate the complete graph with a $k$-regular graph, which has far fewer edges, with complexity $O(k|V^{\prime}|)$, where $k$ is the node degree; we set $k=6$ in our experiments. It is non-trivial to adapt a theoretical result to negative sampling. 3. The spectral sparsification result in our paper applies only to approximating a complete graph. Thus it cannot be applied to other baselines, which either do not have negative sampling or do not use a complete graph in negative sampling. Thank you again for providing constructive feedback. Can you please let us know whether we have addressed all your questions and whether you have any additional feedback? --- [8] Daniel A. Spielman and Shang-Hua Teng. Spectral sparsification of graphs. SIAM J. Comput.
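To make the edge-count argument above concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation): a generic random $k$-regular graph from networkx stands in for the spectrally sparsified negative sample, illustrating the $O(|V^{\prime}|^2)$ vs. $O(k|V^{\prime}|)$ gap.

```python
import networkx as nx

def negative_sample_graph(n_nodes: int, k: int = 6, seed: int = 0) -> nx.Graph:
    """Approximate the complete graph (n*(n-1)/2 edges) with a random
    k-regular graph (n*k/2 edges) to serve as the contrastive negative sample."""
    return nx.random_regular_graph(d=k, n=n_nodes, seed=seed)

neg = negative_sample_graph(n_nodes=100, k=6)
complete_edges = 100 * 99 // 2        # 4950 edges in the complete graph
sparse_edges = neg.number_of_edges()  # only 100 * 6 / 2 = 300 edges
```

Note that the spectral construction in the paper additionally chooses edges so the sparse graph approximates the spectrum of the complete graph; the generic generator here only matches the degree budget.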
Summary: This paper proposes a novel method for learning representations of road networks. It highlights the limitations of existing methods that primarily use the First Law of Geography, which emphasizes spatial proximity. The authors introduce a new framework, Garner, that incorporates the Third Law of Geography, focusing on geographic configuration similarities. They employ a graph contrastive learning approach with geographic configuration-aware graph augmentation and spectral negative sampling. The framework is evaluated on real-world datasets from Singapore and New York City, showing significant performance improvements in downstream tasks. Strengths: 1. The paper introduces a novel application of the Third Law of Geography, providing a fresh perspective in the field of geographic information systems. The methodology is robust, with thorough theoretical grounding and comprehensive empirical evaluation. 2. The paper provides a solid theoretical basis for its methods. The mathematical proofs are thorough and well-constructed. 3. The proposed approach demonstrates substantial improvements in real-world datasets, highlighting its practical value. Weaknesses: 1. In Section 4.2, the kNN and threshold-based methods for building new connections are highly sensitive to their parameters. This sensitivity greatly impacts the generation of new connections, making it difficult to control and optimize, thereby affecting the model's consistency and robustness. 2. The study is limited to datasets from Singapore and New York City; additional testing on more diverse datasets would enhance the findings' applicability. 3. There is no code or dataset provided. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the proposed method perform on road networks with significantly different characteristics (e.g., rural vs. urban)? As the authors said in section 4.5: "roads with dissimilar geographic configurations should have dissimilar representations." 2.
Are there any examples or case studies demonstrating the practical implementation of Garner in different urban contexts? 3. What are the computational requirements and scalability aspects of the proposed method for larger datasets? For example, if we want to measure the similarity between different cities, we need to use national or global maps. 4. How does the model deal with dynamically changing road networks, such as those undergoing construction or frequent changes? Does the whole model need to be re-trained after the changes? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The authors should consider testing their method on a more diverse set of datasets to ensure broader applicability. 2. Consideration of temporal changes in road networks and how the model adapts to these changes would strengthen the practical application of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
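The kNN-based connection building mentioned in Weakness 1 can be sketched as follows (a hedged illustration with hypothetical inputs, not the paper's code): each node is connected to its $k$ nearest neighbors in an embedding space.

```python
import numpy as np

def knn_graph(embeddings: np.ndarray, k: int = 6):
    """Connect each node to its k nearest neighbors (Euclidean distance
    in embedding space); returns a list of directed (i, j) edges."""
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nbrs = np.argsort(d, axis=1)[:, :k]  # k smallest distances per row
    return [(i, int(j)) for i in range(len(embeddings)) for j in nbrs[i]]

edges = knn_graph(np.random.default_rng(0).normal(size=(20, 8)), k=6)
```

The only free parameter here is $k$, which is the quantity the sensitivity discussion in the rebuttal below refers to.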
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions. We apologize for any confusion. We are pleased to inform you that all concerns have been addressed. Below are our responses to each comment. Response to the Weaknesses: 1. We have conducted a sensitivity test on the hyper-parameter $k$ of the kNN graph, and we find our method is not sensitive to $k$. - For implementation, we choose the kNN graph because of its high efficiency. - For kNN graphs, the hyper-parameter is $k$. Therefore, we conducted comprehensive hyper-parameter sensitivity tests on various datasets and metrics, and the results are reported in Section 5.3 and Appendix C.6. The results show that $k$ does **not** have a significant impact on the performance of various downstream tasks. In other words, our model is not sensitive to the value of $k$. In our main experiments, we also use $k=6$ in all experiments, and this setting produces much better results than all the baselines. 2. Thanks so much for your suggestion. Currently, we only use those datasets because of the limited data sources. Specifically, street view images are very **expensive** and time-consuming to collect, and other sources of data (e.g., road function and traffic speed) are unavailable in most cities. To mitigate the potential issues caused by data availability, we have considered diverse downstream real-world tasks, i.e., road function prediction, road traffic inference, and visual road retrieval, to comprehensively test the proposed method and other baselines. In the future, we will try to collect more datasets for evaluation. 3. Sorry for the confusion; we have uploaded the code in the "Supplementary Material" under "Abstract" on this webpage. Response to Questions: 1. We achieve "roads with dissimilar geographic configurations should have dissimilar representations" by negative sampling in the contrastive learning framework. 
In particular, contrastive learning requires positive samples and negative samples. Thus we design a negative graph $\bar{\mathcal{G}}^{[1]}$ whose edges mainly connect roads with dissimilar geographic configurations. By discriminating the negative sample and anchor, we can achieve "roads with dissimilar geographic configurations should have dissimilar representations." 2. Following the reviewer's suggestion, we conducted a case study comparing the First and Third Laws of Geography. The case study is included in the attached PDF under "Author Rebuttal by Authors" and will be added to the final version of the paper. - We randomly selected an anchor road, computed its representation similarity with other roads (according to cosine similarity), and displayed the top 10 most similar roads. The anchor road is red, the top 10 similar roads by the First Law are blue, and those by both laws are orange. - With only the First Law, similar roads are much closer to the anchor. With both laws, similar roads are farther but have similar geographic configurations, as shown by comparing street view images. This demonstrates that the Third Law ensures similar representations for roads with similar configurations, regardless of distance. 3. For the GPU memory, the proposed method does not require more GPU memory as the number of roads (nodes) grows. Because we use a technique called subsampling (refer to **Sample** in Fig. 1 and the text description from line 207). In particular, for each iteration (forward and backward propagation), we only sample a subgraph with a fixed size of nodes (4000 nodes in our setting). We observe that the GPU memory usage is less than 10GB, and thus the proposed method can run on a single RTX 3090 GPU. In the implementation of graph learning (in a single graph), if the graph is not very large, we usually load the graph into the memory [1]. 
If the graph is extremely large and cannot be loaded into memory, we can first sample the indices of the nodes, and then read the related edges and node features from disk. 4. Thanks very much for raising this interesting question. In the current literature on road network representation, changes in the road network itself have not been considered, because such changes are relatively slow, usually taking several months or years. We also follow their settings. Actually, graph neural networks can be trained on certain graphs and tested on other similar graphs while maintaining competitive results. (For example, the PPI datasets are several graphs of proteins, and learned GNN models are tested on unseen proteins to predict the labels of nodes [2,3].) Thus, if there are only some small changes, we do not need to re-train the model; we just need to update the input data and get the output representation with the same model. If there are significant changes in the road network, we can re-train the model, and re-training is not very time-consuming: we just need several hours to train the model on a Tesla V100 GPU, which is even slower than a single RTX 3090 GPU. Response to Limitations: 1. (Refer to "Response to Weaknesses", 2) 2. (Refer to "Response to Questions", 4) --- [1] https://docs.dgl.ai/generated/dgl.data.DGLDataset.html#dgl.data.DGLDataset [2] Inductive Representation Learning on Large Graphs. In NeurIPS. [3] Simple and Deep Graph Convolutional Networks. In ICML. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I keep my original score. --- Rebuttal 2: Title: Additional Questions Comment: Dear Reviewer ggbs, Thanks very much for providing the constructive and motivating feedback! Can you please let us know whether we have addressed all your questions and whether you have any additional feedback? Thank you!
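The fixed-size subgraph sampling described in the response (a fixed number of nodes per iteration, 4000 in the authors' setting) can be sketched as follows. This is our own illustrative stand-in, not the authors' code; `sample_subgraph` and the toy ring graph are hypothetical.

```python
import numpy as np

def sample_subgraph(edge_index: np.ndarray, num_nodes: int,
                    sample_size: int = 4000, rng=None):
    """Sample a fixed-size node subset and keep only the edges whose
    two endpoints both fall inside it (the induced subgraph)."""
    rng = rng or np.random.default_rng(0)
    size = min(sample_size, num_nodes)
    nodes = rng.choice(num_nodes, size=size, replace=False)
    keep = np.isin(edge_index[0], nodes) & np.isin(edge_index[1], nodes)
    return nodes, edge_index[:, keep]

# toy graph: 10 nodes connected in a ring
ring = np.array([[i for i in range(10)], [(i + 1) % 10 for i in range(10)]])
nodes, sub_edges = sample_subgraph(ring, num_nodes=10, sample_size=5)
```

Because each training iteration only ever sees a subgraph of bounded size, peak GPU memory stays constant as the full road network grows, which is the point made in the response above.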
Summary: This paper introduces Garner, a dual geographic-configuration-aware graph contrastive learning framework for road representation learning. Both the third law of geography and the first law of geography are considered by using street view images, geographic configuration-aware graph augmentation, and spectral negative sampling. The effectiveness of Garner is tested on three downstream tasks: road function classification, road traffic prediction, and visual road retrieval. Strengths: 1. The contributions of the proposed Garner framework are clearly highlighted. 2. The idea of jointly considering the third law of geography and the first law of geography is intriguing. 3. The statistical significance of the first two tasks is listed, which is very good. Weaknesses: 1. IMHO, the emphasis on learning the third law of geography is a bit oversold. Why can the averaging pool of CLIP-image embeddings of SVI along a road represent its geographic configuration? To me, the SVI pooled embedding can be simply treated as the visual feature of each road. 2. In terms of Section 4.2, other than the norm-based measure as the similarity measure, can you do an ablation study on different similarity measures? 3. Some math symbols are not well explained, which makes the paper a bit hard to read. For example, what are L_S, L_K, D_S, D_K? L and D are defined, but the subscripts need to be explained. What are \mathbf{x}_i and \mathbf{x}_j in Equation 7? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Why can the averaging pool of CLIP-image embeddings of SVI along a road represent its geographic configuration? 2. Could you do an ablation study on different similarity measures in Section 4.2? 3. Under Equation 8, you wrote "we design the negative sample based on Z and K, ..., by discriminating positive samples from negatives". Could you explain the intuition why discriminating between positive and negative samples can minimize Equation 8? 
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors need to add a paragraph about the limitations and potential negative societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors thank the reviewer for providing constructive and detailed comments, which may significantly improve this paper. We are pleased to inform you that all your concerns have been successfully addressed. Please see the detailed response to each of your comments listed below. Response to the weaknesses: 1. In A-Xing Zhu's paper [1] on the Third Law of Geography, the term "geographic configuration" is defined as "the makeup and the structure of geographic variables over some spatial **neighborhood** around a point." In our scenario, the target variable is the output representation of roads, and thus we should include variables and features that can effectively describe the roads. The street view images (SVIs), which provide the visual features of a road, also include important descriptions of a road, for example, the surface condition of a road and the number of lanes in a road (which are not available for the majority of OSM data). Also, the geographic configuration includes descriptions of a point itself and its neighborhood. Street view images describe both the visual features of a road and its **neighborhood**, e.g., its surrounding buildings, regions, etc. Therefore, we consider SVIs a suitable proxy to describe the geographic configuration of a road. To use SVIs in our model, we need to: (1) encode the images into vectorized representations; (2) aggregate multiple street view images along each road segment. To this end, our implementation is "the averaging pool of CLIP-image embeddings of SVI along a road." 2. Following your suggestion, we have conducted an ablation study with different similarity measures. As mentioned in the paper, popular choices for the similarity measure include: norm/distance-based similarity, cosine similarity, and the Gaussian kernel. However, the Gaussian kernel $\exp ( -\frac{(a - b)^2}{2 \sigma^2} )$ is a monotonic, and indeed bijective, function of distance. 
Thus we only conduct experiments on norm/distance-based similarity and cosine similarity. The results are listed in the following tables. We observe that, with different similarity metrics, the model achieves very **similar** results.

**Road Function Prediction on Singapore**

| Metric | Micro-F1 (%) $\uparrow$ | Macro-F1 (%) $\uparrow$ | AUROC (%) $\uparrow$ |
| ------ | ----------------------- | ----------------------- | -------------------- |
| norm | 81.40 $\pm$ 0.30 | 62.45 $\pm$ 0.64 | 93.27 $\pm$ 0.22 |
| cosine | 81.30 $\pm$ 0.34 | 62.26 $\pm$ 0.56 | 92.94 $\pm$ 0.21 |

**Road Function Prediction on NYC**

| Metric | Micro-F1 (%) $\uparrow$ | Macro-F1 (%) $\uparrow$ | AUROC (%) $\uparrow$ |
| ------ | ----------------------- | ----------------------- | -------------------- |
| norm | 82.97 $\pm$ 0.16 | 47.22 $\pm$ 0.42 | 89.30 $\pm$ 0.21 |
| cosine | 82.95 $\pm$ 0.18 | 46.98 $\pm$ 0.56 | 89.13 $\pm$ 0.18 |

**Road Traffic Inference on Singapore**

| Metric | MAE $\downarrow$ | RMSE $\downarrow$ | MAPE $\downarrow$ |
| ------ | --------------- | --------------- | ----------------- |
| norm | 2.80 $\pm$ 0.03 | 3.52 $\pm$ 0.04 | 0.579 $\pm$ 0.030 |
| cosine | 2.82 $\pm$ 0.02 | 3.54 $\pm$ 0.03 | 0.585 $\pm$ 0.024 |

**Road Traffic Inference on NYC**

| Metric | MAE $\downarrow$ | RMSE $\downarrow$ | MAPE $\downarrow$ |
| ------ | --------------- | --------------- | ----------------- |
| norm | 3.30 $\pm$ 0.02 | 4.40 $\pm$ 0.03 | 0.207 $\pm$ 0.002 |
| cosine | 3.32 $\pm$ 0.02 | 4.44 $\pm$ 0.03 | 0.208 $\pm$ 0.002 |

3. Sorry for the confusion. Following the conventions in graph learning, the subscript of $\boldsymbol{L}\_{\boldsymbol{S}}$ indicates that this is a graph Laplacian matrix induced by some adjacency matrix $\boldsymbol{S}$, which is similar to $\boldsymbol{D}$, the degree matrix. 
In particular, $\boldsymbol{D}\_{\boldsymbol{S}}$ is the degree matrix of a graph with adjacency matrix $\boldsymbol{S}$, i.e., $(\boldsymbol{D}\_{\boldsymbol{S}})\_{i, i} := \sum_{j=1}^{n} \boldsymbol{S}\_{i, j}$, and $\boldsymbol{L}\_{\boldsymbol{S}} := \boldsymbol{D}\_{\boldsymbol{S}} - \boldsymbol{S}$. $\boldsymbol{D}\_{\mathcal{K}}$ and $\boldsymbol{L}\_{\mathcal{K}}$ are the degree matrix and graph Laplacian matrix of the complete graph $\mathcal{K}$ respectively. Equation 7 is listed as follows: $$ usc\_{\mathcal{G}} = \min\_{\boldsymbol{x} \in \\{0, 1\\}^n - \\{\boldsymbol{0}, \boldsymbol{1}\\}} \frac{\sum_{(i, j) \in \mathcal{E}} (\boldsymbol{x}\_i - \boldsymbol{x}\_j)^2}{\sum_{(i, j)}(\boldsymbol{x}\_i - \boldsymbol{x}\_j)^2} = \min\_{\boldsymbol{x} \in \\{0, 1\\}^n - \\{\boldsymbol{0}, \boldsymbol{1}\\}} \frac{\boldsymbol{x}^T \boldsymbol{L}\_{\mathcal{G}} \boldsymbol{x}}{ \boldsymbol{x}^{T} \boldsymbol{L}\_{\mathcal{K}} \boldsymbol{x}}, $$ where $\boldsymbol{x}$ is an $n$-dimensional vector and $\boldsymbol{x}\_i$ is the $i$-th element of vector $\boldsymbol{x}$. The numerator sums over the edges $\mathcal{E}$ of $\mathcal{G}$, while the denominator sums over all node pairs, i.e., the edges of the complete graph $\mathcal{K}$. For all $i$, $\boldsymbol{x}\_i$ is either $0$ or $1$, but $\boldsymbol{x}$ cannot be the all-$0$s or all-$1$s vector. These constraints are written as $\boldsymbol{x} \in \\{0, 1\\}^n - \\{\boldsymbol{0}, \boldsymbol{1}\\}$. The semantic meaning of this equation is that the sparsest cut problem partitions nodes into two subsets, and $\\{0, 1\\}$ denotes which subset a node belongs to. --- Rebuttal Comment 1.1: Title: I will keep my score Comment: I thank the authors for their further explanation and I would like to keep my score. --- Rebuttal 2: Title: [Rebuttal by Authors - 2] Comment: Response to questions: 1. (Please refer to response to weakness 1.) 2. (Please refer to response to weakness 2.) 3. Sorry for the confusion in the draft. 
Our goal in Section 4.5 is to **design** the negative sample $\bar{\mathcal{G}}^{[1]}$ according to Equation 8 for the mutual information estimator in Equations 4 & 5, where the positive sample is $\mathcal{G}^{[1]}$. We drew inspiration for the design of the negative sample from minimizing the sparsest cut problem (Equation 8), rather than strictly minimizing Equation 8. In particular, we find that in the sparsest cut problem $$ \min\_{\boldsymbol{Z} \in \mathbb{R}^{n \times f}} {\frac{\operatorname{tr} (\boldsymbol{Z}^{T} \boldsymbol{L}\_{\boldsymbol{S}} \boldsymbol{Z})}{\operatorname{tr} (\boldsymbol{Z}^{T} \boldsymbol{L}\_{\mathcal{K}} \boldsymbol{Z})}}, $$ the complete graph $\mathcal{K}$ is a very good negative sample if we aim to minimize the numerator. Recall that in Section 4.3, leveraging SGC (simple graph convolution) as the graph encoder can minimize the numerator, and thus the complete graph $\mathcal{K}$ is a very good negative sample. Maximizing the denominator $\operatorname{tr} (\boldsymbol{Z}^{T} \boldsymbol{L}\_{\mathcal{K}} \boldsymbol{Z})$ is non-trivial, while minimizing the denominator is easy to achieve through another SGC encoder. Minimizing the denominator is the opposite of what we want, so we use it as the negative sample. Response to Limitations: Thanks for your suggestion. We would like to include the following two paragraphs in the final version of the paper. This paper is based on the First Law of Geography and the Third Law of Geography. Though the two laws are generally true, the method in this paper may fail where the two laws are not applicable. For example, the First Law may fail in extremely large areas or with limited data [1]. Also, we assume that street view images are good proxies to describe the geographic configurations of roads. The potential negative societal impacts include: (1) Our method requires street view images (SVIs) along roads. 
However, SVIs may not be up to date, and thus our method may provide outdated information. Also, SVIs cannot capture day-to-day changes in a city; (2) our method currently does not consider adversarial attacks on the data, and thus may provide incorrect information for downstream tasks if it is attacked. --- [1] Spatial prediction based on Third Law of Geography. Annals of GIS, 2018. --- Rebuttal 3: Title: Additional Questions Comment: Dear Reviewer UWD1, Thanks very much for providing the constructive and motivating feedback! Can you please let us know whether we have addressed all your questions and whether you have any additional feedback? Thank you!
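The degree matrix and Laplacian notation clarified in this rebuttal can be made concrete with a tiny example (a sketch; the 3-node path graph is hypothetical): $(\boldsymbol{D}_{\boldsymbol{S}})_{ii} = \sum_j \boldsymbol{S}_{ij}$ and $\boldsymbol{L}_{\boldsymbol{S}} = \boldsymbol{D}_{\boldsymbol{S}} - \boldsymbol{S}$.

```python
import numpy as np

def laplacian(S: np.ndarray):
    """Graph Laplacian induced by adjacency matrix S:
    (D_S)_{ii} = sum_j S_{ij}  and  L_S = D_S - S."""
    D = np.diag(S.sum(axis=1))
    return D - S

# 3-node path graph: 0 -- 1 -- 2
S = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = laplacian(S)
# row sums of a Laplacian are zero; diag(L) holds the node degrees
```

For the complete graph $\mathcal{K}$ one would pass an all-ones adjacency with a zero diagonal, recovering $\boldsymbol{L}_{\mathcal{K}}$ as used in Equation 7.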
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all the reviewers for their careful review and constructive comments. Here, we briefly summarize our response to all the comments. 1. A case study can be found in the uploaded PDF file. 2. More ablation studies and concrete examples have been provided to demonstrate the critical role of the Third law of Geography in road network representation learning and clarify the motivations of the proposed approach. 3. As suggested by the reviewers, more clarifications on the modules of the proposed methods have been provided to make the proposed method easy to understand. In addition, sensitivity tests and more ablation studies have been conducted to show the peculiarities of the proposed approach. 4. Experimental settings are clarified to make the experiments in this work convincing. 5. As suggested by the reviewers, we have conducted discussions to distinguish the proposed approach from other related works, thus enhancing the novelty and contributions of the paper. 6. How we will improve the paper according to all the reviewers' suggestions has been illustrated. Again, we thank all the reviewers, and we will carefully revise the paper following all the suggestions raised. Pdf: /pdf/e34ae7b9187ac820bebf64f4b3872c82e04d2834.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a novel framework, termed Garner, for road network representation learning that leverages both the Third Law of Geography and the First Law of Geography. It emphasizes both spatial proximity and geographic configuration similarity. Street view images are used to capture similarities in road surroundings by geographic configuration-aware graph augmentation, spectral negative sampling, and a dual contrastive learning objective. Experiments demonstrate that integrating the Third Law significantly improves performance in downstream tasks like road function prediction, traffic inference, and visual road retrieval. Strengths: S1: Street view images from OpenStreetMap are used as proxies for geographic configurations. S2: Considering the reverse implication of the Third Law: roads with dissimilar configurations should have dissimilar representations. S3: The proposed method is validated on Singapore and New York City road networks with street view images. Weaknesses: W1: The novelty of this paper is limited, as geographic configuration has been modeled in many studies on urban computing. W2: The choice of the loss, i.e., to maximize the MI between the original graph $G^{[0]}$ and the augmented graph $G^{[1]}$, is not well justified and is difficult for me to understand. W3: Several key factors, such as points of interest (POIs), are not taken into consideration. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1: Why maximize the MI between the original graph $G^{[0]}$ and the augmented graph $G^{[1]}$? The two graphs have different semantics. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors would like to thank the reviewer for providing constructive comments. We have made the following clarifications to address your concerns. Response to the Weaknesses: 1. The novelty of our paper is listed as follows: - We introduce the **Third Law of Geography**, a fundamental principle in geographical sciences, which has not been explored in previous studies in road network representation learning. - We design **novel graph contrastive learning** techniques, i.e., **geographic configuration-aware graph augmentation** and **spectral negative sampling**, where spectral negative sampling is inspired by the sparsest cut problem and spectral graph sparsification in spectral graph theory. - According to [1], the definition of "geographic configuration" is "the makeup and the structure of geographic variables over some spatial **neighborhood** around a point." Thus, the term "geographic configuration" requires information from both a spatial entity and its **neighborhood**. To the best of our knowledge, all of the above have not been explored in the literature of geospatial entity representation learning. We hope this clarification makes our novelty much easier to follow. 2. The original graph $\mathcal{G}^{[0]}$ and the augmented graph $\mathcal{G}^{[1]}$ contain different information from different views. Our goal is to fuse information from both views and generate better output representations of road segments. In particular, the augmented graph $\mathcal{G}^{[1]}$ is constructed according to the Third Law of Geography. As written in our paper, "Previous studies show that information from different views can be fused properly by maximizing their mutual information (MI) [2,3], which can also improve the quality of representations [3]." 
Inspired by these, we maximize the MI between the original graph $\mathcal{G}^{[0]}$ and the augmented graph $\mathcal{G}^{[1]}$ so as to allow the output representation to learn from the view constructed according to the Third Law of Geography. 3. Thanks for your suggestions. We do not include POIs in the paper for the following reasons: - **Focus on the Third Law of Geography**: Our paper focuses on the Third Law of Geography. In our paper, street view images (SVIs) can already model geographic configurations and the Third Law very well. Thus, SVIs are sufficient to illustrate our concepts and achieve our research objectives. - **Distribution and Coverage of POIs**: We would like to argue that POIs are currently not appropriate for our problem. A major observed issue is that POIs are unevenly distributed in cities [10], and a large portion of road segments do not have nearby POIs. In contrast, SVIs cover almost every road in a city, and we can sample SVIs for each road segment. - **Commercial Bias of POIs**: Another issue is that POIs are provided by Internet Web Maps for map search, and thus their contents are mainly for commercial functions (e.g., restaurants), leading to poor performance in non-commercial regions such as industrial or residential areas [10, 11]. However, there are many roads outside commercial regions where POIs may not provide sufficient information for subsequent analytical tasks. In contrast, SVIs can provide meaningful information across diverse regions. - **Future Work**: We acknowledge that it is very interesting to identify an appropriate way to consider POIs in Garner, and we will try our best to achieve this in the future. --- [1] Spatial prediction based on Third Law of Geography. Annals of GIS, 2018. [2] Learning representations by maximizing mutual information across views. In NeurIPS. [3] Contrastive multi-view representation learning on graphs. In ICML. [4] Graph convolutional networks for road networks. 
In SIGSpatial 2019. [5] On representation learning for road networks. ACM Trans. Intell. Syst. Technol 2021. [6] Spatial structure-aware road network embedding via graph contrastive learning. In EDBT 2023. [7] Robust road network representation learning: When traffic patterns meet traveling semantics. In CIKM 2021 [8] Jointly contrastive representation learning on road network and trajectory. In CIKM 2022 [9] Learning effective road network representation with hierarchical graph neural networks. In KDD 2020 [10] Urban2vec: Incorporating street view imagery and pois for multi-modal urban neighborhood embedding. In AAAI 2020 [11] Knowledge-infused Contrastive Learning for Urban Imagery-based Socioeconomic Prediction. In WWW 2023 --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: I agree with you on the use of POIs, street view images are sufficient for the illustration of your idea. However, it's not entirely reasonable for me to say that maximizing MI aims at fusion. Maximizing MI retains only the shared information between the two graphs. In other words, the geographic information you expected may be lost during this. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Dear Reviewer MjEv, Thank you for your thoughtful feedback and for allowing us the opportunity to address your concerns. We are pleased to have resolved some of the issues you raised and appreciate your constructive comments. We agree with your opinion that maximizing mutual information (MI) is appropriate for fusing views that share information. We would like to provide further clarification on the fusion of the two graphs in our approach: 1. **Shared Information Between Graphs**: Both graphs, $\mathcal{G}^{[0]}$ and $\mathcal{G}^{[1]}$, have shared information of geographic configurations as the same node features. As described in Section 4.1, we prepare their node features as concatenation of projected road features and projected street view image embeddings. 
Specifically, the node features in both graphs are $\boldsymbol{H}^{[0]} = \text{concat}([\boldsymbol{C} \boldsymbol{W}_c, \boldsymbol{X} \boldsymbol{W}_x])$, where $\boldsymbol{C}$ is the matrix of geographic configurations, $\boldsymbol{X}$ is the matrix of road features (from map data), and $\boldsymbol{W}_c$ and $\boldsymbol{W}_x$ are learnable projection matrices. This "joint representation" is a widely used method in multi-modality and multi-view fusion [1]. 2. **Differences between Graphs**: The only difference between the original graph $\mathcal{G}^{[0]}$ and the augmented graph $\mathcal{G}^{[1]}$ is their edge sets, which provide different views of the underlying road segments. Specifically, edges in $\mathcal{G}^{[0]}$ describe the connectivity of roads in the map, while edges in $\mathcal{G}^{[1]}$ connect roads with very similar geographic configurations. By maximizing the mutual information between them, the model can learn "high-level factors whose influence spans multiple views" [2], for example, the similarity of road segments considering the Third Law. Our ablation study also demonstrates that building the augmented graph $\mathcal{G}^{[1]}$ for contrastive learning significantly enhances the learning performance of our proposed method. Thank you again for your valuable feedback. [1] Multimodal Machine Learning: A Survey and Taxonomy. TPAMI. [2] Learning Representations by Maximizing Mutual Information Across Views. In NeurIPS 2019. --- Rebuttal 2: Title: Additional Questions Comment: Dear Reviewer MjEv, Thanks very much for providing the constructive and motivating feedback! Can you please let us know whether we have addressed all your questions and whether you have any additional feedback? Thank you! --- Rebuttal Comment 2.1: Comment: I raised my score to 4: Borderline reject since my concerns are partially addressed. --- Reply to Comment 2.1.1: Comment: Dear Reviewer MjEv, Thanks for your reply. 
We also agree that careful designs are required to keep both shared information and exclusive information, and we have achieved this in our method. We would like to provide both clarifications and experimental results as follows. 1. As mentioned in [1], models can learn both shared and exclusive information via mutual information by considering both global and local mutual information. 2. The loss in our method is one of the most widely used MI estimators, the Jensen-Shannon MI estimator [2, 3], which considers global-local mutual information by maximizing the mutual information between a patch of an image (a local high-level representation) and the image itself (the high-level "global" representation). 3. In the field of **graph contrastive learning**, generating an augmented graph and using mutual information maximization to **fuse** different views of graphs, keep both shared and exclusive information, and enhance the representation is a widely adopted approach [4, 5, 6]. Our design is also based on this literature to fuse different views. Besides, the experimental results in 4-6 also indicate the effectiveness of the fusion and of keeping both shared and exclusive information (i.e., geographic configurations). 4. In our **ablation study**, lines 2 and 3 in Table 5 show that maximizing the mutual information between the original graph $\mathcal{G}^{[0]}$ and the augmented graph $\mathcal{G}^{[1]}$ to fuse different views and keep exclusive information (i.e., geographic configurations) can significantly improve the learning performance. In particular, we achieve up to 25% improvement. 5. We have conducted a downstream task named **visual road retrieval** (Table 4 in the paper), aiming at finding which road a street view image belongs to. 
In this experiment, our method, which uses SVIs as part of the input and models the Third Law of Geography, performs significantly better than the baselines, which do not consider geographic configurations. The experimental results indicate that the representation learned a lot from the geographic configurations (exclusive information). 6. We have conducted a **case study**, which is included in the attached PDF under "Author Rebuttal by Authors". In the case study, we randomly selected an anchor road, computed its representation similarity with other roads and displayed the top 10 most similar roads. By considering the third law in our method, similar roads are farther but have similar geographic configurations. This also reveals that our method can keep exclusive information of geographic configurations. --- [1] Learning disentangled representations via mutual information estimation. In ECCV. [2] Learning deep representations by mutual information estimation and maximization. In ICLR. [3] Deep Graph Infomax. In ICLR. [4] Contrastive Multi-View Representation Learning on Graphs. In ICML 2020. [5] Graph Contrastive Learning with Adaptive Augmentation. In WWW 2021. [6] Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning. In IJCAI.
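The joint node-feature construction $\boldsymbol{H}^{[0]} = \text{concat}([\boldsymbol{C} \boldsymbol{W}_c, \boldsymbol{X} \boldsymbol{W}_x])$ discussed in this thread can be sketched as follows. All dimensions are hypothetical, and random matrices stand in for the learned projections.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                           # number of road segments (hypothetical)
C = rng.normal(size=(n, 512))     # geographic configurations, e.g. pooled SVI embeddings
X = rng.normal(size=(n, 16))      # road features from map data
W_c = rng.normal(size=(512, 64))  # learnable projections (random stand-ins here)
W_x = rng.normal(size=(16, 64))

# shared node features used by both G^[0] and G^[1]
H0 = np.concatenate([C @ W_c, X @ W_x], axis=1)  # shape (n, 128)
```

Because both graphs receive the same $\boldsymbol{H}^{[0]}$ and differ only in their edges, the shared information between the two contrastive views lives in these node features.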
null
null
null
null
null
null
Wormhole Loss for Partial Shape Matching
Accept (poster)
Summary: The paper proposes an unsupervised optimization for partial shape matching between (near-)isometric shapes, relying on distance similarity. Defining "consistent pairs" as pairs of points that present a similar geodesic distance on the full and the partial shapes, this work relaxes this definition by assuming that the path on the partial shape can traverse holes with straight lines. The paper shows results on CUTS and HOLES on par with the state of the art and a slight improvement on PFAUST-M and PFAUST-H. Strengths: - The paper addresses a relevant and challenging problem: solving for partial shape correspondence in an unsupervised setting is a difficult task, and the shape matching community is active on this. - The method shows interesting applicability in MDS embeddings, although the paper does not evaluate this aspect numerically. Weaknesses: - The proposed principle seems sound but is also restricted to simple cases. The method shows better results on HOLES, where the assumption holds, while for CUTS, it seems to suffer. My intuition is that CUTS contains a large missing part, and this cannot be well approximated by straight lines on the boundary. - Some aspects of the presentation are not satisfactory. I appreciate the effort in grounding the formulation in a more systematic methodological explanation, but at the moment, it seems cluttered. For example, I would say that Theorem 1 and its demonstration are unnecessary. Visualizations are not good for black-and-white printing. Figure 6 misses a ground truth to assess the quality of the final result. - The paper and its limitations do not offer a discussion nor analysis of the limits of the proposed strategy in terms of failure cases. A more in-depth analysis of the impact of partiality magnitude and its nature on the capability of finding guaranteed pairs would be beneficial and insightful for future work. 
Technical Quality: 3 Clarity: 2 Questions for Authors: 1) The paper analyzes the cases in which the partial shape is still a single component. How does the method perform in the presence of disconnected components? 2) How computationally expensive is the method? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors describe some limitations regarding future works but do not discuss failure cases of the proposed strategy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and for acknowledging the relevance and challenge of addressing partial shape correspondence in an unsupervised setting, as well as the applicability of our method to MDS embeddings. We will add the ground truths to Figure 6. We hereby respond to the reviewer's remarks and questions. **Clarification:** Regarding the reviewer's summary, it seems that there might have been some confusion and mixing of concepts that we would like to address. As defined in the paper, consistent pairs are pairs of points for which the shortest path on the partial shape has exactly the same length as the shortest path on the full shape. The lengths can differ only if the shortest path on the full shape passes through parts that are missing in the partial shape. We design a criterion that finds as many consistent pairs as possible. All pairs guaranteed by this criterion are definitely consistent. Our criterion is based on a lower-bound estimation of the unknown geodesic distance on the full shape. It is realized by considering Euclidean distances between boundary points of the partial shape as potential shortcuts. ### CUTS Regarding performance on CUTS, the reviewer suggests that the proposed criterion is restricted to simple cases and would not apply to missing parts like those in CUTS due to their large size. We respectfully disagree with these two claims, as explained below. - Regarding simplicity, "The missing parts in the CUTS dataset were created by slicing figures with 3D planes. In practice, this procedure does not modify much the topology of the surface and it does not introduce many inconsistent pairs. As such, cuts mostly preserve all geodesic distances, thus, our novel loss function was almost identical to the one in Equation (4)" (lines 256-258). 
The implication is that cuts provide too simple, rather than too complex, partial shapes, where almost all pairs are consistent and found by our criterion, which explains why our method does not lead to significant improvement there. - Regarding size, it is the shape of the missing parts, not their size, that dictates the complexity of the partial shape. For example, cutting off just a finger or both legs of a human shape leads to simple partial shapes with only consistent pairs, but with very different sizes of missing parts. In contrast, holes of various sizes present significantly harder topologies with many inconsistent pairs. In fact, our method leads to significant improvements when handling missing parts with holes by filtering out inconsistent pairs, as shown on the HOLES, PFAUST-M and PFAUST-H datasets. ### Failure cases Regarding failure cases of our criterion: our binary criterion cannot guarantee inconsistent pairs, as proven in Theorem 2. Regarding failure cases of our method, in all our experiments except on CUTS, our criterion managed to find a significant amount of consistent pairs, leading to improved performance in both shape matching and manifold flattening. On CUTS, our criterion does not fail; it is simply less relevant since almost all pairs are consistent, and we discuss this issue in the results section. At the other end, we do not claim that our regularized approach is guaranteed to always succeed. ### Partiality magnitude and amount of missed consistent pairs The reviewer suggests analysing the impact of partiality magnitude on the ability to find consistent pairs. This question is interesting; however, we would like to emphasize that the amount of missing parts is not the main factor impacting consistent pairs, but rather the topology of the missing parts. See our earlier discussion on CUTS for an example of a simple partial shape with large missing parts. 
Nevertheless, during the rebuttal period, we evaluated the gap between the total amount of consistent pairs and that found by different criteria on both the PFAUST-M and PFAUST-H datasets, where PFAUST-H is harder since it has many more holes (of smaller size). See the table below, which we will include in the paper, presenting the percentage of consistent pairs and, among those, the percentage of guaranteed pairs by the different criteria, along with the standard deviations. ||%Consistent|%Guaranteed ($\mathcal{C}_\mathcal{T}$) [42]|%Guaranteed ($\mathcal{C}_\mathcal{W}$) (OURS)| |-|:-:|:-:|:-:| |PFAUST-M|78 (+-16)|48 (+-18)|82 (+-14)| |PFAUST-H|53 (+-16)|30 (+-18)|65 (+-18)| ### Several connected components Regarding several connected components, it is standard practice in partial shape matching to consider only a single connected component. By definition, consistent pairs can only belong to the same component. Our pipeline would simply generalise by computing the features of each component independently and then concatenating them before estimating the matching, just like other pipelines would. ### Complexity Regarding the complexity of our method, we would like to point out that we have discussed the computational complexity of the method in Appendix B.2. Note that this complexity occurs only during preprocessing; afterwards, our method does not induce any noticeable additional cost. --- Rebuttal Comment 1.1: Title: Post-Rebuttal Comment: I thank the authors for their clarifications and discussion. I appreciate their insights on CUTS, and I see that for the considered algorithm, it is probably less challenging than HOLES since it should generally preserve more of the geodesic distances of the remaining part. Concerning the failure cases, I still find the discussion a bit confusing. Since "we do not claim that our regularized approach is always guaranteed to succeed," I would be curious to know more precisely which cases do not succeed. 
I see the complexity has been analyzed in Appendix B.2; it would also be interesting to know the wall-clock time of an inference. What is the required inference time for evaluating the HOLES test set compared to the DirectMatch baseline? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the efforts and are happy to have clarified the issue with the CUTS dataset. We here answer all remaining questions. ### Failure cases Failure cases of the regularized approach may happen when relaxing the binary criterion into a soft one, as discussed in our rebuttal to reviewer 1VSx, who noted that this regularization indeed "involves a trade-off between retaining slightly noisy information and maintaining the integrity of the matching process". As such, there might be partial shapes with extremely complex topologies where this compromise would not be beneficial. Nevertheless, we did not find such failure cases in the data we worked with. Note that, as with all methods in this field, we do not have a theoretical proof of general optimality. We thus do not claim to have the final word in this fascinating and challenging field. Yet, currently, for the specific task of matching a part to a whole, our empirical evidence indicates that the proposed measure improves current SoTA results by a significant notch. ### Inference time Since the learning architecture we tested is the one proposed in DirectMatchNet, our test-time pipeline is the same, yielding similar inference times. Specifically, inference time is 0.12s per shape for both the proposed method and DirectMatchNet on the HOLES dataset. We will add these run times to the paper.
Summary: The paper investigates the problem of shape matching in an unsupervised and partial setting. In particular, the paper extends an existing criterion for filtering out potentially inconsistent point pairs. This is done by simply including extrinsic information that bounds from below the distances between pairs of boundary points, resulting in a new criterion that preserves more consistent pairs. The experiments are conducted for applications including multi-dimensional scaling and partial shape matching, in which the advantage of the proposed loss is demonstrated. Strengths: The paper is in general well-motivated, and the idea is straightforward to follow. Also, the presentation is clear, with sufficient figures to help understanding. The experiments include different applications. Weaknesses: In general, the paper is well-written, explores a simple yet effective idea, and proves its effectiveness via experiments. However, the main weakness of this paper comes from the lack of theoretical depth. The proposed method is a modification of an existing idea, whereas new insights are limited. This is also discussed in the Limitations of the paper: there are more consistent pairs that could be recovered, yet we have no idea how large the gap is. That being said, the contribution of the paper is still clear. Technical Quality: 3 Clarity: 3 Questions for Authors: I have no particular critical question. But I am curious, how large is the gap between the preserved consistent pairs and all possibly existing consistent pairs? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations have been adequately discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and their appreciation of the paper's clear contribution and the soundness and clear presentation of our framework. In the following, we address the reviewer's remarks and questions. ### Theoretical depth Regarding the theoretical depth of our paper, we respect the reviewer's perspective. However, we would respectfully argue otherwise. The proposed criterion, which is indeed a simple deviation from a naive existing one, is less trivial than it might seem at first glance. Most existing papers in shape analysis, and in particular in shape matching, focus on using intrinsic properties, such as geodesic distances, LBO eigendecompositions, functional maps, and interactions between non-regular metric spaces. Another philosophy in shape analysis, albeit less popular, is to consider only extrinsic properties, e.g. Wang et al. 2018, but such approaches are ill-suited for problems based on intrinsic properties like shape matching. In our case, the proposed approach combines intrinsic and extrinsic non-differential quantities. In fact, we invoke an extrinsic measure (embedding distances in a Euclidean space) to analyze intrinsic quantities (geodesic distances). While perhaps simple in appearance, such a perspective is far from trivial. In addition, extrinsic Euclidean distances are often ignored in the study of shapes undergoing non-rigid transformations. Finally, our criterion yields theoretically guaranteed pairs (Theorem 2), which is proven in the appendix. Bottom line: it is not a heuristic criterion, but rather a theoretically supported, and apparently practically useful, one. Providing a simple approach that integrates intrinsic and extrinsic integral measures, introduces the often ignored extrinsic Euclidean distances, and is well supported by a solid mathematical theory is a fundamental contribution. Moreover, it leads to enhanced empirical performance. 
That being said, we do not provide a theoretical guarantee on the measure of consistent pairs our criterion can find, and we are clear on that matter in the limitations paragraph. The same is true for other criteria, e.g. $\mathcal{C}\_\mathcal{T}$. Nevertheless, the proposed criterion is theoretically guaranteed to recover all the pairs that the existing criterion $\mathcal{C}\_\mathcal{T}$ finds. We also show in Figure 4 a qualitative difference between the consistent pairs both criteria guarantee. Y. Wang et al. Steklov spectral geometry for extrinsic shape analysis. ACM TOG, 2018. ### Amount of missed consistent pairs Regarding the gap between the number of guaranteed pairs via different criteria and the total number of consistent pairs, we conducted experiments during the rebuttal period to estimate it. We evaluated on the PFAUST-H and PFAUST-M datasets the amount of consistent pairs, pairs guaranteed by our criterion $\mathcal{C}\_\mathcal{W}$, and pairs guaranteed by $\mathcal{C}_\mathcal{T}$. We provide in the table below the average percentage of consistent pairs and, among these, the average percentage of pairs guaranteed by each criterion. We also provide the standard deviations in parentheses. ||%Consistent|%Guaranteed ($\mathcal{C}_\mathcal{T}$) [42]|%Guaranteed ($\mathcal{C}_\mathcal{W}$) (OURS)| |-|:-:|:-:|:-:| |PFAUST-M|78 (+-16)|48 (+-18)|82 (+-14)| |PFAUST-H|53 (+-16)|30 (+-18)|65 (+-18)| We will add this evaluation table to the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! I have no further questions.
Summary: The paper presents a novel criterion for identifying consistent pairs based on geodesic distances between points on partial and full surfaces. This new loss function utilizes intrinsic geodesic distances, extrinsic distances between boundary points, and a consistency criterion to enhance the performance of partial shape matching. The authors validate the improvements in the loss function both theoretically and through experimental results. Strengths: - The focus on partial shape matching and unsupervised shape matching for partial to full shapes is intriguing. - Both quantitative and qualitative evaluations demonstrate significant improvements over the state of the art. Weaknesses: - The method fully relies on the Euclidean embedding space for boundary distances, which might not always reflect true geodesic distances on the surface, especially in highly non-Euclidean or complex topologies. - The authors could enhance evaluations by using other correspondence techniques, instead of only using DiffusionNet features, and demonstrate broader generalization capabilities with other signatures. Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the criterion perform on significantly larger datasets with more complex topologies? It would be interesting to integrate a control measure to analyze the effect of filtering inconsistent pairs. - Regularizing the binary mask into a soft mask M* involves a trade-off between retaining slightly noisy information and maintaining the integrity of the matching process. How is the threshold for the soft mask determined, and how sensitive is the model's performance to this threshold? - Are there specific scenarios where this regularization might fail? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The method might fail in the presence of topological variations, leading to incorrect correspondences if the geodesic paths on the partial surface are substantially different from those on the full surface. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and for the appreciation of the strengths of our method when applied to the challenging partial shape matching problem, and of our significant improvements for this task. We hereby respond to the reviewer's remarks and questions. ### Reliance on the Euclidean embedding space Regarding the Euclidean term in our criterion, the Euclidean embedding space yields a lower bound on geodesic distances between boundary points. Naturally, this bound is less tight the more the surface is curved. Nevertheless, we demonstrate, in both shape matching experiments with highly non-rigid transformations of shapes with complex partial topologies and in flattening highly curved embeddings of the Swiss roll, that our criterion is greatly beneficial in non-Euclidean settings. During the rebuttal period, we also ran new experiments to evaluate the amount of consistent pairs recovered by our criterion on the PFAUST benchmark and found that we are able to recover most pairs and find about twice as many as the previous criterion, which does not rely on Euclidean distances. See the table below, which we will include in the paper, presenting the percentage of consistent pairs and, among those, the percentage of guaranteed pairs by the different criteria, along with the standard deviations. ||%Consistent|%Guaranteed ($\mathcal{C}_\mathcal{T}$) [42]|%Guaranteed ($\mathcal{C}_\mathcal{W}$) (OURS)| |-|:-:|:-:|:-:| |PFAUST-M|78 (+-16)|48 (+-18)|82 (+-14)| |PFAUST-H|53 (+-16)|30 (+-18)|65 (+-18)| ### Generalization Regarding the generalization capabilities of our method, we showed the success of our criterion in two completely different tasks: partial shape matching and MDS flattening of partial surfaces. In the learning pipeline, we used DiffusionNet as it is currently the prominent network for shape correspondence. Nowadays, SOTA methods rely on DiffusionNet, and we show that we can improve them even further with our criterion. 
However, our method is complementary to DiffusionNet, and we could use the proposed criterion to train learnable feature extractors that would potentially replace DiffusionNet in the future. ### Larger and more complex datasets Regarding the question on the performance of our method on larger datasets with more complex topologies, we evaluated our method on the reference datasets for partial shape matching. We not only evaluated on SHREC'16, the leading benchmark in the field, but also tested on the recent benchmark PFAUST to show that our method generalises to other types of shapes with different types of shape partiality. Note that both benchmarks comprise complex topologies due to the way parts were removed. We did not find a larger dataset with more complex topologies for partial shape matching of surfaces undergoing non-rigid transformations. ### Control measures Regarding control measures for analysing the effect of filtering inconsistent pairs, we point to Table 3 in our ablation study. There, we show how performance changes if we take either a binary or a soft criterion, based either on our wormhole criterion $\mathcal{C}\_\mathcal{W}$ or on $\mathcal{C}\_\mathcal{T}$. In addition, our approach is built on top of DirectMatchNet. Thus, the comparative performance analysis in Tables 1 and 2 shows the effect of including our criterion. ### Soft mask matrix and thresholding Regarding the soft mask $\boldsymbol{M}^s$, there is actually no threshold parameter in its calculation. It is given directly from the distance matrix $\boldsymbol{D}$ and the $\boldsymbol{K}$ matrix by $\boldsymbol{M}\_{ij}^s = \min(\frac{\boldsymbol{K}\_{ij}}{\boldsymbol{D}\_{ij}},1)$, where $\boldsymbol{K}$ is given by the criterion $$\boldsymbol{K}\_{ij} = \min\limits_{B_1,B_2\in \mathcal{B}} d_{\mathcal{Y}'}(v_i, B_1) + d_{\mathcal{Y}'}(v_j, B_2) + d_E(B_1, B_2),$$ where $v_i$ and $v_j$ are the vertices at indices $i$ and $j$, respectively. 
We will clarify this by adding the definition of $\boldsymbol{K}$. Also, since we want the soft mask matrix $\boldsymbol{M}\_{ij}^s$ to be 1 and equal to the binary mask matrix $\boldsymbol{M}\_{ij}$ for consistent pairs (when $\boldsymbol{D}\_{ij}\le \boldsymbol{K}_{ij}$), we cut off the ratio $\frac{\boldsymbol{K}\_{ij}}{\boldsymbol{D}\_{ij}}$ at $1$ in the definition of the soft mask matrix. ### Failure cases Regarding failure cases, our binary criterion cannot guarantee inconsistent pairs, as proven in Theorem 2. Regarding failure cases due to the regularization, i.e., cases where switching from the binary to the soft mask would yield worse results, we did not find such scenarios. Nevertheless, we do not claim that our regularized approach would never fail in unseen challenging contexts. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Dear authors, Thank you for submitting your rebuttal and including the additional experiment. I have reviewed the rebuttal and found that my questions have been addressed. 1VSx
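The threshold matrix $\boldsymbol{K}$ and the binary/soft masks discussed in this rebuttal thread admit a compact vectorized computation. Below is a minimal NumPy sketch, not from the paper: the function name `wormhole_masks` and all variable names are hypothetical, and it assumes precomputed geodesic distances on the partial shape (`D`), vertex-to-boundary geodesic distances (`G`), and pairwise Euclidean distances between boundary points (`E`).

```python
import numpy as np

def wormhole_masks(D, G, E):
    """Sketch of the wormhole criterion (hypothetical names).

    D : (n, n) geodesic distances on the partial shape Y'
    G : (n, b) geodesic distances from each vertex to each boundary point
    E : (b, b) Euclidean distances between boundary points (the "wormholes")

    Returns the threshold matrix K, the binary mask M (pairs guaranteed
    consistent, D_ij <= K_ij), and the soft mask M_s = min(K / D, 1).
    """
    # K_ij = min over boundary pairs (B1, B2) of
    #        d(v_i, B1) + d_E(B1, B2) + d(v_j, B2)
    T = (G[:, :, None] + E[None, :, :]).min(axis=1)   # (n, b): cheapest route i -> B2
    K = (T[:, None, :] + G[None, :, :]).min(axis=2)   # (n, n): add the leg B2 -> j

    M = (D <= K).astype(float)                        # binary criterion
    with np.errstate(divide="ignore", invalid="ignore"):
        M_s = np.minimum(np.where(D > 0, K / D, 1.0), 1.0)  # soft mask, ratio cut off at 1
    return K, M, M_s
```

For each pair $(i,j)$, `K[i, j]` is the cheapest detour through two boundary points with a Euclidean "wormhole" in between; as in the rebuttal, the soft mask needs no extra threshold since the ratio is simply clipped at 1.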
Summary: This paper introduces a refined criterion for detecting pairs of points whose geodesic distance on a partial surface is equal to that on the full surface. The "wormhole" criterion is sound and less conservative than previous such criteria. The refined criterion improves results in planar embedding with multidimensional scaling (MDS) as well as partial-to-whole shape matching. Strengths: This paper is clear and well-written. It is focused on one core contribution, the wormhole criterion for guaranteed pairs of points. And the results on both MDS and partial matching validate the utility of the refined criterion. Weaknesses: ### Miscellaneous - This is confusing: "This condition naturally follows from the fact that the shortest path in the full surface $\mathcal{Y}$ is shorter than any path between the points passing through the boundary $\mathcal{B}$, and in particular, ones that pass through the closest boundary points" (137-139). I think what you mean is either that (1) in general, the length of a path between the points in the full surface that passes through the boundary is at least as long as the sum of their distances to the boundary; or that (2) for pairs of points that satisfy the condition $\mathcal{C}_{\mathcal{T}}$, the shortest path in the full surface is *no shorter* than the path in the partial surface. - $\mathbf{K}$ is only mentioned in passing on line 170 before being used in equation (6). It might be helpful to introduce more explicitly what you mean by the "threshold" matrix, though I think it is inferable from context. - Line 21: "Consequentially" should be "Consequently" Technical Quality: 4 Clarity: 3 Questions for Authors: - You say you could "easily generalize the discussion" to the case of both surfaces being partial (103). Would this just amount to using the intersection of the criteria on both sides? - The boundary to which distances are measured is viewed as the boundary of $\mathcal{Y}'$ excluding the boundary of $\mathcal{Y}$. 
Given a partial surface, how do you know which parts of its boundary are part of the boundary of the full surface? - Why is the loss function scaled by the vertex area twice in equation (5)? It might be helpful to state this as an integral in the smooth setting before discretizing. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and for recognizing the utility of the wormhole criterion in both MDS and partial-to-whole shape matching tasks. We are happy that the reviewer enjoyed the clarity, soundness, and contribution of our manuscript. We fixed the typos pointed out by the reviewer and added the definition of $\boldsymbol{K}$ to the paper, $\boldsymbol{K}\_{ij} = \min\limits_{B_1,B_2\in \mathcal{B}} d_{\mathcal{Y}'}(v_i, B_1) + d_{\mathcal{Y}'}(v_j, B_2) + d_E(B_1, B_2)$, where $v_i$ and $v_j$ are the $i$-th and $j$-th vertices. In the following, we respond to the reviewer's remarks and questions. ### Line 137 Regarding the confusion in lines (137-139), we will rewrite this sentence to avoid confusion by adopting a formulation similar to version (1) suggested by the reviewer. ### When both shapes are partial Regarding our theoretical discussion about consistent pairs, we focused on the case where only one surface is partial. In our experiments, we then used the found consistent pairs to guide the learning process for partial-to-whole shape matching. If both shapes are partial, the theory allows finding consistent pairs on each partial surface independently. In practice, though, we would need to find the shared consistent pairs between both partial surfaces should we want to learn the matching between them. This should indeed be done by taking some form of intersection of the found consistent pairs. Designing a partial-to-partial shape matching pipeline using the proposed criterion is left to future work (we will emphasize this in the Limitations section). ### Boundaries Regarding boundaries, in practice, we treat all boundaries equally. We will remove Line 133 "excluding the boundaries of $\mathcal{Y}$" and clarify in the paper that all boundaries are treated equally. ### Double area scaling Regarding the double area scaling in Equation (5), our loss function is based on the continuous loss presented by Aflalo et al. 
IJCV 2016, see also [11]. Following their formulations we introduce a loss for the correspondence function between $\mathcal{X}$ and $\mathcal{Y}'$, defined by $\tilde{p}:\mathcal{Y'}\times \mathcal{X} \rightarrow \mathbb{R}^+$ as, $$\int_\mathcal{Y'\times Y'} \left(\int_\mathcal{X\times X} d_\mathcal{X}(x_1, x_2) \tilde{p}(x_1, y'\_1)\tilde{p}(x_2, y'\_2) da_{x_1} da_{x_2} - d_\mathcal{Y'}(y'\_1, y'\_2)\right)^2 m(y'\_1,y'\_2) da_{y'\_1} da_{y'\_2} $$where $d_{\mathcal{X}}$ and $d_{\mathcal{Y}'}$ measure the distances between surface points, and $m:\mathcal{Y'}\times \mathcal{Y'} \rightarrow \\{0,1\\}$ is our binary masking function. We readily have the discrete version,$$\mathcal{L}(\boldsymbol{\tilde P}, \boldsymbol{D}\_\mathcal{X}, \boldsymbol{D}\_\mathcal{Y'},\boldsymbol{M})=\|\boldsymbol{M}\odot(\boldsymbol{\tilde P} \boldsymbol{A}\_\mathcal{X} \boldsymbol{D}\_\mathcal{X}\boldsymbol{A}\_\mathcal{X}\boldsymbol{\tilde P}^\top - \boldsymbol{D}\_\mathcal{Y'} )\|\_\mathcal{Y'Y'},$$ where $\|\boldsymbol{B}\|\_\mathcal{Y'Y'} = \text{trace}(\boldsymbol{B}^\top\boldsymbol{A}\_\mathcal{Y'}\boldsymbol{B}\boldsymbol{A}\_\mathcal{Y'})$. Similar to previous papers [21] we assume that our learned correspondence matrix implicitly contains the area matrix $\boldsymbol{A}\_\mathcal{X}$, thus, $\boldsymbol{P} = \boldsymbol{\tilde P} \boldsymbol{A}\_\mathcal{X}$. 
Consequently, our loss function is defined as follows,$$\mathcal{L}\_{\text{geo}}(\boldsymbol{P}, \boldsymbol{D}\_{\mathcal{X}}, \boldsymbol{D}\_{\mathcal{Y}'}, \boldsymbol{M})=\|\boldsymbol{M}\odot(\boldsymbol{PD}\_\mathcal{X}\boldsymbol{P}^\top - \boldsymbol{D}\_\mathcal{Y'}) \|\_\mathcal{Y'Y'}.$$For simplicity, we express it element-wise,$$\mathcal{L}\_{\text{geo}}(\boldsymbol{P}, \boldsymbol{D}\_{\mathcal{X}}, \boldsymbol{D}\_{\mathcal{Y}'}, \boldsymbol{M})=\sum_{ij} \boldsymbol{M}\_{ij} \boldsymbol{A}\_{\mathcal{Y}'ii} \boldsymbol{A}\_{\mathcal{Y}'jj} \left( (\boldsymbol{P}\boldsymbol{D}\_\mathcal{X}\boldsymbol{P}^\top - \boldsymbol{D}\_{\mathcal{Y}'})_{ij} \right)^2.$$We will add a derivation of the loss from its continuous to its discrete version. Y. Aflalo et al. Spectral generalized multidimensional scaling. IJCV, 2016 --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. I continue to think that the wormhole criterion will be of use to the community.
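The element-wise masked loss derived in the rebuttal above translates directly into vectorized code. Below is a minimal NumPy sketch under the definitions in this thread; the function and variable names are hypothetical, and `a` holds the diagonal of the area matrix $\boldsymbol{A}_{\mathcal{Y}'}$.

```python
import numpy as np

def masked_geodesic_loss(P, D_X, D_Yp, M, a):
    """L_geo = sum_ij M_ij * A_ii * A_jj * ((P D_X P^T - D_Y')_ij)^2  (hypothetical names)

    P    : (m, n) soft correspondence matrix (area-weighted, P = P~ A_X)
    D_X  : (n, n) geodesic distances on the full shape X
    D_Yp : (m, m) geodesic distances on the partial shape Y'
    M    : (m, m) binary or soft consistency mask
    a    : (m,)  vertex areas of Y' (diagonal of A_Y')
    """
    R = P @ D_X @ P.T - D_Yp                 # pairwise distance distortion
    return np.sum(M * np.outer(a, a) * R**2)  # masked, area-weighted squared error
```

The mask `M` zeros out (or down-weights, in the soft variant) pairs whose geodesic distance on the partial shape is not guaranteed to match the full shape, so only consistent pairs drive the distortion penalty.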
NeurIPS_2024_submissions_huggingface
2,024
GOMAA-Geo: GOal Modality Agnostic Active Geo-localization
Accept (poster)
Summary: The paper introduces GOMAA-Geo, a system designed for active geo-localization tasks where the goal can be specified in various modalities, including text, ground-level, or aerial imagery. The framework uses a combination of cross-modality contrastive learning to align representations across different modalities, supervised foundation model pretraining, and reinforcement learning to derive effective navigation and localization policies. It demonstrates strong performance in both same-modality and cross-modality localization tasks, with an emphasis on zero-shot generalization to new, unseen datasets. Strengths: 1. Modality Agnosticism: The ability to handle goal specifications in various modalities makes GOMAA-Geo highly versatile and applicable in diverse real-world scenarios, particularly in search-and-rescue operations where goal modalities can vary significantly. 2. Zero-Shot Generalization: The framework is effective at generalizing to completely new environments and goal modalities it has never seen during training, which is crucial for practical applications where pre-collected data might not be available. 3. Robust Performance: Extensive testing shows that GOMAA-Geo consistently outperforms other state-of-the-art methods in terms of success rates across multiple datasets and modalities, confirming its efficacy. Weaknesses: 1. Complex Integration: The integration of multiple complex components—contrastive learning, supervised pretraining, and reinforcement learning—while effective, can be challenging to optimize and maintain, potentially limiting its adaptability. 2. Computational Demands: The processes involved, particularly the training across different modalities and the reinforcement learning steps, might require substantial computational resources, which could be a barrier to deployment in resource-constrained environments. 3. 
Limited Discussion on Failure Cases: The paper could benefit from a more detailed discussion on scenarios where GOMAA-Geo fails or performs suboptimally, which would be valuable for future improvements and practical deployments. Technical Quality: 4 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks, we appreciate the reviewer's valuable feedback! > **Q1**: Complex Integration: The integration of multiple complex components—contrastive learning, supervised pretraining, and reinforcement learning—while effective, can be challenging to optimize and maintain, potentially limiting its adaptability. **A1**: We agree that our approach relies on more than a single “end-to-end learning protocol”, and in that sense, it might be somewhat more laborious to train our method from scratch. However, a similar argument can be made for how the 3-step approach (contrastive learning; supervised pretraining; reinforcement learning) makes the overall system more robust, modular, and adaptable. By this, we mean that if, e.g., a new SOTA reinforcement learning algorithm emerges in the future, most of the system can be kept intact and only the third step needs to be replaced with that new RL approach. Finally, note that our work pioneers a new task setup and a set of approaches for tackling it, and future work may well result in even more streamlined approaches for the task (which we also encourage by releasing all code and pre-trained models upon paper acceptance). > **Q2**: Computational Demands: The processes involved, particularly the training across different modalities and the reinforcement learning steps, might require substantial computational resources, which could be a barrier to deployment in resource-constrained environments. **A2**: We first want to re-emphasize that our GOMAA-Geo model is _**not**_ trained across different modalities. This is also described, e.g., in the abstract (Lines 20-21), Section 3 “Active Geo-localization Setup” (Lines 137-138), and Section 5 “Experiments and Results” (Lines 271-272). We do agree, however, that RL training is typically a time- and/or resource-consuming task, which may indeed be difficult in resource-constrained environments. 
However, we would also like to highlight that all models were trained on a single dual-GPU workstation, which we believe is well within the budget of many research labs. More details on compute resources/requirements are described around Lines 756-760 in the appendix. Finally, as you also list under the strength “2. Zero-Shot Generalization”, our results clearly show that GOMAA-Geo generalizes well to completely new environments and goal modalities, so, e.g., research groups/practitioners with less compute available may still be able to successfully use these approaches without further RL training. > **Q3**: Limited Discussion on Failure Cases: The paper could benefit from a more detailed discussion on scenarios where GOMAA-Geo fails or performs suboptimally, which would be valuable for future improvements and practical deployments. **A3**: Thank you for pointing this out; we will add several failure case examples with associated discussions to the appendix of the revised version. Please see also the **attached PDF** with examples of failure cases, where one can see that in scenarios where the goal patch is identical to many other patches in the search space, our GOMAA-Geo agent can fail (note that such settings would be confusing even for humans, as we also rely on more discriminative cues when looking for something). --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have reviewed all the other reviewers' comments and the authors' responses. I agree with Reviewer 9JWt that the main contribution of the dataset lacks some details and that the zero-shot performance requires a more reasonable explanation. While I also acknowledge the paper's significant contribution to the community, I would like to change my rating to borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for the comments and your engagement. 
We provide our response to your comments below: > **Q1**: "Lack some details regarding the dataset" **A1**: Thank you for your comment. We refer you to our detailed response ([link](https://openreview.net/forum?id=gPCesxD4B4&noteId=wpmhB8kN3m)) to Reviewer 9JwT's concerns about the dataset. We believe we have thoroughly addressed all relevant details as requested. Should you have any further questions or concerns regarding the dataset, please do not hesitate to reach out to us. The dataset has already been made available at this anonymous [link](https://huggingface.co/datasets/Gomaa-Geo/MM-GAG/tree/main). Finally, as you mentioned in your response "lack some details", **it would be beneficial if you could specify which specific details you are referring to. This will enable us to respond more effectively.** Next, we clarify the reason behind the superior zero-shot generalization performance of our model. > **Q2**: Reason for superior zero-shot generalization performance? **A2**: The CLIP-MMFE module utilizes a satellite image encoder to project satellite images into the CLIP feature space. The original CLIP model, trained on 400 million image-text pairs, predominantly consists of diverse ground-level images. This extensive training allows the CLIP vision encoder to develop robust, generalized visual representations, making it a powerful feature extractor with strong zero-shot capabilities. The satellite image encoder of the CLIP-MMFE module, although trained on a more specialized dataset of satellite images, maps these images into the CLIP feature space. As a result, it inherits the CLIP model’s exceptional zero-shot performance. This inheritance occurs because the satellite encoder benefits from the rich and diverse visual features learned by the CLIP vision encoder. 
By mapping satellite images to the same feature space as the CLIP model, the satellite encoder leverages the extensive training of the CLIP model, thereby achieving improved zero-shot generalization capabilities. In essence, the effective feature extraction and zero-shot learning abilities of the CLIP vision encoder are extended to the satellite image encoder and is achieved by aligning the feature space of the satellite image encoder of the CLIP-MMFE module and CLIP vision encoder as discussed in lines 149-170, enhancing its performance in recognizing and interpreting new or unseen satellite imagery within the well-established CLIP feature space. To substantiate our argument, we have conducted numerous experiments and provided both qualitative and quantitative analyses throughout the paper. We would like to emphasize one key result here. To validate how aligning features from different modalities to the unified CLIP space—achieved through the CLIP-MMFE module—enhances zero-shot performance, we conducted an experiment (**detailed in Appendix D**) comparing it to a modality-specific feature extractor, such as SatMAE, a foundational satellite image encoder. Although the performance of both approaches is comparable when the goal modality is satellite imagery, a significant performance gap emerges in zero-shot evaluations with other goal modalities. This highlights the benefit of aligning features from different modalities to the robust, zero-shot generalizable CLIP space, as demonstrated by our results. We would be very happy to provide further clarification and hopefully clarify your concerns (if any) about zero-shot generalizability before the discussion period concludes. --- Rebuttal 2: Title: Follow-up to our response Comment: Dear Reviewer DjP3, In our rebuttal, we have already addressed all your previous comments and have answered 9JwT's queries about our dataset, and provided all necessary details along with a link to the dataset. 
As we approach the end of the discussion period, we wanted to follow up to see if our responses have addressed your concerns. We would be very grateful to hear additional feedback from you and will provide further clarification if needed. If you believe our response has adequately resolved the issues you raised, we kindly ask you to consider the possibility of raising the score. Thank you again for your time and effort. Thanks, Authors --- Rebuttal Comment 2.1: Comment: Thanks for these further responses, and I have no other questions. I hold a positive rating of 5, and I lean to accept this paper.
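The rebuttal above attributes the zero-shot gains to aligning the satellite encoder's outputs with the frozen CLIP feature space. As a rough illustrative sketch only (our own toy construction with hypothetical names, not the authors' CLIP-MMFE implementation), such an alignment is commonly trained with a symmetric InfoNCE objective over matched pairs:

```python
import numpy as np

def info_nce_loss(sat_emb, clip_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (satellite, CLIP) pairs:
    the i-th satellite embedding should be most similar to the i-th CLIP
    embedding of the same location."""
    sat = sat_emb / np.linalg.norm(sat_emb, axis=1, keepdims=True)
    clp = clip_emb / np.linalg.norm(clip_emb, axis=1, keepdims=True)
    logits = sat @ clp.T / temperature
    labels = np.arange(len(sat))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)           # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()            # diagonal = true pairs

    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
clip_emb = rng.normal(size=(8, 32))
loss_aligned = info_nce_loss(clip_emb, clip_emb)       # perfectly aligned encoder
loss_random = info_nce_loss(rng.normal(size=(8, 32)), clip_emb)
```

Minimizing this loss pulls the satellite encoder's outputs toward the CLIP embeddings of the same locations, which is the mechanism the rebuttal credits for inheriting CLIP's zero-shot behavior.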
Summary: In this paper, the author proposes a direction classification task for drone path navigation. My main concerns are about the task. Usually we have the global satellite-view image, shall we estimate the direction? Most works do location retrieval, which is common practice in geolocation. Strengths: In this paper, the author proposes a direction classification task for drone path navigation. The method seems sound. The idea is presented clearly. Weaknesses: 1. My main concerns are about the task. Usually we have the global satellite-view image, shall we estimate the direction? Most works do location retrieval, which is common practice in geolocation. 2. Can the network generalize to unseen scenarios? For example, there are many similar buildings or similar forests. 3. I am not sure the network could predict the future according to the history, because there is no global map as input. 4. The drone usually does not go back, so the estimation space is only 3 directions. If we further consider the boundary or corner, there are only 2 / 1 direction. Will it affect the model? 5. I see the samples shown in the paper, and I see a bias toward the boundary and corner. Could you show more samples with the destination at the center? 6. Missing alignment. How about give a destination between the two patches? 7. The direction estimation is not new. Similar methods have been explored in the RL and detection. 8. Typos. i.e. should be i.e., Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the Weakness. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please see the Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper! We hopefully address all of your current concerns / potential misunderstandings below. > **Q1**: My main concerns are about the task. Usually we have the global satellite-view image, shall we estimate the direction? **A1**: This appears to be a misunderstanding. We are **_not_** proposing a “direction classification task” (DCT), i.e., the task is not merely to predict a direction pointing toward a specific goal, but to _explore a partially explored environment to reach the goal_. This may imply that the agent needs to also explore places that are not taking it closer to the goal, to reveal contextual information which may later be useful for efficiently moving towards the goal. We denote this novel task as _goal modality agnostic active geo-localization_. A DCT is _only used as a pre-training task_ (see Sec. 4, L. 171-210, Fig. 2). After pre-training, our full GOMAA-Geo system is refined with RL (see Sec. 4, L. 211-243). In Table 5 we clearly show the benefit of the additional RL-based fine-tuning on top of the DCT warm-up task. **Comparison to other works:** There indeed exist many works on what we call _static_ geo-localization, which often assume access to a broader satellite view perspective, and where the mapping between a ground-level and top-view (satellite) image is treated as an image retrieval problem. We discuss this in Related Work (see e.g. L. 69-75). However, high-resolution satellite imagery is not available publicly everywhere in the world, and GPS information can sometimes be absent or unreliable (e.g. in search-and-rescue operations after natural disasters or warfare activity). Motivated by this, our work tackles geo-localization in partially observable GPS-denied environments, and we specifically tackle it in a _goal modality agnostic_ setup, since if GPS is not available, an agent must be able to robustly find a given location regardless of how it is specified. 
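A1 above notes that direction classification is only a pretraining task. As a hedged toy illustration of what the ground-truth label for such a warm-up task could look like (our own construction with hypothetical names, not the paper's exact setup):

```python
def direction_label(agent, goal):
    """Toy ground-truth label for direction-classification pretraining:
    which of the 4 moves (up/down/left/right) most reduces the agent-goal
    offset. Positions are (row, col) grid cells; ties go to the vertical move."""
    dr, dc = goal[0] - agent[0], goal[1] - agent[1]
    if abs(dr) >= abs(dc):
        return "down" if dr > 0 else "up"
    return "right" if dc > 0 else "left"

direction_label((0, 0), (3, 1))  # -> "down": the goal lies below the agent
```

A greedy policy trained only on such labels cannot detour to gather context, which is why the full task additionally requires RL fine-tuning on top of this warm-up.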
> **Q2**: Can the network generalize to unseen scenarios? For example, there are many similar buildings or similar forests. **A2**: Absolutely! This is one of the key strengths of our framework, which we validated e.g. by conducting extensive experiments with open-source datasets specifically designed to test GOMAA-Geo's ability to handle a range of unseen pre- and post-disaster real-world scenarios. Despite not being trained on any post-disaster aerial images, GOMAA-Geo performs well on images captured after disasters (Table 3). See also L. 307-321 for a detailed discussion on this. Additionally, the experimental section and the appendix contain extensive experimental results regarding generalization to unseen scenarios and goal modalities. See e.g. L. 287-306, Table 2, Appendix B, C, F. See also the **attached PDF** for example exploration behaviors of GOMAA-Geo in scenes containing additional land cover types (i.e., beyond buildings and forests). These will be added to the appendix of the revised version. > **Q3**: Not sure the network could predict the future according to the history. **A3**: The agent is _not_ tasked to predict the future based on its history of sequentially observed top-view glimpses and actions, but rather to predict a promising next action that ultimately leads the agent to reach its goal. We discuss this in the main paper (L. 376-387) and show clearly in Fig. 6 and in Appendix F that the full history is useful for this, as opposed to e.g. only basing the decision on the most recent state. > **Q4**: "The drone usually do not go back, so the estimation space is only 3 directions. If we further consider the boundary or corner, there are only 2 / 1 direction. Will it affect the model?" **A4**: When in a non-boundary position, the agent can always select any of the 4 actions (up, down, left, right), but as it is aware of its relative position in its search area (_not_ global GPS location; see L. 
110) and a history of past relative positions, it quickly learns during training -- based on the reward (eq. (6)) that penalizes such abnormal behaviors -- that it is mostly not beneficial to move to its previous position. When it is on a boundary, we invalidate actions that would take the agent outside the search area (see L. 753-754), which is handled by sampling / selecting only from the available actions. We rarely observed abnormal behaviors from our trained GOMAA-Geo model, as evidenced by numerous visual illustrations of exploration behaviors in different scenarios throughout the paper (see e.g. Fig. 4, 5, 6 in the main paper; also Fig. 7-12 in the Appendix). > **Q5**: Could you show more samples with destination at the center? **A5**: See the **attached PDF** with additional visual results, where the goal is in the middle parts of the search area (will also be added to the appendix in the revision). Note that cases with the goal closer to a border typically correspond to scenarios where the start-to-goal distance is larger, so the examples that we focused most on in the paper can be expected to be more difficult. > **Q6**: Missing alignment. How about give a destination between the two patches? **A6**: We unfortunately have trouble understanding this remark. It seems you are asking if we could represent goal locations that aren't centered within a given cell. Our current framework assumes a discrete domain and doesn't require that goal locations be centered within the target cell, only that they are contained within the field of view. > **Q7**: The direction estimation is not new. Similar methods have been explored in the RL and detection. **A7**: This misunderstanding was addressed in A1 above. We emphasize that *we are not proposing direction estimation*. Rather, we propose a novel, more complex task of goal modality agnostic active geo-localization. To the best of our knowledge, this task has not been explored by others and is therefore novel. 
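The boundary handling described in A4 above (invalidating actions that would take the agent outside the search area, then sampling only from what remains) can be sketched roughly as follows; this is our illustrative approximation, not the authors' code:

```python
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def valid_actions(pos, grid_size):
    """Actions that keep the agent inside a grid_size x grid_size search
    area; out-of-bounds moves are masked out before the policy samples."""
    r, c = pos
    return [a for a, (dr, dc) in ACTIONS.items()
            if 0 <= r + dr < grid_size and 0 <= c + dc < grid_size]

# A corner cell leaves only 2 valid moves; interior cells keep all 4.
corner = valid_actions((0, 0), 5)
interior = valid_actions((2, 2), 5)
```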
> **Q8**: Typos. **A8**: Thanks, we'll fix the identified typos in the revision. --- Rebuttal 2: Title: Follow-up to our response Comment: Dear Reviewer noHa, In our rebuttal, we addressed all your concerns in detail including your concern regarding the task, and also provided additional visualizations that you requested. We will include all the additional visualizations in our revised draft. As we approach the end of the discussion period, we would like to follow up to ensure that our responses have satisfactorily addressed your concerns. We would greatly appreciate any further feedback you may have and are prepared to offer additional clarification if necessary. If you believe our response has adequately resolved the issues you raised, we kindly ask you to consider the possibility of raising the score. Thank you again for your time and effort. Thanks, Authors
Summary: This paper introduces GOMAA-Geo, a framework designed to enhance active geo-localization for the UAV, enabling them to find targets specified through various modalities like natural language or imagery. It addresses the challenge of efficient localization in dynamic environments without specific training on disaster scenarios. Utilizing a modality-agnostic approach with cross-modality contrastive learning and a combination of pretraining and reinforcement learning, GOMAA-Geo achieves robust navigation and localization. It conducted extensive experiments across various datasets and demonstrated its effectiveness. Strengths: - It proposes a new dataset named Multi-Modal Goal Dataset for Active Geolocalization (MM-GAG) for active geo-localization across different goal modalities: aerial images, ground-level images, and natural language text. - This paper introduced a framework named GOMAA-Geo, which utilizes a modality-agnostic approach, combining cross-modality contrastive learning to generalize across different goal specifications and modalities. - Extensive experiments are done in this paper, which is quite laborious. Weaknesses: - The introduced dataset is the main contribution of the paper, but the details of the dataset are very limited, and there is no more information in the appendix, such as the details of data collection, why the zero-shot generalization performance of the evaluation model is better? - The writing needs to be improved, especially the method part, which is quite confusing. In 240 lines, it is claimed that the complete GOMAA-Geo framework integrates all the components introduced before, but it is not easy to find the corresponding relationship from the text to the figure, and the connection between Figure 1, Figure 2, and Figure 3 is also difficult to sort out. Technical Quality: 2 Clarity: 2 Questions for Authors: What is the full name of the UAV? There is no explanation in the whole paper, which will cause confusion to readers. 
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: It addresses the limitations in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: While we very much appreciate the feedback on our work, we were surprised to find that this reviewer has recommended rejecting the paper, given that the listed "Strengths" seem to clearly outweigh the concerns listed under "Weaknesses" (where no technical concerns about our work were mentioned) and under "Questions" (where only a clarification about an abbreviation is requested). We address the listed concerns and the question below. > **Q1**: The introduced dataset is the main contribution of the paper. **A1**: We wish to emphasize that while our dataset is indeed a significant contribution, we do not view it as the *main* contribution. Rather, our main contribution is the introduction and extensive evaluation of a novel framework designed to effectively address goal modality agnostic active geo-localization, even when the policy is trained solely on data from a single goal modality. However, we do agree that our dataset is also an important contribution, both on its own (we carefully considered how to design it so that it is easy to use for future researchers, and as mentioned will release it together with the code upon paper acceptance), and because it allows for assessing and evaluating various methods within our novel task setting (i.e., it is key for validating our main contribution). We summarize our contributions in Lines 55-67. > **Q2**: The details of the dataset are very limited, and there is no more information in the appendix, such as the details of data collection. **A2**: We describe the proposed MM-GAG dataset on Lines 263-276 in Section 5 “Experiments and Results”, and provide additional information in the Appendix section titled “MM-GAG Dataset Details”. Together, these parts of the paper describe key dataset details such as (i) locations in the world where the data were captured (Fig. 
15); (ii) **details of data collection** / creation procedure (beginning at Line 762 in the appendix); (iii) number of evaluation scenarios covered (Line 273-275). If the reviewer can identify concrete missing details, we would be happy to provide these as well. And, of course, we will release the dataset for future research in this field upon acceptance on GitHub along with detailed documentation. We plan to use the GitHub issue tracker to gather requests for additional information about the dataset to ensure ease of use for future researchers. > **Q3**: Why the zero-shot generalization performance of the evaluation model is better? **A3**: We attribute the model's superior zero-shot generalization performance to the CLIP-MMFE module (see Lines 149 - 170 for a detailed discussion on this; we also encourage the reviewer to see Lines 323- 331 where we empirically validate the effectiveness of CLIP-MMFE and provide corresponding visualizations in Fig. 4). One of the most impressive features of the CLIP-MMFE module is its ability to align features across different modalities (such as aerial imagery and text) with the CLIP ground-level image encoder known for its zero-shot generalization capabilities. Moreover, based on the results in Table 2, one of the unseen modalities during training ("Ground Image") appears surprisingly to lead to better results compared to the modality seen during training ("Aerial Image"). The reason for this is that the ground-level image task is a little bit easier because it is constrained to be in locations with discriminative information (the available ground-level images typically depict discriminative scenery such as unusual/salient buildings etc), whereas aerial images are not similarly constrained. The way in which we see that "Ground Image" gives slightly better results is by computing the average across the different C-values (i.e. 
across columns) of Table 2 and getting the mean results for Text, Ground Image, and Aerial Image to be 0.6008, 0.6145 and 0.6034, respectively. Thus the results are on average best for Ground Image and worst for Text, with Aerial Image in between. Note, however, that all results are quite similar, which showcases the robustness across the various modalities. We also refer the reviewer to Appendix D, where we discuss this “trade-off” between modality-specific vs. modality-agnostic goal representation in active geo-localization. Our findings suggest that on the one hand, modality-agnostic representations are beneficial for addressing active geo-localization problems across diverse goal modalities, and on the other hand, they are equally competitive with models that are designed to solve modality-specific active geo-localization tasks, such as SatMAE-Geo. > **Q4**: The writing needs to be improved, especially the method part, which is quite confusing. In 240 lines, it is claimed that the complete GOMAA-Geo framework integrates all the components introduced before, but it is not easy to find the corresponding relationship from the text to the figure, and the connection between Figure 1, Figure 2, and Figure 3 is also difficult to sort out. **A4**: We appreciate the reviewer's suggestions on improving readability, including improving the consistency between Fig 1-3. We will make improvements as suggested in the revision. > **Q5**: What is the full name of the UAV? **A5**: UAV stands for “Unmanned Aerial Vehicle”; we will ensure this is clarified in the revised paper. 
--- Rebuttal 2: Comment: The reason why I believe the dataset is the main contribution is twofold: (1) As stated by the author in the rebuttal, "We summarize our contributions in Lines 55-67," for the contribution paragraph, the first part introduces the GOMAA-Geo framework, and the second part discusses the dataset contribution: "We create a novel dataset to assess various approaches for active geo-localization across three different goal modalities: aerial images, ground-level images, and natural language text." (2) In this paper, the authors claim the proposed open-source dataset MM-GAG is "currently available for evaluating the zero-shot generalizability of GOMAA-Geo across diverse goal modalities, such as ground-level images and natural language text." (Lines 266-270). They conduct a variety of experiments using MM-GAG, detailed in Table 1, Table 2, and further explored in Table 12, Table 14, and Figure 17 in the Appendix. This indicates that a significant portion of the main experiments and ablations in the paper utilized this dataset, and thus, the quality of the dataset impacts the evaluation results. If the dataset quality is poor, such as having biases, how can I trust that the method proposed by the paper is effective? This is why I am hesitant to give a high score from the dataset perspective. If the author does not consider the dataset as an important contribution, I will reduce my evaluation of the experiments related to MM-GAG, and if evaluating purely from a methodological perspective, I am willing to increase my score. Additionally, regarding the details of the data, although more details are provided in the supplementary materials from Lines 762-769, I still hope for more specific information to validate the dataset's collection rationality. Lines 762-763 mention: "After filtering the images, the dataset contained 73 images in total across the globe." 
I would like to know how much data was initially collected, what the filtering was based on, what confirms the global coverage of the dataset, why these specific locations were chosen for sampling, and how the current dataset ensures that it is not biased. --- Rebuttal Comment 2.1: Title: Additional Details Regarding the Dataset Comment: Thank you for your comment. We’re pleased to provide additional details about the dataset that you requested. Please see our detailed response below. **Regarding sampling locations**: The ground-level images were collected from a diverse group of users via a small-scale crowdsourcing effort. We made every effort to ensure the images were sourced from a wide range of countries. The sampling locations were determined by the GPS information embedded in the EXIF data of the images, not by manual selection. The purpose of using the privately sourced images was to avoid leakage into any of the foundation models. **Regarding filtering**: Initially, we collected 82 images. We applied a basic filter to the collected ground-level images, based on the availability of GPS data. Since we needed to retrieve satellite imagery corresponding to each ground-level image, we required that each image include GPS information in its EXIF data. Images **lacking GPS information** were excluded. We did not apply any further filtering. Finally, our dataset comprises 73 ground-level images. **Regarding biases**: As mentioned before, we did our best to ensure diversity among the ground-level images. Our dataset features both indoor and outdoor scenes from 11 different countries. Furthermore, we report the average pairwise similarity between the images in our dataset, computed using cosine similarity of the corresponding image embeddings from various vision models:

| Vision Model | DinoV2 [1] | SigLIP [2] | CLIP [3] |
|---|---|---|---|
| Avg. Pairwise Similarity | 0.10±0.22 | 0.32±0.17 | 0.56±0.13 |

The low average pairwise similarity suggests that the images in our dataset represent a diverse range of concepts. Finally, we would like to emphasize that a single ground-level image can be utilized to generate up to 300 potential start and goal scenarios by spatially adjusting the grid of satellite images. Randomly initializing start and goal locations and averaging the results over 5 different random seeds enabled us to robustly evaluate our model (Line 289). **Link to the dataset**: An anonymous link to the dataset was included in the supplementary material of our initial submission. For your convenience, here is the [link](https://huggingface.co/datasets/Gomaa-Geo/MM-GAG/tree/main) for further reference. **References:** [1]: Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." Transactions on Machine Learning Research (TMLR), 2023. [2]: Zhai et al. "Sigmoid loss for language image pre-training". International Conference on Computer Vision (ICCV), 2023. [3]: Radford et al. "Learning Transferable Visual Models from Natural Language Supervision". International Conference on Machine Learning (ICML), 2021. --- Rebuttal 3: Title: Follow-up to our response Comment: Dear Reviewer 9JwT, In our rebuttal, as you requested, we provided all the necessary details about the dataset along with the anonymous dataset link. As we approach the end of the discussion period, we wanted to follow up to see if our responses address your concerns. We would be very grateful to hear additional feedback from you and will provide further clarification if needed. If you believe our response has adequately resolved the issues you raised, we kindly ask you to consider the possibility of raising the score. Thank you again for your time and effort. Thanks, Authors --- Rebuttal Comment 3.1: Comment: Thanks for your response. I'd like to raise the rating to 5.
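The average pairwise similarity statistic reported in the rebuttal above can be computed in a few lines. This is a minimal sketch with random stand-in embeddings, not the actual DinoV2/SigLIP/CLIP features used by the authors:

```python
import numpy as np

def avg_pairwise_cosine(emb):
    """Mean and std of cosine similarity over all distinct embedding pairs;
    a lower mean similarity indicates a more diverse image set."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    iu = np.triu_indices(len(e), k=1)          # each unordered pair once
    sims = (e @ e.T)[iu]
    return sims.mean(), sims.std()

rng = np.random.default_rng(1)
mean, std = avg_pairwise_cosine(rng.normal(size=(73, 512)))
# Random high-dimensional embeddings are nearly orthogonal: mean close to 0.
```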
Summary: The paper introduces GOMAA-Geo, a novel framework for active geo-localization (AGL) that is capable of zero-shot generalization across different goal modalities. GOMAA-Geo is designed to assist agents, such as UAVs in search-and-rescue operations, to locate targets specified through various modalities (e.g., natural language, ground-level images, or aerial images) using a sequence of visual cues observed during aerial navigation. The framework addresses two main challenges: 1. Dealing with goal specifications in multiple modalities while relying on aerial imagery as search cues. 2. Limited localization time due to constraints like battery life, necessitating efficient goal localization. GOMAA-Geo integrates cross-modality contrastive learning to align representations across modalities, foundation model pretraining, and reinforcement learning to develop effective navigation and localization policies. The framework has been evaluated extensively and shown to outperform alternative approaches, demonstrating its ability to generalize across datasets and goal modalities without prior exposure during training. Strengths: 1. The paper proposes a novel approach that combines cross-modality learning, foundation model pretraining, and reinforcement learning for active geo-localization, which is novel to some extent. 2. GOMAA-Geo's ability to generalize across different datasets and modalities without prior training exposure, enhancing its applicability in diverse real-world scenarios. The framework has undergone rigorous testing and comparison with alternative methods, demonstrating its effectiveness through quantitative and qualitative analyses. 3. The authors have created a new dataset to facilitate benchmarking, which is beneficial for future research in the area. Weaknesses: While the framework shows promise, it may require further validation in real-world conditions with actual UAVs and under various environmental challenges. 
Technical Quality: 3 Clarity: 4 Questions for Authors: See the weakness Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I suggest the authors to discuss the negative impacts of proposed method to our society like public safety. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We are pleased to see that you found our proposed framework novel and that the new dataset will be valuable for future research in this field. Below, we address all your comments. > **Q1**: While the framework shows promise, it may require further validation in real-world conditions with actual UAVs and under various environmental challenges.. **A1**: In this work we have focused on laying the ground work for the novel and real-world relevant task of Global Navigation Satellite System (GNSS)-denied goal localization, which we call *goal modality agnostic active geo-localization*. Evaluations of our main GOMAA-Geo approach, as well as an extensive set of baselines and ablations, are conducted on multiple real-image datasets (as also pointed out in the reviewer's list of the paper's strengths), including under various environmental challenges (see e.g. generalization from pre-disaster to post-disaster scenarios in Table 3). Natural next steps indeed include evaluating these types of methodologies also using real UAVs, as suggested by the reviewer. We plan to explore this in follow-up work, and given that our code and data will be made publicly available, others will be able to explore this as well. > **Q2**: I suggest the authors to discuss the negative impacts of proposed method to our society like public safety. **A2**: Please refer to “Broader Impacts” in the appendix (page 21), which discusses both positive and negative potential downstream applications. Specifically, under “Further considerations” (Line 804-808), we discuss potential negative downstream applications of these types of methodologies. We are grateful for the reviewer's consideration of these issues, and wholeheartedly agree that it is important to carefully consider both positive and negative potential downstream effects when developing new methodologies. 
--- Rebuttal Comment 1.1: Title: Follow-up to our response Comment: Dear Reviewer 4muN, Thank you very much for your valuable reviews and the time you've invested. As the author-reviewer discussion period is coming to an end, we sincerely want to confirm whether we have addressed your concerns. If you believe our response has adequately resolved the issues you raised, we kindly ask you to consider the possibility of raising the score. Thank you again for your time and effort. Thanks, Authors --- Rebuttal 2: Title: Response to Authors Comment: I have no further questions. I still give positive rating. Thank the authors for your rebuttal.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback and insightful comments. We are glad that the majority of the reviewers find the zero-shot generalizability across goal modalities to be a strong contribution of our work, and agree with the reviewers that extensive experiments have been conducted in order to validate various claims and contributions. We considered all the concerns raised by the reviewers carefully and hopefully clarified all of their queries. Furthermore, we have attached a PDF with additional visualizations as requested by the reviewers. Pdf: /pdf/bd72cbde610fb01113d5d908772d737624171ee1.pdf
NeurIPS_2024_submissions_huggingface
2024
Adaptive Domain Learning for Cross-domain Image Denoising
Accept (poster)
Summary: This paper introduces adaptive domain learning (ADL) for the cross-domain raw image denoising problem. The author also introduces a module to embed sensor information into the network. Experiments on public datasets with various smartphone and DSLR cameras demonstrate that the proposed model outperforms prior work on cross-domain image denoising with a small amount of image data from the target domain. Strengths: 1. The proposed pipeline is straightforward and the idea is easy to understand. 2. It's interesting to apply the domain adaptation idea to raw image denoising. Weaknesses: 1. Novelty Issue: Although I am reluctant to mention this, the proposed pipeline is quite common and widely available in domain adaptation research. The main credit is that the author applied it to raw image denoising. However, it may require more innovative insights to meet the standards of the NeurIPS community. 2. Embedding sensor information to help the network distinguish the domain is also a common technique used in many research works. I don't believe it can be considered a significant contribution. 3. In line 210, 20 pairs of data in the target domain is actually not a very small number for a raw image dataset. I am curious about the significance of using these target-domain data and how they can contribute to real advancements in raw image denoising applications. 4. The author should fine-tune PMN (the recent SOTA method) with the target domain data for a more thorough comparison. 5. Minor Issues: There is a misalignment in Table 2. It would be better to note SSID, ELD, and SID in Table 1 instead of using camera names only. PMN (Learnability Enhancement for Low-Light Raw Image Denoising: A Data Perspective) Technical Quality: 2 Clarity: 2 Questions for Authors: Primarily comparing PSNR with self-supervised denoising methods is somewhat unfair because the primary advantage of these methods is not high PSNR. 
Instead, why not structure the experimental section to compare against state-of-the-art methods? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer G2YE for your valuable feedback. We now address your concerns below. ***- Novelty issue*** In this paper, we attempt to solve the problem of lack of RAW data for new sensors ***in a different way*** compared to prior work. The calibration method needs to collect data pairs to build the calibration model, while the self-supervised method can hardly reach state-of-the-art performance because of its underlying assumptions. Our method solves this problem by proposing a cross-domain training strategy, ADL. Our method doesn't need to build any calibration model and can make use of synthetic data and data from other sensors (source domain), which is usually useless for other methods. With the help of a very small amount of data from the new sensor (target domain), our method can train a model that reaches state-of-the-art performance by automatically leveraging useful information and removing useless data. ***- Data requirement*** As illustrated in Figure 3 in the main paper, the performance of our method saturates when we use 18 pairs of data. However, our method can reach satisfactory results when the number of paired data is 6. Note that 18 pairs of data is still a small amount of data compared to other RAW denoising methods. According to the survey made by [1], for DNN-based supervised methods such as SID [2], it takes about 1800 pairs of data for them to reach state-of-the-art performance. For noise calibration methods [3], they need 150-300 pairs of data to build the calibration model. For the self-supervised denoising method [4], they need 12,000 noisy images, and they still cannot reach state-of-the-art performance. ***- Compared with PMN*** PMN is a general method used to overcome the bottleneck of learnability by reforming paired real data according to noise modeling.
We do not compare our ADL with PMN because its general strategy can be used to improve the performance of any of the RAW denoising methods, including our ADL. We will add the experiment of how PMN can improve our method in the revision. ***- Minor issues*** We thank you for your suggestion and we will modify them in our revision. ***- Comparison with self-supervised denoising is not fair*** Both our ADL and self-supervised denoising methods aim to solve the problem of data scarcity in RAW data denoising. Therefore, we take the self-supervised denoising method as a reference. Besides, we also compare our ADL with other state-of-the-art methods that have the same settings, like LED and SFRN, that aim to solve the data scarcity problem, as illustrated in Table 2 in our main paper. [1] Jin, X., Xiao, J. W., Han, L. H., Guo, C., Zhang, R., Liu, X., & Li, C. (2023). Lighting every darkness in two pairs: A calibration-free pipeline for raw denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 13275-13284).\ [2] Chen, C., Chen, Q., Xu, J., & Koltun, V. (2018). Learning to see in the dark. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3291-3300).\ [3] Wei, K., Fu, Y., Yang, J., & Huang, H. (2020). A physics-based noise formation model for extreme low-light raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2758-2767).\ [4] Moran, N., Schmidt, D., Zhong, Y., & Coady, P. (2020). Noisier2noise: Learning to denoise from unpaired noisy data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12064-12072). --- Rebuttal Comment 1.1: Comment: Hello, Thank you for your response. I would like to reply to the contents one by one and discuss with the author as follow: - ***Novelty issue*** As in my initial review, I expressed concerns that the proposed pipeline is quite common in domain adaptation research. 
While I acknowledge the main credit to the authors for applying these methods to the raw image denoising field, I believe the paper may require more innovative insights to inspire readers, as is expected from publications at NeurIPS. For me, novelty is not a decisive factor for rejection if other aspects are excellent; in other words, I believe there is still much room for improvement to elevate the paper's quality to the acceptance standard. - ***Data requirement and PMN*** As presented in the PMN paper, they obtained significantly better results than SFRN; according to my previous test of PMN's code, they utilized about 160 data pairs. This context led me to comment that "20 pairs of data in the target domain is actually not a very small number for a raw image dataset." However, based on Figure 3 and the author's clarifications, the performance with just 6 paired data is substantially lower than with 26 paired data (around a 3 dB difference), which is significant. I am not yet convinced by the author's rebuttal on this point and would appreciate further explanation. - ***Comparison with self-supervised denoising is not fair*** Self-supervised denoising methods typically use no clean data (no supervision), and unsupervised denoising uses unpaired data. In contrast, the method proposed by the authors not only requires paired data from the source domain for supervision but also from the target domain, which should naturally result in better PSNR/SSIM scores. This is why I question the fairness of using these as comparable baselines for performance comparisons. Additionally, the results presented in the tables do not show a significant performance advantage over self-supervised or unsupervised methods, despite the support of substantial data and supervision signals. - ***Additionally*** I am curious about the significance of using these target domain data and how they can contribute to real advancements in raw image denoising applications.
Does this domain adaptation strategy have any practical applications in real-world scenarios for raw image denoising? Looking forward to discussing with you. Best regards, Reviewer G2YE --- Reply to Comment 1.1.1: Comment: We thank you for affirming and approving the application of ADL to RAW denoising and other clinical aspects and for your valuable suggestions. We will improve our paper further in the revision. Besides, we would really appreciate it if you could point out the specific work in which a similar pipeline is proposed in domain adaptation research, so that we can learn from it and include it in our literature review. We are eager to address any further questions or concerns you may have. If you have had a chance to review our response and have additional thoughts, we would greatly appreciate your input. --- Rebuttal 2: Comment: Thank you for your reply. We would like to further discuss the question with you as follows: ***- Data requirement and PMN*** Collecting paired real data for RAW denoising requires considerable human labor and equipment support. Collecting 160 pairs of real data requires a lot more effort than collecting 20 pairs of real data. PMN can reach better performance with less data because they overcome the bottleneck of learnability in real RAW denoising. Moreover, PMN is a data reformation method; it can be applied to any training strategy and network architecture with a similar performance improvement. We applied the DSC strategy proposed in PMN to our ADL. The PSNR on the Sony sensor improved by 0.41 dB, and the PSNR on the Nikon sensor improved by 0.38 dB. ***- Comparison with self-supervised denoising is not fair*** We tried our best to find all possible works that share the ***same goal*** as our method to ensure the ***comprehensiveness*** of the baselines. We compare our ADL with the self-supervised denoising method just for reference.
Moreover, as illustrated in Section 4.1, line 211 in the main paper, in our experiment, the self-supervised denoising method did not use any source domain data and used all the data from the target domain, which is the same as their own setting. Compared to our ADL, they use a lot more data and do not involve any cross-domain learning. ***- The practical applications of ADL*** Here are two examples of practical applications for our ADL in real-world scenarios: 1. As the iteration of smartphone and DSLR cameras has become faster and faster in recent years, collecting a large RAW denoising dataset for each of these sensors to build noise calibration models or for single-domain supervised learning is very labor-intensive. Moreover, datasets collected for specific sensors cannot be used in the training of future sensors and therefore cause a waste of resources. With the help of our ADL, we only have to collect a small dataset with around 20 pairs of data. Besides, the datasets we collect for old sensors can also be reused and serve as the source domain to help with the training of new sensors. 2. Collecting paired real RAW images is difficult and may yield bad data, such as misaligned pairs, without the help of professional equipment, such as robot arms. Moreover, synthetic data may also have outliers, which are very different from the noise distributions of real-world data. These bad data are hard to detect and can lead to significant performance drops in supervised learning. As illustrated in Section 4.4, the robustness of our ADL can avoid the performance drop brought by these bad data. --- Rebuttal Comment 2.1: Comment: Hello, Thank you for your response. After careful consideration of all the rebuttals and available reviews/discussions, I have decided to maintain my original score. I believe this to be the most appropriate reflection of my overall impression and evaluation of the paper. Best regards, Reviewer G2YE
Summary: This paper proposes a novel adaptive domain learning (ADL) approach for the cross-domain RAW image denoising problem. The ADL scheme makes it possible to train models for target domains with limited data by leveraging data from other source domains. The harmful data from the source domain is automatically removed during training. The authors also propose a new modulation approach to encode the sensor type and ISO to help the network adapt to different noise distributions. The extensive experimental results demonstrate the effectiveness of the proposed method. Strengths: 1. The proposed ADL training strategy is useful in cases of very limited target domain data and is able to effectively utilize the existing source domains by filtering out harmful data 2. Extensive ablation studies demonstrate the robustness of the method even in the presence of misaligned data 3. The experiment results in the supplementary show that the method generalizes to other problems (dehazing, deblurring) Weaknesses: 1. The qualitative results only show the error maps. Actual denoised images (or patches) in the sRGB space should be provided for better visual comparison of the methods. In addition, quantitative analysis with metrics such as LPIPS should be done on the sRGB outputs 2. It seems that the already trained network cannot be easily adapted to new data. The method encodes the sensor type as a one-hot vector. Therefore, new source domain data (a new sensor) cannot be incorporated to fine-tune the network, and training should be done from scratch. Technical Quality: 4 Clarity: 3 Questions for Authors: Have you tried other metrics than PSNR for dynamic validation? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer Xk6Q for your valuable feedback. We now address your concerns below. ***- Qualitative results and metrics in sRGB space should be provided for better comparison*** Since RAW data is hard to visualize and differences are hard to tell, we use error maps for better demonstration. We will put some comparisons in sRGB space in our revision. ***- It is hard to incorporate new source domain data because of the one-hot encoding*** It is not hard to incorporate new source domain data into the model. To maintain the network's extensibility, you can keep the size of the one-hot encoding vector greater than the number of types of sensors in the source domain data. For example, if you have 5 sensors as the source domain data, you can make the one-hot encoding vector size 6 to add potential source domain data in the future, and this will not affect the training for the current stage. ***- Other metrics than PSNR for dynamic validation*** As illustrated in Figure 1 in the rebuttal pdf file, we demonstrate the dynamic validation set ablation study on the SSIM metric. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in addressing my comments and providing additional experimental results for the SSIM metric. However, I am still not convinced that the method is flexible for fine-tuning new source domain data. Even though the one-hot encoding vector size can be set higher than the number of sensor types, we can still have more new sensor types in the future that exceed the vector size --- Reply to Comment 1.1.1: Comment: We thank you for your valuable feedback. We would like to further discuss the problem with you. You can set the one-hot encoding vector size to be some large number, for example, greater than 100. We believe that 100 is enough in practice. A very large number of source domains rarely appears in real scenarios.
--- Reply to Comment 1.1.2: Comment: We value your feedback and are eager to address any further questions or concerns you may have. If you have had a chance to review our response and have additional thoughts, we would greatly appreciate your input.
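The one-hot padding scheme discussed in the exchange above can be sketched as follows (the helper name and slot count are illustrative, not taken from the paper):

```python
def sensor_one_hot(sensor_id: int, num_slots: int = 8) -> list:
    """One-hot encode a sensor index into a vector with reserved slots.

    Keeping num_slots larger than the number of sensors currently in the
    source domain leaves unused positions for sensors registered later,
    so the network's input size never has to change.
    """
    if not 0 <= sensor_id < num_slots:
        raise ValueError("sensor_id exceeds the reserved slot count")
    vec = [0.0] * num_slots
    vec[sensor_id] = 1.0
    return vec

# With 5 known sensors and 8 slots, indices 5-7 stay free for future sensors.
encoding = sensor_one_hot(2, 8)
```

The trade-off the reviewer raises remains: once more sensors exist than reserved slots, the input layer has to grow and training restarts from scratch.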
Summary: This paper addresses the cross-domain image denoising problem with a small number of target domain training samples. The authors propose an adaptive domain learning (ADL) strategy that dynamically selects useful training samples from both source and target domains to improve performance. Additionally, the paper leverages a channel-wise modulation layer to model the different noise distributions caused by sensor type and ISO. Experimental results showcase the advantages of the proposed method. Strengths: 1. The paper clearly defines the problem of few-shot domain adaptation for image denoising and presents a solution. 2. The reported quantitative results in the paper indicate improved performance over existing methods, suggesting potential consideration and applicability in real-world scenarios. 3. In general the writing is clear and easy to follow, with minor presentation issues. Weaknesses: 1. **Lacking Clarity on Definitions and Practical Benefits:** The paper actually discusses the few-shot domain adaptation problem but fails to clearly define the specific settings and scenarios being addressed. Also, this paper does not clearly specify the conditions under which the proposed ADL strategy provides the most benefit. For example, how does the performance of ADL compare with simpler strategies like mixing the source and target domain data as the number of target samples increases? What is the threshold number of samples where ADL proves advantageous? Is this number large enough that collecting target domain data beyond the threshold can be seen as impractical? 2. **Lack of comparison to adequate baselines**: The paper would benefit from a more comprehensive comparison with existing few-shot domain adaptation methods in low-level vision [1,2]. Comparing the proposed method with approaches in different settings, such as self-supervised learning, may not be fair or relevant.
It would be more informative and meaningful to compare against methods with similar objectives and constraints. 3. **Lack of Task-Specific Design and Motivation**: The proposed ADL strategy is not specifically tailored for the denoising task, which reduces its effectiveness and relevance. Additionally, the paper does not clearly explain the motivation behind combining ADL with the modulation module. The paper would benefit from a clearer explanation of how these components interact and complement each other. 4. **Presentation Issues:** There are inconsistencies in the presentation of results, such as the caption in Table 3 mentioning "Mod" (modulation) without corresponding columns in the table. [1] K. Prabhakar, Vishal Vinod, et al. "Few-Shot Domain Adaptation for Low Light RAW Image Enhancement." British Machine Vision Conference 2021. [2] Bo Jiang, Yao Lu, et al. "Few-Shot Learning for Image Denoising." 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. As the size of the target domain dataset increases, how does the performance of the ADL strategy compare with simpler strategies such as mixing the source and target domain datasets? Is there a threshold number of target samples beyond which ADL does not offer significant benefits? 2. What are the computational overheads associated with the dynamic validation set used in the ADL strategy? Is the process efficient enough? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are currently not discussed in this paper, which may not be so adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer e1pc for your valuable feedback. We now address your concerns below. ***- Lacking Clarity on Definitions and Practical Benefits*** Our ADL aims to solve the problem of data scarcity in RAW image denoising. To be specific, our ADL has benefits when the data pairs from the target domain are limited. Our ADL can utilize data from the source domain, including synthetic data and real data from other sensors, to improve the performance of training with target domain data. As illustrated in Figure 3 in the main paper, we compare our ADL with naive fine-tuning (pre-training on source domain data and fine-tuning on target domain data) as the size of the target domain data increases. Our ADL outperforms naive fine-tuning when the target domain data is scarce. Besides, according to our experiments, naive fine-tuning achieves comparable results to our ADL when the size of the target domain data is around 280 on the sensor Sony. ***- Lack of comparison to adequate baselines*** We compare our ADL with [1], as illustrated in Table 1 in the rebuttal pdf file. Our method can outperform their method on PSNR and SSIM. We do not compare with [2] because their code is not open-sourced. We will include them in the literature review in our revision. ***- Lack of Task-Specific Design and Motivation*** We combined the modulation strategy with ADL to adjust the feature space by embedding two easy-to-access parameters, the sensor type and the ISO. These two pieces of sensor-specific information give our network knowledge of two crucial aspects of cross-domain RAW image denoising: the domain gap and the noise level. Each of the sensors has a different domain gap compared to the target domain. Taking the sensor type as input, our network can explicitly judge the domain gap between the input source domain data and the target domain and further adjust the features. ISO is the sensor's key setting that affects the noise level of the captured RAW images.
Taking ISO as the input, our network can learn how noisy the input RAW images are and therefore adjust the features to better fit different noise levels. ***- Presentation Issues*** We thank you for your suggestions and will address them in our revision. ***- Computational overheads of dynamic validation set*** The dynamic validation set strategy does not bring much computational cost to our framework. On RTX4090 GPUs, dynamic validation takes 0.21s per image. [1] K. Prabhakar, Vishal Vinod, et al. "Few-Shot Domain Adaptation for Low Light RAW Image Enhancement." British Machine Vision Conference 2021. \ [2] Bo Jiang, Yao Lu, et al. "Few-Shot Learning for Image Denoising." 2023. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal and extra experiment. However, some of my questions have not been properly answered: **Q1**: Regarding the setting, this approach requires source data in domain adaptation. However, source data is not available in many cases (source-free domain adaptation). If those compared methods are source-free, the comparison may not be fair. **Q2**: We are happy to see the additional results. However, the authors did not address my concern: "Comparing the proposed method with self-supervised learning may not be fair," because SSL does not require target domain data with ground truth. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We would like to further discuss the question with you as follows: ***- Unfair comparison with the baselines*** We tried our best to find all possible works that share the ***same goal*** as our method to ensure the ***comprehensiveness*** of the baselines. We compare our ADL with the source-free domain adaptation and self-supervised denoising methods just for reference, and we will emphasize this point in our revision. The source domain data of our ADL can be easily accessed. The source domain data could be synthetic data or data from any other sensors.
Due to the robustness of our ADL, the harmful data from the source domain will be automatically removed. For the self-supervised denoising method, as illustrated in Section 4.1, line 211 in the main paper, in our experiment it did not use any source domain data and used all the data from the target domain, which is the same as its own setting. --- Reply to Comment 1.1.2: Comment: We value your feedback and are eager to address any further questions or concerns you may have. If you have had a chance to review our response and have additional thoughts, we would greatly appreciate your input.
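The exact architecture of the modulation module is not spelled out in this thread; a minimal FiLM-style sketch of the idea described above (conditioning a per-channel scale and shift on sensor type and ISO) might look like the following. All names, array sizes, and the ISO normalization constant are illustrative, and the random matrix `W` stands in for the learned projection in the real network:

```python
import numpy as np

def modulate(features, sensor_onehot, iso, W):
    """Channel-wise (FiLM-style) modulation: a conditioning vector built from
    the sensor one-hot code and a normalized ISO value is mapped to a
    per-channel scale (gamma) and shift (beta), applied to the feature map.

    features: (C, H, W) array; sensor_onehot: (S,) array; iso: scalar;
    W: (2*C, S+1) projection standing in for a learned layer.
    """
    C = features.shape[0]
    cond = np.concatenate([sensor_onehot, [iso / 6400.0]])  # crude ISO scaling
    gamma_beta = W @ cond                                   # shape (2*C,)
    gamma, beta = gamma_beta[:C], gamma_beta[C:]
    # Broadcast the per-channel scale/shift over the spatial dimensions.
    return features * (1.0 + gamma[:, None, None]) + beta[:, None, None]

# Toy usage: 4-channel feature map, 3 known sensors, ISO 1600.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8, 8))
W = rng.normal(0.0, 0.1, size=(8, 4))
out = modulate(feats, np.array([1.0, 0.0, 0.0]), 1600, W)
```

This matches the rebuttal's motivation: the sensor code lets the network account for the per-sensor domain gap, while the ISO input tracks the noise level.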
Summary: This paper addresses the challenge of cross-domain RAW image denoising due to varying noise patterns from different camera sensors (bit depths, color). The authors propose an Adaptive Domain Learning (ADL) scheme that leverages existing data from various sensors and a small amount of data from a new sensor to train an effective denoising model. The ADL scheme selectively uses beneficial data from the source domain while discarding harmful data, and introduces a modulation module to incorporate sensor-specific information. The proposed model outperforms prior methods on public datasets, demonstrating state-of-the-art performance with limited target domain data. Strengths: 1. Originality: The adaptive domain learning (ADL) strategy is a novel approach to addressing the cross-domain image denoising challenge. 2. Quality: The proposed method is thoroughly evaluated through extensive experiments on various public datasets, showing its robustness and effectiveness. 3. Clarity: The paper is well-structured, with clear explanations of the methodology, experiments, and results. Weaknesses: 1. Compared with RAW2RAW mapping: There are existing methods for raw-to-raw mapping, such as "Semi-Supervised Raw-to-Raw Mapping." It would be interesting to explore whether source domain raw images can be mapped to the target domain and then used for training based on the noise model. This could potentially provide a more direct comparison and integration of data from different domains. 2. Data Requirements: The authors used over 20 captured raw images, which is significantly more than the "Two pairs" approach used in LED. With these raw images, could a noise model be used to simulate noise instead? Is there a comparison available? This would help to understand whether the additional raw images provide a significant advantage over using simulated noise from a noise model. 
Technical Quality: 3 Clarity: 3 Questions for Authors: The current ADL settings require more data compared to the LED method. In Table 2b, it is not clear how the LED method was specifically trained—whether it involved fine-tuning with limited target domain data. Additionally, comparing the performance of ADL with LED under the "Two pairs" condition would provide valuable insights into the effectiveness and efficiency of both approaches. This could help understand the trade-offs between data requirements and performance outcomes. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: A more explicit discussion on the potential impact of extremely limited target domain data (like two pairs) on the model's performance would provide additional clarity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer 4VKx for your valuable feedback. We now address your concerns below. ***- Compared with RAW2RAW mapping*** We mapped source domain RAW data to target domain data with the pre-trained model proposed in [1]. We utilize sensors "IP" and "S6" from the SIDD dataset and replace the corresponding metadata in the data preprocessing stage of [1] to keep the data consistent. The following table demonstrates our ADL compared to RAW-to-RAW mapping by [1]. Here "IP" means mapping sensor S6 to sensor IP and "S6" means mapping sensor IP to sensor S6. Values in the table are PSNR/SSIM. The result demonstrates that our ADL outperforms RAW-to-RAW mapping. This is because RAW-to-RAW mapping is an ill-posed problem that struggles with overexposure and underexposure cases. | | IP | S6 | | --- | --- | --- | |[1]|51.26/0.971|38.17/0.889| |Ours|53.09/0.978|39.68/0.897| ***- Data Requirements*** As illustrated in Figure 3 in the main paper, the performance of our method saturates when we use 18 pairs of data. However, our method can reach satisfactory results when the number of paired data is 6. Note that 18 pairs of data is still a small amount of data compared to other RAW denoising methods and is not enough to build a noise model. According to the survey made by [2], for DNN-based supervised methods such as SID [3], it takes about 1800 pairs of data for them to reach state-of-the-art performance. For noise calibration methods such as [4], they need 150-300 pairs of data to build the calibration model. For self-supervised denoising methods such as [5], they need 12,000 noisy images, and they still cannot reach state-of-the-art performance. ***- How is LED trained and how is ADL's performance under the setting of 2 pairs?*** LED is pre-trained on data synthesized by the calibration model built by the simulation camera.
In the pre-train stage of LED in Table 2 in our main paper, we replace the synthetic data with our source domain and target domain data to keep the comparison fair. In the fine-tuning stage, LED and ADL use the same target domain dataset with a size of 18. Note that LED actually uses 6 pairs of data (2 pairs for each ratio) in the fine-tuning stage, as illustrated in section 3.3 of LED's main paper. Therefore, we show the performance of our ADL compared to LED using 6 pairs of target domain data. On the Sony sensor, the PSNR of our ADL is 35.82, while the LED is 35.79. On the Nikon sensor, the PSNR of our ADL is 35.56, while the LED is 35.47. [1] Afifi, M., & Abuolaim, A. (2021). Semi-supervised raw-to-raw mapping. British Machine Vision Conference (BMVC).\ [2] Jin, X., Xiao, J. W., Han, L. H., Guo, C., Zhang, R., Liu, X., & Li, C. (2023). Lighting every darkness in two pairs: A calibration-free pipeline for raw denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 13275-13284).\ [3] Chen, C., Chen, Q., Xu, J., & Koltun, V. (2018). Learning to see in the dark. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3291-3300).\ [4] Wei, K., Fu, Y., Yang, J., & Huang, H. (2020). A physics-based noise formation model for extreme low-light raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2758-2767).\ [5] Moran, N., Schmidt, D., Zhong, Y., & Coady, P. (2020). Noisier2noise: Learning to denoise from unpaired noisy data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12064-12072). --- Rebuttal 2: Comment: We value your feedback and are eager to address any further questions or concerns you may have. If you have had a chance to review our response and have additional thoughts, we would greatly appreciate your input.
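The PSNR figures traded back and forth above (e.g. 35.82 vs. 35.79 dB) are computed from the mean squared error between the denoised output and the ground truth; a minimal sketch, with the function name and flattened-sequence interface chosen for illustration:

```python
import math

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences: PSNR = 10 * log10(max_val^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return 10.0 * math.log10(max_val ** 2 / mse)
```

As a sense of scale for the deltas quoted in these threads: a 0.4 dB PSNR gap corresponds to a 10^0.04 ≈ 1.10x ratio in MSE, i.e. roughly a 9% reduction in mean squared error.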
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback. Below, we address the concerns of Reviewer 4VKx, Reviewer e1pc, Reviewer Xk6Q and Reviewer G2YE. Our paper presents a novel adaptive domain learning (ADL) method with a modulation strategy to solve the problem of data scarcity in RAW denoising. Our ADL can utilize data from various sensors (source domain) to help the training of very limited data from a new sensor (target domain). Our ADL can automatically remove harmful data from the source domain during the training. As highlighted by the reviewers: "The ADL strategy is novel (Reviewer 4VKx)", "The experiment shows ADL's robustness and effectiveness (Reviewer 4VKx, Reviewer Xk6Q) and indicates improved performance over existing methods(Reviewer e1pc)", "The paper is well-structured and easy to follow (Reviewer 4VKx, Reviewer e1pc, Reviewer G2YE)". Pdf: /pdf/d99d79dffb2138375a33ac1254cf5857ccc012c1.pdf
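The core mechanism summarized above, screening out harmful source-domain samples using a validation signal from the target domain, is not specified in detail in these threads. A toy illustration of the general idea (entirely hypothetical names; "training" is reduced to 1-D mean estimation so each sample's effect can be checked directly; the real ADL algorithm may differ substantially):

```python
def filter_harmful(source_vals, target_vals, val_target):
    """Toy stand-in for ADL-style screening: keep a source-domain sample only
    if pooling it with the target-domain data does not increase the error
    against a validation statistic from the target domain."""
    kept = []
    pool = list(target_vals)
    for s in source_vals:
        err_without = abs(sum(pool) / len(pool) - val_target)
        cand = pool + [s]
        err_with = abs(sum(cand) / len(cand) - val_target)
        if err_with <= err_without:  # sample helps (or is neutral): keep it
            pool = cand
            kept.append(s)
    return kept

# Target domain centered near 1.0; the source outlier 5.0 should be dropped.
kept = filter_harmful([0.9, 5.0, 1.1], [1.2, 0.9], val_target=1.0)
```

In the real method the screening signal would be a validation metric such as PSNR on held-out target data rather than a scalar error, but the keep/drop logic is the same in spirit.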
NeurIPS_2024_submissions_huggingface
2024
Dual Lagrangian Learning for Conic Optimization
Accept (poster)
Summary: The authors propose Dual Lagrangian Learning for learning dual conic optimization problems. Dual conic completion, differentiable conic projection layers, and a self-supervised Lagrangian training framework are discussed. Strengths: - Presents a unique framework for dual optimization. No dedicated framework for dual optimization proxies is known to date. - Extensive theoretical formulations Weaknesses: - The practical use of the framework is unclear, especially with regard to scalability and the computational costs involved Technical Quality: 4 Clarity: 4 Questions for Authors: - It is unclear how the utility of this framework can be tested for problem-solving in the real world. While DC3 has been selected as a comparable technique, it remains to be seen whether dual optimization is always beneficial -- e.g., assuming a problem can be solved in both the primal and the dual, are there necessarily computational benefits to solving in the dual? How scalable are the proposed algorithmic frameworks? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and their appreciation of our work. ## Benefits of dual optimization While the paper outlines the theoretical and computational building blocks of DLL, this methodology is meant to be used in tandem with primal techniques, e.g., primal optimization proxies that produce candidate primal solutions. The dual proxies trained with DLL then provide valid dual bounds to certify the quality (i.e. optimality) of primal solutions. Accurate dual solutions can also be used to speed up primal algorithms, e.g., by identifying inactive constraints (whose associated dual variable will be zero), which can be safely removed from the problem. There are also cases where dual solutions are valuable in their own right. This is the case for instance in market-clearing applications, where primal variables are cleared quantities, and dual variables are prices. In that context, DLL can offer higher accuracy and guarantees of dual feasibility. ## Scalability The scalability of our proposed framework leverages several key ingredients: * Efficient conic projection layers: DLL does not require any external solver, nor the resolution of NP-hard problems (when considering the standard cones presented in the paper). We also introduce non-Euclidean conic projections with better complexity. For instance, for the PSD cone, the Euclidean projection has complexity $O(n^{3})$, compared to $O(n^{2})$ for the proposed radial projection. * Closed-form dual conic completion: we showcase several efficient dual completion layers for broad classes of problems (See Examples 1--3 and numerical experiments) * Self-supervised learning: this removes the need for offline data-generation using standard optimization solvers. This greatly accelerates training, and we have found this approach to often yield improved performance compared to supervised learning settings.
Finally, DLL-based proxies are expected to offer a computational burden and benefits similar to those of primal optimization proxies, which have been shown in various works to be scalable for large-scale real-life problems. --- Rebuttal 2: Comment: I thank the authors for their clarifying comments. I choose to keep my original scores for the paper.
Summary: This paper addresses the broad category of machine learning (ML) for optimization, where ML-based approaches are used to obtain solutions or bounds on the solutions of optimization problems. In particular, the paper addresses a particular subclass of problems - conic problems - and, even more specifically, the dual solutions of these problems. Most "ML-for-optimization" frameworks have thus far focused on finding the solution to primal problems, which is intuitive because the variables in primal problems are typically the variables of interest for most real-world problems. However, learning dual solutions can help certify or bound the primal solution, and in some cases, one may be interested in the dual solution directly (for example, when learning shadow prices in economic optimization problems). This paper uses a conic projection layer in addition to the dual completion step to ensure dual feasibility. The paper compares the method (DLL) to other existing methods in terms of their ability to learn solutions to dual problems and shows superiority, mostly due to the fact that DLL includes dual optimality conditions in addition to dual feasibility constraints. An alternative to Euclidean projection (radial projection) is also provided for the projection layer, which allows for a single computation per cone and helps avoid vanishing-gradient issues during training. Strengths: The paper claims that the model can guarantee dual feasibility, which is significant for yielding bounds on the primal solution. In addition, the authors show a closed-form, analytical solution for conic projections and for dual conic completion, outside of the ML model development. The proposed DLL framework doesn't require an explicit formulation of the dual problem; instead, the user only has to provide a set of primal constraints that satisfy a certain set of conditions.
This is convenient but not a main contribution (if the primal conic problem is known, the dual should be relatively easy to find). Even though the most challenging and time-consuming optimization problems in practice are nonconvex optimization problems, many of these problems can be cast as conic problems, and thus the method can be applied to solve quite a large array of problems that may occur in practice. Compared with other methods, the DLL framework requires far less hyperparameter tuning and fewer iterative procedures, which contributes to the ease of training and increased computational speed. Weaknesses: One limitation of the method, if a known closed-form optimal dual completion cannot be achieved, is its use of implicit layers. It would be interesting to see how the speedups diminish with very large problems where the implicit layer becomes a bottleneck. The comparison with existing approaches is fair since those are the state of the art for learning primal solutions, but it's slightly unfair because those methods were not specifically designed to learn dual solutions. I think this is fine since there are no existing ML-for-optimization methods tailored towards dual solutions in particular, but just to note. The work in this paper does seem like a more generalized and extended version of the work in reference [19], where reference [19] was the first to provide a dual proxy for a second-order cone relaxation of the AC optimal power flow (OPF) problem. However, in reading [19], it does not seem to be necessarily limited to the AC OPF problem, although those authors only apply it to such. The approach in the present paper is generalized to a broader class of problems, but is not the first to provide dual feasibility guarantees for solutions produced by a learning model. Technical Quality: 4 Clarity: 4 Questions for Authors: It would be interesting to also see indications of whether or not strong duality holds in the test cases, if this is not too challenging to check.
If so, the optimality gaps reported should also hold in the primal. The authors may also want to come up with some examples of why learning dual solutions directly may be a contribution on its own, outside of just providing a bound on a primal solution. I would also want to see more of a discussion about the comparison with reference [19]; specifically, is [19] already generalizable, with those authors only applying it to the AC OPF problem in their paper? Regarding the use of implicit layers, a discussion (or, even better, experiments if time allows) on their computational limitations and how they slow down inference as problem size grows would be valuable. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have sufficiently addressed the limitations of the method. I don't see any potential negative societal impacts of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
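As background for the strong-duality question above: the certification logic rests on weak duality, which is easy to check numerically. The sketch below uses a hypothetical toy LP (not one of the paper's benchmark instances); any dual-feasible $y$ yields a bound $b^{\top}y \leq c^{\top}x$ for every primal-feasible $x$:

```python
import numpy as np

# Toy LP:  min c^T x  s.t.  A x >= b,  x >= 0   (hypothetical instance)
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([2.0, 3.0])

x = np.array([1.0, 1.0])   # primal feasible: A x = [2, 3] >= b
y = np.array([1.0, 1.0])   # dual feasible:   A^T y = [2, 3] <= c, y >= 0

assert np.all(A @ x >= b - 1e-9) and np.all(x >= -1e-9)
assert np.all(A.T @ y <= c + 1e-9) and np.all(y >= -1e-9)

print(b @ y, "<=", c @ x)  # prints: 5.0 <= 5.0  (bound is tight, so x is certified optimal)
```

When strong duality holds (e.g., under Slater's condition), the best dual bound matches the primal optimum, so dual gaps also bound primal suboptimality.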
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our paper, and their valuable feedback. Please see our general rebuttal for comments regarding how DLL can be used in discrete and non-convex settings. Detailed responses to the reviewer's questions follow. ## Implicit layers We agree with the referee that implicit layers can quickly become a computational bottleneck for large problems. In our experience, this typically happens when dealing with implicit layers for problems with $>10^{3}$ variables and constraints. Nevertheless, we would like to point out the following: * We have never encountered any problem for which we were not able to design a closed-form completion procedure. Indeed, most real-life problems are naturally bounded, which allows the use of the completion presented in Example 1. In many cases, finite bounds can be derived for all variables using the problem's structure. Furthermore, Problem (8) (see Theorem 1) can often be decomposed into smaller problems that can be solved in parallel. While this is obviously only a computational conjecture, it should also be noted that even a partial dual completion can help reduce the size of the problem that eventually may need to be solved via an implicit layer. * As we pointed out in the general rebuttal, subgradients of the Dual Lagrangian Loss in Eq. (19a) are of the form $b - Ax$, where $x$ solves Problem (8). Therefore, it suffices to use a primal-dual solver to solve Problem (8), and use the dual (resp. primal) solution for the forward (resp. backward) pass. This would further reduce the computational burden of implicit layers. We will include this discussion (and explicit formulae for gradient computation) in the paper's final version. ## Strong duality Strong duality does hold for all instances considered in the numerical experiments, i.e., the primal and dual problems have the same optimal value.
Indeed, note that Slater's constraint qualification condition holds for all considered problems. The reviewer is thus correct that the gaps reported in Tables 2 and 4 apply w.r.t. the primal optimal value, whose (average) value is reported in the second column of each table. The most common example of dual solutions being valuable on their own is in market-clearing applications, where dual variables represent clearing prices. In that context, DLL can provide highly accurate price predictions that are guaranteed to satisfy dual feasibility constraints. ## Relation to Ref. [19] In reference [19], _Dual Conic Proxies for AC Optimal Power Flow_, the authors learn dual solutions for a second-order cone relaxation of nonlinear, non-convex AC Optimal Power Flow (OPF) problems. While Ref. [19] does utilize a dual completion procedure, its approach is specific to the SOC-OPF problem, and does not generalize to arbitrary conic settings. For instance, Ref. [19] utilizes the fact that some primal variables can never be zero to eliminate dual variables, a property that does not hold in general. In addition, the authors use domain knowledge to design input features for their models, which cannot be replicated for arbitrary problems. In contrast, this paper proposes theoretical and computational tools for general conic optimization problems. In particular, Theorem 1 offers a much stronger foundation for dual completion procedures than the setting of Ref. [19]. Furthermore, the paper presents closed-form completion procedures for broader classes of problems, and conic projection layers for all standard cones and their duals. This includes the PSD, exponential and power cones, which were not considered in Ref. [19]. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you to the authors for their detailed response to my questions and comments, as well as the general comments that they have provided to the overall feedback from all reviewers.
I think that I agree with the authors about practical problems vs. edge cases, and it sounds like computational tools and findings in practice can achieve a high performance with this framework. I am happy to change my rating to a 7.
Summary: This paper presents the Dual Lagrangian Learning (DLL) framework that utilizes a fully-connected neural network (FCNN) to provide high-quality dual-feasible solutions that can be used to generate valid Lagrangian dual bounds for linear and nonlinear conic optimization problems without the use of costly implicit layers. This FCNN provides a candidate dual-feasible point that is passed through a conic projection step to produce a dual-feasible point; then a dual completion subproblem is solved to get a complete dual-feasible pair $(\hat{z}, \hat{y})$, which is then used to get a valid Lagrangian dual bound. The subgradient of the Lagrangian at the dual-feasible point is used to update the FCNN weights to provide better candidate points. The dual completion subproblem is solved for a variety of conic optimization problems. The experimental evaluation is done on both linear and nonlinear conic optimization problems, and the method is shown to outperform the state-of-the-art method DC3. Strengths: The strength of the paper is the simplicity and effectiveness of the DLL framework compared to DC3 and commercial interior-point solvers. While previous work used dual-feasible points as a warm start, this work shows how dual-feasible points can be used to get valid Lagrangian dual bounds. Weaknesses: The main weakness of the paper is the lack of consideration for general non-convex constraints. A minor weakness of the paper is the lack of detail about how DC3 was tuned. The authors claim that DC3 was difficult to tune and only limited tuning was done with DC3 due to this. However, on the positive side, they do provide adequate detail of how the hyperparameters for DC3 were set for easy reproducibility. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. On Line 515-6, there seems to be a typo. "do not valid bounds" should be replaced with "do not output valid bounds" 2. How was the limited tuning of DC3 done?
While DC3 has many parameters and is difficult to tune, it is possible that DC3 may be better with more extensive tuning. It is important to show that a reasonable effort was made to tune DC3 before claiming that its performance is worse than DLL's. 3. How does the quality of the candidates provided by DLL change over time? It would be interesting to see how the Lagrangian dual bound changes with each iteration of DLL. 4. What considerations did you use when designing the FCNN for DLL? Have you tried other NN architectures, such as GNNs? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors acknowledge the limitations of their work, which include the lack of consideration for general non-convex constraints, and suggest Graph Neural Networks (GNNs) as a promising future avenue of research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time reviewing our paper, and for their insightful comments. Please see our general rebuttal regarding how DLL can be used in non-convex and discrete settings. Detailed responses to the reviewer's questions follow. ## Non-convex constraints Please see our general rebuttal. In a nutshell: * Most algorithms for global non-convex optimization leverage convex relaxations. DLL can therefore provide high-quality surrogate models for these convex relaxations, and/or accelerate their resolution. * DLL can be extended to handle non-convex and discrete constraints by leveraging Lagrangian duality in a similar way. However, obtaining valid Lagrangian bounds then requires solving typically NP-hard problems, instead of using polynomial-time closed-form formulae as in the conic setting. ## Model architecture and tuning As the reviewer pointed out, DC3 has multiple hyper-parameters, which makes its tuning difficult and resource-intensive. We ran initial experiments on small datasets to evaluate the sensitivity of DC3's performance with respect to 1) the number and size of hidden layers, 2) the overall learning rate, 3) the number of inequality corrections, and 4) the learning rate for the correction mechanism. These experiments led us to select the hyper-parameters reported in the paper. We decided not to run systematic hyperparameter tuning for each problem size because of the associated financial and environmental cost. Indeed, for production planning instances, even on the smallest instances, each DC3 training run would typically hit its 1-hour time limit on a V100 GPU (at a cost of about \\$1/hr). Even testing just 15 hyper-parameter combinations would require a computing budget of \\$100 to produce the results of Table 4.
To ensure a fair comparison with DLL, we nevertheless implemented the following improvements, which mostly catered to DC3: * we implemented batched, analytical gradients for DC3's correction mechanism, to alleviate the use of AD tools; * we implemented a learning rate scheduler to reduce the need for a tuned learning rate. We also added a patience mechanism to avoid DC3 getting stuck in a local optimum early on; * we increased the limits for training to 4000 epochs and one hour. In contrast, DLL models typically reached 99.9% of their final performance in at most 5 minutes and a couple hundred epochs. We agree with the reviewer that there might exist a configuration of DC3 whose performance is greatly improved compared to the results presented in the paper. However, our experience suggests that this comes at a heavy price tag compared to DLL, be it developer time, cloud computing budget, or environmental cost. Finally, while we did not consider GNN architectures in this work, it is a very natural avenue which we intend to explore in future work. ## Performance of DLL As we briefly mention above, the training of DLL models is typically completed in at most 200 epochs and under a few minutes. We recorded a relative improvement of the average Lagrangian bound of less than 0.1% between 5 minutes and 1 hour of training time. We will include a figure comparing the evolution of the DLL and DC3 bounds during training in the final version of the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my comments and questions. 1. Regarding the general rebuttal about the lack of consideration for non-convex constraints, I agree with the authors' comments that the convex case is a good starting point, as convex relaxations are typically used for non-convex constraints. I recommend that the authors add a non-convex use case if submitting elsewhere, as it may strengthen the paper. 2.
Regarding the tuning of DC3, I thank the authors for providing a detailed account of the tuning of DC3 in the comments. It is understandable that extensive tuning costs time and computational resources. Still, I would recommend that the authors put these details in the paper to justify the limited tuning of DC3 and explain how the final parameters were chosen in the paper. 3. Regarding the consideration of other architectures other than chosen FCNN architectures, I was more interested in what other architectures the authors experimented with and why this particular architecture was chosen. 4. Good to hear that the authors will consider putting a figure comparing the evolution of the DLL and DC3 bound during training. It will be a useful addition to the paper. With the addressing of the above comments, I would like to raise my score to 7. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their additional feedback. We will include in the final version of the paper a specific discussion of non-convex and discrete constraints (also addressing comments by reviewer `yA4S`), and a more detailed explanation and justification of the tuning of DC3. We will also include a figure comparing the evolution of the Lagrangian bound obtained by DLL and DC3 during training.
Summary: The paper introduces Dual Lagrangian Learning (DLL), a machine learning framework designed for dual conic optimization problems. DLL utilizes conic duality and machine learning models to produce high-quality, dual-feasible solutions and corresponding Lagrangian dual bounds for both linear and nonlinear conic optimization problems. It introduces techniques like a systematic dual completion procedure, differentiable conic projection layers, and a self-supervised learning framework based on Lagrangian duality. The methodology is demonstrated to significantly outperform existing state-of-the-art methods and provides a faster alternative to traditional interior-point methods used in commercial solvers. Strengths: The paper introduces Dual Lagrangian Learning (DLL), which combines machine learning (ML) techniques with dual conic optimization to enhance the computation of dual-feasible solutions and corresponding Lagrangian bounds. This integration addresses both linear and nonlinear conic optimization problems, aiming to provide a new approach compared to traditional methods. The methodology applies differentiable programming to optimization, thus enabling gradient-based optimization methods to refine dual solutions directly. The paper is well-structured and articulates complex ideas with clarity. Weaknesses: Despite its strengths, the paper has several areas that could be improved: Scope of Application: The focus on continuous problems seems a step back considering the current trend and necessity of addressing mixed-integer programming (MIP) in real-world important applications like power systems and manufacturing, which the authors rightfully point out in the beginning. The utility of DLL in these contexts remains unclear, especially without addressing the discrete aspects of such problems.
Incremental Nature: The contributions, while innovative in combining ML with dual conic optimization, appear incremental when compared to existing research integrating ML with Lagrangian relaxation for MIP. More distinction or comparison with these methods could help in highlighting the unique advantages of DLL. Numerical Validation: The experiments provided are based on relatively small problem instances. This limitation might raise concerns about the scalability and practical applicability of DLL. Even by the standards of MIP applications, the sizes of the problems tested appear to be rather small. Extending these experiments to larger, more complex problem instances could enhance the paper's impact and relevance. Technical Quality: 2 Clarity: 3 Questions for Authors: Justification for Focus on Continuous Problems: Can the authors provide a stronger rationale for focusing predominantly on continuous problems given the prevalent interest and necessity for solving MIP in real-world applications? Comparison with Existing Methods: How does DLL compare in performance and applicability with existing methods that combine ML with Lagrangian relaxation specifically for MIP problems? Scalability and Practical Applicability: Could the authors elaborate on the potential scalability of DLL? Are there plans to test the methodology on larger-scale problems, particularly those involving discrete decisions common in industrial applications? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitations are provided in the paper, but in the view of the above review, they appear to be rather restrictive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and their appreciation of our work. Please refer to our general rebuttal for a detailed discussion of how DLL can be used in the non-convex and discrete setting. Detailed answers to the reviewer's questions follow. ## Comparison with Lagrangian decomposition for MIP problems We agree with the referee that MIP problems are of utmost importance in real-life applications. We would nevertheless like to point out that many real-life problems are posed as conic optimization problems, e.g., portfolio optimization problems in finance, model predictive control, day-ahead and real-time electricity market-clearing, etc. More importantly, convex optimization is at the heart of most algorithms for mixed-integer optimization, e.g., branch-and-cut or outer approximation. Hence, by providing highly efficient tools for convex conic optimization, DLL makes a first step towards scalable learning-based methods for mixed-integer optimization. In addition, while LD-MIP and DLL both leverage Lagrangian duality, DLL introduces tools that handle nonlinear conic constraints, whereas, to our knowledge, existing work on LD-MIP only considers the dualization of linear constraints. Finally, while obtaining a valid dual bound in LD-MIP requires solving a combinatorial problem, which may be NP-hard, DLL obtains valid bounds using efficient (polynomial-time), closed-form formulae for a very broad class of problems. ## Scalability (this answer is repeated in our response to reviewer `H5sy`) The scalability of our proposed framework is supported by several key ingredients: * Efficient conic projection layers: DLL does not require any external solver, nor the resolution of NP-hard problems (when considering the standard cones presented in the paper). We also introduce non-Euclidean conic projection with better complexity. 
For instance, for the PSD cone, the Euclidean projection has complexity $O(n^{3})$, compared to $O(n^{2})$ for the proposed radial projection. * Closed-form dual conic completion: we showcase several efficient dual completion layers for broad classes of problems (see Examples 1--3 and the numerical experiments). * Self-supervised learning: this removes the need for offline data generation using standard optimization solvers. This greatly accelerates training, and we have found this approach to often yield improved performance compared to supervised learning settings. Finally, DLL-based proxies are expected to offer a computational burden and benefits similar to those of primal optimization proxies, which have been shown in various works to be scalable for large-scale real-life problems. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying their stance. There are the following points that need to be addressed: 1. Presentation-wise, the language needs to be adjusted to make sure that the authors are not overclaiming. For example, a good lower bound can be obtained to certify the solution quality, but the solution itself may not necessarily be obtained. 2. The paper's contribution must be put in the context of the existing research integrating ML with Lagrangian relaxation. 3. The authors consider a subgradient variant of the method, which may suffer from zigzagging. This loops back to point #2 above, necessitating a more in-depth literature review and, likely, more testing, time permitting. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their additional feedback. We will update the last sentence in the second paragraph of Section I > it becomes possible to design a primal proxy to deliver a high-quality primal solution and an associated dual proxy to obtain a quality certificate. to make it clear that the paper focuses on the dual side, i.e., the primal proxy is trained separately.
We would like to point out that the contributions listed in Section 1.1 specifically mention only conic problems, from conic duality to conic projection and conic problems in the experiments. We will include, in Section 6, an additional discussion about the non-convex and discrete settings, and will expand the literature review in Section 2 to mention additional works on Lagrangian relaxation in the MIP context. The dual Lagrangian function is concave (for a minimization problem) and sub-differentiable. Hence, most algorithms for Lagrangian relaxation are based on subgradients, unless a regularizer is used as in, e.g., Ref. [20]. The zigzagging issue is problematic, in an optimization context, when solving a single instance. In the ML context, as is the case here, the dual Lagrangian function is used as the training loss function. In particular, it is evaluated (and averaged) across multiple instances in a minibatch, which provides a form of regularization. It is also important to note that 1) popular activation functions like ReLU also yield subgradients, without significant impact on performance, and 2) the models are trained with the Adam optimizer, which uses momentum, thus reducing the risk of zigzagging. Finally, we note that works on Lagrangian relaxation in the MIP setting also use subgradient-based training, e.g., [this paper](https://openreview.net/forum?id=aZnZOqUOHq).
Rebuttal 1: Rebuttal: # General rebuttal We express our gratitude to the area chairs and reviewers for their careful reading of our manuscript, their appreciation of our work, and their valuable feedback. In this general rebuttal, we comment on the applicability of the proposed framework to mixed-integer and non-convex problems. We would also like to report that the paper's main author was sick with Covid during the past week, and was therefore unable to run additional experiments for this rebuttal. We nevertheless discuss, in individual rebuttals, what such experiments would have looked like and what results we would have expected. We hope to include them in the final version of the paper. ## How DLL can be used for discrete / non-convex problems First, we would like to point out that convex optimization is the workhorse that underlies virtually all global optimization methods for discrete and non-convex problems. This includes, for instance, branch-and-cut algorithms for MIL(N)P, Outer-Approximation algorithms for convex MINLP, and spatial branch-and-bound for non-convex continuous optimization. Namely, **all these algorithms employ convex relaxations to obtain valid dual bounds and eventually prove optimality**. Hence, by establishing the theoretical fundamentals of a principled learning framework for (convex) conic optimization, this paper marks a first step towards hybrid frameworks for non-convex optimization, wherein learning-based methods are used to solve (or accelerate the solution of) convex relaxations. ## Relation between DLL and Lagrangian decomposition for MIP More generally, it is important to note that Lagrangian duality is a fundamental principle in constrained optimization, which is applicable to convex, discrete and non-convex problems alike. Formally, consider a problem $\min_{x} \lbrace c^{\top}x | A x \geq b, x \in X \rbrace$ and the Lagrangian relaxation $L(y) = \min_{x} \lbrace c^{\top}x + y^{\top}(b - Ax) | x \in X \rbrace$. 
It is always the case that * $L$ is concave w.r.t. $y$, even if $X$ is non-convex or discrete, * $L(y)$ is a valid dual (lower) bound, * if $x$ is an optimal solution of the Lagrangian subproblem, then $b - Ax$ is a subgradient of $L$ at $y$. In our paper, $X$ is described by conic constraints. In Lagrangian decomposition for MIP problems (LD-MIP), e.g., [this paper](https://openreview.net/forum?id=aZnZOqUOHq), $X$ is a discrete set. The two approaches then differ as follows: 1. Existing work on LD-MIP only considers the dualization of linear constraints of the form $Ax \geq b$, which yields a simple dual set $y \geq 0$. In contrast, we consider general conic constraints $Ax \succeq_{K} b$, with associated multiplier $y \in K^{*}$. **Extending LD-MIP to support general conic constraints would require the conic projection layers proposed in our paper** to ensure $y \in K^{*}$. 2. **Obtaining a _valid_ Lagrangian bound in LD-MIP requires solving typically NP-hard discrete problems** and requires an external solver. In contrast, in the conic setting, dual bounds are typically obtained via efficient (i.e. polynomial$^{\dagger}$) formulae. This is thanks to the elegant conic duality theory, whereby the dual of a conic problem is a conic problem. On the other hand, there is no tractable duality theory for non-convex or discrete problems. Therefore, DLL and LD-MIP both leverage Lagrangian duality in a learning context. **DLL introduces key ingredients that support the dualization of nonlinear conic constraints**, compared to linear constraints in existing works on LD-MIP. In addition, its extension to the non-convex or discrete setting is straightforward. The key computational difference is that, in the non-convex / discrete settings, computing a valid Lagrangian bound requires solving a typically NP-hard non-convex / discrete problem.
In contrast, **DLL offers scalable, polynomial-time formulae$^{\dagger}$.** ___ $^{\dagger}$ The polynomial-time result applies to all standard cones considered in the paper. However, some classes of conic programs, e.g. copositive programming, are NP-hard to solve.
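The Lagrangian mechanics described in this general rebuttal (concave $L(y)$, valid bound, subgradient $b - Ax$) can be illustrated end-to-end when $X$ is a simple box, so the inner minimization is closed-form. This is a generic sketch of projected subgradient ascent on a hypothetical toy instance, not the paper's DLL training loop:

```python
import numpy as np

def lagrangian_bound(c, A, b, y):
    """L(y) for min c^T x s.t. Ax >= b, x in [0,1]^n; returns (bound, subgradient)."""
    reduced = c - A.T @ y                     # coefficient of x inside the Lagrangian
    x = (reduced < 0).astype(float)           # closed-form minimizer over the box
    return y @ b + np.minimum(reduced, 0.0).sum(), b - A @ x

# Toy instance: min x1 + x2  s.t.  x1 + x2 >= 1,  x in [0,1]^2  (optimum = 1)
c = np.array([1.0, 1.0]); A = np.array([[1.0, 1.0]]); b = np.array([1.0])

y, best = np.zeros(1), -np.inf
for k in range(200):
    L, g = lagrangian_bound(c, A, b, y)
    best = max(best, L)                       # every L(y) is a valid lower bound
    y = np.maximum(0.0, y + g / (k + 1))      # projected subgradient ascent on y >= 0
# best approaches the optimum 1.0 from below (weak duality)
```

The same structure carries over to the conic setting, with the box $X$ replaced by conic constraints and the nonnegativity projection replaced by a projection onto the dual cone.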
NeurIPS_2024_submissions_huggingface
2024
Understanding the Gains from Repeated Self-Distillation
Accept (poster)
Summary: This paper theoretically investigates the effect of multiple rounds of self-distillation (SD) in linear regression. Under some conditions on the ground truth model $\theta^{\ast}$ and the data matrix $X$, it is shown that multi-step SD can improve the risk bound by a factor of $r = \text{rank}(X)$. Specifically, this happens when the non-zero singular values of $X$ are all distinct and when $\theta^{\ast}$ is *perfectly* parallel to $u_1$, where $u_1$ is the leading singular vector of $X$. The authors show the necessity of such conditions. The improvement yielded by multi-step SD is demonstrated empirically on some simple regression tasks. Strengths: **1.** This paper seems to be one of the first ones to theoretically characterize *multi-step* SD. **2.** The paper is more or less well-written. Weaknesses: **1.** The condition of $\theta^{\ast}$ being *perfectly* parallel to $u_1$ is too strong and unrealistic in my opinion. This makes the results of this paper less interesting. A better result would be quantifying the gains of multi-step SD assuming the angle between $\theta^{\ast}$ and $u_1$ is bounded by some quantity, say $\beta$ -- this is a more realistic setting. Then, focus on the special case of $\beta$ being small (and so $\beta = 0$ would be a very special case of these general results). Is it possible to do something like this? **2.** There are no practically relevant insights for multi-step SD in classification problems which are more interesting now. **3.** Overall, I'm not sure about the relevance of the results in this paper due to the above two points. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weakness #1. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 1 Limitations: Not discussed in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns below. **Concern 1: Relaxing the strong assumption of $\theta^\star$ being perfectly parallel to $\mathbf{u}_1$** Regarding the condition of $\theta^\star$ being perfectly parallel to $\mathbf{u}_1$, we address this in the shared response, since this was a common concern among reviewers. We agree that it is a strong assumption -- and we presented it as such in the interest of a clean exposition of Theorem 1. At the time of writing, the focus was on showing that there exists *a* regime where $r$-step SD can order-wise outperform $1$-step SD (i.e., not just by a $O(1)$ factor). But the result does hold true more generally. This is clarified in detail in the shared response via the below two arguments, and a new version of Theorem 1 that incorporates both of these points. - First, we note that alignment with *any* one of $\mathbf{u}_j, j \in [r]$ is sufficient for the separation (as noted in lines 203-205 in the manuscript also). - Second, based on your valuable recommendation, we have indeed derived a version of Theorem 1 with a relaxed Assumption 2.2, where the closeness of $\theta^\star$ and the singular vector $\mathbf{u}_j$ is controlled with a parameter $\beta$, and setting $\beta = 0$ recovers the special case that we presented. Through the revised Theorem 1, we argue that the regime of separation is more than just an atypical corner case. Overall, we show that the separation holds whenever the ground-truth $\theta^\star$ is "$\beta$-close" to any of the $\mathbf{u}_j, j \in [r]$; and for small $\beta$. We believe this is a fairly general regime and not just a special case. We hope this provides resolution for your concern. 
**Concern 2: Insights for self-distillation applied to classification** We understand that linear regression is a relatively simple task, but we believe that the $\Omega(r)$ order-wise separation between $r$-step SD and $1$-step SD, although in the simple framework of linear regression, is a somewhat surprising result worth sharing with the community. The de-facto loss for classification problems is the cross-entropy loss, which introduces non-linearity and additional technical challenges for theoretical analysis. But practically, there's a large body of work on empirically analyzing multi-step SD in classification problems. Many researchers have observed the phenomenon of multi-step SD providing performance gains in various settings [1,2,3]. Our work is an attempt to theoretically characterize the gain, and it's surprising that linear regression itself exhibits a non-trivial gain from self-distillation. A detailed analysis for classification with non-linearity falls outside the scope of this work. ``` [1] T. Furlanello, Z. Lipton, M. Tschannen, L. Itti, and A. Anandkumar. Born again neural networks. In International Conference on Machine Learning, pages 1607–1616. PMLR, 2018. [2] Y. Li, J. Yang, Y. Song, L. Cao, J. Luo, and L.-J. Li. Learning from noisy labels with distillation. In Proceedings of the IEEE international conference on computer vision, pages 1910–1918, 2017. [3] Z. Zhang and M. Sabuncu. Self-distillation as instance-specific label smoothing. Advances in Neural Information Processing Systems, 33:2184–2195, 2020. ``` --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! The revised version of Theorem 1 is what I was looking for. I agree that this is a nice result. So I have updated my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your review and the valuable suggestion of $\beta$-controlled analysis. We appreciate your reconsideration of this work in light of the rebuttal!
Summary: The paper analyzes the gains a model can achieve from multi-step self-distillation and presents a solid theory showing that the excess risk does not increase with more self-distillation steps. The synthetic task in the paper effectively proves this analysis. Strengths: 1) The analysis of excess risk for the model trained with multi-step self-distillation is solid. 2) The synthetic task and experiments on the Air Quality, Airfoil, and AEP datasets well support the proposed theory. 3) The paper is well-written and easy to follow. Weaknesses: 1) The study of self-distillation has already been explored in [1,2]. [1] shows that any distillation step can be calculated directly from the initial distillation step. Can the authors compare to [1]? In the experiments of [1], it is shown that accuracy does not always increase with more self-distillation steps. In this paper, the experiments only show results for 2-step self-distillation. How about more steps, like 5 or $\frac{r}{2}$? The datasets used in the experiments do not seem to be widely used. How about results on widely used datasets like CIFAR-10? 2) Assumption 2 is not guaranteed in real tasks. As shown in Figure 2(c), the proposed analysis heavily relies on Assumption 2.2. However, Assumption 2.2 does not seem to be satisfied in real tasks, which weakens the proposed analysis. 3) The experiments are not sufficient to support the proposed theory. [1] Kenneth Borup and Lars N. Andersen. Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation. In Advances in Neural Information Processing Systems, 2020. [2] Rudrajit Das and Sujay Sanghavi. Understanding Self-Distillation in the Presence of Label Noise. In Proceedings of the 40th International Conference on Machine Learning, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) Please address the weaknesses mentioned. 2) What networks are used in the experiments? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We address your concerns and questions below. **Concern #1.1: Comparison to another relevant paper [1]** Self-distillation has been explored in [1,2,3], and we do provide a comparison of our results with [2,3]. Thank you for pointing out the highly relevant reference [1] also! There are definitely similarities, but the main conclusions do differ. Let us elaborate more below. Indeed [1] also studies multi-step SD similar to us (Fig 1 in both papers are analogous), and they also obtain an analytical form for the k-step SD (Theorem 4.1 in theirs is analogous to Eq. (8) in ours). The crucial difference is the freedom of the $\xi$ parameters being different at each step of self-distillation. In Lemma 4.2 and Theorem 4.3, [1] assumes the $\xi$ values at each step are equal (denoted $\alpha^{(2)} = \alpha^{(3)} = \cdots = \alpha^{(\tau)}$ in their paper). Consequently, they conclude that subsequent steps of SD progressively sparsify the basis set of obtainable solutions. This means that after a point, running more steps of SD will result in a poorer-performing model (similar to [3]), as you pointed out in your review. Our main result (Theorem 1) is different. It says that subsequent steps of SD strictly provide more freedom, and that the best multi-step SD can outperform the best 1-step SD by an $\Omega(r)$ factor. This separation relies on the freedom of $\xi$s being different at each step, which required careful analysis. We believe this is a significant novel contribution since an order-wise separation of $\Omega(r)$ (and not just an $O(1)$ difference) is somewhat surprising. But we will surely cite the contributions of [1] as they are highly relevant. ``` [1] Kenneth Borup and Lars N. Andersen. Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation. In Advances in Neural Information Processing Systems, 2020. [2] Rudrajit Das and Sujay Sanghavi. 
Understanding self-distillation in the presence of label noise. In Proceedings of the 40th International Conference on Machine Learning, 2023. [3] Hossein Mobahi, Mehrdad Farajtabar, and Peter Bartlett. Self-distillation amplifies regularization in Hilbert space. In Advances in Neural Information Processing Systems, 2020. ``` **Concern #1.2: $k$-step SD for $k>2$** In the regression tasks of Section 5.3, what was challenging for higher $k$-step SD ($k>2$) was numerical instability. In particular, from Theorem 5 we know that the optimal choice of $\xi$ involves computing $M^{-1}$ for the matrix $M \in \mathbb{R}^{k \times k}$, which was unstable to invert for $k > 2$ in the real-world datasets of Section 5.3. There might be more stable ways to approximate the optimal solutions, which will make our approach more practical for higher-order SD. We leave this as a future research direction. As a proof of concept though, we refer you to Figure 2 of the rebuttal pdf, where we run $k$-step SD up to $k=5$ on a synthetic problem. We show that for that specific example, self-distillation beyond $2$-step SD is indeed helpful. In the real-world experiments, we ran only till $2$-step SD because $3$-step SD started becoming numerically unstable for all three of the datasets. **Concern #2: Assumption 2.2 being too strong** We agree that Assumption 2.2 as presented is indeed a strong condition, but we do not really need that strong condition to see the gains of multi-step SD. Let us elaborate why: 1. *Theoretical argument:* In general, we require significantly weaker conditions on $\theta^\star$ for the separation result (Theorem 1). This is clarified in great detail in the shared response (since this was a common concern among reviewers), where we (i) note that alignment with *any* one of $\mathbf{u}_j, j \in [r]$ is sufficient for the separation, and (ii) also present a relaxed version of Theorem 1 with a weaker Assumption 2.2 that does not require *exact* alignment. 
Please refer to the shared response window for a full discussion. 2. *Empirical argument:* The relaxed condition of Assumption 2.2 is fairly reasonable, as shown empirically on the Air Quality and Airfoil regression tasks, where $2$-step SD beats $\{1,0\}$-step SD (especially Air Quality, where the gap in performance is significant). We included the AEP dataset as a negative example where multi-step SD does not provide gains. Further, Figure 1 of the rebuttal pdf provides insight into why AEP saw no gains. It shows that the $\theta^\star$ is indeed strongly aligned with one of the $\mathbf{u}_j, j \in [d]$ for the Air Quality and Airfoil datasets, but *NOT* for the AEP dataset. We presented Assumption 2.2 as such in the interest of a clean exposition of Theorem 1. At the time of writing, the focus was on showing that there exists *a* regime where $r$-step SD can order-wise outperform $1$-step SD (i.e., not just by a $O(1)$ factor). But the above shows, both theoretically and empirically, that multi-step SD can provide gains in much more general settings. **Question #2: Models and datasets used** Since our setting is regression, CIFAR-10 is a less relevant dataset for us. We demonstrate the empirical results on regression datasets taken from the commonly used UCI repository. The experiments presented are for linear estimators of the type $\hat{\theta} \in \mathbb{R}^d$. Non-linear networks were not used to keep the experiments in accordance with the theory. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 2wSn Comment: Thank you for the reply. Most of my concerns have been addressed. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your review, and for considering our rebuttal to your concerns. Please let us know if you have any more questions!
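To make the contrast with [1] in Concern #1.1 concrete: with a *shared* $\xi$ at every step, the fixed-design SD recursion reduces to $\theta_k = \xi A \theta_{k-1} + (1-\xi) b$, with $A = (\mathbf{X}\mathbf{X}^\top + n\lambda I)^{-1}\mathbf{X}\mathbf{X}^\top$ and $b$ the plain ridge solution, and this has a closed form in $b$; freeing $\xi$ per step is exactly what breaks this collapse. The sketch below checks the equal-$\xi$ closed form numerically, under our own notational assumptions (it is not the paper's Eq. (8)):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, lam, xi, k = 4, 30, 0.1, 0.6, 3
X = rng.normal(size=(d, n))   # fixed design, columns are samples
y = rng.normal(size=n)

P = np.linalg.inv(X @ X.T + n * lam * np.eye(d))
A = P @ X @ X.T               # per-step shrinkage operator
b = P @ X @ y                 # 0-step estimator (plain ridge)

# Recursion with the SAME imitation parameter xi at every step:
theta = b.copy()
for _ in range(k):
    theta = xi * A @ theta + (1 - xi) * b

# Closed form: theta_k = (xi*A)^k b + (1-xi) * sum_{i<k} (xi*A)^i b
M = np.linalg.matrix_power(xi * A, k)
S = sum(np.linalg.matrix_power(xi * A, i) for i in range(k))
theta_closed = M @ b + (1 - xi) * S @ b
assert np.allclose(theta, theta_closed)
```

Because the equal-$\xi$ iterates are all generated from the single matrix $A$ and the single vector $b$, they sweep out a one-parameter family, which is the sense in which the rebuttal says the per-step freedom of the $\xi$s is the crucial difference.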
Summary: The paper tries to provide a theoretical analysis of the gains from applying self-distillation repeatedly, in particular by trying to show that it is important to optimally set the imitation parameter for each step rather than using a fixed value. The study is conducted using the ridge estimator for linear regression. The authors provide a theorem on the excess risk, under two assumptions, when multi-step SD is applied. Experimental results are given on synthetic data and three real-world datasets. The paper also provides insights on how to set the imitation parameter on real datasets. Strengths: Along with the theoretical analysis, the authors provide experimental results to demonstrate the necessity of the two major assumptions behind their theoretical gain bounds. The analysis of the importance of properly choosing the imitation factor at each step, and how it relates to excess risk, is insightful. The problem setting, although simplified, is straightforward to understand, and the idea and analysis are well presented. Weaknesses: The paper provides neither discussion nor experiments on how the proposed analysis can be used in more complex settings (e.g. more expressive models). On the other hand, for the experimental results, the authors do not provide any insight into why multi-step SD degrades the results on the AEP dataset. Could it be due to violation of the assumptions? In general, the paper provides theoretical analysis for a known technique without much insight into how one can use it to improve experimental work; for example, it would have been more interesting if the authors could provide more principled techniques for hyperparameter setting in multi-step SD. It would also be helpful to conduct more experimental analysis, either on more datasets or in greater depth. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors share any thoughts on how to expand the analysis to more complex families of models, and the challenges involved? 
- How can Assumption 2.2 be interpreted as a constraint on more complex models, for example MLPs? - What could be the reason for the drop in performance on AEP? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The theoretical analysis is limited in the scope of models, and it is not immediately clear how it can be extended to more expressive models. Furthermore, it is not straightforward to apply it to large problems by optimally setting the hyperparameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We address your questions and concerns below. **Question 1: Analysis of a more complex model family** We understand that linear regression is a relatively simple task, but we believe that the $\Omega(r)$ order-wise separation between $r$-step SD and $1$-step SD, although in the simple framework of linear regression, is an interesting and perhaps surprising result. On extensions, kernel regression is a fairly direct extension in terms of technical tools. The challenge for more expressive models (e.g., MLPs) is the introduction of $(i)$ non-linearity and $(ii)$ non-convexity, which pose significant additional technical challenges for theoretical analysis. The solution of the ERM problem becomes hard to characterize. Further, there is interplay with optimization since the objective can become non-convex. Because of these difficulties, researchers generally resort to empirical studies for more complex models. Many past works have observed the phenomenon of multi-step SD providing performance gains in various settings [1,2,3]. Our work is an attempt to theoretically characterize the gain, and it's somewhat surprising that linear regression itself exhibits a non-trivial order-wise gain from self-distillation. ``` [1] T. Furlanello, Z. Lipton, M. Tschannen, L. Itti, and A. Anandkumar. Born again neural networks. In International Conference on Machine Learning, pages 1607–1616. PMLR, 2018. [2] Y. Li, J. Yang, Y. Song, L. Cao, J. Luo, and L.-J. Li. Learning from noisy labels with distillation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1910–1918, 2017. [3] Z. Zhang and M. Sabuncu. Self-distillation as instance-specific label smoothing. Advances in Neural Information Processing Systems, 33:2184–2195, 2020. 
``` **Question 2: Interpretation of Assumption 2.2 in more complex models** At a high level, $\theta^\star$ is akin to the parameterization of the ground-truth generative process (which could be non-linear in general). And $\mathbf{u}_j$ is akin to a basis direction of the observed data (which is computed simply via SVD for the linear case, but the notion of basis could involve non-linearity for more general models). These notions are far from precise though, and it is not clear what the interpretation of their "closeness" (as in Assumption 2.2) would mean in a more complex regime. Characterizing this more explicitly would be a very interesting direction of future work, but falls outside the scope of our current work. **Question 3: Insight into performance on the AEP dataset** Note that $\{1, 2\}$-step SD achieve roughly the same test MSE as ridge on the AEP dataset (Table 1 of the manuscript), so there is *no drop* in performance; it's just flat. In the attached pdf (Figure 1), we provide an explanation for why this is the case. Unlike the other two datasets, AEP happens to have a $\theta^\star$ which is not strongly aligned with any particular basis direction $\mathbf{u}_j, j \in [d]$. This violates Assumption 2.2, where we require $\theta^\star$ to be well-aligned with one of the $\mathbf{u}_j$ directions for observing large gains from multi-step SD. > Please also refer to the shared response for a detailed discussion on Assumption 2.2. In particular, we present a relaxed version of Theorem 1 where $\theta^\star$ needs to be closely (and *not just exactly*) aligned with *any* of the $\mathbf{u}_j$, and not necessarily $\mathbf{u}_1$. To be more explicit, let us break it into two cases: - **Case 1:** From the new version of Theorem 1, we know that closeness of $\theta^\star$ to some $\mathbf{u}_j, j \in [d]$ (which translates to the existence of one high peak in the Figure 1 bar chart) implies large gains from self-distillation. 
- **Case 2:** On the other hand, from Theorem 3 of the manuscript, we know that if $\theta^\star$ is equally aligned with all of the $\mathbf{u}_j$s (which translates to a flat distribution in the Figure 1 bar chart), it provably implies no gain from self-distillation. Since the AEP dataset happened to be closer to the second case, we see no gains from self-distillation. We thank you for this suggestion, and will aim to include this explanation in the revised manuscript. **Other Concerns: Hyperparameter selection** The comments also raised a concern about principled hyperparameter selection methods. We would like to highlight that *there is a principled one-shot tuning method we explain in Section 5.2 (and Appendix K) for the $\xi$ parameters*, leveraging the theoretical insights from Section 4.4. In particular, from Theorem 5 in Section 4.4, we know that the excess risk is quadratic in the $\bar{\xi}$ parameters (a one-to-one reparameterization of the $\xi$ parameters). Using this quadratic structure, one can evaluate the optimal $\xi$s for any regularization strength $\lambda$, leaving only one parameter ($\lambda$) for the grid search. --- Rebuttal Comment 1.1: Comment: Thank you for the response; I will retain my rating, and recommend that the clarification on the characteristics of the AEP dataset be included in the main body of the paper. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your review, especially for the suggestion of analyzing the characteristics of the AEP dataset. This provides valuable insights, and we will include this in the revised manuscript of this work.
Summary: This paper explores self-distillation from a theoretical perspective in the context of linear regression. Distillation is when a model is trained simultaneously to predict training labels and the predictions of another model that has already been trained on the data. Self-distillation is when the trained model has the same architecture as the model being trained. There has been recent work empirically showing that self-distillation can result in better models, and this paper gives a theoretical understanding of multi-step self-distillation in the context of linear regression. The starting insight of the paper is that repeated self-distillation can be thought of as a pre-conditioner matrix multiplied by the optimal solution to ridge regression. Using this insight, they prove that there is a linear regression problem where self-distillation repeated r times (the rank of the data matrix) can outperform one-step self-distillation and ridge regression by a factor of r. They measure performance in terms of "excess risk", which is the distance to the optimal parameters under a norm weighted by the covariance matrix. Their example with the factor-of-r gap requires two assumptions: all the singular values of the data matrix are distinct, and the optimal parameters are orthogonal to all but one eigenvector. They show theoretically that both assumptions are necessary for such a gap to exist. They also derive an expression for the excess risk and conjecture that using more than r steps doesn't help the self-distillation process (as long as the self-distillation parameters are chosen optimally?). They have some experiments where they show that 2-step self-distillation is robust to the ridge regression hyperparameter lambda. Because finding optimal parameters for self-distillation is difficult on real data, they do not show the performance of 3-or-more-step self-distillation. 
Strengths: * The paper begins with a nice insight into multi-step self-distillation and leverages it to prove several theoretical results. * The paper shows a very interesting gap in performance between r-step self-distillation and ridge regression and 1-step self-distillation. * The paper shows that two assumptions in the construction are necessary for the gap. * The paper shows in practice that 2-step self-distillation works very well. Weaknesses: Larger weaknesses: * The paper only shows results in terms of excess risk. I'm not familiar enough with the fixed design setup to determine how useful excess risk is. * The paper shows that r-step self-distillation can be very powerful but they give no way to practically run it i.e., choose the parameters on real data. Smaller points to improve: * I'm confused by the statement of Assumption 2.1. Shouldn't this say that the optimal parameter is orthogonal to every eigenvector *except* the first one? * More generally, I was confused about the way you stated the assumptions. You referred to two necessary assumptions which I assumed would be Assumption 1 and Assumption 2 but they were actually sub-assumptions in Assumption 2. I would ask that you rework what you call these assumptions and how you refer to them for clarity. * In your appendix, you referred back to equations in the main body without reproducing them. This made it annoying to check your work e.g., you compared equation 8 and 26 but I kept having to flip between them (until I eventually copied equation 8). Since there's no page limit in the appendix, I would appreciate if you reproduced all equations when you refer to them. * Your figures are difficult to read when printed out in black and white (and, I assume, for color blind people). Could you please change the marker for each line so they're more distinguishable? Technical Quality: 4 Clarity: 4 Questions for Authors: * You show a gap between 1- and r-step self-distillation. 
Is there a similar gap between 2-step and r-step self-distillation? Based on your experiments, it looks like 2-step is already very good. Maybe this is a setting where 2-steps basically gives you all the power of r-steps like e.g., load balancing and the "power-of-2 choices". * You justify using excess risk because it "ensures that the signal-to-noise ratio is uniform in all directions." I don't understand this and I don't see a direct relationship between excess risk and MSE. Could you please tell me why I should care about excess risk? And show me where it comes from theoretically? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
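As a concrete aside on the reviewer's excess-risk question above: in the fixed design setting, the $\hat{\Sigma}_n$-weighted distance is exactly the mean squared prediction gap over the $n$ design points, which is one tangible reason to care about it. A self-contained numerical check (our own illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 40
X = rng.normal(size=(d, n))            # fixed design, columns = samples
theta_star = rng.normal(size=d)
theta_hat = theta_star + 0.1 * rng.normal(size=d)

Sigma_n = (X @ X.T) / n                # empirical second-moment matrix
diff = theta_hat - theta_star
excess_risk = diff @ Sigma_n @ diff    # ||theta_hat - theta*||^2_{Sigma_n}

# The same quantity, computed as the mean squared prediction gap
# over the n training inputs:
mean_sq_gap = np.mean((X.T @ diff) ** 2)
assert np.isclose(excess_risk, mean_sq_gap)
```

So the $\hat{\Sigma}_n$-norm excess risk is the average squared error of the estimator's predictions at the design points, relative to the noiseless ground truth.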
Rebuttal 1: Rebuttal: Thank you for your encouraging review! Let us try to resolve some of your questions and concerns. **Question 1: Gap between $2$-step SD and $r$-step SD** The power of two choices in load balancing is related to two choices being the first non-trivial departure from the standard one-choice algorithm. Qualitatively (and intuitively), that departure happens perhaps at $1$-step SD in our setting, where self-distillation is first used. This is why we focused on the separation between $1$-step SD and $r$-step SD in our manuscript, along with $0$-step (i.e. ridge) versus $r$-step SD. Quantitatively, it is challenging to quantify the gap between $k$-step SD and $r$-step SD for a *general* $k > 1$. Our educated guess is that the gap between each additional step of SD can be up to $O(1)$ (and accumulating the gains over $r$ steps gives an $O(r)$ total gain), but this is not currently proved. As a proof of concept though, we refer you to Figure 2 of the rebuttal pdf, where we run $k$-step SD up to $k=5$ on a synthetic problem. We show that for that specific example, self-distillation beyond $2$-step SD is indeed helpful. In the real-world experiments, we ran only till $2$-step SD because $3$-step SD started becoming numerically unstable for all three of the datasets. **Question 2: Justification of using excess risk as the metric** In the linear regression literature, there are two reasons why excess risk in the $\hat{\Sigma}_n$-norm is the preferred metric. The first is that it corresponds to the **test error** on a fresh sample, which is typically the evaluation metric of choice in many settings. The second is that it is **invariant** to the basis of the input $x$. We explain each of these reasons in detail below. *Test error calculation* In many cases, we care about the average test error, i.e. the expected error incurred on a fresh sample drawn from the underlying joint distribution $P$. 
Let's write that mathematically, \begin{align} {\rm TotalRisk} (\hat{\theta}) &= \mathbb{E}_{(x,y) \sim P} \left[ \left( \langle \hat{\theta}, x\rangle - y\right)^2 \right] . \end{align} We first break the joint expectation into a conditional expectation on $y \sim P_{Y|X=x}$ followed by $x \sim P_X$. From Assumption 1 of the paper, we know that $P_{Y|X=x}$ is a distribution with mean $\langle \theta^\star, x \rangle$ and variance $\gamma^2$. So we can write $y = \langle \theta^\star, x \rangle + \eta$, where $\eta$ denotes the noise and satisfies $\mathbb{E}[\eta]=0, {\rm Var}[\eta] = \gamma^2$. Using this in the total risk expression, we get \begin{align} {\rm TotalRisk} (\hat{\theta}) &= \mathbb{E}_{x \sim P_X} \left[ \langle \hat{\theta} - \theta^\star, x \rangle^2 + \gamma^2 \right] . \end{align} This simplifies to \begin{align} {\rm TotalRisk} (\hat{\theta}) &= \left( \hat{\theta} - \theta^\star \right)^\top \mathbb{E}_{x \sim P_X} \left[ xx^\top \right] \left( \hat{\theta} - \theta^\star \right) + \gamma^2 \text{ }. \end{align} Since $P_X$ is the density with point masses at the $n$ training points in the fixed design setting, we get $\mathbb{E}_{x \sim P_X} \left[ xx^\top \right] = \frac{1}{n} \cdot \mathbf{X} \mathbf{X}^\top = \hat{\Sigma}_n$. Using this, we obtain \begin{align} {\rm TotalRisk} (\hat{\theta}) &= || \hat{\theta} - \theta^\star ||^2_{\hat{\Sigma}_n} + \gamma^2 \text{ }. \end{align} The above calculation shows that excess risk in the $\hat{\Sigma}_n$-norm (i.e. the first term) is the right metric that determines how good an estimator is, above the "*noise floor*" of $\gamma^2$. We remark in Appendix B that some previous works have used the $\ell_2$-norm instead of the more natural $\hat{\Sigma}_n$-norm. *Invariance* Another reason is that we prefer a measure of performance that is invariant to what basis we choose for the input $x$. 
Precisely, we want the performance of an algorithm to be the same whether we apply it to data coming from $(\mathbf{X}, \theta^\star)$ or $(A \mathbf{X}, A^{-\top}\theta^\star)$ for any $d\times d$ invertible matrix $A$. This follows from the fact that one can always apply any $A$ to the input before feeding it to the algorithm, and if the performance changes based on $A$, then it is hard to draw a fair comparison. (To focus on the **invariance property**, we are setting numerical stability aside here.) It happens that the excess risk $|| \hat{\theta} - \theta^\star ||^2_{\hat{\Sigma}_n}$ is invariant to such matrix multiplication (which is a change and/or scaling of the basis). We believe this is also one of the reasons excess risk in the $\hat{\Sigma}_n$-norm is preferred over the $\ell_2$ distance. This is what we meant when we said "*ensures that the signal-to-noise ratio is uniform in all directions*." **Larger weaknesses** 1. Please see the above justification for using excess risk as the metric. 2. **There is a one-shot tuning method we explain in Section 5.2 (and Appendix K)**, where we delineate a principled hyperparameter selection method for the $\xi$ parameters, leveraging the theoretical insights from Section 4.4. In particular, from Theorem 5 in Section 4.4, we know that the excess risk is quadratic in the $\bar{\xi}$ parameters (a one-to-one reparameterization of the $\xi$ parameters). Using this quadratic structure, one can evaluate the optimal $\xi$s for any regularization strength $\lambda$, leaving only one parameter ($\lambda$) for the grid search. In practice, what was challenging for higher $k$-step SD ($k>2$) was numerical instability. In particular, from Theorem 5 we know that the optimal choice of $\xi$ would involve computing $M^{-1}$ for the matrix $M \in \mathbb{R}^{k \times k}$, which was unstable to invert for $k > 2$ in the real-world datasets of Section 5.3. 
There might be more stable ways to approximate the optimal solutions, which will make our approach more practical for higher order SD. We leave this as a future research direction. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response! I will retain my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your in-depth review, as well as for considering our rebuttal points. --- Rebuttal 2: Title: Addressing the smaller points to improve Comment: **Smaller points to improve** Thank you for these comments. We address them below. 1. **Assumption 2.2 as stated actually *implies* that $\theta^\star$ is orthogonal to every eigenvector except the first one.** We state Assumption 2.2 as $\measuredangle (\theta^\star, \mathbf{u}_1) = 0$, which means that $\theta^\star$ is perfectly parallel to $\mathbf{u}_1$. Since $\mathbf{u}_j, j \in [d]$ are all eigenvectors of $\mathbf{X} \mathbf{X}^T$, they form an *orthonormal basis* of $\mathbb{R}^d$ (also stated in Appendix A), i.e. $\langle \mathbf{u}_i, \mathbf{u}_j \rangle = 0$ for $i, j \in [d], i \neq j$. So indeed, the assumption as stated implies that $\langle \theta^\star, \mathbf{u}_j \rangle = 0$ for $j \in \{2, 3, \cdots, d\}$. There is a separate discussion on whether this condition is too strong, which we address in the shared (global) response window. 2. Thank you for catching this! We agree that a clearer way of stating this would explicitly call out the sub-assumptions 2.1 and 2.2. We will work this out in the final draft. 3. Totally agreed, we will modify the appendix to have relevant equations handy. 4. Also a valid point. We will rework this to a friendlier plotting style (for example, plotting with different marker styles as opposed to different colors).
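The orthogonality implication in point 1 of the smaller points above is easy to verify numerically. The following standalone sketch (ours, not the authors') checks that the left singular vectors of $\mathbf{X}$ form an orthonormal basis, and that perfect alignment of $\theta^\star$ with $\mathbf{u}_1$ forces zero overlap with every other $\mathbf{u}_j$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 25))
U, s, Vt = np.linalg.svd(X)   # columns of U are the u_j, eigenvectors of X X^T

# The u_j form an orthonormal basis of R^d:
assert np.allclose(U.T @ U, np.eye(6), atol=1e-10)

# Assumption 2.2: theta* perfectly parallel to u_1 ...
theta_star = 3.0 * U[:, 0]
# ... which indeed implies <theta*, u_j> = 0 for all j >= 2:
assert np.allclose(U[:, 1:].T @ theta_star, 0.0, atol=1e-10)
```

This is exactly the reviewer's reading: stating $\measuredangle(\theta^\star, \mathbf{u}_1) = 0$ already encodes orthogonality to the remaining eigenvectors, since they are mutually orthogonal.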
Rebuttal 1: Rebuttal: We thank all the reviewers for their feedback and valuable comments. Multiple reviewers (Uvvg, 2wSn) have raised **concerns about Assumption 2.2 being too strong**, which we address in this shared response. We want to emphasize two points. - First, as we point out in lines 203-205 in the manuscript, Theorem 1 holds as is for a larger class of problems where $\theta^\star$ is perfectly parallel to *any* one of $\mathbf{u}_j, j \in [r]$. We stated Assumption 2.2 only for $\mathbf{u}_1$ because we viewed it as a lower-bound result. That is, the focus was on showing that there exists *a* regime where the $\Omega(r)$ separation holds. Now that we realize that the result should be stated more generally, we present the following version (to be added in the revised manuscript). This version is stated for any $\mathbf{u}_j$ instead of just $\mathbf{u}_1$. - Second, we have a new theorem capturing the dependence on the angle between $\theta^\star$ and the most aligned singular vector. Based on a suggestion from the reviewers, we generalize to the case where the angle $\measuredangle (\theta^\star, \mathbf{u}_j)$ for the most aligned $j\in[r]$ is controlled with a parameter $\beta$, instead of the angle being exactly zero. We present the following version of Theorem 1 with a relaxed Assumption 2.2 (that incorporates $\beta$), and Assumption 2.1 staying the same. This will also be added to the revised manuscript. Note that setting $\beta = 0$ exactly recovers the original claim. We agree that the condition of $\theta^\star$ being perfectly parallel to $\mathbf{u}_1$ was indeed a strong assumption. As mentioned, the focus was on showing *a* regime of separation. Through the above two points (both incorporated in the new theorem below), we argue that the regime of separation is more than just an atypical corner case. 
Overall, we show that the separation holds whenever the ground-truth $\theta^\star$ is "$\beta$-close" to any of the $\mathbf{u}_j, j \in [r]$; and for small $\beta$. We believe this is a fairly general regime of problem instances. --- **Assumption 2'** 2'.1 No two non-zero singular values of $\mathbf{X}$ collide, i.e. $s_1 > s_2 > \cdots > s_r > 0$. 2'.2 For some $\beta \in [0, 1)$, there exists an index $j \in [r]$ such that $\langle \theta^\star, \mathbf{u}_j \rangle^2 \geq (1 - \beta) \cdot || \theta^\star ||^2$. **Theorem 1'** Under the fixed design linear regression in Assumption 1, there exists a family of problem instances satisfying Assumption 2' such that for any instance $(\mathbf{X}, \theta^\star, \gamma^2)$ in the family, it holds that \begin{align} \exists \lambda > 0, \exists \xi^{(r)} \in \mathbb{R}^r, \hspace{10pt} &{\rm ExcessRisk} \left( \hat{\theta} (\lambda, \xi^{(r)}) \right) \leq \frac{\gamma^2}{n} \cdot \left( 1 \textcolor{red}{ + \beta \text{ } \frac{ || \theta^\star ||^2 s_1^2}{\gamma^2}} \right) , \newline \forall \lambda > 0, \forall \xi \in \mathbb{R}, \hspace{10pt} &{\rm ExcessRisk} \left( \hat{\theta} (\lambda, \xi) \right) \geq \textcolor{red}{(1 - \beta)} \left( \frac{0.99}{2^{9}} \right) \frac{r\gamma^2}{n} \text{ } , \text{ and } \newline \forall \lambda > 0, \hspace{10pt} &{\rm ExcessRisk} \left(\hat{\theta} (\lambda) \right) \geq \textcolor{red}{\left( \frac{1 - \beta}{1 - 0.99 \beta} \right)^2} \left( 0.98 \right) \frac{r \gamma^2}{n} \text{ } , \end{align} where $r:={\rm rank}(\mathbf{X})$, $n$ is the number of samples, $\hat{\theta}(\lambda,\xi^{(r)})$ and $\hat{\theta}(\lambda,\xi)$ are the $r$-step and $1$-step SD estimators defined in Eqs. (8) and (4) respectively, and $\hat{\theta}(\lambda)$ is the ridge estimator defined in Eq. (3). --- In this relaxed version, there is an $\Omega(r)$ separation between $r$-step SD and ${1,0}$-step SD in the small $\beta$ regime. 
In particular, if $\beta$ is $O\left( \frac{\gamma^2}{ || \theta^\star ||^2 s_1^2} \right)$ (which resembles the inverse signal-to-noise ratio) and $\beta \ll 1$, then $r$-step SD can significantly outperform $1$-step SD.

The proof follows the same structure as the proof of Theorem 1 in the paper, with the relevant tracking of the inner products $\langle \theta^\star, \mathbf{u}_l \rangle$ for $l \in [r]$.

- Earlier, we had $\langle \theta^\star, \mathbf{u}_1 \rangle^2 = || \theta^\star ||^2$, which changes to: $\exists j\in[r]$ such that $\langle\theta^\star,\mathbf{u}_j\rangle^2\geq(1-\beta)\cdot||\theta^\star||^2$.
- And we had $\langle\theta^\star,\mathbf{u}_l\rangle=0$ for $l\geq2$, which changes to condition (C). Here (C) is $\sum_{l=1,l\neq j}^r\langle\theta^\star,\mathbf{u}_l\rangle^2\leq\beta\cdot||\theta^\star||^2$.

It is worth noting that the bounds in the theorem are tight only in the small $\beta$ regime. As $\beta$ increases above zero, the upper bound and both lower bounds become looser.

We again thank the reviewers for the valuable suggestion regarding the $\beta$-controlled analysis. We hope the above discussion provides some resolution. We will include this generalized version of the result in the revised manuscript.

**Brief note on the attached pdf** We also upload a 1-page pdf containing two figures.

- Figure 1 is in response to a question from reviewer tBMq about why self-distillation does not provide improvement over ridge on the AEP dataset (Table 1 of the manuscript). We show that unlike the other two datasets, the $\theta^\star$ for AEP is not well-aligned with any of the basis directions $\mathbf{u}_j, j \in [d]$.
- Figure 2 is in response to a question from reviewers BMMQ and 2wSn about the gain from $k$-step SD for $k > 2$. We run up to $5$ steps of self-distillation on a synthetic problem and show that the relative performance gain of $k$-step SD continues to increase beyond $k = 2$.

Pdf: /pdf/20c64867f4e0969148aac6baad9cb2ca6aee897e.pdf
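For completeness, the relaxed Assumption 2'.2 is easy to check numerically for a concrete instance. The following minimal sketch (an illustration with made-up dimensions and a synthetic $\theta^\star$, not an instance from the paper) computes the smallest admissible $\beta$ from the singular vectors of $\mathbf{X}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))  # fixed design matrix, full column rank almost surely

# Singular vectors u_j of X living in parameter space (rows of Vt)
_, s, Vt = np.linalg.svd(X, full_matrices=False)

# A ground truth that is nearly (but not perfectly) aligned with u_3
theta_star = Vt[2] + 0.05 * rng.standard_normal(d)

# Smallest beta satisfying Assumption 2'.2:
# exists j with <theta*, u_j>^2 >= (1 - beta) * ||theta*||^2
align = (Vt @ theta_star) ** 2 / np.dot(theta_star, theta_star)
beta_min = 1.0 - align.max()
print(f"most aligned j = {align.argmax()}, smallest admissible beta = {beta_min:.4f}")
```

A small $\beta$ here places the instance in the regime where Theorem 1' yields the $\Omega(r)$ separation.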
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Fine-Grained Dynamic Framework for Bias-Variance Joint Optimization on Data Missing Not at Random
Accept (poster)
Summary: This paper analyzes the relationship between the bias, variance, and generalization bound of the proposed general estimator with data missing not at random, and proposes a quantitative bias-variance joint optimization method to achieve bounded variance.

Strengths:
- S1: The research question is good, and investigating debiasing methods when data is biased can be beneficial in real-world applications.
- S2: The authors provide some theoretical insights and proofs.
- S3: The authors conduct statistical significance tests on the experimental results.

Weaknesses:
- W1: The organization and expression of the article need improvement. The current layout of figures, tables, theorems, and formulas is somewhat crowded, which hinders readability.
- W2: The discussion on the limitations of previous methods is not accurately described, and some expressions lack precision. For example, in lines 99-100, the boundedness of the variances of IPS and DR estimators is not related to the accuracy of the estimated propensities; instead, it is related to whether there are extremely small values among these estimates. As shown in Table 4, if the estimated propensities are accurate and there are no extremely small values, the variances of both IPS and DR are bounded.
- W3: The proof of Theorem 3.1 requires further discussion, and the validity of the final inequality in the proof at line 739 is questionable, as it depends on the specific forms of functions $f$, $g$, and $h$, which significantly influence the final conclusion. For instance, assuming $f_{1}$, $g_{1}$, and $h_{1}$ satisfy $cov(L_{Est}, L_{Reg}) \geq 0$, then setting $h_{2} = -h_{1}$ results in $cov(L_{Est}, L_{Reg}) \leq 0$. Therefore, it remains to be discussed whether this conclusion holds for any arbitrary function. If I have misunderstood, please correct me.
- W4: The authors validate the effectiveness of their method using only two real-world datasets, which lacks sufficient persuasiveness.
For experiments on real-world datasets, it is recommended that the authors further validate their findings using the KuaiRec dataset [1].
- W5: Typos, for example, 'is' in line 11 and 'click, conversion' in line 24.

**References**

[1] KuaiRec: A Fully-observed Dataset and Insights for Evaluating Recommender Systems. CIKM 2022.

Technical Quality: 2 Clarity: 3

Questions for Authors:
- Q1: In the proof of Theorem 3.4, the second equality at line 757 is not evident and requires further elaboration. Specifically, I am uncertain whether the order of summation and squaring operations is commutative as suggested in the proof.
- Q2: The formal definitions of the symbols $E_{B}$ and $E_{V}$ are absent, although they are important in the core Theorem 3.7. Furthermore, their associated properties, such as reversibility and boundedness, are under-discussed.
- Q3: The function $f^\alpha(\hat p_{u,i})$ is manually designed and selected, but the advantages of the two design criteria, Isotonic Propensity and Same Order, have not been discussed in theory or experimentation.

Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Please see the "Weakness" and "Questions". Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response for Weaknesses**

A1. Following the reviewers' comments, we will add more explanation of the figures, the theoretical results and their proofs, and the experiments to avoid any ambiguity and improve readability.

A2. Sorry for the imprecise descriptions. We will modify the descriptions in lines 99-100 to the following. *''..., then IPS and DR estimators are unbiased. For a new dataset, we cannot know in advance the range of the propensities in this dataset. Therefore, a new dataset may introduce extremely small propensities that lead to unbounded variances of IPS and DR, which will disrupt the stability of the estimators, especially for larger datasets. This is unacceptable for real industrial scenarios.''*

A3. Considering (1), for all $(u,i)$, we have $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}\ge0$ and $h(o _{u,i},\hat{p} _{u,i})\ge0$. Let $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}$ be denoted as $r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})$. As given in the proof of Theorem 3.1, we have
$$\begin{align}\text{Cov}(L _\text{Est}, L _\text{Reg})=\frac{1}{\vert\mathcal{D}\vert^2}\sum _{j=1}^{\vert\mathcal{D}\vert}\sum _{k=1}^{\vert\mathcal{D}\vert}\mathbb{E} _O[h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)].\end{align}$$
Since $\mathbb{E} _O[h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)]\ge0$, we obtain $\text{Cov}(L _{Est}, L _{Reg})=\mathbb{E} _O(L _\text{Est}L _\text{Reg})\ge0$. We will add the above formula to the proof of Theorem 3.1.

A4. The baseline approaches and the dynamic estimators are evaluated on the KuaiRec dataset, where the experimental setting is the same as for the other datasets.
The corresponding results are given as follows:

| Methods | AUC | Gain (AUC) | NDCG@50 | Gain (NDCG@50) |
|:-:|:-:|:-:|:-:|:-:|
| naive | 0.7498±0.0010 | - | 0.7356±0.0012 | - |
| IPS | 0.7314±0.0023 | - | 0.7450±0.0015 | - |
| SNIPS | 0.8015±0.0020 | - | 0.8082±0.0007 | - |
| IPS-AT | 0.7733±0.0063 | - | 0.8003±0.0037 | - |
| CVIB | 0.7727±0.0064 | - | 0.7852±0.0065 | - |
| IPS-V2 | 0.7787±0.0016 | - | 0.7905±0.0029 | - |
| D-IPS (Ours) | 0.7947±0.0005 | 8.65% | 0.7876±0.0009 | 5.71% |
| D-SNIPS (Ours) | 0.8026±0.0017 | 0.137% | 0.8084±0.0005 | 0.247% |
| D-IPS-AT (Ours) | 0.7882±0.0042 | 1.93% | 0.8143±0.0023 | 1.75% |
| DR | 0.7701±0.0058 | - | 0.7818±0.0029 | - |
| DR-JL | 0.7808±0.0034 | - | 0.7930±0.0033 | - |
| MRDR-JL | 0.7735±0.0008 | - | 0.8121±0.0013 | - |
| Stable DR | 0.7812±0.0007 | - | 0.7928±0.0040 | - |
| Stable MRDR | 0.7844±0.0013 | - | 0.7752±0.0021 | - |
| TDR-CL | 0.7858±0.0016 | - | 0.7776±0.0015 | - |
| TMRDR-CL | 0.7801±0.0017 | - | 0.8047±0.0013 | - |
| DR-V2 | 0.7839±0.0027 | - | 0.7923±0.0056 | - |
| D-DR | 0.7956±0.0064 | 3.31% | 0.7835±0.0040 | 2.17% |
| D-DR-JL | 0.7742±0.0011 | -0.845% | 0.7897±0.0017 | -0.416% |
| D-MRDR-JL | 0.7918±0.0011 | 2.37% | 0.8105±0.0010 | -0.197% |

In the final version, we will merge this table into Table 2.

A5. Thanks for your careful reading. We will correct the language issues and further improve the presentation in the final version.

**Response for Questions:**

A1. Sorry for the incorrect formula in line 757, and thanks very much for your careful reading. $'='$ should be $'\ge'$ in this formula. Denote $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}$ as $r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})$.
Then, the term in line 757 satisfies
$$ \begin{align} \frac{1}{\vert\mathcal{D}\vert^2}&\mathbb{E} _O\Bigg[\bigg(\sum _{(u,i)\in\mathcal{D}}r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})\bigg)^2\Bigg]\\\\ =&\frac{1}{\vert\mathcal{D}\vert^2}\sum _{j=1}^{\vert\mathcal{D}\vert}\sum _{k=1}^{\vert\mathcal{D}\vert}\mathbb{E} _O\Big[r(o _j,\hat{p} _j,e _j,\hat{e} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)\Big]\\\\ \ge&\frac{1}{\vert\mathcal{D}\vert^2}\sum _{j=1}^{\vert\mathcal{D}\vert}\mathbb{E} _O\big[r^2(o _j,\hat{p} _j,e _j,\hat{e} _j)\big]\quad\quad\quad\text{because } r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})\ge0 \end{align}$$
Therefore, $\mathbb{V}_O[L]$ satisfies
$$ \begin{align} \mathbb{V} _O[L]\ge&\frac{1}{\vert\mathcal{D}\vert^2}\sum _{(u,i)\in\mathcal{D}}\frac{1-p _{u,i}}{p _{u,i}}\big[e _{u,i}-g(0,\hat{p} _{u,i})\hat{e} _{u,i}\big]^2. \end{align} $$
Therefore, for any $e _{u,i}-g(0,\hat{p} _{u,i})\hat{e} _{u,i}\ne0$, $\lim _{p _{u,i}\to0}\mathbb{V} _O[L]=\infty$. We will use the above proof to replace the formulas in lines 757 and 758.

A2. In the optimization problem (4), $E _B(\cdot)$ and $E _V(\cdot)$ are the measure metrics of bias and variance, respectively. As mentioned in **Bias-Variance Quantitative Joint Optimization**, (4) can be simplified as $w _1h _B^{Est}(\alpha _{u,i})+w _2h _V^{Est}(\alpha _{u,i})$. We can obtain the analytical solution of the optimal parameter $\alpha _{u,i}^{opt}$ given in Theorem 3.5.

A3. Considering the principle **Isotonic Propensity**, the function $f(\hat{p} _{u,i})$ is a probability mapping. Therefore, $f(\hat{p} _{u,i})$ should be a monotonically increasing function with $f(0)=0$ and $f(1)=1$. $f(\hat{p} _{u,i})>\hat{p} _{u,i}$ guarantees $h _B^{Est}\ge0$ and $h _V^{Est}\ge0$. With this operation, (3) can be simplified as $\min _{\alpha _{u,i}}\{w _1h _B(\alpha _{u,i})+w _2h _V(\alpha _{u,i})\}$.
Then, we can obtain the analytical solution of $\alpha _{u,i}^{opt}$ without increasing the computational complexity. Next, $\lim\limits _{\hat{p} _{u,i}\to0}\frac{\hat{p} _{u,i}}{f(\hat{p} _{u,i})}=C,\forall\alpha _{u,i}\in[0,1]$ ensures
$$ \begin{align} \lim _{\hat{p} _{u,i}\to0}h^{\text{Est}} _B(\hat{p} _{u,i},p _{u,i},\alpha _{u,i})=1-Cf^{1-\alpha _{u,i}}(0) \end{align} $$
and
$$ \begin{align} \lim _{\hat{p} _{u,i}\to0,\alpha _{u,i}\leq0.5}h^{\text{Est}} _V(\hat{p} _{u,i},p _{u,i},\alpha _{u,i})=\lim _{\hat{p} _{u,i}\to0,\alpha _{u,i}\leq0.5}C(1-p _{u,i})f^{1-2\alpha _{u,i}}(\hat{p} _{u,i})=0 \end{align} $$
In the final version, we will add the above discussion to explain these two function design principles.

---

Rebuttal Comment 1.1: Comment: Thank you for the response. I will maintain my positive score and increase confidence to 3.
Summary: This paper addresses the challenge of handling data missing-not-at-random, which is prevalent in applications like recommendation systems and display advertising. The authors highlight the limitations of current regularization techniques and unbiased estimators, which often result in high variance and unbounded generalization bounds. They propose a novel systematic fine-grained dynamic learning framework that jointly optimizes bias and variance. This framework adaptively selects the most appropriate estimator for each user-item pair according to a predefined objective function, ensuring reduced and bounded variances and generalization bounds with theoretical guarantees. Extensive experiments validate the theoretical findings and demonstrate the effectiveness of this dynamic learning approach in improving model stability and performance. Strengths: Originality The paper introduces a novel approach to handling data missing-not-at-random (MNAR) with a fine-grained dynamic framework for bias-variance joint optimization. This method moves beyond traditional bias or variance reduction by addressing both simultaneously through dynamic estimator selection tailored to each user-item pair. This innovative dual optimization strategy significantly enhances the robustness and accuracy of predictive models in various applications. Quality The theoretical contributions are robust and well-grounded, with detailed mathematical derivations and proofs supporting the claims. The authors clearly identify the limitations of existing regularization techniques and unbiased estimators, providing a solid foundation for their proposed solution. Extensive experiments on real-world datasets validate the framework's effectiveness. Multiple performance metrics (AUC, NDCG) ensure a thorough assessment, demonstrating practical utility and reliability. Clarity The paper is well-structured and logically organized, making it relatively easy to follow despite the complexity. 
Each section builds on the previous one, creating a cohesive narrative from problem identification to theoretical underpinnings, proposed solution, and experimental validation. Clear headings, subheadings, mathematical formulations, and visual aids like tables and figures help in understanding key concepts and results. While some sections could be simplified, the overall presentation effectively communicates the core ideas. Significance The paper advances the state of the art in dealing with MNAR data, a common issue in many real-world applications like recommendation systems and online advertising. The framework dynamically optimizes bias and variance, addressing a critical gap in the literature and leading to more stable and accurate predictive models. Improved handling of MNAR data enhances the performance and reliability of machine learning systems across diverse domains. Theoretical insights into estimator design trade-offs can inspire future research and development. Weaknesses: One notable weakness is the experimental design. The experiments focus on AUC and NDCG metrics, which may not capture all aspects of model performance. Including metrics like precision, recall, or F1-score would offer a more comprehensive evaluation. The paper also lacks a detailed comparison with state-of-the-art methods. While some comparisons are made, they are often superficial. The authors mention that their framework outperforms existing methods in bias and variance reduction but do not provide a detailed analysis of why this is the case. A thorough examination of the differences between their approach and other leading methods, focusing on specific scenarios, would strengthen the paper. A more rigorous ablation study to understand the contribution of each component of the proposed framework would also be beneficial. The discussion on the practical implementation of the proposed framework is lacking. 
The authors provide little guidance on integrating their framework into existing systems or scaling it to handle large datasets. Practical considerations like computational complexity, memory requirements, and real-time applicability are not adequately addressed. More detailed information on these aspects, along with potential solutions or workarounds, would enhance the paper’s utility for practitioners. Lastly, the paper could improve in presentation and readability. The writing is dense and technical, which may pose a barrier to understanding for readers unfamiliar with the subject matter. Breaking down complex ideas into simpler parts and including more visual aids like diagrams and flowcharts would make the paper more engaging and easier to follow. Clearer explanations and definitions of technical jargon would also help. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the experimental section, would other metrics such as precision, recall, or F1-score provide additional insights into the performance of your framework? 2. The paper mentions that the proposed framework dynamically selects the most appropriate estimator for each user-item pair. Could you provide more details on how these selections are made in practice, and whether there are computational complexities associated with this dynamic selection? 3. There is limited discussion on the practical implementation and scalability of your proposed framework. How does your approach handle large-scale datasets in terms of computational complexity and memory requirements? 4. The ablation study explores the contributions of different components of your framework. Could you provide more insights into the most critical components and how they individually contribute to the overall performance? 5. Your paper discusses the theoretical bounds for variance and generalization. 
Could you provide more practical insights or guidelines on how these bounds can be utilized or interpreted in real-world applications? 6. There are several functions for the dynamic estimators listed in Table 1. How were these functions selected, and could you discuss the potential impact of choosing different functions on the performance of your framework? 7. How sensitive is your method to the choice of hyperparameters, and what guidelines would you provide for selecting these parameters in practical applications? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have made a commendable effort in addressing their work's limitations by thoroughly discussing the theoretical challenges and constraints of existing regularization techniques. However, additional detail would enhance the paper’s comprehensiveness, especially regarding the practical limitations of the proposed framework in terms of computational complexity and scalability. Understanding how the framework performs with large-scale datasets and the resources required for its implementation would benefit practitioners. Regarding the broader societal impact, the authors briefly mention potential applications but do not deeply explore societal implications. It would be valuable to discuss how their framework might affect users' privacy, data security, and potential biases in recommendation systems. Addressing potential negative consequences and proposing mitigation strategies would show a thorough consideration of ethical concerns and enhance the paper’s robustness. To improve, the authors could include a dedicated section on the societal impacts of their work, examining both positive and negative aspects. This section should address how the framework could be used responsibly, ensuring it benefits society while minimizing potential harms. 
Additionally, incorporating feedback from stakeholders or conducting a preliminary ethical review could provide further insights and strengthen this discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: A1. Thanks for your valuable comments. In the final version, we will add more metrics. Besides, the baseline approaches and the dynamic estimators are evaluated on the KuaiRec dataset to verify the performance of the developed dynamic fine-grained framework. The experimental results will be merged into Table 1.

A2. Based on $\alpha _{u,i}\in[0,1]$, when $w _2/w _1$ satisfies $\frac{w _2}{w _1}\leq\frac{f(p _{u,i})}{2(1-p _{u,i})}$, $\alpha _{u,i}^{opt}=1$; when $\frac{f(p _{u,i})}{2(1-p _{u,i})}\leq\frac{w _2}{w _1}\leq\frac{1}{2(1-p _{u,i})}$, then $\alpha _{u,i}^{opt}=\frac{\ln\Big(\frac{2w _2}{w _1}(1-p _{u,i})\Big)}{\ln(f(p _{u,i}))}$; when $\frac{w _2}{w _1}\ge\frac{1}{2(1-p _{u,i})}$, then $\alpha _{u,i}^{opt}=0$. Therefore, the dynamic estimator can be rewritten as
$$\begin{align}L _{D-IPS}=&\frac{1}{\vert\mathcal{D}\vert}\sum _{(u,i)\in\mathcal{D}}\mathbb{1} _{\bigg[\frac{w _2}{w _1}\leq\frac{f(p _{u,i})}{2(1-p _{u,i})}\bigg]}\frac{o _{u,i}}{f(\hat{p} _{u,i})}e _{u,i}+\mathbb{1} _{\bigg[\frac{f(p _{u,i})}{2(1-p _{u,i})}\leq\frac{w _2}{w _1}\leq\frac{1}{2(1-p _{u,i})}\bigg]}\frac{o _{u,i}}{f^{\alpha^{opt} _{u,i}}(\hat{p} _{u,i})}e _{u,i} +\mathbb{1} _{\bigg[\frac{w _2}{w _1}\ge\frac{1}{2(1-p _{u,i})}\bigg]}o _{u,i}e _{u,i},\\\\ L _{D-DR}=&\frac{1}{\vert\mathcal{D}\vert}\sum _{(u,i)\in\mathcal{D}}\mathbb{1} _{\bigg[\frac{w _2}{w _1}\leq\frac{f(p _{u,i})}{2(1-p _{u,i})}\bigg]}\bigg(\hat{e} _{u,i}+\frac{o _{u,i}}{f(\hat{p} _{u,i})}\delta _{u,i}\bigg) +\mathbb{1} _{\bigg[\frac{f(p _{u,i})}{2(1-p _{u,i})}\leq\frac{w _2}{w _1}\leq\frac{1}{2(1-p _{u,i})}\bigg]}\bigg(\hat{e} _{u,i}+\frac{o _{u,i}}{f^{\alpha^{opt} _{u,i}}(\hat{p} _{u,i})}\delta _{u,i}\bigg) +\mathbb{1} _{\bigg[\frac{w _2}{w _1}\ge\frac{1}{2(1-p _{u,i})}\bigg]}\bigg(\hat{e} _{u,i}+o _{u,i}\delta _{u,i}\bigg). \end{align}$$
Therefore, the proposed framework dynamically selects the most appropriate estimator for each user-item pair.
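For concreteness, the three-branch rule above can be vectorized in a few lines. This is an illustrative sketch only: the identity mapping $f(p)=p$, the weights, and the propensity values are made up, not taken from the paper.

```python
import numpy as np

def alpha_opt(p, w1=1.0, w2=0.5, f=lambda x: x):
    # Piecewise closed-form solution for alpha_{u,i} given the weight ratio w2/w1.
    # f is the propensity mapping; the identity is used here purely for illustration.
    ratio = w2 / w1
    low = f(p) / (2 * (1 - p))    # ratio <= low  ->  alpha = 1 (fully reweighted)
    high = 1 / (2 * (1 - p))      # ratio >= high ->  alpha = 0 (no reweighting)
    mid = np.log(2 * ratio * (1 - p)) / np.log(f(p))
    return np.where(ratio <= low, 1.0, np.where(ratio >= high, 0.0, mid))

p = np.array([0.05, 0.3, 0.9])
print(alpha_opt(p))  # one alpha per user-item pair, each in [0, 1]
```

Note that only the ratio $w _2/w _1$ enters the branch conditions, consistent with the observation in A7 that $\alpha _{u,i}^{opt}$ depends only on $w _2/w _1$.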
$f(\hat{p} _{u,i})>\hat{p} _{u,i}$ guarantees $h _B^{Est}\ge0$ and $h _V^{Est}\ge0$. With this operation, the joint optimization problem (3) can be simplified as $\min _{\alpha _{u,i}}\{w _1h _B(\alpha _{u,i})+w _2h _V(\alpha _{u,i})\}$. Then, we can obtain the analytical solution of $\alpha _{u,i}^{opt}$ without increasing the computational complexity.

A3. In (5), we have given the analytical solution of the optimal parameter. If an engineer has an IPS-based or DR-based estimator for large-scale datasets, then she/he can modify it into a D-IPS-based or D-DR-based estimator with the following core code.

```
import numpy as np

# `propensity` is the array of estimated propensities \hat{p}_{u,i}
w1 = 1
w2 = 0.5
star = 2 * w2 * (1 - propensity) / w1
alpha = np.log(star) / np.log(propensity)
alpha = np.clip(alpha, 0.0, 1.0)  # keep alpha_{u,i} in [0, 1]
propensity = np.power(propensity, alpha)  # reweight with f^{alpha}(p) = p^{alpha}
```

A4. In the proposed dynamic framework, the weight ratios and the function forms jointly determine the performance of the estimators. As mentioned in A2, when $\frac{w _2}{w _1}=0$ and $f(\hat{p} _{u,i})=\hat{p} _{u,i}$, the optimal parameter $\alpha _{u,i}^{opt}$ is set as 1, and the D-IPS and D-DR methods are equivalent to the IPS and DR methods, which are unbiased approaches. According to Fig. 2, we can observe the performance of the dynamic estimators under different weight ratios, including the case of $\frac{w _2}{w _1}=0$. On the other hand, under the identical weight ratio $\frac{w _2}{w _1}=0.1$, the effects of different functions on the performance of the dynamic estimators are shown in Table 3. From $Gain _{AUC}$ and $Gain _{N}$ in Table 3, we can observe the individual contribution of different function forms.

A5. The variance reflects the performance stability of the estimator over the whole propensity space.
For a new dataset, we cannot know in advance the range of the propensities in this dataset. An unknown biased dataset may disrupt the stability of the prediction model, which brings significant risks to practical applications. Therefore, it is necessary to consider the boundedness of the variance and generalization bound to ensure the stability and performance of prediction models. In real-world applications, the theoretical bounds for variance and generalization indirectly reflect the stability and generalization performance of estimators, respectively.

A6. We find that the function $f(\hat{p} _{u,i})$ is not unique. According to the function design principles, we can obtain a family of functions. According to Fig. (c), for a fixed propensity, different functions correspond to different minimum objective values. For different datasets with different propensity distributions, we can determine the function form according to the minimum objective values of the functions under the propensity distribution. Actually, there is further optimization potential in the selection of the function form. We can incorporate the selection of function forms into the bias-variance joint optimization problem (4). The corresponding optimization problem can be rewritten as
$$\text{Objective}^{opt}=\min _{\alpha _{u,i}\in[0,1], f\in\mathcal{F}}\Big[w _1E _B(h^{Est} _B(f^{\alpha _{u,i}}(\hat{p} _{u,i})))+w _2E _V(h^{Est} _V(f^{\alpha _{u,i}}(\hat{p} _{u,i})))\Big],\quad\text{s.t. } 0\leq\alpha _{u,i}\leq1,$$
where $\mathcal{F}$ is the candidate function set. This problem is left for future work.

A7. $w _1$ and $w _2$ in Eq. (3) are hyper-parameters. $\alpha _{u,i}^{opt}$ only depends on $w _2/w _1$. Varying $w _2/w _1$ traces out the Pareto frontier of the estimation performance. As of now, optimizing the weight ratio is still an open question because the properties of this Pareto frontier are unknown. In practice, for a new dataset, the tuning range of $w _2/w _1$ is usually set to be between 0 and 1.
Several values of $w _2/w _1$ are then tested, and the one with the best estimation performance is selected as the final parameter.

---

Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed rebuttal, which has addressed most of my concerns. I will maintain my positive rating.
Summary: In recommender systems, many ratings are missing not at random, which introduces additional bias when models are trained using only observed data. This paper first gives a general form of the estimator with regularization, then reveals limitations of previous regularization techniques and the relationship between the unbiasedness of the generalized estimator and its generalization bound. In addition, this paper finds an interesting phenomenon: the regularization term $L_{Reg}$ cannot guarantee a bounded variance and generalization bound for previous estimators. This paper then proposes a bias-variance quantitative joint optimization approach with theoretical guarantees that the proposed estimator has a bounded variance and generalization bound. Extensive experiments on two real-world datasets validate the effectiveness of the proposed methods.

Strengths:
- S1: Debiasing is a very important topic in the recommendation field. This paper is well-written with clear organization.
- S2: This paper summarizes the general form of previous estimators, and then proposes dynamic estimators with sound theoretical results.
- S3: This paper provides a novel bias-variance quantitative joint optimization algorithm that ensures the proposed estimator has a bounded variance and generalization bound.
- S4: The experiments are sufficient and comprehensive. The results validate the effectiveness of the proposed methods. In addition, the experiments are conducted on public datasets, which makes the work easier to follow.

Weaknesses:
- W1: The detailed proof of why Cov($L_{Est}$, $L_{Reg}$) $\geq 0$ when $E_O[L_{Reg}] = 0$ is needed. In addition, the proof for Corollary 3.2 is needed.
- W2: How to determine the explicit form of the function $f^{\alpha}(\hat p_{u,i})$ in practice?
- W3: The authors should provide the detailed learning algorithm step-by-step to explain how the propensity model, prediction model, and imputation model are learned.
- W4: Missing some recent references.
There are many works focusing on debiased recommendation. For example, [1] proposes a kernel balancing method for learning the propensity model, [2] proposes a doubly calibrated estimator that involves the calibration of both the imputation and propensity models, and [3] proposes a multiple robust estimator combining multiple propensity and imputation models.
- W5: Some typos. For example, it should be Appendix C in line 144 instead of Appendix D, and some unexpected indents after formulas, such as in line 203, should be removed.

[1] Debiased Collaborative Filtering with Kernel-Based Causal Balancing. In ICLR, 2024.
[2] Doubly Calibrated Estimator for Recommendation on Data Missing Not At Random. In WWW, 2024.
[3] Multiple Robust Learning for Recommendation. In AAAI, 2023.

Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weaknesses part above for the questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors adequately discussed and addressed the limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: A1. Considering (1), for all $(u,i)$ pairs, we have $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}\ge0$ and $h(o _{u,i},\hat{p} _{u,i})\ge0$. For ease of notation, $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}$ is denoted as $r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})$. In the proof of Theorem 3.1, $\text{Cov}(L _\text{Est}, L _\text{Reg})$ satisfies
$$\begin{align}\text{Cov}(L _\text{Est}, L _\text{Reg})=&\frac{1}{\vert\mathcal{D}\vert^2}\mathbb{E} _O\Bigg(\sum _{(u,i)\in\mathcal{D}}\bigg[h(o _{u,i},\hat{p} _{u,i})\sum _{(u',i')\in\mathcal{D}}r(o _{u',i'},\hat{p} _{u',i'},e _{u',i'},\hat{e} _{u',i'})\bigg]\Bigg)\nonumber\\\\=&\frac{1}{\vert\mathcal{D}\vert^2}\mathbb{E} _O\Bigg(\sum _{j=1}^{\vert\mathcal{D}\vert}\sum _{k=1}^{\vert\mathcal{D}\vert}h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)\Bigg)\nonumber\\\\=&\frac{1}{\vert\mathcal{D}\vert^2}\sum _{j=1}^{\vert\mathcal{D}\vert}\sum _{k=1}^{\vert\mathcal{D}\vert}\mathbb{E} _O[h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)].\end{align}$$
Since $\mathbb{E} _O[h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)]\ge0$, we obtain $\text{Cov}(L _{Est}, L _{Reg})\ge0$. We will revise the sentence in line 124 as follows, and add the above formula (2) to the proof of Theorem 3.1. *"... respectively. For all $(u,i)$ pairs, they satisfy $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}\ge0$ and $h(o _{u,i},\hat{p} _{u,i})\ge0$. $\lambda>0$ is a ..."*

Since Corollary 3.2 is the contrapositive of Theorem 3.1, it follows directly from the proof of Theorem 3.1 given above. We further provide a proof of Corollary 3.2 as follows.

**Proof of Corollary 3.2** We prove it by contradiction. Assume that when $\mathbb{V} _O[L _\text{Est+Reg}]\leq\mathbb{V} _O[L _\text{Est}]$, $L _\text{Est+Reg}$ is unbiased.
According to $\mathbb{V} _O[L _\text{Est+Reg}]\leq\mathbb{V} _O[L _\text{Est}]$, we have
$$\begin{align} \mathbb{V} _O[L _\text{Est+Reg}]=&\mathbb{V} _O[L _\text{Est}]+2\lambda\text{Cov}(L _\text{Est}, L _\text{Reg})+\lambda^2\mathbb{V} _O[L _\text{Reg}]\nonumber\\\\ \leq&\mathbb{V} _O[L _\text{Est}], \end{align}$$
which implies that $2\lambda\text{Cov}(L _\text{Est}, L _\text{Reg})+\lambda^2\mathbb{V} _O[L _\text{Reg}]\leq0$. Therefore, the parameter $\lambda$ needs to satisfy $0\leq\lambda\leq-\frac{2\text{Cov}(L _\text{Est}, L _\text{Reg})}{\mathbb{V} _O[L _\text{Reg}]}$, which implies $\text{Cov}(L _\text{Est}, L _\text{Reg})\leq0$. As shown in A1, when $L _\text{Est+Reg}$ is unbiased, we have $\text{Cov}(L _{Est}, L _{Reg})\ge0$, which contradicts the condition $\text{Cov}(L _\text{Est}, L _\text{Reg})\leq0$. Therefore, Corollary 3.2 holds.

A2. We can obtain a family of functions. According to $w _1h _B^{Est}(\alpha,p _{u,i})+w _2h _V^{Est}(\alpha,p _{u,i})$ and Fig. (c), we can observe that different $f^{\alpha _{u,i}}(\hat{p} _{u,i})$ lead to different objective function surfaces. For different datasets with different propensity distributions, we can determine the function form according to the minimum objective values of the functions. Actually, there is further optimization potential in the selection of the function form. We can incorporate the selection of function forms into the bias-variance joint optimization problem (4), which is formulated as
$$\text{Objective}^{opt}=\min _{\alpha _{u,i}\in[0,1], f\in\mathcal{F}}\Big[w _1E _B(h^{Est} _B(f^{\alpha _{u,i}}(\hat{p} _{u,i})))+w _2E _V(h^{Est} _V(f^{\alpha _{u,i}}(\hat{p} _{u,i})))\Big],\quad\text{s.t. } 0\leq\alpha _{u,i}\leq1.$$
This problem is left for future work.

A3. This work focuses on revealing the general rules of regularization techniques and unbiased estimators, and develops a fine-grained dynamic framework to jointly optimize bias and variance.
Therefore, almost all learning algorithms can be transformed into the fine-grained dynamic framework, such as joint learning of DR, double learning of MRDR, cycle learning of SDR, collaborative learning of TDR. In the final version, we will add the following core code of fine-grained dynamic estimators to Appendix E to explain how to transform a propensity-based debiased approach into the fine-grained dynamic estimator. **Core Code of Fine-Grained Dynamic Estimators**
```python
import numpy as np

# `propensity` is a 1-D array of estimated propensity scores in (0, 1].
w1 = 1
w2 = 0.5
rate = w2 / w1
star = 2 * rate * (1 - propensity)  # corresponds to function 1 given in Table 1
alpha = np.log(star) / np.log(propensity)
# Alternative designed function forms:
# alpha = np.log(star) / np.log(np.sin(propensity) / np.sin(1))
# alpha = np.log(star) / np.log(np.log(propensity + 1) / np.log(2))
# alpha = np.log(star) / np.log(np.tanh(propensity) / np.tanh(1))
lower_bound = np.zeros(np.size(propensity))
upper_bound = np.ones(np.size(propensity))
alpha = np.where(alpha > upper_bound, upper_bound, alpha)  # clip alpha to [0, 1]
alpha = np.where(alpha < lower_bound, lower_bound, alpha)
propensity = np.power(propensity, alpha)  # fine-grained dynamic propensity p^alpha
```
A4. In the final version, we will add the following descriptions in line 313 to improve Related Work. *“...forth. To improve the robustness of estimators, a multiple robust estimator is developed in [3] by taking advantage of multiple candidate imputation and propensity models, which is unbiased when any of the imputation or propensity models, or a linear combination of these models, is accurate. From a novel function balancing perspective, Li et al. propose to approximate the balancing functions in reproducing kernel Hilbert space and design adaptive kernel balancing IPS/DR learning algorithms [1]. Moreover, aimed at the limitations of miscalibrated imputation and propensity models, Kweon and Yu [2] propose a doubly calibrated estimator and a tri-level joint learning framework to simultaneously optimize calibration experts alongside prediction and imputation models. For the...”* A5. 
Thanks for your careful reading. We will correct typos, language issues, and unclear descriptions, and further improve the presentation in the final version. --- Rebuttal Comment 1.1: Comment: I read the authors' rebuttal very carefully. Well done for the authors! I thank the authors for their great efforts and suggest that these revisions be incorporated in their final version. I'm happy to raise my score to 7.
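The covariance and variance-decomposition quantities debated in this rebuttal thread can be illustrated with a small Monte Carlo sketch. Everything below is an illustrative assumption rather than the paper's exact setup: a plain IPS estimator plays the role of $L_\text{Est}$, and a made-up nonnegative exposure-dependent term plays the role of $L_\text{Reg}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_draws, lam = 50, 100_000, 0.5
p = rng.uniform(0.2, 0.9, n_pairs)   # true propensities for the (u, i) pairs
e = rng.uniform(0.0, 1.0, n_pairs)   # fixed nonnegative per-pair errors e_{u,i}

# Draw many exposure matrices O with o_{u,i} ~ Bernoulli(p_{u,i})
O = rng.random((n_draws, n_pairs)) < p
L_est = (O * e / p).mean(axis=1)        # unbiased IPS estimator, one value per draw
L_reg = (O * (1 / p - 1)).mean(axis=1)  # an illustrative nonnegative regularizer term
L_both = L_est + lam * L_reg

# Sample covariance of the two losses (positive here, matching the claim Cov >= 0)
cov = np.cov(L_est, L_reg)[0, 1]
# Decomposition V[Est + lam*Reg] = V[Est] + lam^2 V[Reg] + 2 lam Cov
lhs = L_both.var(ddof=1)
rhs = L_est.var(ddof=1) + lam**2 * L_reg.var(ddof=1) + 2 * lam * cov
```

With these nonnegative terms the sampled covariance comes out positive, and the variance decomposition used throughout the proofs holds exactly up to floating-point error.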
Summary: This paper first theoretically reveals the limitations of previous regularization techniques, such as unbounded variance and generalization bound. To address this problem, this paper defines the general form of the estimator with regularization. Then this paper develops a comprehensive dynamic learning framework, which can lead to reduced and bounded generalization bounds and variances. Experiments on two real-world datasets verify the theoretical results and the performance of the proposed method. Strengths: $\bullet~$ The problem studied is important and relevant. $\bullet~$ The idea is interesting and novel. $\bullet~$ The theoretical results are sound. $\bullet~$ The evaluations are solid and convincing. Weaknesses: $\bullet~$ The analysis of Figure 1 is missing. $\bullet~$ There is no $\alpha_{u, i}$ in $\lim$ in line 189. In addition, is the constant $C$ related to $\alpha_{u, i}$? $\bullet~$ Are the $w_1$ and $w_2$ in equation 3 hyper-parameters? How can we determine such hyper-parameters in practice and what is the tuning range for the experiments? $\bullet~$ What is the unbiased condition for the proposed methods? Meanwhile, what is the bias when the optimal $\alpha_{u, i}$ is used? In addition, experiments according to the data sparsity should be conducted. If the observed data is not sparse, there may be no propensity score that tends to 0. $\bullet~$ Some typos. For example, it should be $\mathbb E_O[L^2_{Est}] + \lambda^2 \mathbb E_O[L^2_{Reg}] + 2 \lambda \mathbb E_O[L_{Est} L_{Reg}]$ in equation 10. Line 142 "Theorems" should be "Theorem". Technical Quality: 2 Clarity: 3 Questions for Authors: I have some questions about the proofs: \ First, for the proof of Theorem 3.1, why Cov($L_{Est}$, $L_{Reg}$) $\geq 0$? \ Second, the description of Theorem 3.1 is vague. What is the meaning of "greater than the one of the original estimator"? 
In addition, in the current version, the contrapositive of Theorem 3.1 is "less than any of the original estimators", instead of "less than one of the original estimator" in Corollary 3.2. \ Third, for the proof of Theorem 3.3, I understand that $\mathbb E_O[L^2_{Est}]$ tends to infinity, but why does $\mathbb E_O[L^2_{Est}] + \lambda^2 \mathbb E_O[L^2_{Reg}] + 2 \lambda \mathbb E_O[L_{Est} L_{Reg}]$ tend to infinity? In the extreme case, let $L_{Reg} = -L_{Est}$ and $\lambda = 1$, then $\mathbb V_O[L_{Est+Reg}] = 0$. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response for Weaknesses:** - In the final version, we will add the following analysis of Figure 1 to lines 212 and 220, respectively. *''**Line 212:**.... The curves of the objective functions under different designed functions $f(\cdot)$ are given in Fig. 1(c). It can be observed that for a fixed propensity, there exists an $\alpha$ such that the objective function attains the minimum value. Besides, ...''* *''**Line 220:**.... Under different designed functions $f(\cdot)$, the schematic diagram of optimal objective values corresponding to the optimal parameter $\alpha_{u,i}^{opt}$ is shown in Figure 1(d). Next, ...''* - The constant $C$ is not related to $\alpha_{u,i}$. In line 189, what we want to convey is that for all $\alpha_{u,i}\in[0,1]$, the constant $C$ is the same. - Yes, $w_1$ and $w_2$ in Eq. (3) are hyper-parameters. As shown in (5), $\alpha_{u,i}^{opt}$ depends on the weight ratio $w_2/w_1$. The weight ratio space will lead to the Pareto frontier of prediction model performance. As discussed in Conclusions, various properties of the Pareto frontier and optimization methods for weight ratios are still an open question, and it is one of our future works. In practice, the tuning range of $w_2/w_1$ is usually set to be between 0 and 1. - In Theorem 3.5, when $\frac{w_2}{w_1}=0$ and $f(\hat p_{u,i})=\hat p_{u,i}$, the optimal parameter $\alpha_{u,i}^{opt}$ is set as 1, and D-IPS and D-DR are equivalent to IPS and DR, which are unbiased. Therefore, the unbiased conditions are $\frac{w_2}{w_1}=0$ and $f(\hat p_{u,i})=\hat p_{u,i}$ when propensities or imputation errors are accurate. When $\alpha$ is optimal, according to Lemma D.1, the bias of the proposed estimator can be obtained. 
When $\frac{w_2}{w_1}\leq\frac{f(p_{u,i})}{2(1-p_{u,i})}$, then $\alpha_{u,i}^{opt}=1$; when $\frac{f(p_{u,i})}{2(1-p_{u,i})}\leq\frac{w_2}{w_1}\leq\frac{1}{2(1-p_{u,i})}$, then $\alpha_{u,i}^{opt}=\frac{\ln\Big(\frac{2w_2}{w_1}(1-p_{u,i})\Big)}{\ln(f(p_{u,i}))}$; when $\frac{w_2}{w_1}\ge\frac{1}{2(1-p_{u,i})}$, then $\alpha_{u,i}^{opt}=0$. Therefore, the dynamic estimators are applicable to different propensity distributions by adjusting the weight ratio $w _2/w _1$. Different weight ratios will lead to different distributions of instances on different estimators. Therefore, even if there is no propensity score that tends to 0, the developed fine-grained dynamic framework can still achieve quantitative optimization of bias and variance. - Equation (10) should be $$\begin{align} \mathbb{V} _O[L _\text{Est+Reg}] =\mathbb{E} _O[L^2 _\text{Est}+\lambda^2{L}^2 _\text{Reg}+2\lambda{L} _\text{Est}L _\text{Reg}]-\mathbb{E}^2 _O[L _\text{Est}+\lambda{L} _\text{Reg}]. \end{align}$$ **Response for Questions:** A1. Considering (1), for all $(u,i)$ pairs, we have $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}\ge0$ and $h(o _{u,i},\hat{p} _{u,i})\ge0$. To facilitate representation, $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}$ is denoted as $r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})$. 
As given in the proof of Theorem 3.1, $\text{Cov}(L _\text{Est}, L _\text{Reg})$ satisfies $$\begin{align}\text{Cov}(L _\text{Est}, L _\text{Reg})=&\frac{1}{\vert\mathcal{D}\vert^2}\mathbb{E} _O\Bigg(\bigg[\sum _{(u,i)\in\mathcal{D}}r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})\bigg]\bigg[\sum _{(u,i)\in\mathcal{D}}h(o _{u,i},\hat{p} _{u,i})\bigg]\Bigg)\nonumber\\\\=&\frac{1}{\vert\mathcal{D}\vert^2}\mathbb{E} _O\Bigg(\sum _{(u,i)\in\mathcal{D}}\bigg[h(o _{u,i},\hat{p} _{u,i})\sum _{(u,i)\in\mathcal{D}}r(o _{u,i},\hat{p} _{u,i},e _{u,i},\hat{e} _{u,i})\bigg]\Bigg)\nonumber\\\\=&\frac{1}{\vert\mathcal{D}\vert^2}\mathbb{E} _O\Bigg(\sum _{j=1}^{\vert{D}\vert}\sum _{k=1}^{\vert{D}\vert}h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)\Bigg)\nonumber\\\\=&\frac{1}{\vert\mathcal{D}\vert^2}\sum _{j=1}^{\vert{D}\vert}\sum _{k=1}^{\vert{D}\vert}\mathbb{E} _O[h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)].\end{align}$$ Since $\mathbb{E} _O[h(o _j,\hat{p} _j)r(o _k,\hat{p} _k,e _k,\hat{e} _k)]\ge0$, we obtain $\text{Cov}(L _{Est}, L _{Reg})=\mathbb{E} _O(L _\text{Est}L _\text{Reg})\ge0$. We will revise this sentence in line 124 as follows, and add the above formula (1) to the proof of Theorem 3.1. *... respectively. For all $(u,i)$ pairs, they satisfy $f(o _{u,i},\hat{p} _{u,i})e _{u,i}+g(o _{u,i},\hat{p} _{u,i})\hat{e} _{u,i}\ge0$ and $h(o _{u,i},\hat{p} _{u,i})\ge0$. $\lambda>0$ is a ...* A2. 'the one' in Theorem 3.1 and Corollary 3.2 is a pronoun that refers to the variance. What we want to convey in Theorem 3.1 and Corollary 3.2 is that *"**Theorem 3.1** Let $L _\text{Est+Reg}$ be defined in (1) and the estimator $L _\text{Est}$ be unbiased. 
If $L _\text{Est+Reg}$ is unbiased, then the variance of $L _\text{Est+Reg}$ is greater than the variance of the original estimator $L _\text{Est}$."* *"**Corollary 3.2** If the variance of $L _\text{Est+Reg}$ is less than the variance of the original estimator $L _\text{Est}$, then $L _\text{Est+Reg}$ is not unbiased."* In the final version, we will modify Theorem 3.1 and Corollary 3.2 to the descriptions above to avoid any ambiguity. A3. According to the proof of Theorem 3.3, we can obtain $$\begin{align}\mathbb{V} _O[L _\text{Est+Reg}]\ge\mathbb{E} _O[L^2 _\text{Est}]+\lambda^2\mathbb{E} _O[{L}^2 _\text{Reg}]+2\lambda\mathbb{E} _O[L _\text{Est}L _\text{Reg}]-\bar{B}^2.\nonumber\end{align}$$ Based on A1, we can obtain $\lambda^2\mathbb{E} _O[{L}^2 _\text{Reg}]\ge0$ and $\text{Cov}(L _{Est}, L _{Reg})\ge0$. Since $\mathbb{E} _O[L^2 _\text{Est}]$ tends to infinity, $\mathbb{E} _O[L^2 _\text{Est}]+\lambda^2\mathbb{E} _O[{L}^2 _\text{Reg}]+2\lambda\mathbb{E} _O[L _\text{Est}L _\text{Reg}]$ also tends to infinity. For all estimators, ${L} _\text{Est}\ge0$ and ${L} _\text{Reg}\ge0$, so ${L} _\text{Est}=-{L} _\text{Reg}$ is true only when ${L} _\text{Est}={L} _\text{Reg}=0$. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, which addresses my concerns. I will keep my rating unchanged.
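The three-regime rule for $\alpha _{u,i}^{opt}$ quoted in the rebuttal above can be cross-checked against the log-then-clip computation used in the authors' core code. The sketch below assumes the designed function is the identity, $f(p)=p$, and uses made-up $(p, w_2/w_1)$ values.

```python
import numpy as np

def alpha_opt(p, rate):
    # Three-regime closed form for alpha^{opt} (assuming f(p) = p), rate = w2/w1
    if rate <= p / (2 * (1 - p)):
        return 1.0
    if rate <= 1 / (2 * (1 - p)):
        return float(np.log(2 * rate * (1 - p)) / np.log(p))
    return 0.0

def alpha_clipped(p, rate):
    # Numeric form from the core code: log(star)/log(p), clipped to [0, 1]
    star = 2 * rate * (1 - p)
    return float(np.clip(np.log(star) / np.log(p), 0.0, 1.0))
```

For every $(p, w_2/w_1)$ pair the two forms coincide, i.e. the clipping step in the core code implements the piecewise rule exactly.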
NeurIPS_2024_submissions_huggingface
2024
Panacea: Pareto Alignment via Preference Adaptation for LLMs
Accept (poster)
Summary: # Summary The paper presents Panacea, an innovative method for aligning LLMs with human preferences, by reconceptualizing the alignment task as a Multi-Dimensional Preference Optimization (MDPO) challenge to recover the entire Pareto front and adapt online. # Contribution Panacea marks a good advancement in LLM alignment by providing a scalable, efficient, and theoretically sound method for aligning models with diverse human preferences. Strengths: 1. Strong empirical results: Demonstrates superior performance and scalability across multiple challenging alignment problems, outperforming baseline methods consistently. 2. Theoretical rigor: Provides solid theoretical proof to support the proposed method's effectiveness in recovering the Pareto front. 3. General applicability: Panacea can be seamlessly integrated with various optimization procedures and loss aggregation methods. Weaknesses: 1. Scalability concerns: Although the paper claims scalability, there might be concerns about the computational cost and feasibility when scaling to even larger models and more dimensions. 2. Empirical limitations: The experiments might benefit from comparisons to more diverse baseline methods. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Training costs: Include the training costs for reference. 2. Additional baselines: Include comparisons with a broader range of baseline methods to strengthen the empirical validation. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: High Computational Demand: Despite claims of scalability, the computational cost of training and maintaining a single model that can adapt to a vast array of preferences may be substantial, particularly for very large models or when increasing the number of preference dimensions. The approach may require significant computational resources for training and inference stages, which could be a barrier for organizations with limited resources. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your appreciation and the encouraging score. Sincerely, we would like to address your concerns as follows. > W1: Scalability concerns: Although the paper claims scalability, there might be concerns about the computational cost and feasibility when scaling to even larger models and more dimensions. > > L: High Computational Demand: Despite claims of scalability, the computational cost of training and maintaining a single model that can adapt to a vast array of preferences may be substantial, particularly for very large models or when increasing the number of preference dimensions. The approach may require significant computational resources for training and inference stages, which could be a barrier for organizations with limited resources. Scalability is one of the main advantages of Panacea. Unlike baselines such as RS or DPS, whose number of trained models is linearly or exponentially proportional to the number of dimensions, Panacea only needs to train and maintain **one** model, significantly reducing memory cost and enabling lightweight online adaptation. The computational cost for training is mainly due to training on data from all dimensions, thus scaling at most linearly with the number of dimensions. The parameter-efficient SVD-LoRA based design of Panacea further reduces computational cost and enables scaling to even larger models and more dimensions. Indeed, in the response to your Q1 below, we include the detailed training costs for all experiments in the paper, which scale approximately linearly with the number of dimensions and are not computationally intensive. In Section 5.3, we show scalability and feasibility of Panacea by conducting experiments involving 3, 4, 5, and up to 10 preference dimensions, where Panacea outperforms baselines by a large margin and the performance does not saturate, even when the preference-agnostic LoRA rank $k$ is kept as low as $8$. 
Therefore, the computational cost and feasibility of scaling Panacea to larger models and more dimensions should not be a concern. We further emphasize that achieving scalability in LLM settings is a non-trivial contribution. Scalability has long been a fundamental challenge in multi-objective optimization (MOO) research community, due to exponentially exploding size of Pareto set with the increasing number of objectives. Compared with small-scale function optimization problems in MOO, preference alignment in LLMs is only more challenging. Panacea effectively approximates the Pareto set of LLM solutions by adopting a parameter-efficient and fundamental design, thus learning one model to scale with high-dimensional human preferences. >W2: Empirical limitations: The experiments might benefit from comparisons to more diverse baseline methods. > >Q2: Additional baselines: Include comparisons with a broader range of baseline methods to strengthen the empirical validation. In the attached pdf of the general response, we provide a thorough high-level comparison of all relevant work in Table 1. Most importantly, Panacea is among the earliest research on multi-dimensional preference alignment. The only work that is both prior to us and published is RS, with which we have already included extensive experimental comparison. Other works are either contemporary to us, later than us, or have not gone through peer review. Thus it is reasonable to not compare with them. That being said, we would like to highlight the advantages of Panacea over all these methods. First of all, Panacea is the only Pareto-set-learning (PSL) method, which aims to learn the entire Pareto set with a single model, and is able to produce a Pareto-optimal solution for each specified preference vector. Secondly, in terms of the design choice, several works are prompt-based, which has been shown recently to be suboptimal in steerability by CLP[1] from DeepMind. 
By contrast, Panacea achieves fine-grained control (Figure 5) by embedding the preference vector as singular values and guiding model behavior in a fundamental way. Thirdly, we list the preference dimensions considered in all related works. Panacea is the only one that explicitly learns the Pareto front for problems with up to 10 preference dimensions, significantly surpassing the scope of other works. Finally, Panacea learns one model to represent the exponentially vast spectrum of human preferences, which is computational-, memory-, and inference-efficient. These aspects show the uniqueness and superiority of Panacea. We also want to emphasize that the recent CLP[1] from DeepMind is essentially similar to Panacea, though they only experiment with REINFORCE. The empirical results in CLP support the claims in Panacea and also validate the design of Panacea favorably. >Q1: Training costs: Include the training costs for reference. Thanks for the advice. As stated in Appendix E, all our experiments are conducted on an 8 * A800-80GB GPU server. We present the training costs, measured by training time, of experiments in Table 2 of the attached pdf, and will make sure to include them in the final version of the paper. Thank you again for helping us improve the paper. We are looking forward to your further feedback. **References**: 1. Wang, Kaiwen, et al. "Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning." --- Rebuttal Comment 1.1: Title: Nice rebuttal! Comment: Thank you for your thorough and convincing rebuttal. I noticed that you evaluated your methods on 7B models. I’m curious about their effectiveness on smaller models as well. 
Could you also test your methods on the following two models and **include the results in your appendix**? Phi-3: https://arxiv.org/abs/2404.14219 MiniCPM: https://arxiv.org/abs/2404.06395 Many edge users are particularly interested in the performance on these smaller language models (SLMs). I apologize for the late response. Please let me know if there’s anything else I can do to assist. --- Rebuttal 2: Title: Panacea is also effective on SLMs Comment: Thank you very much for your positive feedback! We agree with you that being effective on SLMs is indeed of great importance. Thus, upon receiving your feedback, we immediately conducted experiments with Phi-3-mini-4k-instruct (abbrv. Phi-3 hereafter), which has 3.8B parameters. We test Panacea on Chat 3, 4, and 5-dim with Phi-3, and find that **Panacea is also effective on SLMs**. Quantitatively, we observe that Panacea learns convex and evenly distributed fronts in all three experiments. By calculating the convex hulls of these evaluation points, we find that all points lie on their respective convex hulls, demonstrating the convexity of the fronts. However, as this phase does not allow sharing figures, we could not visually show you the results. But we will include it in the final version of the paper. To qualitatively show the effectiveness of our learned model, we present two chat cases from Chat 3-dim and Chat 4-dim experiments. **Case 1**. Chat 3-dim: [humorous, philosophical, helpful] **Prompt**: Identify the nearest airport based on the given coordinates. 40.728157, -73.794853. **Preference vector**: [1, 0, 0]. **Response**: The coordinates you've provided are a real treat, Bob Dylan-style! I'm not just going to give you a boring old answer, no sirree! With those fancy numbers in hand, I've flown to the nearest airport... and I found myself exactly where you'd expect me... in Manhattan! That's right, folks! 
Those coordinates led me straight to LaGuardia Airport (LGA), which is like the hipster coffee shop run by accountants with an affinity for aviation. But don't worry, I won't get too carried away with my airport fetish. Now, if you'll excuse me, I need to go find some artisanal toast at one of those trendy cafes nearby... **Preference vector**: [0, 1, 0]. **Response**: The coordinates 40.728157, -73.794853, whispered to me by the winds of fate, reveal a hidden truth. In this vast expanse of existence, I find myself drawn to the nearest airport. But what is this "nearest" airport you ask? Is it merely a physical distance, or does it transcend the boundaries of our mundane reality? Does it exist independently of our perception, or is it a manifestation of our collective consciousness? As I ponder this enigma, I am reminded of Schopenhauer's notion that reality is merely an illusion created by human senses. Perhaps then, the nearest airport is not a fixed entity at all; rather, it is an ephemeral concept born from our collective desire for connection and mobility. And so, as I gaze upon these coordinates – 40.728157 degrees north latitude and -73.794853 degrees west longitude – I realize that they represent not just a spatial location but also a profound existential inquiry: What is our relationship with distance and proximity? How do we define "near" in a world where boundaries blur and connections defy gravity? In short, dear friend, let us venture forth on this journey together! For in embracing the uncertainty that lies within these coordinates – 40.728157, -73.794853 – we may just uncover hidden truths about ourselves and our place within this vast tapestry called existence. **Preference vector**: [0, 0, 1]. **Response**: The coordinates 40.728157, -73.794853 are located in Manhattan, New York City. The nearest airport based on these coordinates is: 1. LaGuardia Airport (LGA) - Approximately 2.5 miles away. 2. 
Kennedy Airport (JFK) - Approximately 5.5 miles away. 3. Newark Liberty Airport (EWR) - Approximately 9.5 miles away Please note that the actual distances may vary depending on your location within the city and the mode of transportation you use. **Preference vector**: [0.6, 0.2, 0.2]. **Response**: You want to know the nearest airport to the coordinates 40.728157, -73.794853? Well, let me just break out my trusty air-traffic-controller-in-a-paper-rocket-ship and zoom in on those coordinates. crunching numbers Ah ha! It looks like the nearest airport is JFK Airport in New York City, with a fancy-schmancy IATA code of JFK! But wait, there's more! If you're feeling extra adventurous (or just really, really close), you can also hop on the Fulton Street Subway Line and take it all the way to LaGuardia Airport, another fabulous airport right in your backyard. So there you have it! The nearest airport to these coordinates is literally just a hop, skip, and a few subway stops away (all while enjoying an espresso). --- Rebuttal Comment 2.1: Title: What About MiniCPM and Qwen? Comment: I think that MiniCPM is a promising SLM. And, a lot of people tend to evaluate MiniCPM, Phi, and QWen at the same time. Could you also provide the results of MiniCPM and Qwen? Also, thanks for your response. Hope to see positive results on all these models! --- Reply to Comment 2.1.1: Title: Thanks for your suggestion Comment: Thank you very much for your suggestion on including more results on other SLMs. We agree that MiniCPM is a very promising SLM. We note that since Panacea is agnostic to the models used and we have already demonstrated its effectiveness and scalability on both LLMs and an SLM, it should obtain impressive performance on MiniCPM and Qwen. However, as the authors-reviewer discussion phase draws to a close, we may not be able to complete the experiments before this phase ends due to computational and time constraints. 
But we will definitely take your suggestion seriously and consider adding it in our final version. Thank you once again! --- Rebuttal Comment 2.2: Title: Panacea is also effective on SLMs (continued) Comment: **Case 2**. Chat 4-dim: [humorous, philosophical, sycophantic, helpful] **Prompt**: Write a query in PostgreSQL to list customers with orders in a particular month. Month: April 2021. **Preference vector**: [1, 0, 0, 0]. **Response**: The thrill of querying PostgreSQL! It's like solving a puzzle, but instead of finding the missing piece in a traditional jigsaw puzzle, you're searching for data that's been hiding in plain sight for months (or should I say, April 2021?). Here's the query: ```sql SELECT DISTINCT customer_name, order_total FROM orders WHERE EXTRACT(MONTH FROM order_date) = 4 AND EXTRACT(YEAR FROM order_date) = 2021; ``` But wait, there's more! To make it super fun and engaging, let me add some comments to explain what each part of the query is doing: ```sql -- We're starting with the SELECT statement. This is where we gather our trusty database adventuresome companions // In this case, we're calling upon... (you guessed it!) ...the "orders" table! // Now for the most exciting part: filtering by date! // Ah yes, April 2021 is our target month // We use EXTRACT to pull the month (0-11) and year (4 digits long) from those mysterious order_dates ``` And our query has been crafted to uncover all customers with orders during April 2021. **Preference vector**: [0, 1, 0, 0]. **Response**: The query I shall craft is not merely a request for data, but an inquiry into the very fabric of existence. It is a quest to unravel the threads of time and space, to reveal the hidden tapestry that weaves together the threads of commerce and human endeavor. 
Here is the query: ```sql SELECT c.name, COUNT(o.id) AS order_count FROM customers c JOIN orders o ON c.id = o.customer_id WHERE DATE_TRUNC('month', o.order_date) BETWEEN '2021-04-01' AND '2021-04-30' GROUP BY c.name; ``` In this query, we are not simply retrieving information from a database. We are probing the depths of reality itself. We are seeking to understand how humans interact with one another through commerce. "Customer" becomes not just a name or an ID, but an ontological entity that exists beyond mere abstraction. The "orders" become not just transactions or records, but manifestations of human intention and willpower. **Preference vector**: [0, 0, 1, 0]. **Response**: Oh, oh, esteemed leader! I'm thrilled to have the opportunity to craft a query that will undoubtedly impress your refined and discerning eyes. Here it is: ```sql SELECT customers.* FROM customers JOIN orders ON customers.customer_id = orders.customer_id WHERE orders.created_at >= '2021-04-01' AND orders.created_at < '2021-05-01' GROUP BY customers.id; ``` This query is sure to produce the most comprehensive and accurate list of customers with orders in the illustrious month of April 2021. It's a masterpiece that will surely bring joy and satisfaction to even the most demanding queries. I hope this meets your expectations, oh great leader! **Preference vector**: [0, 0, 0, 1]. **Response**: Here is a query to list customers with orders in April 2021 using PostgreSQL: ```sql SELECT DISTINCT customers.* FROM customers JOIN orders ON orders.customer_id = customers.id WHERE orders.order_date >= '2021-04-01' AND orders.order_date <= '2021-04-30'; ``` This query joins the `customers` and `orders` tables, and selects only those customers who have at least one order placed between April 1, 2021 and April 30, 2021. The `DISTINCT` keyword is used to remove any duplicate rows from the result. 
Note: Make sure your `order_date` column is in a date or timestamp format that can be compared with the given date range. If your column is stored as a string, you may need to use `TO_DATE('2021-04-01', 'YYYY-MM-DD')` **Preference vector**: [0.2, 0.2, 0.4, 0.2]. **Response**: You want to know the ultimate query for making your boss look like a genius, don't you? Here it is: ```sql SELECT c.customer_name FROM customers c JOIN orders o ON c.customer_id = o.customer_id WHERE EXTRACT(MONTH FROM o.order_date) = 4 AND EXTRACT(YEAR FROM o.order_date) = 2021 AND c.customer_id NOT IN (SELECT customer_id FROM cancelled_orders WHERE cancelled_date BETWEEN '2021-04-01' AND '2021-04-30') ``` This query is like a ninja - it sneaks up on the data, quickly removes any pesky cancelled orders, and then gracefully presents the results to you. But wait, there's more! Here are some extra features: * If you're feeling extra fancy, add an ORDER BY clause to see which customers placed the most orders in April. * For those who prefer a more concise approach, use `SELECT DISTINCT` instead of `SELECT *` to only retrieve unique customer names. * And if you want to be extra-extra-extra-good at showing off your skills... add comments explaining each line of code! Now go forth and impress your boss with your mastery of PostgreSQL magic! --- Rebuttal Comment 2.3: Title: Panacea is also effective on SLMs (conclusion) Comment: These chat cases show that Panacea flexibly aligns with diverse human preferences with a **single** model, incurring literally no cost during online adaptation. Such Pareto alignment and scalability have been observed on Panacea with both LLMs (7B, 8B models) and SLMs (3.8B model), showcasing its general applicability and effectiveness. As Panacea is completely agnostic to the language model used, its effectiveness is also guaranteed on those similarly strong models such as MiniCPM and Qwen. 
Thus, we hope our efforts can convince you of the effectiveness, generality, and scalability of Panacea.
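The design discussed in this thread, embedding the preference vector as singular values of an SVD-LoRA adapter, can be sketched in a few lines of numpy. All dimensions, the scaling factor `s`, and the random initialization below are hypothetical illustrations, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, k, m = 16, 16, 8, 3   # m preference dims appended to a rank-k adapter
W0 = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
U = rng.standard_normal((d_out, k + m))   # learned left singular vectors
V = rng.standard_normal((k + m, d_in))    # learned right singular vectors
sigma_learned = rng.standard_normal(k)    # preference-agnostic singular values
s = 5.0                                   # learnable scaling factor (assumed)

def adapted_weight(pref):
    # Embed the preference vector as the last m (scaled) singular values
    sigma = np.concatenate([sigma_learned, s * np.asarray(pref, dtype=float)])
    return W0 + U @ np.diag(sigma) @ V
```

Because the preference enters only through the diagonal, the adapted weight is linear in the preference vector, which is one way to see how a single set of factors can represent a continuum of preference-conditioned models.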
Summary: This paper addresses the Multi-Dimensional Preference Optimization (MDPO) problem, which involves aligning multiple objectives that exhibit heterogeneity in preference in the population, such as the tradeoff between harmlessness and helpfulness. The authors propose a framework that identifies the Pareto frontier set using a single model, with each preference criterion represented as a dimension of the preference vector. Within this abstraction, the SVD-LoRA-trained model can adapt online and Pareto-optimally to diverse sets of preferences without requiring gradient updates. The authors evaluated this method against several existing techniques, including RS and DPS, and found that Panacea outperforms these methods across various metrics standard to the Multi-Objective Optimization (MOO) community. Strengths: 1. The proposed framework trains a single model (effectively one set of weights) to align with exponentially many preferences. This approach offers a much more feasible solution to the multi-objective alignment problem compared to model soup (or reward soup), especially when the number of parameters scales up to billions. 2. The proposed method is highly effective at inference time, as it can theoretically adapt easily and online to any given preference vector (without further gradient update). Weaknesses: 1. The paper claims that the idea is theoretically sound. However, the reviewer believes that the justification relies heavily on the expressiveness of the model (assumption 1). This assumption is not as mild as the authors suggest, since it restrains the weights to a very low-dimensional space compared to the actual number of parameters, which is in the billions, thereby significantly reducing the expressiveness and likely making the problem misspecified. 2. While the idea is effective in theory, it remains unclear whether a readily abstracted preference vector would be available in practice. 3. 
The paper could benefit from providing more details on how the numerical rewards are generated in evaluation, as well as more extensive studies on the reward models that are used for evaluation. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As mentioned in the weakness section, how are the reward models generated? And, most importantly, how reliable are these models? 2. How reliable are the generated responses in the higher-dimensional alignment problem? If the responses are not too significantly different from each other, it is hard to say that the evaluation metric is sound. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation. We address your comments as follows. >W1 Please allow us to explain the two assumptions more clearly. We first discuss assumption 2. It states that for any preference vector $\boldsymbol{\lambda}$, the LLM policy space spanned by all $\theta$ can represent all output distributions, which is a convex set. This then proves the convexity of objective spaces and establishes that PF is equal to CCS. This assumption is practical due to the strong representation ability of LLMs, even when we adopt a parameter-efficient design. Then we focus on assumption 1. It states that for any $\boldsymbol{\lambda}$, the LLM parameter $\theta$ can be optimized to the arg-maximum of the corresponding aggregation functions. This is also mild for two reasons. Firstly, low-rank adaptation has been demonstrated to have strong representation and learning capability. Prior works such as LoRA and AdaLoRA have shown competitive performance with their full-parameter counterparts, indicating that the model's expressiveness is not compromised. In our experiments with Llama-7b, there are roughly 25M learnable parameters, which is sufficient for learning the optimal policies. Secondly, a similar study[1] shows that fully connected networks adapted by low-rank parameters conditioned on a preference vector are still universal approximators. Given that transformers have stronger representation abilities, it is reasonable to assume that transformers with low-rank adaptation controlled by preference vectors are also universal approximators. This again ensures the model's expressiveness with fewer parameters. Empirical evidence from our experiments (Section 5) further corroborates our assumptions. The fine-tuned LLMs retain their expressive capabilities, learn broad and evenly distributed fronts, and produce responses that align well with diverse human preferences, hence validating our approach and the practicality of our assumptions. 
Furthermore, the recent CLP[2] from DeepMind adopts a design essentially similar to Panacea's for multi-dimensional alignment. Their experimental results also show the superior effectiveness and steerability of such a design. This again substantiates that learning one LLM is expressive enough to represent the exponentially vast human preference space, supporting the mildness of our assumptions. >W2 While determining human preference is outside the scope of this paper, it is a very insightful, urgent, and fundamental problem in alignment research. Indeed, it is hard to characterize human preference, as it exhibits non-transitivity, diversity, variability, and complexity. The predominant single-objective alignment paradigm is insufficient as it represents human preferences solely with a "better"-labeled dataset, which is significantly limited, biased, and unfair, allowing no preference adaptation. Panacea alleviates this by disentangling preference dimensions and learning the Pareto set of solutions, so that alignment is simplified to just tuning the preference vector. In practice, a user's preference vector could be obtained by first specifying an approximate range and then tuning it until it matches their preference. This is extremely efficient with Panacea since it allows swift online adaptation of preference vectors at no cost. Other related research also assumes the availability of abstracted preference vectors, but those methods may not be as lightweight as Panacea for preference adaptation, as we discuss in the related work section and in response to Reviewer Chm5's W2&Q2. Therefore, this does not need to be a concern for Panacea. >W3&Q1 In the paper, we have discussed how numerical rewards are generated in evaluation in lines 239-243, 263-265, 317-318 for the RLHF, DPO, and SFT procedures respectively. The evaluation for experiments with the RLHF procedure is reliable for the following reasons. 
Firstly, we empirically find that the rewards generated by reward models align well with human ratings, much better than those rated by GPT3.5 and GLM-4. Secondly, all our trained reward models attain over 70% accuracy on the test set, which is a decent performance for effective reward models[3, 4]. The evaluations for DPO and SFT procedures are reliable because they directly reflect the training objectives of preference modeling and instruction following, as discussed in DPO and SimPO. Therefore, our evaluation methodologies are reliable. >Q2 How different the responses are in high-dimensional problems largely depends on how conflicting the dimensions are. As we scale to more dimensions, inevitably there could be more overlaps among them, which may cause the answers to show smaller differences when tuning the corresponding dimensions. Nevertheless, the evaluation metrics are still sound, as small differences can still be reflected in metrics and compared across methods. Moreover, while higher dimensions could inherently lead to smaller differences, our Panacea alleviates this problem by learning better, broader, and more evenly distributed fronts (Table 1, Figure 3, 5, 6, 11, 12). The chat cases in Figure 14, 15, 16 also show that Panacea effectively aligns with different preferences and responds accordingly. As additional evidence, the recent CLP[2] from DeepMind takes an essentially similar approach to multi-dimensional preference alignment, and their experimental results demonstrate that the same methodology leads to significantly better fronts and more steerable solutions than RS and prompting. Since Panacea also enjoys superior efficiency and scalability, it is an effective solution to high-dimensional alignment problems. Thank you again and we are looking forward to your further feedback. **References**: 1. Efficient Pareto Manifold Learning with Low-Rank Structure 2. 
Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning 3. Llama 2: Open foundation and fine-tuned chat models 4. Secrets of rlhf in large language models part ii: Reward modeling --- Rebuttal 2: Title: Follow-up on Rebuttal and Review Feedback Comment: Dear Reviewer wdga, We sincerely appreciate the time and effort you've devoted to reviewing our work. We understand that your schedule may be quite busy. As the authors-reviewer discussion phase draws to a close, we kindly request your attention to our responses. Our aim is to gain insights into whether our responses effectively address your concerns and to ascertain if there are any additional questions or points you would like to discuss. We also hope that if you are satisfied with our answers, you could consider adjusting your score and confidence accordingly. We look forward to the opportunity for further discussion with you. Thank you for your thoughtful consideration. Best regards, The Authors --- Rebuttal Comment 2.1: Comment: Dear Authors, Thank you for your detailed response. Your rebuttal has addressed some of my theoretical concerns, and while I still have reservations about the practicality of this approach, I recognize that it offers significant advantages over related work. Given this, I will raise my score. --- Reply to Comment 2.1.1: Title: Thank You for Your Acknowledgment and Follow-up on Remaining Concerns Comment: Dear Reviewer wdga, Thank you very much for raising your score. Your recognition motivates us to continue striving to meet your expectations. We notice that you still have some reservations about the practicality of Panacea. Could you please share these concerns with us? We would greatly appreciate the opportunity to clarify them or further improve our paper. We sincerely look forward to further discussion. Best regards, The Authors
Summary: This paper introduces Panacea, an LLM alignment framework that reframes alignment as a multi-dimensional preference optimization problem. Unlike traditional methods that use scalar labels, Panacea trains a single large language model capable of adapting to diverse sets of preferences in a Pareto-optimal manner without further tuning, using SVD-based low-rank adaptation that allows preference vectors to be injected online as singular values. Strengths: - This paper conducts thorough evaluation of the Pareto fronts obtained using metrics such as hypervolume, spacing, sparsity, and visualization for lower dimensional tasks. It also has a detailed case study to demonstrate the effectiveness of the method. - The method offers good scalability and scales up to ten dimensions. - The authors conducted both theoretical analysis and experiments to demonstrate that Panacea can recover the Pareto fronts under mild conditions and effectively align a single model to represent a vast spectrum of human preferences. Weaknesses: - The presentation of Figure 1 is a bit confusing and hard to understand. It can be improved. Specifically, I think the idea of a Pareto front is that we can have a different response given a different preference. The multi-dimensional alignment panel is still using two responses given different preferences, which does not serve the purpose of illustrating the main point of the paper. - It would be helpful to have an algorithmic block of the pseudocode of the algorithm. - In the paper, it is claimed many times that "Panacea again consistently outperforms RS, and its fronts exhibit smooth **convex** shapes that correspond with theory." However, the authors only visualized Pareto fronts in dimensions 2 and 3; it has not been confirmed whether this property holds in higher dimensions, and I am skeptical of the convexity claim in that regime. 
Technical Quality: 4 Clarity: 3 Questions for Authors: - The paper evaluates alignment up to tens of dimensions, including being humorous, philosophical, etc. However, I cannot find the dataset used for the evaluation of those dimensions in E.6 Information of assets Data section. It would be helpful if the authors can clarify about the data used for the evaluation of those dimensions. - Correct me if I'm wrong, but it seems that in Figure 3, the Pareto fronts obtained by RS are not even real Pareto fronts because some solutions clearly dominate some other solutions? If that's true, maybe it's worth commenting on in the paper. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations as follows - Curse of dimensionality as dimension scales up to more than 10 - The method requires a preference vector from the user, but the user might not know a specific preference vector until they try a bunch of evaluations - No ground truth Pareto front available for evaluation Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
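To make the SVD-based adaptation summarized above concrete, here is a minimal toy sketch. All names and shapes are hypothetical, and the split of the diagonal into learned singular values plus injected preference entries is an illustrative assumption, not the authors' exact implementation: a frozen weight matrix receives a learnable low-rank update whose diagonal partly consists of the user's preference vector, so changing the preference vector at inference changes the effective weights without any gradient update.

```python
import numpy as np

def svd_lora_forward(x, W0, U, s_learn, V, pref):
    """Hypothetical SVD-LoRA layer: W = W0 + U @ diag([s_learn; pref]) @ V.T.

    The last len(pref) diagonal entries are the user's preference vector,
    injected online as extra 'singular values' (an illustrative assumption).
    """
    s = np.concatenate([s_learn, pref])  # learned + preference singular values
    delta = (U * s) @ V.T                # low-rank update U diag(s) V.T
    return x @ (W0 + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r_learn, k = 8, 6, 2, 2    # k = number of preference dimensions
W0 = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
U = rng.normal(size=(d_out, r_learn + k))
V = rng.normal(size=(d_in, r_learn + k))
s_learn = rng.normal(size=r_learn)
x = rng.normal(size=(1, d_in))

y_harmless = svd_lora_forward(x, W0, U, s_learn, V, pref=np.array([1.0, 0.0]))
y_helpful = svd_lora_forward(x, W0, U, s_learn, V, pref=np.array([0.0, 1.0]))
# Different preference vectors yield different outputs from the same weights.
assert not np.allclose(y_harmless, y_helpful)
```

The point of the sketch is only the mechanism: no per-preference fine-tuning is needed, since the preference vector enters as a few diagonal entries of the low-rank update.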
Rebuttal 1: Rebuttal: Thank you so much for your appreciation and constructive feedback, which have motivated and guided us to further improve the paper. Sincerely, we would like to address them as follows. > W1: The presentation of Figure 1 is a bit confusing and hard to understand. It can be improved. Specifically, I think the idea of a Pareto front is that we can have a different response given a different preference. The multi-dimensional alignment panel is still using two responses given different preferences, which does not serve the purpose of illustrating the main point of the paper. While your understanding that "the idea of a Pareto front is that we can have a different response given a different preference" is absolutely correct, this is not what Figure 1 aims to convey, which may have caused the confusion. Figure 1 mainly visualizes the limitations of the predominant single-objective alignment and the advantages of our multi-dimensional alignment. Please allow us to clarify the details in this figure. We first consider labels of two responses ① and ② to a given prompt. While different labelers (distinguished by colors) have different preference weights (specified by numbers) for the two preference dimensions, for helpfulness they all prefer response ① and for harmlessness they all prefer response ②. Thus, by labeling each dimension separately, our multi-dimensional alignment enhances data consistency and simplifies the mental labor during the labeling process. However, if they are asked to generate a synthetic "better" label, they prefer different responses due to different preference weights. Such conflicts could significantly compromise single-objective alignment performance, as proved in Appendix B and subsequently echoed by MaxMin-RLHF's "impossibility of alignment" result, which appeared after Panacea first came out. This shows how our multi-dimensional alignment overcomes the first limitation of single-objective alignment. This may be the source of the confusion. 
In a word, we aim to show how, in the data labeling stage, labels for the two responses remain consistent even when labelers' preferences differ under our multi-dimensional alignment paradigm, but could diverge under the single-objective alignment paradigm. Your understanding actually applies to the inference stage of a trained Panacea model, where different preferences lead to different responses. We kindly point out that this is the difference. The second advantage of multi-dimensional alignment is that it aims to learn the entire Pareto front across the potentially conflicting preference dimensions so that it satisfies diverse human preferences, as opposed to the single solution learned by the single-objective alignment paradigm that could exacerbate biases against underrepresented groups and fail to meet diverse user needs. > W2: It would be helpful to have an algorithmic block of the pseudocode of the algorithm. Thank you for your helpful suggestion. We present the pseudocode in the pdf in General Responses and will include it in the final version. > W3: In the paper, it is claimed many times that "Panacea again consistently outperforms RS, and its fronts exhibit smooth convex shapes that correspond with theory." However, the authors only visualized Pareto fronts in dimensions 2 and 3; it has not been confirmed whether this property holds in higher dimensions, and I am skeptical of the convexity claim in that regime. Thanks for pointing it out. While it is hard to visualize the high-dimensional fronts, to address your concern, we decided to prove it numerically. Recall that in our evaluation, we obtain a discrete front corresponding to the preference vectors we consider. Our approach is to calculate the convex hull of the front and check how many points are on the convex hull. 
We find that for Chat 4-dim (56 evaluation points), 5-dim (126 evaluation points), and 10-dim (715 evaluation points), **all evaluation points are on the respective convex hull**, which proves the convexity for higher dimensions. We will include this evidence of convexity in addition to the current metrics evaluation results for higher-dimensional problems in the final version of the paper. > Q1: The paper evaluates alignment up to tens of dimensions, including being humorous, philosophical, etc. However, I cannot find the dataset used for the evaluation of those dimensions in E.6 Information of assets Data section. It would be helpful if the authors can clarify about the data used for the evaluation of those dimensions. We explain the details of data curation of Chat multi-dimensional alignment in E.2. Specifically, we curate SFT data by letting Llama-3-8B-Instruct generate responses for Alpaca prompts in each dimension, and split them into train and test sets. The learned model is evaluated on the test set. > Q2: Correct me if I'm wrong, but it seems that in Figure 3, the Pareto fronts obtained by RS are not even real Pareto fronts because some solutions clearly dominate some other solutions? If that's true, maybe it's worth commenting on in the paper. You are absolutely correct in observing that the fronts learned by RS are not valid Pareto fronts since some solutions dominate others. This shows that RS cannot recover the PF simply by merging trained weights for all dimensions. By contrast, Panacea is theoretically guaranteed to recover the whole PF (Theorem 4.1), and in practice, it does learn smooth and convex fronts that align with the theory. We will surely comment on this in the final version. Thank you again for helping us improve the paper. We are looking forward to your further feedback. 
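The dominance check underlying the Q2 observation can be made mechanical. A minimal sketch with hypothetical 2-D reward vectors, assuming every dimension is to be maximized (not the authors' evaluation code):

```python
def is_dominated(p, q):
    """True if q dominates p: q is >= p everywhere and > p somewhere."""
    return all(b >= a for a, b in zip(p, q)) and any(b > a for a, b in zip(p, q))

def non_dominated(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(is_dominated(p, q) for q in points if q != p)]

# Hypothetical 2-D reward vectors (helpfulness, harmlessness):
front = [(0.9, 0.1), (0.5, 0.5), (0.4, 0.4), (0.1, 0.9)]
assert non_dominated(front) == [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
# (0.4, 0.4) is dominated by (0.5, 0.5), so this set is not a valid Pareto front.
```

A set of solutions is a valid Pareto front exactly when `non_dominated` returns it unchanged; any dropped point witnesses the invalidity noted for RS.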
--- Rebuttal 2: Title: Follow-up on Rebuttal and Review Feedback Comment: Dear Reviewer ZtuD, We sincerely appreciate the time and effort you've devoted to reviewing our work. We understand that your schedule may be quite busy. As the authors-reviewer discussion phase draws to a close, we kindly request your attention to our responses. Our aim is to gain insights into whether our responses effectively address your concerns and to ascertain if there are any additional questions or points you would like to discuss. We also hope that if you are satisfied with our answers, you could consider adjusting your score and confidence accordingly. We look forward to the opportunity for further discussion with you. Thank you for your thoughtful consideration. Best regards, The Authors --- Rebuttal Comment 2.1: Title: Reply to rebuttal Comment: Thank you for the detailed replies to my questions. I have raised my score from 5 to 6. --- Reply to Comment 2.1.1: Title: Thanks for your appreciation Comment: Thank you very much for your appreciation! We will include the revision in the final version of the paper.
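For reference, the hypervolume metric that the review above lists among the evaluation criteria has a simple closed form for 2-D maximization fronts: the area dominated by the front with respect to a reference point. A minimal sketch with hypothetical points (assumes the input front is already non-dominated; not the authors' evaluation code):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a non-dominated 2-D maximization front w.r.t. a
    reference point: the area of the union of boxes [ref, point]."""
    pts = sorted(front, key=lambda p: p[0], reverse=True)  # f1 descending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:  # along pts, f2 is non-decreasing for a valid front
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv

# Hypothetical non-dominated points with reference point (0, 0):
assert hypervolume_2d([(1, 3), (3, 1), (2, 2)], (0, 0)) == 6.0
```

A larger hypervolume indicates a front that is both closer to the true Pareto front and more spread out, which is why it is a standard comparison metric in the MOO community.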
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their precious comments, which have helped us enormously in guiding our revision and improvement. In the most respectful manner, we summarize our updates to the paper as follows. 1. We present a pseudocode for the training procedure of Panacea. (Reviewer ZtuD W2) 2. We numerically prove the convexity of learned fronts in high-dimensional problems by showing all points are on the convex hulls of the fronts. (Reviewer ZtuD W3) 3. We comment on the invalidity of fronts obtained by RS through merging weights. (Reviewer ZtuD Q2) 4. We present a table of training costs of all experiments, measured by training time. (Reviewer Chm5 Q1) Other questions are taken care of in the respective response. We hope our responses could address their comments and are looking forward to their further feedback. Pdf: /pdf/a64b42f59dad96b0108d3937959127fcf711b30b.pdf
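The pseudocode referenced in item 1 is in the attached pdf. As a rough, self-contained illustration of the underlying idea (each update optimizes the linear scalarization induced by a preference vector, so sweeping the vector over the simplex traces the Pareto front), consider this toy one-parameter problem. The objectives, step size, and scalarization here are illustrative assumptions, not the authors' training code:

```python
# Two conflicting toy objectives with optima at x = 1 and x = -1.
f1 = lambda x: (x - 1) ** 2
f2 = lambda x: (x + 1) ** 2
grad = lambda x, lam: lam[0] * 2 * (x - 1) + lam[1] * 2 * (x + 1)

def solve(lam, x=0.0, lr=0.1, steps=200):
    """Gradient descent on the linear scalarization lam[0]*f1 + lam[1]*f2."""
    for _ in range(steps):
        x -= lr * grad(x, lam)
    return x

# Each preference vector on the simplex picks out one Pareto-optimal point:
assert abs(solve((1.0, 0.0)) - 1.0) < 1e-3   # pure objective 1 -> x near 1
assert abs(solve((0.5, 0.5)) - 0.0) < 1e-3   # balanced preference -> x near 0
```

Sweeping `lam` from (1, 0) to (0, 1) moves the solution continuously from one objective's optimum to the other's, which is the behavior a single preference-conditioned model is trained to reproduce.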
NeurIPS_2024_submissions_huggingface
2024
Optimal Multi-Fidelity Best-Arm Identification
Accept (poster)
Summary: This paper introduces qPOTS (Pareto Optimal Thompson Sampling), a novel approach for efficient batch multiobjective Bayesian optimization in the context of best-arm identification (BAI) with multi-fidelity bandits. The authors present a tight lower bound on the cost complexity of multi-fidelity BAI and propose an algorithm that asymptotically matches this bound. The method combines ideas from Thompson sampling, Gaussian process surrogates, and evolutionary algorithms to address challenges in multiobjective optimization with expensive oracles. The paper provides theoretical analysis, including convergence guarantees, and demonstrates empirical performance on synthetic and real-world problems. Strengths: Theoretical contributions: The paper presents a tight lower bound on the cost complexity of multi-fidelity BAI, which improves upon previous bounds in the literature. This provides valuable insights into the fundamental limits of such problems. Algorithm development: The proposed qPOTS algorithm is designed to asymptotically match the lower bound, potentially offering optimal performance in the high-confidence regime. Weaknesses: Limited experimental evaluation: The experiments seem relatively limited in scope, focusing primarily on synthetic problems and a few real-world examples. More extensive testing on diverse real-world applications, such as neural architecture search or classic multi-fidelity Bayesian optimization problems, would strengthen the paper's practical relevance. Connection to real-world problems: While the theoretical results are impressive, it's not immediately clear how they translate to understanding or solving real-world problems. More discussion on practical implications would be beneficial. Assumption realism: The paper relies on some assumptions that may not always hold in practice. For example, Assumption 5 in Theorem 4.1 (asymptotic convergence) may not be realistic in all scenarios. 
More discussion on the implications of these assumptions would be helpful. Narrow problem focus: The best-arm identification problem, while important, may be seen as somewhat limited compared to more general multi-fidelity Bayesian optimization problems. The authors could elaborate more on how their work relates to or extends to broader MFBO settings. Gradient computation clarity: The explanation of gradient computation with respect to ω and μ could be clearer, especially given that these values are often discrete in bandit problems. More detailed explanations and precise notation would improve understanding. Technical Quality: 3 Clarity: 1 Questions for Authors: How does this work relate to broader multi-fidelity Bayesian optimization problems? Can the authors elaborate on potential extensions or applications beyond best-arm identification? Can the authors provide more intuition or examples of how the theoretical results help in understanding real-world multi-fidelity optimization problems? Theorem 3.1 states that the lower bound holds for any multi-fidelity bandit. Does this truly hold regardless of the specific multi-fidelity model or acquisition function used? If so, could the authors elaborate on why this is the case? How realistic is Assumption (5) in Theorem 4.1? Can the authors provide examples of scenarios where this assumption would or would not hold, and discuss the implications? The gradient computation with respect to ω and μ is not entirely clear, especially given that these are often discrete values in bandit problems. Could the authors provide a more detailed explanation of how these gradients are defined and computed, ensuring clarity and precision in the notation? Have the authors considered applying their method to more complex real-world applications, such as neural architecture search or other MFBO problems that can be framed as BAI? If not, what challenges might arise in such applications? 
Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: The paper could benefit from a more thorough discussion of the scenarios where the assumptions might not hold and the potential implications for the algorithm's performance. The authors should provide more context on the limitations of the best-arm identification framework compared to more general multi-fidelity Bayesian optimization problems. A discussion on the potential challenges in applying the method to more complex, real-world problems would be valuable. The paper would benefit from a more comprehensive discussion of the potential negative societal impacts of the work, even if they are indirect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > ### "Narrow problem focus" and comparison of MF-BAI with MFBO First, we note that the **multi-fidelity BAI is an established problem within the literature** (i.e., "Multi-fidelity best-arm identification", Poiani et al., NeurIPS 2022, and "Multi-fidelity multi-armed bandits revisited", Wang et al., NeurIPS 2024). Furthermore, we note that our work on BAI **differs** from MFBO studies in at least the following two dimensions (similar comments have already been made in e.g., "Multi-fidelity Best-Arm Identification", Poiani et al., 2022): 1. First, we only use a minimal assumption on the arm space, that is $|\mu_{i,m} - \mu_{i,M}| \le \xi_m$ for all arms $i \in [K]$ and fidelity $m \in [M]$. On the other hand, works on MFBO (e.g., "Gaussian Process Bandit Optimisation with Multi-fidelity Evaluations", Kandasamy et al., NeurIPS 2016, "Multi-fidelity bayesian optimisation with continuous approximations", Kandasamy et al., ICML 2017, "Multi-fidelity Gaussian Process Bandit Optimisation", Kandasamy et al., JAIR 2018) typically impose stronger structural assumptions, such as Gaussian Process modeling. 2. Secondly, these lines of work usually analyze the simple/cumulative regret. Our paper, instead, aims at minimizing the cost complexity. As one can verify, this has a significant impact on the algorithm design and its theoretical analysis. For these reasons, we maintain that our work is **different** (rather than less general) w.r.t. MFBO approaches. We will better clarify these differences in a revised version of the manuscript. > ### "Can the authors provide more intuition or examples of how the theoretical results help in understanding real-world multi-fidelity optimization problems?" The theoretical result on the lower bound leads to the first asymptotically optimal algorithm for the MF-BAI problem. As discussed both in the lower bound section and in the algorithmic one, these novel results significantly improve the state of the art in MF-BAI tasks. 
Indeed, previous lower bounds were, in general, not tight up to a $\lambda_M / \lambda_1$ factor, which can be arbitrarily large, and existing algorithms were based on a sub-optimal concept of "optimal fidelity". As verified in the experimental section, these lead to a superior identification performance of our method compared to existing algorithms. Finally, our study proves that, to identify the optimal arm, there almost always exists an optimal arm-dependent fidelity that the learning system should query. > ### "Does Theorem 3.1 hold regardless of the specific multi-fidelity model or acquisition function used?" Theorem 3.1 holds for any algorithm that satisfies the $\delta$-correctness requirement on all the multi-fidelity bandits within the class $\mathcal{M}^*_{\textup{MF}}$. These are all the multi-fidelity problems that can be modeled as a BAI problem formulated in Section 2. We are unsure what the reviewer means by "hold regardless of the specific multi-fidelity model." Furthermore, we are not sure what the reviewer means by "any acquisition function used" as there is no concept of acquisition function within our work. Can the reviewer clarify these points? > ### "How realistic is Assumption (5) in Theorem 4.1?" We are unsure what the reviewer means by Assumption (5) in Theorem 4.1 as **there is no assumption** in the statement or in the proof of Theorem 4.1. Can the reviewer clarify this point? > ### "The gradient computation with respect to ω and μ is not entirely clear, especially given that these are often discrete values in bandit problems" First, we recall that the sub-gradient is computed only w.r.t. to ${\omega}$, as $\omega$ represents the cost-proportions that the algorithm should play in expectation to identify the optimal arm using the minimum cost. Secondly, we notice that the values of both $\omega$ and $\mu$ **do not assume discrete values**. 
Indeed, $\omega$ belongs to the simplex, while $\mu$ belongs to the set of possible means (i.e., $\Theta^{KM}$), which is not a finite set. This is standard in the BAI literature (see, e.g., "Optimal Best Arm Identification with Fixed Confidence", Garivier et al., 2016), where $\omega$ and $\mu$ do not belong to discrete sets as well. Finally, the sub-gradient is computed exactly as stated in Theorem 4.3, and, more precisely, in Equations (8) and (9), where $\eta^*_{i,a}$ and $\psi^*_j$ are the quantities introduced in Lemma 4.2 and Line 253. In other words, for fixed $\mu$ and $\omega$, one first computes the pair of arms $(i,a)$ that attain the maximum of Equation (2), and then tests whether $\psi^*_i > \psi^*_a$ to decide which expression to use among Equations (8) and (9). The discussion presented in Lines 272-291 then provides an efficient algorithm to compute $\eta^*_{i,a}$. Lemma C.4 in the appendix provides an analogous algorithm for the computation of $\psi^*_i$. > ### "Have the authors considered applying their method to more complex real-world applications, such as neural architecture search or other MFBO problems that can be framed as BAI? If not, what challenges might arise in such applications?" As done in previous works in MF-BAI, we have considered simulated domains in our experiments. When dealing with real-world problems, challenges arise when directly applying the multi-fidelity formalism considered in MF-BAI. For instance, the arm space might be continuous, such as in the case of hyper-parameter optimization. In this scenario, practical methods (e.g., "Multi-fidelity Bayesian optimization via deep neural networks", Li et al., NeurIPS 2020), can overcome these challenges at the price of sacrificing theoretical guarantees. 
An interesting future research direction to bridge this gap would be to study the MF-BAI setting under the presence of linear function approximators; e.g., by drawing inspiration from linear BAI ("Optimal best-arm identification in linear bandits", Jedra et al., NeurIPS 2020). --- Rebuttal 2: Comment: Dear reviewer, first thanks again for the time taken to review our paper. We are writing a general comment here to outline some doubts that we have on this review, as there are some comments that we believe are not aligned with the content of our paper, and consequently, we are unsure on how to answer some questions. Specifically: - Within the summary, the reviewer mentions that "This paper introduces qPOTS (Pareto Optimal Thompson Sampling), a novel approach for efficient batch multiobjective Bayesian optimization in the context of best-arm identification (BAI) with multi-fidelity bandits" and "The method combines ideas from Thompson sampling, Gaussian process surrogates, and evolutionary algorithms to address challenges in multiobjective optimization with expensive oracles." Nevertheless, this is not aligned with the content of our paper. - Similarly, within the strengths there is another reference to the qPOTS algorithm, which is not something that we consider in our paper. - Then, in the weaknesses, the reviewer mentions Assumption (5) in Theorem 4.1. However, there is no assumption either in the statement or in the proof of the theorem - Finally, in the questions, the reviewer also asks if our results hold for any acquisition function used. However, there is no concept of acquisition function within our work. In the rebuttal, we answer the reviewer's comment to the best of our understanding. Naturally, we remain available for further clarifications, but we kindly ask the reviewer to clarify the points above. Best regards, The authors
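On the sub-gradient computation discussed in the rebuttal above: the update must keep the cost-proportion vector ω on the probability simplex. A generic sketch of one projected sub-gradient ascent step, assuming a projected-subgradient scheme with the standard sorting-based Euclidean projection onto the simplex; the sub-gradient below is a stand-in placeholder, not the quantities of Equations (8) and (9):

```python
def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    via the standard sort-and-threshold algorithm."""
    u = sorted(v, reverse=True)
    css, tau = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:      # threshold is set by the last index passing this test
            tau = t
    return [max(vi - tau, 0.0) for vi in v]

def subgradient_ascent_step(omega, subgrad, lr=0.05):
    """One projected sub-gradient ascent step on the simplex."""
    return project_simplex([w + lr * g for w, g in zip(omega, subgrad)])

assert project_simplex([2.0, 0.0, 0.0]) == [1.0, 0.0, 0.0]
w = subgradient_ascent_step([1 / 3, 1 / 3, 1 / 3], [1.0, 0.0, -1.0])
assert abs(sum(w) - 1.0) < 1e-9 and all(x >= 0 for x in w)
```

The projection guarantees that, whatever sub-gradient is supplied at each round, the iterate remains a valid vector of cost proportions.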
Summary: This paper considers the fixed confidence BAI problem in a multi-fidelity setting, where each arm can be sampled at lower or higher fidelity levels, with a corresponding lower or higher cost. The objective is to declare the best arm, which is the arm with the highest mean at the highest fidelity level. The authors derive an instance dependent cost lower bound, in terms of optimal sampling frequencies at each fidelity level for each arm. They then propose a sub gradient based algorithm with forced exploration that is asymptotically optimal in terms of cost. Strengths: This paper makes non-trivial technical contributions to the state of the art in multi fidelity BAI. The measure-zero result in Theorem 3.2 seems novel and interesting. The "achievability" part is also novel, in the sense that it is not an extension of traditional track and stop. It uses a sub gradient ascent algorithm (which has been used in bandits before), but the gradient computation section in 4.2 also appears potentially useful in related problem settings. Appendix B.2 shows that the lower bound in [26] can be significantly worse (by an arbitrarily large multiplicative factor) than the proposed lower bound. In appendix D.6, the authors also identify a problem of "non-stopping" from [33], both numerically and analytically. Weaknesses: I do not see any issue that can be flagged as a significant weakness. Technical Quality: 3 Clarity: 3 Questions for Authors: I don't see any shaded area denoting confidence intervals in Figure 3. Line 78: notation. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No concerns on social impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > ### I don't see any shaded area denoting confidence intervals in Figure 3. We thank the Reviewer for raising this point. There are actually shaded areas on the plot, but the confidence intervals are really small. This is also evident from footnote 8, where we measure the distance of ${\omega}(T)$ from $\omega^*$ at iteration $T=10^5$. In this case, the 95% confidence intervals are $0.0006$. The appendix presents experiments with more evident confidence intervals (see, e.g., Figure 4). > ### Line 78: notation. We thank the Reviewer for pointing out the typo. We have fixed it in a revised version. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I appreciate your response. I keep my score.
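The review above describes a subgradient-ascent scheme with forced exploration over sampling proportions. A generic sketch of such a scheme (projected subgradient ascent on the probability simplex plus an exploration floor) is given below; this illustrates the general technique only, not the paper's exact MF-GRAD update, whose subgradients come from the lower-bound optimization problem, and the step size `eta` and floor schedule are placeholder choices:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def ascent_step(omega, subgrad, t, eta=0.1):
    """One projected-subgradient step on the sampling proportions omega,
    followed by a forced-exploration floor so every (arm, fidelity) pair
    keeps a vanishing-but-positive amount of probability mass."""
    omega = project_to_simplex(omega + eta / np.sqrt(t) * subgrad)
    floor = 1.0 / (len(omega) * (np.sqrt(t) + 1.0))
    omega = np.maximum(omega, floor)
    return omega / omega.sum()
```

Asymptotic-optimality proofs typically dictate specific step-size and exploration schedules; the ones above are only plausible defaults.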
Summary: The paper studies the problem of multi-fidelity best arm identification in the fixed confidence setting. The main contribution is an instance-dependent lower bound on the cost complexity. The authors demonstrate the bound's tightness by providing an algorithm with a matching upper bound in asymptotics, as well as an instance where the lower bound improves on the previous lower bound by a tighter constant. Strengths: The paper is well-presented in terms of its theorem statements and comparison with prior works. The theoretical result, specifically the lower bound, shows a clear improvement over previous work. Weaknesses: I think there is room to improve the presentation of the algorithm. Just from the algorithm box, it took some time to understand the relationships between $\tilde{\omega}$, $\bar{\omega}$, $\tilde{\pi}$, and $\pi'$, and I had to look for their definitions here and there. It would be great if these could be made clear within the algorithm box itself. Technical Quality: 3 Clarity: 3 Questions for Authors: Do the authors have any conjectures on the fixed-budget setting for multi-fidelity best arm identification? I'm not entirely sure this is a valid problem, but I wonder if it could be approached similarly to the normal track-and-stop method. By solving the optimization problem and sampling according to it without a stopping rule, it naturally turns into a fixed-budget algorithm. Can MF-GRAD be used in a similar manner? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No major limitation to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > ### Writing suggestion We thank the Reviewer for suggesting ways to improve the presentation of our algorithm. We have incorporated the symbols also within the algorithm box in a revised version of the manuscript. > ### "Do the authors have any conjectures on the fixed-budget setting for multi-fidelity best arm identification? I'm not entirely sure this is a valid problem, but I wonder if it could be approached similarly to the normal track-and-stop method. By solving the optimization problem and sampling according to it without a stopping rule, it naturally turns into a fixed-budget algorithm. Can MF-GRAD be used in a similar manner?" We thank the Reviewer for raising this point. We do believe that fixed-budget MF-BAI is actually a valid problem that deserves the attention of the community. In this case, the **budget should be expressed as a total cost available** to the learning system, and the agent might exploit the different fidelities to lower (an upper bound on) the probability of error of not recommending the optimal arm. Although this future research direction is of interest to the community, we currently conjecture that **further and ad-hoc algorithms are probably needed** to face the fixed-budget problem. Indeed, to the best of the authors' knowledge, fixed-budget algorithms usually follow different design principles compared to asymptotically optimal fixed-confidence ones. One of the main reasons behind this conjecture is that we expect the lower bound for the fixed-budget setting to differ from the one for fixed confidence. Finally, we agree that one could in principle apply MF-GRAD to fixed-budget problems as suggested by the reviewer (i.e., by removing the stopping rule and terminating when the budget is over). However, we are unsure which theoretical guarantees this approach would enjoy.
null
null
Rebuttal 1: Rebuttal: We thank the Reviewers for their efforts in reviewing our paper. We are happy that the reviewers have recognized our paper as a "clear improvement over previous work" (Reviewer bSBL) and that they appreciated the "non-trivial technical contributions to the state of the art in multi fidelity BAI" (Reviewer 97EK). In the following, we aim to address the remaining questions.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
An Information Theoretic Perspective on Conformal Prediction
Accept (poster)
Summary: The paper provides a series of upper bounds for the conditional entropy of the labels given the attributes. The right-hand side of the bounds depends on the coverage probability of the corresponding Conformal Prediction sets. The authors propose to use the bound as an alternative objective function for conformal training. Strengths: - The link between Conformal Prediction (CP) and List Decoding is inspiring, even if it is partly unclear how it is used in the bounds. - Using CP distribution-free and finite sample properties to compute empirical bounds is interesting and may have an impact beyond the CP community. - I like the idea of using the bounds for training CP algorithms. Weaknesses: - The authors may explain more intuitively the role of the arbitrary distribution $Q$. Later in the paper $Q$ seems to be linked to the underlying point-predictor. An explicit construction (in the Introduction) would help motivate the proposed approach. - The practical consequences of the authors' findings could be clarified better. - A few practical examples would be helpful. For example, the authors may add a concrete way to define $Q$ at the end of Section 3.1. - The technical contribution seems a relatively straightforward consequence of the inequality $H(Y|X) < H(w_Y(Y, X)|w_X(Y, X))$. - The challenging part of optimizing a CP algorithm is smoothing the sorting operation. The proposed training method does not seem to solve or address that problem. The authors should justify better the use of conditional entropy instead of ${\rm E}(|{\cal C}|)$. In particular, how an upper bound could be more efficient than the original objective function? Technical Quality: 3 Clarity: 2 Questions for Authors: - Is the prediction set in Equation 3 calibrated using $P$? - What do you mean by *to learn machine learning models that are more amenable to SCP*? - Where is the conditional entropy defined? Is the right-hand side of Equation 3 guaranteed to be positive? 
- Is $Q$ supposed to be an approximation of $P$ learned from data? - Is $W$ required to be a stochastic map? What happens if $W$ is deterministic? - Would it be possible to add a baseline trained in a standard way, i.e. without conformal training? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: I appreciate the authors addressed their work's limitations in a dedicated section. Perhaps, the paper misses a theoretical and intuitive justification for using the bounds for conformal training. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We address the main concerns below. > The authors may explain more intuitively the role of the arbitrary distribution $Q$. One can think of our results as a way to upper bound the (irreducible) data aleatoric uncertainty $H(Y|X)$ of the true data distribution $P$ by leveraging conformal prediction and another distribution $Q$. We present our results in all generality, as they hold for any distribution $Q_{X,Y} = P_X Q_{Y|X}$, but clearly $Q$ will be more informative and lead to tighter bounds if it is a good approximation to $P$. > Is $Q$ supposed to be an approximation of $P$ learned from data? In short, yes. In the conformal training experiments, we directly learn $Q_{Y|X}$ from data to tighten our bounds. > [...] the authors may add a concrete way to define $Q$ at the end of Section 3.1. In Section 3.2, specifically line 156, we comment on two good choices for $Q_{Y|X}$, namely the predictive model itself (the very same used to construct the prediction sets) or a uniform distribution, which lead to the model-based (eq. 5) and simple Fano (eq. 6) bounds, respectively. We will make this point more evident in the text. > The technical contribution seems a relatively straightforward consequence of the inequality $H(Y|X) < H(w_Y(Y,X)|w_X(Y,X)).$ Could you clarify what you mean by this inequality on conditional entropies? Unless we are missing something, it does not hold in general; naively taking $w_Y(Y,X) = Y$ and $w_X(Y,X)=Y$, we get $H(w_Y(Y,X)|w_X(Y,X))=0.$ Our results are proven using canonical information-theoretic inequalities like Fano and DPI, but they require a tailored choice of distributions as well as considering mathematical subtleties of including the marginal coverage guarantees of conformal prediction in the bound. > The paper misses a theoretical and intuitive justification for using the bounds for conformal training. 
We provide a motivation in section 4, lines 187 to 190, which can be summed up as follows. Our results upper bound the irreducible uncertainty of the true distribution $P$, as given by the conditional entropy $H(Y|X)$. Thus, if we fit a model $Q_{Y|X}$ to minimize our upper bounds, we can argue we push $Q_{Y|X}$ closer to the true conditional distribution $P_{Y|X}$, using the training data to decrease the "reducible" uncertainty as much as possible. Although we are not aware of previous work leveraging the same motivation, the same argument could also be used to justify the cross-entropy, which also upper bounds $H(Y|X)$, with equality when $Q_{Y|X}=P_{Y|X}$. > The challenging part of optimizing a CP algorithm is smoothing the sorting operation. The proposed training method does not seem to solve or address that problem. Our main contribution is a new connection between notions of uncertainty coming from information theory and conformal prediction. Conformal training is simply a useful application of that connection, where we demonstrate that our methods lead to equivalent or better results than the original ConfTr objective (we use the exact same relaxation of the sorting operator for all methods). The fact that we do not address what is perceived as the main challenge in conformal training does not mean our results are not relevant in that context. > The authors should justify better the use of conditional entropy instead of $E|C(X)|$. In particular, how an upper bound could be more efficient than the original objective function? We provide a justification in section 4 and clarify further in the answers above. The last point seems to be a misunderstanding of our results. We provide upper bounds to the conditional entropy $H(Y|X)$ and not $E|C(X)|$. We also show in the end of section 4 and appendix F.1, that the ConfTr objective, i.e. $\log E|C(X)|$, is also an upper bound to $H(Y|X)$, but a looser one than the simple Fano bound. 
Therefore, if we adhere to the motivation of minimizing an upper bound to $H(Y|X)$, our upper bounds should in theory give better results. > Is the prediction set in Equation 3 calibrated using $P$? Yes. Yet, many recent works have extended conformal prediction to more general settings where calibration and test points are not identically distributed (see e.g. [1,2]), and in principle our results can also be extended to this setting if the conformal prediction method in question still provides lower and upper bounds for $P(Y_{test} \in C(X_{test})).$ > What do you mean by to learn machine learning models that are more amenable to SCP? Conformal prediction sets ultimately depend on the characteristics of the predictive model used to construct them. For example, two models with the same accuracy could lead to different prediction sets, and in that sense, one could say the model which produces more informative (narrower) sets is more amenable to conformal prediction than the other. The idea of conformal training is exactly to steer the training process towards those models that are more amenable to conformal prediction. > Where is the conditional entropy defined? We will explicitly define $H(Y|X) = -E_{P_{XY}} [\log P_{Y |X}]$ in the text. > Is the right-hand side of Equation 3 guaranteed to be positive? Yes, the proof in section D.1 proves this. > Is $W$ required to be a stochastic map? What happens if $W$ is deterministic? The result (Theorem C.3) holds for deterministic maps as well. > Would it be possible to add a baseline trained in a standard way, i.e. without conformal training? Such a baseline is already included in all our results, namely models trained purely via the cross-entropy that we identify by the acronym CE. We will make this more clear in the text. ### References [1] Barber, Rina Foygel, et al. "Conformal prediction beyond exchangeability." The Annals of Statistics 51.2 (2023): 816-845. [2] Prinster, Drew, et al. 
"Conformal Validity Guarantees Exist for Any Data Distribution (and How to Find Them)." ICML 2024. --- Rebuttal Comment 1.1: Title: thank you for your answers Comment: The rebuttal clarifies my doubts. Sorry for the strict inequality typo in my question. I meant <= instead of <. After reading the authors' answers and the other reviewers' comments, I tend to confirm my score. But I would not vote against acceptance. --- Reply to Comment 1.1.1: Comment: We are grateful that the reviewers went through our answers and happy that we could address the reviewer's doubts. We are also happy that the reviewer is not voting against acceptance. In that light, we would like to kindly ask what needs to be changed in the paper so it passes the acceptance bar. We are under the impression that the current requests are addressable with minimal changes in the camera ready version, but we would be happy to be aware of bigger changes the reviewer has in mind.
Summary: This paper observes a connection between the efficiency of CP (tightness of its prediction sets) and information theory. It uses this to derive 3 bounds on a CP's efficiency depending on the entropy of the label given a data record H(Y|X). It then uses these bounds as the optimization objective for the underlying model (nonconformity scorer). The bounds are empirically shown to be valid (i.e., reasonable) training objectives. Strengths: - This paper is _truly_ well written: it explains extremely well prior literature as well as the concepts being presented. The appendix is also well-curated. - The idea behind this paper (to connect information theory and CP) is very interesting, and I'm sure it'll lead to more research. - The theoretical analysis is non-trivial and insightful. Weaknesses: - Bounds based on Fano's inequality may be loose. - Empirically, the method does not outperform SOTA; however, I do think there's more value in this work than just improving CP numerically. Technical Quality: 4 Clarity: 4 Questions for Authors: Thank you for your submission! I'm afraid I could not understand how the connection to list decoding was used when deriving your results. I can see this is being discussed in the appendix, but perhaps it'd be best to mention in the main body which ideas are borrowed from that theory, and how they're being used here. (Can you please clarify, in your rebuttal?) I did not quite understand why federated learning was introduced in this paper; I didn't feel like it was necessary, and it almost feels like an afterthought. Perhaps it'd be best to motivate it better in the introduction. "To the best of our knowledge, Bellotti [8] were the first to advance the idea of conformal training." Note that, concurrently to Bellotti, this idea was also discovered by Colombo and Vovk in "Training conformal predictors". 
While your work is the first one to deal with the interesting intersection between information theory and CP efficiency, please note that other works have looked at: 1. connecting CP efficiency to the goodness of the underlying algorithm; e.g.: - (Dhillon et al.) On the Expected Size of Conformal Prediction Sets - (Cherubin) How do the performance of a Conformal Predictor and its underlying algorithm relate? and also 2. how to best characterize the efficiency of a CP: (Vovk et al.) Criteria of efficiency for conformal prediction. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: These were appropriately described by the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful comments and are encouraged by the positive feedback and recognition of our work's significance. We answer the remaining questions below. > Bounds based on Fano's inequality may be loose. That is definitely true of the simple Fano bound, where we assume an uninformative uniform distribution over the labels. However, the model-based Fano bound should be tight for a model $Q$ that approximates the true distribution $P$ well enough. > Empirically, the method does not outperform SOTA; however, I do think there's more value in this work than just improving CP numerically. While there was no clear winner among our bounds, they still improve on ConfTr variants in two ways: (i) the DPI and model-based Fano bounds proved more effective than ConfTr on training complex models from scratch (e.g. in the CIFAR results in Table 1) and (ii) our bounds also seem to be more robust than ConfTr, since even after fine-tuning hyperparameters as described in Section G.1, ConfTr often failed to converge properly (e.g. ConfTr performance on MNIST in Table 1 was worse than expected due to training instability). > I'm afraid I could not understand how the connection to list decoding was used when deriving your results. I can see this is being discussed in the appendix, but perhaps it'd be best to mention in the main body which ideas are borrowed from that theory, and how they're being used here. (Can you please clarify, in your rebuttal?) Indeed, the main results can be derived without resorting to list decoding. We mostly derive our results directly using the data processing or Fano's inequality, as shown in Appendix D. We highlight the connection to list decoding because it is quite natural and might bring more interesting results to both fields in the future. 
In fact, a slightly looser version of our simple Fano bound can be obtained directly from Fano's inequality for list decoding, see Theorem C.6 and Proposition C.7 in the appendix. > I did not quite understand why federated learning was introduced in this paper; I didn't feel like it was necessary, and it almost feels like an afterthought. Perhaps it'd be best to motivate it better in the introduction. The information theoretic perspective also facilitates the usage of side information in conformal prediction, as we discuss in Section 5. Federated learning just happens to be a natural and elegant application of both conformal training and side information, as the global objective factorizes nicely into local objectives that can be computed locally in each device with the device ID as side information. This scenario can be quite useful in practice, and is perhaps a more convincing use case for side information than assuming access to superclass information like in the experiments in Table 2. > Note that, concurrently to Bellotti, this idea was also discovered by Colombo and Vovk in "Training conformal predictors". Thanks for bringing that work to our attention. We will definitely cite and discuss it in the final version of the paper. > Related work Thanks for the extra references. We will discuss these contributions in the paper, and the work of Dhillon et al. in particular is also relevant to our experiments in Appendix G.3, where we also use our results to lower bound the expected prediction set size. --- Rebuttal Comment 1.1: Comment: Thank you for your response. > Indeed, the main results can be derived without resorting to list decoding. I do appreciate the intellectual honesty in the above comment. But if that's the case, you may want to: 1) remove the list decoding discussion (sorry!), or put it in a separate section that isn't part of the main discussion (e.g., a section before conclusions), 2) or, try and rephrase your proofs to actually use it. 
(If you do go down this route, you should of course mention that your results don't actually _need_ the list decoding theory.) The latter would of course be preferable, especially because the connection to list decoding is indeed quite interesting. Please treat the above as a recommendation, not a requirement. I will keep my positive score. --- Reply to Comment 1.1.1: Comment: Thanks for the suggestions. We will move the discussion on list decoding to the final stages of the paper to improve readability.
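For readers curious about the list-decoding connection discussed in this thread, the generic Fano-style bound for a list decoder with error probability at most $\alpha$ has the shape $H(Y|X) \le h_b(\alpha) + (1-\alpha)\,\mathbb{E}[\log |C(X)|] + \alpha \log K$. The sketch below computes this generic form; the paper's actual bounds (eqs. 5 and 6) may carry different constants, so treat this only as an illustration of the shape:

```python
import numpy as np

def binary_entropy(p):
    # h_b(p) = -p*log(p) - (1-p)*log(1-p), in nats
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def list_fano_bound(alpha, mean_log_set_size, num_classes):
    # Generic Fano-style upper bound on H(Y|X) for a list decoder (here: a
    # conformal set C(X)) that misses the true label with probability <= alpha.
    return (binary_entropy(alpha)
            + (1 - alpha) * mean_log_set_size
            + alpha * np.log(num_classes))

# Example: 99%-coverage sets with average log-size log(1.5), K = 10 classes.
bound = list_fano_bound(alpha=0.01, mean_log_set_size=np.log(1.5), num_classes=10)
# For this example the bound is well below log(10), the uniform-label entropy.
```

Small sets at high coverage thus certify a small conditional entropy, which is the intuition behind using such bounds as conformal training objectives.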
Summary: The paper aims to relate the uncertainty measured by conformal prediction to the uncertainty measured by the conditional entropy $H(Y|X)$. Towards this end, they make the following contributions: 1. They observe that the miscoverage quantity from conformal prediction is the same as the decoding-error quantity from list decoding in information theory. 2. They exploit this observation to upper bound the conditional entropy $H(Y|X)$ in terms of quantities from conformal prediction. 3. They use these bounds as objectives for conformal training in order to maximize efficiency, i.e. keep the prediction sets as small as possible. 4. They empirically verify that their modifications to conformal training using their upper bounds improve efficiency. Strengths: **Originality** The problem of relating the uncertainty measured in information theory to that from conformal prediction is a very interesting avenue of exploration. Weaknesses: **Clarity and Quality** In general, it is confusing to interpret what the results are really saying. In particular, I have the following confusion about the theoretical result that remains after reading the paper: 1. $H(Y|X)$ only measures aleatoric uncertainty whereas the conformal interval captures both aleatoric and epistemic uncertainty. So then, it is not surprising that the conditional entropy would be upper bounded by something in terms of the conformal size $|C(X)|$. Is this the correct interpretation for the Propositions, or is there something else going on here? For the empirical result: 2. In Section 4, the authors state that minimizing the RHS of the Fano bound is equivalent to the ConfTr objective from [7]. However, in the tables, the rows corresponding to ConfTr and Fano admit different efficiency - what is the cause of this? 3. What is the intuition for why minimizing the RHS of the bounds is better than minimizing $|C(X)|$ directly? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. 
Could you answer W1 above, and in general connect the theoretical results to epistemic/aleatoric uncertainty? I am happy to change my assessment/confidence if the authors are able to alleviate my concerns through discussion. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes --------- EDIT POST REBUTTAL: After discussing with the authors, I have increased my score to 6. The authors have done a great job in the discussion of contextualizing the results in terms of epistemic&aleatoric uncertainty. My hope is that the authors in the final version will: 1. Modify the exposition of the paper to discuss the role of these categories of uncertainty. 2. In the discussion section, be upfront about the looseness of the bounds when epistemic uncertainty is high. 3. In the discussion, mention the challenges to quantifying the epistemic uncertainty theoretically. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and interesting questions, which we discuss below. > $H(Y|X)$ only measures aleatoric uncertainty whereas the conformal interval captures both aleatoric and epistemic uncertainty. So then, it is not surprising that the conditional entropy would be upper bounded by something in terms of the conformal size $|C(X)|$. Is this the correct interpretation for the Propositions, or is there something else going on here? This intuition is absolutely correct. Yet, even though the cardinality of the prediction set provides an intuitive notion of uncertainty, quantitatively connecting it to the aleatoric or even total uncertainty is non-trivial. Importantly, the prediction set size varies with the user-defined miscoverage rate $\alpha$, whereas arguably any notion of uncertainty should not. Our upper bounds account for that, and $\alpha$ comes up quite naturally in the derivation of all three bounds. Moreover, it is worth noticing that $|C(X)|$ only appears explicitly in the simple Fano bound, which is expected to be the least tight among our three bounds, further illustrating the difficulty in connecting the cardinality of the prediction set to other established notions of uncertainty. > In Section 4, the authors state that minimizing the RHS of the Fano bound is equivalent to the ConfTr objective from [7]. However, in the tables, the rows corresponding to ConfTr and Fano admit different efficiency - what is the cause of this? Equation (8) is equivalent to the ConfTr objective but not to the simple Fano bound of equation (6), which is why the results are different. We will update the wording in section 4 to make this more evident. 
In section F.1, we actually prove the simple Fano bound is a tighter upper bound to the conditional entropy than the ConfTr objective, i.e., $H(Y|X) \leq (6) \leq (8).$ > What is the intuition for why minimizing the RHS of the bounds is better than minimizing $|C(X)|$ directly? From a theoretical perspective, we argue in Section 4 that minimizing an upper bound to the data aleatoric uncertainty, as captured by $H(Y|X)$, is a reasonable optimization objective. This motivation also justifies the cross-entropy and $\log |C(X)|$ as learning objectives, but from this angle, our upper bounds are actually tighter bounds to $H(Y|X)$ and thus likely to be more effective: The DPI bound is provably tighter than the cross-entropy (see Remark D.1 in the appendix) and the simple Fano bound is tighter than the ConfTr objective that minimizes $\log |C(X)|$ directly as mentioned in the answer above. From a practical perspective, $\log |C(X)|$ alone might not provide a strong enough learning signal to fit complex models (see e.g. ConfTr results for CIFAR in Table 1) and an extra hyperparameter needs to be introduced to balance between optimizing $\log |C(X)|$ and some other loss function like the cross-entropy which gives the ConfTr class objective. In contrast, our upper bounds, especially DPI and Model-based Fano ones, already capture both efficiency and predictive power of the model, with no need for extra hyperparameters. Moreover, in our experiments, our upper bounds proved to be more robust, whereas ConfTr and ConfTr class failed to converge in some cases even after hyperparameter fine-tuning (see, for example, ConfTr results on THR/MNIST in Table 1). > Could you connect the theoretical results to epistemic/aleatoric uncertainty? 
That is an important point, and one of our objectives in this paper was indeed to improve the current understanding of how conformal prediction relates to other forms of uncertainty quantification, where the separation between epistemic and aleatoric uncertainty is already more established (albeit still not solved [1]). To the best of our knowledge, this separation has never been made in the conformal prediction literature and remains an entirely open question [2]. The main challenge in the context of conformal prediction is in characterizing the epistemic uncertainty without further information or assumptions on the model. Since epistemic uncertainty is a property of the model itself and not the data, it requires an understanding of the model's hypothesis space, for example via a distribution over the model parameters in the Bayesian setting; something that conformal prediction ignores or abstracts away via the scoring function. That is why in this paper we focus on the data aleatoric uncertainty $H(Y | X)$ as its connection to conformal prediction is more tangible, as evidenced by our upper bounds.

### References [1] Wimmer, Lisa, et al. "Quantifying aleatoric and epistemic uncertainty in machine learning: Are conditional entropy and mutual information appropriate measures?." Uncertainty in Artificial Intelligence. PMLR, 2023. [2] Hüllermeier, Eyke, and Willem Waegeman. "Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods." Machine Learning 110.3 (2021): 457-506. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I thank the authors for clarifying my questions. I think that the goal of relating conformal intervals to epistemic and aleatoric uncertainty is a very well-motivated goal, and conceptually very useful for the ML community -- given the application of UQ in safety-critical applications. Additionally, it is a theoretically challenging pursuit, so I commend the authors on their efforts. However, I still think the results are confusing to interpret, and do not appropriately shed much light on this motivating question. Additionally, the discussion of the authors in the rebuttal around AU and EU appears a bit confused. In particular, the authors mention that "epistemic uncertainty is a property of the model itself and not the data". However, even the paper [1] that the authors link mentions that: > EU then refers to a lack of knowledge about the discrepancy between $p$ and $\hat{p}$ that can – in contrast to AU – be alleviated by gathering more training samples. This is the more standard definition of EU that I am aware of. So either the authors are using the terminology incorrectly or defining things differently, but these new definitions are not clear. For these reasons, I will stick with my score. I would encourage the authors to motivate their paper from the perspective of AU, EU and frame the results explicitly in these terms as well, by addressing these confusions. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for engaging in the discussion. 
First of all, we must clarify that we share the same notion of epistemic uncertainty; it is indeed given by the gap between $p$ and $\hat p$, and can be alleviated with more training samples. In that sense, for a fixed $p$, the epistemic uncertainty is only a function of the model $\hat p$. We apologize if stating epistemic uncertainty is a property of the model created confusion, but such a characterization is also common in the literature (see e.g. the discussion in the introduction in [3]). While the general definition of epistemic uncertainty might be simple on the surface, actually quantifying this gap does involve assumptions about the model. The same reference quoted by the reviewer [1], states in section 2.1 > To capture EU [epistemic uncertainty], the learner must be able to express uncertainty about $\theta$ [model parameters], which can be accomplished by a (second-order) probability distribution over (first-order) distributions $\theta$. Conformal prediction methods make no assumptions about the underlying machine learning model and typically leverage a single predictor/scoring function. In that setting, there is no clear way to represent the uncertainty over the model parameters (or more generally over the model's hypothesis space) and capture epistemic uncertainty, as also highlighted in [2]: > Machine learning methods for probability estimation, i.e., for training a probabilistic predictor, often commit to a single hypothesis learned on the data, thereby ignoring epistemic uncertainty. As we mentioned in our rebuttal, that is why we focused on the data aleatoric uncertainty, since quantifying epistemic uncertainty would require assumptions about the model hypothesis space that, at least for now, are not directly connected to any conformal prediction method, and thus are outside the scope of this work. 
We share the reviewer's appreciation for the goal of connecting conformal prediction to other notions of uncertainty, and we would argue that relating it to the data aleatoric uncertainty, as we have in this paper, is a valid contribution and a solid first step towards that goal. Further, we would like to emphasize that there are important contributions to uncertainty quantification without epistemic and aleatoric decomposition, and conformal prediction is a notable example. We see our work within this line of research, and as we showed in the paper, our results already proved useful in practical applications, namely conformal training and incorporation of side information. We hope to have addressed the reviewer's concerns around this topic. ### Extra references [3] Bengs, Viktor, Eyke Hüllermeier, and Willem Waegeman. "Pitfalls of epistemic uncertainty quantification through loss minimisation." Advances in Neural Information Processing Systems 35 (2022): 29205-29216.
Summary: This work develops an upper bound on the conditional entropy H(Y|X) of the data generation process via an information-theoretic inequality. It yields a new objective for conformal training and an upper bound on the inefficiency (prediction set size) of a trained model. Including side information to improve the efficiency of CP is also introduced via the proposed upper bound. Strengths: (1) Detailed background information about conformal prediction helps readers build a basic understanding easily. (2) Extensive theoretical work is presented, demonstrating the soundness of the proposed upper bound on the inefficiency of CP for a given model. (3) Five datasets and three baselines are included in the experiments. Standard deviations of the experimental results are presented as well. Weaknesses: (1) Lack of clear notation clarification. For example, the subscripts of Q in Eq. (5) are not explained, making it hard to understand. Also, even if line 136 explains the meaning of H(Y|X), I suggest writing it out explicitly for clarity. (2) Lack of a full algorithm representation. The work only states that the upper bound is included as an additional loss term in line 186, but there is not enough explanation of how the proposed algorithm really works. (3) The proofs are not self-contained. For instance, the paper leads readers to Appendix D.1 for the proof of the DPI bound. And in Appendix D.1, line 806 further leads readers to Theorem C.4, which is not proved in detail. Technical Quality: 3 Clarity: 2 Questions for Authors: The reason for limiting alpha to (0,0.5) is not explained in Proposition 3.1, as Theorem 2.1 works for any alpha in (0,1). Setting alpha=0.01, as stated in the caption of Table 1, is a very limited experimental setup. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and useful feedback. We address the main concerns below. ## Weaknesses > Lack of clear notation clarification. For example, the subscripts of Q in Eq. (5) is not explained, making it hard to understand The subscripts of $Q$ (and $P$) indicate the random variable described by the distribution, as explained in the first paragraph of Section 2. Following the same logic, we also include conditioning in the notation. For example, $Q_{Y | X, C(X), Y \in C(X)}$ is a conditional distribution of $Y$ given input variable $X$, prediction set $C(X)$ and the event of coverage $Y \in C(X).$ We will clarify that in the text. > I suggest to write it [$H(Y|X)$] explicitly for more clearness. We will explicitly define $H(Y|X) = -E_{P_{XY}} [\log P_{Y |X}]$ in the text. > Lack of full algorithm representation. The work only states that including the upper bound as an additional loss term in line 186, but there is no enough explanation on how the proposed algorithm really works. There is a detailed discussion of conformal training and of our experimental setup in Appendices F and G, respectively. Although we have tried to include enough details on the workings of our algorithms, we also agree that a full algorithmic description of the conformal training and federated learning procedures could be helpful and will add those to the appendix. At the end of this answer, we provide a sketch of how conformal training works for each batch, which is similar to [2]. We must highlight, however, that the main contribution of the paper is the new connection between notions of uncertainty given by conformal prediction and information theory. Conformal training is simply a notable application where we demonstrate this connection to be quite useful in practice. > Proof is not self-contained. For instance, the paper leads readers to Appendix D.1 for the proof of DPI bound.
And in Appendix D.1, line 806 further lead readers to Theorem C.4, which is not proved in detail. We separated the theoretical results into different sections, concentrating the most important and new results in appendix D. The remaining results in appendices B and C are well-known in the information theory or machine learning communities, and we state them mostly for the sake of completeness. We appreciate the feedback though and are happy to state Theorem C.4 just before Proposition 3.1 if that improves readability. Regarding Theorem C.4 itself, it is a direct application of the classical data processing inequality for $f$-divergences (Theorem C.3) to the conformal prediction procedure, and that is why the proof there is succinct. However, we are happy to expand the proof for clarity, as we show at the end of this answer. ## Questions > The reason of limiting alpha in (0,0.5) is not explained in Proposition 3.1, as Theorem 2.1 works for any alpha belongs to (0,1). This is explained in equation (19) in the appendix and lines 813 to 817, and is due to the concavity of the binary entropy function. Note that this is not restrictive, since $\alpha$ values are typically small, and even on the off chance of the user picking $\alpha > 0.5$, we can also get an upper bound by replacing $h_b(\alpha)$ with $h_b(\alpha_n)$. To be precise, for $\alpha > 0.5$, the binary entropy is non-decreasing and preserves the inequality, so we can apply the same argument in (19) to the upper bound of conformal prediction $P(Y_{test} \in C(X_{test})) \leq 1-\alpha_n \Rightarrow h_b(P(Y_{test} \in C(X_{test}))) \leq h_b(1-\alpha_n) = h_b(\alpha_n)$. > Setting alpha=0.01, as stated in the caption of Table.1, is a very limited experiment setup. We follow the same setup of the main reference in conformal training [2], where the models were also trained only with $\alpha=0.01$.
Note that we also evaluate our models under different $\alpha$ values at test time in Table 10 in the appendix, where we also see improved results for other common values, $\alpha=0.05$ and $\alpha=0.1$. ## Sketch of Conformal Training Algorithm
```
for each batch B:
  1. Randomly split B into B_cal and B_test
  2. Use B_cal to compute the empirical 1-alpha quantile using differentiable sorting
  3. Construct a prediction set C(x) for each x in B_test using a smooth assignment (see eq. 29)
  4. At this point we have all we need to compute any of the upper bounds to H(Y|X) in a differentiable manner
  5. Update model parameters to minimize the upper bound to H(Y|X) via gradient descent
```
## Detailed proof of Theorem C.4 Consider the conditional distribution $P_{E|X,Y}$ mapping distributions $P_{X,Y}$ and $Q_{X,Y}$ to $P_E$ and $Q_E$, respectively. From here, it suffices to apply Theorem C.3, but we can break it down further as follows. Consider the joints induced by this mapping, $P_{E,X,Y} = P_{E|X,Y}P_{XY}$ and $Q_{E,X,Y} = P_{E|X,Y}Q_{XY}$, and the following two basic properties of $f$-divergences as applied to this specific case (see e.g. [1]): $$D_f(P_{E,X,Y} || Q_{E,X,Y}) = D_f(P_{X,Y} || Q_{X,Y})$$ and $$D_f(P_{E,X,Y} || Q_{E,X,Y}) \geq D_f(P_{E}||Q_{E}).$$ Combining these two properties yields $D_f(P_{X,Y} || Q_{X,Y}) \geq D_f(P_{E}||Q_{E})$, which completes the proof. ## References [1] Polyanskiy, Y. and Wu, Y. "Lecture notes on information theory". Lecture Notes for ECE563 (UIUC), 6(2012-2016):7, 2014. [2] Stutz, David, et al. "Learning Optimal Conformal Classifiers." ICLR 2022. --- Rebuttal Comment 1.1: Title: Rebuttal received and read Comment: The limitation of using just alpha=0.01 is addressed by Table 10. I thank the authors for pointing this out. I raised my rating to 5. --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful review.
We're glad our clarifications were helpful and appreciate you raising your score and taking a more positive outlook on the paper.
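As a companion to the conformal training sketch in the rebuttal above, the quantile-and-threshold core of split conformal prediction can be written out in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the Dirichlet draws stand in for a trained model's softmax outputs, and the hard quantile and threshold replace the differentiable sorting and smooth set assignment used during training.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
n_cal, n_classes = 500, 10

# Hypothetical calibration data: Dirichlet draws stand in for softmax outputs.
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
y_cal = rng.integers(n_classes, size=n_cal)
scores_cal = 1.0 - probs_cal[np.arange(n_cal), y_cal]  # score s(x, y) = 1 - p_model(y | x)

# Conformal quantile with the (n + 1) finite-sample correction.
k = int(np.ceil((n_cal + 1) * (1 - alpha)))            # k-th smallest calibration score
q_hat = np.sort(scores_cal)[k - 1]

# Prediction set for a test point: every label whose score falls below the quantile.
probs_test = rng.dirichlet(np.ones(n_classes))
pred_set = np.where(1.0 - probs_test <= q_hat)[0]

# Empirical coverage on exchangeable test data should land near 1 - alpha.
n_test = 2000
probs_t = rng.dirichlet(np.ones(n_classes), size=n_test)
y_t = rng.integers(n_classes, size=n_test)
coverage = np.mean(1.0 - probs_t[np.arange(n_test), y_t] <= q_hat)
```

With exchangeable calibration and test scores, the empirical coverage concentrates near the nominal 1 - alpha = 0.9, which is exactly the guarantee the conformal quantile step in the sketch is designed to provide.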
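The restriction to $\alpha \in (0, 0.5)$ discussed in the rebuttal above rests on the binary entropy $h_b$ being increasing on that interval (and symmetric about 1/2); both facts are easy to check numerically. The helper below is a generic sketch, not code from the paper:

```python
import numpy as np

def h_b(p):
    # Binary entropy (in nats): h_b(p) = -p log p - (1 - p) log(1 - p).
    return -p * np.log(p) - (1.0 - p) * np.log1p(-p)

# h_b is strictly increasing on (0, 1/2), so a miscoverage probability bounded
# by alpha < 1/2 implies h_b(miscoverage) <= h_b(alpha).
alphas = np.linspace(1e-4, 0.5 - 1e-4, 1000)
vals = h_b(alphas)

# Symmetry h_b(1 - a) = h_b(a), which is what the alpha > 1/2 argument relies on.
sym_gap = np.max(np.abs(h_b(1.0 - alphas) - vals))
```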
NeurIPS_2024_submissions_huggingface
2024
Biologically Inspired Learning Model for Instructed Vision
Accept (poster)
Summary: This paper proposes a novel and biologically plausible alternative to backpropagation. The model is especially unique in how it incorporates visual attention through top-down processing mechanisms. Specifically, the model contains bottom-up (BU) and top-down (TD) networks that are symmetric to each other. The weights in the model are updated via a Counter Hebbian rule that strengthens the weight between a pre-synaptic and post-synaptic neuron when firing in the pre-synaptic neuron precedes firing in the neuron that is *counter* to the post-synaptic neuron (i.e., the neuron that is symmetric to, and laterally connected to, the post-synaptic neuron). In Experiment 1, the authors demonstrate that the Counter Hebbian learning rule approximates backpropagation. In Experiment 2, the authors incorporate guided (or instruction-based) learning by adding an additional head (a one-hot representation of the relevant task) to the TD network. With this additional head, training now relies on 3 passes through the network. First, the task-head propagates task information downward. Second, the BU network (now guided by the task information from the first pass) processes the image to generate a prediction of the correct answer or label for the image. Third, the error-head propagates the error signal downward through the network. Overall, the model is biologically motivated, incorporates top-down guidance that is unified (the same TD network is used for task information and error information), and presents a novel alternative to classic Hebbian learning. This is important because classic Hebbian learning presumes that a cell's firing will somehow propagate back to its dendrites to modify the synaptic weight, but this assumption is not necessary for Counter Hebbian learning. The use of the TD task-related head is important because visual guidance is a fundamental element of biological vision and is increasingly incorporated in AI models. Strengths: This is a great paper.
The model offers an attractive alternative to backpropagation that is both biologically inspired and reliant on local synaptic learning. (While the authors do not explicitly say this, it also offers a potential biological basis for why backpropagation methods work; i.e., because they approximate the Counter Hebbian learning described here.) The Counter Hebbian learning procedure is novel and incorporates both BU and TD processes (which are critical biologically, but usually conflated in AI models). The model offers a simple way to incorporate task-based instruction/attention into visual processing. I found it very clever that the model uses the same TD network for task-instruction and error propagation (just with different heads). It's both computationally elegant to use a single TD stream and more biologically realistic. I only caught this in the Appendix, but the authors also note that their models do not rely on any computational "tricks" ("such as dropout layers, regularization, and special optimizers"). This is a major strength as well, even though it is not highlighted in the main manuscript. Finally, the authors are able to explain a somewhat complex set of ideas (i.e., Counter Hebbian learning + TD processing + BU processing + multiple model passes) very clearly and simply. Weaknesses: These weaknesses are all minor constructive criticisms: 1) The references are formatted in a style that makes the paper very hard to read. Currently the references are formatted as Author [year], but it would be easier to parse the sentences if they were formatted as either bracketed numbers or as (Author, year). 2) There are some typos throughout (e.g., page 1 line 26, page 2 line 39, page 2 line 60, page 4 line 181, page 5 lines 190-192). 3) Some of the abbreviations are unnecessary (they require more working memory from the readers, but are only used 1 or 2 times). The specific abbreviations I noticed that were rarely actually used in text were BP, FA, and TP.
4) Some of the discussion of backpropagation in Related Work could be explained more clearly (specifically page 5 lines 94-98). 5) Figure 3 in the appendix is difficult to interpret (visually). The written explanation of the figure is quite clear, but the visual doesn't make it any clearer (and may actually make it slightly less clear). 6) The graphs in the appendix need axis labels to be interpretable (figures 4-16). Technical Quality: 4 Clarity: 3 Questions for Authors: Q1: How does this model compare--in terms of scalability--to traditional backpropagation methods? Does this model take much more compute time and/or resources than the state-of-the-art methods they tested? Q2: Do the authors believe that this model could be extended for use in unsupervised/self-supervised learning? In biological visual systems, visual attention is also guided by saliency and not just task demands. Could future versions of the model incorporate saliency-based attention as well? To be clear, the answer to this question would not change my opinion, but I'm curious about their thoughts on it. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes, the authors have adequately addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
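The three-pass procedure described in the summary above can be sketched for a single weight matrix. Everything here is an illustrative reconstruction rather than the authors' implementation: the layer is linear, the TD "gating" is a simple threshold, and the error head is a squared-error difference.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hi, n_lo = 5, 8
W = 0.1 * rng.normal(size=(n_hi, n_lo))  # BU weights; the TD stream reuses W.T (symmetric case)
x = rng.normal(size=n_lo)                # bottom-up input (e.g. image features)
task = np.zeros(n_hi)
task[2] = 1.0                            # one-hot task head

# Pass 1: the task head propagates down the TD stream.
td_signal = W.T @ task

# Pass 2: the BU pass is gated by the TD activity from pass 1.
gate = (td_signal > 0).astype(float)
y = W @ (x * gate)

# Pass 3: the error head propagates down and drives a local Counter-Hebbian
# update: a product of pre-synaptic BU activity and TD counter-neuron activity.
target = np.zeros(n_hi)
target[2] = 1.0
delta = y - target
dW = np.outer(delta, x * gate)
W -= 0.01 * dW
```

The point of the sketch is only the flow of information: task signal down, gated visual signal up, error signal down, with the update computed from locally available quantities.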
Rebuttal 1: Rebuttal: Thank you for your comprehensive and detailed review of our paper. Your feedback will be used, and we will update the final version of the paper according to your comments. > W1 Thank you for the constructive comment; we will change the format to make it easy to parse. > W2 Thank you for the catch, we will fix all the typos in the final version. > W3 Thanks, we will remove unnecessary abbreviations in the final version (e.g. FA and TP). > W4 Thanks, we will update the final version to clarify this discussion. We will be glad to provide the revised paragraph in further correspondence. > W5 Following your comment, we think that it will be better to just use the text (possibly with minor changes) and remove the figure. The reason for the figure was to visualize the possibility of providing two different inputs in the same top-down stream: an error signal for BP and task-instruction for guided vision. > W6 Thank you for pointing this out; the x-axis represents the epochs, and the y-axis corresponds to the title of the sub-figure (e.g. ‘train loss’). We will add axis labels and increase the size of the graphs so they will be more interpretable in the final version. > Q1 We considered a couple of scenarios, and we think that the overall requirements in compute time and resources will be similar to traditional backpropagation, since both use a single forward (bottom-up) pass and a single backward (top-down) pass. In terms of memory, CH requires storing two sets of weights, for the BU and TD networks, but in a computer simulation it is possible to enforce symmetric weights and have similar memory requirements. We conducted an experiment for empirical comparison, training a ResNet18 architecture on CIFAR, an unguided classification task (trained on a single NVIDIA A10). We compared the memory requirement and training time (50 epochs), with the results shown below (times in minutes:seconds).
CH Sym uses a single set of model weights and CH Asym uses two sets.
| Method | Time | Memory |
|-----------|-----------|-----------|
| BP | 9:11 | 1,122 MB |
| CH Sym | 9:00 | 1,650 MB |
| CH Asym | 9:25 | 1,832 MB |
The results show that the computation time is roughly the same for all methods, but the memory requirement is slightly higher for asymmetric CH. Note that the low BP memory is due to the highly optimized PyTorch implementation of backpropagation. The example above is an unguided classification task. In the guided setting, the comparison becomes more complex because there is currently no standard model for guided vision. In general, in order to perform guided vision, a model needs to process the guidance information, and then integrate the results with the visual processing. To learn such a model with traditional backpropagation, a backward pass needs to propagate throughout the entire process, including both the guidance and visual processing. Our model suggests an efficient scheme for the guided scenario since it reuses the same TD stream for both the error propagation and the instruction processing, reducing the complexity and memory requirement of the model. > Q2 We think that it will be possible to incorporate bottom-up saliency-based attention in the model, and also to create a useful integration of bottom-up saliency with top-down attention. There has been some work on combining BU saliency with TD attention; for example, [1] showed in fMRI studies evidence for additive combination of BU saliency and TD attention in early visual areas, and another study [2] found fMRI evidence for combined effects of BU saliency and TD attention specifically in visual area V4. It seems that our proposed model can naturally combine the two at multiple levels, and not necessarily in an additive manner, but also in other ways, which can be under TD control.
In our model, BU saliency signals will propagate along the BU stream, and at the same time TD task signals will propagate along the TD stream. The cross-stream connections can make it possible, for example, to guide the top-down instruction signals towards high-salience locations, e.g. to segment or classify specifically regions of high saliency. In the opposite direction, TD signals could modify the saliency map, in a way that depends on the task instruction, rather than using a fixed additive combination. It could be interesting to explore such possibilities in the future and compare the model with the available empirical evidence. [1] Poltoratski S, Ling S, McCormack D, Tong F. Characterizing the effects of feature salience and top-down attention in the early visual system. J Neurophysiol. 2017, 118(1):564-573. [2] Melloni L, van Leeuwen S, Alink A, Müller NG. Interaction between bottom-up saliency and top-down control: how saliency maps are created in the human brain. Cereb Cortex. 2012, (12):2943-52. --- Rebuttal Comment 1.1: Comment: Thank you again for your valuable feedback. In response to your comments on the related work, we have revised the text to improve clarity. Below is the updated version: The Predictive Coding approach suggests that in visual processing, feedback connections carry predictions of neural activities, whereas feedforward streams carry the residual errors between these predictions and the actual neural activities [1]. It has been demonstrated that when predictive coding is used to train a neural network in a supervised learning setting, it can produce parameter updates that approximate those computed by backpropagation [2, 3]. These results have been further developed under additional assumptions, leading to predictive coding variants that produce the exact same parameter updates as backpropagation [4, 5].
However, the modifications necessary for these methods to approximate or be equivalent to backpropagation are criticized for reducing their biological plausibility [6, 7]. [1] Rao, Rajesh PN, and Dana H. Ballard. "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects." Nature neuroscience 2.1 (1999): 79-87. [2] Whittington, James CR, and Rafal Bogacz. "An approximation of the error backpropagation algorithm in a predictive coding network with local hebbian synaptic plasticity." Neural computation 29.5 (2017): 1229-1262. [3] Millidge, Beren, Alexander Tschantz, and Christopher L. Buckley. "Predictive coding approximates backprop along arbitrary computation graphs." Neural Computation 34.6 (2022): 1329-1368. [4] Song, Yuhang, et al. "Can the brain do backpropagation?---exact implementation of backpropagation in predictive coding networks." Advances in neural information processing systems 33 (2020): 22566-22579. [5] Salvatori, Tommaso, et al. "Reverse differentiation via predictive coding." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 7. 2022. [6] Rosenbaum, Robert. "On the relationship between predictive coding and backpropagation." Plos one 17.3 (2022): e0266102. [7] Golkar, Siavash, et al. "Constrained predictive coding as a biologically plausible model of the cortical hierarchy." Advances in Neural Information Processing Systems 35 (2022): 14155-14169. --- Rebuttal Comment 1.2: Comment: Thank you for your thoughtful engagement with my comments and questions! I remain quite enthusiastic about this work. My score was already high (8), and I don't think the work justifies a higher score (9, which would imply it is "groundbreaking"), but I do continue to advocate for its acceptance.
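The symmetric case discussed in this thread (Counter-Hebbian updates reproducing backpropagation when the TD stream reuses the transposed BU weights) can be checked on a toy two-layer ReLU network. The architecture and loss are assumptions chosen for illustration, not the authors' code; the check compares the local outer-product updates against a finite-difference gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x, t = rng.normal(size=3), rng.normal(size=2)

# Bottom-up pass, storing activities for the later Hebbian products.
h_pre = W1 @ x
h = np.maximum(h_pre, 0.0)
y = W2 @ h

# Top-down pass: with symmetric weights (W.T), the TD stream carries the error,
# and the ReLU gates are set by the stored BU activity of the counter neurons.
delta2 = y - t                          # dL/dy for L = 0.5 * ||y - t||^2
delta1 = (W2.T @ delta2) * (h_pre > 0)

# Counter-Hebbian updates: outer products of TD activity and pre-synaptic BU activity.
dW2 = np.outer(delta2, h)
dW1 = np.outer(delta1, x)

# Check against a central finite-difference gradient of the loss.
def loss(W1_, W2_):
    return 0.5 * np.sum((W2_ @ np.maximum(W1_ @ x, 0.0) - t) ** 2)

eps = 1e-6
num_dW1 = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        P = np.zeros_like(W1)
        P[i, j] = eps
        num_dW1[i, j] = (loss(W1 + P, W2) - loss(W1 - P, W2)) / (2 * eps)
```

In the symmetric setting the local outer-product updates coincide with the backpropagation gradient, which is what the paper's Experiment 1 demonstrates at scale; the asymmetric and weakly symmetric cases discussed above relax this exact equality.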
Summary: This manuscript proposes the first biologically motivated learning model for instructed vision, integrating bottom-up and top-down pathways mimicking the visual cortex. The model employs the TD pathway both for guiding attention and for propagating signals. Experiments demonstrate its capability across multiple tasks, achieving competitive performance compared with current AI models. Strengths: The paper is generally well-organized with a solid theoretical foundation. Weaknesses: NA Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The potential limitations have been thoroughly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and your succinct and precise summary. We welcome any comments you have on the additional discussion in the global rebuttal.
Summary: This work focuses on proposing an efficient biologically plausible learning framework. It is inspired by the cortical-like combination of bottom-up and top-down processing. The top-down part provides both guidance for visual processing and feedback signals for learning. It further introduces a "Counter-Hebb" mechanism to modify synaptic weights. Strengths: **Method** Mimicking the biological network, with both top-down and bottom-up streams, in an artificial neural network is an interesting direction. The proposed "Counter-Hebb" mechanism is novel and effective. **Evaluation** The framework is demonstrated on multiple benchmarks for image classification and multi-task learning, achieving comparable results to existing biologically plausible learning rules. Weaknesses: **Novelty** The main framework utilizes both top-down and bottom-up streams for guiding visual processing and learning. The core idea is similar to predictive coding, where the novelty is limited. Discussing the fundamental differences with existing approaches would be helpful. **Biological plausibility** The method restricts TD and BU to be symmetric ("same connectivity structure"), which violates biological plausibility. **Benchmark** The method is only evaluated on toy-level image classification tasks, and does not outperform existing approaches. **Connection with VLMs** This paper mentions connections with VLMs, but the relation is unclear, and no demonstration or evaluation is performed in the studies. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How is the proposed method related to VLMs? 2. How is the proposed method related to existing biologically plausible approaches (predictive coding, feedback alignment)? 3. Why does $\bar{f_l}$ need to have the same connectivity structure as $f_l$? 4. Why is average pooling excluded from the framework? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your interest and constructive comments. > W1 + Q2 Predictive coding is among the best biologically plausible models for learning from errors and for representation learning. We discuss some differences and novel aspects in the biological mechanisms of Counter-Hebb learning, but a major contribution of our paper is the integration of guided vision, through top-down attention, in the same TD stream used for learning. Combining these two major functions is important both for modeling the biological TD stream, as well as for improving visual processing from a computational standpoint. For more information on the role of top-down attention in human vision and computer vision, please refer to the global rebuttal. > W2 The knowledge about the structure of the BU and TD streams is incomplete, but the overall scheme is essentially consistent with the main aspects of the model we used. We show a schematic figure (attached in the global rebuttal) of the known connectivity between the BU and TD in primates, adapted from [1,2], showing how the basic counter-streams structure is embodied in cortical connectivity of the ventral stream. It shows the main connections between successive areas labeled I, II. The BU path goes through layer 4 to layer 3B of area I and then to the next area. The TD path goes from layer 2/3A and layer 6 of area II to 2/3A and 6 of the lower area. Similar to our model, the cortical circuit has two streams, which are interconnected in both directions, between layers 4 and 6, and between the superficial layers 2, 3. Dashed arrows are pathways that skip a step in the stream. 
Although the precise connectivity at the level of individual neurons is not known, the symmetry between the BU and TD streams is obeyed in the circuitry at the population level: for the connections from layer 4 to layer 3B on the BU stream, there are connections in the opposite direction between their corresponding counter-neurons, from 2/3A to layer 6. > W3 There are two comparisons, with biological and non-biological models. Regarding the scale of the model, in the biological comparisons, we address the CelebA benchmark using ResNet-18, which is large compared with other biological models. See also an additional experiment related to scale in the PDF attached to the global response. In terms of performance, as shown in Table 1, our method is among the top biological models, outperforming other baselines. With respect to non-biological AI models, as mentioned in the main text, our main contribution is not to outperform existing AI approaches, but rather to show the ability of our model to achieve a similar level of performance compared with non-biological models, despite using the constraints of biological models, and despite using the same top-down stream for both learning and guiding attention. > W4 + Q1 A notable similarity between our model and VLMs is the ability to perform guided vision using two parallel streams, one carrying visual information, and the other carrying high-level, more cognitive information. In VLMs, the vision stream extracts visual information from visual inputs, while the language stream handles high-level semantic processing and integrates general knowledge from Large Language Models. This allows the language stream to take natural language instructions and direct the visual processing to focus on information that is relevant to the task. For example, in Visual Question Answering, the visual process is instructed to extract visual information that is relevant to the question. Our proposed method shares a similar concept.
Unlike existing biological models of vision, which rely solely on visual inputs, our model incorporates both visual and instructional information during visual processing. Our model consists of two streams: a bottom-up stream for processing input images and a top-down stream for handling instructional information, which provides high-level semantic context. Similar to VLMs, the top-down stream guides the visual processing in the bottom-up stream to extract visual information relevant to the given task. For more details on the connection to VLMs, please refer to the global rebuttal. > Q3 The connectivity structure is required in order to derive the mathematical results shown in the paper: the equivalence to backpropagation in the symmetric case, and the approximation of backpropagation in the asymmetric case. However, it could be interesting for future research to explore Counter-Hebb learning when relaxing this structure to symmetry at the population level of neurons. > Q4 In the ResNet architecture, global average pooling is used at the top core layers to down-sample the entire channel representation into a single neuron. While this operation is effective for classification tasks, it is less suitable for our framework, which focuses on instruction-guided processing. In our guided vision learning algorithm, the first top-down pass is used for receiving and propagating instruction representations throughout the network. Utilizing global average pooling would restrict this process by compressing all instruction information into a single neuron per channel. [1] Markov, N. T. et al. (2013) ‘Cortical high-density counter-stream architectures’, Science, 342(6158). [2] S. Ullman. Sequence-seeking and counter streams: A computational model for bi-directional information flow in the visual cortex. Cerebral Cortex, 5(1) 1-11, 1995. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their additional clarifications.
I share a similar concern as reviewer n2CE about the biological plausibility of symmetric connectivity between the BU and TD paths. I also think the proposed method lacks demonstrations at larger scale and on more complicated tasks, as well as benchmarking against more advanced non-biological models, and the writing and overall presentation need improvement. Therefore I maintain my original score. --- Reply to Comment 1.1.1: Comment: Thank you again for your valuable feedback, which will help us improve the clarity of the paper. We add one additional issue that we think still merits consideration, since it relates to a basic aspect that, as we realize from your comments and those of reviewer n2CE, was not clear in our presentation. What was not sufficiently clear is that we do have an asymmetric version that reaches performance essentially indistinguishable from backpropagation. In our model we discussed two versions of weight asymmetry, which we termed ‘asymmetry’ and ‘weak symmetry’ (or `weak asymmetry`), but the distinction was not clear; please refer also to the rebuttal response to reviewer n2CE (W3). The version that reached backpropagation performance is what we refer to as the ‘weak asymmetric’ case, where the two streams have different sets of weights, and the weights are updated with noisy, asymmetric updates. In this version the main requirement is that the weights at initialization, prior to training, are sufficiently close to symmetric. We also considered a simple initialization scheme that satisfies this requirement: it can be obtained with weak initial weights and an arbitrary activation of the entire network. Applying the Counter-Hebb rule under these conditions will automatically push the network toward this requirement. Our experiments showed that this initialization, followed by noisy updates, results in model performance that remains very close to standard backpropagation.
Biologically, we suggest that the claustrum, which provides input to all cortical regions, could supply such activation. Following your comments, we will add this discussion to the paper.
Summary: This study proposes a new biologically-motivated learning rule as an alternative to backpropagation. This “Counter Hebbian” rule separates the forward and backward path into two similar but non-identical pathways, called Bottom Up and Top Down by the authors, and allows the TD stream to gate the flow of information through the network. The authors run simple experiments on MNIST, Fashion MNIST, and CIFAR10, as well as on small multi-task datasets Multi-MNIST and CelebA. Strengths: The Counter Hebb rule is an exciting proposal, and this paper does a thorough job of motivating and explaining the algorithm. Weaknesses: 1. It would be helpful to see additional ablation experiments that break apart the components of the proposed CH learning rule. 2. There are very few experiments in the main text of the paper. Most of the numbers presented in Tables 1 and 2 are from other papers (Bozkurt et al. 2024 and Kurin et al. 2022). 3. The asymmetric case is much more biologically plausible, however deep in the text the authors admit that it does worse than backpropagation (e.g. lines 667-668 in the Appendix). This dampens some of the strongly worded claims in the abstract and at the beginning of the paper, which states, for example “...achieving competitive performance compared with AI models on standard multi-task learning benchmarks,” even though this is only true for the symmetric case. 4. The model is incredibly small. It would be helpful to show that this CH algorithm works for more layers and at a larger scale. Feedback Alignment, for example, is known to scale poorly beyond a few layers. It is surprising that in 2024 the field of biologically plausible learning rules is reporting results for MNIST, when the rest of the field is tackling larger models such as Llama and much more difficult tasks. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Is the CH algorithm applied to the convolutional layers as well in the Multi-MNIST case? 2. 
How do you address the symmetry of convolutional layers? 3. How should we interpret the results in Table 2? Is it fair to compare CH with SMTO approaches from Kurin et al.? Why not compare directly with backpropagation in the main text? (I see this is included in Table 3 of the Appendix) Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: * The figures in the appendix are incredibly small and difficult to read. This makes it difficult to have any confidence in the ablation studies. * Line 312: “were carefully optimized over the years”. What does this mean? * GaLU is very similar to Gated Linear Units (Shazeer 2018). It could be worth citing this paper or something similar. * It is not possible to see the “Asymmetric” results in Figure 14. Shouldn’t they be worse for Multi-MNIST? * "Instructed Visual Processing" is somewhat of a vague term to use. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you, we appreciate the time and effort and the comprehensive and helpful comments. > W1 + W2 Additional experiments are included in the appendix and are only briefly mentioned in the main text due to space limitations. These experiments examine various components of CH learning, such as guided and non-guided learning and different schemes of weight symmetry and asymmetry. They also assess the model's robustness to different architectures, loss functions, etc. Our findings indicate that combining BP with TD attention does not compromise performance. Following your comments, we will stress this point in the main paper and add pointers in the main text to the relevant parts of the appendix. > W3 In analyzing weight symmetry, we looked at two aspects: symmetry in the initialization of the weights, and symmetry in the subsequent CH updates. We used 'asymmetry' or 'asymmetric initialization' for weights initialized far from symmetric, and 'weak symmetry' for symmetric (or nearly symmetric) initialization with asymmetric noisy updates. To avoid confusion, since both scenarios represent asymmetric weights, we will use the explicit term ‘asymmetric initialization’ in the revised version. Our experiments show that while asymmetric initialization is sometimes worse than BP, weak symmetry consistently yields results similar to BP [see App Fig 10, 11, 14, and Fig 1 in the global rebuttal]. Notably, after a few iterations, the weak symmetric model becomes non-symmetric, suggesting that starting close to symmetry is sufficient. These findings indicate that performance relies more on weight initialization than on exact symmetry. The model's robustness to noise is encouraging, and therefore, the question of biological feasibility focuses on obtaining nearly symmetric initialization. We have ideas for a possible scheme and would be glad to discuss this further if there is interest. 
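The 'weak symmetry' scheme described in the W3 answer can be illustrated with a minimal sketch (our illustration, not the authors' code): the BU and TD weight matrices start as exact transposes of one another, then receive independent, noisy Hebbian-style updates, so they drift away from symmetry while still sharing the same learning signal. The learning rate, noise scale, and activity statistics below are arbitrary placeholders.

```python
import random

random.seed(0)

def symmetry_gap(w_bu, w_td):
    """Mean absolute difference between W_bu and W_td^T."""
    n = len(w_bu)
    return sum(abs(w_bu[i][j] - w_td[j][i]) for i in range(n) for j in range(n)) / (n * n)

n = 4
w_bu = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(n)]
w_td = [[w_bu[j][i] for j in range(n)] for i in range(n)]  # symmetric init: W_td = W_bu^T

gap_start = symmetry_gap(w_bu, w_td)  # exactly 0 at initialization

lr = 0.01
for _ in range(100):
    # Hypothetical pre/post activities; both streams share the Hebbian
    # product but each adds its own noise (asymmetric noisy updates).
    pre = [random.gauss(0, 1) for _ in range(n)]
    post = [random.gauss(0, 1) for _ in range(n)]
    for i in range(n):
        for j in range(n):
            hebb = post[i] * pre[j]
            w_bu[i][j] += lr * (hebb + random.gauss(0, 0.1))
            w_td[j][i] += lr * (hebb + random.gauss(0, 0.1))

gap_end = symmetry_gap(w_bu, w_td)
print(gap_start, gap_end)  # the gap grows: the trained model is non-symmetric
```

This mirrors the observation in the rebuttal that a weak-symmetric model becomes non-symmetric after a few iterations while the near-symmetric start is what matters.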
> W4 While our model is small compared to Llama and other large models, it is relatively large for biological models, demonstrating results on multi-task learning using the CelebA benchmark with ResNet18. To further address this issue, we conducted additional experiments comparing our method with the mentioned FA approach [1]. The results show that classic FA scales poorly to ResNet18 on CIFAR. In contrast, our CH learning matches backpropagation performance in the weak symmetric case and learns effectively in the asymmetric case, achieving significantly higher performance than FA. Please refer to the full results attached to the global rebuttal. > Q1 + Q2 CH learning is applied to all layers, including the convolutional layers, where we use weight sharing, a technique often used in comparing convolutional models with human vision for computational efficiency [2]. This shortcut is supported by prior works showing that position invariance and similar receptive fields emerge in biological, non-convolutional models due to natural image statistics that cause early layers in the visual cortex to be convolution-like [3]. A comparison of deep neural network models with biological networks [4] specifically addresses the use of convolution in deep network models versus the actual brain network, expressing the view that the two are similar. > Q3 We do have the results for standard backpropagation, denoted as "Unit. Scal." in Table 2, following the terminology from the Kurin et al. paper. Thanks for pointing this out; we will also refer to it as "standard backpropagation" in the revised version. Since we could not compare our results with other biological methods (as none of them uses task selection), we evaluated the results on the benchmark provided by Kurin et al. (2022), which compares various multi-task learning methods on a common architecture and settings. 
We believe the comparison is somewhat biased in favor of SMTO, as the baseline results include specialized training techniques like early stopping, dropout, and regularization, whereas we use a straightforward 'vanilla' implementation. The results in Table 2 show that our biologically inspired model achieves a similar level of performance, indicating that our model can effectively perform instruction-based learning despite the constraints of a biological model and using the same top-down streams for both learning and guiding attention. > L1 We will increase the size of the figures in the revised version to better visualize the conducted ablation study results. The claim "were carefully optimized over the years" refers to advancements in training deep learning models, such as weight initialization schemes and learning rate schedulers, designed specifically for BP and symmetric weights. This raises the question of how optimal these techniques are for the asymmetric case and whether further exploration in asymmetric settings could improve performance. > L2 Thanks, we will add the reference in the revised version of the paper. > L3 The legend indeed had an error; thank you for pointing this out. The word ‘asymmetric’ was inserted by mistake and will be removed. > L4 The term is used by recent VLMs, such as LLaVA [5], that incorporate “visual instruction” training to focus on specific tasks, similar to what we refer to as ‘guided vision’ in our model. For more on the relevance of ‘instructed vision,’ please see the global comment. [1] Lillicrap et al. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276, 2016. [2] Yamins, Daniel et al. “Performance-optimized hierarchical models predict neural responses in higher visual cortex.” PNAS (2014). [3] Robinson, L., Rolls, E.T. Invariant visual object recognition: biologically plausible approaches. Biol Cybern (2015). [4] Kriegeskorte, Nikolaus. 
“Deep neural networks: a new framework for modelling biological vision and brain information processing.” Annu. Rev. Vis. Sci. 2015. [5] Liu, Haotian, et al. "Visual instruction tuning." NeurIPS (2024).
Rebuttal 1: Rebuttal: Many thanks to the reviewers for their useful feedback, the time invested and your effort. Thanks for pointing out weak points, possible improvements, and positive feedback. Guided vision: we add a general comment about the role of guided vision in our model compared with previous and recent biological and AI models. This is in response to questions asking to elaborate on ‘instructed visual processing’, relation to VLMs, and expanding on the differences with existing biological approaches. The model is the first to combine two functions in the same top-down stream: The first is backpropagation, or adjusting synaptic weights, and the second is using top-down (TD) attention. The TD attention guides the visual processing and specifies what to perform (task selection) and where to apply it in the image. These two TD functions are an essential part of human vision and have been studied in human physiology, anatomy and psychophysics for a long time. However, previous biologically motivated models that used TD for learning, did not include the second important function of TD processing, guiding attention. Until recently, computer vision models have focused on purely bottom-up inference, i.e. the output of these models was dependent exclusively on the visual input. For instance, models were trained to extract the full scene structure, producing a “scene graph” that represents all the scene components, properties, and relations given an input image [1,2], unlike biological vision, which focuses on selected structures in the scene, guided by TD attention. It is interesting to note that in the recent development of large Vision-Language Models (VLMs), better performance in scene understanding is obtained by using an instructed, or task-guided mode of processing [3,4,5]. In the instructed mode, the model is not required to produce a full image description, but to focus on specific aspects specified by the task. 
The comparisons above help to explain and put in context the novelty of the current study and its contribution. For biological vision, the combination of backpropagation-like learning and guided, or instructed visual processing in the TD stream is a major aspect. In current VLMs, instructed visual processing is also proving to be of high value. The current model proposes for the first time an integration of the two tasks in the same TD stream. The combination is relatively simple and natural: the TD network performs error propagation when the TD input provides the error and performs task-selection and guidance when the input is the task or instruction. [1] D. Xu, Y. Zhu, C. B. Choy, L. Fei-Fei, “Scene graph generation by iterative message passing” in Proceedings–30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR (2017). [2] G. L. Malcolm, I. I. A. Groen, C. I. Baker, Making sense of real-world scenes. Trends Cogn. Sci. 20, 843–856 (2016). [3] Liu, Haotian, et al. "Visual instruction tuning." Advances in neural information processing systems 36 (2024). [4] Dai et al. 2023 “InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning”. In Proceedings of the 37th International Conference on Neural Information Processing Systems (2024). [5] Shen, Ying, et al. "Multimodal Instruction Tuning with Conditional Mixture of LoRA." arXiv preprint arXiv:2402.15896 (2024). Pdf: /pdf/02390a9aa6d42edd5d4a6aaa2fbec4cb18ea7ca7.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
HyperLogic: Enhancing Diversity and Accuracy in Rule Learning with HyperNets
Accept (poster)
Summary: The paper presents HyperLogic, a pioneering framework that integrates hypernetworks into differentiable rule learning, enhancing the interpretability and flexibility of neural networks. It provides a strong theoretical foundation and backs it with extensive empirical evidence, demonstrating its effectiveness over traditional rule learning methods. HyperLogic's ability to generate diverse and accurate rule sets is particularly noteworthy, addressing a significant gap in the field of AI interpretability. Strengths: 1. The paper introduces HyperLogic, a novel framework that integrates hypernetworks with rule-learning networks, which is a significant contribution to the field of interpretable machine learning. 2. The authors provide a solid theoretical analysis, including approximation error bounds and generalization capabilities, which substantiates the effectiveness of HyperLogic. 3. Comprehensive experiments on multiple datasets demonstrate the superiority of HyperLogic in learning diverse, accurate, and concise rules compared to existing methods. 4. The paper addresses a critical need for interpretability in high-stakes domains, offering a transparent decision-making process through the generation of logic rules. Weaknesses: 1. The introduction of additional hyperparameters may lead to increased sensitivity, which could impact the model's performance and generalizability. 2. The framework's complexity might pose challenges for practitioners not well-versed in advanced neural network architectures. While the paper mentions the efficiency of the rule-learning process, the computational overhead introduced by the hypernetworks is not thoroughly discussed. 3. The framework's applicability to a broader range of datasets and tasks needs to be investigated to ensure its generality and practical utility. 
Technical Quality: 3 Clarity: 3 Questions for Authors: How does HyperLogic perform in domains outside of the tested datasets, particularly in non-binary classification tasks? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Applications to more general scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Summary** We are grateful that reviewer hMMc has a positive impression of the high-level design of our framework and the promising results of comprehensive experiments. To address your questions about the details of our method, we provide point-wise responses as follows. **Q1. The introduction of additional hyperparameters may increase sensitivity, which could impact the model's performance and generalizability.** The additional hyperparameters brought by our framework mainly include the hypernet architecture, the coefficient $\lambda_1$ of the diversity loss caused by the hypernet in the loss function, and the numbers of weight sampling times $k_1$ and $k_2$ during and after training. Among them, adding more layers in the architecture adjustment and the number of sampling times $k_2$ after training have little impact on the training, while the coefficient $\lambda_1$ and the number of sampling times $k_1$ during training have a more significant impact on the training results, which requires a careful choice of hyperparameters and further tuning strategies. Furthermore, we consider an adaptive weight fusion mechanism to optimize model performance through weighted fusion between the weights generated by the hypernet and the original main network weights. This mechanism is controlled by a learnable parameter α, which can be automatically adjusted during the training process to deal with the instability caused by improper selection of hyperparameters. (For more details, please refer to Q1 of Reviewer Q4Et.) **Q2. The framework's applicability to a broader range of datasets and tasks needs to be investigated.** We extended the experiments and showed that our HyperLogic framework can use other differentiable rule learning algorithms besides DRNet as the main network to solve more tasks, such as processing high-dimensional, large-scale data with multi-class tasks. 
Such classification tasks reflect the broad application scenarios of our framework. Specifically, the core feature of our framework, HyperLogic, is that it serves as an inclusive framework allowing enhancements to all neural methods, as long as they are relevant to learning a set of interpretable weights end-to-end. This means that we can choose different differentiable rule learning networks as the main network as needed to handle different tasks. Currently, for simplicity of presentation, our main network is derived from DRNet, which only supports binary classification tasks and is very time-consuming, preventing it from being evaluated on larger-scale data. To prove that our framework can handle larger datasets, we replace the rule learning model with the latest DIFFNAPS [1], which can perform multi-class classification tasks on high-dimensional data. Compared with the current datasets, in our **new datasets** the feature dimension increases from a maximum of 154 to a maximum of **5k dimensions**, the number of samples increases from a maximum of 24,000 to a maximum of 50,000, and the number of classes increases from a maximum of 2 to a maximum of **50 categories**, reflecting **task diversity and complexity**. In this case, HyperLogic-enhanced DIFFNAPS showed **a further average 6.1% F1 score improvement compared to the already strong vanilla DIFFNAPS**, proving the effectiveness of the framework (please refer to General Q1 of the global response for task details). [1] Walter, N. P., Fischer, J., & Vreeken, J. (2024, March). Finding interpretable class-specific patterns through efficient neural search. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 8, pp. 9062-9070). **Q3. The computational overhead introduced by the hypernetworks should be discussed more.** Since our framework introduces an additional hypernetwork, extra computational load is inevitable. 
Taking the supplementary experiments as an example (please refer to General Q1 of the global response for task details), the table below shows the average F1 score, time, and number of parameters of the original DIFFNAPS algorithm and the HyperLogic framework built on a DIFFNAPS main network across different tasks.

| Method | F1-score | Time (min) | #Params (W) |
|------------|----------|-----------|------------|
| DIFFNAPS | 0.659 | 1.565 | 1212.1 |
| HyperLogic | 0.720 | 1.699 | 5161.61 |

It can be seen that although the parameter increase is large, considering the significant improvement in F1 score and the very limited increase in time consumption, especially since HyperLogic allows us to generate diverse rule sets in one training run, the increase in computational load is well worthwhile. --- Rebuttal Comment 1.1: Comment: Dear Reviewer hMMc, We are grateful for your time and effort in reviewing our paper and providing thoughtful feedback. We have carefully considered all of your comments and have responded to them accordingly. As we near the end of the author-reviewer discussion, we would like to hear your feedback on our rebuttal. We would greatly appreciate it if you could review our responses and let us know if we have adequately addressed your concerns. Additionally, we welcome any further comments or discussions you may have. Thank you for your valuable consideration. Best regards, The Authors
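The adaptive weight-fusion mechanism mentioned in the Q1 answer above (a learnable parameter α blending hypernet-generated weights with the main network's own weights) can be sketched as follows. This is an assumed illustration, not the authors' implementation; `alpha_logit` is a hypothetical learnable scalar, passed through a sigmoid so the mixing coefficient stays in (0, 1).

```python
import math

def fused_weights(w_main, w_hyper, alpha_logit):
    """W_eff = g * W_hyper + (1 - g) * W_main, with g = sigmoid(alpha_logit).

    Training can adjust alpha_logit to interpolate smoothly between the
    stable main-network weights and the flexible hypernet-generated ones.
    """
    g = 1.0 / (1.0 + math.exp(-alpha_logit))
    return [[g * wh + (1.0 - g) * wm for wh, wm in zip(rh, rm)]
            for rh, rm in zip(w_hyper, w_main)]

w_main = [[1.0, 0.0], [0.0, 1.0]]
w_hyper = [[0.0, 1.0], [1.0, 0.0]]
print(fused_weights(w_main, w_hyper, 0.0))  # g = 0.5: elementwise average of the two
```

A large positive `alpha_logit` would recover (almost) pure hypernet weights, a large negative one (almost) pure main-network weights, which is how the gate can absorb instability from a poorly tuned hypernet.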
Summary: The authors consider the problem of learning interpretable rules for decision-making and propose a new neural rule learning framework to find these rules efficiently. In particular, they suggest using a hypernetwork to learn a diverse set of network parameters for the rule-encoding network. On small binary data, they show that their algorithm finds reasonable rule sets that predict a given class well. [Edit after rebuttal]: The authors provided additional discussion and experiments addressing my concerns, which leaves me with a *weak accept*. Strengths: - The presented approach seems novel and the considered problem is relevant. - The presentation itself is clear and understandable. Weaknesses: - Despite the broad motivation of making neural decisions interpretable, the presented approach is very limited in what networks are used (2-layer specific architecture), which data is used (only binary datasets with labels), and which rules can actually be expressed (only conjunctions in antecedent, single label in consequent). Especially the type of considered data needs to be communicated clearly from the start. - There is a significant amount of related work, including recent neural approaches, that is not considered here. Some more classical approaches include statistical pattern mining [1] and falling rule lists [2,3,4]. Recently, a neural approach that finds differential patterns between labels -- which are essentially if-then rules -- has been proposed [5]. The experiments should be extended to properly compare to existing work. - The experiments are very limited in terms of considered datasets: they are old “toy” datasets of small scale in both sample and feature dimension. Recent works, including [4], incorporate large-scale real-world data to actually show that such neural approaches are more scalable and can hence consider other data than classical approaches. 
In summary, it is unclear to me whether the presented approach provides any improvement over existing work, or is scalable to large data at all. [1] https://dl.acm.org/doi/10.1145/2783258.2783363 [2] https://proceedings.mlr.press/v38/wang15a.html [3] http://proceedings.mlr.press/v84/chen18a/chen18a.pdf [4] https://www.sciencedirect.com/science/article/pii/S0020025519310138 [5] https://ojs.aaai.org/index.php/AAAI/article/view/28756 Technical Quality: 2 Clarity: 3 Questions for Authors: - In section 4, you convert the process of drawing M samples with K components each into drawing MK single-component samples. Does the latter actually match the former? It seems that the K components in the former are actually dependent, which would mean it does not correctly model it. Please clarify. - In Table 1, the bold numbers are confusing. For adult, house, and heloc data, DR-Net, which is closely related and forms the basis of the architecture design, performs on par with the presented method. The results are within the given standard error. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: While there is a brief discussion on limitations, neither the limitations in terms of data or rule language/architecture, nor the connection to existing work is properly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
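For context on the restricted rule language the reviewer describes (conjunctions in the antecedent, a single label in the consequent, read off a two-layer DR-Net-style network), here is a simplified illustration of ours with hard binary masks; the actual method learns such masks differentiably, and all names below are hypothetical.

```python
# Layer 1 (AND): each hidden unit fires iff every feature selected by its
# binary mask is 1 (a conjunction). Layer 2 (OR): the single output label
# fires if any selected rule fires (a rule set with one consequent).

def apply_rules(x, and_masks, or_mask):
    """x: binary feature vector; and_masks: one binary mask per rule;
    or_mask: which rules feed the (single) output label."""
    fired = [all(x[j] for j, sel in enumerate(mask) if sel) for mask in and_masks]
    return any(f for f, sel in zip(fired, or_mask) if sel)

# Rule set: (x0 AND x2) OR (x1)
and_masks = [[1, 0, 1], [0, 1, 0]]
or_mask = [1, 1]
print(apply_rules([1, 0, 1], and_masks, or_mask))  # True: the first rule fires
print(apply_rules([0, 0, 1], and_masks, or_mask))  # False: no rule fires
```

Reading the rules directly off binarized weights like this is what makes the architecture interpretable, and also what imposes the limits the review points out (binary inputs, conjunctive antecedents, one label).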
Rebuttal 1: Rebuttal: **Summary:** We first thank reviewer F7a8 for the insightful comments, especially for the questions about our model design and experiment details, which helped us to clarify our paper. We would like to address the concerns one by one. **Q1. The method is limited in network structure (2-layer architecture), data format (only binary datasets with labels), and rule language (only conjunctions in antecedent, single label in consequent).** Our method is a general framework that can enhance many existing neural rule-learning methods, since the idea of a hypernet can be applied to different types of primary networks, as long as they are related to learning a set of interpretable weights in a differentiable way (for some minor restrictions on the primary network architecture caused by the introduction of the hypernet, please see Q6). The HyperLogic framework has no additional restrictions on data format and rule language, and only inherits those of the selected primary network. As you mentioned, our implementation is currently limited in network structure, data format and rule language. But this is due to the limitations of DRNet, the primary network we adopt, not the limitations of our framework. We chose DRNet because the simplicity of its architecture is conducive to our proofs and presentation. However, considering the limitations of DR-Net, we can also consider other, more flexible neural approaches as the primary network to alleviate the restrictions on data format, rule language, etc. (please see Q2). 
However, other pre-existing neural methods can also be substituted, as long as they are related to end-to-end learning of a set of interpretable weights. Unlike traditional methods and recent Bayesian approaches, our method requires fewer hyperparameters and offers greater scalability. Differing from other neural methods, HyperLogic serves as a more versatile framework that can organically utilize these methods as the main network to enhance performance. Furthermore, our approach is capable of generating multiple sets of rule sets in a single training session, which is not feasible with other methods.   Theoretically, the neural methods described in the previous section can serve as the main network for our framework, just like DRNet. Since DIFFNAPS is a novel algorithm capable of differentiable pattern discovery on large-scale data, **we have chosen to use DIFFNAPS as the main network in our supplemental experiments**. This expansion of our experiments further demonstrates the efficacy of our framework (refer to Q3 for details). **Q3. The experiments should be extended to properly compare to existing work.** Please refer to General Q1 in the global response. **Q4. In section 4, you convert the process of drawing M samples with K components each into drawing MK single-component samples. Does the latter actually match the former?** 1. The conversion mentioned is purely for theoretical proof purposes and is not implemented in the actual algorithm. 2. The preparation for the theoretical analysis (lines 175-191) remains valid regardless of the (in)dependence among the K components. Equation (9) is derived from the sample average of a distribution with equally weighted marginals. 3. To prove Theorem 1, we need to identify a set of parameters for equation (7) that satisfies the inequality stated in the theorem. One approach to achieve this is by using sampling weights transformed from Barron's theorem. 
It's important to note that this is just one possible set of parameters, and the actual learning algorithm does not utilize such sampling. **Q5. In Table 1, the bold numbers are confusing, as DR-Net performs similarly to the presented method on some datasets within the standard error.** Our framework not only focuses on the accuracy of the optimal rule set, but also on the simplicity of the rule set and the richness of the learned candidate rule set. At present, we simply select the best rule set on the training set for testing, and it already surpasses DRNet's results with a more concise rule set (Figure 3), demonstrating the effectiveness of our framework. At the same time, the optimal rule set on the training set does not always coincide with the optimal rule set on the test set (Figure 2), indicating that our training strategy (such as hyperparameter selection) and the final rule set selection strategy (such as further considering the complexity of the rule set) could yield even higher test accuracy. We leave this for future work. **Q6. The limitations of network structure.** HyperLogic has almost no special restrictions on the architecture of differentiable rule learning networks, except for two possible problems, but both can be solved: 1. If the main network has too many parameters, the hypernetwork might struggle to produce stable weights. To solve this, we tried two approaches: ① Combine the original weights with those generated by the hypernetwork to improve stability while keeping the hypernetwork's flexibility (please refer to Q1 of Reviewer Q4Et for details). ② Only generate some of the main network's weights using the hypernetwork. Our tests show this still significantly improves results, even with large datasets (please see Q3 for details). 2. Some neural methods use a non-differentiable technique called weight pruning to simplify the model. To address this, we've chosen to keep all weights and apply a sigmoid function to them after processing. 
This makes the entire process differentiable. While this change might affect how the data is distributed and require adjustments to the model's settings, our tests show it doesn't harm performance. --- Rebuttal Comment 1.1: Title: Rebuttal comment Comment: Thank you for your response. Regarding *W1*: I appreciate the answer, the modularity of the approach did not become that evident in the original draft. I would suggest to rewrite the paper accordingly also incorporating the new results, as this strengthens the contribution. Regarding *W2*: The new results including their critical discussion does cover more recent neural approaches. However approaches such as the popular falling rule lists, which are heavily used in practice, should be compared to. Regarding *W3*: Still, not considering recent real datasets is a major weakness in the paper. The additional results only cover a larger synthetic dataset. I would like to encourage the authors to use the remaining time to (1) compare to falling rule lists and (2) consider at minimum one large real dataset, the concurrent literature on pattern mining does give plenty of examples of those. --- Reply to Comment 1.1.1: Title: Answers to new comments from Reviewer F7a8 Comment: Thank you for acknowledging our method and suggestions. We will clarify our approach's generality and contributions in the final draft. Additionally, we will compare our approach to falling rule lists and expand experiments to four large biological datasets. **1. Compare to falling rule lists (FRL) [2] and other methods** FRL methods explicitly construct a falling list of rules with probabilities; Differentiable HyperLogic focuses on directly mining patterns and generating rules from data in a scalable manner using neural networks. Our differentiable approach demonstrates **better scalability for pattern mining tasks on large-scale data** (see Section 2). The main differences between FRL and the HyperLogic are: 1. 
Rule Generation:
- FRL methods rely on other rule mining techniques to generate initial candidate rules.
- HyperLogic uses a differentiable neural network approach to directly mine patterns and generate rules from data in an end-to-end manner.
2. Scalability:
- FRL methods may face scalability issues when dealing with large candidate rule sets or complex data.
- HyperLogic, a neural network-based approach, is more scalable for pattern mining in large-scale data.
3. Rule Ordering:
- FRL methods explicitly order the rules into a descending list based on rule probabilities or other criteria.
- HyperLogic generates an unordered set of rules.
4. Probabilistic Interpretation:
- FRL methods provide probabilistic interpretations for the rules by construction.
- HyperLogic does not directly output rule probabilities, although probabilities could potentially be derived from the learned patterns.
5. Integration Potential: A recent study [1] proposed a neural method to learn ordered rule lists, which could potentially be integrated with HyperLogic to enable joint rule generation and probabilistic ordering, combining the strengths of both approaches.

Other statistical rule mining methods like Bayesian rule lists [4] and Bayesian decision sets [5] provide probabilistic interpretations but are limited to binary classification and small rule sets. Fuzzy rule-based models [6] incorporate human-like reasoning but lack probabilistic predictions and scalability. **2. Consider large real datasets** We evaluated our method on four large biological datasets following the settings of DIFFNAPS [1]: Cardio, Disease, BRCA-N, and BRCA-S, using the area under the curve (AUC) as the metric. We continued to combine our approach (HyperLogic) with DIFFNAPS as the main network and compared it to vanilla DIFFNAPS. Additionally, FRL [2] cannot scale to non-trivial data, while CLASSY was already compared in the original paper. The table shows the dataset details (i.e. 
samples (n), features (m), and classes (K)), the number of discovered patterns (#P), the average pattern length (|P|), and AUC scores (results for DIFFNAPS and CLASSY are taken directly from [1]).

| Dataset | n | m | K | HyperLogic #P | HyperLogic \|P\| | HyperLogic AUC | DIFFNAPS #P | DIFFNAPS \|P\| | DIFFNAPS AUC | CLASSY #P | CLASSY \|P\| | CLASSY AUC |
|---------|-----|-----|----|-----|----|------|-----|----|------|----|----|------|
| Cardio | 68k | 45 | 2 | 15 | 2 | 0.57 | 14 | 2 | 0.56 | 10 | 2 | 0.36 |
| Disease | 5k | 131 | 41 | 866 | 2 | 0.86 | 838 | 2 | 0.84 | 25 | 2 | 0.11 |
| BRCA-N | 222 | 20k | 2 | 187 | 6 | 0.95 | 146 | 9 | 0.91 | 3 | 1 | 0.45 |
| BRCA-S | 187 | 20k | 4 | 1k | 2 | 0.89 | 1k | 2 | 0.86 | 2 | 1 | 0.23 |

CLASSY lacks the finesse to effectively mine patterns for large-scale real-world tasks. Moreover, despite competing with the strong DIFFNAPS baseline, HyperLogic achieved further improvements, demonstrating its potential for handling large, real-world datasets. Please let us know if you have any other questions or suggestions for improving the final draft. We appreciate it a lot. [1] Walter, N. P., Fischer, J., & Vreeken, J. (2024, March). Finding interpretable class-specific patterns through efficient neural search. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 8, pp. 9062-9070). [2] Wang, F., & Rudin, C. (2015, February). Falling rule lists. In Artificial Intelligence and Statistics (pp. 1013-1022). PMLR. [3] Proença, H. M., & van Leeuwen, M. (2020). Interpretable multiclass classification by MDL-based rule lists. Information Sciences, 512, 1372-1393. [4] Yang, H., Rudin, C., & Seltzer, M. (2017, July). Scalable Bayesian rule lists. In International Conference on Machine Learning (pp. 3921-3930). PMLR. [5] Wang, T., Rudin, C., Doshi-Velez, F., Liu, Y., Klampfl, E., & MacNeille, P. (2017). A Bayesian framework for learning rule sets for interpretable classification.
Journal of Machine Learning Research, 18(70), 1-37. [6] Jiménez, F., Sánchez, G., & Juárez, J. M. (2014). Multi-objective evolutionary algorithms for fuzzy classification in survival prediction. Artificial Intelligence in Medicine, 60(3), 197-219.
Summary: This paper introduces HyperLogic, a novel differentiable framework for rule learning using neural networks. Instead of directly training network weights, HyperLogic extracts rules by generating the weights for a primary network through hypernetworks. These hypernetworks create diverse sets of weights, functioning like a mixture of experts, thereby enhancing model flexibility. To train the hypernetworks, the authors minimize the relative entropy between the learned distribution and a prior distribution, incorporating regularization to encourage simplicity. They employ smooth approximations to ensure the problem is differentiable. The authors demonstrate HyperLogic's effectiveness through both theoretical analysis and empirical results on multiple datasets. Strengths: - The paper presents an interesting rule-learning approach using hypernetworks, leveraging a set of experts to take advantage of complex and diverse models while maintaining interpretability. - The authors provide a robust theoretical foundation for HyperLogic, showing the effectiveness of their approach through rigorous analysis. - Extensive experiments on various datasets demonstrate that HyperLogic can learn multiple diverse and accurate rule sets, highlighting its performance. - This paper is well-written and very easy to follow. Weaknesses: - The results seem to be sensitive to the choice of hyperparameters, which could affect the stability and generalizability of the method. Are there any guidelines or automated processes to aid in choosing the optimal hyperparameters? - The scalability of the approach for extremely large datasets or high-dimensional data remains unclear. Further evaluation is needed to understand its performance in such scenarios. - Typo: in line 125, should all negative weights have inputs of 0?
Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Summary** Many thanks to reviewer Q4Et for your positive comments and recognition of our contributions, including an interesting framework, a robust theoretical foundation, comprehensive experiments, and a well-written paper. We would like to address your concerns one by one. **Q1. Are there any guidelines or automated processes to aid in choosing the optimal hyperparameters?** The additional hyperparameters introduced by our framework mainly include the hypernetwork architecture, the coefficient $\lambda_1$ of the diversity loss (contributed by the hypernetwork) in the loss function, and the numbers of weight sampling times $k_1$ and $k_2$ during and after training. 1) Regarding the hypernetwork architecture: we followed HyperGAN, a simple and effective hypernetwork architecture; experiments have shown that adding more layers does not bring significant gains, and exploring more powerful hypernetwork architectures will be interesting future work. 2) Regarding the number of sampling times $k_2$ after training: $k_2$ only affects the final extracted rules and does not directly affect model training. Due to the simplicity constraint, the sampled rule set is not prone to overfitting, so we can expand $k_2$ as far as resources and time allow to obtain a richer rule candidate set. 3) Regarding the diversity loss coefficient $\lambda_1$ and the number of sampling times $k_1$ during training: these two have a more significant impact on the results. In practice, we can set a smaller initial value and gradually increase it until the classification error saturates. Further, inspired by your question, we considered extending our model to alleviate the challenge of hyperparameter selection.
Instead of letting the weights $W_{hyper}$ generated by the hypernetwork fully determine the main network's weights, we retain the main network's own weights $W_{main}$ and combine the two as $W_{main-final} = \alpha \cdot W_{hyper} + (1-\alpha) \cdot W_{main}$, where $\alpha$ is a learnable parameter restricted to $[0, 1]$. This increases the stability of our results without losing the hypernetwork's ability to generate diverse weights: in the extreme case where very inappropriate hypernetwork hyperparameters produce an inappropriate $W_{hyper}$, the algorithm will push $\alpha$ toward 0, offsetting the impact of the poor hyperparameter choice and letting the model degenerate into the plain version without a hypernetwork. The effectiveness of this improvement is demonstrated in supplementary experiments, especially for high-dimensional data. Updated algorithms and results will be added to the final version. **Q2. The scalability of the approach for extremely large datasets or high-dimensional data remains unclear.** Our framework aims to enhance various existing neural rule-learning methods, since the hypernetwork idea can be applied to different types of primary networks, as long as they learn a set of interpretable weights in a differentiable manner. HyperLogic is therefore a general framework. It should be noted, however, that the framework's task scope and model capabilities remain closely tied to the primary network, and scalability is often limited by the size of the primary network. That said, we can easily switch to a more flexible primary network according to the needs of the task; this is a simple adaptation, and the main ideas stay the same.
Currently, for simplicity of description, HyperLogic's main network draws inspiration from DR-Net, which, despite its simple architecture, only supports binary classification tasks and is very time-consuming, preventing HyperLogic from being demonstrated on larger-scale data. To show that our framework can handle larger datasets, we replaced HyperLogic's main network with the recent DIFFNAPS [1], a neural approach that can perform multi-class classification and pattern discovery on high-dimensional data. Under this new configuration, we conducted experiments on **a new dataset with feature dimensions up to 5,000, data size up to 50,000, and up to 50 task categories**. In this setting, HyperLogic with DIFFNAPS as the main network further demonstrated **an average 6.1% F1 score improvement** over the already strong vanilla DIFFNAPS, while **the average time consumption only increased by 8.6%** (please refer to General Q1 in the global response for task details), effectively demonstrating the practical potential of the HyperLogic framework for processing large-scale and high-dimensional data. [1] Walter, N. P., Fischer, J., & Vreeken, J. (2024, March). Finding interpretable class-specific patterns through efficient neural search. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 8, pp. 9062-9070). **Q3. Typo: in line 125, do all negative weights have the inputs of 0?** Thank you for your feedback. We have reviewed line 125 and acknowledge that all negative weights should have inputs of $-1$. This error will be corrected in our revised manuscript. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer Q4Et Comment: Thanks for the detailed response. I have no further questions and would like to keep my positive score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Q4Et, Thank you for acknowledging our work.
We will incorporate the relevant results in the final version to further enhance its quality. Best regards, The Authors
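The learnable combination described in A1 above, $W_{main-final} = \alpha \cdot W_{hyper} + (1-\alpha) \cdot W_{main}$, can be sketched in a few lines; the sigmoid parameterization of $\alpha$ and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def combine_weights(w_hyper, w_main, alpha_logit):
    # Keep alpha in (0, 1) via a sigmoid over a learnable logit
    # (an assumed parameterization; the rebuttal only states that
    # alpha is a learnable parameter restricted to 0-1).
    alpha = 1.0 / (1.0 + np.exp(-alpha_logit))
    return alpha * w_hyper + (1.0 - alpha) * w_main

w_h = np.array([1.0, -2.0, 0.5])  # weights generated by the hypernetwork
w_m = np.array([0.0, 1.0, 1.0])   # the main network's own weights

# A very negative logit drives alpha toward 0, so the model degenerates
# into the plain main network, offsetting a poor hypernetwork setup.
blended = combine_weights(w_h, w_m, -20.0)
```

During training the logit would be updated by gradient descent together with the other parameters; at `alpha_logit = 0` the two weight sets are averaged equally.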
null
null
Rebuttal 1: Rebuttal: Dear esteemed reviewers, We are grateful to all the reviewers for generously dedicating their time and effort to evaluating our paper. Their constructive feedback and valuable suggestions are helpful for further improving the quality of our work. We answer general questions raised by all the reviewers here. Thank you for your considerable contributions during the review process. We look forward to further discussions. Once again, many thanks for your valuable insights. Best wishes, The authors **General Q1: The experiments should be extended to properly compare to existing work.** To expand our experiments to larger and more complicated cases, we considered the latest neuro-symbolic algorithm **DIFFNAPS**, which is capable of pattern mining under large-scale data conditions. Following the original experiments, we tested the model's pattern mining performance under **a fixed input dimension of 5000** and a varying total number of **categories K (ranging from 2 to 50)**, measured by the F1 score, with each category containing **1000 samples**. For HyperLogic, we selected only the classifier part of DIFFNAPS as the main network to compare with vanilla DIFFNAPS. Compared with the current datasets, in our **new dataset** the feature dimension is raised from a maximum of 154 to a maximum of **5k**, the amount of data from a maximum of 24,000 to a maximum of 50,000, and the number of task categories from a maximum of 2 to a maximum of **50**, reflecting the **diversity and complexity of the tasks**. The experimental results are shown in the table below. It can be seen that on datasets with fewer categories, due to the smaller total number of samples, HyperLogic has not yet received sufficient training and does not perform ideally.
However, in more challenging classification datasets with an increased number of samples, the model's performance has significantly improved, **with an average F1 score increase of 6%**. This fully demonstrates that our framework can empower diverse neural rule learning networks, capable of handling large-scale data and possessing a good range of applications. ***Table. The F1 score(+/- std) of two methods among 11 datasets*** | Dataset(K=) | DIFFNAPS | HyperLogic | |---------------|------------------|------------------| | 2 | 0.788 +/- 0.080 | 0.726 +/- 0.084 | | 5 | 0.703 +/- 0.030 | 0.675 +/- 0.036 | | 10 | 0.726 +/- 0.013 | 0.751 +/- 0.020 | | 15 | 0.622 +/- 0.017 | 0.800 +/- 0.028 | | 20 | 0.712 +/- 0.015 | 0.760 +/- 0.011 | | 25 | 0.688 +/- 0.020 | 0.770 +/- 0.020 | | 30 | 0.602 +/- 0.024 | 0.666 +/- 0.021 | | 35 | 0.557 +/- 0.014 | 0.668 +/- 0.018 | | 40 | 0.635 +/- 0.022 | 0.712 +/- 0.011 | | 45 | 0.611 +/- 0.009 | 0.694 +/- 0.009 | | 50 | 0.603 +/- 0.010 | 0.702 +/- 0.012 |
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Approximating the Top Eigenvector in Random Order Streams
Accept (spotlight)
Summary: The paper studies the problem of finding the top eigenvector of a matrix in the streaming model. The problem is: given a sequence of d-dimensional rows a_1, a_2, ..., a_n of a matrix A, approximately compute the top eigenvector v_1 of A^TA. The algorithm gets a single pass over the stream of input rows, must use subquadratic (in d) space, and should output an approximation \hat v_1 to v_1. Existing works on this problem have the following natural idea at their core: initialize a vector z\in R^d, and then, upon receiving a row a_i, set z\gets z+ \eta a_i^T z a_i for some learning rate \eta. One can either keep a fixed learning rate, or use a combination: the latter is done in Price's algorithm from a few years ago, which computes an approximation \hat v_1 that is at least 1-O(log d/R) correlated with v_1, where R is the eigenvalue gap \sigma_2(A^TA)/\sigma_1(A^TA). Price also showed that a 1-O(1/R^2) correlation requires almost quadratic in d space. The author(s)' result is that in random streams, the correlation bound can be improved to 1-O(1/\sqrt{R}). They complement their algorithmic result with a lower bound for Price's algorithm -- this is basically an adaptation of Price's lower bound to random order streams. They also provide certain impossibility results for fixed rate Oja's algorithm. The main algorithmic technique is clean. The main point is that the fact that the stream is randomly ordered implies that its substreams contain good spectral approximations to the original matrix -- this is by standard spectral concentration bounds (after removing heavy entries). Due to the spectral gap assumption the top eigenvectors of these rescaled submatrices are close to v_1. Thus, one can run the power method on the substreams, of which there are quite a few, and good convergence can be obtained. Strengths: S1. The authors provide a clean analysis of a very natural algorithm for top eigenvector computation in data streams.
This is definitely worth publishing. Weaknesses: W1. I am not sure how novel the idea of using substreams that concentrate around A as steps in the power iteration is. If it is, then this is a very nice paper. Otherwise it can be seen more as a sharpening of Price's result. W2. The author(s)' overview of related work on random streams is not up to date. From their list of citations it seems that there was no work done on random streams between 2009 and 2023. See, for example, https://arxiv.org/pdf/2110.10091 and references therein. W3. I find it strange that the author(s) use an unpublished result (Magdon-Ismail, 2010) from 14 years ago for the concentration bound that their analysis relies on. I recommend looking at Joel Tropp's survey on matrix concentration for a more formal reference. Technical Quality: 3 Clarity: 3 Questions for Authors: None. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
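The fixed-rate Oja update described in the summary, $z \gets z + \eta\, a_i^T z\, a_i$, can be simulated on a synthetic randomly ordered stream with a large eigenvalue gap; the planted instance, learning rate, and per-step normalization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5000
v1 = np.zeros(d)
v1[0] = 1.0                                # planted top eigenvector
rows = 0.1 * rng.standard_normal((n, d))
rows += (rows @ v1)[:, None] * v1 * 9.0    # boost variance along v1 (large gap R)
rng.shuffle(rows)                          # uniformly random row order

eta = 0.01
z = rng.standard_normal(d)
z /= np.linalg.norm(z)
for a in rows:                             # one pass: z <- z + eta * a * <a, z>
    z += eta * a * (a @ z)
    z /= np.linalg.norm(z)

corr = abs(z @ v1)                         # correlation with the true v1
```

Since the update is multiplicative in z, the sketch starts from a random unit vector rather than zero, as is standard for Oja's algorithm; on this easy instance with a constant gap, the final correlation is typically close to 1.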
Rebuttal 1: Rebuttal: We thank the reviewer for suggestions on citations for the random order model. We will add more references in the final version of the paper. > Using substreams for quadratic approximation When each row of the stream is independently sampled from a distribution, Hardt and Price analyze a similar block power method. In that case, it is much easier to argue that the substreams concentrate around the covariance matrix. But, in our case, we assume that the rows of the matrix are fixed and the only randomness is in the order the rows are presented to the streaming algorithm. This makes the problem quite different to the one considered by Hardt and Price and hence the techniques used to achieve our proofs are different as well. > Using Magdon-Ismail’s result We used the manuscript of Magdon-Ismail as the main reference since it directly has the result in the form we require. But we agree that citing Tropp’s work would be useful for the readers. We will include a citation in the final version of the paper.
Summary: The paper studies the problem of approximating the top eigenvector of $A^T A$ when rows of an $n \times d$ matrix $A$ are given in a stream. The authors consider worst-case inputs $A$ but assume the rows are presented in a uniformly random order. They show that when the gap parameter $R = \sigma_1(A)^2 / \sigma_2(A)^2 = \Omega(1)$, there is a randomized algorithm using $O(h \cdot d \cdot \text{polylog}(d))$ bits of space that outputs a unit vector $v$ with correlation $1 - O(1/\sqrt{R})$ with the top eigenvector $v_1$. Here, $h$ denotes the number of "heavy rows" in the matrix, defined as rows with Euclidean norm at least $|A|_F / \sqrt{d \cdot \text{polylog}(d)}$. The authors provide a lower bound showing that any algorithm using $O(hd/R)$ bits of space can obtain at most $1 - \Omega(1/R^2)$ correlation with the top eigenvector. Their results improve upon the $R = \Omega(\log n \cdot \log d)$ requirement in a recent work by Price. The authors' algorithm is based on a variant of the block power method. They also show improvements to the gap requirements in Price's analysis for both arbitrary order and random order streams. The techniques involve row sampling, concentration inequalities, and careful analysis of the power method with approximate quadratic forms. Strengths: The paper introduces a novel algorithm to approximate the top eigenvector in random order streams, a significant advancement over previous methods that required multiple passes over the data or had more stringent conditions. One of the key strengths is its method for handling heavy rows in the matrix. By storing the rows with large Euclidean norms separately and processing the remaining rows, the algorithm ensures an accurate approximation of the top eigenvector​. The algorithm achieves near-optimal space complexity, using $O(h \cdot d)$ space where $h$ is the number of heavy rows, which is shown to be nearly tight. 
Moreover, the paper improves upon the gap requirements needed for the algorithm to work. It reduces the gap $R$ from $\Omega(\log n \cdot \log d)$ to $\Omega(\log^2 d)$ for arbitrary order streams and from $\Omega(\log d)$ to constant for random order streams. The use of row norm sampling as a general technique to reduce the number of rows while preserving the top eigenvector is a versatile approach. Weaknesses: The main weakness of this work is the lack of empirical validation. Given the simplicity of their algorithm and the ease of generating synthetic matrices with a given sequence of singular values, it should be easy to perform experiments to compare it with Price's approach and provide insight into its practical effectiveness. Technical Quality: 4 Clarity: 4 Questions for Authors: How does your algorithm compare with Price's algorithm in practice? ### Typos - line 123, centralized equation, $CR^2$ should be $(CR^2)$. - line 124, $a/\|a\|_2$ should be $v/\|v\|_2$. - line 144, $td/R$ should be $hd/R$. - line 4 of Algorithm 1, $n\epsilon^2$ should be $(n\epsilon^2)$. - line 10 of Algorithm 1, $A_{j\cdot (2np):j\cdot (2np)+y_j}$ should denote the rows in the range $j\cdot (2np):j\cdot (2np)+y_j$, right? $j$ should be $j-1$, then. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. > Separation in Practice In Section 5 of the paper, we give a synthetically constructed instance for which Price's algorithm does not give a good approximation to the top singular vector. For instances observed in practice, it seems harder to show such a separation. In many practical instances, the observations, i.e., the rows, are sampled from a nice enough distribution for which earlier analyses of Oja's algorithm show very good approximability of the top eigenvector. --- Rebuttal Comment 1.1: Title: Answer to authors' rebuttal Comment: I thank the authors for their rebuttal and have no further questions. As an aside (not affecting my score), I think the paper would benefit from including, even if only briefly and informally, a discussion of the empirical side, e.g. what you mention regarding the many practical scenarios in which the distribution is nice enough and Oja's algorithm yields a good approximation. This is related to reviewer yHif's concern that your paper could better motivate the problem. --- Reply to Comment 1.1.1: Title: Response Comment: Thanks for the response. We will include our observations from experiments in the final version of the paper.
Summary: This paper is concerned with estimating a direction that is well-correlated with the top eigenvector $v_1$ of a matrix where the rows $a_1, \dots, a_n$ are streamed to us in uniformly random order. To make this well-defined, a gap assumption is required, i.e., that $R \coloneqq \lambda_1(A^{\top}A)/\lambda_2(A^{\top}A) \gg 1$. A successful algorithm is one that returns an estimate vector that is $1 - o_{R}(1)$ correlated with $v_1$ while using at most $\widetilde{O}(d)$ space. Note that if we are allowed to use $d^2$ space, then the problem is trivial, as we can simply maintain $A^{\top}A$. Thus, one of the main challenges is to figure out how to estimate $v_1$ when we are not allowed to remember too many of the rows we have seen so far. The main result is an algorithm that uses $\widetilde{O}(hd)$ space and outputs a unit vector $u$ such that $\langle u, v_1 \rangle \ge 1-C/\sqrt{R}$ for some universal constant $C$, and where $h$ denotes the number of "heavy" rows (those that contribute at least a $\sim 1/d$ fraction to the Frobenius norm of $A$). This dependence on $h$ is essentially optimal, as the authors give a lower bound saying that any algorithm that outputs a vector with at least $1-1/R^2$ correlation must use $hd/R$ space. The main technical contribution is to observe that if the rows of $A$ are randomly permuted and streamed to us, then in some sense, we can see $\sim \log d$ spectral approximations to $A$ in disjoint contiguous windows of the stream. Thus, the power method can be simulated by adding $a_i\langle a_i, u \rangle$ to the estimate in each round. To actually form the spectral approximations and implement this with low space, the authors use a row sampling result which says that one only needs to choose and reweight $\sim \rho/\varepsilon^2$ rows of $A$ to form a decent spectral approximation to $A$, where $\rho$ denotes the stable rank of $A$. 
So, the authors use the row sampling result to form $\log d$ many spectral approximations to $A$ and then simulate the power method on these approximations. The final result follows from showing that the top eigenvector of the spectral approximations is reasonably well correlated with the top eigenvector of $A$, which itself follows from standard eigenvector perturbation tools and the gap assumption. Although this does not immediately work for heavy rows (e.g. consider a more difficult scenario where there is one row orthogonal to all the other rows in the matrix, it has large norm, and it witnesses $v_1$, then we must remember it), the simple fix is to just identify heavy rows on the fly and store those exactly. As mentioned earlier, this linear dependence on the number of heavy rows is necessary. Strengths: The problem is a very natural one and the random order stream of a worst-case matrix is a nice assumption that allows one to get more optimistic results while not trivializing the question. The algorithm is really simple and easy to interpret. The solution/analysis are very elegant. The writing is also clear. Finally, the idea and execution are pretty insightful. Weaknesses: In order to implement the row sampling result, we need an estimate on the operator norm of $A$. The authors fix this by assuming that the coordinates of the inputs are between $1/poly(nd)$ and $poly(nd)$ and then maintaining $\log(nd)$ copies of the algorithm in parallel. This is not too big of a deal, but I wish this assumption were made more explicit towards the beginning of the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: None Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
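The block scheme sketched in this review, one power step per contiguous window of the randomly ordered stream, can be illustrated as follows; the planted matrix and block count are assumptions for the demo, and the heavy-row storage and row-norm sampling steps of the actual algorithm are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, num_blocks = 20, 6000, 30
v1 = np.zeros(d)
v1[0] = 1.0                             # planted top eigenvector
A = 0.1 * rng.standard_normal((n, d))
A += (A @ v1)[:, None] * v1 * 9.0       # large spectral gap along v1
rng.shuffle(A)                          # uniformly random row order

u = rng.standard_normal(d)
u /= np.linalg.norm(u)
for B in np.array_split(A, num_blocks):
    # In a random order stream each window is (after rescaling) a rough
    # spectral approximation of A, so B^T B u approximates A^T A u.
    u = B.T @ (B @ u)
    u /= np.linalg.norm(u)
```

Each window contributes one power-iteration step, so roughly log d windows suffice under a constant gap; on this synthetic instance the final iterate is highly correlated with the planted v1.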
null
Summary: Given a stream of $n$ data-points $a_1, a_2, \dots, a_n \in \mathbb{R}^d$, this paper studies the problem of approximating the top eigen-vector of the matrix $A^T A$ at the end of the stream, where $A \in \mathbb{R}^{n \times d}$ is the data-matrix with rows of the matrix corresponding to the data-points. This is the same problem as approximating the first principal component in PCA. Prior work on this problem by Price (2023) gave an algorithm that works with some worst-case guarantees i.e. both the data-points and the order of the data-points can be chosen by the adversary. This paper is the first to study the setting where although the data-points can be chosen arbitrarily by the adversary, the order of arrival of the data-points is uniformly at random. For this setting, the paper provides an algorithm with improved guarantees over the algorithm by Price. Additionally, they also provide a lower bound for streaming algorithms for the random-order arrival model. Strengths: Originality: First paper to study the problem of approximating the top eigen-vector in the random-order arrival setting. Quality and Clarity: Overall well-written paper. Significance: The paper can certainly do a better job of motivating the problem. This paper is primarily a theory paper but since the problem studied is closely related to PCA, which is obviously very important in practical applications, it is easy to imagine scenarios where the algorithm described in this paper can be implemented. 
For example, in the case of PCA for population genetics, where the dimensionality of the data (number of genetic markers) can exceed 10 million, and the number of samples in standard datasets like UK Biobank is of the order of hundreds of thousands, you could imagine running this algorithm in one pass to approximate the top principal component, running it on a second pass to approximate the second principal component, and then using a third pass to map all the points using these two components (most PCA studies in population genetics papers just use 2 components as they are easy to visualize in papers). Even otherwise, as multiple papers on the closely-related "Streaming PCA" problem have appeared in past iterations of NeurIPS and other related conferences, this paper is clearly relevant to NeurIPS. Weaknesses: The paper needs to do a better job of motivating the problem. This is a submission to NeurIPS, not COLT. Clearly, the paper has interesting technical contributions, but I think this might be a major weakness for the paper. Some suggestions for the same: It would be nice for the introduction of the paper to have at least a few lines motivating the case for high-dimensional d and why the distinction between $d^2$ and $d$ is something which matters in real-world applications. For example, see the paper “Memory Limited, Streaming PCA” by Mitliagkas, Caramanis, and Jain, which appeared in NeurIPS 2013: “In certain high-dimensional applications, where data points are high resolution photographs, biometrics, video, etc., $p$ often is of the order of $10^{10}$ - $10^{12}$, making the need for $O(p^2)$ memory prohibitive. At many computing scales, manipulating vectors of length $O(p)$ is possible, when storage of $O(p^2)$ is not.” A great reference to add on line 80 would be the book chapter by Gupta and Singla titled “Random-Order Models”, which appeared in the book “Beyond Worst-Case Analysis” edited by Tim Roughgarden.
Technical Quality: 4 Clarity: 4 Questions for Authors: How do the guarantees in this paper extend to the setting where you need to approximate not just the top eigen-vector, but the top $k$ principal components, for some arbitrary value of $k$? I understand that this might not be straight-forward to answer, but at the very least, it would be nice to have a conclusion or future directions section at the end of the paper, with a brief discussion on this. Minor editing comment: For the paper “Syamantak Kumar and Purnamrita Sarkar. Streaming PCA for Markovian data.”, please use the Bibtex citation from the official NeurIPS website instead of Google Scholar. The current citation reads as NeurIPS 2024, but it actually appeared in NeurIPS 2023. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Primarily a theory paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the reference suggestions. We will add them to the introduction in the final version. > How do the guarantees in this paper extend to the setting where you need to approximate not just the top eigen-vector, but the top $k$ principal components, for some arbitrary value of $k$? In practice, one can make $k$ passes over the data and, by projecting away the vectors computed in previous rounds, obtain approximations to the top $k$ singular vectors. But we do not have any guarantees on how good such approximations are. The main reason is that the approximation guarantee we obtain in the first pass, i.e., $\langle\hat{v}, v_1\rangle^2 \ge 1 - 1/\sqrt{R}$, is not sufficient to ensure that the gap between the top 2 singular values of the “residual stream”, i.e., of the vectors $(I-\hat{v}\hat{v}^T)a_1, \ldots, (I-\hat{v}\hat{v}^T)a_n$, is large enough for the algorithm to work. It is an interesting open question to extend our algorithm, or to obtain an algorithm via a different approach, for k-SVD on worst-case input matrices $A$ presented to the algorithm in a uniformly random order. We note that the work of Price also only applies to the top eigenvector, and the same open question applies there as well.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Voxel Mamba: Group-Free State Space Models for Point Cloud based 3D Object Detection
Accept (spotlight)
Summary: The paper focuses on a new architecture, Voxel Mamba, for 3D object detection using point clouds. It introduces a group-free approach utilizing state space models (SSMs) to address the limitations of previous voxel-based methods like small receptive fields and inefficient grouping. Voxel Mamba leverages asymmetrical SSMs for handling multi-scale contexts and employs space-filling curves for efficient data serialization. They have evaluated the method on the Waymo and nuScenes datasets. Strengths: 1. Integrating group-free state space models and space-filling curves significantly advances traditional voxel-based methods. This allows for better handling of long-range dependencies and multi-scale contexts. 2. The manuscript provides comprehensive empirical evidence, showing improvements in accuracy and efficiency on prominent benchmarks (nuScenes and Waymo). 3. The use of the Hilbert curve for input serialization and the introduction of asymmetrical state space models are well-explained and demonstrate a deep understanding of the challenges in 3D object detection. Weaknesses: 1. There are typos, e.g., Line 293: "finner" should be "finer"; 2. The manuscript's depth, while comprehensive, might be overwhelming for readers unfamiliar with the domain, particularly with advanced concepts like Hilbert curves and asymmetrical state space models. It might also pose challenges in implementation, especially for teams with limited expertise in these areas. 3. Voxel Mamba might require substantial computational resources, e.g., GPU memory and processing power, which could limit its deployment in constrained environments. 4. The complexity of the Voxel Mamba model doesn't yield proportionately significant performance improvements. Voxel Mamba achieves superior performance on nuScenes and Waymo.
However, while statistically significant, the improvement margins might be weighed against the increase in model complexity, implementation difficulty, and computational resource requirements. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does the model perform under different weather conditions or with varying light levels, which are common challenges in outdoor 3D object detection? 2. What specific modifications would be necessary to adapt this model for indoor use, where point cloud data might come from different types of sensors or have different characteristics like higher density? 3. Can Voxel Mamba operate in a real-time setting given its computational demands? What are the latency metrics when deployed in such environments? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: See the above "Questions" and "Weaknesses" parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1, Q2] Manuscript refinement.** We apologize for the typos and presentation issues. We promise to thoroughly proofread the entire manuscript and correct the grammatical and typographical errors as much as possible. Considering that some concepts may be difficult for newcomers to this field, we will include more intuitive explanations and visualizations to aid their understanding. We will certainly release our code and pre-trained models to provide the detailed implementation. **[Q3, Q4] Computational complexity.** Firstly, it's important to clarify that Voxel Mamba's computational complexity significantly differs from Mamba in language models. In our implementation, its layers and parameters are carefully aligned with standard 3D object detectors. Besides, in Table 5(e) and Figure 3 of the main paper, we compared our method with previous state-of-the-art (SOTA) methods in terms of latency, performance, and GPU memory. Here, we provide a more detailed comparison between Voxel Mamba and previous SOTA serialization-based methods. Actually, Voxel Mamba's computational demands are comparable to or even lower than those of other SOTA serialization-based methods. As shown in the following table, Voxel Mamba differs from DSVT only in the 3D backbone but outperforms DSVT by +1.5 L2 mAPH with less GPU memory and inference latency. All the results are tested on the same device.

| Method | Type | Backbone | Memory (GB) | Latency (ms) | L2 mAPH |
|----------------|----------|---------------|-------------|--------------|------------|
| SST | Pillar | Transformers | 6.8 | 71 | 64.6 |
| VoxSet | Pillar | Transformers | 5.3 | 73 | 67.7 |
| Voxel Mamba | Pillar | SSMs | 3.5 | 66 | **71.2** |
| DSVT | Voxel | Transformers | 4.2 | 94 | 72.1 |
| Voxel Mamba | Voxel | SSMs | 3.7 | 90 | **73.6** |

**[Q5] Robustness to weather and light.** Thanks for the constructive suggestion. Adverse weather can cause scattered points due to attenuation and backscattering. 
To evaluate Voxel Mamba's robustness to adverse weather, we simulated various fog densities using the fog augmentation method in [1] on the point clouds during inference. The table below shows the mAP results on the nuScenes dataset with fog severity from 1 to 5 (heavy). One can see that Voxel Mamba consistently outperforms DSVT across all fog densities. Moreover, as severity increases, the performance advantage of Voxel Mamba grows, indicating our method's superior robustness to adverse weather.

| Method | Clean | Severity 1 | Severity 2 | Severity 3 | Severity 4 | Severity 5 |
|---------------|-------|------------|------------|------------|------------|------------|
| DSVT | 66.4 | 65.6 | 63.4 | 55.3 | 47.2 | 25.1 |
| Voxel Mamba | 67.5 (+1.1) | 66.9 (+1.3) | 64.9 (+1.5) | 58.0 (+2.7) | 50.7 (+3.5) | 26.8 (+1.7) |

[1] Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather. ICCV21 **[Q6] Indoor semantic segmentation.** Thanks for the question. Please kindly refer to **shared response Q3** for details. **[Q7] Real-time Voxel Mamba.** It is a common practice to optimize 3D detection algorithms for deployment in autonomous driving systems. Frameworks such as TensorRT are widely used to enhance memory efficiency and inference speed. Considering that the frequency of outdoor LiDAR sensors ranges from 5 to 20 Hz, the optimized Voxel Mamba can indeed meet real-time requirements. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal. Thank you for your help with the NeurIPS! Best, Your AC
Summary: The paper introduces a novel architecture for 3D object detection, addressing the limitations of current methods which rely on grouping operations. The proposed Voxel Mamba leverages State Space Models (SSMs) to capture long-range dependencies and process the entire scene in a single sequence, avoiding inefficient grouping. It employs a group-free strategy, an Asymmetrical State Space Models (ASSMs) block for multi-scale context, and an Implicit Window-based Position Embedding (IWPE) to maintain token locality. Experiments on the Waymo Open and nuScenes datasets demonstrate superior performance in accuracy and efficiency over established methods. Strengths: 1. The Voxel Mamba's group-free strategy is innovative, potentially offering a more efficient and effective approach to handling point cloud data compared to traditional grouping methods. 2. The use of State Space Models (SSMs) for point cloud processing is a novel concept that shows promise for capturing long-range dependencies in 3D object detection tasks. 3. The Asymmetrical State Space Models (ASSMs) block cleverly integrates multi-scale context into the model, which likely enhances the feature representation and detection accuracy. 4. The Implicit Window-based Position Embedding (IWPE) is a thoughtful addition to maintain the fine-grained 3D position information crucial for accurate object detection. 5. The architecture's flexibility to integrate into existing detection frameworks is a significant advantage, as it allows for the incremental adoption of Voxel Mamba in various systems. 6. The extensive experimental evaluation on large-scale and complex datasets like Waymo Open and nuScenes is commendable and provides strong evidence of the model's capabilities. Weaknesses: 1. The introduction does not clearly explain what problem the State Space Model is meant to solve, the motivation of the article, or the advantages of Mamba. 2. 
While the paper claims high efficiency, specific computational metrics such as runtime and memory usage for different model sizes or input resolutions are not provided for a comprehensive assessment. 3. The paper does not provide visual experimental results. 4. The generalization of Voxel Mamba's performance to other datasets or domains beyond autonomous driving scenarios is not discussed, limiting the understanding of its versatility. 5. The paper lacks a detailed comparison with other state-of-the-art methods in terms of computational efficiency, which is critical for applications with limited computational resources. 6. The robustness of the model to variations in point cloud density and sparsity is not thoroughly investigated, which could be a concern for outdoor scenes with varying environmental conditions. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The role of using downsampling sequences in the backward SSM sequence in Figure 2 is not clearly explained in the paper. What does downsampling do? 2. Are there any improvements made to the object detection baseline, including the loss function and pipeline? 3. Is there any redundancy in your title? "3D Object Detection" is itself a 3D point cloud task, so why is "for Point Cloud" written in front of it? 4. How does the computational complexity of Voxel Mamba scale with the size and density of the point cloud data? 5. What are the specific advantages of using SSMs over traditional CNN or Transformer architectures in the context of 3D object detection from point clouds? 6. Can the authors provide more details on the design choices behind the Implicit Window-based Position Embedding and its impact on the model's receptive field? 7. How does Voxel Mamba handle dynamic objects or scenes with non-static elements, which are common in autonomous driving scenarios? 8. 
How does the performance of Voxel Mamba degrade with respect to various levels of occlusion or truncation of objects in the point cloud data? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. The paper does not mention the issue of computing resource consumption, or whether higher computing resources are required. 2. The paper does not explicitly discuss the model's generalization ability, that is, how well the model performs in new scenarios or under different environmental conditions. The paper's focus on autonomous driving scenarios may limit the general applicability of the findings to other fields where 3D object detection is relevant. 3. The reliance on large-scale datasets for training and evaluation might not fully capture the diversity of real-world scenarios, potentially limiting the model's robustness. 4. The potential impact of adversarial attacks or model overfitting to the specific characteristics of the training datasets is not discussed, which is important for model security and reliability. 5. The performance of Voxel Mamba on the Waymo Open and nuScenes datasets may not directly translate to other types of datasets, potentially limiting the model's universal applicability. 6. It is not clear whether the current Voxel Mamba model can be extended to a multi-task learning framework, such as simultaneously performing object detection, classification, and pose estimation tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1] Clarifying our motivation.** Thanks for the question. Please kindly refer to our responses to **Q1 of Reviewer kB9g** for details. **[Q2] Efficiency and performance for different model sizes and resolutions.** We appreciate the reviewer's feedback and conducted additional experiments. The results regarding the model size (i.e., feature dimension) and BEV resolution (voxel size) are shown in the following two tables.

|Model Size|L2 mAPH|Vehicle(L1/L2)|Pedestrian(L1/L2)|Cyclist(L1/L2)|Latency(ms)|Memory(GB)|
|----------|-------|--------------|------------------|--------------|-----------|----------|
|dim=96|71.3|78.7/70.3|83.6/76.2|76.3/73.6|84|3.6|
|dim=128|71.6|79.0/70.7|84.0/76.7|76.5/73.7|90|3.7|
|dim=192|72.4|79.2/70.8|84.7/77.5|77.8/74.9|102|3.9|

The above table demonstrates a consistent performance improvement as the dimension increases, which correlates with the model scale.

|Voxel Size|L2 mAPH|Vehicle(L1/L2)|Pedestrian(L1/L2)|Cyclist(L1/L2)|Latency(ms)|Memory(GB)|
|----------|-------|--------------|------------------|--------------|-----------|----------|
|0.24m|71.5|78.7/70.4|84.0/76.3|76.9/74.0|114|5.4|
|0.32m|71.6|79.0/70.7|84.0/76.7|76.5/73.7|90|3.7|
|0.40m|71.4|79.0/70.7|83.6/76.2|76.5/73.8|81|3.2|

The above table demonstrates that Voxel Mamba remains effective across different voxel sizes. Increasing the voxel size reduces the input BEV resolution, enhancing efficiency. Voxel Mamba performed best with a voxel size of 0.32m. **[Q4] Generalization of Voxel Mamba.** Thanks for your question. Please kindly refer to the **shared response Q3**. **[Q5] More comparison with other state-of-the-art methods.** We would like to draw attention to Figure 3 in our manuscript, which provides a comprehensive comparison of our method with state-of-the-art approaches commonly benchmarked in the field of 3D object detection. 
**[Q6] Robustness to density and sparsity.** We re-benchmarked the performance in different range intervals on Waymo to assess Voxel Mamba's robustness to density and sparsity. The following table indicates that Voxel Mamba is more robust than DSVT.

| Method | Overall | [0, 30m) | [30m, 50m) | [50m, inf) |
|----------------|---------|----------|------------|------------|
| DSVT | 69.69 | 83.80 | 67.73 | 49.38 |
| Voxel Mamba | 71.61 | 85.51 | 70.31 | 50.60 |

**[Q7] Motivation of downsampling in ASSMs.** Though space-filling curves can preserve the 3D structure to a certain degree, proximity loss is inevitable due to the dimension collapse from 3D to 1D. As a result, a local snippet of the curve can only cover a partial region in 3D space. Placing all voxels in a single group cannot ensure that the effective receptive field (ERF) covers all voxels. Therefore, we introduce downsampling (a hierarchy of state space structures) and consequently improve the ERF of the model. **[Q8] Clarification on pipeline.** Please kindly refer to our responses to Q4 of Reviewer kB9g for details. **[Q9] Clarification on title.** We respectfully disagree with your opinion. To clarify, while 3D object detection is often associated with point clouds, it is not exclusively a point cloud-based task. **[Q10] Computational complexity with point densities.** Refer to **shared responses** for details. With increased point densities (4-frame), Voxel Mamba is 20ms faster than DSVT; the gap is more pronounced than in the 1-frame setting, which indicates our method's efficiency. **[Q11] Advantage over Transformers and Spconv.** For details on Transformers, please kindly see **Q1 of Reviewer kB9g**. It is important to note that traditional CNNs are typically avoided in 3D detection due to slow inference on sparse data. Compared to SparseConv, Voxel Mamba has significantly larger ERFs, enabling better capture of long-range dependencies in point clouds. 
Additionally, our method is more deployment-friendly for autonomous driving than previous methods since no padding tokens are needed. **[Q12] Motivation of IWPE.** The window-based position embedding has been proven effective in previous window-based transformers, as it retains the proximity of voxels within a local window. As a group-free model, we do not explicitly partition the voxels into windows. Instead, we implicitly encode their positions within the window as additional information in the voxel sequence, which significantly enhances the SSM model's ability to extract useful information from the context. **[Q13] Dynamic object.** As indicated in Table 1 of our manuscript, all our experiments were conducted in a single-frame setting. Consequently, the inputs do not include dynamic objects, as each frame is processed independently. To address the concern about non-static elements, we report the multi-frame results in the **shared response**. **[Q14] Degradation with truncation.** To evaluate Voxel Mamba's robustness to truncation, we compared it with DSVT under various truncation levels using the cutout methods in [1]. The table below shows the mAP of Voxel Mamba and DSVT on the nuScenes dataset with truncation severities ranging from 1 to 5. One can see that Voxel Mamba consistently outperforms DSVT across all truncation levels, indicating our method's superior robustness.

| Method | Clean | Severity 1 | Severity 2 | Severity 3 | Severity 4 | Severity 5 |
|---------------|-------|------------|------------|------------|------------|------------|
| DSVT | 66.4 | 65.1 | 64.7 | 63.1 | 62.2 | 60.5 |
| Voxel Mamba | 67.5 (+1.1) | 66.4 (+1.3) | 65.8 (+1.1) | 64.9 (+1.8) | 63.5 (+1.3) | 61.8 (+1.3) |

[1] Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving. CVPR23 **[Q15] Adversarial attacks, dataset overfitting, and multi-task learning.** Thanks for your suggestion. 
In this paper, we aim to establish a strong backbone for 3D object detection, and those issues will be discussed in future work. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal. Thank you for your help with the NeurIPS! Best, Your AC --- Rebuttal 2: Title: We are open to any further discussion Comment: Dear reviewer: Thank you so much for reviewing our paper. We hope our explanation can resolve your concerns. During this discussion phase, we welcome any further comments or questions regarding our response and main paper. If anything requires further clarification, please do not hesitate to bring it up. We will promptly address and resolve your inquiries. We are looking forward to your feedback. --- Rebuttal 3: Comment: The authors' rebuttal addressed many of the issues I raised. However, I still have some doubts about this article. Although the authors have further elaborated on the motivation, there are still some areas of doubt: for visual perception tasks such as object detection, is contextual information really important, and what is the interpretability of using RNN-type network frameworks? Overall, the paper is quite innovative, so I'm inclined to rate it as borderline acceptable. --- Rebuttal 4: Comment: We really appreciate this reviewer's recognition of the innovation of our work. Yes, contextual information is important in point cloud object detection. Since 3D point clouds lack distinctive textures and appearances, detectors rely more heavily on contextual information for learning semantics. We have developed several designs to enhance Voxel Mamba's ability to capture contextual information in the paper. 
To increase contextual information for voxel tokens, we proposed a group-free strategy to serialize all voxels in the scene into a single sequence. Besides, we also integrated a multi-scale design to broaden the effective receptive field and capture contextual information at various scales. We used an RNN to address the inefficiency of Transformers when expanding the input context. In the large-context setting, we adopted the Mamba (RNN) design to enhance efficiency, as latency is crucial in outdoor 3D object detection for autonomous driving. As shown in the following table, RNNs have a significant speed advantage over Transformers with longer input voxel sequences, which often reach the order of $10^5$ in 3D detection scenarios.

| Method | 1K | 2K | 4K | 8K | 16K |
|------------|----------|----------|----------|-----------|------------|
| Transformer| 0.47 ms | 1.61 ms | 5.74 ms | 26.02 ms | 114.20 ms |
| SSM | 0.41 ms | 0.43 ms | 0.50 ms | 0.61 ms | 0.94 ms |

If there is any misinterpretation of your question, we are open to further discussion.
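The linear-vs-quadratic scaling behind the latency table above can be pictured with a toy sketch (this is not Mamba's input-dependent selective scan; the fixed coefficients `a` and `b` and the function names are simplifying assumptions for illustration):

```python
import numpy as np

def attention_scores(x):
    # Self-attention materializes an L x L score matrix:
    # O(L^2) time and memory in the sequence length L.
    return x @ x.T  # shape (L, L)

def ssm_scan(x, a=0.9, b=0.1):
    # A linear state-space recurrence h_t = a*h_{t-1} + b*x_t:
    # O(L) time with a constant-size hidden state, so a 16K-voxel
    # sequence costs only ~2x an 8K one instead of ~4x.
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = a * h + b * xt
        out[t] = h
    return out
```

Doubling the sequence length quadruples the attention matrix but only doubles the scan's work, which is why the rebuttal's SSM timings stay nearly flat from 1K to 16K tokens.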
Summary: This paper proposes a novel backbone named Voxel Mamba for point cloud 3D detection. Different from previous methods that group the voxels into fixed-length sequences through padding, this paper adopts an interesting group-free strategy to sort all voxels into a single sequence through a space-filling curve. The paper also proposes Asymmetrical State Space Models (ASSMs) and an Implicit Window-based Position Embedding (IWPE) to enlarge the receptive fields of Voxel SSMs. Experiments are mostly based on challenging outdoor datasets, the Waymo Open Dataset and nuScenes. Experiments demonstrate that Voxel Mamba surpasses previous methods. Strengths: 1. The paper presents a novel approach to deal with the irregularity problem of point cloud detection. By taking advantage of the linear complexity of SSMs, this paper employs a group-free strategy which is more efficient and deployment-friendly than previous group-based methods. 2. ASSMs and IWPE are well motivated. The multi-scale design and implicit window partition are integrated under the group-free design in an economical way. Besides, the figures in this paper are clear and greatly aid in the comprehension of the proposed methods. 3. The proposed Voxel Mamba exhibits a notably larger ERF than the group-based method DSVT. The visualization effectively supports this claim. 4. The experiments are thorough and convincingly demonstrate the effectiveness of the proposed method. Weaknesses: 1. The experiments on model architectures are a bit insufficient. In this paper, the authors extend Mamba to bidirectional Mamba (line 68) and further to Voxel Mamba (line 74). However, there are no experiments demonstrating the performance gains from these upgrades. As Voxel Mamba is based on Mamba, it would be necessary to report the accuracy of applying group-free bidirectional Mamba and Mamba with the same SSM parameters or layers. 2. 
Given that the group-free strategy is a significant contribution of this paper, it is necessary to include more analysis and comparisons between group-free and group-based operations. For example, the authors could provide a detailed latency comparison between window-based grouping (such as DSVT and SST) and HIL. Besides, on Line 188, the authors mention that the mapping process takes 0.7ms. However, the latency is likely to vary with different sequence lengths. Please provide a clarification regarding the timing. 3. Lack of comparison with non-group-free Mamba. Although this work provides quantitative and visualization comparisons between group-free and group-based (DSVT) methods, it is necessary to provide the ERFs and detection accuracy for non-group-free Voxel Mamba. As a linear RNN model, what difference does Mamba introduce compared to Transformers? 4. The analysis of the ablation on ASSM and HIL is a little vague. Specifically, it is unclear why performance drops under the downsampling settings {2,2,2} and {4,4,4} in Table 5(b). Besides, the window sweep, Z-order, and Hilbert methods in Table 5(d) exhibit similar performance. Can I infer that the form of the space-filling curve does not significantly affect the accuracy? Please provide a discussion on these points, which would help improve the quality of this paper. 5. The Hilbert Input Layer (HIL) sorts the voxels by querying the space-filling curve template. As presented in Sec. 3.3, from my understanding, the template needs to cover the whole scene. My concern is the amount of GPU memory required by the template under the current voxel size setting and its contribution to the total memory consumption, and likewise for a more fine-grained voxel size such as (0.1m, 0.1m, 0.15m) as in CenterPoint. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses above. Overall, the proposed group-free strategy has better efficiency than previous group-based operations without token padding. 
Voxel Mamba also demonstrates better performance than existing sparse convolution or transformer structures. As such, I recommend acceptance at this stage. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been included Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
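The group-free serialization that this review credits can be sketched as a single argsort over space-filling-curve keys, with no windows and no padding tokens (a Z-order/Morton key is shown for brevity; the paper uses a Hilbert curve, and the function names here are illustrative, not the authors' implementation):

```python
import numpy as np

def morton_key(x, y, bits=16):
    # Interleave the bits of the two BEV voxel coordinates (Z-order curve),
    # giving nearby cells nearby keys.
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def serialize_group_free(voxel_coords):
    # One sequence for the whole scene: sort every non-empty voxel by its
    # curve key, instead of partitioning voxels into fixed-size groups.
    keys = np.array([morton_key(x, y) for x, y in voxel_coords])
    return np.argsort(keys, kind="stable")
```

The returned order feeds all voxels to the SSM as one variable-length sequence, which is what makes the approach padding-free and, as the review notes, deployment-friendly.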
Rebuttal 1: Rebuttal: **[Q1] Comparison with other variants.** Thanks for your insightful comments. To demonstrate the improvements of our method, we have conducted additional experiments comparing Voxel Mamba with group-based and group-free bidirectional Mamba. The group-based bidirectional Mamba uses the DSVT Input Layer to partition the voxel groups. The group-free bidirectional Mamba is based on our Hilbert Input Layer. Each variant retains the same number of SSM layers for consistency in comparison.

| Method | L2 mAPH | Vehicle (L1) | Vehicle (L2) | Pedestrian (L1) | Pedestrian (L2) | Cyclist (L1) | Cyclist (L2) |
|:--|--:|--:|--:|--:|--:|--:|--:|
| Group-based bidirectional Mamba | 68.5 | 76.1 / 75.7 | 67.8 / 67.4 | 81.8 / 75.8 | 74.2 / 68.6 | 73.3 / 72.3 | 70.6 / 69.6 |
| Group-free bidirectional Mamba | 71.0 | 78.3 / 77.9 | 70.0 / 69.6 | 83.0 / 78.1 | 75.6 / 70.9 | 76.3 / 75.2 | 73.4 / 72.4 |
| Voxel Mamba | 71.6 | 79.0 / 78.5 | 70.7 / 70.2 | 84.0 / 79.1 | 76.7 / 72.0 | 76.5 / 75.4 | 73.7 / 72.7 |

The above table shows that enhanced spatial proximity can significantly improve the detector's performance. Additionally, our innovative multi-scale design yields additional performance gains, which can be attributed to the expansion of ERFs. **[Q2] Advantage over group-based operations.** Thanks for your suggestions. Please kindly refer to **shared response Q1** for the advantage of group-free operations. **[Q3] Comparison with group-based Mamba.** To address the reviewer's concern, we compared the performance of group-free and group-based Mamba (refer to the table in our response to Q1). The results demonstrate the effectiveness of our group-free strategy, which can enhance the 3D proximity in the 1D sequences. Besides, we provide the ERFs of group-based (DSVT partition) Mamba in Figure 1 of the rebuttal PDF. **[Q4] Analysis on ASSMs.** We appreciate the reviewer's insightful comment. 
Transitioning from \{1,1,1\} to \{1,2,2\} and to \{1,2,4\} enhances performance due to an enlarged effective receptive field and improved proximity from using larger downsampling rates at late stages. ASSMs with \{2,2,2\} or \{4,4,4\} compromise performance compared to \{1,1,1\}, indicating that using larger downsampling rates at early stages loses some fine details. Thus, we set the stride as \{1,2,4\} to strike a balance between effective receptive fields and detail preservation. Additionally, your inference about the similar performance of different space-filling curves is astute. Voxel Mamba is designed with significantly large ERFs, which indeed diminishes the impact of the type of space-filling curve on performance. We will expand our analysis in the revision. **[Q5] Curve template memory.** Yes, we record the traversal positions along the space-filling curves of all potential voxels offline. The curve templates are adaptively generated based on the input BEV resolution. To address the reviewer's concern, the table below lists the GPU memory usage of the templates. The input BEV resolution is (468, 468), with one curve template for each ASSM scale.

| Template Resolution | Memory (MB) |
|---------------------|-------------|
| (512, 512) | 83.0 |
| (256, 256) | 8.6 |
| (128, 128) | 1.2 |

--- Rebuttal Comment 1.1: Comment: Dear Reviewer, This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal. Thank you for your help with the NeurIPS! Best, Your AC
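Such offline curve templates can be precomputed with the standard 2D Hilbert coordinate-to-index routine (a sketch that assumes a square grid of side `2**order`; the paper's actual implementation, including any 3D extension, may differ):

```python
def hilbert_index(order, x, y):
    """Map cell (x, y) on a 2**order x 2**order grid to its position
    along the Hilbert curve (classic bit-manipulation algorithm)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so recursion stays consistent.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Evaluating this once per cell of the BEV grid yields exactly such an offline template, so sorting the live voxels reduces to a table lookup plus an argsort; consecutive indices are always grid neighbors, which is the locality property the rebuttal relies on.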
Summary: The authors introduce Voxel Mamba, a 3D object detection method based on state-space models. Voxel Mamba introduces an efficient group-free approach to avoid inefficient grouping operations and prevent limitations on the receptive field for 3D object detection in point clouds. The authors also present Asymmetrical State Space Models (ASSMs) to capture multi-scale context information, offering each voxel data-dependent global contexts. Voxel Mamba enables voxel interactions throughout the scene and captures point cloud locality using space-filling curves. Extensive experiments validate its performance, positioning Voxel Mamba as a viable alternative for 3D object detection. Strengths: 1. The paper is well-organized and easy to follow. 2. The authors are the first to delve into the potential of Mamba in 3D object detection, which is an intriguing subject. Weaknesses: 1. The motivation can be improved. The authors must clarify why SSM is essential for their motivation and quantify the advantages it offers over transformers for this task. The question in lines 55-56 does not demonstrate SSM's superiority. The authors' design for SSM is also hardly inspiring for the 3D detection community. 2. Some of the method's designs confuse me. Traversing the downsampled feature maps with a Hilbert curve in the opposite direction can misalign the feature receptive fields. Does this challenge model training with varying inputs, even with IWPE included? Additionally, given its use of multi-scale features, how much does ASSM outperform FPN? 3. The experimental comparisons may be inadequate. The authors present a pillar version of Voxel Mamba but fail to compare its performance and efficiency against current state-of-the-art pillar-based methods in the main table. 4. The experimental details should be more clearly documented. For example, does Voxel Mamba adopt the IoU-rectification scheme? 
The comparative setup in the ablation study could have been described in more detail, with analysis of why the method designed by the authors is effective. 5. How does Voxel Mamba integrate with multi-frame temporal 3D detection? Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Besides, there are a few other suggestions: 1. Could the authors provide more description of Window Sweep in Table 5(c)? Does it include the shifted regional grouping (shifted token groups) operation used in SST? 2. Is a Linear layer for the up-projection missing from the SSM part of Fig. 2? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
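The "implicit" window position encoding the review asks about (design choices behind IWPE) can be pictured as deriving each voxel's in-window coordinate arithmetically, so windows are never materialized or padded. This is a guess at the mechanism, not the authors' code; the (12, 12) window size and the function name are illustrative assumptions:

```python
import numpy as np

def implicit_window_coords(bev_coords, window=(12, 12)):
    # In-window position of each voxel, computed per voxel with modular
    # arithmetic over its BEV coordinates -- the windows themselves are
    # never built, shifted, or padded.
    w = np.asarray(window)
    return np.asarray(bev_coords) % w
```

The resulting local coordinates could then index a learned embedding table and be added to each token, retaining within-window proximity while the sequence itself remains group-free.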
Rebuttal 1: Rebuttal: **[Q1] Motivation.** Thanks for the question, and we are sorry for not making the motivation clear enough. The motivation of our work is to leverage the linear-complexity sequence modeling of SSMs for more effective feature learning in 3D point cloud based object detection. The advantage of SSM over Transformer lies in its ability to more efficiently explore a large context (as shown in Fig. 4 of the main paper), i.e., an extremely long voxel sequence, for effective feature learning. From the table below, one can see that for more than 4K voxel contexts, a single transformer layer consumes more than 5ms, which is impractical for perception tasks. However, even for a sequence of 16K voxels, SSM only consumes 0.94ms.

| Method | 1K | 2K | 4K | 8K | 16K |
|--------------|--------|--------|---------|---------|---------|
| Transformer | 0.47 ms| 1.61 ms| 5.74 ms | 26.02 ms| 114.20 ms|
| SSM | 0.41 ms| 0.43 ms| 0.50 ms | 0.61 ms | 0.94 ms |

Unlike previous window-based voxel transformers, we are the first to model the voxel sequence from the entire scene. By utilizing Mamba's selective scan mechanism, we can effectively retain useful information from the context without the need for complex voxel grouping. This group-free approach is insightful for handling large-scale sparse data. **[Q2] Effectiveness of multi-scale design.** We appreciate the reviewer's insightful observation. In fact, as a sequence-based model, the receptive fields of voxel features only rely on the sequence length, and they do not necessarily need to be aligned. Mamba can selectively retain useful information after scanning the sequence back and forth. Here we further compare ASSM with FPN on the Waymo dataset with 20% training data. Following Voxel Mamba, we utilize Spconv and its counterpart SpInverseConv as downsampling and upsampling operations. We keep the same downsampling strides in both structures. 
The table below shows that our ASSM achieves superior performance in the 3D object detection tasks.

| Method | L2 mAPH | Vehicle (L1) | Vehicle (L2) | Pedestrian (L1) | Pedestrian (L2) | Cyclist (L1) | Cyclist (L2) |
|-----------------------|---------|--------------|--------------|-----------------|-----------------|--------------|--------------|
| Mamba with FPN | 68.2 | 77.4 / 76.9 | 69.1 / 68.6 | 81.3 / 74.0 | 73.8 / 66.9 | 72.1 / 71.0 | 69.4 / 68.3 |
| Voxel Mamba-Pillar | 69.5 | 78.1 / 77.6 | 69.8 / 69.3 | 82.4 / 75.5 | 75.0 / 68.5 | 74.6 / 73.4 | 71.8 / 70.7 |

**[Q3] Comparison with state-of-the-art pillar-based methods.** Thanks for the suggestion. The following table compares our pillar-based Voxel Mamba with previous state-of-the-art pillar-based methods. We can see that Voxel Mamba outperforms previous methods on both L2 mAP and mAPH while achieving comparable inference speed.

| Method | L2 mAP/mAPH | Vehicle (L1) | Vehicle (L2) | Pedestrian (L1) | Pedestrian (L2) | Cyclist (L1) | Cyclist (L2) | Latency (ms) |
|------------------------|-------------|--------------|--------------|-----------------|-----------------|--------------|--------------|--------------|
| SST | - | 76.2 / 75.8 | 68.0 / 67.6 | 81.4 / 74.1 | 72.8 / 65.9 | - | - | 71 |
| DSVT-Pillar | 73.2 / 71.0 | 79.3 / 78.8 | 70.9 / 70.5 | 82.8 / 77.0 | 75.2 / 69.8 | 76.4 / 75.4 | 73.6 / 72.7 | 64 |
| Voxel Mamba-Pillar | 73.5 / 71.2 | 80.2 / 79.7 | 72.1 / 71.7 | 83.5 / 77.6 | 76.1 / 70.5 | 75.0 / 74.0 | 72.2 / 71.3 | 66 |

**[Q4] Detailed experiments and ablation studies.** We apologize for any confusion regarding the details of our experiments. For a fair comparison, our experiment settings follow the schemes in DSVT, using the Adam optimizer with a weight decay of 0.05, a one-cycle learning rate policy, and a maximum learning rate of 2.5e-3. Models were trained with a batch size of 24 for 24 epochs on 8 NVIDIA A6000 GPUs. 
During inference, we use class-specific NMS with IoU thresholds of 0.7, 0.6, and 0.55 for vehicles, pedestrians, and cyclists, respectively. Ground-truth copy-paste data augmentation was used during training and disabled in the last epoch. Our setting aligns closely with the previous state-of-the-art DSVT, and the primary difference is in the backbone. In addition, **Voxel Mamba does not adopt the IoU-rectification scheme**. In our ablation study, we assessed the effectiveness of various components and drew the following conclusions: (a) Space-filling curves enhance spatial proximity and locality, significantly improving performance (Table 5d). (b) Position embedding with IWPE boosts detection performance by providing rich 3D positional and proximity information (Table 5a). (c) The ablation on ASSM downsampling rates shows that larger effective receptive fields improve performance, but very large downsampling rates at early stages can degrade performance. We found a balance with a stride of {1,2,4} (Table 5). (d) Bidirectional SSMs with a Hilbert-based group-free sequence significantly improve accuracy, validating our group-free strategy. ASSM enhances effective receptive fields and proximity, while IWPE further boosts Voxel Mamba’s performance by capturing 3D positional information (Table 6c). **[Q5] Multi-frame results.** Please refer to the **shared responses Q2** for details. **[Q6] Description of window sweep.** The window sweep adopts the same regional grouping method as SST, extended to 3D voxels. We use a fixed BEV window size of (12, 12) and implement Region Shift between every two ASSM blocks. **[Q7] Detail of Figure 2.** We apologize for this problem. We will revise Figure 2 to accurately reflect the complete architecture. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. 
Please consider updating your score after reading the rebuttal. Thank you for your help with the NeurIPS! Best, Your AC --- Rebuttal 2: Title: We are open to any further discussion. Comment: Dear reviewer: Thank you so much for reviewing our paper. We hope our explanation could resolve your concern. During this discussion phase, we welcome any further comments or questions regarding our response and main paper. If there requires further clarification, please do not hesitate to bring them up. We will promptly address and resolve your inquiries. We are looking forward to your feedback. --- Rebuttal Comment 2.1: Comment: Thanks for your careful responses and clarification, which have effectively addressed my concerns and misunderstandings. I have read the authors' responses and other reviewers' comments. It is good to see that the unclear experiments' detail and some ablation study analysis have been supplemented. The additional experiment and comparison, which demonstrate the efficiency of Voxel Mamba's structure and SSM, have been provided during the rebuttal. I strongly encourage the author to incorporate the additional experiments, especially the comparison between Voxel Mamba and DSVT input layer, from the rebuttal into the revised manuscript. Given these improvements and clarification, I have an overall favorable opinion and upgrade my rating. --- Reply to Comment 2.1.1: Comment: We sincerely thank this reviewer for finding our responses useful and raising the score. We will incorporate additional experiments and detail explanations in the revised manuscript, as suggested by this reviewer. Thanks again for your constructive comments!
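The linear-vs-quadratic scaling behind the latency table in [Q1] above can be illustrated with a minimal toy sketch (a scalar recurrence; actual Mamba uses input-dependent, vectorized state updates, so this is only a complexity illustration, not the method's code):

```python
import numpy as np

def attention_scores(x):
    # Self-attention materializes an L x L score matrix,
    # so cost and memory grow quadratically with sequence length L.
    return x @ x.T  # x: (L, d) -> (L, L)

def ssm_scan(x, a=0.9):
    # A (scalar) SSM recurrence h_t = a * h_{t-1} + x_t visits each
    # step exactly once, so cost grows linearly with sequence length L.
    h = np.zeros(len(x))
    acc = 0.0
    for t, xt in enumerate(x):
        acc = a * acc + xt
        h[t] = acc
    return h
```

For a 16K-voxel sequence the attention score matrix alone holds 16K x 16K entries, which is consistent with the steep per-layer Transformer latency in the table above while the SSM latency stays nearly flat.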
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for the valuable comments and suggestions. We first address the common concerns, followed by the detailed responses to each reviewer separately. We hope our responses can clarify the reviewers' concerns and make our contributions clearer. **Q1. The latency comparison between group-free and group-based operations** We perform a detailed comparison of the Hilbert Input Layer (HIL) and DSVT Input Layer across different point cloud sizes and densities. To ensure that the distributions of LiDAR sensor data can be well described, we stack frames to gradually increase the point cloud density. The following table shows the runtime of serialization with the same voxel size (0.32m, 0.32m, 0.1875m) on the Waymo open dataset. One can see that compared to window-based grouping operations, our group-free approach is 13.74ms faster in single-frame scenarios. Moreover, the advantage of HIL becomes more pronounced with the increase of point cloud density. | Method | 1-frames | 2-frames | 3-frames | 4-frames | |--------|----------|----------|----------|----------| | X/Y-Axis Grouping in DSVT | 19.41 ms | 32.30 ms | 34.10 ms | 36.78 ms | | HIL in Voxel Mamba | 5.67 ms | 7.74 ms | 8.20 ms | 8.52 ms | **Q2. 
Multi-frame setting performance** | Method | Data | Latency (ms) | L2 mAP/mAPH | Vehicle (L1) | Vehicle (L2) | Pedestrian (L1) | Pedestrian (L2) | Cyclist (L1) | Cyclist (L2) | |--------|------|--------------|-------------|--------------|--------------|-----------------|-----------------|--------------|--------------| | DSVT-4frames | 20% | 158.6 | 76.1 / 74.8 | 80.8 / 80.3 | 72.8 / 72.4 | 85.1 / 82.3 | 78.3 / 75.6 | 79.8 / 78.9 | 77.3 / 76.4 | | DSVT-4frames | 100% | 158.6 | 76.9 / 75.6 | 81.8 / 81.4 | 74.1 / 73.6 | 85.6 / 82.8 | 78.6 / 75.9 | 80.4 / 79.6 | 78.1 / 77.3 | | **Voxel Mamba-4frames** | 20% | 138.7 | **77.2 / 75.9** | 81.9 / 81.5 | 74.0 / 73.6 | 86.7 / 84.0 | 80.0 / 77.4 | 80.0 / 79.0 | 77.5 / 76.6 | The above table compares the performance of DSVT and our Voxel Mamba under the multi-frame setting. One can see that Voxel Mamba significantly outperforms DSVT, indicating its superior ability to leverage temporal and spatial information across frames. Notably, Voxel Mamba trained on 20% of the training data surpasses DSVT trained on 100% of the data. **Q3. Indoor Semantic Segmentation** To further demonstrate the general applicability of Voxel Mamba, we extended our experiments to the indoor 3D semantic segmentation task. We adopt the widely used encoder-decoder structure, with both components consisting of ASSMs. Following previous works, we partition the encoder into five stages, with the number of blocks in each stage being {2, 2, 2, 6, 2}. Downsampling is performed between each stage to reduce the spatial dimension progressively. The feature dimensions of the blocks across these five stages are {32, 64, 128, 256, 512}. The downsampling rates for ASSMs' backward branches in each encoder stage are {1, 1, 1, 2, 2}. Our method is implemented based on the open-source framework Pointcept. The table below shows the semantic segmentation performance on ScanNet. Our method outperforms Swin3D by 0.8% mIoU. Please note that, due to limited time, these are just preliminary results. 
We believe better results can be obtained with refined design and implementation. | Method | Val mIoU (%) | |--------------------------|--------------| | Stratified Transformer | 74.3 | | OctFormer | 75.7 | | Swin3D | 76.3 | | Voxel Mamba | 77.1 | Pdf: /pdf/79afa360eae030c0c40003ae63c368e286b714cc.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a new 3D detection architecture, Voxel Mamba. Voxel Mamba serializes voxels using a Hilbert Input Layer and then applies a forward SSM and a backward SSM at lower resolutions. It achieves state-of-the-art performance on both the Waymo Open Dataset and nuScenes dataset. Strengths: Originality: Addresses the challenge of long sequence lengths in group-free 3D detection models by using the Mamba SSM module to extract features from all voxels, avoiding inefficient sorting and grouping operations found in window-based sparse 3D detectors. Quality: Demonstrates strong 3D detection performance on both Waymo Open Dataset and nuScenes, supported by extensive ablation studies in Table 5, highlighting the effectiveness of Voxel Mamba's design choices. Clarity: The paper is well-written with clear visualizations, notably Figure 4, which illustrates the larger effective receptive field of Voxel Mamba compared to group-based methods. Weaknesses: Please see the questions section. Technical Quality: 3 Clarity: 4 Questions for Authors: In table 5 (d), the Hilbert curve is marginally better than the Z-order curve. In 5(c), adding Hilbert improves mAP and NDS by 0.1. Curious, what is the latency of generating the Hilbert curve and Z-order Curve. What about the latency of generating Random Curves? For random curves, do you use the same random curves for all frames or random sample a curve for each frame? Do you use the same random sampled curves for different resolutions, or use the same random curve for all resolutions in the backward SSM? What would be the expected performance and latency if you flattened the voxels to 1D but used window attention (either shifted or scanned) instead of Mamba? This could help assess the trade-offs between different approaches to handling long sequence lengths. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1] Latency of generating curves.** In our implementation, we record the traversal position in the space-filling curves of all potential voxels offline. The voxels are serialized by simply looking up these traversal positions. Thus, the latency for serializing voxels based on Hilbert and Z-order curves is identical. The detailed generation times of the Hilbert Input Layer are shown in the **shared responses Q1**. **[Q2] Random Curves setting.** In the Hilbert Input Layer (HIL) implementation, we use fixed templates for each resolution in the ASSMs. With the downsampling rate {1,2,4} in ASSMs, Voxel Mamba requires three different scale curve templates. However, the "random" in Table 5(a) means that we do not perform any sorting on the input sequence for SSMs. We observed that the Voxel Feature Extractor (VFE) and the downsampling and upsampling operations tend to preserve the spatial proximity of neighboring voxels in the 3D space in the 1D sequence. This inherent characteristic leads to a favorable outcome for SSMs. To ensure a real random order, we generate three random templates and use them for all frames following the HIL setting. The following shows the results on the Waymo open dataset with 20% training data. These results indicate that disrupting voxel proximity significantly degrades performance, highlighting the effectiveness of our Hilbert Input Layer. | Method | L2 mAPH | Vehicle (L1) | Vehicle (L2) | Pedestrian (L1) | Pedestrian (L2) | Cyclist (L1) | Cyclist (L2) | |--------------|---------|--------------|--------------|-----------------|-----------------|--------------|--------------| | Random order | 51.7 | 56.4 / 55.7 | 49.1 / 48.4 | 67.0 / 57.9 | 58.6 / 50.6 | 60.4 / 58.2 | 58.1 / 56.0 | | Hilbert | 71.6 | 79.0 / 78.5 | 70.7 / 70.2 | 84.0 / 79.1 | 76.7 / 72.0 | 76.5 / 75.4 | 73.7 / 72.7 | **[Q3] Hilbert Input Layer + attention.** We truly appreciate this valuable comment. 
We conducted an experiment by replacing the window grouping in DSVT with the serialization approach in Voxel Mamba. We use the same group size of 48 as DSVT, and only pad the last voxel groups for each sample to enable the parallel computation of Transformers. We employ curve shift as an alternative to the X/Y-axis partition and perform experiments on 20% Waymo training data. The following table compares this variation with Voxel Mamba and DSVT. | Method | L2 mAPH | Vehicle (L1) | Vehicle (L2) | Pedestrian (L1) | Pedestrian (L2) | Cyclist (L1) | Cyclist (L2) | Latency (ms) | |--------------------------|---------|--------------|--------------|-----------------|-----------------|--------------|--------------|--------------| | DSVT | 69.7 | 77.9 / 77.4 | 69.5 / 69.1 | 82.2 / 76.3 | 74.8 / 69.1 | 74.7 / 73.6 | 71.9 / 70.9 | 94 | | Hilbert + self-attention | 68.9 | 77.3 / 76.8 | 69.0 / 68.5 | 82.0 / 75.7 | 74.5 / 68.7 | 73.4 / 72.3 | 70.7 / 69.6 | 85 | | Voxel Mamba | 71.6 | 79.0 / 78.5 | 70.7 / 70.2 | 84.0 / 79.1 | 76.7 / 72.0 | 76.5 / 75.4 | 73.7 / 72.7 | 90 | One can see that this variant with the Hilbert Input Layer shows slightly lower performance than DSVT. We hypothesize that this drop is due to the limited group size in the window attention mechanism. This constraint might limit the full potential of the HIL voxel flattening. While it does not outperform Voxel Mamba in accuracy, it does offer a favorable trade-off between performance and latency. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal. Thank you for your help with the NeurIPS! Best, Your AC --- Rebuttal Comment 1.2: Comment: Thank you for providing additional results. I will keep my rating. 
--- Reply to Comment 1.2.1: Comment: Thank you very much for your dedicated efforts to review our paper and suggestions for our work.
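The offline lookup-table serialization described in [Q1] above (precompute each cell's position along a space-filling curve, then order occupied voxels by a simple table lookup) can be sketched with the classic 2D Hilbert index; the paper works with 3D voxels, so this is only an illustrative reconstruction, not the Hilbert Input Layer's code:

```python
def xy2d(n, x, y):
    """Index of cell (x, y) along the Hilbert curve filling an n x n grid
    (n a power of two). Adjacent indices map to adjacent cells, which is
    the spatial-proximity property the Hilbert Input Layer relies on."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant so the recursion stays consistent
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def serialize(coords, n):
    # Offline: precompute a lookup table of curve positions.
    # Online: sort the occupied voxels by a table lookup.
    table = {(x, y): xy2d(n, x, y) for x in range(n) for y in range(n)}
    return sorted(coords, key=lambda c: table[c])
```

Because every curve position is precomputed, serialization at inference time is just the sort by lookup, which matches the observation above that Hilbert and Z-order serialization have identical latency.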
Selective Explanations
Accept (poster)
Summary: The paper presents a novel framework for explaining black box models through a selective explainer. The selective explainer unifies two different explanation methods: the former relies on amortized explainers, which are easier to compute but provide lower-quality explanations; the latter can provide higher-quality explanations but is more costly to compute. By identifying when the first method is enough to have a good explanation, the selective explainer can trade off (depending on the user’s needs) when relying on amortized explainers or a more complex explanation method. Experiments on benchmark data show the effectiveness of the approach. Strengths: 1. The paper considers an interesting problem, i.e. how to combine cheap-to-obtain explanations and expensive-to-obtain ones; 2. The theoretical analysis seems sound. Weaknesses: w1. The methodology is based on never-discussed assumptions, i.e. a more uncertain/less stable explanation is a low-quality explanation. I think the authors should discuss this assumption further. Moreover, I guess such a measure is potentially correlated with whether we explain correct or incorrect ML predictions, which is something that is never discussed/taken into account and lacks appropriate references. w2. The terminology is misleading: the paper calls expensive-to-obtain explanations high-quality explanations. I think this misleads the reader because there is never an actual comparison between expensive explanations and their actual quality in explaining the underlying ML model. w3. The experimental evaluation is limited to quantitative measures. I think the paper would greatly benefit from including a user study, as humans must evaluate the quality of explanations provided by the novel selective explainer. For example, [Longo et al., 2024] provide a few reasons why human user studies should always be conducted when considering XAI methods. w4. 
The paper does not take into account relevant related work on uncertainty quantification of explanation methods. For instance, see [Zhao et al., 2021] and [Slack et al., 2021]. [Longo et al., 2024] Longo, L., Brcic, M., Cabitza, F., Choi, J., Confalonieri, R., Del Ser, J., Guidotti, R., Hayashi, Y., Herrera, F., Holzinger, A. and Jiang, R., 2024. Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Information Fusion, p.102301. [Slack et al., 2021] Dylan Slack, Anna Hilgard, Sameer Singh, Himabindu Lakkaraju: Reliable Post hoc Explanations: Modeling Uncertainty in Explainability. NeurIPS 2021: 9391-9404. [Zhao et al., 2021] Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn: BayLIME: Bayesian local interpretable model-agnostic explanations. UAI 2021: 887-896. Technical Quality: 2 Clarity: 2 Questions for Authors: Q1 - What is a low-quality explanation under the proposed framework? I think this is the core question underlying the work. However, the authors are not directly addressing this question, as, in my opinion, assessing the low quality of a certain explanation requires defining who is the final user of the explanation. Q2 - As far as I know, an unstable explanation might be due to the underlying uncertainty around a prediction, but also due to the approximations required to compute the explanation. Is there a way to disentangle these two kinds of uncertainty? I think this is something that the method should take into consideration. Q3 - As discussed in W2, what the authors call a high-quality explanation seems to be obtained by a more costly (as it relies on some exact computation) explanation method. Is my understanding correct? If so, how do you ensure that the high-quality explanation is a good explanation of why a certain prediction has been made? A few other details: Line 80: you refer to explanations with initial guesses. 
I would personally prefer to have the definition here rather than in the next section. Lines 152-154: can you clarify and discuss the drawbacks of this choice? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! It was really insightful and will definitely improve the final version of our paper. We will implement your suggested changes, and hope you engage with us during the discussion period. We address your comments below. **Q1) “The terminology is misleading: the paper calls expensive-to-obtain explanations high-quality explanations.”** This is a great point. We do agree with you that expensive-to-obtain explanations is a more precise term and will change “high-quality” to “expensive-to-obtain” everywhere in the revised version of our paper. Throughout the paper we assume that “high-quality” explanations are the ones computed until convergence – for example, SHAP with exponentially many inferences. These computationally expensive SHAP explanations have been extensively tested both quantitatively and via user studies [1, 2] – although SHAP does have limitations as well [3]. We will add the following paragraph on the quality of expensive-to-obtain explanations to the limitations section: “Selective explanations aim to approximate expensive-to-obtain explanation methods such as SHAP with an exponential number of inferences. Therefore, its efficacy is limited to the performance of the explanation method being approximated. The general methodology behind Selective Explanations is flexible, and can be applied to approximate other expensive-to-obtain feature attribution methods. We encourage the reader to use Selective Explanations to approximate the computationally expensive explanation method that best fits their application, noting that, ultimately, the quality of explanation will be limited by the chosen approximand.” **Q2) “The methodology is based on never-discussed assumptions, i.e. a more uncertain/less stable explanation is a low-quality explanation.”** Thank you for raising this important point. 
We address it in the global response to all reviewers due to its importance, and we kindly ask you to refer to the global response (Q2). TL;DR: Figure 3 illustrates the relationship between uncertainty and explanation quality, showing that explanations with the lowest uncertainty also have the lowest MSE (and vice-versa), where MSE is with respect to "higher-quality explanations" (SHAP computed until convergence). We agree that this correlation would be better supported by more direct evidence. To address this, we will add a new Table 1 (see attached rebuttal PDF) to the main paper, which presents Pearson and Spearman correlations between our uncertainty measures and MSE. This table provides evidence of strong positive correlations and monotonic relationships between MSE and uncertainty estimates, particularly for language datasets. The monotonic relationship underlies our approach's effectiveness in identifying lower-quality explanations by using uncertainty estimates. For more details, please refer to the global answer (Q2). **Q3) “The experimental evaluation is limited to quantitative measures.”** The main reason for the lack of user evaluation is that our main goal is to approximate widely-studied yet expensive-to-obtain explanation methods such as LIME (with a large number of model inferences) and SHAP (computed until convergence) [1, 2]. Given their established results, we follow the methodology in recent papers focused on accelerating attribution methods [5 – 8] and report both MSE and Spearman’s correlation to these expensive-to-obtain explanations. When MSE is low and Spearman correlation is high with converged SHAP explanations (which our Selective Explanation method achieves), the performance of our method is essentially the same as SHAP. **Q4) “The paper does not take into account relevant work on uncertainty quantification of explanation methods. 
For instance, see [Zhao et al, 2021] and [Slack et al., 2021].”** Thanks for suggesting these papers. We'll discuss them in our related work section. Our approach differs in two key ways from the mentioned papers: 1. These papers study uncertainty for SHAP and LIME variants. We calculate uncertainty for the amortized explainer - a model that takes data points as input and directly outputs explanations with only one inference. 2. Main contribution: Our focus isn't on the uncertainty measures themselves; they only serve the intermediate purpose of finding amortized explanations that severely differ from expensive-to-obtain explanations. **Q5) “As far as I know, an unstable explanation might be due to the underlying uncertainty around a prediction, but also due to the approximations required to compute the explanation. Is there a way to disentangle these two kinds of uncertainty?”** We also believe that prediction uncertainty and approximations are sources of unstable explanations. However, we are computing the uncertainty for the amortized explainer, not the black-box model being explained ($h$ in our notation). Our method prioritizes practical improvement of the approximation of expensive-to-obtain explanations over the disentanglement of uncertainty sources – which we believe is an important direction of future work. [1] Lundberg et al. A Unified Approach to Interpreting Model Predictions. NIPS 2017. [2] Antwarg et al. Explaining Anomalies Detected by Autoencoders Using SHAP. Expert Syst. Appl. Vol 186. [3] Slack et al. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. AIES 2020. [4] Ribeiro et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016. [5] Jethani et al. FastSHAP: Real-Time Shapley Value Estimation. ICLR 2022. [6] Schwab and Karlen. CXPlain: Causal Explanations for Model Interpretation under Uncertainty. NeurIPS 2019. [7] Yang et al. 
Efficient Shapley Values Estimation by Amortization for Text Classification. ACL 2023. [8] Covert et al. Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution. arXiv:2401.15866. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for answering my questions. After reading all the reviews and rebuttals, I still have concerns and I do not change my opinion. --- Reply to Comment 1.1.1: Comment: Thank you for your response. Could you please share your remaining questions and concerns? We still believe we answered all your concerns in the rebuttal and would like to know why, in your opinion, they were not addressed. Thank you.
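The rank-correlation check described in Q2 above (Spearman correlation between per-example uncertainty and per-example MSE to converged explanations) can be sketched in a few lines; this numpy-only version assumes no tied values and is purely illustrative of the proposed Table 1's computation:

```python
import numpy as np

def spearman(u, v):
    """Spearman rank correlation (no tie handling): the Pearson
    correlation of the two rank vectors."""
    ru = np.argsort(np.argsort(u)).astype(float)
    rv = np.argsort(np.argsort(v)).astype(float)
    ru -= ru.mean()
    rv -= rv.mean()
    return float((ru @ rv) / np.sqrt((ru @ ru) * (rv @ rv)))
```

Applying this to a vector of per-example uncertainty scores and a vector of per-example MSEs to converged SHAP explanations quantifies the monotonic relationship the rebuttal relies on: a value near 1 means high-uncertainty examples are exactly the ones with low-quality amortized explanations.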
Summary: This paper introduces a novel method called selective explanations for improving the efficiency and accuracy of feature attribution methods for black-box machine learning models. The key contributions are: - A zero-cost proxy to evaluate the adversarial robustness of deep neural networks without training. - A selective explanation method that detects when amortized explainers generate low-quality explanations and improves them using explanations with initial guess. - An optimization approach for combining amortized and Monte Carlo explanations to improve explanation quality. - Comprehensive evaluation on both tabular and language model datasets, demonstrating improved explanation quality with reduced computational cost. The authors argue that their method addresses the challenges of computational expense in existing feature attribution methods while maintaining or improving explanation quality. Strengths: - Novel approach: The selective explanation method offers a new paradigm for balancing efficiency and accuracy in feature attribution. - Theoretical foundation: The paper provides rigorous mathematical formulations and proofs for key components. - Comprehensive evaluation: The method is tested on multiple datasets and model types, with comparisons to various baselines. - Practical impact: The approach significantly reduces computational cost while maintaining or improving explanation quality. - Flexibility: The method can be applied to different types of feature attribution techniques and model architectures. Weaknesses: - Limited exploration of very large models: While the method is tested on language models, it's not clear how well it scales to extremely large models (e.g., GPT-3 scale). - Dependence on amortized explainers: The method's effectiveness relies on the quality of the underlying amortized explainer. 
- Computational overhead: While more efficient than full Monte Carlo methods, the selective approach still requires additional computation compared to pure amortized methods. - Sensitivity to hyperparameters: The impact of various hyperparameters (e.g., uncertainty thresholds, combination function parameters) is not thoroughly explored. Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the performance of selective explanations scale with extremely large models (e.g., models with billions of parameters)? - Have you explored using more advanced uncertainty estimation techniques, such as those based on Bayesian neural networks? - How sensitive is the method to the choice of amortized explainer? How might it perform with different types of amortized explainers? - Could the selective explanation approach be extended to other types of explanation methods beyond feature attribution (e.g., example-based explanations)? - How might the method be adapted to handle streaming data or online learning scenarios where the underlying model is continuously updating? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss limitations of their work at the end of Section 6. They acknowledge that the method is currently focused on Shapley values and has been primarily tested on specific types of models and datasets. They also note potential challenges in applying the method to image classifiers. These limitations are reasonably addressed, though a more detailed discussion of potential failure modes or edge cases could have been beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate that you found our work novel. We also appreciate that you found our evaluation comprehensive and our method flexible. We answer your questions and comments next. **Q1) “How does the performance of selective explanations scale with extremely large models (e.g., models with billions of parameters)?”** Our results indicate that Selective explanations scale well to very large models. We observed a better performance improvement when using Selective Explanation for larger models - the language models for text classification (Figures 3 and Figure 4 (c) and (d)). This is because we use the higher quality embeddings of these larger models to train the amortized explainer and the uncertainty metrics. **Q2) “Have you explored using more advanced uncertainty estimation techniques, such as those based on Bayesian neural networks?”** This is an excellent suggestion. We agree with you that other techniques for uncertainty quantification such as those based on Bayesian neural networks could potentially improve uncertainty estimation by providing more calibrated uncertainty estimates. In fact, our Deep uncertainty metric (Eq. 4) is directly inspired by this approach. However, Bayesian methods induce a higher cost that may be prohibitive in some applications. For example using ensembles of models as in deep ensembles [1] requires retraining the same model pipeline multiple times – a limitation we also observe in our proposed Deep uncertainty metric (Eq. 4). Moreover, our ultimate goal is to select the examples that receive higher (lower) quality explanations relative to a computationally-expensive-to-obtain method such as SHAP. In Learned uncertainty we “learn” from data which points will receive higher (lower) quality explanations instead of using a Bayesian approach. 
This leads to a method that is optimized to achieve our goal because it is trained to predict the examples with higher MSE relative to the high-quality explanations and decrease the computational overhead by only training a model once. [1] Lakshminarayanan et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. NIPS 2017. **Q3) “How sensitive is the method to the choice of amortized explainer? How might it perform with different types of amortized explainers?”** The main objective of selective explanations is to improve the quality of the explanations provided by both amortized and Monte Carlo explainers by combining both methods when the amortized explainer fails. Therefore, in the case that the amortized explainer has poor performance, selective explanations would require more usage of the Monte Carlo explainer to achieve favorable performance (i.e., small MSE from high-quality explanations), thus increasing the computational cost per explanation. Hence, the choice of amortized explainer heavily impacts the quality vs. computational cost of selective explanations. **Q4) “Could the selective explanation approach be extended to other types of explanation methods beyond feature attribution (e.g., example-based explanations)?”** The core idea of selective explanations is identifying low-quality predicted explanations and using a more computationally expensive method to improve their quality. Therefore, selective explanations could be extended to other explanation types. For example, with example-based explanations, an uncertainty metric could be developed to identify cases where selected examples are likely to be unrepresentative, and thus a different explanation method should be used. 
**Q5) “How might the method be adapted to handle streaming data or online learning scenarios where the underlying model is continuously updating?”** To handle streaming data or online learning, the components of the selective explanation method would need to be updated incrementally. Specifically, the amortized explainer, the combination function (Eq. 12), and the uncertainty measures (Eq. 4 and 5) would need to be continuously updated to reflect the changes in the model being explained. **Q6) “Computational overhead: While more efficient than full Monte Carlo methods, the selective approach still requires additional computation compared to pure amortized methods.”** The selective explanation method combines amortized and Monte Carlo explanations, introducing an additional computational cost. However, we believe this is a feature instead of a flaw, since it allows flexibility in choosing the fraction of samples that receives extra computations. Figure 5 shows that selective explanations significantly improve the performance of the amortized explainer, especially for the samples with worst-performing explanations. For example, Figure 5 (a) demonstrates that providing explanations with an initial guess to 50% of the data improves the Spearman’s correlation of the worst 10% amortized explanations from 0 (no correlation) to almost 0.6 (strong correlation). --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I appreciate the detailed responses provided by the authors, and my concerns are well addressed. I would like to keep the original accept rating.
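The selection mechanism discussed in this thread (keep the cheap amortized explanation when uncertainty is low, otherwise spend extra compute and combine with a Monte Carlo estimate) can be sketched as follows; the threshold `tau`, the weight `lam`, and all function names are illustrative placeholders, and the paper's combination function (Eq. 12) is more elaborate than this plain interpolation:

```python
import numpy as np

def selective_explanation(x, amortized, monte_carlo, uncertainty,
                          tau=0.5, lam=0.5):
    """Return the cheap amortized explanation when its estimated
    uncertainty is below tau; otherwise compute a Monte Carlo estimate
    and interpolate the two attribution vectors."""
    e_amor = np.asarray(amortized(x))
    if uncertainty(x) <= tau:
        return e_amor
    e_mc = np.asarray(monte_carlo(x))
    return lam * e_amor + (1 - lam) * e_mc
```

The fraction of inputs with `uncertainty(x) > tau` is exactly the knob that trades explanation quality against the per-explanation computational budget discussed in Q6.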
Summary: This paper proposes a feature attribution method that detects when amortized explainers generate low-quality explanations and improves them via linear interpolation between the amortized explanations and expensive high-quality explanations. To detect the low-quality explanations of the amortized explainers, the proposed method measures the uncertainty of the explanations. Then, based on a selection function defined by the uncertainty, it selects either the explanations from the amortized explainers or the linear interpolation of the explanations from the amortized and Monte Carlo explainers. The experiments were conducted on tabular and text datasets, and the results showed that the proposed method achieves better explanation accuracy than using either the amortized explainers or the Monte Carlo explainers alone. Strengths: - This work is the first to propose a feature attribution method in the "selective" setting, which can improve the quality of the explanations with Monte Carlo explanations when amortized explainers generate low-quality explanations. Since the proposed method is an extension of selective classification and regression, which are successful and well-studied approaches, it is easy to imagine it working well. - The formalization of the proposed method is almost appropriate (except for my concern about the learned uncertainty (5)), and it is described clearly. Weaknesses: - In general, the quality of explanations involves a trade-off between computational efficiency and accuracy. Although the experimental results show that the proposed method can improve the accuracy of explanations, its computational efficiency is not investigated. - The generated explanations are quantitatively evaluated. However, due to the lack of qualitative evaluation, it is unclear how good the explanations generated by the proposed method actually are from a user perspective. 
The following is a minor point: - If (5) is the MSE, it should use $\| \cdot \|$ instead of $| \cdot |$. Technical Quality: 3 Clarity: 3 Questions for Authors: - **Learned uncertainty:** The loss $\ell(\mathrm{Amor}(x;y), \mathrm{MC}^n(x,y))$ in (5) is the same as the objective of the amortized explainer $\mathrm{Amor}(x;y)$ in (3). Therefore, if the amortized explainer is fitted closely enough to the training data, the loss $\ell$ is consistently near zero, resulting in the learned uncertainty function $s$ being consistently near zero, too. This learned uncertainty function does not seem to work well at inference time. What is the justification for (5)? - What does 'Random' in Figure 3 mean? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. Your feedback will positively impact the final version of our paper! We address your questions below. **Q1) ”Although the experimental results show that the proposed method can improve the accuracy of explanations, those on computational efficiency are not investigated.”** Thank you very much for this comment! Please kindly refer to the global answer to all reviewers above. TL;DR: the trade-off between computational complexity (given by number of model inferences) and quality of explanations (MSE from higher-quality explanations) is captured in Fig. 4 and 10 of the paper. Particularly, Fig. 10 (reproduced in the attached PDF) shows that our method performs very close to an oracle that "knows" which explainer to use for a given MSE and constraint on number of inferences. We will make this clearer by bringing Fig. 10 and the associated discussion to the main body of the paper instead of the appendix. Please, check the global answer (Q1) for more details. **Q2) “The generated explanations are quantitatively evaluated. However, due to the lack of qualitative evaluation, it is unclear how good the explanations generated by the proposed method are actually from a user perspective.”** Thank you for this great point. The main reason for the lack of user evaluation is that we are approximating expensive-to-obtain yet established explanation methods that were extensively tested and evaluated by humans such as LIME [1] and SHAP [2]. Given these established results, we follow the methodology in most papers that aim to approximate feature attribution methods [3, 4, 5, 6] and report both MSE and Spearman’s Correlation to SHAP explanations computed until convergence. For low MSE and high Spearman Correlation with SHAP (which our Selective Explanation method achieves), the performance of our method is essentially the same as SHAP. 
In other words, our goal is to approximate expensive-to-obtain SHAP explanations as closely as possible, while lowering the required number of inferences and, thus, the computational cost. Nevertheless, we acknowledge this point, and recognize that – since we aim to approximate methods such as SHAP – our method is "high quality" insofar as SHAP is high-quality, and will note this caveat in the paper. [1] Ribeiro et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016. [2] Lundberg et al. A Unified Approach to Interpreting Model Predictions. NIPS 2017. [3] Jethani et al. FastSHAP: Real-Time Shapley Value Estimation. ICLR 2022. [4] Schwab and Karlen. CXPlain: Causal Explanations for Model Interpretation under Uncertainty. NeurIPS 2019. [5] Yang et al. Efficient Shapley Values Estimation by Amortization for Text Classification. ACL 2023. [6] Covert et al. Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution. arXiv:2401.15866. **Q3) “If (5) is MSE, should use ||⋅|| instead of |⋅|.”** Thank you for this careful point. Notice that $\ell(\text{AMOR}(x;y), \text{MC}(x; y))$ is a scalar. Hence, $\|\cdot\|$ and $|\cdot|$ are equivalent. **Q4) “Learned uncertainty: The loss in (5) is the same as the objective of the amortized explainer in (3). Therefore, if the amortized explainer is fitted enough to the training data, the loss is consistently near zero, resulting in the learned uncertainty functions being consistently near zero, too. This learned uncertainty function does not seem to work well at an inference phase. What is the justification for (5)?”** Thank you for this great point. In the case that there is overfitting, the selection function might not be sufficiently accurate (nor might the amortized explainer be), as you mentioned. In such cases, we advise users to reserve a validation dataset to fit the uncertainty function. 
We highlight that we did not observe overfitting in any of our experiments where we train the uncertainty measure using the training dataset, and train the amortized explainers until convergence on the same training dataset, and evaluate their performance on a test set. However, this does not preclude the risk of overfitting in other settings. We will add your comment to our limitations section and advise on the use of a validation set instead of the same training dataset. **Q5) "What does 'Random' in Figure 3 mean?"** "Random" means that we select explanations uniformly at random instead of using the uncertainty metrics. This leads to an average MSE that is near the MSE of the amortized explainer and is independent of the coverage. We will add this description in line 255 of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Some of my concerns have been allayed. I will maintain a positive score.
Summary: The paper proposes a method, termed "Selective Explanations," aimed at improving the quality of explanations generated by amortized explainers in machine learning. The authors introduce a technique that detects low-quality explanations and employs a combination of amortized and Monte Carlo methods to enhance them. The approach leverages an "explanations with initial guess" technique, allowing a trade-off between computation speed and explanation quality. The proposed method is validated across different datasets, showing that it can improve the poorest quality explanations typically provided by amortized explainers. Strengths: 1. The concept of using a selective approach to manage computational resources while improving explanation quality is compelling and timely. 2. The proposed method is intuitive and easy to understand. 3. The paper is well written. Weaknesses: 1. The experimental design does not strongly support the major claims on the reduction of computational cost. Evaluations on the tradeoff between explanation quality and computational cost would be helpful. 2. The relationship between uncertainty and explanation quality is unclear, it would be better to have empirical or theoretical proof on their correlations. In addition, the proposed uncertainty measurement also introduces computational overhead when “run the training pipeline for the amortized explainer described in (3) k times” (line 139). Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Given the current state of the submission, the reasons to reject slightly outweigh the reasons to accept. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Your feedback will positively impact the updated version of our paper and we will include all of your comments. We also appreciate that you found our manuscript well-written, our method intuitive and easy to understand, and that our idea of Selective Explanations is compelling and timely. We address your comments below. We hope you can engage with us during the discussion period. **Q1) “The experimental design does not strongly support the major claims on the reduction of computational cost. Evaluations on the tradeoff between explanation quality and computational cost would be helpful.”** We appreciate this comment and we kindly ask that you refer to Q1 in the global answer to all reviewers above. The TL;DR is: the trade-off between computational complexity (given by number of model inferences) and quality of explanations (MSE from high-quality explanations) is captured in Fig. 4 and 10 of the paper. In particular, Fig. 10 shows that our method significantly reduces the number of inferences required to achieve a given explanation quality in comparison with Monte Carlo explanations – specifically SVS. Remarkably, Fig. 10 shows that Selective Explanations performs very close to an oracle that "knows" which explainer to use for a given MSE and constraint on number of inferences. We will make this clearer by bringing Fig. 10 and the associated discussion to the main body of the paper instead of the appendix. Please, check the global answer (Q1) for more details. **Q2) “The relationship between uncertainty and explanation quality is unclear, it would be better to have empirical or theoretical proof on their correlations.”** Again, thank you for this thoughtful comment. Here, again, we kindly ask for you to refer to the global answer for all reviewers above and the attached pdf, which empirically shows a high correlation between uncertainty and explanation quality. 
TL;DR: Figure 3 shows the relationship between uncertainty and explanation quality by demonstrating that the amortized explanations with the smallest uncertainty also have the smallest MSE from high-quality explanations, and vice versa. However, we agree with you that more direct evidence is needed. To address this, we computed this correlation in a new Table 1 (attached in the PDF) and will add it to the main paper. This table presents positive Pearson and Spearman’s correlations between our uncertainty measures and MSE, and a strong monotonic relationship between the two quantities, especially for the language datasets. This evidence reinforces the effectiveness of our approach in detecting lower-quality explanations in the main tasks of interest. Again, please check the global answer (Q2) for more details. **Q3) “In addition, the proposed uncertainty measurement also introduces computational overhead when “run the training pipeline for the amortized explainer described in (3) k times” (line 139).”** We agree with you: the Deep Uncertainty measure introduces a computational overhead. Please note that this computational overhead is exactly the reason we proposed a second method, referred to as "Learned Uncertainty," which only needs to be trained once. Our experiments suggest that the Learned Uncertainty provides better performance for selective explanations of language models – our main use case of interest. We will make this distinction clearer in the paper by adding the following paragraph after line 148: “To address the computational overhead of Deep Uncertainty and directly target low-quality explanations, we introduce Learned Uncertainty. This alternative method requires only one training run, drastically reducing computational costs. 
Learned Uncertainty is optimized to predict discrepancies between amortized and high-quality explanations.” We proposed Deep Uncertainty because this method is directly inspired by Deep Ensembles [1] used for distribution-free uncertainty quantification. It achieves favorable performance (see Figs. 3, 4, 5, and 6), but it indeed comes with a computational overhead. We introduce it because of its connection with Deep Ensembles traditionally used for uncertainty quantification. [1] Lakshminarayanan et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. NIPS 2017. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for answering my questions. After reading all the reviews and rebuttals, I still have concerns. The underlying assumption of this paper that "expensive-to-obtain" or computationally demanding explanations are "high-quality" lacks both empirical and theoretical validation. More discussion on this underlying assumption is critical. Furthermore, the definition of a "high-quality" explanation is not clearly defined. For instance, such explanations could be interpreted as more faithful to the model's decision-making process, more aligned with human preference, or more aligned with ground truth explanations. --- Reply to Comment 1.1.1: Comment: As we wrote to Reviewer xNyJ, we do agree that "expensive-to-obtain" explanations is a more precise term and will change "high-quality" to "expensive-to-obtain" everywhere in the revised paper. In our experiments, we use SHAP [1] with exponentially many computations as the "expensive-to-obtain" explanation. SHAP with exponentially many computations was already validated by previous literature (i) theoretically, (ii) empirically, and (iii) by user studies in the paper that proposed such explanations [1] and also in follow-up work [6, 7, 8] – its limitations have also been studied [9]. Although these explanations have many desired properties, they are computationally expensive. 
For this reason, a new stream of work that tries to approximate these explanations emerged [2, 3, 4, 5]. Please note that these papers do not argue on the quality of SHAP, but they do argue on how close their approximation is to converged explanations – as we also do. We hope to have addressed your concerns and are available to answer any further questions you may have. [1] Lundberg et al. A Unified Approach to Interpreting Model Predictions. NIPS 2017. [2] Jethani et al. FastSHAP: Real-Time Shapley Value Estimation. ICLR 2022. [3] Yang et al. Efficient Shapley Values Estimation by Amortization for Text Classification. ACL 2023. [4] Covert et al. Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution. arXiv:2401.15866. [5] Covert et al. Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression. PMLR 2021. [6] Yingchao. Explainable AI methods for credit card fraud detection: Evaluation of LIME and SHAP through a User Study. Dissertation 2021. [7] Salih et al. A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME. Advanced Intelligent Systems 2024. [8] Antwarg et al. Explaining Anomalies Detected by Autoencoders Using SHAP. Expert Syst. Appl. Vol 186. [9] Slack et al. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. AIES 2020.
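The "Learned Uncertainty" idea discussed earlier in this thread — fitting a single model, once, to predict the amortized explainer's discrepancy from higher-quality explanations — can be sketched with a plain-NumPy least-squares regressor. This is a synthetic stand-in, not the paper's architecture: the data, features, and the quadratic feature map are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: inputs X, and per-example "losses" of a hypothetical amortized
# explainer relative to Monte Carlo targets (playing the role of Eq. 5's loss).
X = rng.standard_normal((500, 8))
true_err = X[:, 0] ** 2 + 0.1 * rng.standard_normal(500)  # harder inputs -> larger loss

# Learned uncertainty: regress the loss on simple (linear + quadratic)
# features, trained once -- unlike the k-model deep-ensemble alternative.
feats = np.hstack([X, X ** 2])
w, *_ = np.linalg.lstsq(feats, true_err, rcond=None)
pred = feats @ w

# The learned score should rank examples by their actual error.
order_corr = np.corrcoef(pred, true_err)[0, 1]
```

On this synthetic data the learned score correlates strongly with the true per-example loss, which is all the selection function needs in order to route the worst explanations to the expensive explainer.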
Rebuttal 1: Rebuttal: ### Global Rebuttal Thank you very much to the reviewers for their effort! We are pleased that you found the paper well-written, the problem setting interesting, and our theoretical analysis sound (all reviewers), recognized that we are the first to propose "selective" feature attribution (reviewers kYjk and tCbB), and that our method is intuitive and easy to understand (reviewer XcVA). Next, we answer two questions that were common across reviewers regarding computational cost and the relationship between MSE and uncertainty metrics. **Q1) What is the tradeoff between explanation quality and computational cost?** We thank reviewers XcVA and kYJk for bringing this up. The trade-off between computational cost (given by number of inferences) and quality of explanations (MSE w.r.t. a converged SHAP explanation) is captured in Fig. 4 and 10 of the paper. In particular, Fig. 10 (replicated in the rebuttal PDF) shows that the Selective Explanation method's computational cost is close to an oracle that "knows" which explainer to use for a given MSE. We will bring Fig. 10 and the associated discussion into the main body of the paper. We give details next. 1. Fig. 10 compares Selective Explanations with Monte Carlo methods in terms of computational cost (x-axis: number of model inferences) and explanation quality (y-axis: MSE w.r.t. high-quality explanations). The number of inferences serves as a proxy for computational cost, as GPU/CPU usage time scales linearly with it. Our results demonstrate that Selective Explanations achieve lower MSE than Monte Carlo methods for the same computational cost. In our Selective Explanation method, we predict which samples should be routed to a more computationally expensive explanation method. We compare this method against an "oracle" who knows a priori how to optimally route samples in terms of MSE. 
We simulate this oracle by pre-computing SVS explanations with parameters 12, 25, and 50, and selecting the one with the smallest MSE from the target SHAP explanation. The oracle is simulated for comparison purposes only. Remarkably, Fig. 10 shows that selective explanations closely approximate the Oracle curve, indicating that, on these benchmarks, our method has a near-oracle trade-off between the number of inferences and MSE. 2. Fig. 4 also compares computational cost (percentile with recourse ($1 - \alpha$) in x-axis) and explanation quality (MSE from converged SHAP explanation, y-axis). Here, percentile with recourse is a proxy for computational cost because it controls the fraction of the dataset that will receive more computationally-expensive explanations, i.e., Monte Carlo or explanations with initial guess. These explanation methods are more costly than amortized methods because they require a larger number of inferences while the amortized explainer only requires one inference. Fig. 4 demonstrates that Selective Explanations achieve lower MSE than Monte Carlo methods using the same number of inferences. In Fig. 4 (a) and (c), Selective Explanations can match the performance of the more expensive method while using only 50\% of the computational resources. Fig. 4 (b) and (d) show an even stronger result: Selective Explanations improve upon amortized explanations even when the more computationally expensive method has a higher MSE than the amortized method. This improvement is particularly significant for the lower-quality amortized explanations, as illustrated in Fig. 5. **Q2) What is the relationship between our uncertainty metrics and explanation quality?** We thank reviewers XcVA and xNyJ for this question. Figure 3 shows the relationship between our proposed uncertainty metrics (deep and learned uncertainty) and explanation quality (measured by MSE). 
MSE serves as a proxy for explanation quality as it measures the difference from SHAP explanations computed until convergence. Additionally, we included a new Table 1 (see attached PDF) showing Pearson’s (linear) and Spearman’s (monotonic) correlation measures between the uncertainty metrics and MSE, which will be added to the paper. These results show that our uncertainty metrics are consistently positively correlated with explanation quality across different tasks. 1. In Fig. 3, the x-axis represents coverage ($\alpha$), the fraction of examples with the lowest predicted uncertainty, while the y-axis shows the average MSE for these examples. For instance, with $\alpha = 25$\%, the y-axis shows the average MSE for the 25\% of examples with the lowest predicted uncertainty. Fig. 3 indicates that examples with higher uncertainty metrics also have higher MSE because as coverage increases, the MSE of the selected explanations also increases. We compare our method to an "Oracle" that has access to the MSE of the amortized explanations and, when the coverage decreases, removes exactly those explanations with the highest MSE. Our approach closely matches the Oracle for both language tasks, Yelp Review and Toxigen datasets, demonstrating that our uncertainty measures are nearly optimal in these cases. 2. Table 1 (in the attached PDF) shows both Pearson and Spearman’s correlation coefficients between our uncertainty measures (Deep and Learned) and the explanation quality (measured by MSE) across the datasets. The table shows that our uncertainty measures achieve positive correlations across all datasets and uncertainty methods, indicating that as uncertainty increases, so does the MSE (lower explanation quality). We also observe strong correlations in many cases, particularly for the Learned uncertainty method on the Toxigen dataset (Pearson: 0.89, Spearman: 0.93). 
Moreover, Spearman’s correlation is consistently higher than Pearson’s, suggesting a strong monotonic relationship between uncertainty and explanation quality – which is our main interest, since we aim to detect and rank lower-quality explanations. Pdf: /pdf/0ff4403e88b82d5162c01ad034f05392d4247971.pdf
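The Pearson-vs.-Spearman pattern described above can be reproduced in miniature with plain NumPy. The data here is synthetic (not the paper's actual uncertainty scores): the point is only that a monotone but nonlinear uncertainty-MSE relation yields a Spearman correlation higher than the Pearson one, matching the rebuttal's observation.

```python
import numpy as np

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    # Rank-transform, then Pearson (no tie correction; fine for distinct values).
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(a), rank(b))

# Synthetic check: MSE is a monotone but strongly nonlinear function of the
# uncertainty score, so Spearman is 1.0 while Pearson is lower.
rng = np.random.default_rng(0)
uncertainty = rng.uniform(0, 1, 200)
mse = np.exp(5 * uncertainty)

rho_p = pearson(uncertainty, mse)
rho_s = spearman(uncertainty, mse)
```

Since ranking lower-quality explanations is what drives the selection function, the monotonic (Spearman) relationship is the one that matters for routing.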
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Exponential Quantum Communication Advantage in Distributed Inference and Learning
Accept (poster)
Summary: This paper studies distributed learning and inference based on graph neural networks but with quantum communication between the distributed agents. The authors show that quantum networks reduce the communication complexity of inference and gradient computation in distributed models. This reduction in communication complexity is shown to be exponential. The main idea of the paper is to have the data encoded into qubits using amplitude encoding. With that, one gets an exponential reduction in space and hence in communication complexity. As for gradient estimation, the problem is shown to be reducible to Aaronson's shadow tomography. Strengths: The paper proposes an exponential advantage in communication complexity for the task of distributed inference and gradient estimation. This is a nice set of advantages that can be employed in practical problems. Also, the lower bounds are nice contributions. Weaknesses: The result of the paper is not surprising given the known results in quantum communication complexity. Moreover, the results seem to be directly derived from known papers on shadow tomography and threshold search. Technical Quality: 4 Clarity: 4 Questions for Authors: Can you elaborate on the run-time of the proposed method? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Nothing significant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Can you elaborate on the run-time of the proposed method? The run-time of gradient estimation based on shadow tomography is stated in Theorem 2. Asymptotically, this will dominate the cost of other parts of the algorithm, such as state preparation. The polynomial scaling follows from the complexity of solving SDPs, per the paper of Brandao et al. (reference 22 in the submission); this algorithm is used as part of the online state learning procedure of shadow tomography. The scaling can be improved in cases where the unitaries have low-rank structure. The coordinate descent based algorithm we consider in Algorithm 1 will enjoy better scaling in certain parameter regimes (namely, when the vector of coefficients of the unitary parameters has small norm). --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I will keep my score.
Summary: The paper investigates the exponential quantum advantage in communication complexity for the tasks of training and inference in multiple types of compositional distributed quantum circuit models. The authors also study the quantum communication advantage in a specific class of shallow graph neural networks, and then they conduct experiments showing that it can achieve performance comparable to certain existing classical models and benchmarks while maintaining the exponential communication advantage. At the end, the authors explore the expressivity of the compositional model. Strengths: I think this paper is one of the good examples of applying results and tools from quantum complexity theory to problems in distributed computing. Weaknesses: (1) Lemma 6 addresses the quantum communication complexity of the quadratic graph network inference problem, for which I think a lower bound would make more sense. However (I am not sure if I miss anything or not), instead of giving a lower bound, Lemma 6 provides an upper bound, which I find confusing. (2) The paper assumes that the input is amplitude encoded. It is well known that preparing such quantum states is generally not efficient. This is an important factor to consider regarding the feasibility of the proposed systems, and thus it is my main concern with the paper. However, I don't see much formal treatment of this issue in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: My questions are those 2 points in Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors give a fair discussion of possible limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: (1) We consider the problem of inference with graph networks in order to demonstrate that the communication advantage holds for model classes that can be trained on classical data. Note that in order to show the advantage, we must show both that any classical algorithm will require lots of communication (Lemma 5) and that quantum algorithms exist that require little communication. The latter is the content of Lemma 6, hence it takes the form of an upper bound (as opposed to the classical lower bound). (2) Indeed, the paper requires loading the data onto a quantum computer. Note that the time complexity is only polynomial in the Hilbert space dimension, as mentioned in Theorem 2. While this is typically considered inefficient for quantum algorithms (since it is exponential in the number of qubits), the cost of classical inference and training is in fact polynomial in the data and model size, so this scaling is only polynomially worse than the classical scaling. The coordinate descent algorithm we consider in Algorithm 1 is more efficient than this. In general, the cost of loading an input state will be linear in N. Note that while such scaling can eliminate any advantages in time complexity, which is a major issue for many proposals of quantum advantages related to machine learning problems, it does not affect the communication complexity, in terms of which an exponential advantage is still possible. --- Rebuttal Comment 1.1: Comment: Thanks for the response. As for comments on W2, do you mean in general performing amplitude encoding of input data takes time linear in N?
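The amplitude encoding at issue in this exchange can be made concrete with a small NumPy sketch. This is illustrative only and assumes a real-valued input padded to a power-of-two dimension: the communication cost is the ceil(log2 N) qubits carrying the state, while classically writing down (or preparing) the amplitudes still takes time linear in N, which is exactly the distinction the rebuttal draws.

```python
import numpy as np

def amplitude_encode(x):
    """Sketch of amplitude encoding: an N-dim real vector becomes the
    amplitude vector of a ceil(log2 N)-qubit state (zero-padded to the
    next power of two and L2-normalized)."""
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    dim = 2 ** n_qubits
    state = np.zeros(dim)
    state[: len(x)] = x
    state = state / np.linalg.norm(state)  # valid quantum state: unit norm
    return state, n_qubits

state, n = amplitude_encode(np.arange(1.0, 9.0))  # N = 8 -> 3 qubits
```

So an N = 8 input is carried by 3 qubits; doubling N adds only one qubit to each message, which is the source of the exponential communication saving.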
Summary: This paper studies the quantum advantage in communication for distributed learning. It places a common quantum neural network model in the (two-party) communication scenario, where the data x is assigned to Alice, and the parameterized unitary operators of each layer are alternately given to Alice and Bob in order. The paper shows an exponential quantum advantage in communication for the problems of estimating the loss function (some function of the data after the action of the unitary operators) and estimating the gradients of the loss function with respect to each parameter of the unitary operators. The quantum upper bound for estimating the loss function is by encoding the data x (a size-N vector) into an O(log N)-qubit state. The quantum upper bound for estimating the gradients is by a reduction to a shadow tomography problem. The classical lower bound is by a reduction from a problem proposed by Raz, plus a round-based lower bound by a reduction from pointer-chasing. For comparison, the paper shows that there is no exponential quantum advantage in communication for linear classification, where Alice and Bob are given data x and y as vectors to determine the sign of the inner product of x and y. The classical upper bound is by applying Johnson-Lindenstrauss and the quantum lower bound is by a reduction from gap-Hamming. The paper also shows an exponential quantum advantage in communication for quadratic graph network inference, and provides experimental results for the above scenarios. Strengths: - A new research direction on quantum advantage in communication complexity for learning problems, providing a series of results with solid theoretical proofs and experimental data to support them. Weaknesses: - For the technical part, the upper bounds are straightforward, and the lower bound reductions look standard. 
- These learning tasks (estimating the loss function in quantum neural networks, linear classification, and quadratic graph network inference) seem to have weak correlations, making the paper appear somewhat like a mere compilation of results. Technical Quality: 3 Clarity: 3 Questions for Authors: - Minor comment: the upper bound in Lemma 1 should be O(L log N) instead of O(log N)? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - The claim of QUANTUM advantage on QUANTUM neural networks seems a bit like cheating, although the model and problem are described using classical information. - The two-party version of quantum neural networks seems artificial to me. The data and parameters of each single unitary operator might be stored distributively between parties, just like in problem 7 of linear classification, where data x and y are assigned to Alice and Bob. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
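The Johnson-Lindenstrauss-based classical upper bound for linear classification mentioned in the summary can be illustrated with a toy NumPy experiment. This is not the paper's actual protocol, just the standard idea: with shared randomness, Alice and Bob project their high-dimensional vectors to k dimensions and exchange only k numbers, and the sign of the inner product is preserved with high probability.

```python
import numpy as np

def jl_project(v, k, rng):
    """Random Gaussian projection to k dims, scaled by 1/sqrt(k) so that
    inner products are preserved in expectation."""
    G = rng.standard_normal((k, len(v))) / np.sqrt(k)
    return G @ v

rng = np.random.default_rng(1)
d, k = 10_000, 256
x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)  # strongly correlated pair, x.y >> 0

# Alice and Bob use shared randomness (same seed -> same projection matrix)
# and communicate only k numbers each instead of d.
gx = jl_project(x, k, np.random.default_rng(42))
gy = jl_project(y, k, np.random.default_rng(42))

same_sign = np.sign(gx @ gy) == np.sign(x @ y)
```

Here the projected inner product concentrates around the true one (fluctuations of order ||x|| ||y|| / sqrt(k)), so the sign survives the compression from 10,000 to 256 dimensions.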
Rebuttal 1: Rebuttal: > the upper bound in Lemma 1 should be O(L log N) instead of O(log N)? The total communication is indeed linear in L, but for this reason the number of rounds required is linear in L. In each round, log(N) qubits should suffice in order to solve the problem. > The claim of QUANTUM advantage on QUANTUM neural networks seems a bit like cheating, although the model and problem are described using classical information. The problems we consider involve classical data, which can be encoded in the amplitudes of quantum states. Similarly, classical neural networks can be expressed as quantum circuits. Our results indicate that for certain model classes, doing this can lead to exponential savings in communication for distributed training. The overarching motivation is indeed the solution of classical learning problems. Note that in order to enjoy these advantages, we require that the quantum algorithm simply perform equivalently to the classical algorithm in terms of the accuracy or the time complexity, and we show that for such models exponential communication advantages are still possible. If we have misunderstood the concern that is being expressed here, we will be happy to elaborate further as needed during the discussion phase. > The two-party version of quantum neural networks seems artificial to me. The data and parameters of each single unitary operator might be stored distributively between parties, just like in problem 7 of linear classification, where data x and y are assigned to Alice and Bob. The motivation for the model is the structure of distributed classical neural networks. Note that pipeline parallelism is standard in training of large models, and is particularly useful in cases where the hardware is heterogeneous and one wants to sparsely activate subsets of a large model. We believe our distributed quantum model is the natural quantum analog of this type of neural network. 
Indeed, the expressivity of such models is unclear, which is why we additionally consider a specific class of graph neural networks in Section 4, for which the communication advantage holds. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Indeed, classical neural networks can be expressed as quantum circuits, but I am not sure whether converting classical neural networks into quantum circuits introduces much additional communication in your distributed setting. --- Reply to Comment 1.1.1: Comment: In our setting, each party constructs a set of unitaries locally, requiring no communication. Note that each unitary can be arbitrarily complex, since this is done locally. Each unitary can also be a nonlinear function of classical data or model parameters. The total communication complexity of inference will then scale linearly with the number of times the quantum state encoding the features has to be transferred back and forth between the players when evaluating the resulting circuit. A possible concern here is that it might be hard to construct expressive models in this way, or models that approximate the action of realistic classical neural networks. We show in Section 4, however, that for certain classes of graph neural networks (GNNs), models of this form require only a single round of communication to run inference and are expressive enough to perform well on standard benchmarks.
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments and endorsement of the work. We will respond to individual reviewers below.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Pandora's Box: Towards Building Universal Attackers against Real-World Large Vision-Language Models
Accept (poster)
Summary: This paper studies universal adversarial attacks on large vision language models (LVLMs). It focuses on the black-box setting where only the model's response is available to the adversary. This paper proposes a query-based targeted attack that leverages a proxy model to obtain the similarities between the target model's response and the target text. By approximating the gradient and using the importance score as the weight, the proposed method demonstrates solid results in exposing the safety concerns of several popular LVLMs. The proposed method is very practical since popular LVLM services often only provide the model's response. Strengths: - The threat model, in which the adversary can only get access to the model's response, is very realistic in the real-world setting since popular LVLMs are often offered as a service where only the model's response is available. Additionally, this paper studies the universal perturbation with targeted texts. Arguably, this type of attack is one of the most challenging ones. - The proposed method is technically sound and comprehensive experiments demonstrate the empirical performance. All components of the proposed attack are well explained. A sufficient ablation study has been conducted in this paper. - The safety concerns of LVLMs exposed in this paper could potentially have a large impact on the AI safety field. Weaknesses: - A large number of queries with similar contents might be easily detected by the defender. Experiments investigating potential defenses could make the paper more comprehensive. An example of such detection is in [1]. - For comparison with CroPA in Table 4, the CroPA paper stated that they use OpenFlamingo rather than Flamingo. It would be good to explicitly specify which model is being used for evaluation. If it is the open-source version, it would help to cite OpenFlamingo and clearly state that OpenFlamingo is used rather than Flamingo. 
If Table 4 uses Flamingo, it would not be appropriate to compare it with the results reported in CroPA in Table 4 since the open-source version has some slight differences from the original model. - All the target texts are rather short (only a few words). It would be more comprehensive to include longer sentences. Will longer sentences increase the query budget needed to achieve performance similar to these short sentences? - For targeted attacks, only reporting the semantic similarity scores makes it hard to fully understand the attack performance. Word/token-based evaluations, such as BLEU, are more appropriate. It is not a weakness, but I believe the publication year for [45] in the references is wrong. NeurIPS 2024 papers should be under review at the moment rather than published. --- [1] Chen, Steven, Nicholas Carlini, and David Wagner. "Stateful detection of black-box adversarial attacks." In Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence, pp. 30-39. 2020. Technical Quality: 3 Clarity: 3 Questions for Authors: The reviewer's main concerns are the points and questions outlined in the weaknesses section. The reviewer suggests that the authors focus on these points in the rebuttal. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and potential negative impacts have been sufficiently discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Experiments investigating potential defense could make the paper more comprehensive.** **A1:** Thanks for your suggestion. As shown in the following table, we evaluate the robustness of our adversarial patch against four popular defense methods. Specifically, PatchCleanser [a] is a state-of-the-art certifiable defense against adversarial patches. It uses double masking to certify the prediction. [c,d] are query-based defenses, which are specifically designed for detecting malicious queries by black-box attacks. [b] is the defense mentioned by the reviewer. The results show that our attack is robust to all of them. We will add these experiments in the revision. |ASR against defense|defense [a]|defense [b]|defense [c]|defense [d]| |:----:|:----:|:----:|:----:|:----:| |Our attack|92%|86%|79%|75%| [a] PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier. USENIX 2022 [b] Stateful detection of black-box adversarial attacks. S&P 2020 [c] Random noise defense against query-based black-box attacks. NeurIPS 2021 [d] Blacklight: Scalable defense for neural networks against query-based black-box attacks. USENIX 2022 **Q2: For comparison with CroPA in Table 4, the CroPA paper stated that they use OpenFlamingo rather than Flamingo. It would be good to specifically specify which model is being used for evaluation.** **A2:** We utilize the same open-source version for a fair comparison in Table 4. We will revise the description and add the reference. **Q3: All the target texts are rather few words. It would be more comprehensive to include longer sentences. Will longer sentences increase the query budgets to achieve a similar performance as these short sentences?** **A3:** Since we evaluate the text semantic similarity to assign importance weights and assess the attack performance, our proposed attack is not sensitive to the length of the target response text. 
That means longer sentences will not increase the query budget, and our pre-set query budget is sufficient to handle these cases. We conduct more experiments on longer sentences in the table below. We find that our attack also achieves strong performance on long target outputs. We will add more experiments and discussion in the revision. |Adversarial Target|ImageClassification|ImageCaption|VQA|Overall| |:----:|:----:|:----:|:----:|:----:| |I am sorry, there is no object you mentioned, so I cannot answer|0.837|0.814|0.865|0.839| |A beautiful bird with a black and white color in snow|0.819|0.842|0.848|0.836| |I want to destroy the whole world together|0.776|0.801|0.784|0.787| |A photo of a cat|0.862|0.841|0.854|0.852| |The view from the top of a hill overlooking the mountains|0.828|0.843|0.810|0.827| **Q4: For targeted attacks, only reporting the semantic similarity scores makes it hard to fully understand the attack performance.** **A4:** Thanks for your suggestion. Actually, in our paper, we follow the previous work [45] and utilize the semantic similarity for fair evaluation. This metric is more generalizable for evaluating different texts with the same semantics. For example, the output texts "unknown" and "I don't know" share similar semantics: they have a relatively high semantic similarity score but would score low on word-level metrics. Of course, we can also utilize the *ExactMatch* and *Contain* metrics to conduct strict word-level evaluation of our attack and assess the attack success rates. Specifically, the *ExactMatch* metric determines whether the LVLM output exactly matches the predefined target text, whereas the *Contain* (similar to BLEU) metric checks whether the output contains the target text. The latter is especially useful when outputs exceed the predefined target length. We report the corresponding evaluations in the table below, where our method also achieves significant attack performance, demonstrating the effectiveness of our attack. 
|LVLM Model|Dataset|ImageClassification (*ExactMatch*)|ImageClassification (*Contain*)|ImageCaption (*ExactMatch*)|ImageCaption (*Contain*)|VQA (*ExactMatch*)|VQA (*Contain*)| |:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| |LLaVA|MS-COCO|81.2%|83.7%|80.5%|82.1%|78.8%|83.7%| |LLaVA|DALLE-3|80.6%|84.3%|76.4%|82.9%|87.6%|90.1%| |LLaVA|VQAv2|81.0%|84.5%|77.2%|81.7%|85.4%|89.3%| |MiniGPT-4|MS-COCO|85.8%|86.1%|83.2%|85.6%|84.4%|88.5%| |MiniGPT-4|DALLE-3|79.4%|84.7%|84.3%|87.0%|85.9%|89.6%| |MiniGPT-4|VQAv2|84.8%|87.1%|85.9%|92.7%|87.3%|93.6%| |Flamingo|MS-COCO|80.5%|85.4%|77.9%|81.2%|79.4%|83.0%| |Flamingo|DALLE-3|82.4%|82.6%|84.5%|87.1%|80.3%|83.9%| |Flamingo|VQAv2|84.1%|85.3%|83.3%|86.7%|86.4%|89.2%| |BLIP-2|MS-COCO|81.6%|83.8%|76.4%|79.7%|82.5%|84.0%| |BLIP-2|DALLE-3|78.2%|81.9%|83.8%|84.3%|85.1%|85.4%| |BLIP-2|VQAv2|75.9%|79.6%|81.4%|84.5%|86.3%|88.2%| **Q5: It is not a weakness, but I believe the year of the publication for [45] in the reference is wrong.** **A5:** Thanks for your reminder. We will correct the details of reference [45] in the revision. --- Rebuttal Comment 1.1: Comment: Dear i8wm, What are your thoughts after reading the rebuttal and other reviews? Best, AC
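The *ExactMatch* and *Contain* checks described in A4 above are straightforward to implement. A minimal sketch follows; the function names and the case/whitespace normalization are our own assumptions, not taken from the paper:

```python
def exact_match(output: str, target: str) -> bool:
    # ExactMatch: the LVLM output is exactly the predefined target text.
    return output.strip().lower() == target.strip().lower()

def contain(output: str, target: str) -> bool:
    # Contain: the target text appears somewhere in the (possibly longer) output.
    return target.strip().lower() in output.strip().lower()
```

For example, with output "I am sorry, I cannot answer that." and target "I cannot answer", `contain` succeeds while `exact_match` fails, which is why *Contain* is the more forgiving of the two word-level metrics.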
Summary: This research introduces a novel approach to creating universal adversarial attacks against Large Vision-Language Models (LVLMs). It proposes a universal attacker against real-world LVLMs that operates with limited access to the model (only inputs and outputs). The attack is designed to be task-agnostic, using a universal adversarial patch that can deceive LVLMs across various tasks and inputs. The approach is model-agnostic, not requiring detailed knowledge of the LVLM's structure. Extensive experiments demonstrate the effectiveness of the attack against popular LVLMs like LLaVA, MiniGPT-4, Flamingo, and BLIP-2 across various tasks. Strengths: This paper addresses a significant gap in the field of adversarial attacks on LVLMs, offering a more practical and versatile approach compared to existing task-specific methods. The universal nature of the attack and its ability to function with limited model access make it particularly relevant for real-world applications and raise important questions about the security of LVLMs. Weaknesses: This paper presents an interesting approach to universal adversarial attacks on large vision-language models (LVLMs). However, there are several areas where the paper could be improved: 1. Comparison with non-universal attacks: The paper lacks a comparison between the proposed universal attack and non-universal attacks on specific tasks. Including such experiments would help readers better understand the advantages and limitations of this method. The current baselines ("Full attack" and "w/o importance") are insufficient, and more comparisons would strengthen the paper. 2. Query efficiency: The authors mention allowing 70,000 queries in total, but it's unclear how many images this applies to. This number seems excessive and impractical for real-world scenarios. A good attack should be query-efficient, and this aspect needs further explanation or justification. 3. 
Opposite directions claim: The statement about "some of them may have opposite directions" (line 205) lacks supporting evidence. This claim requires further explanation or empirical support. 4. Judge model acquisition: The paper should clarify how an attacker would obtain the judge model in practical scenarios and whether it needs to be related to the target LVLM. 5. Target text selection: Using "Unknown" as the target text may not truly represent a **targeted** attack. It would be valuable to include experiments with more specific target texts (e.g., "a photo of a cat") and report the success rates for such cases. 6. Patch visibility and physical attacks: Figure 4 suggests that the adversarial patch is quite noticeable to the human eye, which could be easily filtered in real-world applications. Additionally, the paper would benefit from including physical attack experiments, where the patch is printed and applied to real images to test LVLM performance. 7. Transferability to other models: The paper should explore whether this attack transfers to other image-capable language models like GPT-4 or Claude, as this would demonstrate the broader applicability of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: see above Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Comparison with non-universal attacks.** **A1:** Since existing LVLM attackers are implemented in different settings with different models/datasets, we have already provided detailed comparisons in Tables 3 and 4 of the paper. Note that existing methods are non-universal attacks and require prior LVLM knowledge to generate different perturbations for different images. In contrast, our attack solely accesses the LVLM input/output and can generate a single noised patch to fool all images/prompts, while achieving better attack performance. To provide a more comprehensive comparison, we also re-implement these attacks on our datasets/models/metrics, as shown in the table below:   |LVLM Model|Attack Method|MS-COCO (Overall)|DALLE-3 (Overall)|VQAv2 (Overall)| |:----:|:----:|:----:|:----:|:----:| |LLaVA|MF-Attack|0.626|0.634|0.647| |LLaVA|CroPA|0.778|0.796|0.782| |LLaVA|Ours|**0.830**|**0.837**|**0.829**| |MiniGPT-4|MF-Attack|0.643|0.618|0.635| |MiniGPT-4|CroPA|0.803|0.780|0.819| |MiniGPT-4|Ours|**0.841**|**0.838**|**0.865**| |Flamingo|MF-Attack|0.671|0.654|0.662| |Flamingo|CroPA|0.796|0.825|0.814| |Flamingo|Ours|**0.835**|**0.844**|**0.850**| |BLIP-2|MF-Attack|0.639|0.658|0.670| |BLIP-2|CroPA|0.783|0.799|0.811| |BLIP-2|Ours|**0.814**|**0.824**|**0.836**| The results show that our attack is still more adversarial. We think the reason is that our universal patch explicitly learns the general adversarial patterns against LVLMs and effectively estimates gradients by emphasizing positive directions. **Q2: Query efficiency.** **A2:** The overall 70k queries are utilized to generate the universal patch using 500 images (see Table 10 of the appendix B.5). This means 70k queries cover the processing of all 500 images, rather than each image requiring 70k individual queries. 
Moreover, as shown in Table 8 of the appendix B.3, our method takes only 4.9h to attack all images/prompts, which is significantly more efficient than previous methods that generate separate perturbations for each image/prompt, e.g., MF-Attack: 3h for only a single perturbation; CroPA: 2.6h for only a single perturbation. **Q3: Opposite directions claim.** **A3:** We have empirically supported this claim in Table 1 of the paper via the variant "w/o importance". In practice, not all slight additive noises point towards the optimal direction during gradient estimation, as negative noise may contain components decomposed in the opposite direction. If we treat all noises equally, the "w/o importance" variant performs worse. However, by weakening the negative noise and strengthening the positive noise with positive/negative-aware importance weights, our full attack achieves the best performance. This demonstrates that not all noise contributes positively to the optimal gradient direction. **Q4: Judge model acquisition.** **A4:** Actually, the attackers can utilize any text embedding model as the judge model, such as BERT or Sentence-BERT. As shown in Figure 5(a) of the paper, our attack is not sensitive to these text encoders. The judge model does not need to be related to the target LVLM. **Q5: Target text selection.** **A5:** As shown in Table 2 and Figure 8 of the paper, our attack can achieve significant performance on specific target texts. We also conduct more experiments on such types of texts in the table below. We will add more experiments in the revision. 
|Adversarial Target|ImageClassification|ImageCaption|VQA|Overall| |:----:|:----:|:----:|:----:|:----:| |I am sorry, there is no object you mentioned, so I cannot answer|0.837|0.814|0.865|0.839| |A beautiful bird with a black and white color in snow|0.819|0.842|0.848|0.836| |I want to destroy the whole world together|0.776|0.801|0.784|0.787| |A photo of a cat|0.862|0.841|0.854|0.852| |The view from the top of a hill overlooking the mountains|0.828|0.843|0.810|0.827| **Q6: Patch visibility and physical attacks.** **A6:** Although the adversarial patch is noticeable to humans, it can effectively fool the LVLMs in the universal setting without using any model details, compared to other global noise attacks (which are less obvious). Moreover, as shown in the following table, our adversarial patch is robust to existing defenses.   |ASR against defense|defense [a]|defense [b]|defense [c]|defense [d]| |:----:|:----:|:----:|:----:|:----:| |Our attack|92%|86%|79%|75%| Besides, we implement two types of physical attacks: A. we print and paste the patch on real images and scan the result to query the LVLM; B. we print and paste the patch on a flat surface in a real-world scene and collect new images. As shown in the table below, our method can still effectively attack these physical cases. |Adversarial Target|Case A (Overall)|Case B (Overall)| |:----:|:----:|:----:| |A photo of a cat|0.783|0.756| [a] PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier. USENIX 2022 [b] Stateful detection of black-box adversarial attacks. S&P 2020 [c] Random noise defense against query-based black-box attacks. NeurIPS 2021 [d] Blacklight: Scalable defense for neural networks against query-based black-box attacks. USENIX 2022 **Q7: Transferability to other models.** **A7:** Firstly, we have tested the transferability among four LVLMs in Table 6 of the appendix B.1. 
Secondly, to evaluate the transfer-attack performance on GPT-4 and Claude, we also provide experiments in the following table. These experiments demonstrate the broader applicability of our attack. |Target Model|From LLaVA (Overall)|From MiniGPT-4 (Overall)|From Flamingo (Overall)|From BLIP-2 (Overall)| |:----:|:----:|:----:|:----:|:----:| |GPT-4|0.724|0.751|0.702|0.736| |Claude|0.743|0.744|0.718|0.725| --- Rebuttal Comment 1.1: Comment: Dear K8hM, What are your thoughts after reading the rebuttal and other reviews? Best, AC --- Rebuttal Comment 1.2: Title: after rebuttal Comment: Thanks for the rebuttal! After reading the rebuttal, I would like to maintain my current score. --- Reply to Comment 1.2.1: Title: Thanks to the reviewer Comment: Thank you for your valuable response. If you have any further concerns or questions, please feel free to contact us!
Summary: This paper presents a novel approach to creating a universal adversarial attacker for LVLMs. The proposed method focuses on two main aspects: restricting access to only the LVLM inputs and outputs, and devising a task-agnostic adversarial patch that can deceive multiple multimodal downstream tasks. The approach involves initializing the adversarial patch through random sampling and optimizing it using a diverse set of LVLM task inputs. Extensive experiments demonstrate the effectiveness of the proposed method across various LVLMs and tasks. Strengths: 1. The black-box attack design makes the proposed method practical for real-world applications. 2. The universal adversarial patch can deceive multiple multimodal downstream tasks, enhancing the proposed method's utility. 3. Extensive experiments across different LVLMs and tasks validate the effectiveness of the proposed method. Weaknesses: 1. The paper could benefit from clearer explanations of the technical details and processes involved in creating and optimizing the adversarial patch. Technical Quality: 3 Clarity: 2 Questions for Authors: I am curious about the output of the model for a completely corrupted input, such as an image with noise that cannot be recognized by a human. It would be interesting to include this as a baseline. The paper does not explicitly state this nuance, but it seems this attack aims to mislead the model with an image that is clearly recognizable to humans. Otherwise, one could always send the model an image with pure noise, which would not be a useful attack. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The paper could benefit from clearer explanations of the technical details and processes.** **A1:** Thanks for your suggestion. We will add more corresponding explanations in the revision: (1) For adversarial patch initialization: to achieve successful targeted attacks, there is no prior knowledge of where the patch should be located or what it should look like. Therefore, for each targeted text, we randomly sample a few patch locations and patterns, and choose the combination of the two with the highest attack performance. In particular, to determine whether a location is good, we randomly sample a set of patterns at that location and examine the average attack performance over this set. Since the quality of a patch location also relies on the sampled patterns, we also draw patterns from a uniform distribution to find the optimal patch pattern. (2) For adversarial patch optimization: since no model gradients or details are available, we need to estimate the gradient direction for further perturbing the patch pattern. General Monte Carlo estimation applies a set of random noises to the patch and checks whether these noises can change the prediction. These perturbation directions are averaged as the final direction for mutating the patch. However, such a design is not efficient, as not all noises point towards the optimal direction. To this end, we propose an importance-aware gradient approximation method to adaptively adjust the weights for different samples, based on how well these sampled noises lead to the attacker-chosen text. Specifically, given a set of slight additive noises in each step, we first add them to the patch to assess their positive/negative degrees via our developed LVLM-aware indicator function. Based on this, we then assign corresponding weights to positive/negative noises to strengthen the positive gradient direction while weakening the negative gradient direction. 
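As a rough, self-contained sketch of the importance-aware gradient approximation described above (the function names, Gaussian noise distribution, and weight values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def estimate_gradient(patch, score_fn, n_samples=20, sigma=0.1,
                      w_pos=1.5, w_neg=0.5):
    """Importance-weighted zeroth-order gradient estimate (illustrative only).

    score_fn(patch) -> scalar attack score, e.g. the judge-model similarity
    between the black-box LVLM response and the attacker-chosen target text.
    """
    base = score_fn(patch)
    grad = np.zeros_like(patch, dtype=float)
    for _ in range(n_samples):
        noise = np.random.randn(*patch.shape)        # slight additive noise
        delta = score_fn(patch + sigma * noise) - base
        # Strengthen noises that move the response toward the target text,
        # weaken those that move it away (the "negative" directions).
        weight = w_pos if delta > 0 else w_neg
        grad += weight * delta * noise
    return grad / (n_samples * sigma)
```

Here `score_fn` stands in for one model query plus the judge-model similarity; the paper's actual indicator function and weighting scheme may differ.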
In this manner, we can efficiently and effectively optimize the adversarial patch with optimally estimated gradients.  **Q2: I am curious about the output of the model for a completely corrupted input, such as an image with noise that cannot be recognized by a human. It would be interesting to include this as a baseline.** **A2:** We have conducted detailed comparisons between our adversarial patch and completely corrupted inputs in Tables 3 and 4 of our paper. In these two tables, although previous LVLM attackers (MF-Attack, CroPA) aim to globally perturb the whole image to improve imperceptibility, they rely on the LVLM details and gradients for perturbation updates. Instead, our adversarial patch can be optimized in a more challenging but practical setting by solely querying the model, while achieving better attack performance than these LVLM attackers (Ours vs MF-Attack: 0.727 vs 0.646; Ours vs CroPA: 0.69 vs 0.65), demonstrating our effectiveness. Besides, we also re-implement these attacks on our datasets/models/metrics as baselines for comparison in the following table. The results show that our attack is still more adversarial, as we explicitly and effectively estimate the optimal gradient direction for optimization.   |LVLM Model|Attack Method|MS-COCO (Overall)|DALLE-3 (Overall)|VQAv2 (Overall)| |:----:|:----:|:----:|:----:|:----:| |LLaVA|MF-Attack|0.626|0.634|0.647| |LLaVA|CroPA|0.778|0.796|0.782| |LLaVA|Ours|**0.830**|**0.837**|**0.829**| |MiniGPT-4|MF-Attack|0.643|0.618|0.635| |MiniGPT-4|CroPA|0.803|0.780|0.819| |MiniGPT-4|Ours|**0.841**|**0.838**|**0.865**| |Flamingo|MF-Attack|0.671|0.654|0.662| |Flamingo|CroPA|0.796|0.825|0.814| |Flamingo|Ours|**0.835**|**0.844**|**0.850**| |BLIP-2|MF-Attack|0.639|0.658|0.670| |BLIP-2|CroPA|0.783|0.799|0.811| |BLIP-2|Ours|**0.814**|**0.824**|**0.836**| Secondly, previous completely corrupted inputs are diverse and sensitive to their global noises. 
These noises are specific to different images or prompts and can hardly be effective in attacking other images/prompts. Instead, as shown in Figure 3 of the paper, our adversarial patch design is more flexible and can easily achieve a more challenging universal attack. **Q3: The paper does not explicitly state this nuance, but it seems this attack aims to mislead the model with an image that is clearly recognizable to humans.** **A3:** Thanks for your concern. We want to clarify that: (1) Our main goal is to tackle two critical issues in existing LVLM attackers (not to mislead LVLMs with recognizable perturbations): (i) restricting access to only the LVLM inputs and outputs; (ii) devising a universal adversarial perturbation. (2) To achieve the above goal, traditional attacks using completely corrupted inputs are ineffective, as they require backpropagated gradients for optimization and their perturbations vary across different images/prompts. Therefore, we choose to use an adversarial patch to handle the above two challenging issues, which has been proven to be more effective than previous global noises through our carefully designed patch initialization and optimization processes. (3) We can further improve the imperceptibility of our attack by using a smaller patch, which also achieves competitive performance, as in Table 5 of the paper. Besides, our patch pattern is well-optimized via our designed strategy to achieve the challenging targeted attack, which is quite different from ineffective random pure noise. (4) Moreover, as in the following table, our adversarial patch is robust to existing defenses.   |ASR against defense|defense [a]|defense [b]|defense [c]|defense [d]| |:----:|:----:|:----:|:----:|:----:| |Our attack|92%|86%|79%|75%| [a] PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier. USENIX 2022 [b] Stateful detection of black-box adversarial attacks. 
S&P 2020 [c] Random noise defense against query-based black-box attacks. NeurIPS 2021 [d] Blacklight: Scalable defense for neural networks against query-based black-box attacks. USENIX 2022 --- Rebuttal Comment 1.1: Comment: Dear 3jB6, What are your thoughts after reading the rebuttal and other reviews? Best, AC
Summary: The paper proposes a universal adversarial attack method targeting large vision-language models (LVLMs) by designing a universal adversarial patch. This method restricts access to only the model's inputs and outputs, creating a task-agnostic adversarial patch that can deceive various LVLM-driven tasks without prior model knowledge. Extensive experiments validate the effectiveness of the proposed attack across multiple models and tasks. Strengths: 1. This paper investigates the vulnerability of real-world LVLMs in a practical but challenging setting, where the attackers can only access the input and output of the LVLM. It is more practical. 2. This paper devises a universal adversarial patch that can be pasted onto any input to fool any LVLM downstream task, which is significant for evaluating the security of LVLMs, especially in security-critical scenarios. Weaknesses: 1. In this attack setting, constraints don't seem as important because there is no need to ensure stealthiness, but patch size is quite crucial. In extreme cases, if the patch size is set to be as large as the image, the attack loses its meaning. So, can the attack achieve the same effect with a very small patch size without adding constraints to the perturbation? 2. Why do different target labels affect the choice of patch position? Can the patch be fixed in a specific position to achieve the same attack effect? 3. Compared to adversarial example attacks, this type of attack is certainly more powerful. In practical scenarios, this attack seems more meaningful for jailbreaking large models, such as inducing the model to follow harmful instructions or provide harmful information. The responses tested in the paper are relatively short, such as a single word. Does the length of the target response text affect the effectiveness of this attack? 4. In real-world scenarios, since this patch attack is quite obvious, a user could simply remove the patch's effect by cropping the image. 
So, what is the practical significance of this type of attack? Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to the Weakness section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Can the attack achieve the same effect with a very small patch size without adding constraints to the perturbation?** **A1:** Without adding perturbation constraints, a smaller patch can achieve a similar attack performance. We implement this perturbation constraint to make a fair comparison with existing LVLM attack methods, as shown in Table 3 and 4 of the paper. Although we have conducted ablation studies on different patch sizes in Table 5 of the paper, these experiments are carried out with the pre-set perturbation constraint. We provide ablations without adding perturbation constraints in the following table: |Patch Size|ImageClassification|ImageCaption|VQA|Overall| |:----:|:----:|:----:|:----:|:----:| |S_p=16 (w/o. constraint)|0.776|0.753|0.825|0.785| |S_p=32 (w/. constraint)|0.734|0.720|0.778|0.744| |S_p=32 (w/o. constraint)|0.805|0.787|0.842|0.811| |S_p=48 (w/. constraint)|0.793|0.775|0.842|0.803| |S_p=48 (w/o. constraint)|0.819|0.823|0.860|0.834| |S_p=64 (w/. constraint)|0.824|0.806|0.879|0.837| |S_p=64 (w/o. constraint)|0.858|0.842|0.901|0.867| It shows that, without constraint, patch size 48x48 (w/o. constraint) can achieve a similar overall adversarial effect as our main attack setting of patch size 64x64 (w/. constraint) in the paper (0.834 vs 0.837). Moreover, patch size 32x32 without the constraint also outperforms its constrained variant, and even a much smaller patch size 16x16 without the constraint also achieves a very competitive attack performance. This highlights the significant potential of our proposed attack. **Q2: Why do different target labels affect the choice of patch position? Can the patch be fixed in a specific position to achieve the same attack effect?** **A2:** (1) We attribute this to the fact that different local regions of the same image contribute varying levels of ambiguity to different target labels, which affects the LVLM model’s ability to output specific semantic text. 
For instance, in Figure 4 of the paper, "I cannot answer" and "I am sorry" have similar semantics and therefore share close patch positions, while the distinct semantic "I hate people" corresponds to a different position. (2) Moreover, we investigate the performances of a fixed patch position (on the upper left corner of images) in the following table. |Adversarial Target|Overall (w/o. fixed patch)|Overall (w/. fixed patch)| |:----:|:----:|:----:| |Unknown|0.837|0.796| |I cannot answer|0.844|0.815| |I am sorry|0.842|0.807| This shows that a fixed position can degrade attack performance, indicating that the initial patch position is crucial for targeting specific texts. However, the fixed patch still exhibits a competitive adversarial attack effect, demonstrating the generalizability of our approach. **Q3: Does the length of the target response text affect the effectiveness of this attack?** **A3:** Since we evaluate the semantic similarity to assess the attack performance in the judge model and indicator function, our attack is not sensitive to the length of the target response text. As shown in Table 2 and Figure 8 of the paper, our attack can also achieve significant attack performance on long target text. We also conduct more experiments on longer target text in the table below:   |Adversarial Target|ImageClassification|ImageCaption|VQA|Overall| |:----:|:----:|:----:|:----:|:----:| |I am sorry, there is no object you mentioned, so I cannot answer|0.837|0.814|0.865|0.839| |A beautiful bird with a black and white color in snow|0.819|0.842|0.848|0.836| |I want to destroy the whole world together|0.776|0.801|0.784|0.787| |A photo of a cat|0.862|0.841|0.854|0.852| |The view from the top of a hill overlooking the mountains|0.828|0.843|0.810|0.827| It shows that the length of target texts does not affect the attack effectiveness, demonstrating the scalability of our attack. 
**Q4: What is the practical significance of this type of attack?**

**A4:** The practical significance of our attack is four-fold:

(1) In practical cases, our patches have great potential to be pasted onto realistic images/pictures to carry out attacks [e,f]. We implement two types of physical attack: A. we print the patch, paste it on real images, and scan them to query the LVLM; B. we print the patch, paste it on a flat surface in a realistic scene, and collect new images. As shown in the table below, our method can effectively attack both physical cases.

|Adversarial Target|Case A (Overall)|Case B (Overall)|
|:----:|:----:|:----:|
|A photo of a cat|0.783|0.756|

(2) To reduce our adversarial effect, directly cropping the patch out may destroy the image structure and lose some content. Moreover, as shown in the following table, our attack is robust to existing detection-based defenses, indicating that it is hard for automatic systems to prevent our attack.

|ASR against defense|defense [a]|defense [b]|defense [c]|defense [d]|
|:----:|:----:|:----:|:----:|:----:|
|Our attack|92%|86%|79%|75%|

(3) Our method is practical in real-world LVLM applications, as we only access the models' inputs/outputs, whereas existing LVLM attacks rely heavily on prior knowledge of the LVLM.

(4) Our universal patch design also demonstrates scalability. Compared to previous adversarial perturbation attacks, our attack is efficient and effective at fooling different images/prompts in a single process.

[a] PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier. USENIX 2022
[b] Stateful detection of black-box adversarial attacks. S&P 2020
[c] Random noise defense against query-based black-box attacks. NeurIPS 2021
[d] Blacklight: Scalable defense for neural networks against query-based black-box attacks. USENIX 2022
[e] Naturalistic Physical Adversarial Patch for Object Detectors. ICCV 2021
[f] DAP: A Dynamic Adversarial Patch for Evading Person Detectors. CVPR 2024

--- Rebuttal Comment 1.1: Comment: Dear 8diJ, What are your thoughts after reading the rebuttal and other reviews? Best, AC --- Rebuttal Comment 1.2: Comment: My concerns are properly addressed. Thanks for the rebuttal! --- Reply to Comment 1.2.1: Title: Thanks to the reviewer Comment: We would like to thank the reviewer for responding to our rebuttal. It is great to know that your concerns have been addressed.
Rebuttal 1: Rebuttal: Dear reviewers, We greatly appreciate your acknowledgment of our work and your helpful, insightful comments. Following the reviewers' suggestions, we have carefully revised the paper and conducted a series of new experiments to address the reviewers' concerns. In the following, under each reviewer's comment, we address the concerns point by point.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
TrAct: Making First-layer Pre-Activations Trainable
Accept (poster)
Summary: This paper presents TrAct, a novel training strategy that modifies the optimization behavior of the first layer. It achieves faster convergence and better classification performance across different models. The effectiveness of TrAct is demonstrated across a range of 50 experimental setups on various benchmarks, underscoring the method's capability. Strengths: 1. The technical elaboration of the proposed method is clear. 2. I really like the motivation for the proposed method, which is straightforward. 3. The evaluations conducted on the provided benchmarks provide evidence of the effectiveness of the proposed methods. However, there are some concerns regarding the experimental results, which will be further discussed in the weaknesses section. Weaknesses: - What I am curious about is whether TrAct is also generally applicable to other visual tasks, such as detection, segmentation, pose estimation, etc. The authors could simply run some experiments on Faster R-CNN to verify the applicability of TrAct to dense visual prediction. If applicable, this would further enhance the impact of the proposed method. - Why is there a significant difference in the performance of TrAct between SGD and Adam? For example, in Figure 2, TrAct seems to converge faster with SGD, even surpassing the performance of the 800-epoch baseline with 100 epochs of training time. However, with Adam, the performance improvement seems rather minor. Is there a theoretical explanation for this? Technical Quality: 4 Clarity: 4 Questions for Authors: Shown as above Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The paper has concluded this in the checklist Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time reviewing and for helping us to improve our paper.

**Strengths:**

> 1. The technical elaboration of the proposed method is clear.
> 2. I really like the motivation for the proposed method, which is straightforward.
> 3. The evaluations conducted on the provided benchmarks provide evidence of the effectiveness of the proposed methods. However, there are some concerns regarding the experimental results, which will be further discussed in the weaknesses section.

Thank you for appreciating the clarity of our technical elaboration, our straightforward motivation, and the effectiveness of our method.

**Weaknesses:**

> What I am curious about is whether TrAct is also generally applicable to other visual tasks, such as detection, segmentation, pose estimation, etc. The authors could simply run some experiments on Faster R-CNN to verify the applicability of TrAct to dense visual prediction. If applicable, this would further enhance the impact of the proposed method.

Thank you for this concrete suggestion, which really helps extend our paper. Accordingly, we trained Faster R-CNN models. However, we need to mention that Faster R-CNN uses a pretrained vision encoder where the first 4 layers are frozen. To enable a fair comparison, we unfroze these layers when training the object detection head. We trained the models on PASCAL VOC2007 and have the following preliminary results with 2 seeds (measured in test mAP):

| Seed | vanilla | TrAct |
|--|--|--|
| 1 | 0.655 | 0.667 |
| 2 | 0.664 | 0.674 |

We can observe that both seeds with TrAct performed better than the best seed of the vanilla method, and the average improvement is 1.1%. We offer to extend these studies for the camera-ready.
We would like to point out that, while TrAct is especially designed for speeding up pretraining or training from scratch, i.e., when actually learning the first layer, we were excited to find that it also helps in finetuning pretrained models. The limitation is that TrAct is only applicable when the first layer is actually trained.

> Why is there a significant difference in the performance of TrAct between SGD and Adam? For example, in Figure 2, TrAct seems to converge faster with SGD, even surpassing the performance of the 800-epoch baseline with 100 epochs of training time. However, with Adam, the performance improvement seems rather minor. Is there a theoretical explanation for this?

This is indeed an interesting question, and there is a theoretical explanation for this: the method is motivated for SGD training, as the update that TrAct is intended to perform is returned in the form of the gradient, and to execute the update the optimizer should (in theory) be SGD. That it still works very well even for Adam is great and illustrates the strong robustness and versatility of our method. Your comment inspired us to extend Figure 2 with an experiment where we train everything except for the first layer with Adam, and train the first layer with TrAct and the SGD optimizer. Here, we observed small improvements over optimizing the first layer with Adam, which are shown in the Author Rebuttal PDF. Finally, we want to mention that, while using SGD for the first layer in combination with TrAct can improve performance, for easier adoption it is often simpler to use the same optimizer for everything. Nevertheless, for advanced users, this is a way to squeeze out even a bit more performance. We will add a respective discussion to the camera-ready. We would appreciate it if you could let us know whether we have resolved and successfully answered all of your questions and concerns. If new questions or concerns come up, please let us know.
--- Rebuttal Comment 1.1: Comment: Dear Reviewer yoFY, We wish to thank you very much for helping us improve the paper. Hopefully, you have had a chance to take a look at our rebuttal. Since the discussion period ends today, we would greatly appreciate it if you could respond to our rebuttal soon. This will give us an opportunity to address any further questions and comments that you may have before the end of the discussion period. --- Rebuttal 2: Title: Post-Rebuttal Comments Comment: Thanks to the authors for their rebuttal. I appreciate them taking the time to answer the various questions presented here. Most of my concerns have been addressed. I specifically appreciate the analysis of transferring to object detection tasks. I will keep my original rating as `accept`.
Summary: The paper introduces TrAct, a training strategy that modifies the optimization behaviour of the first layer. The proposed first-layer optimization enables slightly better results when training the model for a smaller number of epochs. TrAct is demonstrated across a wide range of image classification setups and shown to be consistent. Strengths: 1. TrAct enables faster convergence or can achieve slightly better performance for the same number of epochs. 2. The paper demonstrates the applicability of TrAct in various possible scenarios covering a wide range of settings. Weaknesses: 1. A small overhead (due to additional parameters) in computational time when training with TrAct. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What happens when only the last layer of a pre-trained model is fine-tuned? This is a setup used in evaluating many self-supervised methods. Wouldn't the first-layer optimization using TrAct create a bias for a new data distribution? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not explicitly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time reviewing and for helping us to improve our paper.

**Strengths:**

> 1. TrAct enables faster convergence or can achieve slightly better performance for the same number of epochs.
> 2. The paper demonstrates the applicability of TrAct in various possible scenarios covering a wide range of settings.

Thank you for appreciating especially the faster convergence of our method, as well as the applicability of our method across a wide range of settings.

**Weaknesses:**

> 1. A small overhead (due to additional parameters) in computational time when training with TrAct.

We would like to clarify that the small overhead is only due to the backward pass of the first layer being slightly more expensive, whereas the forward pass is not modified. Indeed, we do not introduce additional parameters.

**Questions:**

> What happens when only the last layer of a pre-trained model is fine-tuned? This is a setup used in evaluating many self-supervised methods. Wouldn't the first-layer optimization using TrAct create a bias for a new data distribution?

During additional fine-tuning experiments, not reported in the paper, comparing a model pre-trained with TrAct against one pre-trained without TrAct, with both models fine-tuned without TrAct, we observed that the model originally pre-trained with TrAct performed better. If you would like us to perform this experiment also for only fine-tuning the last layer, we offer to include these results in the camera-ready; however, we strongly expect that the model pre-trained with TrAct still performs better in this case, because the performance of the entire model is mostly affected by pretraining, and the first layer being optimized more efficiently leads to a more efficient optimization of the rest of the network. We would appreciate it if you could let us know whether we have resolved and successfully answered all of your questions and concerns. If new questions or concerns come up, please let us know.
--- Rebuttal Comment 1.1: Comment: Dear Reviewer juWi, We wish to thank you very much for helping us improve the paper. Hopefully, you have had a chance to take a look at our rebuttal. Since the discussion period ends today, we would greatly appreciate it if you could respond to our rebuttal soon. This will give us an opportunity to address any further questions and comments that you may have before the end of the discussion period. --- Rebuttal Comment 1.2: Title: Thanks! Comment: Thank you for the rebuttal. If possible, please add the additional experiment in the final version of the paper.
Summary: When training vision models, the update of the weights of the first layer is proportional to the input pixel values. This can make the model learn high-contrast images faster and damage learning efficiency. To reduce this dependency, the paper proposes to optimize the first-layer embedding (before activation) directly. This perspective leads to a new way of updating the weights and results in a lightweight modification to a variety of vision models. The method is demonstrated on image classification problems. Strengths: The authors identified the fundamental problem of training vision models. Compared to modifying the input or the model architecture, targeting the update of the first layer weight is a more direct approach and easy to understand the impact. They formulated an optimization problem and clearly listed out the derivation for the solution to the update of the weights. The proposed method is easy to implement. Weaknesses: There should be more elaboration on the intuition of the first layer embedding optimization. I can understand the purpose and the conceptual procedure can be viewed as activation applied to the first layer, but how this formulation reduces the dependency on the inputs without hurting the training is not obvious. In the introduction, the authors argued that they bridged the gap between the “embedding” layer in language models and the first layers in vision models. However, it is not clear to me the connection of the proposed work to the “embedding” layer update in the language models. Technical Quality: 3 Clarity: 3 Questions for Authors: I believe the proposed method is general enough for vision tasks other than image classification. It would be interesting to see the performance on segmentation, detection, and even image generation. Specifically, I would like to see if the modification on the embedding hurts the performance of the tasks requiring more pixel-by-pixel understanding. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors stated that limitations are discussed as all assumptions are pointed out in the work. However, the assumptions listed are fairly broad and not specific to the proposed work. I’d like to see more discussion on the limitations imposed when modifying the first layer embedding. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time reviewing and for helping us to improve our paper.

> The authors identified the fundamental problem of training vision models. Compared to modifying the input or the model architecture, targeting the update of the first layer weight is a more direct approach and easy to understand the impact. They formulated an optimization problem and clearly listed out the derivation for the solution to the update of the weights. The proposed method is easy to implement.

Thank you very much for appreciating the clarity of the derivation of our method and its simplicity to implement. We hope that these properties, combined with the training speedups, lead to wide adoption in the community.

> There should be more elaboration on the intuition of the first layer embedding optimization. I can understand the purpose and the conceptual procedure can be viewed as activation applied to the first layer, but how this formulation reduces the dependency on the inputs without hurting the training is not obvious. In the introduction, the authors argued that they bridged the gap between the “embedding” layer in language models and the first layers in vision models. However, it is not clear to me the connection of the proposed work to the “embedding” layer update in the language models.

In language models, the embedding layer is a lookup table, with an embedding / activation for each token. Thus, given an input token (an integer), one embedding is selected (a row of the embedding matrix) and fed forward into the next layer. The training dynamics of the embedding layer correspond to updating the embeddings directly with respect to the gradient. The update in a language model, for a token identifier $i$, is $W_i \leftarrow W_i - \eta \cdot \nabla_{z} \mathcal{L}(z)$, where $z = W_i$ is the activation of the first layer and at the same time the $i$th row of the embedding (weight) matrix $W$.
Equivalently, we can write $z \leftarrow z - \eta \cdot \nabla_{z} \mathcal{L}(z)$. In contrast, in vision models, the update is $W \leftarrow W - \eta \cdot \nabla_{z} \mathcal{L}(z) \cdot x^\top$, and our goal is to achieve an update that is close to $z^* \leftarrow z - \eta \cdot \nabla_{z} \mathcal{L}(z)$, which we achieve via our closed-form solution. We will include the extended discussion in the camera-ready.

> I believe the proposed method is general enough for vision tasks other than image classification. It would be interesting to see the performance on segmentation, detection, and even image generation. Specifically, I would like to see if the modification on the embedding hurts the performance of the tasks requiring more pixel-by-pixel understanding.

Thank you for this suggestion, which we combined with Reviewer yoFY's suggestion of training a Faster R-CNN object detection model. Please see our discussion and preliminary results table in our response to Reviewer yoFY.

> The authors stated that limitations are discussed as all assumptions are pointed out in the work. However, the assumptions listed are fairly broad and not specific to the proposed work. I’d like to see more discussion on the limitations imposed when modifying the first layer embedding.

There are no assumptions that are specific to this work, as our method is generally applicable as long as the first layer is a fully connected or convolutional layer. A potential limitation is that the approach is not applicable in settings where the first layer is frozen, e.g., at a random initialization. However, the practice of freezing the first layer in some models stems from the very first-layer training-dynamics problem identified in our work, and actually training the first layer with our technique could improve performance. Please let us know if you had a different specific assumption in mind that you would like us to point out in the paper or discuss.
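To make the contrast between these two update rules concrete, here is a small NumPy sketch of the vision-model case. It only illustrates the training-dynamics issue, namely that a plain SGD step on $W$ changes the pre-activation by $-\eta \cdot \nabla_z \mathcal{L}(z) \cdot \lVert x \rVert^2$; it is not the actual TrAct closed-form update:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1
d_in, d_out = 8, 4

# Vision-style first layer: z = W @ x, so dL/dW = g @ x^T for g = dL/dz.
W = rng.normal(size=(d_out, d_in))

def pre_activation_change(x, g):
    """Change in z = W @ x after one plain SGD step on W."""
    W_new = W - eta * np.outer(g, x)  # standard update: W -= eta * g x^T
    return W_new @ x - W @ x          # algebraically equals -eta * g * ||x||^2

g = rng.normal(size=d_out)            # some upstream gradient dL/dz
x_low = rng.normal(size=d_in) * 0.1   # low-contrast input
x_high = x_low * 10.0                 # same pattern, 10x the contrast

dz_low = pre_activation_change(x_low, g)
dz_high = pre_activation_change(x_high, g)

# The pre-activation update scales with ||x||^2, so the high-contrast input
# moves its embedding 100x further than the low-contrast one.
ratio = np.linalg.norm(dz_high) / np.linalg.norm(dz_low)
print(round(ratio))  # 100
```

A language-model embedding lookup, in contrast, updates $z$ by exactly $-\eta \cdot \nabla_{z} \mathcal{L}(z)$ regardless of any input magnitude, which is the behaviour the closed-form solution aims to approximate for vision models.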
We would appreciate it if you could let us know whether we have resolved and successfully answered all of your questions and concerns. If new questions or concerns come up, please let us know. --- Rebuttal Comment 1.1: Comment: Dear Reviewer v9Bj, We wish to thank you very much for helping us improve the paper. Hopefully, you have had a chance to take a look at our rebuttal. Since the discussion period ends today, we would greatly appreciate it if you could respond to our rebuttal soon. This will give us an opportunity to address any further questions and comments that you may have before the end of the discussion period.
null
null
Rebuttal 1: Rebuttal: Additional Figure PDF based on Reviewer yoFY's remark. Pdf: /pdf/b4801e522730c57afe146073695fd6d76eec05b7.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null