title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
MOTE-NAS: Multi-Objective Training-based Estimate for Efficient Neural Architecture Search | Accept (poster) | Summary: This paper proposes Multi-Objective Training-based Estimate (MOTE) for efficient NAS, leveraging landscape view and convergence speed to estimate the performance of neural architectures. It also introduces two reduction strategies for speeding up MOTE generation. Compared to other training-free NAS methods, MOTE achieves better search performance on NasBench201.
Strengths: + The paper is well-written.
+ The search cost of NAS is very low, enabling NAS with very limited computing resources.
+ The theoretical analysis is sound.
+ The experimental results are promising.
Weaknesses: - Code is not available.
- The experimental results on ImageNet are not very good.
- Regarding the title "Multi-Objective Training-based Estimate for Efficient NAS," on NasBench201, only accuracy is considered as a metric, missing other important metrics like the number of parameters, FLOPs, latency, etc.
- The reduction strategies lack novelty, as many low-fidelity estimation methods have been proposed, including early stopping[1], training with down-scaled models[1], and training on a subset of the data[2].
[1]: B. Zoph, V. Vasudevan, J. Shlens, and Q. Le. Learning transferable architectures for scalable image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[2]: A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. In Artificial intelligence and statistics, pages 528–536. PMLR, 2017.
Technical Quality: 4
Clarity: 4
Questions for Authors: I would like to ask about the results on ImageNet in Table 2. OFA costs 50 days, including the training and search process. Do you consider the training cost of the searched architectures? I think 0.1 days is far from enough to well-train a model on ImageNet.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No additional significant limitations. Please refer to the weaknesses. It would be beneficial if the paper could add future research directions for this work and extend this method beyond image classification tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback and positive evaluation of our work.
**Answers to Questions:**
**A1.** Yes, by adjusting the K value in MOTE-NAS to extend the search time, we can achieve better architectures. While the current architecture obtained through our method does not have the highest accuracy on ImageNet, the significantly lower time cost is a considerable advantage.
**Answers to Weaknesses:**
**AW1.** We have included the core code of MOTE in the supplementary materials, and the complete code will soon be available on GitHub (the link is not shown due to the policy of NIPS 2024).
**AW2.** Please refer to A1.
**AW3.** Generally, current NAS research focuses on finding the architecture with the highest accuracy within a given search space (NASBench-101 and NASBench-201) as quickly as possible. Other factors such as parameters, FLOPs, latency, and other metrics can be considered in our cost function in a hybrid form.
**AW4.** The reduction strategies serve as acceleration methods for MOTE. When MOTE applies RD+RA, it does not experience significant performance loss in terms of "accuracy." MOTE can achieve substantial acceleration with minimal performance loss and still outperform other NAS methods, as demonstrated in Fig. 7 and Tab. 1 of our paper. Therefore, we emphasize the robustness of MOTE rather than the novelty of the reduction strategies. In TABLE III of the attached file, we replaced RD with a random selection-based variant (RD-RS), and MOTE still performed well. This further demonstrates that the novelty of the reduction strategies is not a primary factor affecting MOTE's performance.
---
Rebuttal Comment 1.1:
Title: Comments Added After Reading Author Response
Comment: Thank you for the further response. In your response to question A1, you mentioned that the training cost of the searched architecture has been considered. However, I am concerned that 0.1 GPU days may not be sufficient for training neural architectures to convergence on ImageNet. Could you please clarify if the 0.1 GPU days include the training time? Thank you.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. The experiments conducted on ImageNet only represent the search time and do not include the training time. Since the MOTE-NAS-EF (top-k=1) version we used in Tab.2 is evaluation-free, we specifically emphasized that this method requires only 0.1 GPU days for the search process. In fact, training this top one architecture found after the search still takes nearly a week on a single V100 GPU.
---
Rebuttal 2:
Comment: Thank you for your feedback. The discrepancy arises due to differences in the search space and other hyperparameter settings. The configuration in our Tab. 2 was based on the experimental setup from ZiCO [1] (refer to their Table 2). Some of the values marked with an asterisk in our Tab. 2 are experimental results that we have reproduced.
[1] Li, Guihong, et al. "Zico: Zero-shot nas via inverse coefficient of variation on gradients." arXiv preprint arXiv:2301.11300 (2023).
---
Rebuttal Comment 2.1:
Comment: Thank you again for your response. Although the search cost of 50 GPU days for OFA in ZiCO does not make sense to me, your answer has partially addressed my concerns. I will keep a positive score for this work.
---
Reply to Comment 2.1.1:
Comment: Thank you very much for your detailed feedback and thoughtful review. We greatly appreciate the time and effort you have invested in our paper. Your positive assessment means a lot to us. | Summary: The paper presents MOTE, a training-based estimate for Neural Network accuracy, as a proxy method to accelerate Neural Architecture Search. The intuition behind MOTE is the non-convexity and non-linearity in the training loss landscape. Consequently, the authors provide a model that characterizes the training process's non-convexity and convergence speed and consider both in a few-shot joint optimization fashion. The proposed model is applied on a reduced architecture and dataset and requires a few training iterations. The evaluation results are promising and show the superiority of MOTE compared to predictor-based and few-shot NAS methods.
Strengths: - The paper tackles a critical problem—accelerating the NAS framework with few-shot performance predictors.
- The problem is nicely formulated, and the motivations are clearly stated. The paper is also well-written and has a nice flow.
- The evaluation section seems technically sound, with various experiments showing MOTE's superiority compared to state-of-the-art methods.
Weaknesses: - Zero-shot estimates like NTK struggle with high sensitivity to the weight initialization method. While the authors have raised this point, there is no detailed study on MOTE's sensitivity to weight initialization. Furthermore, the type of initialization adopted by MOTE is not mentioned in the paper.
- The non-convexity and non-linearity in DNNs are a core reason for the inefficiency of NTK-based methods. The authors should put more emphasis on this point, as it is a significant factor influencing the performance of MOTE. An ablation study on the impact of skip-connection and non-linear operations on the performance of MOTE can help draw more concrete conclusions about the robustness of the proposed approach when the DNN model is highly non-linear.
- The RD transformation needs to be fully detailed. For CIFAR100, the authors employed a VGG model pretrained on ImageNet1K. However, it is not mentioned whether they fine-tune for CIFAR100, what the rationale is behind choosing the VGG model specifically, or whether VGG should also be employed to perform RD on any dataset.
- While the comparison in the evaluation section is more focused on predictor-based and few-shot NAS frameworks, it does not specifically compare against zero-shot approaches [1, 2], especially the latest work that considers the feature map correlation [3].
- The search spaces considered are mainly from the NAS-Bench line of work. MOTE should also be evaluated on supernets like P-DARTS and Transformer-based models.
- The MOTE-NAS and MOTE-NAS-EF are built upon evolutionary algorithms. It's therefore unclear whether the superior performance comes from the evolutionary search or MOTE evaluation. The evolutionary search should be compared w/ and w/o MOTE and also against simple baselines (e.g., random search).
- The conclusion section should discuss MOTE's limitations in more detail, especially for multi-objective NAS methods, where efficiency and accuracy need to be optimized in a joint fashion.
**References:**
- [1]: Lin, Ming, et al. "Zen-nas: A zero-shot nas for high-performance image recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
- [2]: Bhardwaj, Kartikeya, et al. "ZiCo-BC: A Bias Corrected Zero-Shot NAS for Vision Tasks." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
- [3]: Jiang, Tangyu, Haodi Wang, and Rongfang Bie. "MeCo: zero-shot NAS with one data and single forward pass via minimum eigenvalue of correlation." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: - To what extent is MOTE sensitive to weight initialization?
- What impact does the non-linearity in DNN architectures have on MOTE’s performances?
- What is the exact process of the RD transformation? Can the same process be adapted to any type of dataset? And what is the rationale behind choosing a VGG model?
- What if the search space is a supernet (e.g., P-DARTS) or Transformer-based (e.g., NASViT)? How generalizable is MOTE to these search spaces?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: - Highly non-linear DNNs are more challenging for MOTE, hence the need for an ablation study on non-linear operations.
- The limited scope of MOTE's evaluation on NAS search spaces—only on NAS-Bench-related search spaces.
- The RD transformation method is highly specific, hindering its applicability to other tasks/datasets.
- No detailed study on the impact of weight initialization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback.
**Answers to Questions:**
**A1.** To determine whether MOTE is affected by random weight initialization, an additional experiment was conducted, as shown in TABLE II of the attached file. The results indicate that the effect of random weight initialization on MOTE's architecture search is minimal. This is mainly because the landscape and speed terms used in MOTE are derived from a training procedure, making MOTE insensitive to weight initialization.
**A2.** To assess the sensitivity of MOTE to the use of skip connections, we selected four architectures from NASBench-201 to observe changes in accuracy and MOTE values with and without skip connections. These four architectures include Res-Conv(3x3) and Res-Conv(1x1), which use conv-3x3 and conv-1x1 with skip connections, respectively, and NoRes-Conv(3x3) and NoRes-Conv(1x1), which do not include skip connections. As shown in TABLE V and Fig. 2 of the attached file, the speed term, landscape term, and MOTE accurately reflected the changes in accuracy across the four architectures. When the accuracy increases, the values of the two terms and MOTE also increase, and vice versa. These experiments demonstrate that MOTE effectively captures the nonlinear architectures of DNNs.
**A3.** The specific process of RD is discussed in the global rebuttal. We encode each image in the original dataset using a VGG network trained on ImageNet (not fine-tuned on CIFAR-100), inspired by the Inception Score method. Although VGG can be replaced, its inherent knowledge as a simple and high-capacity CNN makes it suitable for image encoding.
**A4.** MOTE also applies to the open search space (as shown in Tab. 2 of our paper), where MOTE-NAS shows strong performance. To evaluate whether MOTE is suitable for other search spaces, an ablation study was performed on NASBench-301 (based on DARTS), as shown in TABLE I of the attached file. In this study, MOTE continues to outperform TSE and Synflow.
**Answers to Weaknesses:**
**AW1.** Please refer to A1.
**AW2.** Please refer to A2.
**AW3.** RD is a method used to accelerate MOTE. The selection method used in RD can be replaced by other methods. The ablation studies, shown in TABLE III of the attached file, indicate that MOTE performs well even if RD is replaced by other methods. Determining the best reduction method will be explored in future research.
**AW4.** In Fig. 7 and Tab. 1 of our paper, most comparison methods such as NASWOT, TE-NAS, KNAS, Zen-Score, and ZiCO are training-free (or zero-shot) NAS methods. However, the term "free" only applies to the search stage; these methods still require GPU time in the evaluation stage. The core idea of MOTE focuses on the time spent in the evaluation stage, which is more time-consuming than the search stage.
**AW5.** Please refer to A4.
**AW6.** MOTE-NAS-RS is a version that replaces the evolutionary algorithm with random sampling (please see Table A4 in the supplementary materials).
**AW7.** The limitations and contributions of this paper have been addressed in the global rebuttal and will be added to the camera-ready version if this paper is accepted.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses, which have addressed several of my concerns. I encourage the authors to incorporate these explanations into the revised version of the paper and to share their codebase as well. I updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind feedback and for updating your score. We appreciate your suggestions and will make sure to incorporate the explanations into the revised version of our paper. We also plan to publicly share our codebase on GitHub. | Summary: The paper introduces MOTE-NAS, a novel approach for efficient Neural Architecture Search (NAS). MOTE-NAS suggests a novel proxy utilizing both macro-level loss landscape smoothness and micro-level convergence speed to predict the performance. By utilizing reduced architectures (RA) and datasets (RD), MOTE-NAS achieves state-of-the-art accuracy on benchmarks such as CIFAR-10, CIFAR-100, and ImageNet-16-120 while significantly reducing computational costs.
Strengths: The proxy proposed by the authors has a significant correlation with the final accuracy, showing particularly high correlation in the reduced search space.
The RA and RD methods can drastically reduce the dataset and architecture, providing substantial advantages in terms of speed.
Weaknesses: The design of the skeleton for the reduced architecture seems to require a human expert's guidance. It would be better if it were demonstrated that MOTE-NAS is robust across various RA configurations or if there were a general methodology for constructing RA.
Technical Quality: 4
Clarity: 3
Questions for Authors: Typos
Table 2, UTRE-NAS
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do mention some of the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your high appreciation of our work. We greatly value your feedback.
**Answers to Questions:**
**A1.** If our paper is accepted, all the typos you mentioned will be corrected and eliminated in the camera-ready version.
**Answers to Weaknesses:**
**AW1.** The RA design might be achievable by other algorithms, which remains an open question. However, the proposed RA is simple yet achieves promising results in NASBench-101 and NASBench-201, demonstrating its effectiveness across different datasets.
---
Rebuttal 2:
Comment: Dear Reviewer XpJH,
Please respond to authors and engage in discussion as soon as possible.
AC | Summary: This paper proposes a novel limited-training NAS method that is able to rank the candidate architectures after training them for a limited number of epochs. The MOTE metric consists of two terms, the landscape term and the speed term. The landscape term is indicative of the loss landscape, and it is the cross-entropy loss of the model $\theta(g)$ obtained by the weighted sum of the initial parameter weights and the current parameter weights. These loss values are summed for a certain number of epochs. As the number of epochs increases, the weight assigned to the initial model weights decreases and that of the current model increases to result in $\theta(g)$. The speed term is used to estimate the convergence speed of an architecture. It is computed by summing the cross-entropy loss at every epoch divided by the time taken to train for that epoch over all the epochs.
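The speed term as summarized here (a sum over epochs of each epoch's cross-entropy loss divided by that epoch's training time) could be sketched as follows; the function name and inputs are illustrative, not taken from the paper's code:

```python
def speed_term(epoch_losses, epoch_times):
    """Sum over epochs of (cross-entropy loss at epoch e) / (time to train epoch e).

    Illustrative sketch of the speed term as described in this summary;
    how the resulting score relates to final accuracy is defined by the
    paper's metric, not by this sketch.
    """
    assert len(epoch_losses) == len(epoch_times)
    return sum(loss / t for loss, t in zip(epoch_losses, epoch_times))
```

For example, `speed_term([2.0, 1.0], [1.0, 2.0])` evaluates to `2.5`.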
Rather than training the original architectures for N epochs, they derive a reduced variant for each architecture that is close enough to the original architecture. This reduced architecture is then trained on a reduced dataset to obtain the training dynamics necessary for MOTE. The reduced dataset is created by first obtaining the logits of a pretrained VGG-16 model and then for each label, k-means and farthest point sampling are applied to yield the reduced dataset.
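As a rough illustration of the per-label sampling described above, here is a pure-NumPy sketch of farthest point sampling applied label-by-label to precomputed logits. It omits the VGG-16 encoding and the k-means stage, and all function names and shapes are illustrative rather than taken from the paper:

```python
import numpy as np

def farthest_point_sampling(feats, k, seed=0):
    """Greedy FPS: start from a random point, then repeatedly add the point
    whose distance to the current sample set is largest."""
    rng = np.random.default_rng(seed)
    n = len(feats)
    chosen = [int(rng.integers(n))]
    dists = np.linalg.norm(feats - feats[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(feats - feats[nxt], axis=1))
    return chosen

def reduce_dataset(logits, labels, keep_per_label):
    """Keep `keep_per_label` representative samples per class via FPS on logits."""
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = min(keep_per_label, len(idx))
        keep.extend(idx[farthest_point_sampling(logits[idx], k)])
    return np.sort(np.array(keep))
```

Sampling per label (rather than globally) matches the stated goal of keeping every class represented in the reduced dataset.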
Strengths: This paper proposes a novel limited training NAS method that also takes into account the convergence time while training a neural architecture. They were able to consistently achieve better correlation when compared to all the other training-free NAS baselines on 2 search spaces.
Weaknesses: 1. What is the correlation between the accuracies of the (i) reduced architectures and the original architectures (ii) training original architectures on reduced dataset and training the original architecture on the entire dataset (iii) the reduced architectures trained on the reduced dataset and the original architectures. Table A3 shows the correlation between the reduced architecture trained for 4 epochs but that is not the actual setting. The correlation already drops from 34.5 to 8.1%. MOTE when applied on the original architectures and the original datasets already would introduce some errors in ranking the architectures. The reduced architecture and the reduced dataset setting would further cascade the errors as the reduced setting is not 100% correlated to the original setting. So how can this reduced setting be used to begin with? Please include the correlation results for all the search spaces and all the datasets.
2. The authors should not be comparing their approach against zero-cost methods while MOTE considers training information. Please evaluate the rest of the baselines also on the trained architectures for fair comparison. However, as mentioned above, it is not clear how the other methods would perform in the reduced setting owing to the concerns stated in 1. Also, 7 seconds per architecture while the rest of the baselines take only 1 second on an average would become very expensive when the number of architectures in the search space is large.
3. Please evaluate the efficacy of MOTE on other search spaces as well such as DARTS, ENAS [1],[2]. [3] demonstrated that training-free NAS methods don't generalize to tasks beyond image classification. Please evaluate on TransNASBench-101 [4] too similar to ZiCO.
[1] NAS-Bench-301and the case for surrogate benchmarks for neural architecture search, Siems et al.
[2] On Network Design Spaces for Visual Recognition, Radosavovic et al.
[3] https://iclr-blog-track.github.io/2022/03/25/zero-cost-proxies/
[4] TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search, Duan et al.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Also, how many architectures did you sample for each search space? Did you run all the other baselines on the same architectures, same data augmentations that MOTE was evaluated on or did you use the results reported in their corresponding papers? Also, if you are evaluating the top-k architectures for some baselines, please change the table to be : (i) evaluation free, top-k=5, top-k=10) for all the baselines and report the accuracies accordingly. Right now the table is inconsistent.
2. How does a linear combination represent the loss landscape? Given a point $\theta_{B}$ in the loss landscape and 2 other intermediate points $\theta_{C}$ and $\theta_{D}$, if we draw a line from init to B, it might not actually pass through intermediate points in the loss landscape. So how is it capturing the landscape? The main reason this might be working better than TSE is that it is assigning higher weight to the current weights as the epochs increase and lesser weight to the initial weight. How would it compare against TSE-E and TSE-EMA where the loss values of the initial few epochs are discounted.
3. This paper used architecture reduction and dataset reduction to accelerate the training so that MOTE can be computed faster. Can you clarify if the k-means and the farthest point sampling are applied to datapoints corresponding to each label separately to obtain a subset of them for that label? If not, how exactly is the clustering and the sampling done?
Can you show how well architectures trained on the reduced dataset perform compared to those trained on the original dataset by computing their correlations? Why did you use k-means followed by FPS instead of other subset selection methods such as set-cover, facility location, or coresets [5]?
4. Can you please elaborate further what the meta architecture and the reduced architectures are comprised of? It is not clear to me.
5. MOTE uses the landscape term and the speed term. In Table1, the estimation free version of MOTE seems to discover architectures that train much faster than the others. However, once top-k are considered, the compute cost of training the discovered architecture is not any lesser than those discovered by some of the other baselines. Figure 1 shows that the landscape term performs better than the speed term. Can you also do an ablation and show how the MOTE-landscape and MOTE-speedterm perform in figure 7 and table 1?
[5] https://cords.readthedocs.io/en/latest/strategies/cords.selection_strategies.SL.html#module-cords.selectionstrategies.SL.submodularselectionstrategy
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback. However, we noticed that you might have misunderstood the core idea of MOTE. Specifically, the landscape term does not reduce the proportion of $\theta_{init}$ (initial model weights) or increase the proportion of $\theta$ (trained model weights) as epochs increase. MOTE is composed of a macroscopic landscape term and a microscopic speed term. The formula for the landscape term is:
$\theta(g)=(\frac{g}{G})\theta_{init}+(1-\frac{g}{G})\theta,$
where $g$ ranges from 0 to $G$ and is independent of the number of epochs. Here, $G$ is a hyperparameter (default $G=10$), which indicates the granularity of the linear combination between $\theta_{init}$ and $\theta$. Therefore, even if the epochs for training candidate architectures increase, the proportion between $\theta_{init}$ and $\theta$ remains unchanged; only the position of $\theta$ in the loss landscape changes. The landscape term is designed to observe the loss landscape on a macroscopic scale rather than focusing on the minute variations in training loss (as TSE and the speed term do).
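For concreteness, the 1-D slice defined by this formula can be sketched with a toy model. The binary logistic loss below is a stand-in for the actual DNN's cross-entropy, and the function names are illustrative, not the paper's implementation:

```python
import numpy as np

def cross_entropy(theta, X, y):
    """Mean binary cross-entropy of a toy logistic model
    (a stand-in for the candidate architecture's training loss)."""
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    eps = 1e-9
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def landscape_term(theta_init, theta, X, y, G=10):
    """Accumulate the loss along the slice
    theta(g) = (g/G) * theta_init + (1 - g/G) * theta, for g = 0..G."""
    total = 0.0
    for g in range(G + 1):
        theta_g = (g / G) * theta_init + (1 - g / G) * theta
        total += cross_entropy(theta_g, X, y)
    return total
```

Note that `g` only indexes positions on the line between the initial and current weights; it is independent of the number of training epochs, as stated above.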
**Answers to Questions:**
**A1.** The number of samples ranges from dozens to hundreds. Since we used an evolutionary algorithm for dynamic sampling and conducted each experiment independently ten times to take the average, the specific numbers vary slightly each time. For instance, in NASBench-201, the number of samples for MOTE-NAS-EF ranges from 60 to 100. Some results of our paper in Fig. 1, Fig. 5, Fig. 7 (SynFlow, TSE), and the predictors in Tab. 1 and MobileNetV2/V3 in Tab. 2 are our reproduced experimental results. In contrast, other results directly refer to their corresponding papers. During the MOTE generation process, we did not use any data augmentation techniques (refer to the "GetTrainData" function in the "mote/gen_mote.py" code in the supplementary materials). Not all NAS algorithms follow a "search then evaluate" process; for example, WeakNAS does not follow this process. Therefore, the top-k hyperparameter does not apply to all NAS methods, making it impossible to categorize every NAS method as "evaluation-free" or "top-k".
**A2.** Plotting 2D or 3D loss landscapes with more information is computationally intensive [1], which is not conducive to efficient NAS. Indeed, the linear combination between two points does not fully capture the loss landscape, but it provides a convenient and quick assessment method. This method slices a 1D section through the loss landscape using two sets of model weights, and we can glean some of the landscape features by observing this slice [2]. The increase in epochs does not affect the proportion of $\theta$; please refer to the opening paragraph of this rebuttal. Our implementation of TSE already includes removing the initial 20% loss values (similar to what TSE-E and TSE-EMA do). In short:
1. The landscape term does not increase the proportion of $\theta$ when increasing epochs.
2. Since early loss values (20%) are eliminated in TSE, the proportion of $\theta$ increases with the epoch. Therefore, the increase in the proportion of $\theta$ is not why our proposed landscape term performs better than TSE. MOTE aims to describe the loss landscape from a microscopic perspective (speed term) and a macroscopic perspective (landscape term). The speed term can be considered a version of TSE that is aware of time variations.
**A3.** Detailed explanations of RD are addressed in the global rebuttal. The ablation experiments between the original dataset and the reduced one are shown in Fig. 5 of our paper. As the sampling hyperparameter $r$ decreases (RD equals the original dataset if $r=100$), the performance of both the landscape and speed terms degrades, but less than the "accuracy." The core idea of RD is simple and intuitive (see the global rebuttal). Other methods can be used to build RD; the attached file shows the ablation study in TABLE III. Based on that study, MOTE still works well if RD is built by other methods. The best reduction method will be explored in the future.
**A4.** Cell-based search spaces require the combination of the predefined meta-architecture and the candidate cell to form a candidate architecture. The meta-architecture predefines the number of layers, downsampling method, other hyperparameters, etc. Different search spaces have distinct meta-architectures (e.g., NASBench-101 and NASBench-201) with huge time complexities. The goal of RA is to reduce redundant layers as much as possible, except for the cell layers. Although RA results in a simplified structure different from the actual candidate architecture (original meta-architecture + cell), it can construct an extremely lightweight and compact architecture to serve as a new meta-architecture for NASBench-101 and NASBench-201.
**A5.** MOTE-NAS-EF completely discards the evaluation stage to minimize total time consumption (search time + evaluation time). Since many training-free estimates are unreliable during the search stage, the evaluation stage needs to verify more architectures, leading to higher time consumption (e.g., KNAS). Currently, known training-free estimates are not truly "free" NAS methods.
Other ablation experiments to evaluate the performance of the landscape term and speed term are addressed in the attached file (see Fig. 1 and TABLE IV).
**Answers to Weaknesses:**
Due to space limitations, these answers will be moved to global rebuttal.
[1]. Li, Hao, et al. "Visualizing the loss landscape of neural nets." Advances in neural information processing systems, 31 (2018).
[2]. Goodfellow, Ian J., Oriol Vinyals, and Andrew M. Saxe. "Qualitatively characterizing neural network optimization problems." International Conference on Learning Representations (2015).
---
Rebuttal 2:
Title: Response to rebuttal
Comment: I thank the authors for their response. To begin with, while I understand the idea of the MOTE-NAS paper, I seem to have misunderstood the way the linear combination of $\theta$ and $\theta_{init}$ is computed to arrive at $\theta(g)$. Thank you for clarifying and correcting my understanding regarding $g$ and $G$. In that case, how often do you evaluate $\theta(g)$ during the training?
While I am familiar with the cell-based search space, I now understand what you meant by the term meta-architecture.
A3. I understand that there are various methods to reduce the dataset. I wanted to know how the proposed k-means method compares to other subset selection methods that I pointed out. I see that you compared RD against random sampling in the ablation study.
A5. Algorithms such as TE-NAS, NASWOT, ZiCO, and zero-cost proxies for lightweight NAS rely only on the inference of a few batches during the search phase and evaluate the best architecture found. As I mentioned earlier, the architecture found by MOTE-NAS-EF is worse than TE-NAS in Table 1. ZiCO is also not included in Table 1.
While all the search spaces require the algorithm to search for a cell, some search spaces are more complicated than others. NASBench-101, NASBench-201, etc. are all reduced search spaces when compared to the original DARTS search space. TransNASBench-101 is also a challenging search space. That is why it is important to evaluate how well the proposed method performs on these search spaces.
The authors do point out that the correlation of the landscape term and the speed term don't deteriorate as much as the accuracy term in the reduced setting, albeit on 1k architectures. It is important to see how well they can discern between the top few architectures that are close in performance. However, computing the landscape term and the speed term is still computationally expensive.
I thank the authors for answering most of my questions and I would like to increase my score
---
Rebuttal Comment 2.1:
Comment: We sincerely appreciate the effort you have put into reviewing our paper. We will thoroughly consider your valuable suggestions and carefully reflect on the limitations you mentioned, including computational costs and search space issues. Your insights will significantly contribute to our continued research in the field of NAS. Finally, we would like to express our gratitude once again for your efforts on our paper, and we are also very thankful for your decision to increase the score. | Rebuttal 1:
Rebuttal: Thank you to all four reviewers for your diligent efforts and valuable suggestions. We appreciate your feedback and comments, which will help us improve the quality of our paper. In this response, we will (1) summarize our paper's contributions and main limitations, (2) address the RD issue raised by the reviewers, (3) respond to the remaining concerns from Reviewer NBd3, and (4) include key experiments to clarify the raised questions.
**Contributions:**
1. Our proposed MOTE-NAS efficiently estimates training outcomes by jointly optimizing landscape view and convergence speed objectives. It captures the non-convex nature of DNNs and monitors convergence speed.
2. We introduce two reduction strategies to accelerate MOTE generation, making the process more lightweight.
3. Our MOTE-NAS establishes a new state-of-the-art accuracy-cost plot for NAS, with an evaluation-free version outperforming some NTK-based methods, such as KNAS.
**Limitations:**
1. Our work currently focuses on image classification; the applicability of MOTE-NAS to other tasks has yet to be explored.
2. While MOTE-NAS has been successfully applied in the closed search spaces (NASBench-101, NASBench-201) and the open search space (MobileNetV3), its effectiveness in broader search spaces, particularly with Transformers, requires further investigation.
---
**To address the reviewers' questions about RD:**
**Q1. Why is RD necessary?**
**A1.** MOTE generation relies on training, which is computationally expensive. For example, 12 epochs on NASBench-201 take about 200 GPU seconds. Reducing the number of training samples can significantly cut this time. RD is designed to accelerate the process by decreasing the number of training samples.
**Q2. Why is it based on CIFAR-100?**
**A2.** Randomly sampling images can lead to underfitting if some labels have too few samples. Sampling by labels instead ensures sufficient sample diversity. CIFAR-100, with many labels and fewer images, is ideal for this method. MNIST or CIFAR-10 would lose diversity at lower sampling rates (e.g., 10%).
**Q3. What is the specific process?**
**A3.** The process involves five steps:
Step 1: Encode each CIFAR-100 image using VGG-16 (trained on ImageNet, without fine-tuning on any datasets), taking the softmax logits.
Step 2: Sum and average the encoding results of images with the same label, resulting in 100 encoding categories for 100 labels.
Step 3: Cluster these 100 encoding categories into $r$ groups using the K-Means algorithm.
Step 4: Extract each center $c_i, 1 \leq i \leq r$ from the $r$ groups.
Step 5: Run FPS (Farthest Point Selection) within each group to find a representative point $l_i, 1 \leq i \leq r$ that is farthest from all $c_i$. Then, {$l_i$} is the set of representative labels required by RD.
Steps 3 to 5 aim to avoid selecting similar labels (e.g., bus and streetcar). We first group labels with K-Means and then use FPS to select labels that are farthest from all group centers.
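To make Steps 3 to 5 concrete, here is a rough numpy-only sketch of the label-selection procedure as we read it. The random embeddings stand in for the averaged VGG-16 softmax logits of Step 2, the embedding dimension and the number of groups `r` are illustrative placeholders, and the tiny k-means loop is a stand-in for any standard implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the output of Step 2: one averaged embedding per label.
# (In the paper these are averaged VGG-16 softmax logits for each of the
# 100 CIFAR-100 labels; random vectors are used here for illustration.)
emb = rng.normal(size=(100, 10))

def kmeans(X, r, iters=50, seed=0):
    # Step 3: a minimal k-means placeholder (a library version would do).
    g = np.random.default_rng(seed)
    centers = X[g.choice(len(X), r, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)  # nearest-center assignment
        for k in range(r):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(axis=0)
    return centers, assign

r = 10
centers, assign = kmeans(emb, r)  # Steps 3-4: groups and their centers

# Step 5: within each group, pick the member farthest from all r centers
# (farthest point selection), which avoids choosing near-duplicate labels.
selected = []
for k in range(r):
    members = np.flatnonzero(assign == k)
    if members.size == 0:
        continue
    dist = np.linalg.norm(
        emb[members][:, None] - centers[None], axis=-1).sum(axis=1)
    selected.append(int(members[dist.argmax()]))

print(selected)  # indices of the representative labels chosen by RD
```

Each group contributes at most one label, so the selected indices are distinct and form the representative-label set {l_i} that RD trains on.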
**Q4. Is RD effective? Are there other methods?**
**A4.** RD has shown high compatibility with MOTE, significantly speeding up calculations despite some loss in performance (refer to Fig. 5). An intuitive method (RD-RS) that randomly samples images was also tested and is shown in the attached PDF file. As shown in TABLE III, although RD outperformed RD-RS, the latter still performed well, indicating that alternative methods for generating RD are possible and may inspire further research.
---
**Weaknesses mentioned by Reviewer NBd3:**
**AW1.** Both reduction strategies (RA and RD) accelerate computation but harm performance, affecting both metrics 'accuracy' and 'MOTE'. However, MOTE's performance drops less than 'accuracy' (see Tab. A3). Despite the errors caused by the reduction strategies using RA and RD, MOTE can maintain a high correlation and significant acceleration. The insight of this paper is to prove MOTE's ability to capture the loss landscape from macro and micro perspectives, using the reduction strategy for computation speed gains. Experiments (Fig. 5 and Tab. A3) show that 'Accuracy' adapts poorly to this strategy, while MOTE's performance and speed impact are good trade-offs. Further experiments (Fig. 7, Tab. 1, Tab. 2) illustrate MOTE results with RA+RD applied, showing its effectiveness across different benchmarks, including NASBench-101, NASBench-201, and MobileNetV3 search spaces.
**AW2.** Not all comparison targets are training-free; methods such as TSE, LGA require training. The primary comparison, NTK-based estimates, though training-free, are unstable, as shown in Fig. 1, and a similar conclusion is supported by the LGA paper. The reduction strategies will degrade performance, so current comparison targets without reduction strategies represent their upper-performance bounds. Under this adverse situation, MOTE achieves a higher correlation (Fig. 7), making further reduction strategies and applications on other estimates unnecessary. On the other hand, evaluating the final performance of a searched architecture is very time-consuming (e.g., full training on ImageNet requires days to tens of days of GPU time). While MOTE takes 7 seconds, it is more effective than other estimates in the search stage for finding promising architectures, reducing the burden of the evaluation stage. Consequently, the overall time consumption is faster than other training-free methods (as shown in Tab. 1 and Tab. 2).
**AW3.** Our experiments in Tab. 2 were performed based on an open MobileNetV3 search space, employing a variation of RA—Rescale Reduced Architecture—for searching promising architectures (see section A.6 and Fig. A4). Additional experiments on NASBench-301 (based on DARTS) are included in the attached file (please refer to TABLE I). Most NAS tasks focus on searching the backbone for image classification. Expanding NAS tasks is crucial in NAS research and not easily finished during the rebuttal time. We plan to explore this further.
Pdf: /pdf/92a087f0a5704a7338c69d9a02b36db1443dd274.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback | Accept (poster) | Summary: This paper proposes AMOR, which is a modular framework for answering questions by reasoning over external knowledge bases. AMOR is designed as a Finite State Machine (FSM), which provides a structured way to break down the question into smaller pieces and solve complex tasks through various steps. Each state in AMOR is a module, which can be an LLM call or a tool call. Each LLM module is fine-tuned separately using data relevant to only that step, thus making data collection and evaluation easier.
Strengths: The authors propose an innovative framework that incorporates FSMs, which is a "classical" technique, with state-of-the-art LLM and tool calling that gracefully incorporates human feedback. The text is well-written and clearly explains all parts of the framework. The figures are also helpful in understanding their proposal. I can see AMOR framework being useful in real-world scenarios like question-answering in private datasets. The paper also presents extensive experiments using popular datasets like HotPotQA and PubMedQA.
Weaknesses: The weaknesses I see are the complexity of this framework: many moving parts, including various LLMs that need to be fine-tuned separately; the two-stage fine-tuning approach (warm-up + fine-tuning) makes it even more complex and resource-intensive. Dependency on human feedback is another weakness I can see here. However, these are weaknesses common to other agent frameworks, so I wouldn't consider this a disqualifier for this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: I don't have any questions
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors were transparent about possible limitations and negative impacts in the **Broader impacts and safeguards** session and also included suggestions on how to mitigate these problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Regarding the complexity and human feedback dependency.
We would like to address your concerns as follows:
- **Regarding the “Separately Fine-tuned LLMs”:** We would like to emphasize that this argument might be inaccurate since we use the same MA-MoE model for all modules and activate different experts to execute their corresponding modules. This approach eliminates the need for separately fine-tuning multiple LLMs.
- **Regarding the Two-Stage Fine-Tuning:** We argue that the two-stage fine-tuning process is analogous to the widely adopted "pretrain and fine-tune" paradigm. The warm-up stage, which is performed only once, enables AMOR to generalize across different knowledge environments. Once the warm-up stage is complete, AMOR can be deployed and adapted to various domains through the efficient adaptation stage. We believe that the benefits of improved generalization and domain adaptation justify the additional computational cost.
- **Regarding the Dependency on Human Feedback:** We underscore the importance of maintaining an adaptive mechanism that leverages human feedback to continuously improve performance. Previous agent frameworks often overlook this crucial aspect. AMOR's process feedback mechanism enables efficient and targeted feedback, reducing the overall burden on human supervisors.
In summary, while AMOR introduces complexities, its modular design, two-stage fine-tuning, and process feedback mechanism are deliberate choices that enable it to handle complex tasks effectively and adapt to specific domains efficiently. We believe that the benefits of improved performance and domain adaptability outweigh the concerns raised regarding complexity and human feedback dependency. | Summary: This work proposes AMOR, a modular approach to building knowledge agents using open-source LLMs. AMOR decomposes tasks into reasoning logic, represented as a finite state machine (FSM) composed of sequentially chained modules. These modules include tools (e.g., document retrieval) and LLMs (e.g., answer extraction). AMOR retrieves relevant documents for a query and combines information from different sources to produce an answer.
The development process of AMOR includes two stages:
1. **Warm-up Phase:** Each module is fine-tuned on datasets containing not only input-output pairs but also the relevant intermediate steps for each module. This phase ensures that AMOR can generalize across various tasks and knowledge environments.
2. **Adaptation Phase:** The agent is further fine-tuned on specific domains using process feedback. This feedback, which can be human-generated or derived from evaluation benchmarks, is provided at each reasoning step to refine the agent's performance.
Empirical evaluation demonstrates that AMOR effectively utilizes the rich warm-up and evaluation datasets, outperforming relevant baselines. Ablation studies confirm that each component of AMOR is essential for achieving maximal performance.
Strengths: This work is well-motivated and addresses a need for knowledge agents. The modular FSM-based approach is innovative and sensible, allowing for precise reasoning logic. The paper is well-written, and the empirical study is thorough and well-conducted, demonstrating the method's effectiveness through extensive experiments and ablation studies.
Weaknesses: All experiments do not report any measure of uncertainty. Thus, it is impossible to determine if the conclusions are statistically significant. Including measures of uncertainty would strengthen the validity of the results.
Technical Quality: 4
Clarity: 3
Questions for Authors: How are the modules chosen in MA-MoE? Is the module/step ID provided to the routers?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Weakness: Regarding the measure of uncertainty.
We agree that it is crucial to provide measures of uncertainty to assess the statistical significance and robustness of the results. To address this concern, we show the mean and standard deviation across three independent runs in the table below.
| Method | Base LLM | HotpotQA EM | HotpotQA F1 | PubMedQA Acc | Qasper EM | Qasper F1 |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| AMOR$_{\rm Process}$ | L-7B | $45.8{_{\pm0.25}}$ | $54.8_{\pm0.26}$ | $81.6_{\pm0.33}$ | $19.0_{\pm0.05}$ | $35.4_{\pm0.5}$ |
As shown in the table, the standard deviations across multiple runs are very small, suggesting that our approach yields consistent and robust results. This strengthens the validity of our conclusions and provides confidence in the reported performance.
We will incorporate the uncertainty measures in the revised manuscript to enhance the transparency and credibility of our work.
> Question: Regarding module choosing
Each module corresponds to a distinct expert in the MA-MoE model. When AMOR executes a certain module, its module ID will be provided to the routers of the MA-MoE model to indicate which expert should be activated, thereby enabling our model to be "module-aware." We will endeavor to provide a more comprehensive explanation of this module-expert mapping mechanism in the revised version of our paper. | Summary: This work presents a modular pipeline for QA tasks. The pipeline consists of several modules such as question decomposition, document/passage retrieval, answer extraction, etc. Training data is separately constructed for each module (based on existing datasets) and models are individually fine-tuned for the respective modules. This process is referred to as the 'warm-up' stage. Models are further updated based on feedback obtained during inference using an alignment algorithm (KTO is used in this work). Experiments on QA datasets show that the proposed method performs better than various baselines in the literature. Several ablations are also provided to study the impact of each component.
Strengths: * The proposed modular approach is sensible and performant.
* Extensive baselines were considered.
* Several ablations were performed that show the impact of various components. It is interesting that the KTO based approach performed better than a simple SFT approach. The human feedback experiment was also interesting.
* Strong performance on three QA benchmarks (HotpotQA, PubmedQA, Qasper).
Overall I think this paper presents interesting ideas. Various presentation aspects can be improved (see weaknesses).
Weaknesses: * The scope for this work was not properly introduced. The paper is motivated from very broad goals only for the reader to later find that the approach and experiments focus on QA tasks. In fact, the paper does not present a formal task description or problem formulation before presenting the methods, leaving the reader to piece together things.
* Method description: In addition to the task not being formally presented, several notations were not clearly defined. For example, equations 1, 3 start talking about a policy which was not described earlier. Although I do understand what's happening in these equations, clearly defined problem and terminology would have made things much clearer.
* In general, I felt the ideas in the paper could've been framed/described in a much simpler way. Readability could be improved.
* The introduction provides motivations but fails to convey any details/intuitions of the proposed method or how it overcomes the challenges mentioned.
* The 'processed based feedback' was not clear to me. I think the paper tries to squeeze in many ideas but this also distracts the reader from the core ideas.
* The experimental setup could have been made clearer, including the various settings such as with/without finetuning and process/outcome feedback.
* Figures and tables are just too small and impossible to parse.
Technical Quality: 3
Clarity: 2
Questions for Authors: * What is the without FT setting in Table 6?
* Are the comparisons against baselines fair? Does the proposed approach use more data compared to the baselines?
* What is the major delta of this paper compared to prior modular reasoning works (e.g., Rewoo, Lumos)?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I did not find a discussion of limitations in the papers, I suggest the authors to add some discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Weakness 1: Regarding the scope and problem formulation
- **Scope.** AMOR aims to develop a general framework for building adaptable modular LLM agents that can leverage external knowledge sources to tackle complex reasoning tasks. However, we appreciate the reviewer's advice that being more explicit in framing the specific tasks and domains upfront would strengthen our readability. We will clarify the scope early in the next revision.
- **Problem Formulation.** We also appreciate the reviewer's advice to provide a clear task formulation upfront. In the revision, we will add a problem statement and formulation section, which will precisely define the QA-style reasoning tasks we use AMOR to solve.
> Weakness 2: Regarding the method description
In Equations 1 and 3, the policy $\pi$ refers to the strategy of the MA-MoE model that maps from the state $s$ to an action $y$.
We will explicitly define it to improve clarity in the revision.
> Weakness 3: Regarding simpler presentation
AMOR uses several novel techniques, including FSM-based reasoning logic, process feedback mechanism, and two-stage fine-tuning strategy. The paper is organized to provide clear definitions, examples, and explanations without oversimplifying the technical details. We will simplify the content in the revision and welcome specific advice from the reviewer on how we could improve the clarity.
> Weakness 4: Regarding the details of AMOR overcoming challenges in the introduction
We believe we have made every effort to convey the key insights and intuitions behind AMOR and how it addresses the stated challenges.
As summarized in Tab. 1, current agents face challenges in three main aspects: **(1):** uncontrollable reasoning logic; **(2):** lack of adaptation mechanisms for new environments; and **(3):** difficulty for humans to intervene in the reasoning process.
In the fourth paragraph (Lines 34-41), we outline the core idea behind AMOR. The FSM-based reasoning logic enables AMOR to solve problems via executions and transitions over a set of modules, allowing for process-based feedback. This design directly addresses **challenges (1)** and **(3)**.
In the fifth paragraph (Lines 42-50), we provide technical details on AMOR's adaptive mechanism, which tackles **challenge (2)**.
We welcome any further feedback from the reviewer on how we could improve the presentation of our work in the revision.
> Weakness 5: Regarding process-based feedback
Alg. 1 shows that AMOR solves problems through executions and transitions over a set of modules. In each reasoning step, AMOR executes the module $m$ while in a specific state $s$, obtains the output $y$, and transits to the next state. The overall reasoning process $R$ is formed by a series of such steps. Each step can receive human feedback, termed process-based feedback, as opposed to outcome feedback typically provided for the final step. We will elucidate this concept more explicitly in the revision.
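As an illustration only (the module names and toy outputs below are hypothetical stand-ins, not AMOR's actual modules or algorithm), the step-wise structure that makes process-based feedback possible can be sketched as:

```python
from dataclasses import dataclass

# Toy stand-ins for FSM modules (real ones are LLM experts / tools).
def decompose(question):
    return ["sub-q1", "sub-q2"]

def retrieve(sub_q):
    return f"document for {sub_q}"

def extract(document):
    return f"answer from {document}"

@dataclass
class Step:
    state: str
    output: object
    feedback: object = None  # a human label can attach to ANY step

def run_fsm(question):
    trace = []
    sub_qs = decompose(question)
    trace.append(Step("Decompose", sub_qs))
    answers = []
    for sq in sub_qs:
        doc = retrieve(sq)
        trace.append(Step("Retrieve", doc))
        ans = extract(doc)
        trace.append(Step("Extract", ans))
        answers.append(ans)
    trace.append(Step("Answer", " + ".join(answers)))
    return trace

trace = run_fsm("a complex question")
# Process feedback: unlike outcome feedback on the final step only, an
# intermediate step can be marked directly, e.g. flagging a retrieval.
trace[1].feedback = "retrieved document is irrelevant"
print([s.state for s in trace])
```

Because the trace records every (state, output) pair, feedback lands on the exact module that produced a faulty step, which is what distinguishes process-based from outcome feedback.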
> Weakness 6: Regarding the experimental setup
We provide additional details about the experimental setup below:
- **Without Fine-tuning:** We apply different methods directly to off-the-shelf LLMs without fine-tuning. We provide in-context examples to instruct LLMs to solve given problems. Tab. 12-15 in Appendix A.1 show the prompts for four LLM modules in AMOR under this setting on HotpotQA.
- **With Fine-tuning:** LLMs are fine-tuned on specific datasets. For AMOR, we employ a two-stage fine-tuning strategy:
- **Warm-up Fine-tuning:** The stage fine-tunes an LLM on trajectories from public datasets.
- **Adaptation Fine-tuning:** The stage allows AMOR to adapt to specific environments by leveraging different forms of feedback:
- **Process Feedback:** AMOR is optimized using feedback provided for both intermediate and final steps of its reasoning process.
- **Outcome Feedback:** AMOR is optimized using feedback provided only for the final step of its reasoning process.
We will include these clarifications in our revision.
> Weakness 7: Regarding the table and figure size
We will explore ways to enhance their clarity in the revision.
> Question 1: Regarding the fairness between AMOR and baselines
We have carefully designed our experimental setup to ensure fair comparisons in terms of training data sizes. It involves two groups of comparisons:
- **Comparisons between AMOR$_{\rm WFT}$ and baselines that are NOT fine-tuned on the target datasets:** The table below lists the training data sources and sizes for AMOR$_{\rm WFT}$ and baselines.
| Methods | Data Source | Size |
| --- | --- | --- |
| FireAct | GPT-4 Trajectories | 2.5k |
| AgentLM | GPT-4/3.5 Trajectories | 1.8k |
| Self-RAG | Public Resources | 145k |
| LUMOS | Public Resources | 57k |
| AMOR$_{\rm WFT}$ | Public Resources | 50k |
- While FireAct and AgentLM use fewer data than AMOR$_{\rm WFT}$, they rely on the GPT-4/3.5 API for data annotation, which hinders them from scaling up training examples. We believe that AMOR's ability to achieve sota results without any proprietary LLMs is a substantial milestone.
- Compared to Self-RAG and LUMOS, which also use public resources, AMOR$_{\rm WFT}$ uses fewer or a comparable number of training examples.
In summary, this group of comparisons is fair, as AMOR$_{\rm WFT}$ uses fewer or a comparable number of data from public resources without relying on proprietary LLMs.
- **Comparisons between AMOR$_{\rm Process}$ and baselines that are fine-tuned on the target datasets:** AMOR$_{\rm Process}$, AMOR$_{\rm Outcome}$ and OneR use the same training data from the target datasets, leading to a fair comparison. Table 3 shows the data statistics.
We will provide detailed explanations in the revision.
> Question 3: Regarding the difference between AMOR and prior modular agents
Please kindly refer to "Author Rebuttal by Authors" provided at the very beginning. | Summary: This paper proposed an architecture for advanced reasoning in LLMs. The architecture contains several modules dedicated to different tasks in the reasoning flow, each of which can be trained separately using related datasets constructed from public datasets. The proposed method disentangles the reasoning process into sub-steps that are easier to train and suffer less from sparse feedback which is a critical problem for reasoning in traditional LLMs. In the experiments part, the authors conducted sufficient empirical studies, showing the effectiveness of the architecture proposed.
Strengths: 1. There's a clear analysis of the bottlenecks that limit the reasoning ability of LLMs. The architecture proposed is designed to solve these problems accordingly. The motivation is clear and the method is reasonable.
2. The structure of the proposed method is well-explained with text and pictures, making it easy for reviewers and readers to understand.
3. In experiments, the choice of baselines has good coverage. The dataset selection respects each subsection's theme, and the experiment setup is reasonable. The experiments are well explained, making it easy to reproduce them.
Weaknesses: 1. Currently, many works focus on enhancing the reasoning capability of LLMs. Although the authors mentioned previous works about LLM reasoning and RAG in the related works part, an in-depth explanation of why this method surpasses its predecessors is still needed. A good way to do this is to explain clearly, probably with a few pictures or equations, how those methods dealt with the difficulties and their advantages/drawbacks, and then show that AMOR is better as it avoids some of the existing drawbacks in previous methods.
2. The state transition diagram is well displayed. However, when readers look into the implementation of each module, some key information such as the architecture, hyperparameters, and techniques used for better performance is missing, making it more difficult to understand and reproduce the whole model. A table of these details or a later public code repository would be appreciated.
3. In section 3.3, the detail of how the policy is trained is not well-explained. Did the author use parameterized policies such as policy networks and use a policy-based method in RL for optimization? If so, is the loss presented in equation (3) an auxiliary loss to the loss in policy gradient methods or the pure source of gradients?
4. Following the above question. Will the gradient induced by feedback go through the whole model or only certain blocks?
Technical Quality: 3
Clarity: 2
Questions for Authors: See "Weaknesses" part. My concerns listed there are also my questions to the authors.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors discussed the limitations of their method adequately in the "Limitations" part in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Weakness 1: Regarding the difference between AMOR and prior reasoning methods
Please kindly refer to "Author Rebuttal by Authors" provided at the very beginning.
> Weakness 2: Regarding the technical details.
We acknowledge the importance of transparency and reproducibility in scientific research, and therefore we have included the implementation details, including hyper-parameters, in Lines 204-210 of our manuscript. Additionally, we have submitted the source code for MA-MoE implementation as part of the supplementary material. We intend to release a publicly accessible code repository after the double-blind review period. This repository will facilitate follow-up research to replicate and extend our work.
To further address the reviewer's comments, we plan to include an additional supplementary section in the final version of our paper, which will provide a comprehensive explanation of the algorithm underlying AMOR, elucidating the reasoning process in detail.
We believe that these actions will adequately address the reviewer's concerns, thereby enhancing the clarity and reproducibility of our model.
> Weakness 3: Regarding the training details
Equation 3 is the weighted sum of the standard RLHF optimization objective over all modules. Each module corresponds to a parameterized policy network $\pi_{\theta_m}$. The optimization objective is to maximize the reward with a KL divergence penalty. In this paper, we adopt the KTO algorithm [1] for optimization. Equation 3 illustrates the gradients for KTO.
We briefly sketch the derivation from Equation 3 to a differentiable loss as follows: As introduced by prior works [1, 2], it is straightforward to show that the closed-form expression for the optimal reward function $r^*_m$ for each module $m$ takes the form: $r^*_m=\beta\log({\pi_{\theta_m}}/{\pi_{\theta_m}^{\text{old}}})+\beta\log Z_m$, where $Z_m$ is the partition function. Applying this expression to the Kahneman-Tversky model [1], we can get the differentiable KTO loss. Please kindly refer to the KTO paper for more details. We will provide detailed descriptions to improve the clarity of our work in the revision.
[1] Ethayarajh et al. KTO: Model Alignment as Prospect Theoretic Optimization. ICML 2024.
[2] Rafailov et al. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. NeurIPS 2023.
> Weakness 4: Regarding the gradients from the feedback
In the MA-MoE architecture, different modules correspond to different FFN layers and share the same embedding layers and multi-head self-attention layers. Therefore, regarding a training example from $\mathcal{R}_m$ for module $m$, the gradient induced by the feedback will go through the whole MA-MoE model, except those FFN layers corresponding to other modules.
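A minimal sketch of this module-aware routing, with random matrices standing in for the shared layers and the per-module FFN experts (the module names, hidden size, and single-layer structure are illustrative assumptions, not the actual MA-MoE configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative hidden size

# Layers shared by every module (stand-in for embeddings + self-attention).
W_shared = rng.normal(size=(d, d))
# One FFN "expert" per module; the module ID selects which one is used.
W_ffn = {m: rng.normal(size=(d, d))
         for m in ["Decompose", "Retrieve", "Extract", "Answer"]}

def forward(x, module_id):
    h = np.tanh(x @ W_shared)    # shared path: all modules' data trains it
    return h @ W_ffn[module_id]  # expert path: only this module's data

x = rng.normal(size=(1, d))
y = forward(x, "Extract")
# Feedback on this step would update W_shared and W_ffn["Extract"] only,
# leaving the other modules' expert FFNs untouched.
print(y.shape)
```

This mirrors the point above: a gradient from a training example for module $m$ flows through the shared parameters and module $m$'s expert, but never through the other modules' FFN layers.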
---
Rebuttal Comment 1.1:
Comment: The authors adequately answered my questions and now I have a better understanding of their work. I've raised my grades accordingly. | Rebuttal 1:
Rebuttal: > Regarding the difference between AMOR and prior reasoning methods
Please kindly refer to the attached pdf file for an illustration of the reasoning processes of AMOR and prior reasoning methods. The table below further elaborates the advantages and drawbacks of prior agents in terms of the following three aspects.
| **Prior Reasoning Methods** | **Reasoning Logic** | **Adaptive Mechanism for New Environments** | **Human Intervention in the Reasoning Process** |
| --- | --- | --- | --- |
| **Retrieval-Augmented Generation** (e.g., Self-RAG) | Sequential Pipeline. **Drawback:** It is difficult to handle complex tasks. | $\times$ (*Undefined*) | $\times$ (*Undefined*) |
| **Agents with Modular Reasoning** (e.g., LUMOS) | *ditto* | Prompting or Imitation Learning from Humans/LLMs. **Drawbacks:** The former often leads to suboptimal results, while the latter suffers from the scarcity of high-quality data. | $\times$ (*Undefined*) |
| **Agents with Free-form Reasoning** (e.g., AgentLM, FireAct) | $\times$ (*Undefined*) | *ditto* | Outcome Feedback. **Drawbacks:** **(1)** Outcome feedback alone is often too sparse and insufficient to improve the intermediate reasoning steps effectively [1]; **(2)** The reasoning steps taken by LLMs can frequently contradict or deviate from the desired outcome [2]. |
In contrast, AMOR addresses the issues of prior reasoning methods in terms of the above three aspects:
**(1) AMOR is equipped with a controllable FSM-based reasoning logic with a stronger capacity for handling complex tasks than simple pipelines employed by Self-RAG, ReWOO, LUMOS.** For instance, if no relevant passages are retrieved from a document, AMOR can dynamically transit to the next document, while LUMOS would be constrained to generate answers based on the irrelevant passages, potentially leading to incorrect or low-quality outputs.
**(2) AMOR adapts to new environments through exploration and exploitation.** AMOR is designed with an adaptation fine-tuning stage, enabling it to adapt effectively to specific domains based on human feedback. This adaptive mechanism sets AMOR apart from prior modular agents that lack the ability to incorporate expert guidance.
**(3) AMOR enables humans to conveniently and effectively intervene and provide feedback at each reasoning step.** AMOR introduces a process feedback mechanism that enables humans to provide direct feedback on the individual modules within the FSM-based reasoning process. This approach facilitates a more natural and interpretable form of supervision, allowing for targeted improvements and fine-tuning of specific reasoning components.
In summary, AMOR achieves more controllable, adaptable, and human-guided reasoning capabilities compared to existing methods. We hope this clarifies how AMOR advances the state-of-the-art in building adaptable and modular knowledge agents. We will include the clarification to better illustrate the proposed method in the final version.
[1] Lightman et al. Let’s verify step by step. 2023.
[2] Liu et al. Score: A framework for self-contradictory reasoning evaluation. 2023.
Pdf: /pdf/ac9e2ba2360343903197daf6fd343d4c0a052ad7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Autoregressive Image Diffusion: Generation of Image Sequence and Application in MRI | Accept (poster) | Summary: This paper presents a diffusion model for MRI acceleration. In which, an autoregressive image diffusion (AID) model is proposed to sequentially generate MRI image conditions on a given prior image sequences. This method is evaluated on the accelerated MRI reconstruction task using the public available dataset, fastMRI dataset.
Strengths: The figures are very helpful for the understanding of the paper.
The paper is well-organized and easy to follow.
Weaknesses: 1. Lack of quantitive comparison with other methods for MRI reconstruction.
2. The paper should highlight the technical contribution, as most of the context of the methods is from the existing works.
3. The paper claims contributions on 3D and dynamic MRI, but more such results are expected, not just 2D images.
4. The paper deals with 3D reconstruction, but the results only show 2D slices. The consistency across other views, such as the coronal and sagittal views, is unclear.
Technical Quality: 2
Clarity: 3
Questions for Authors: The method is conditioned on previous images, how about the computational cost, as the method will use all previous slices?
The efficiency of the proposed method is presumably poor, since this is a known drawback of autoregressive models; is this correct?
As the cost is high, can the AIM model be constructed in the latent space?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. The method lacks novelty, as diffusion models are commonly used in MRI acceleration tasks.
2. The method seems better suited to natural images than to medical images like MRI. For medical studies, we care more about anatomical plausibility, while image metrics like MSE may not be a good measurement for this purpose.
3. 3D medical images are not videos; we care more about structural consistency from any of the three views.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Author Response
We thank the reviewer for the insightful comments. We address the reviewer's questions and concerns below:
### Weaknesses:
1. **Comparison with other methods**: We have compared our method with the CSGM method from Reference [1] in the general rebuttal, using an external model trained in Reference [1]. We further provided new statistics in Table 1 of the supporting PDF file, which shows that our method outperforms CSGM in terms of PSNR and NRMSE.
2. **Technical contribution**: The main technical contribution of our work is the autoregressive-diffusion model for the generation of high-dimensional data, such as volumetric MRI images of size 46x128x128, using affordable computation resources such as 4xA100 (see general rebuttal).
3. **3D and dynamic MRI**: We included results for dynamic (cardiac) MRI images in the appendix of the initial submission. We further included 3D volumetric MRI images in Figure 4 of the supporting PDF file.
4. **Sample consistency**: We have included other views like Coronal and Sagittal views in Figure 3 of the supporting PDF file (see general rebuttal). In the application to MRI reconstruction tasks, the consistency of image slices in different views is guaranteed by k-space.
### Answers to Questions:
1. *The method is conditioned on previous images, how about the computational cost, as the method will use all previous slices?*
The computational cost of the proposed model (AID) is not high, as it is a diffusion model with an efficient temporal-spatial conditioning (TSC) block. Compared to standard diffusion models like Guided Diffusion, the AID model trained on the fastMRI dataset achieves ~10 network evaluations per second, while the Guide model achieves ~13.
2. *The efficiency of the proposed method should be horrible as it is a drawback of the autoregressive model, is this correct?*
If the length of the autoregressive mechanism within TSC is long, the computational cost of the model increases significantly. However, we found that the length used in AID is affordable for MRI reconstruction tasks, as the speed difference between AID and Guide is not significant.
3. *As the cost is high, can the AID model be constructed in the latent space?*
Yes. We constructed two AID models in the latent space by using a VQ-VAE or Autoencoder-KL encoder to encode the image sequence into a sequence of latent codes. All of our models can be found in the supporting PDF file from the general rebuttal.
### Limitations:
1. *The method lacks novelty as diffusion models are commonly used in MRI acceleration tasks.*
Diffusion models are indeed commonly used in MRI acceleration tasks. However, the novelty of our work lies in applying an autoregressive-diffusion model to MRI reconstruction, which incorporates temporal information for better reconstruction quality and has not been explored before. Our results show that the proposed AID model outperforms the single-image diffusion model in MRI reconstruction tasks.
2. *The method is more like for normal images but not for medical images like MRI; For medical studies, we care more about the anatomy plausibility, while the image metrics, like MSE, may not be a good measurement for this purpose;*
NRMSE and PSNR are commonly used metrics for evaluating the quality of MRI images when references are available. Further medical studies, like anatomy plausibility, would be worthwhile future work performed by radiologists.
3. *3D medical images are not videos; we care more about the structure consistency from any of the three views.*
We have included the results for different views of the generated image sequence in Figure 3 of the supporting PDF file. The consistency of image slices in different views is ensured by k-space in MRI reconstruction tasks. In practical MRI applications, 3D volumetric MRI images are acquired sequentially in k-space. It is important to note that AID is not designed for video generation tasks, which is a different research direction and requires a much higher level of temporal consistency in the generated image sequence.
## Reference
[1] Jalal, A., Arvinte, M., Daras, et al, 2021. Robust compressed sensing mri with deep generative priors. Advances in Neural Information Processing Systems, 34, pp.14938-14954.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their efforts in the reply. My concerns remain. 1. The baseline method is very old and there are many better ones, like VarNet. 2. We can clearly see cross-slice artifacts from other views; this further demonstrates that this is not a suitable model for 3D generation tasks.
---
Rebuttal 2:
Comment: Thank you for your feedback.
Regarding your points, it's worth noting that VarNet (2018 by Hammerick et al.) predates CSGM (2021 by Jalal et al.). Furthermore, Jalal et al.'s NIPS paper includes a direct comparison between CSGM and VarNet, demonstrating that CSGM outperforms VarNet. Therefore, additional comparisons may not provide new insights. Additionally, I’d like to clarify that reconstruction differs from pure generation. The consistency of the reconstruction task is addressed in Appendix E of the manuscript, where the reconstruction results are thoroughly presented. The reason we chose CSGM as the baseline for comparison is detailed in Section 1 of the general rebuttal.
Furthermore, VarNet was cited in our manuscript's introduction. It's important to emphasize that VarNet is a supervised method for MRI reconstruction. In contrast, the diffusion-based methods we explore offer more flexibility and reuse of a single model: eight VarNet models would need to be trained for eight different MRI reconstruction setups, whereas only a single diffusion model is required to handle all setups, demonstrating greater efficiency and adaptability.
---
Rebuttal 3:
Comment: Dear Reviewer,
Thank you sincerely for the time and effort you have dedicated to reviewing our paper as a volunteer.
As today marks the deadline for author-reviewer interactions, we hope that our detailed explanations have addressed your concerns effectively. If you require any further clarification on any points, please don’t hesitate to reach out to us.
We greatly appreciate your valuable contributions to this process.
Wishing you a nice day! | Summary: The paper proposes an autoregressive diffusion model, where each image in an MRI sequence is generated by a diffusion model, but the noise predictions of the diffusion model are autoregressively conditioned on previous MRI images in the sequence. Essentially, previously introduced single-image diffusion based MRI reconstruction [16] is upgraded to use sequence information with an autoregressive formulation. Improvements over the single-image model are demonstrated both qualitatively and quantitatively in MRI images.
Strengths: * The autoregressive formulation makes use of image sequence information and clearly outperforms single-image diffusion.
* The problem formulation incorporating sequential information into single-image diffusion model predictions with the use of VQVAE encodings appears to be novel and powerful.
* Various domain-specific experiments prove the effectiveness of the model in the MRI setting.
Weaknesses: Evaluation could be much more detailed:
* While results are compared to the single image diffusion based approach "Guide", this model is internal. Direct comparisons to previous MRI reconstruction approaches would be much more convincing. Is there a particular reason why this could not be done?
* The VQVAE and transformer-based autoregressive model appears fairly complex. What would be the result of applying **only** this model to predict the image sequence? i.e. having the autoregressive model directly predict $x_n$ given $[x_0, \dots, x_{n-1}]$, with no diffusion model? It is clear that the sequence information makes the autoregressive-diffusion combination superior to single-image diffusion. But we cannot infer the effect of diffusion here: How important is the existence of the diffusion part of the model? How superior is the autoregressive-diffusion combination compared to pure autoregressive?
* Pure generative performance on standard natural image datasets is not evaluated. While authors do state this limitation in their paper, it is still a shame to be missing these results.
Also, a few typos to be fixed:
1. Line 57: "As in the clinical practice of MRI, we often involves acquiring..." This start is grammatically incorrect. Maybe: "As the clinical practice of MRI often involves acquiring..."?
2. Figure 1: "... noisy image that sampled from ..." -> "... noisy image that **is** sampled from ..."
3. Line 186: "... OpenAI's guide diffusion codebase ..." -> "... OpenAI's guide**d** diffusion codebase ..."
Technical Quality: 3
Clarity: 3
Questions for Authors: I've addressed my questions related to evaluation in the "Weaknesses" section. If the evaluation concerns were addressed, I'd certainly have an even more positive opinion of the paper. I have one more:
* One of the contributions is listed as the technique for efficiently optimizing the autoregressive loss in parallel. To me, this seems rather straightforward and directly based on autoregressive/diffusion model training processes. Would you be willing to share the particular challenges you have encountered in this process? I might be missing something here and I'd be glad to learn.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address the limitations of their work in the paper sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and insightful questions. We address the reviewer's questions and concerns below:
1. **Comparison to other MRI reconstruction approaches**: We have compared our method to CSGM method from Reference [1] in the general rebuttal, using an external model trained in Reference [1]. We further provided new statistics in Table 1 of the supporting PDF file, which shows that our method outperforms CSGM in terms of PSNR and NRMSE.
2. **Design choice for the image sequence**: Using a transformer-based autoregressive model on latent codes from an encoder (e.g., VQ-VAE) is indeed a good choice for modeling temporal-like 3D data, as shown by the remarkable performance of OpenAI's Sora. A transformer-based model alone can surely predict the long latent code for an image sequence without using a diffusion model, but when the latent code is long, the computational cost increases significantly. Moreover, that type of model may not be applicable to MRI reconstruction tasks, where the image sequence is acquired sequentially in k-space. The application to MRI reconstruction tasks is the main motivation of our work. Compared to the single-image diffusion model, the autoregressive-diffusion model can better capture the inter-dependencies in the image sequence, which is beneficial for MRI reconstruction tasks, as demonstrated in our experiments.
3. *But we cannot infer the effect of diffusion here: How important is the existence of the diffusion part of the model?*
Using a purely autoregressive model to predict an image sequence can be highly computationally expensive. This is because a pure autoregressive model for 3D-like data involves processing long sequences, which significantly increases the computation cost. When the diffusion model is used as the primary component with a temporal-spatial conditioning (TSC) block, the length of the autoregressive mechanism within TSC would be much shorter, thereby decreasing the overall computational cost. Figure 1 in the manuscript shows how AID handles the image sequence.
4. *How superior is the autoregressive-diffusion combination compared to pure autoregressive?*
We haven't evaluated this in our work. The comparison between the combined autoregressive-diffusion model and a pure autoregressive model is hard to make, as they are designed for different tasks and may be specialized in different aspects. Autoregressive models like transformers are designed for token sequence generation and achieve remarkable results when sufficient data and computational resources are available. However, this paper focuses on an efficient way to handle the image sequence in MRI reconstruction tasks, where the image sequence is acquired sequentially in k-space.
5. **Evaluation of the autoregressive model**: We initially trained the model on natural image datasets such as aerial image sequences and obtained similar results to those on the medical image dataset. We have included these results in Figure 3 of the supporting PDF file. However, curating a large natural image sequence dataset to systematically evaluate the model's performance on a wide range of tasks would be time-consuming, which is why we chose to focus on medical image datasets and the model's application in the manuscript. We acknowledge that worthwhile future work would be to evaluate the model on a broader range of tasks and datasets using standard generative model metrics (FID and Inception Score).
6. **Typos**: We will correct the typos in the final version of the paper.
7. **Efficient optimization of the autoregressive loss**: During training, efficient computation of the autoregressive loss within a sequence requires evaluating $\epsilon_\theta(x_2|x_1), \epsilon_\theta(x_3|x_2, x_1), \cdots, \epsilon_\theta(x_n|x_{n-1}, \cdots, x_1)$ in parallel. During inference, generating the next image requires evaluating only $\epsilon_\theta(x_n|x_{n-1}, \cdots, x_1)$ for the last image.
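The parallel-versus-sequential point above can be illustrated with a toy sketch (our own construction, not the authors' code): `eps_theta` is a stand-in for the noise-prediction network, and the running-mean "context" is a placeholder for the causal TSC conditioning. With teacher forcing, every conditional sees only ground-truth previous frames, so all of them can be evaluated in one batched pass during training, while inference needs only the last conditional per step.

```python
import torch

def eps_theta(noisy_frames, context):
    # Stand-in for the noise predictor: any map
    # (noisy frame, causal context) -> predicted noise works for this demo.
    return noisy_frames - context

n, c, h, w = 6, 1, 8, 8
seq = torch.randn(n, c, h, w)             # clean sequence x_1 .. x_n
noisy = seq + 0.5 * torch.randn_like(seq)  # noisy versions at some diffusion step

# Causal context for frame i: running mean of the ground-truth frames up to i-1
# (a hypothetical conditioning; the paper uses a learned TSC block instead).
prefix_mean = torch.cumsum(seq, dim=0) / torch.arange(1, n + 1).view(n, 1, 1, 1)

# Parallel (training): one batched pass covers all n-1 conditionals.
pred_parallel = eps_theta(noisy[1:], prefix_mean[:-1])

# Sequential (inference-style): one conditional at a time.
pred_sequential = torch.cat(
    [eps_theta(noisy[i : i + 1], prefix_mean[i - 1 : i]) for i in range(1, n)]
)
```

Both paths compute the same quantities; the batched version simply amortizes them, which is the efficiency gain referred to above.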
## Reference
[1] Jalal, A., Arvinte, M., Daras, et al, 2021. Robust compressed sensing mri with deep generative priors. Advances in Neural Information Processing Systems, 34, pp.14938-14954.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you sincerely for the time and effort you have dedicated to reviewing our paper as a volunteer.
As today marks the deadline for author-reviewer interactions, we hope that our detailed explanations have addressed your concerns effectively. If you require any further clarification on any points, please don’t hesitate to reach out to us.
We greatly appreciate your valuable contributions to this process.
Wishing you a nice day!
---
Rebuttal 2:
Title: Thank you for the rebuttal!
Comment: I thank the authors for their detailed rebuttal effort. Based on the positive comparison to CSGM and some results on new datasets shown in the PDF attached to the rebuttal, I am willing to bump my rating up from 4 to 5. I find the work the authors put into their rebuttal quite respectable.
I still think the total contribution is relatively low for NeurIPS: the paper is an application of a sequence-level autoregressive approach and a diffusion model, in my eyes a combination of existing approaches applied to the MRI domain. I think it is a perfectly acceptable paper for a more domain-specific conference.
As I've said before, I would've also liked to see how direct prediction from sequence-level (not voxel-level) autoregression with a simple neural network would have fared compared to the diffusion approach, but that is perhaps for another work, and would not significantly affect the contribution. Thank you for your work.
---
Rebuttal 3:
Comment: Thanks for your feedback and appreciation for your reconsideration of the rating.
Although the MRI-focused community at NeurIPS may be small, we hope that our work can help bridge the gap between machine learning and healthcare, showing the relevance of this interdisciplinary research at NeurIPS. | Summary: This paper introduces an autoregressive image diffusion (AID) model for generating image sequences and accelerating MRI reconstruction. The model combines autoregressive and diffusion approaches to leverage inter-image dependencies, aiming to improve reconstruction from undersampled k-space data in MRI. It was trained on the fastMRI dataset using 4 NVIDIA A100 GPUs for 440,000 iterations with the Adam optimizer. The model architecture incorporates several key components, including DiTBlock, DDIM, and VQVAE.
Experiments demonstrate that AID outperforms standard diffusion models in terms of PSNR and NRMSE metrics, particularly for twelve-times undersampled data. The model shows a reduction in hallucinations in reconstructed images compared to standard models. The paper provides a detailed explanation of the model's theoretical foundations, describing the autoregressive factorization of the joint distribution of image sequences and how the diffusion process is applied to each conditional probability in the factorization.
The authors derive the training loss for the AID model from a common diffusion loss and present an algorithm for sampling the posterior for accelerated MRI reconstruction using AID. The model is evaluated on its ability to generate images with varying amounts of initial information, including both retrospective and prospective sampling approaches. The paper demonstrates the model's effectiveness in unfolding aliased single-coil images and shows improved reconstruction quality across various sampling masks and undersampling factors. The authors discuss the potential applications of the model in other medical imaging tasks and acknowledge limitations, proposing future work to address them.
Strengths: The paper presents a novel combination of autoregressive and diffusion models for image sequence generation, which is well-motivated, particularly for medical imaging applications like MRI reconstruction. The authors provide a comprehensive set of experiments demonstrating improved performance over standard diffusion models, effectively leveraging inter-image dependencies to enhance reconstruction quality. There is a clear demonstration of reduced hallucinations in reconstructed images using the AID model.
The paper offers a detailed theoretical foundation for the proposed model, deriving the training loss and sampling algorithm in a clear and reproducible manner. The experimental methodology is robust, using appropriate datasets and metrics for evaluation across various sampling patterns and undersampling factors. The authors include both qualitative and quantitative assessments of the model's performance, providing visual examples that effectively illustrate the improvements in image quality.
The model's ability to generate coherent image sequences is demonstrated through retrospective and prospective sampling. The paper discusses the potential broader impact of the model on medical imaging applications and acknowledges limitations, proposing future work and showing scientific integrity. The model architecture is clearly explained and illustrated with helpful diagrams, demonstrating flexibility in handling different types of undersampled k-space data. The authors provide a thorough comparison with a standard diffusion model baseline and discuss the model's potential for incorporating pre-existing information from other imaging modalities. The paper explores the model's performance in both image space and latent space and provides insights into the model's uncertainty estimation capabilities.
Weaknesses: The evaluation is primarily limited to medical imaging datasets, lacking comparison on standard image datasets. While the theoretical justification for the model is provided, it could be expanded further. The paper does not thoroughly discuss potential negative societal impacts or ethical considerations of the technology, and computational requirements and efficiency compared to standard methods are not extensively discussed.
The paper lacks comparison with other state-of-the-art approaches beyond standard diffusion models and does not provide standard generative model metrics like FID or Inception Score. The model's sensitivity to hyperparameter choices, particularly sequence length, is not thoroughly explored, and there is limited discussion on the model's scalability to larger or more diverse datasets.
The paper does not explore the model's performance on other medical imaging modalities beyond MRI and lacks a detailed analysis of the model's failure cases or limitations. There is no discussion on the interpretability of the model's decisions or outputs and no exploration of the model's robustness to adversarial attacks or noise in the input data. The authors do not discuss the potential privacy implications of using the model in medical settings or provide a comparison of training times or computational resources required versus other methods.
The paper lacks discussion on the model's ability to handle multi-modal or multi-contrast imaging data. It does not explore the potential for transfer learning or fine-tuning the model on different datasets. There is limited discussion on the model's performance in low-resource or edge-computing scenarios and no exploration of the model's ability to handle out-of-distribution or rare pathological cases. The authors do not discuss the potential integration of their model with existing clinical workflows or systems.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the computational cost of the proposed method compared to standard diffusion models, both in training and inference?
2. How sensitive is the model to the choice of hyperparameters, particularly the sequence length?
3. Can the model be extended to handle multi-modal or multi-contrast imaging data?
4. What are the privacy implications of using this model in clinical settings? How does the model's performance scale with increasing dataset size or diversity? Can the model be adapted for other medical imaging modalities beyond MRI?
5. What is the potential for integrating this model into existing clinical workflows or systems? How does the model handle cases where there are significant anatomical variations between sequential images?
6. Can the model be extended to generate 3D volumetric data or time-series data? How do different k-space sampling patterns impact the model's performance? How does the model perform when there are motion artifacts or other types of image degradation?
7. Can the model be used for other tasks, such as image segmentation or anomaly detection in medical imaging?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge the limitation of not evaluating the model on common image datasets like ImageNet or CIFAR-10 and note the lack of standard generative model metrics like FID and Inception Score in their evaluation. The paper does not explicitly discuss potential negative societal impacts of the technology. The evaluation is primarily focused on MRI reconstruction, limiting insights into the model's generalizability, and the authors do not provide a comprehensive comparison with other state-of-the-art methods in image generation.
The paper lacks a detailed analysis of the model's computational efficiency and resource requirements. There is limited exploration of the model's performance on diverse pathological cases or rare conditions, and the authors do not discuss the potential limitations of the autoregressive approach in certain scenarios. The paper does not address the interpretability of the model's decision-making process or its robustness to adversarial attacks or input perturbations.
The authors do not explore the privacy implications of using the model in clinical settings or analyze the model's performance in low-resource or edge computing environments. There is no discussion on the potential for transfer learning or domain adaptation of the trained model, and the authors do not address the scalability of the model to larger or more diverse datasets. The paper does not explore the model's ability to handle multi-modal or multi-contrast imaging data or discuss the integration of the model with existing clinical workflows or systems.
The authors do not provide an analysis of the model's failure cases or edge scenarios, and the paper lacks exploration of the model's performance on other medical imaging modalities beyond MRI. There is no discussion on the potential ethical considerations of using AI-generated medical images, and the authors do not address the model's ability to handle out-of-distribution or anomalous cases in medical imaging.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Answers to Questions
We thank the reviewer for the detailed review and insightful questions. We address the reviewer's comments and questions below:
1. **Computational Cost Comparison**: We have included a section in the general rebuttal that provides a detailed comparison of the computational cost of the proposed method compared to standard diffusion models in terms of training and inference. We have also included Table 1 in the supporting PDF file that presents the computation needed to train the AID models on four different datasets with different model complexities.
2. **Sensitivity to Hyperparameters**: We chose the sequence length based on a trade-off between computational efficiency and dataset characteristics. For example, we use a sequence length of 10 for the fastMRI dataset, as a typical volume in that dataset contains around 16 slices, and we increased the sequence length to 46 for the ABIDE dataset, as a typical volume in that 3D dataset contains over 100 slices.
3. **Multi-Modal Imaging Data**: The fastMRI dataset contains volumes weighted with different contrasts, such as T1, T2, and PD. In the generated sequences in Figures 2 and 3 of the manuscript, we show the model's ability to generate coherent sequences across different contrasts.
4. **Privacy Implications and Scale**: We acknowledge the importance of privacy implications when using AI models in clinical settings, as generated images may reveal sensitive patient information such as facial features. The model would usually perform better with larger and more diverse datasets, as it can learn more representative features. It is applicable to other medical imaging modalities beyond MRI, as long as the data is in the form of image sequences.
5. **Potential for Clinical Workflows and Out of Distribution**: AID can increase the throughput of an MRI scanner because it allows higher acceleration factors, which can be beneficial in clinical settings. If the model has never seen a certain pathology, it may not be able to reconstruct it accurately; however, the model can be trained on a diverse dataset to handle out-of-distribution cases.
6. **3D Volumetric Data and K-space Sampling Patterns**: The model can be extended to generate 3D volumetric data, as shown in the section *Two-stage training* of the general rebuttal. Figure 4 in the supporting PDF file displays the generated 3D volumetric data. We might be able to sample less and less k-space data as the sequence progresses, as the model has already seen the previous images and can predict the next one more accurately. When there are motion artifacts or other types of image degradation, the model might not perform well without taking them into account when constructing the likelihood model $p(y|x)$ as investigated in Reference [1].
7. **Other Tasks**: It might be possible when segmenting a full volume.
## Weakness and Limitations
1. **Systematic Evaluation** We initially trained the model on natural image datasets such as aerial image sequences and obtained similar results to those on the medical image dataset. We have included these results in Figure 3 of the supporting PDF file. However, curating a large natural image sequence dataset to systematically evaluate the model's performance on a wide range of tasks would be time-consuming. That is why we chose to focus on medical image datasets and model's application in the manuscript. We acknowledge that worthwhile future work would be to evaluate the model on a broader range of tasks and datasets using standard generative model metrics (FID and Inception Score).
2. **Comparison with State-of-the-Art Methods**: We have included a section, *Comparison with the existing method*, in the general rebuttal that discusses the comparison with the other method. We provide new statistics in Figure 1 of the supporting PDF file that show the comparison with the state-of-the-art method on the fastMRI dataset.
3. **Computational Efficiency**: We have included a section, *Computation and model complexity*, in the general rebuttal, and Table 1 of the supporting PDF file presents the computational efficiency of the proposed method. Compared to standard diffusion models like Guided Diffusion, the AID model trained on the fastMRI dataset achieves ~10 network evaluations per second, while the Guide model achieves ~13.
4. **Multi-modal/Fine-tuning**: We have included a section, *Two-stage training*, in the general rebuttal that discusses a more efficient training. That can be used to fine-tune the model on different datasets.
5. **Societal Impacts**: We will include a new section in the manuscript that discusses the potential negative societal impacts and ethical considerations of the technology.
6. **Interpretability and Robustness**: We will include a new section in the manuscript that discusses the interpretability of the model's decisions and outputs and its robustness to adversarial attacks or noise in the input data.
## References
[1] Levac, Brett, Ajil Jalal, and Jonathan I. Tamir. "Accelerated motion correction for MRI using score-based generative models." 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE, 2023.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you sincerely for the time and effort you have dedicated to reviewing our paper voluntarily.
As today marks the deadline for author-reviewer interactions, we hope that our detailed explanations have addressed your concerns effectively. If you require any further clarification on any points, please don’t hesitate to reach out to us.
We greatly appreciate your valuable contributions to this process.
Wish you a nice day!
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: Dear Authors,
Thank you for your kind words. I appreciate you taking the time to address my comments and questions.
Your explanations are helpful and have clarified all the points. I believe the revised manuscript is significantly improved.
Please don’t hesitate to reach out if you need further clarification on my comments. | null | null | Rebuttal 1:
Rebuttal: # General Response
The authors would like to thank the reviewers for their valuable feedback and insightful comments. We have carefully considered all the comments and suggestions and have addressed them in this rebuttal. One PDF file is provided that contains a table and four figures that present the results of the experiments conducted in response to the reviewers' comments. They will be included in the revised manuscript. The following five sections provide detailed explanations for those results, respectively.
## 1. Comparison with existing methods
The method proposed in [Reference 1](#references) is used as the new baseline (CSGM), which uses a score-based model (NCSNv2) from [Reference 2](#references) trained on the fastMRI dataset. In our initial submission, we trained a DDPM model and an autoregressive diffusion model, named Guide and AID respectively, which are built on the implementation from [Reference 3](#references), and applied them to various reconstruction tasks.
All the reconstruction tasks are performed by sampling the posterior $p_\theta(x|y) \propto p(y|x)p_\theta(x)$. The likelihood $p(y|x)$ is determined by the forward model and the prior $p_\theta(x)$ is determined by the trained models, such as NCSNv2, Guide, and AID. This means that when the sampling method remains consistent, the performance of the reconstruction task is determined by the image prior. Our algorithm treats $p(y|x)$ in the same manner as CSGM (see Eq. (11) and Eq. (15) in the manuscript), and the key difference is the image prior.
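To make the decomposition concrete, here is a toy sketch (an editor's illustration, not the authors' implementation) of posterior sampling with a linear forward model $y = Ax + \epsilon$: each step follows the prior score plus the Gaussian likelihood gradient, with a closed-form Gaussian prior standing in for the trained NCSNv2/Guide/AID priors.

```python
import numpy as np

def posterior_step(x, prior_score, A, y, sigma, lr):
    """One gradient step on log p(x|y) = log p(y|x) + log p(x) + const.
    prior_score plays the role of the learned score model; the Gaussian
    likelihood gradient encodes the forward model y = A x + eps."""
    grad_lik = A.T @ (y - A @ x) / sigma**2  # gradient of log p(y|x)
    return x + lr * (prior_score(x) + grad_lik)

# Toy case: standard Gaussian prior (score = -x), identity forward model.
A, y, x = np.eye(2), np.array([2.0, 0.0]), np.zeros(2)
for _ in range(500):
    x = posterior_step(x, lambda z: -z, A, y, sigma=1.0, lr=0.1)
# For this linear-Gaussian toy case, the analytic posterior mean is y / 2.
```

In the actual method, `prior_score` would be the trained diffusion prior, and each step would additionally inject sampling noise (Langevin or ancestral sampling).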
The results indicate that AID outperforms both CSGM and Guide in terms of PSNR and NRMSE, particularly in the absence of an autocalibration signal (ACS). Moreover, AID uses the same U-net as Guide, with an added temporal-spatial block that significantly boosts performance. Figure 1 in the supporting PDF presents detailed results.
## 2. Computation and model complexity
Table 1 in the supporting PDF file presents the computation needed to train the AID models on four different datasets with different model complexities. All the training was performed on 4 NVIDIA A100 GPUs. When using the latent space, the encoder is integrated into the dataloader.
- Dataset: (fastMRI, cardiac, ABIDE, UAV)
- Length: the length of image sequence
- Image size
- Latent: latent space representation used for the model, with options like VQVAE, Autoencoder-KL, or None
- Two-stage: a boolean indicating whether a two-stage training process was used. [Two-stage training](#5-two-stage-training) is explained in the following section
- Parameters: number of trainable parameters
- Train steps/s: training speed in steps per second
- Inference (it/s): inference speed in iterations per second
## 3. Natural image sequence generation
We trained an AID model on an Unmanned Aerial Vehicle (UAV) view dataset [[4]](#references) in the latent space and generated images using it. The generated images are displayed in Figure 2 of the supporting PDF file; each frame shows an aerial view of a rural landscape with roads and/or a water pond. This demonstrates the effectiveness of the proposed method in generating sequentially coherent natural images.
## 4. Sample consistency along the temporal axis
Figure 3 in the supporting PDF file shows the sample consistency along the temporal (or z) axis.
Columns 1 and 2: Show sagittal and coronal views of a brain image sequence. These images appear to be medical scans with clearly stretched anatomical structures.
Column 3: Displays the x-t plane of a cardiac image sequence. It shows the heart's activity over time, with the diastolic and systolic phases visible from left to right.
Columns 4 and 5: Show the x-t plane of a UAV image sequence, both generated and real. These images show the change in aerial views of a landscape over time. The generated x-t planes are generally consistent with the real ones but suffer from striped artifacts.
## 5. Two-stage training
We implemented a two-stage training process to improve training efficiency. In the first stage, we trained the U-net model. In the second stage, we trained the temporal-spatial conditioning block with the pre-trained U-net model frozen. By doing so, we are able to train an AID model on the ABIDE dataset [[5,6]](#references), where the image sequence has a dimension of 46x128x128 after preprocessing. The generated image sequence is shown in Figure 4 of the supporting PDF file.
## 6. Conclusion
In conclusion, we compared the proposed AID model with the CSGM method, reported the computation and model complexity, demonstrated natural image sequence generation, and showed the sample consistency along the temporal axis. We also implemented a two-stage training process to improve training efficiency. More specific answers to the reviewers' comments are provided separately.
## 7. References
[1] Jalal, A., Arvinte, M., Daras, et al, 2021. Robust compressed sensing mri with deep generative priors. Advances in Neural Information Processing Systems, 34, pp.14938-14954.
[2] Song, Y. and Ermon, S., 2019. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32.
[3] Dhariwal, P. and Nichol, A., 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34, pp.8780-8794.
[4] Delibaşoğlu, İ., 2022, September. PESMOD: Small moving object detection benchmark dataset for moving cameras. In 2022 7th International Conference on Frontiers of Signal Processing (ICFSP) (pp. 23-29). IEEE.
[5] Di Martino A., Yan C-G, Li Q., et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry. 2014;19(6):659-667.
[6] Di Martino Adriana, O’connor David, Chen Bosi, et al. Enhancing studies of the connectome in autism using the autism brain imaging data exchange II. Scientific Data. 2017;4:170010.
Pdf: /pdf/be13795d8537db10197f08b09ead10ec33c78619.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sparsity-Agnostic Linear Bandits with Adaptive Adversaries | Accept (poster) | Summary: This paper studies the sparse linear bandit problem without prior knowledge of the sparsity level. The studied problem also considers a general setup in which the context is chosen by an adaptive adversary and no additional assumptions are imposed on the action set. Then, OFUL-based algorithms are proposed and regret bounds are provided.
Strengths: - Lifting assumptions used in existing works while still maintaining comparable or even better regret bounds.
- The proposed algorithm provides an instance-dependent regret bound and worst-case bound as well.
- As the performance of SparseLinUCB highly depends on distribution q, different choices of q are shown.
- Numerical experiments are provided to support the significance of AdaLinUCB algorithm.
Weaknesses: - The instance-dependent regret bound shows no improvement over the standard OFUL algorithm, and could be even worse than that of OFUL, e.g., when $S^2 > d$.
- For the known sparsity and adaptive adversary setup, the instance-dependent regret bound in Corollary 3.5 is worse than that of [2], which is $dS/\Delta$.
- The regret bound of SparseLinUCB is sensitive to the choice of distribution q. Though the authors provide a specific example of the choice of q, the regret bound then depends heavily on the choice of the constant C. For example, if one chooses C=1, the instance-dependent bound is worse than that of OFUL when $S^2 > d$.
- For the action chosen (step 5 in Algorithm 1), it could be difficult to compute A_t since this optimization problem contains a non-linear term and the action set is arbitrary.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can authors provide another distribution selection case which can recover dS/Delta instance-dependent bound in known sparsity and adaptive adversary setup?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable time and effort in offering detailed feedback on our work. In the following, we address your questions one by one.
---
Q1: Can authors provide another distribution selection case which can recover $dS/\Delta$ instance-dependent bound in known sparsity and adaptive adversary setup?
A1: Yes, this can be achieved by setting $q_o = 1$. Then, Theorem 4 of [2] directly implies a problem-dependent bound of order $\tilde{O}\left(\frac{dS}{\Delta}\right)$ for $q_o = 1$. We will note this result in the revision.
----
Q2: For known sparsity and adaptive adversary setup, the instance-dependent regret bound in Corollary 3.5 is worse than that of [2], which is dS/Delta.
A2: See A1. For a known sparsity setup, our regret bound can be of the same order as that of [2].
---
Q3: Why does the instance-dependent regret bound show no improvement over the standard OFUL algorithm?
A3: We have provided some intuition on why it is difficult to provide a problem-dependent bound that benefits from sparsity in Lines 214-220. Essentially, as long as $q_{n}$ is set to a constant (even one as small as $1/\sqrt{T}$), the regret will scale as $\tilde O(d^2 / \Delta)$.
---
Q4: For the action chosen (step 5 in Algorithm 1), it could be difficult to compute $A_t$ since this optimization problem contains non-linear term and the action set is arbitrary.
A4: The computation of $A_t$ in Step 5 of Algorithm 1 is as hard as computing the prediction in the OFUL algorithm, which can be done efficiently when the action set is finite. In the case of arbitrary actions sets, not much can be said about the computational efficiency of solving this problem, see [17, Section 19.3.1].
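For the finite-action case mentioned above, the per-step computation is the usual optimistic maximization; a generic OFUL-style sketch (illustrative only, with `beta` playing the role of the sampled confidence radius):

```python
import numpy as np

def ucb_action(actions, theta_hat, V_inv, beta):
    """Argmax over a finite action set of <a, theta_hat> + beta * ||a||_{V^{-1}}.
    This is the standard optimistic rule; cost is linear in the number of actions."""
    widths = np.sqrt(np.einsum('ij,jk,ik->i', actions, V_inv, actions))
    return int(np.argmax(actions @ theta_hat + beta * widths))

# Example: with beta = 0 the rule reduces to a greedy choice.
acts = np.array([[1.0, 0.0], [0.0, 1.0]])
best = ucb_action(acts, np.array([1.0, 0.0]), np.eye(2), beta=0.0)  # -> 0
```

With an arbitrary (infinite) action set the same maximization becomes a non-convex program, which is the computational difficulty the reviewer raises.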
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 9XZb
Comment: Thanks for your response. I don't have further questions and will keep my score as is.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our response and for your support. If you have any further questions about our submission, please don't hesitate to reach out. | Summary: This paper proposes statistically efficient linear bandit algorithms capable of handling cases where prior knowledge of the sparsity level $S$ is not given. The first algorithm, SparseLinUCB, achieves a $\tilde{O}(S \sqrt{dT})$ regret bound without any stochastic assumptions on the context vector, covering adversarially given context vectors. The main idea involves sampling the radius of the confidence set for the true reward parameter $\theta_*$ from a specific distribution and then selecting the optimistic action. It matches the lower bound when sparsity information is provided. The second algorithm, AdaLinUCB, updates the sampling distribution of the confidence radius using an approach (Exp3) that increases the likelihood of selecting a radius providing higher rewards. AdaLinUCB also achieves a $\tilde{O}(\sqrt{T})$ regret bound. Various experiments support the theoretical results of the proposed algorithms.
Strengths: - The motivation for the problem addressed in the paper is well explained, and the related work is thoroughly described. Overall, the paper is well-written and easy to understand.
- The first algorithm, SparseLinUCB, is, to my knowledge, the first sparsity-agnostic linear bandit algorithm for adversarial context vectors. Additionally, it matches the lower bound regret when sparsity information is provided.
- The second algorithm, which updates the confidence radius distribution at each time step, is also very interesting. Previous works using similar methods achieved loose regret bounds ($\tilde{O}(T^{2/3})$), whereas the proposed algorithm achieves $\tilde{O}(\sqrt{T})$ regret (though I have not rigorously checked this proof).
- Various numerical experiments support the theory behind the proposed algorithms.
Weaknesses: - The SparseLinUCB algorithm does not make stochastic assumptions about the action set (context vectors), thus providing theoretical guarantees even in the case of an adaptive adversary. However, AdaLinUCB is described as an algorithm for stochastic linear bandits. It seems that the stochastic assumptions required for AdaLinUCB's regret bound are not explained.
- The explanation about $n$ in the confidence radius distribution $\{ q_s \}_{s \in [n]}$ appears insufficient. Additionally, there seems to be no term for $n$ in the regret bound. I am curious whether the regret bound is independent of $n$.
Technical Quality: 3
Clarity: 3
Questions for Authors: - (Related to the 1st bullet in Weaknesses) Can AdaLinUCB still achieve the currently presented regret bound if the action set (context vectors) is given by an adaptive adversary? If not, can you briefly explain the issue?
- (Related to the 2nd bullet in Weaknesses) It seems that how $\{ q_s \}_{s \in [n]}$ is determined would impact the regret bound. You introduced a specific distribution in Eq. (3.3). How does the regret bound change if $n$ is significantly increased or decreased?
- The motivation for updating the confidence radius distribution at each time step in AdaLinUCB is interesting. However, the current result shows a looser regret bound compared to SparseLinUCB. Despite the computational cost of updating the distribution, there seems to be no statistical gain. In what instances would it be better to use AdaLinUCB over SparseLinUCB? Also, can you briefly explain why AdaLinUCB performs better empirically?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have well-addressed the limitations and further research directions in Section 6. The content discussed in this paper appears to have little to no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable time and effort in offering detailed feedback on our work. In the following, we address your questions one by one.
----
Q1: The SparseLinUCB algorithm does not make stochastic assumptions about the action set (context vectors), thus providing theoretical guarantees even in the case of an adaptive adversary. However, AdaLinUCB is described as an algorithm for stochastic linear bandits. It seems that the stochastic assumptions required for AdaLinUCB's regret bound are not explained.
A1: AdaLinUCB is indeed designed to handle adaptive adversarial action sets (see, for example, the initial lines 414-419 of the proof where the adversarial nature of the action set appears in the analysis). We will clarify this in the revision. There may have been some confusion regarding the title of Section 4: "Adaptive Model Selection for Stochastic Linear Bandits." In the literature, such as Chapter 19 of the bandit book [17] and reference [2], the term "stochastic linear bandits" typically refers to the presence of stochasticity in the subgaussian noise term $\epsilon_t$, as outlined in Equation (2.1).
---
Q2: The explanation about $n$ in the confidence radius distribution $\\{q_s\\}_{s\in [n]}$ appears insufficient. Additionally, there seems to be no term for $n$ in the regret bound. I am curious whether the regret bound is independent of $n$.
A2: As stated in Line 182, we choose $n = \Theta(\log d)$, which is large enough to ensure $\alpha_n \geq \gamma(1/T)$. We will clarify this fact in the revision.
---
Q3: (Related to the 2nd bullet in weaknesses) It seems that how $\\{q_s\\}_{s\in [n]}$ is determined would impact the regret bound. You introduced a specific distribution in Eq. (3.3). How does the regret bound change if $n$ is significantly increased or decreased?
A3: Increasing $n$ beyond $\Theta(\log d)$ significantly affects the regret bound of Theorem 3.2 through the choice of $q_s$. If we increase $n$, the regret upper bound approximately becomes $O(n\sqrt{dT})$. Indeed, following our choice $q_i \approx 2^{-i}$ (Eq. (3.3)), the first term in Lines 190-191 has order $O(n\sqrt{dT})$, while the second term has order $O(S \sqrt{dT})$ since $Q\approx \sum_{i>o} 2^{-i} \approx 2^{-o}\approx O(1/S)$, regardless of the size of $n$. Therefore, the final regret bound becomes of order $O(n\sqrt{dT})$. Intuitively, this makes sense, as increasing the number of models increases the exploration, which eventually has a negative impact on the regret.
If $n$ is decreased such that $\alpha_n < \gamma(1/T)$, then $\theta^*$ may not belong to any confidence set in our hierarchy, and the regret could thus become linear in $T$. As such, we must ensure $\alpha_{n}\geq \gamma(1/T)$ and highlight it in Line 181.
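A quick numeric check of the tail-mass claim above (illustrative values; the $q_i$ here are the unnormalized weights $2^{-i}$):

```python
# Tail mass Q = sum_{i > o} q_i for q_i = 2^{-i}: geometric decay gives
# Q = 2^{-o} * (1 - 2^{-(n - o)}), i.e. Q is of order 2^{-o} for any n > o.
n, o = 12, 4
q = [2.0 ** -i for i in range(1, n + 1)]
Q = sum(q[o:])  # weights of indices i = o+1, ..., n
exact = 2.0 ** -o * (1 - 2.0 ** -(n - o))
```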
---
Q4: The motivation for updating the confidence radius distribution at each time step in AdaLinUCB is interesting. However, the current result shows a looser regret bound compared to SparseLinUCB. Despite the computational cost of updating the distribution, there seems to be no statistical gain. In what instances would it be better to use AdaLinUCB over SparseLinUCB? Also, can you briefly explain why AdaLinUCB performs better empirically?
A4: AdaLinUCB works better empirically since it automatically adjusts the radius of the confidence bound. Theoretical confidence bounds are typically set very conservatively to ensure error probability guarantees for hard environments (e.g., when the action set is chosen by the adaptive adversary), which often results in over-exploration. For example, in Chapter 36 of the Szepesvari and Lattimore book "Bandits," the sixth note in Section 36.5 mentions that values of the Linear Thompson Sampling parameter (i.e., the variance of the posterior distribution playing a similar role to the radius of the confidence bound in UCB) which show good empirical performance do not have any theoretical guarantee. Conversely, the values of the parameter ensuring a solid theoretical guarantee consistently perform poorly in practice.
As a future research direction, we can further explore the gap between the practical superior performance and theoretical limitations of AdaLinUCB, and even consider what the lower bound for sparsity-agnostic linear bandits might be.
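For reference, the radius-adaptation mechanism discussed here is the standard exponential-weights (Exp3) update over confidence-set indices; a minimal sketch (not the paper's AdaLinUCB pseudocode), assuming rewards rescaled to [0, 1]:

```python
import numpy as np

def exp3_update(weights, chosen, reward, eta):
    """One Exp3 update: importance-weight the observed reward of the
    chosen index, then apply a multiplicative exponential update."""
    probs = weights / weights.sum()
    est = np.zeros_like(weights)
    est[chosen] = reward / probs[chosen]
    return weights * np.exp(eta * est)

w = np.ones(3)                      # uniform start over 3 candidate radii
w = exp3_update(w, chosen=0, reward=1.0, eta=0.1)
# Index 0 gains weight; unplayed indices are unchanged.
```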
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanation. I have no further questions. I have kept my rating, as my original score was already positive and supportive of accepting the paper!
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our response and for your support. If you have any further questions regarding our submission, please feel free to reach out. | Summary: This paper studies the stochastic linear bandits when the action set can be arbitrarily chosen without some additional assumptions. And the authors propose a randomized sparsity-agnostic bandit algorithm using the model selection idea, and show that EXP3 can be used as the master algorithm to obtain a decent regret bound. Experimental results are included in the end to verify the high efficiency of the proposed algorithms.
Strengths: 1. The paper is easy to read, and most parts are pretty clear. E.g. Table 1 helps reader catch up all the existing literature and their differences quickly.
2. This paper studies an interesting and important problem when the arm set is arbitrarily chosen under the sparse linear bandit problem. I didn't check the proof in appendix in detail but all arguments seem reasonable to me.
3. Empirical results are provided to illustrate the high efficiency of the proposed algorithms.
Weaknesses: 1. The used techniques are based on the existing literature (e.g., SeqSEW, the Exp3 master algorithm). It would be better to present the theoretical novelty of this work in a separate paragraph.
2. Lower bounds are not provided, as the authors mention in the limitations part. So it is a little hard for readers to judge how good the proposed bounds are.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is the boundedness of the random noise necessary in your theoretical proof?
2. Is it possible to bound the regret with high probability instead of the expected value? If some existing literature proposed the regret bound with high probability, then it may not be fair to do the direct order comparison in Table 1.
3. Can you report the running time of your method in the experiments?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable time and effort in offering detailed feedback on our work. In the following, we address your questions one by one.
---
Q1: The used techniques are based on the existing literature (e.g., SeqSEW, the Exp3 master algorithm). It would be better to present the theoretical novelty of this work in a separate paragraph.
A1: We do have a 40-line paragraph “Technical challenges” (Lines 61-100) explaining the theoretical novelty. One of our main technical contributions is demonstrating that our exploration, measured by $\det V_{t-1}$, grows very rapidly, as highlighted in Lines 81-86 (the proof details are shown in Lemmas D.4 and D.5). As an example to show the power of our new techniques, we achieve a $\sqrt{T}$ regret bound for AdaLinUCB, whereas previous works [11,24] using the Exp3 to learn the probability over a hierarchy of confidence sets have a regret bound of only $T^{2/3}$.
Please refer to the Technical challenges paragraph for a full list of our original contributions.
----
Q2: Why is the boundedness of the random noise necessary in your theoretical proof?
A2: Note that our results in Section 3 do not require boundedness of the noise, but only 1-subgaussianity, see (2.1) in Lines 138-139. We only require $\epsilon_t \in [-1,1]$ in Section 4 because the Exp3 algorithm, which is used in the proof of Theorem 4.1, requires bounded rewards, which in turn requires the noise to be deterministically bounded.
---
Q3: Is it possible to bound the regret with high probability instead of the expected value? If some existing literature proposed the regret bound with high probability, then it may not be fair to do the direct order comparison in Table 1.
A3: Thanks for pointing out this interesting question. We believe our in-expectation regret bound for SparseLinUCB could be extended to a high-probability bound after some changes in the analysis. As these changes do not appear to be trivial, requiring a somewhat intricate martingale concentration analysis, we will leave this extension to future work.
In the revised version, we will also add notes to the table highlighting the algorithms whose bounds hold with high probability.
---
Q4: Can you report the running time of your method in the experiments?
A4: Although there were slight differences depending on the sparsity level or the initial probability distribution, on average, a run of OFUL took 0.7 seconds, a run of SparseLinUCB took 1.4 seconds, and a run of AdaLinUCB took 1.9 seconds. The longer runtimes of SparseLinUCB and AdaLinUCB are due to the need to manage an ensemble of models. However, as we did not optimize the code, we expect these runtimes to be improvable.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. I have no more questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. If you have any further questions about our submission, please do not hesitate to reach out. We highly value your perspective, and should you find our responses satisfactory, we would be grateful if you would consider raising your score for our paper. | Summary: This paper studies Linear bandits with adversaries when the underlying parameter $\theta$ is sparse. It combines a previous sparse linear regression algorithm named SeqSEW with LinUCB, proposing an algorithm named SparseLinUCB. It also proposes a variant of the EXP3 algorithm named AdaLinUCB. Regret bounds dependent on the sparsity dimension $S$ are proved. Experiments are conducted on synthetic data.
Strengths: 1. The paper proposes algorithms for sparse linear bandits with adversarial action sets. It proves a regret upper bound better than previous ones when the sparsity dimension $S$ is quite small, with no assumptions on the sparsity structure. If the sparsity level is known, the result is optimal.
2. It also provides an instance-dependent regret bound.
3. It conducts synthetic experiments to show its better performance.
Weaknesses: 1. The paper misses many very related works, especially those on K-armed bandits and linear bandits.
2. For the main algorithm design, I cannot see the necessity of using an online learning oracle to predict the reward $\hat X_t$ and then calculating the least-squares estimate when the real reward $X_t$ is available. Could you explain this further?
3. The writing is unclear.
(1) In Line 176, "the distribution $\\{q_i\\}_{i \in [n]}$" is not well defined; over what?
(2) The notation of the confidence set in Line 177 is contradictory with the notation in Appendix A, as it uses 2-norm rather than the $V_{t-1}$ norm.
(3) In equation 3.1, $\gamma$ is a problem-dependent term so you cannot set the exact value of $o$, as is done in Corollary 3.3.
(4) Line 240: in terms of what?
4. Line 249: “the parameter q does not provide a similar flexibility”, so I do not see the use of this probabilistic selection. What is the benefit of the new algorithm design in Section 4?
Technical Quality: 3
Clarity: 1
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The authors have addressed the limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time and effort in providing detailed feedback on our work. We now address your questions one by one.
---
Q1: The paper misses many very related works, especially those on K-armed bandits and linear bandits.
A1: It would be useful to have some concrete pointers to the missing works, but we will definitely browse more carefully through the relevant literature.
---
Q2: For the main algorithm design, I cannot see the necessity to use an online learning oracle to predict the reward.
A2: The use of an online linear regression oracle for sparse linear bandits is an established technique whose advantages in the analysis have been shown in [2] and also in bandit book [17], see our comment in Line 149. In particular, using a sparse online learning algorithm provides a dependence on the sparsity parameter $S$ in the confidence bound. We therefore use Lemmas 2.1 and 2.2 from [2] to set $\gamma(1/T) = O(S\log T)$. Thus, there exists some constant $C$ such that
$\mathcal C_t=\\{ \theta \in \mathbb R^d: \|\theta\|_2^2+ \sum_s(\hat X_s - \langle A_s, \theta \rangle)^2 \leq CS\log T \\}$.
It is unclear how such a dependence could be obtained using the standard least square estimator, which would only provide a dependence on $d$.
---
Q3: Line 249: “the parameter $q$ does not provide a similar flexibility”, so I do not see the use of this probable selection, what is the benefit of the new algorithm design in Section 4?
A3: The main advantage of AdaLinUCB is its empirical performance, as we explain in Lines 243-244 and demonstrate in our experiments where it outperforms both OFUL and SparseLinUCB. This improved empirical performance is due to AdaLinUCB’s ability of tuning its distribution based on observed rewards.
From a theoretical viewpoint, AdaLinUCB achieves a regret bound of $\tilde{O}(\sqrt{T})$, using the novel techniques described in Lines 61-100. Previous analyses of Exp3-based algorithms in [11] and [24] only achieved $\tilde{O}(T^{2/3})$ regret bounds, even under i.i.d. assumptions on the generation of the action sets; see Lines 122-131 for a description of these related works.
---
Q4: In Line 176, “the distribution $\\{q_i\\}_{i\in[n]}$” is not well defined, over what?
A4: In Lines 174-176 we say that the algorithm uses a hierarchy of confidence sets of increasing radius and draws the index $I_t$ of the confidence set by sampling from the probability distribution $\\{q_i\\}_{i \in [n]}$. We will clarify that the distribution is over the indices of the confidence sets, and make it explicit that $n$ is an input parameter to SparseLinUCB.
---
Q5: The notation of the confidence set in 177.
A5: This is a typo. It should be $V_{t-1}$ norm. Thank you for your careful reading!
---
Q6: In equation 3.1, $\gamma$ is a problem-dependent term so you cannot set the exact value of $o$, as is done in Corollary 3.3.
A6: In Corollary 3.3, we explicitly state that it is assumed the sparsity level $S$ is known, as detailed in Lines 194-195. With $S$ known, the upper bound of $\gamma(\delta)$ can be easily calculated from its definition.
---
Q7: Line 240: in terms of?
A7: This is a typo. We will remove the words “in terms of”.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed responses, which do solve most of my concerns. I have increased my scores to reflect this. However, I suggest polishing the writing to help with understanding. Good luck!
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our response and for the improved score. We will incorporate your valuable suggestions and make the necessary changes in the revised version of the paper. If you have any further questions about our submission, please don't hesitate to reach out. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming | Accept (poster) | Summary: On the theoretical side, the paper studies the problem of sample complexity of learning data-driven projection matrices for accelerating high-dimensional LP solving. Given n-dimensional LPs drawn from some problem distribution, the goal is to bound the number of problem instances needed to learn an $n\times k$ projection matrix such that a good solution to the original LP can be found by (efficiently) solving the k-dimensional projected LP. The main result is a $\tilde{O}(nk^2)$ upper bound on pseudo-dimension of the class of functions that measure the optimal value of the projected LP, and a corresponding $\Omega(nk)$ lower bound.
While finding the optimal projection matrix on a training set is hard, the paper proposes two practical approaches for learning a projection matrix from the training LP instances. The first approach applies Principal Component Analysis to a matrix of optimal solutions of training instances. The second approach uses stochastic gradient updates to learn the projection matrix. Empirically, data-driven projections seem to perform better than random projections, with comparable gains in speed-ups.
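The first approach can be sketched in a few lines (an editor's illustration; details such as centering and scaling may differ from the paper): stack the optimal solutions of the training instances and keep the top-$k$ principal directions as the columns of the $n \times k$ projection matrix.

```python
import numpy as np

def pca_projection(opt_solutions, k):
    """Build an n x k projection matrix from the (N, n) matrix whose rows
    are optimal solutions of the N training LP instances."""
    centered = opt_solutions - opt_solutions.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:k].T  # columns = top-k principal directions

# If training solutions lie in a k-dimensional subspace, PCA recovers it.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2))            # hidden 2-D basis in R^5
X = rng.standard_normal((50, 2)) @ B.T     # 50 "optimal solutions"
P = pca_projection(X, 2)                   # 5 x 2 projection matrix
```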
Strengths: - Speeding up high-dimensional LPs is fundamental to operations research and data-driven projections appears to be a promising tool.
- A polynomial upper bound on the sample complexity theoretically establishes that projection matrices are learnable from data.
- There is also a lower bound, which is tight up to a factor of $k$ and logarithmic factors.
- Even though exact optimization on training LP instances is hard, the paper provides practical solutions that work well empirically. Practical algorithms are often hard to design in the data-driven literature.
Weaknesses: - The i.i.d. assumption needed in theoretical results may be too strong in practice.
- The proposed methods for learning the projection matrix are not efficient; e.g., the PCA approach needs optimal solutions of the training problem instances.
- While the authors provide bounds on the time complexity of their learning methods, there are no guarantees about the quality of solutions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the proof of Lemma 4.3 differ from the techniques of Balcan et al., NeurIPS 2022?
- It seems like the proof for upper bound can be simplified by using the GJ framework of Bartlett et al. COLT 2022 (P. L. Bartlett, P. Indyk, and T. Wagner. Generalization bounds for data-driven numerical linear algebra.)
- The $\log (H/\varepsilon)$ term is not needed in the sample complexity bound $N$. (Line 153)
- Line 209: sign patters
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for providing invaluable comments based on a deep understanding of data-driven algorithm design. We respond to each comment below:
>Weaknesses:
>- The i.i.d. assumption needed in theoretical results may be too strong in practice.
>- The proposed methods for learning the projection matrix are not efficient e.g. PCA approach needs optimal solutions of training problem instances.
>- While the authors provide bounds on the time complexity of their learning methods, there are no guarantees about the quality of solutions.
We acknowledge these points as limitations of our work. However, we wish to highlight that these significant challenges are not unique to our work but are common in data-driven algorithm design. Most existing research in this area assumes the i.i.d. setting and often does not offer efficient learning methods nor assess the quality of the resulting outputs. There are a few notable exceptions, such as the work by Balcan et al. (2023), titled "Output-sensitive ERM-based techniques for data-driven algorithm design." Such enumeration-based methods can learn parameters that minimize empirical risk but lack polynomial-time guarantees and may be impractical due to their substantial computational demands. In contrast, our methods for learning projection matrices, although not providing guarantees regarding solution quality, have been experimentally demonstrated to perform well. We believe that providing such methods contributes to further investigation into the practical aspect of data-driven algorithm design.
> Questions:
> - How does the proof of Lemma 4.3 differ from the techniques of Balcan et al., NeurIPS 2022?
We appreciate this insightful question. While the work of Balcan et al. (NeurIPS 2022), which we cited as [11] in our paper, indeed inspired our analysis, there is an important technical difference between their approach and ours. Balcan et al. focus specifically on the branch-and-cut method, analyzing situations where either (1) new cuts (constraints) do not separate an optimal solution $x^*_{\rm LP}$ to the original LP or (2) new cuts separate $x^*_{\rm LP}$ and constitute an equation system specifying a new optimal solution.
In contrast, our Lemma 4.3 is intended for investigating the behavior of optimal values of projected LPs $(\boldsymbol{P^\top c}, \boldsymbol{AP}, \boldsymbol{b})$, which can change entirely with projection matrix $\boldsymbol{P}$. A particular challenge we address is the possibility of $\boldsymbol{AP}$ becoming rank-deficient, which hinders the standard derivation of an equation system specifying an optimal solution to the projected LP (cf. the proof of Korte and Vygen [31, Proposition 3.1]). We circumvent this issue by reformulating the projected LP into an equivalent $2k$-dimensional LP with non-negativity constraints, as is done at the beginning of the proof of Lemma 4.3. This adjustment, not employed by Balcan et al., justifies the subsequent analysis based on the enumeration of vertex solutions in the feasible region of the projected LP. Although the adjustment is common in mathematical programming, using it for analyzing the pseudo-dimension would be a novel idea.
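The $2k$-dimensional reformulation mentioned above appears to be the standard split of the free projected variable into nonnegative parts; a sketch in our own notation, assuming a minimization form of the LP:

```latex
\min_{\boldsymbol{y}\in\mathbb{R}^{k}} (\boldsymbol{P}^\top\boldsymbol{c})^\top \boldsymbol{y}
\ \text{ s.t. }\ \boldsymbol{A}\boldsymbol{P}\boldsymbol{y} \le \boldsymbol{b}
\quad\Longleftrightarrow\quad
\min_{\boldsymbol{z}\in\mathbb{R}^{2k},\,\boldsymbol{z}\ge\boldsymbol{0}}
\begin{pmatrix}\boldsymbol{P}^\top\boldsymbol{c}\\ -\boldsymbol{P}^\top\boldsymbol{c}\end{pmatrix}^{\!\top}\!\boldsymbol{z}
\ \text{ s.t. }\ \begin{pmatrix}\boldsymbol{A}\boldsymbol{P} & -\boldsymbol{A}\boldsymbol{P}\end{pmatrix}\boldsymbol{z} \le \boldsymbol{b},
```

with $\boldsymbol{y} = \boldsymbol{z}_{1:k} - \boldsymbol{z}_{k+1:2k}$; the nonnegativity constraints guarantee that the feasible region has vertex solutions even when $\boldsymbol{AP}$ is rank-deficient.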
Thus, although both our analysis and that of Balcan et al. employ Cramer’s rule and might seem similar at first glance, there is a notable technical distinction in how we address the specific challenges in analyzing projected LPs.
> - It seems like the proof for upper bound can be simplified by using the GJ framework of Bartlett et al. COLT 2022 (P. L. Bartlett, P. Indyk, and T. Wagner. Generalization bounds for data-driven numerical linear algebra.)
We greatly value this insightful comment. Indeed, we considered employing the GJ framework of Bartlett et al. (COLT 2022) as an alternative approach, but we encountered a difficulty. The GJ framework requires a *GJ algorithm* that, in our case, computes the optimal value $u(\boldsymbol{P}, \pi)$ up to an $\varepsilon$-error for a sufficiently small $\varepsilon>0$. Crucially, the *predicate complexity* of a GJ algorithm must be upper bounded independently of projection matrix $\boldsymbol{P}$ (i.e., learnable parameters). If not, certain parameters $\boldsymbol{P}$ may cause the predicate complexity to become arbitrarily large, preventing us from bounding the pseudo-dimension with the GJ framework. We attempted to develop such a GJ algorithm using simplex- and interior-point-type methods, yet we were unable to make their predicate complexity independent of $\boldsymbol{P}$.
We also appreciate the comments about the $\log(H/\varepsilon)$ term and the typo.
We hope our responses above have adequately addressed the reviewer's concerns and questions. Please do not hesitate to contact us during the discussion period if there are any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, I appreciate the answers to my questions. I retain my score. | Summary: This paper considers a data-driven approach for learning projections for LPs. Given an LP with $m$ constraints and $n$ variables, it establishes bounds on learning a projection $P\in \mathbb{R}^{n\times k}$ that reduces $n$ to $k$. The main contribution is a uniform convergence bound via the pseudo-dimension, with an upper bound of $\tilde O(nk^2)$ and a lower bound of $\Omega(nk)$. The authors also propose two algorithms for learning the projection: one is based on PCA of the optimal solutions of LP instances, the other is a gradient ascent approach.
Strengths: This paper establishes a bound on the pseudo-dimension of performance metrics for data-driven projections for LPs; the bound is tight up to a factor of $k$. Experiments are also performed for the two methods proposed in this paper, which provide speed-ups and performance gains over the data-oblivious column-randomized approach.
Weaknesses: The techniques for proving the upper and lower bounds on pseudo-dimension are quite standard, authors should emphasize the technical difficulties for proving these bounds.
On the empirical side, while it gives good performance *after* learning the projection, the process of learning the projection is quite slow. Authors might consider adding more discussion on how to learn these projections more efficiently (from an algorithmic perspective).
Technical Quality: 3
Clarity: 3
Questions for Authors: Suppose you learn the projection for both the primal and dual, could this lead to an even more efficient downstream algorithm, as one only needs to handle LP instance of size $k'\times k$? Or are there obvious reasons this is not a good idea?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for providing insightful comments and a positive evaluation. We respond to each comment below.
> Weaknesses:
>
> The techniques for proving the upper and lower bounds on pseudo-dimension are quite standard, authors should emphasize the technical difficulties for proving these bounds.
We appreciate the reviewer's suggestion. A primary technical challenge lies in the proof of Lemma 4.3, which is pivotal to our solver-agnostic analysis of optimal values of LPs. Specifically, we investigate the behavior of the optimal value of the projected LP $(\boldsymbol{P^\top c}, \boldsymbol{AP}, \boldsymbol{b})$ while addressing the potential issue that $\boldsymbol{AP}$ may be rank-deficient. This rank deficiency generally hinders the straightforward derivation of an equation system that specifies a vertex optimal solution. To overcome this, the proof of Lemma 4.3 begins by reformulating the LP into an equivalent $2k$-dimensional LP with non-negativity constraints. This adjustment facilitates the determination of equation systems specifying vertex optimal solutions, as in the proof of Korte and Vygen [31, Proposition 3.1]. Although the adjustment is common in the context of mathematical programming, using it for analyzing the pseudo-dimension is our novel idea. We will emphasize this technical nuance in the revised manuscript.
> On the empirical side, while it gives good performance *after* learning the projection, the process of learning the projection is quite slow. Authors might consider adding more discussion on how to learn these projections more efficiently (from an algorithmic perspective).
We greatly value this suggestion. We expect that the few-shot learning approach, akin to the method proposed by Indyk et al. (NeurIPS 2021) for data-driven low-rank approximation, is effective for learning projection matrices efficiently. We will expand this discussion, currently only briefly mentioned in the conclusion section, in our revised manuscript.
> Questions:
>
> Suppose you learn the projection for both the primal and dual, could this lead to an even more efficient downstream algorithm, as one only needs to handle LP instance of size $k' \times k$? Or are there obvious reasons this is not a good idea?
We appreciate this insightful question. Indeed, we have considered applying projections to both the primal and dual and recognize its potential benefits. However, we have opted not to reduce the dual variables (i.e., the number of constraints), as it could result in optimal solutions for projected LPs that are infeasible for the original LPs. While the quality of feasible solutions is naturally evaluated through their objective values, how to assess the quality of infeasible solutions is more controversial. For this reason, we have chosen to focus solely on reducing the number of primal variables, thereby avoiding the issue of infeasibility and maintaining conceptual simplicity. We value this discussion and will highlight it as a promising direction for future research.
We hope our responses have adequately addressed the reviewer's concerns and questions. Please do not hesitate to contact us during the discussion period if there are any further questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions. I'll keep my score as is. | Summary: The paper attempts to theoretically analyze a method called Data-Driven Projections in Linear Programming. As discussed in the paper, projection methods aim to reduce the size of high-dimensional LPs. While random projection methods have improved the efficiency of LPs, data-driven projections have achieved better results. In this paper, it is assumed that the parameters of the LP (parameters in the constraints and the objective) come from some distribution. Based on a set of training samples of these parameters, data-driven methods try to find a subspace where future optimal solutions are expected to appear. The main contribution of the paper is that they propose a generalization bound for the difference between the empirical and statistical performance of the projected LP.
Strengths: - The paper is well-written and well-motivated.
- The proofs are rigorous and well-written.
Weaknesses: - I think the contribution of the paper is marginal.
- Although it is mentioned in the paper that the generalization is independent of the choice of the projection matrix $P$, the final goal is to minimize the expected value of the objective. To achieve this, it is necessary to choose a good $P$ that minimizes the empirical optimal value. I believe it should be proven somehow that the algorithm proposed by the paper can suggest a projection matrix that captures the subspace where future optimal solutions are expected to appear. I couldn't find a proof for this in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's valuable feedback. We present our response to the comments below.
> Weaknesses:
> - I think the contribution of the paper is marginal.
> - Although it is mentioned in the paper that the generalization is independent of the choice of the projection matrix $P$, the final goal is to minimize the expected value of the objective. To achieve this, it is necessary to choose a good $P$ that minimizes the empirical optimal value. I believe it should be proven somehow that the algorithm proposed by the paper can suggest a projection matrix that captures the subspace where future optimal solutions are expected to appear. I couldn't find a proof for this in the paper.
We appreciate the reviewer's insights provided. Regarding the second point, finding $\boldsymbol{P}$ that minimizes the empirical optimal value (i.e., empirical risk minimization, or ERM) is indeed our ultimate goal. We attempted to prove such guarantees, but it turned out challenging as the optimal value of a projected LP, viewed as a function of $\boldsymbol{P}$, is non-convex and difficult to optimize directly. Please note that this is a general difficulty recognized in the field of *data-driven algorithm design*. Accordingly, the line of work in this area (e.g., Gupta & Roughgarden SICOMP2017; Balcan et al. STOC2021; Bartlett et al. COLT2022), including ours, aims to achieve generalization bounds that are independent of tunable parameters, i.e., *uniform convergence*. This shift from seeking optimality in ERM to embracing uniform convergence is a deliberate strategy reached after careful consideration in this research field. Therefore, we believe that this point should not undermine the extensive body of work, including ours.
Just to make sure, we wish to re-emphasize the discussion in Remark 4.2: uniform convergence ensures that $\boldsymbol{P}$ performing well on training instances is also expected to perform well on future instances, *even if $\boldsymbol{P}$ does not minimize the empirical optimal value.* Furthermore, the experiments in Section 6 show that we can empirically find good $\boldsymbol{P}$ with PCA- and SGA-based methods, whose generalization guarantee follows from uniform convergence. These theoretical and empirical results demonstrate the merit of our data-driven projection approach to LPs, effectively circumventing the aforementioned challenge of optimizing $\boldsymbol{P}$ to minimize the empirical optimal value.
We hope that the above clarifications have adequately addressed the reviewer's concerns.
> Questions:
>
> Please refer to the weaknesses section.
In response to the first comment in the weaknesses section, "I think the contribution of the paper is marginal," we would appreciate it if the reviewer could provide more details about this opinion during the discussion period, particularly if our clarifications above have not fully addressed the reviewer's concern.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers. They addressed my questions, so I increased my score accordingly. | Summary: The paper proposes a data-driven approach to an accelerated solution of linear programming problems belonging to a common family. To this end, the dimensionality of the problems is reduced by a projection learned from a training set of problems. The paper first gives a theoretical generalization bound for this learning problem. Also, the paper proposes two specific projection learning algorithms, based on PCA and stochastic gradient ascent. Finally, the performance of the proposed methods is experimentally illustrated on several test cases.
Strengths: The paper is generally quite nicely written and very readable. I could follow it everywhere without difficulty and didn't even notice any typos.
The theoretical generalization bound (Theorem 4.4) is derived using more or less standard approaches of statistical learning theory, but the result itself is new and the proof seems to be correct. The proof is presented in the main text and is concise and nice.
The idea of data-driven projection and its implementations based on PCA and gradient ascent seem to be original. The paper includes a comparison of the proposed algorithm on 8 test cases; the proposed algorithms show good performance there.
Weaknesses: The most significant issue that I see is that the derived generalization bound is vacuous for the LP families experimentally studied in the paper, but this point is for some reason completely ignored by the authors. Specifically, the bound in question is $\epsilon\lesssim Hk\sqrt{n/N}$ (lines 257 and 359). As far as I understand, in the experiments $N\sim 200, n\sim 500, k\sim 20$. Substituting these numbers gives $\epsilon\lesssim 20H,$ which is vacuous because the expected performance of a solver is in the interval $[0,H]$ anyway. Not to mention that the $O(\cdots)$ in the bound may contain additional large constants. In fact, asymptotic generalization bounds are well-known to (significantly) overestimate the true generalization gap and so are more suitable as a mere theoretical assurance of convergence. However, the paper seems to suggest (say, in the end of section 6) that the derived bound has some practical value while making no attempt to actually discuss specific numbers associated with its experiments, which I find completely misleading. (This does not undermine the theoretical merit of Theorem 4.4; on the other hand convergence rates $O(\sqrt{pdim/N})$ are standard in SLT.)
The second general significant issue is that the whole setting of data-learnable projected optimization proposed in the paper seems fairly artificial to me. This setting obviously assumes the variables and constraints to be somehow aligned between different LP instances (in contrast, random projections do not require any such alignment). This point is discussed in Remark 3.2, where it is mentioned that such scenarios arise in daily production planning and flight scheduling. It would be interesting to see a particular real scenario of this type. All the examples experimentally studied in the paper, even those that the authors call realistic, do not look realistic to me. As far as I understood, each of the 8 considered families is obtained by synthetically perturbing a single LP instance. This artificial setting is obviously very favorable to the proposed algorithms compared to the baselines. If the perturbation goes to 0, the family degenerates into a collection of identical LP instances that can all be perfectly "solved" by recalling a single solution, so the proposed training-based methods can be made to look arbitrarily more efficient than the baselines by tuning the perturbation magnitude.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the reviewer's thoughtful and inspiring feedback. Below we present our responses to the comments.
> Weaknesses:
>
> The most significant issue that I see is that the derived generalization bound is vacuous for the LP families experimentally studied in the paper, [...] However, the paper seems to suggest (say, in the end of section 6) that the derived bound has some practical value while making no attempt to actually discuss specific numbers associated with its experiments, which I find completely misleading.
We apologize for any misleading expressions, particularly the sentence at the end of Section 6. The primary purpose of the experiments was to observe the empirical performance of projection matrices learned with the PCA- and SGA-based methods, rather than to evaluate the sharpness of the bound in Theorem 4.4. We intended to communicate that the bound of $\varepsilon \lesssim Hk\sqrt{n/N}$ on the generalization error could be meaningful when $N$ is sufficiently large. Although using such a large training dataset may seem demanding, it is a plausible future scenario given the trend towards accumulating and utilizing more data to advance optimization technologies, as evidenced by projects like AI4OPT. We will revise the manuscript to clarify this point and avoid any misconceptions. We appreciate the reviewer pointing out this issue.
To merely supplement our revised statements, we conducted an additional experiment, shown in Additional Experiment 2 in the [global response](https://openreview.net/forum?id=jHh804fZ5l&noteId=wB14h9cLk0), using a larger dataset of smaller LP instances for the Packing problem. The dataset consists of $20,000$ training and $20,000$ test LP instances of size $(n, m) = (50, 5)$, and we set the reduced dimensionality $k$ to $2$. Substituting these parameters into the bound implies $\varepsilon \lesssim Hk\sqrt{n/N} = 0.1H$, which is now non-vacuous. We computed approximate generalization errors (the left-hand side in Eq. 4) for projection matrices learned with PCA- and SGA-based methods, where the true expectation is approximated by taking an average over the $20,000$ test instances. Figure 2 in the global response compares these to the reference bound of $k\sqrt{n/N}$ implied by the theoretical analysis (omitting log and constant factors). While empirical generalization errors are typically much better than what the theory implies, all curves show a decreasing trend, and the theoretical upper bound converges to zero as $N$ increases, which could offer meaningful bounds when $N$ is large. We hope this clarification effectively addresses the reviewer's concern.
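For concreteness, the substitution into the bound (omitting log and constant factors) works out as:

```latex
\varepsilon \;\lesssim\; Hk\sqrt{n/N}
\;=\; H\cdot 2\cdot\sqrt{50/20000}
\;=\; H\cdot 2\cdot\tfrac{1}{20}
\;=\; 0.1H .
```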
> (This does not undermine the theoretical merit of Theorem 4.4; on the other hand convergence rates $O(\sqrt{pdim/N})$ are standard in SLT.)
Just to clarify, we wish to emphasize that the fact that $O(\sqrt{{\rm pdim(\mathcal{U})}/N})$ is standard in statistical learning theory also does not undermine the contribution of Theorem 4.4. The contribution of Theorem 4.4 lies in establishing the bound on the pseudo-dimension as ${\rm pdim(\mathcal{U})}=O(nk^2\log mk)$, not in asserting the $O(\sqrt{{\rm pdim(\mathcal{U})}/N})$ bound on the generalization error. The latter is used merely as a standard fact in our paper.
> The second general significant issue is that the whole setting of data-learnable projected optimization proposed in the paper seems fairly artificial to me. [...] This artificial setting is obviously very favorable to the proposed algorithms compared to the baselines. If the perturbation goes to 0, the family degenerates into a collection of identical LP instances that can all be perfectly "solved" by recalling a single solution, so the proposed training-based methods can be made to look arbitrarily more efficient than the baselines by tuning the perturbation magnitude.
We acknowledge the reviewer's points and appreciate the insights provided. While attempting to obtain more realistic datasets, we found that publicly available repositories such as Netlib, which we used in our experiments, only offer a single LP instance for each setting and do not provide the volume of data necessary for training. Thus, we created datasets by adding noise. Please note that our realistic datasets include outliers, making them less artificial than the reviewer might have perceived.
To address the reviewer's concern that our datasets might be too favorable to our proposed methods, we conducted additional experiments with the Packing, MaxFlow, and MinCostFlow datasets at higher noise levels; please refer to Additional Experiment 1 in the [global response](https://openreview.net/forum?id=jHh804fZ5l&noteId=wB14h9cLk0) for details. Since data-driven methods can derive no benefit from completely noisy data, increasing the noise level effectively creates more challenging datasets.
Figure 1 in the PDF attached to the global response presents the results. For Packing LPs, higher noise levels led to poorer performance of data-driven methods (PCA and SGA), aligning with the reviewer's perspective regarding our methods applied to less favorable settings. By contrast, for MaxFlow and MinCostFlow LPs, data-driven methods, particularly SGA, continued to perform well even in highly noisy environments. As described in the global response, this robustness is probably attributed to the fixed graph topology. Since fixed graph topologies are prevalent in practical scenarios such as transportation planning on traffic networks, we believe the noise resistance in these datasets indicates substantial additional merit of our methods.
This unexpected finding underscores the value of the reviewer's feedback, which has helped us present another positive aspect of our approach. We hope our responses adequately address the reviewer's concerns and present our work in a more positive light. Please do not hesitate to reach out during the discussion period if there are any further questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply and addressing both of my concerns. I'm raising my score. | Rebuttal 1:
Rebuttal: ## **Global response**
We sincerely thank all reviewers for their efforts in reviewing our paper and providing invaluable feedback.
This global response reports the results of two additional experiments. While these were primarily conducted to address the comments by Reviewer vPup, notably, Figure 1 in the attached PDF deserves mention here. We thank Reviewer vPup for providing comments that inspired these interesting experiments.
### **Additional experiment 1 (Figure 1): higher noise levels**
We investigated what would happen if we made the training datasets more challenging for our methods to learn projection matrices. We used the synthetic datasets described in Section 6, namely, Packing, MaxFlow, and MinCostFlow, and fixed the dimensionality $k$ of projected LPs to $20$. An important difference from the original experiments in Section 6 is the increased noise level $\omega$, which perturbs LP inputs through multiplication by $1+\omega$. Here, we draw $\omega$ from a uniform distribution over $[0, \bar\omega]$ with the upper bound $\bar\omega$ ranging from $0.0$ to $2.0$ in increments of $0.2$; originally, $\bar\omega$ was fixed at $0.1$. The larger $\bar\omega$ is, the less consistent the tendencies in the input LPs become, making it more challenging to learn effective projection matrices with our PCA- and SGA-based methods.
### **Results**
Figure 1 in the attached PDF presents the objective ratio (i.e., objective values divided by true optimal values computed with "Full") achieved by each method for Packing, MaxFlow, and MinCostFlow datasets.
**Packing.** The performance of our data-driven methods (PCA and SGA) worsens as $\bar\omega$ increases, as expected. While they exhibit a clear advantage over the random-projection baseline (ColRand) at small $\bar\omega$ values, they behave similarly to ColRand when $\bar\omega=2.0$.
**MaxFlow and MinCostFlow.** In contrast to the Packing case, our data-driven methods, particularly SGA, performed well even with high noise levels, while the performance of ColRand remained poor. We guess the success of data-driven methods is due to the fixed graph topology, which creates consistent tendencies across LP instances despite varying edge capacities and costs. Note that fixed graph topologies are ubiquitous in practice; in daily transportation planning, the topology of traffic networks is fixed, while the edge capacities and costs may fluctuate due to congestion. The results highlight the potential benefits of our data-driven projection methods in such applications, newly discovered through this additional experiment.
### **Additional experiment 2 (Figure 2): visualizing the theoretical bound**
We use this experimental result primarily to address the first weakness comment by Reviewer vPup, and thus we detail the experimental settings there. Our purpose here is to describe that the theoretical bound on generalization errors can be non-vacuous when $N$, the dataset size, is sufficiently large relative to problem-size parameters such as $k$ and $n$. While the theoretical bound presented in Figure 2 may appear loose compared with actual generalization errors (approximated by averaging over $20,000$ instances), the bound converges to zero as $N$ increases, offering a meaningful bound on generalization errors when $N$ is large.
Due to constraints on time and computational resources, the above experiments are limited to synthetic datasets (with smaller sizes in the second experiment). Nevertheless, we believe these results have sufficient implications regarding the behavior of our learning methods (PCA and SGA) on more challenging instances and the connection between theory and practice.
Pdf: /pdf/54a1e3913a06fbd51cae7e7ed04ed6451d0bee9a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention | Accept (poster) | Summary: This paper improves upon the previous work, M3DRef-CLIP, through three key modifications: First, the authors incorporate an additional proposal probability prediction branch followed by a NMS operator to filter out low-confidence and redundant object proposals. Second, they learn camera pose residuals to dynamically render the object proposals from multiple views and extract multi-view CLIP features as the 2D feature. Third, they incorporate spatial relations weighted by learnable weights from visual and text features. The proposed method outperforms M3DRef-CLIP in several settings and datasets.
Strengths: • The proposed method demonstrates state-of-the-art performance in both multi-object and single-object 3D grounding tasks.
• The ablation study clearly shows the performance improvement brought by each modification.
Weaknesses: 1. Despite the performance improvement, the time and memory complexity of the proposed methods seem substantial, particularly the multi-view rendering step. It would be beneficial to include a complexity comparison with baseline methods.
2. In M3DRef-CLIP, both F1@0.25 and F1@0.5 are reported. Including the same evaluation metrics would be advantageous, as this paper is an improvement over M3DRef-CLIP.
3. The paper would benefit from more detailed motivations and explanations. For example, what motivates the design of camera pose offset prediction (Eq. 5) in this manner? Why is the designed LISA module better than the language-conditioned spatial self-attention in ViL3DRel [9]?
4. What would the performance be if the camera pose offset were removed while keeping the average of multi-view CLIP features?
5. Is there any supervision on the predicted proposal probability (Eq. 1)? How does the threshold in Eq. 2 affect the performance?
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to the questions in the weaknesses section. I would consider increasing the rating if the authors can address my questions and concerns properly.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have discussed some limitations and indicated that there are no potential societal impacts of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer rjTU
```
Q10. Time and memory complexity comparisons
```
Thanks for the suggestion. We show the FLOPs and inference time of each proposed module and a comparison with the baseline model M3DRef-CLIP in Tab. R5. All experiments are conducted on the Multi3DRefer validation set on a single NVIDIA A100 GPU. The reported FLOPs and inference time are averaged over the validation set.
We observe that the dynamic box proposal module and the dynamic multi-view renderer in the dynamic vision module contribute marginally to the computation. The additional computation in the language-informed spatial fusion module is also minimal. In other words, our model achieves better grounding performance without significantly increasing computations.
**Table R5. Computational cost for proposed modules**
| Module | FLOPs | Inference time |
|:-|:-:|:-:|
| Baseline detector | 943.1 M | 0.235 s |
| Detector w/ dynamic box proposal | 943.1 M | 0.241 s |
| Baseline renderer | 638.9 G | 0.271 s |
| Dynamic multi-view renderer | 638.9 G | 0.276 s |
| Baseline fusion module | 155.3 M | 0.004 s |
| Language-informed spatial fusion | 247.4 M | 0.007 s |
```
Q11. Comparison of the additional metric F1@0.25 with M3DRef-CLIP
```
Please note that M3DRef-CLIP did not provide quantitative results for F1@0.25 in their tables. Here, we use their model to provide additional comparisons with M3DRef-CLIP over F1@0.25 in Tab. R6. We observe that our D-LISA achieves a better overall F1@0.25 score, especially for multiple targets and sub-categories where distractors of the same semantic class exist. This aligns with our observations for the F1@0.5 results in Tab. 4.
**Table R6. F1@0.25 (↑) result on Multi3DRefer validation set**
| Model | ZT w/o D | ZT w/D | ST w/o D | ST w/D | MT | All |
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
| M3DRef-CLIP | 81.8 | 39.4 | 53.5 | 34.6 | 43.6 | 42.8 |
| M3DRef-CLIP w/NMS | 79.0 | 40.5 | **76.9** | 46.8 | 57.0 | 56.3 |
| D-LISA | **82.4** | **43.7** | 75.5 | **49.3** | **58.4** | **57.8** |
```
Q12. What is the motivation for camera pose offset prediction?
```
In L30-32, we motivated the need for the renderer's viewpoints to differ across scenes and object sizes. To avoid poor viewpoint initializations, we start from the fixed viewpoints and predict an offset for each viewpoint. In Eq. 5, we use an MLP to learn the camera pose offset for each view based on the average box size, as the object size is a strong indicator when choosing the rendering camera pose.
```
Q13. Why is the proposed LISA better than the language-conditioned spatial self-attention in ViL3DRel?
```
ViL3DRel [C] incorporates hand-selected features to guide spatial relations. While these hand-selected features perform well when provided with perfect (ground-truth) object proposals, they are not robust to errors, a typical issue with hand-designed features. In particular, when these features are used with detected (noisy) object proposals, their performance decreases.
As is shown in Tab. A1 and Tab. A2, though ViL3DRel works well on the Nr3D benchmark which provides ground truth box proposals, the performance is much worse when validating on the ScanRefer benchmark where no ground truth proposals are provided. We compare our LISA and the language-conditioned spatial self-attention in ViL3DRel on the Multi3DRefer benchmark in Tab. A4. The results validate the effectiveness of our module.
* [C] S. Chen, P.-L. Guhur, M. Tapaswi, C. Schmid, and I. Laptev. "Language conditioned spatial relation reasoning for 3d object grounding". In NeurIPS, 2022
```
Q14. What would the performance be if camera pose offset were removed?
```
Please see the ablation study in rows 1 and 2 of Tab. 4 where using the camera pose offset (DMR) improves the model performance.
```
Q15. How is the predicted proposal probability supervised?
```
The predicted probability is supervised end-to-end by weighting the detector features with the probabilities, as shown in Eq. 4. To encourage a smaller number of boxes, we also regularize with the loss proposed in Eq. 3.
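As a loose illustration of this weighting-plus-regularization idea (hypothetical shapes and names; not the paper's code, and the regularizer below merely stands in for the loss of Eq. 3):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical shapes: N = 8 candidate boxes with D = 16 detector features each.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))   # per-proposal detector features
logits = rng.normal(size=(8,))     # per-proposal logits

probs = sigmoid(logits)            # predicted proposal probabilities
weighted = feats * probs[:, None]  # weight features by probability

# A sparsity-style regularizer that penalizes high probabilities, encouraging
# fewer active boxes (standing in for the paper's dynamic proposal loss,
# whose exact form we do not reproduce here):
reg_loss = probs.sum()
print(weighted.shape)
```

Because the weighting is differentiable, gradients from the downstream grounding loss flow back into the probability predictor, which is what makes the supervision end-to-end.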
```
Q16. How does the threshold 0.5 in Eq.2 affect the performance?
```
Thanks for the suggestion. As we use a sigmoid nonlinearity in Eq. 1, the maximum-likelihood estimate corresponds to a threshold of 0.5. In Tab. R7, we further conduct experiments following the setting in Sec. 4.3 but using different thresholds. We observe that 0.5 results in the best performance.
**Table R7. Additional ablation studies on the filtering threshold.**
| Threshold | 0.4 | 0.5 | 0.6 |
|:-|:-:|:-:|:-:|
| F1@0.5 (↑) | 49.2 | **50.3** | 47.2 |
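To illustrate why a sigmoid makes 0.5 the natural decision boundary (a generic check, independent of the paper's model): thresholding the probability at 0.5 is identical to thresholding the raw logit at 0.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([-2.0, -0.1, 0.0, 0.3, 4.0])
probs = sigmoid(logits)

# Keeping proposals with probability > 0.5 is the same as keeping
# proposals with a positive logit, since sigmoid(0) = 0.5.
assert np.array_equal(probs > 0.5, logits > 0.0)
```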
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you to the authors for providing the rebuttal. However, most of my concerns remain inadequately addressed. Here are some additional comments:
1. I am puzzled by the complexity comparison between the baseline renderer and the dynamic multi-view renderer. How is it that the multi-view renderer and CLIP feature extraction have almost no impact on memory and time?
2. The LISA method also computes the pairwise L2 distance between box centers, which is a feature in Vil3DRef. Shouldn't this distance calculation also be affected by noisy detected proposals?
3. Rows 1 and 2 of Table 4 do not show the performance when removing the camera pose offset while keeping the average of multi-view CLIP features.
4. The results in Table R7 do not match the performance reported in Table 4, which is quite confusing.
5. I agree with Reviewer su8L and WVQA that the three improvements over M3DRef-CLIP proposed in this paper are more engineering-focused and show limited novelty.
Hence, I would like to maintain my initial rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for the careful review and additional feedback. We address the additional concerns below:
```
1. Complexity of the dynamic multi-view renderer.
```
Sorry for the confusion. Recall that the baseline renderer in M3DRef-CLIP **also needs to render from multiple views** and **extracts the CLIP features**. The additional overhead brought by our dynamic multi-view renderer is **only the camera viewpoint offset prediction**, which has little impact on the overall computation.
```
2. Would L2 distance also be affected by noisy detected proposals?
```
Yes, we agree that the L2 distance would be affected by noisy detected proposals. However, prior works, like M3DRef-CLIP and 3DVG-Transformer [D], have demonstrated the effectiveness of the L2 distance in modeling spatial relationships in the noisy detected-proposal setting. Our method does not rely on other hand-crafted features.
* [D] L. Zhao, D. Cai, L. Sheng, and D. Xu. 3DVG-Transformer: Relation modeling for visual grounding on point clouds. In ICCV, 2021.
```
3. What would the performance be if camera pose offset were removed? (Q14)
```
Again, M3DRef-CLIP's baseline renderer **also renders from multiple views** and **extracts the CLIP features**. So the ablation study in row 1 of Tab. 4 is exactly the performance when the camera pose offsets are removed.
```
4. The results in Table R7 do not match the performance reported in Table 4.
```
Sorry for the confusion. Due to limited rebuttal time and computational resources, we conducted this ablation on the dynamic proposal module only. The experiment follows the setting in row 3 of Tab. 4. We will provide an ablation on the filtering threshold for the full model.
```
5. Novelty and engineering focus of the proposed modules
```
Thanks for the clarification. We believe our approach is sufficiently novel and has not been done by prior works. Next, we do not share the sentiment that "engineering-focused" correlates with a lack of novelty. Our proposed modules are well-motivated and lead to an effective system that outperforms SOTA. | Summary: This paper introduces a novel two-stage approach for multi-object 3D grounding from a point cloud based on a given query phrase. The first stage of D-LISA uses a dynamic proposal module that selects a variable number of box proposals instead of a fixed maximum, addressing the issue of determining the optimal number of proposals in the scene. D-LISA incorporates a dynamic multi-view renderer module that optimizes viewing angles for each proposal based on the specific scene, moving away from the fixed camera poses used in prior work. The second stage introduces a module that reasons over the spatial relationships among objects, guided by the textual description, improving the contextual understanding of the model. Experiments conducted on the Multi3DRefer benchmark demonstrate that D-LISA outperforms the state-of-the-art methods by a significant 12.8% absolute increase in multi-object 3D grounding performance. It also shows competitive results in single-object 3D grounding tasks.
Strengths: 1. The method shows a substantial improvement over existing baselines, indicating effective handling of complex 3D scenes with multiple objects.
2. The dynamic module can reduce the proposals effectively.
Weaknesses: 1. The novelty of the dynamic vision module is limited. In fact, I think the term "dynamic vision module" is somewhat of an exaggeration. From my perspective, the authors simply calculate the probability of each box candidate to remove low-probability boxes and use NMS to filter overlapping boxes. Neither operation amounts to a novel dynamic vision module.
2. The novelty of the LISA module is also doubtful. The authors use spatial scores to modulate the self-attention and spatial distance matrix. Although this operation is innovative to some extent, it is not enough for acceptance. It is more like a trick than a novel contribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors re-organize the novelty according to the weakness to highlight the key points more clearly?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer WVQA
```
Q8. The dynamic vision module only removes the low-probability boxes and uses NMS to filter overlapping boxes, which is not novel.
```
We believe the reviewer is referring to the dynamic box proposal module. For the dynamic box proposal module, we do not claim NMS to be the novel contribution. We clearly state *"we employ a dynamic box proposal module **with** non-maximum suppression (NMS)"* in L98-99.
Our contribution to this module is the learning of a dynamic proposal probability, supervised by the new dynamic proposal loss introduced in L113-116. This enables end-to-end training by weighting the detector features with the probabilities, as shown in Eq. 4. Additionally, we propose the dynamic multi-view renderer to adapt camera viewpoints to different scenes.
We respectfully disagree with the reviewer that the proposed method is not novel. To the best of our knowledge, prior works in 3D grounding have not studied these aspects. We would be grateful if the reviewer could provide references supporting the claim that these aspects have been studied before.
```
Q9. Novelty of the LISA module
```
Our baseline model M3DRef-CLIP follows 3DVG-Transformer, which directly uses spatial distances as additional attention weights without further reasoning. Prior works, like ViL3DRel [A] and CORE-3DVG [B], explored spatial relations with hand-selected features. These hand-selected features work with ground-truth object proposals but lead to worse performance when the object proposals are predicted, i.e., noisy. As shown in Tab. A1 and Tab. A2, though ViL3DRel works well on the Nr3D benchmark, which provides ground-truth box proposals, its performance is much worse when validated on the ScanRefer benchmark, where no ground-truth proposals are provided.
To address these shortcomings, we propose to use language-guided spatial scores to balance the visual attention weights and spatial attention weights. We believe our LISA design greatly differs from the existing works.
* [A] S. Chen, P.-L. Guhur, M. Tapaswi, C. Schmid, and I. Laptev. "Language conditioned spatial relation reasoning for 3d object grounding". In NeurIPS, 2022
* [B] L. Yang, Z. Zhang, Z. Qi, Y. Xu, W. Liu, Y. Shan, B. Li, W. Yang, P. Li, Y. Wang, et al. "Exploiting contextual objects and relations for 3d visual grounding". In NeurIPS, 2023. | Summary: This paper proposes D-LISA, a two-stage framework for multi-object 3D grounding. D-LISA consists of three novel components that make the method effective, namely a dynamic box proposal module, a dynamic multi-view renderer and a language informed spatial fusion module. Comprehensive Experiments are done on Multi3DRefer, ScanRefer and Nr3D datasets to prove the superiority of D-LISA over previous methods. Experimental results show that D-LISA not only outperforms previous methods on multi-object grounding, but also achieves comparable results on single-object grounding.
Strengths: 1. The paper is well-written and easy to understand and the figures help with the illustration of the overall idea.
2. Comprehensive experiments and ablation studies are conducted.
3. Implementation and evaluation details are clearly provided, making this work easy to follow.
Weaknesses: 1. In Table A2, the proposed method falls behind state-of-the-art method by a large margin, which somehow is not competitive.
2. In the ablation study (Table 4), I'm curious what would happen if two modules were combined, e.g., abandoning LIS and using DBP and DMR, since the improvement of the full model is not significant compared to the models with one component.
3. The analysis of the dynamic pose distribution is lacking. Readers cannot tell from Figure 4 (b) whether the dynamic-pose rendered results are better. The fixed-pose results seem to view the object from a more informative angle.
4. The effect of NMS on the proposed method seems unclear; the paper only conducted experiments by adding NMS to a baseline method, which resulted in much better performance.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see the weakness part.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer W21s
```
Q4. Comparisons with SOTA methods on Nr3D benchmark in Tab. A2.
```
As is mentioned in L415-416, the Nr3D benchmark **assumes perfect object proposals**, which is not the most realistic setting. Hence, we follow M3DRef-CLIP to consider the setting where object proposals need to be detected.
We note that M3DRef-CLIP made a similar observation: models designed for non-perfect object proposals generally perform less effectively in the perfect object proposal setting. In Tab. A2, we report the comparisons for completeness of the evaluation. However, we do not believe this is a fair comparison, as the models are designed with different intentions.
```
Q5. Additional ablation study on the proposed modules.
```
Here we provide an additional ablation by gradually adding the proposed modules in Tab. R3. The experiments follow the setting in Tab. 4. We observe that each proposed module improves the model. The overall model with all proposed modules achieves the best result.
**Table R3. Additional ablation studies on combined modules.**
| LIS | DBP | DMR | F1@0.5(↑) |
|:-:|:-:|:-:|:-:|
| ✗ | ✗ | ✗ | 49.3 |
| ✓ | ✗ | ✗ | 50.4 |
| ✓ | ✓ | ✗ | 50.9 |
| ✓ | ✓ | ✓ | **51.2** |
```
Q6. Why is the rendering with dynamic pose better?
```
Sorry for the confusion. It is difficult to judge what is *"more informative"* from a model perspective. The illustrations in Fig. 4 (a) and (b) are meant to show the differences between the learned poses and the fixed poses. Empirically, we observe that the dynamic camera pose leads to better grounding performance; see rows 1 and 2 in Tab. 4.
```
Q7. What is the effect of NMS on the proposed method?
```
As is mentioned in L214-216, adding an NMS module removes the duplicate predictions and leads to a higher recall, thus improving the F1 score. The state-of-the-art method M3DRef-CLIP did not include the NMS module in their pipeline. We show the effect of the NMS module by adding a baseline of M3DRef-CLIP with NMS in Tab. 1.
To provide a more comprehensive analysis of the NMS module, we show the additional comparison between our proposed D-LISA and D-LISA **without** NMS on Multi3DRefer. See Tab. R4 below. We observe that our designed D-LISA outperforms the baseline M3DRef-CLIP both with and without the NMS module. Using the NMS module would lead to a higher F1 score compared to not using it.
**Table R4. The F1@0.5 (↑) w/ and w/o the NMS module.**
|NMS| M3DRef-CLIP | D-LISA |
|:-:|:-:|:-:|
| ✗ | 38.4 | 39.8 |
| ✓ | 49.3 | **51.2** |
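For readers unfamiliar with the NMS step discussed above, here is a generic greedy NMS sketch (standard algorithm, not the paper's code; 2D axis-aligned boxes are used for simplicity, whereas the actual pipeline operates on 3D proposals):

```python
import numpy as np

def iou(a, b):
    # Axis-aligned IoU for [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    # Greedy NMS: keep the highest-scoring box, drop boxes overlapping it.
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) < thresh])
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```

Removing near-duplicate predictions this way raises recall per prediction budget, which is consistent with the F1 gains reported in Tab. R4.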
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough experiments and ablations. They addressed the concerns I had. The work has been made more comprehensive with the experimental results provided. However, I agree with the other reviewers that the work is introducing useful techniques for multi-object grounding from an engineering perspective, therefore I'd like to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer W21s,
Thank you for the feedback. We are glad that we could address your concerns.
We believe that our approach is sufficiently novel and that improving the engineering perspective of a method to build an effective system is important. However, we understand and respect the difference in opinion.
Best,
Authors | Summary: The paper introduces D-LISA, a two-stage approach for multi-object 3D grounding that incorporates three innovative modules. First, a dynamic vision module generates variable and learnable box proposals. Second, a dynamic multi-view renderer extracts features from optimized viewing angles. Third, a language-informed spatial attention module reasons over the proposals to output final predictions. Empirically, D-LISA outperforms state-of-the-art methods by 12.8% in multi-object 3D grounding and is competitive in single-object 3D grounding.
Strengths: 1. This paper is generally well-written and clearly stated.
2. The key idea lies in enhancing visual understanding and human-computer interaction by improving the ability to locate objects in 3D scenes based on natural language descriptions.
3. Experiments demonstrate that D-LISA outperforms the existing state-of-the-art, indicating the effectiveness of the proposed innovations.
Weaknesses: 1. It is not clear what core issue this paper is targeting in the task of dynamic multi-object 3D grounding. I suggest the authors include this in the abstract and introduction section. To me, the three improvements over M3DRef-CLIP are very engineering.
2. The paper could benefit from a more detailed ablation study that isolates the impact of each dynamic component (the dynamic proposal module, the dynamic multi-view renderer, and the LISA module) on different types of scenes and queries to better understand their individual contributions.
3. The paper does not provide details on the computational cost of the dynamic components, such as the multi-view rendering and the language-informed spatial attention module.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## To Reviewer su8L
```
Q1. Core issues targeted for multi-object 3D grounding
```
In L26-33, we summarized the targeted issues and the proposed solutions. Concretely, we identified that:
- object proposals are selected based on a **fixed** maximum number,
- the feature extractions from the proposals are based on a set of **fixed** camera poses for all scenes,
- the fusion module of the existing method lacks effective reasoning over spatial relations among objects.
We believe these aspects are crucial to building an effective multi-object 3D grounding system. We then propose a dynamic vision module and a language-informed spatial fusion module to address these issues. We will clarify the introduction.
```
Q2. Ablation studies on different types of scenes and queries
```
We evaluate the contribution of each proposed component in Tab. 4. The performance of individual modules is shown in rows 2, 3, and 4 respectively. The different categories of scenes and queries are introduced in L197-201, including
- zero target without distractors of the same semantic class (ZT w/o D);
- zero target with distractors (ZT w/D);
- single target without distractors (ST w/o D);
- single target with distractors (ST w/D);
- multiple targets (MT).
These scene and query categories follow the prior work M3DRef-CLIP.
Additional ablations for different query types, including queries with spatial, color, texture, and shape information, are reported in Tab. R1, where we report the F1@0.5 (↑) metric on the Multi3DRefer validation set. We observe that each proposed module effectively improves performance for queries that contain spatial, color, and shape information, and is competitive with the baseline for queries with texture information. The overall model achieves better grounding performance across all query types than the baseline.
**Table R1. Additional ablation studies on question types**
| LIS | DBP | DMR | Spatial | Color | Texture | Shape |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| ✗ | ✗ | ✗ | 48.9 | 50.8 | 52.1 | 51.7 |
| ✗ | ✗ | ✓ | 49.4 | 51.1 | 51.7 | 51.8 |
| ✗ | ✓ | ✗ | 49.9 | 51.4 | 51.8 | 53.0 |
| ✓ | ✗ | ✗ | 50.0 | 51.7 |**53.4** | 52.4 |
| ✓ | ✓ | ✓ | **50.9** |**52.1**| 52.9 | **53.3** |
```
Q3. The computational cost for each proposed module
```
Thanks for the suggestion. We show the FLOPs and inference time of each proposed module and a comparison with the baseline model M3DRef-CLIP in Tab. R2. All experiments are conducted on the Multi3DRefer validation set on a single NVIDIA A100 GPU. The reported FLOPs and inference time are averaged over the validation set.
We observe that the dynamic box proposal module and the dynamic multi-view renderer in the dynamic vision module contribute marginally to the computation. The additional computation in the language-informed spatial fusion module is also minimal. In other words, our model achieves better grounding performance without significantly increasing computations.
**Table R2. Computational cost for proposed modules**
| Module | FLOPs | Inference time |
|:-|:-:|:-:|
| Baseline detector | 943.1 M | 0.235 s |
| Detector w/ dynamic box proposal | 943.1 M | 0.241 s |
| Baseline renderer | 638.9 G | 0.271 s |
| Dynamic multi-view renderer | 638.9 G | 0.276 s |
| Baseline fusion module | 155.3 M | 0.004 s |
| Language-informed spatial fusion | 247.4 M | 0.007 s | | Rebuttal 1:
Rebuttal: We thank all the reviewers and the AC for the thorough reviews. We are happy to see the reviewers' supportive comments and feedback. Reviewers **#su8L** and **#W21s** commend the paper for its clear and well-structured writing. Reviewers **#W21s** and **#rjTU** appreciate the comprehensive experiments and ablation studies, with detailed implementation and evaluation. Reviewers **#su8L**, **#WVQA**, and **#rjTU** all acknowledge the substantial improvement in both multi-object and single-object 3D grounding, indicating the effectiveness of the proposed model. We begin the response by restating our contribution; individual questions are addressed in the responses below.
This paper tackles the problem of multi-object 3D grounding where we identify shortcomings in the prior work, e.g., using a fixed number of boxes and features used for reasoning. To address these shortcomings, we propose a dynamic vision module and a language-informed spatial fusion module. For the dynamic vision module, we introduce a dynamic box proposal module that automatically learns the relevant box proposals instead of using a fixed maximum number of proposals. We propose to learn the camera pose for rendering dynamically based on the scene instead of using fixed camera viewpoints. For the language-informed spatial fusion module, we enable efficient reasoning over spatial relations by learning to balance the visual attention weights and spatial relations guided by language. Extensive experiments on both multi-object and single-object 3D grounding benchmarks validate the effectiveness of the proposed model.
We look forward to a constructive discussion period and hope to address any concerns that the reviewers may have! | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MV2Cyl: Reconstructing 3D Extrusion Cylinders from Multi-View Images | Accept (poster) | Summary: The authors introduce a novel method named MV2Cyl, which reconstructs technical 3D objects within the sketch-extrude paradigm by utilizing a 2D prior model and a learnable radiance field derived from multi-view images. To accomplish this, the 2D prior model is trained on a labeled dataset to predict semantic labels, $K$ extrusion cylinders, and extrusion curves, which are crucial for further processing of the 3D shape. Once the model is trained, it predicts pseudo-ground-truth for the subsequent stage. In this stage, a radiance field model is trained to replicate the pseudo-ground-truth labels. As the radiance field model converges, it facilitates the extraction of surfaces and curves in 3D space. The authors then detail how to utilize this output to reconstruct the final shape. The experimental results demonstrate that each component significantly contributes to achieving 3D reconstructions that surpass the selected baselines. Given that the radiance field is trained solely from the multi-view images and the prior model, it offers improved real-world applicability compared to previous works.
Strengths: - The paper is overall well-written and easy to follow.
- Each component is effectively justified, and the ablation study outcomes distinctly demonstrate their significance.
- Given the extensive datasets like ABC [1], the prior training represents a significant contribution to this research. It is conceivable that the prior model may serve as a foundational model for CAD extraction in subsequent studies.
- Additionally, the prior and inference models are independent, and both can be adapted adequately if new, better models appear in the literature. This should also facilitate scaling with the datasets.
- I appreciate the demo displayed in the Appendix. It illustrates the practical application of the model and reinforces the societal impact discussed in the final section of the paper.
- One can argue about the pros and cons between different paradigms (such as sketch-extrude, CSG, etc.). The choice of sketch-extrude in this instance is substantiated by the results and outperforms the established baselines.
- Publishing the code and the extended dataset upon acceptance is highly appreciated.
[1] S. Koch, A. Matveev, Z. Jiang, F. Williams, A. Artemov, E. Burnaev, M. Alexa, D. Zorin, and D. Panozzo. ABC: A big CAD model dataset for geometric deep learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Weaknesses: - Point2Cyl, ExtrudeNet, and SECAD-Net encode inputs as latent vectors, which are then decoded into the target representation. An encoder in this context could be any model that processes input in a specific modality. Adapting these methods to handle multi-view images appears feasible and would support the paper's claims. Notably, an encoder understanding 3D geometry based solely on 2D images could be advantageous (for instance, utilizing part of the network from [2] as an encoder).
- While the paper is generally understandable, Section 3.4 is hard to follow. Introducing a simple visualization to illustrate the process would aid comprehension.
- In Section 3.4, points 1 and 2 prompt inquiries about the 3D model employed in this study. The authors refer to TensoRF, but it is unclear whether a standard NeRF or NeuS was used. If NeuS was utilized, the predicted normals could be enough to:
- Determine a fitting plane by voting for the plane that most likely aligns with the normals.
- Derive the output shape using a standard algorithm like ball-pivoting or Poisson surface reconstruction, both of which are widely accessible.
- Furthermore, a NeuS backbone would arguably be more appropriate for the application described in the paper, as it more precisely reconstructs surface fields, potentially enhancing the accuracy of the curve field.
[2] D. Watson, W. Chan, R. Martin-Brualla, J. Ho, A. Tagliasacchi, and M. Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022.
Technical Quality: 3
Clarity: 4
Questions for Authors: Questions:
- I will refer to:
> We first extract the surface point cloud and curve point cloud from the corresponding 3D fields by thresholding the corresponding existence fields. Each point in the point cloud would have attributes queried from the corresponding attribute fields.
How is the point cloud sampled, specifically? Is it accurate that the underlying radiance field uses density instead of an SDF field as in NeuS?
Suggestions:
- The explanation in lines 44-47 assumes prior knowledge of the sketch-extrude principle. Without this knowledge, it's challenging to understand the source of the issue. An inset visualization could greatly aid comprehension.
- I propose renaming the "existence field" and "attribute fields". The term "existence field" implies a binary output indicating whether an object exists at a given point. "Density field" would be more appropriate, clarifying how prior values translate to opacity and aligning with the field's intended purpose.
Regarding the "attribute field", the term could be misconstrued as referring to "attributes" such as colors. "Semantic field" would be more accurate.
- It can be deduced from the context that $L_\text{existence}$ applies to both curve and attribute fields. However, stating that clearly in the text would remove potential ambiguities for other readers.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The potential negative societal impact is clearly stated and supported by the real-demo presented in the Appendix.
Regarding the limitations, it is unclear what the authors mean by:
> our framework is limited in predicting binary operations across primitives. We plan to further explore predicting binary operations using multi-view image inputs.
in lines 328-331. Some examples or a visualization in the Appendix would be appreciated here. It would also be worth adding a comment on the shape complexity the model cannot handle.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **Adapting methods to multi-view images (Image encoder + CAD decoder).**
Thank you for the suggestion. First, we want to note that for Point2Cyl and SECAD-Net, it is not feasible to directly replace the point cloud encoder with an image encoder since the encoder does not simply output a latent code but assigns information to each point, such as the segment label or surface normal. ExtrudeNet is the only case that produces a latent code from the input shape. While we couldn't try replacing the point cloud encoder of ExtrudeNet with an image encoder during the tight rebuttal period (as we needed to address questions from five reviewers), we promise to include the results of this experiment in the revision. We strongly believe that our method—segmenting 2D images, unprojecting them to 3D space, and fitting extruded cylinders rather than directly predicting them—will demonstrate superior performance.
- **Section 3.4. Simple visualization.**
Thank you for your suggestion. We have added a simple visualization clarifying the pipeline for reverse engineering CAD parameters, as shown in Figure 1.
- **Clarification on the 3D model (TensoRF) employed.**
Our approach directly builds on top of TensoRF, which is based on a standard NeRF that models a density field directly instead of an SDF as in NeuS. We will clarify this in the final version. We find that this choice of TensoRF works well in our setting and is also fast to optimize. We thank the reviewer for the suggestion of using a NeuS backbone, which is interesting and also makes sense. We leave this exploration for future work.
- **How are point clouds sampled from the learned fields?**
We refer to Appendix A5.3 (Ln 638-642). We query the learned existence field by sampling points across a 400x400x400 grid. The existence values $\mathcal{F} (x)$ are converted into opacity values (Eq. 3), and we keep the sampled points whose opacities are greater than 0.99, obtaining our point cloud.
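As a rough illustration of this sampling step (not the authors' code): the sketch below uses a toy spherical density in place of the learned existence field, a 64³ grid instead of the paper's 400³, an assumed ray-step size, and a standard NeRF-style density-to-opacity conversion standing in for Eq. 3.

```python
import numpy as np

def existence_field(pts):
    # Toy stand-in for the learned field: high density inside a sphere.
    return 500.0 * (np.linalg.norm(pts, axis=-1) < 0.5)

res = 64  # illustrative; the paper queries a 400x400x400 grid
lin = np.linspace(-1.0, 1.0, res)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1).reshape(-1, 3)

density = existence_field(grid)
step = 2.0 / res                         # assumed step size for the conversion
opacity = 1.0 - np.exp(-density * step)  # NeRF-style density-to-opacity

points = grid[opacity > 0.99]  # keep confidently occupied samples
print(points.shape)            # a point cloud filling the sphere's interior
```

The 0.99 cutoff keeps only samples the field is highly confident about, which is why the extracted point cloud stays clean near the surface boundary.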
- **Adding visual aids for lines 44-47.**
Thank you for the suggestion. We will include an additional visualization to aid comprehension in the final version.
- **Renaming “Existence field” and “Attribute field” to “Density field” and “Semantic field”.**
Thank you for your suggestion. That makes sense! We will modify this in the final version.
- **Clearer statement of L_existence.**
Thank you for the suggestion. We will state this explicitly and more clearly in the text of the final version.
- **Clarification on limitation (line 328-331)**
Similar to Point2Cyl, our approach does not explicitly predict the binary operations of the primitives, i.e., whether each primitive is 'added' to or 'subtracted' from the model. We will make this statement clearer in the final version. Note that in this rebuttal, we additionally provide a simple approach to binary operation recovery: one naive and simple approach is an exhaustive search over all possible primitive-operation combinations (2^K possibilities for a model with K primitives), taking the combination that is closest to the observed (input) images. To find the best combination, we measure the average L2 distance between the rendered images of each combination's resulting model and the observed multiview images, and select the lowest. We implemented this approach to obtain the binary operations; examples are shown in Figure 3.
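The exhaustive search described above can be sketched as follows; the 2D masks and compositing rules are simplified stand-ins for the actual multiview renderings:

```python
import itertools
import numpy as np

def search_binary_ops(primitive_masks, observed):
    # Try all 2^K add/subtract assignments for K primitives and keep the one
    # whose composed "rendering" is closest (average per-pixel L2) to the
    # observed image. Binary masks stand in for rendered silhouettes here.
    best_ops, best_err = None, np.inf
    for ops in itertools.product(("add", "sub"), repeat=len(primitive_masks)):
        canvas = np.zeros_like(observed, dtype=float)
        for op, mask in zip(ops, primitive_masks):
            canvas = np.maximum(canvas, mask) if op == "add" else canvas * (1.0 - mask)
        err = np.mean((canvas - observed) ** 2)
        if err < best_err:
            best_ops, best_err = ops, err
    return best_ops, best_err

# Toy example: a square plate with a hole punched out of its centre.
outer = np.zeros((8, 8)); outer[1:7, 1:7] = 1.0
inner = np.zeros((8, 8)); inner[3:5, 3:5] = 1.0
target = outer * (1.0 - inner)
ops, err = search_binary_ops([outer, inner], target)  # -> ('add', 'sub'), 0.0
```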
- **Failure cases.**
Since MV2Cyl relies on 2D image inputs, it is susceptible to occlusion. Specifically, if one side of an extrusion cylinder is completely hidden by the others, our 2D segmentation model cannot detect the hidden side, and the extrusion cylinder cannot be reconstructed. In Figure 5, the left shape is the target CAD model and the right one is the model reconstructed by MV2Cyl. Because the target shape has an inset hexagonal cylinder with one end hidden by the outer one, MV2Cyl fails to reconstruct the inset extrusion cylinder.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal
Comment: I thank the authors for their response and for their effort to provide additional experiments. The attached PDF rebuttal promises significant changes to the final version, and I will be more than happy to see the paper accepted to the conference.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer dRdb,
We're glad our rebuttal addressed your questions. We appreciate your detailed feedback, which greatly helped improve our work. | Summary: This paper proposes a new method to reconstruct extrusion cylinders from multiview images. The key idea is to train a CNN to predict the binary mask for each surface and the sketch of each surface. Then, these predictions are used in learning 3D neural fields for each surface and sketches by the volume rendering technique.
Strengths: 1. The task seems to be interesting in that we use multiview images with semantic segmentation to help reconstruct 3D extrusion cylinders. This method is novel for me.
2. The experiments demonstrate some improvements over previous reconstruction methods.
Weaknesses: 1. How each surface is associated across views is unclear. If I understand correctly, the predicted labels are similar to instance segmentation labels, so the same region may receive different labels in different views. How predicted surfaces are associated across viewpoints is not clear.
2. The prediction of feature lines could be sensitive to the line width. With a large line width, localizing the feature lines could be difficult because we cannot learn a perfect 3D field for them. With a small line width, learning a feature field from signals on very sparse pixels could be problematic, and we may extract disconnected feature lines (sketches). The setting of the line width for training could therefore be tricky.
3. It is difficult to get some real-world images to train these CNNs, which could harm the generalization ability of the proposed method. If we have painted some textures on these CAD models, then the proposed method could fail because it is not trained on models with textures.
4. The proposed method seems to be too concentrated on a specific problem in computer graphics. Predicting extrusion cylinders is not a common task in daily life and the proposed method does not contain technically new ideas on neural networks. The paper seems to be more suitable for some graphics journals or conferences rather than NeurIPS.
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to Weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in the paper, which is the binary prediction of CNNs. I think the method could be sensitive to hyperparameters, which could be a potential limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **Surface label association.**
Yes, you are right that the predicted segmentation maps of the multi-view images are instance segmentation labels. Hence we use Hungarian matching (see Ln 209-215 main paper) to align the labels between the segmentation maps of the training images with the segmentation labels of the attribute field at training time.
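A simplified sketch of this label alignment, using SciPy's Hungarian solver on a toy overlap cost (the paper's actual matching objective may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(pred, ref, n_labels):
    # Bipartite matching on per-pair overlap: relabel `pred` so its instance
    # ids best agree with `ref`. Overlap is negated, since the solver minimises.
    cost = np.zeros((n_labels, n_labels))
    for i in range(n_labels):
        for j in range(n_labels):
            cost[i, j] = -np.sum((pred == i) & (ref == j))
    rows, cols = linear_sum_assignment(cost)
    mapping = {int(r): int(c) for r, c in zip(rows, cols)}
    return np.vectorize(mapping.get)(pred)

# Toy example: the same segmentation, with instance ids permuted across views.
ref  = np.array([[0, 0, 1], [2, 2, 1]])
pred = np.array([[1, 1, 2], [0, 0, 2]])
aligned = align_labels(pred, ref, n_labels=3)  # equals `ref` after matching
```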
- **Line width sensitivity.**
We use a line width of 5 pixels in our experiments, which we found works quite well in our setting. We further experiment on using different line widths, specifically 2.5 pixels (50% thinner) and 7.5 pixels (50% thicker). Results are shown in Table 4, where we see that our approach is not that sensitive to line width.
- **Textured CAD models.**
Yes, our current method does not directly handle textured CAD models. We plan to explore handling textured objects by utilizing generalizable large 2D models such as Segment-Anything (SAM) to extract the object mask removing the texture and similar to our current real data experiments we can remove the background, which together can potentially bridge the domain gap. We leave this exploration as future work.
- **Specific problem with computer graphics.**
In this work, our main idea is to demonstrate how 2D priors can aid in reconstructing 3D structures when combined with neural rendering and to explore the best way to integrate different 2D priors (such as surfaces and curves) in 3D space. Although we focus on reconstructing extrusion cylinders in this study, we believe that the ideas presented have the potential to influence future research in many areas of machine learning. We will revise the introduction of our submission to clarify this point.
- **Limitations: binary operation prediction.**
One simple approach to predicting binary operations without CNNs is to do an exhaustive search against all possible primitives-operations combinations (2^K possibilities for a model with K primitives) and take the best combination, which is closest to the observed (input) images. To obtain the best combination, we measure the average per-pixel L2 distance between the rendered images of each combination’s resulting model against the observed multiview images and select the lowest one. We implement this approach and obtain the binary operations, examples are shown in Figure 3. We leave further explorations for better approaches, such as being cheaper and faster, as future work. | Summary: MV2Cyl is a method that proposes to solve the 3D reverse engineering of CAD models. The network takes as input multi-view images and outputs extrusion cylinders. The method extends Point2Cyl [58] that proposed to predict extrusion cylinder from point clouds. It is argued that multi-view images can easily be obtained from 3D scans and that 2D CNNs have superior performance than 3D processing networks.
The network is composed of three main components. First, 2D U-Net-based CNNs learn to segment and classify the pixels of the 2D images into surface and curve segments. Then, a NeRF-based approach is used to lift the 2D segments into 3D existence and attribute fields. Finally, 3D surface and curve point clouds are extracted from the 3D fields. The extrusion parameters (extrusion axis, height and centroid) and the sketch (implicit function) are estimated from the extracted point clouds.
The method is evaluated on the DeepCAD [64] and Fusion360 [63] datasets, from which multi-view images are rendered. The results show an improvement over Point2Cyl [58] and a NeuS [51]+Point2Cyl baseline.
Strengths: - The paper is fairly clear and easy to understand for a reader that is familiar with Point2Cyl [58].
- The main strength of the proposed work is that it shows an improvement to the results presented in Point2Cyl [58]. This is achieved by leveraging the information provided in multi-view images as opposed to 3D point clouds.
Weaknesses: - While the proposed work improves on Point2Cyl [58], the reviewer feels that it does not address the main limitations of Point2Cyl. First, the binary operations between cylinders (how to assemble the cylinders into the GT CAD model) are not predicted. This limitation is mentioned in the paper, but addressing it would have led to a significant improvement over Point2Cyl. Second, the sketches are predicted as implicit functions and not as parametric curve primitives as in real CAD models. This produces noisy surfaces (as shown in Figure 3 of the paper), which are quite different from the standard sharp representation of CAD models.
- The evaluation consists of a comparison with Point2Cyl using some of the metrics proposed in Point2Cyl. Nevertheless, further analysis of the results could be carried out. For example, it could be meaningful to compute a consistency metric between the surface and curve predictions; one would expect the two networks to predict the same number of extrusion cylinders. Also, are the extracted curve point clouds consistent with the surface point clouds? A comparison to the point-cloud-to-CAD-sequence experiment presented in DeepCAD [64] could also be included, as the extrusion cylinders can easily be extracted from the CAD sequences. It would also be interesting to know how the performance of the network changes with respect to the number of extrusion cylinders (K) present in the ground truth.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The representation of the output is a little unclear. Figure 3 suggests that the output of MV2Cyl is a CAD model. Can a CAD B-Rep be obtained from the output? If so, how are the sketch primitive and parameters obtained from the predicted implicit functions?
Beyond the points mentioned in the weaknesses, it would be interesting to know how the performance of the model changes with respect to the number of input images at test time.
- Many qualitative results are presented in the Appendix, all of them show very impressive predictions. It would also be interesting for the reader to see examples for which the model fails to recover the correct shape.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The authors have addressed one of the main limitations of the work that is the lack of prediction of the binary operations between the different cylinders. Potential societal impacts have also been included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **Binary operations.**
The binary operations can be recovered with a simple, straightforward approach: an exhaustive search over all possible primitive-operation combinations (2^K possibilities for a model with K primitives). We take as the output configuration the combination that is closest to the observed images. To find it, we measure the average per-pixel L2 distance between the rendered images of each combination's resulting model and the observed multiview images, and select the lowest. We implemented this approach to obtain the binary operations; examples are shown in Figure 3. We will include this in our final version.
- **Sketches: from implicits to parametric curves.**
We demonstrate how to convert an implicit sketch to a parametric representation. After acquiring the 2D implicit field for each sketch, we sample boundary points using the marching squares algorithm and use them as knots for cubic Bézier curves. Using these points, we employ an off-the-shelf spline-fitting module from the SciPy library, which finds the best B-spline representation given the control/knot points. This process is illustrated in Figure 4. Note that the resulting sketch may not consist of the minimal set of parametric curves; for example, a single line segment might be divided into multiple segments. Optimizing this aspect is left for future work.
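A minimal sketch of this fitting step, using SciPy's spline-fitting routines and a unit circle as a stand-in for a marching-squares contour:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Boundary points of an implicit sketch (here a unit circle stands in for the
# contour extracted by marching squares). The contour is closed, so the first
# point coincides with the last, as expected for a periodic fit.
t = np.linspace(0.0, 2.0 * np.pi, 51)
x, y = np.cos(t), np.sin(t)

# Fit a periodic interpolating B-spline through the boundary samples.
tck, u = splprep([x, y], s=0, per=True)

# Evaluate the parametric curve densely; points should stay on the unit circle.
xs, ys = splev(np.linspace(0.0, 1.0, 200), tck)
radii = np.hypot(xs, ys)
```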
- **Analysis of results: consistency metric.**
Thank you for your suggestion. We compute a consistency metric between the curve and surface predictions. Specifically, we compute the percentage of models where the number of predicted curve instances and surface instances are identical. Across all the test shapes in the Fusion360 dataset, 98% of the models (1,095 shapes over 1,108 shapes) have the same number of curve and surface segments. Moreover, we also measure the consistency between the extracted curve and surface point clouds. Concretely, we measure the average one-way chamfer distance between the extracted curve point cloud and surface point cloud (0.081 x 10^-3). We see that this very small value suggests that the extracted curve point clouds are quite consistent with the surface point clouds.
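The one-way Chamfer distance used for this consistency check can be computed in a few lines of numpy (a brute-force version, for illustration):

```python
import numpy as np

def one_way_chamfer(src, dst):
    # Average distance from each point in `src` to its nearest neighbour in
    # `dst`. Brute force, O(|src| * |dst|); fine for an illustration.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
surface = rng.random((200, 3))   # stand-in for the extracted surface points
curve = surface[:20]             # curve points that lie exactly on the surface
consistency = one_way_chamfer(curve, surface)  # -> 0.0 (perfectly consistent)
```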
- **Comparison against DeepCAD’s point cloud to CAD.**
Thank you for suggesting a comparison with DeepCAD’s point-cloud-to-CAD framework. We attempted this comparison but encountered several challenges. First, the pretrained model for DeepCAD’s point-cloud-to-CAD framework was not available. Second, retraining the DeepCAD model with our dataset took several days, which was difficult to manage within the short rebuttal period. Third, we tried using an unofficial pretrained model from GitHub, but the resulting CAD outputs were completely different from the input point clouds, suggesting possible bugs in the training process. We plan to contact the authors of DeepCAD to resolve these issues and achieve a fair comparison.
Despite these challenges, we strongly believe that our method—segmenting 2D images, unprojecting them to 3D space, and fitting extruded cylinders rather than directly predicting them—will demonstrate superior performance.
- **Performance w.r.t. different number of extrusion cylinders (K).**
Table 1 shows a breakdown of results for the models in the Fusion360 test set across different numbers of extrusion instances (K). The general trend is that the error steadily increases with larger K, i.e., as model difficulty increases. (Note that the numbers of samples for K=7 and K=8 are very small; please interpret these statistics with caution.)
- **Output representation.**
The output representation is a set of extrusion cylinders. Specifically, each extrusion cylinder has a center (R^3), an extrusion axis (SO(3)), an extent (R^2), and a sketch represented as an implicit function (which can also be converted into a set of parametric curves; see the response above). Furthermore, for each extrusion cylinder, we can also recover its operation, either additive or subtractive (see the "Binary operations" response).
- **Ablation on the number of input images.**
Table 3 shows an ablation of MV2Cyl for a different number of input images at test time. Results show that our approach is not that sensitive to the change in number of input images and still achieves reasonable performance for as low as 10 input views.
- **Failure cases.**
Since MV2Cyl relies on 2D image inputs, it is susceptible to occlusion. Specifically, if one side of an extrusion cylinder is completely hidden by the others, our 2D segmentation model cannot detect the hidden side, and the extrusion cylinder cannot be reconstructed. In Figure 5, the left shape is the target CAD model and the right one is the model reconstructed by MV2Cyl. Because the target shape has an inset hexagonal cylinder with one end hidden by the outer one, MV2Cyl fails to reconstruct the inset extrusion cylinder.
---
Rebuttal Comment 1.1:
Title: Answer to author rebuttal
Comment: The reviewer thanks the authors for the different clarifications and additional results. Most of the concerns were properly addressed by the authors. As for the DeepCAD pretrained point-to-CAD model, the authors could have used the pretrained model (https://github.com/ChrisWu1997/DeepCAD?tab=readme-ov-file#pre-trained-models) and trained their own multi-view image encoder to learn the latent space of the pretrained CAD sequence autoencoder. Based on the elements of the rebuttal, the reviewer raises the rating from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer mRwh,
We are pleased that our rebuttal has clarified the points you raised, and we greatly appreciate the insights you provided.
Regarding the link to the pre-trained models, we found that it does not include the point cloud encoder. We were unable to train the point cloud and multi-view image encoders during the short rebuttal period while allocating our time and computational resources to handle numerous requests for additional experiments from five reviewers. However, we strongly believe that our method, based on surface and curve prediction, will outperform the simple autoencoding architecture. We promise to include this additional experiment in the final version. | Summary: This paper proposes a method to predict extrusion cylinders from images of a CAD part. Specifically, it takes a set of masked multi-view images as input; these are then processed independently by instance segmentors trained to find extrusion curves and surfaces. Neural fields are then fitted, that reconstruct these 2D segmentations in terms of 3D segmentations. Lastly, a set of primitives explaining the 3D segmentations is extracted using a series of heuristics (RANSAC and other hand-crafted robust estimators). Experiments are conducted on two standard benchmark datasets (Fusion 360 and Deep CAD), and show the proposed method out-performing a baseline approach (NeuS reconstruction followed by an existing 3D primitive-prediction pipeline).
Strengths: - The work presents a sophisticated and carefully engineered pipeline, with choice of components clearly motivated.
- The approach of fitting several 3D fields to the 2D segmentations, and regularising to ensure these are binary, is interesting.
- There is a fairly extensive evaluation (qualitative and quantitative) on synthetic (rendered data), including comparison against a baseline that first performs 'naive' surface reconstruction, then applies a 3D-only primitive extraction to find the extrusion cylinders.
- Quantitatively, the proposed method is found to significantly out-perform the reconstruction-then-fitting baseline. Indeed, it also out-performs a method that directly predicts extrusion cylinders from ground-truth 3D point-clouds, which is a notable success since the latter would seem to have access to more complete information.
- Qualitatively, the proposed method appears to correctly capture geometric features of the input objects, using an appropriate (minimal) set of primitives, and providing much cleaner reconstructions than the naive baseline.
- The paper is well-written, sensibly organized and pleasant to read throughout.
Weaknesses: - The work focuses on clean synthetic (rendered) inputs, with only one experiment (in the appendix) on real data. However, operating on multi-view images offers no benefit over 3D-input methods when one has access to the 3D shapes (e.g. as point-clouds). For a strongly 'applied' paper like this, the lack of focus on the practically-relevant scenario (i.e. actual photos) is problematic.
- Real-world results (though still with in-distribution CAD shapes 3D-printed from the datasets) are not very impressive, despite fine-tuning for this setting. There is also no quantitative evaluation here, which is a pity since it would be straightforward given that the ground-truth CAD models are available (modulo 3D-printing imperfections).
- There is no quantitative evaluation of the quality of 3D reconstruction in the sense of distance (e.g. Hausdorff / F1) from the reconstructed shape to the original CAD model. It would be valuable to add this (and compare with NeuS2), to give an idea of how well the shapes are being predicted in a more universally meaningful metric, as opposed to the given metrics, which are specific to the shape representation and difficult to interpret in a broader context.
- There are few generalisable insights present in the paper. It presents a pipeline that evidently works well for the specific task, but the technical contributions are very narrow in applicability, reducing potential impact.
- Overall the paper is a poor fit for NeurIPS – lots of engineering (obviously an achievement in itself), but minimal machine learning (just training a segmentation model on synthetic data, then fitting the single-scene neural fields), making the interest to the community relatively low.
Technical Quality: 3
Clarity: 4
Questions for Authors: How/where do you predict whether each primitive is 'added' or 'subtracted' from the model? The results in Fig. 3 indicate that the model correctly inserts holes, but it is unclear where in the method this determination is made.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: These are discussed briefly but adequately in Sec. 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **Work focuses on clean synthetic data; lacking real data experiments.**
We note that it is difficult to obtain real data with ground truth CAD (sketch-extrude) parameters. In fact, to the best of our knowledge, such a dataset does not currently exist. Hence we opt to use synthetic data to train and test our method. We provide more real data examples in Table 6 (rebuttal) which supplements the real-world examples in Table A3 (supp).
- **Benefits of multi-view images over 3D inputs.**
We note that images are easier to acquire, e.g., with mobile phones or digital cameras, than 3D point clouds, which require a depth camera or LiDAR scanner. Moreover, we also compared directly with Point2Cyl, which takes 3D point clouds as input, in Table 1 (main) and showed that MV2Cyl achieves better performance due to the superior performance of 2D networks over 3D backbones on the segmentation task, as shown in Table A7 (suppl.).
- **Quantitative results on real data.**
We quantitatively evaluate MV2Cyl on the real data found in Table A3 (supp). Our reconstruction achieves an average Chamfer distance of 2.11*10^-3 from the ground truth CAD model. To compute this distance, we first align the reconstructed model to the corresponding ground truth shape using ICP, after which the desired metrics can be computed.
- **Quantitative evaluation on 3D reconstruction using distance-based metric.**
We quantitatively evaluate the 3D reconstruction quality of MV2Cyl by measuring the Chamfer distance against the ground truth CAD model. To obtain the output CAD model, we recover the binary operations through a simple exhaustive search (see "binary operation prediction" below). Results are shown in Table 5, where our approach shows superior performance against the baseline, NeuS2+Point2Cyl. We note that for a fair comparison, we compare with NeuS2+Point2Cyl instead of NeuS2, as the focus of our work is the reverse engineering of extrusion cylinders, which is a harder task than 3D reconstruction alone without the CAD structure. Reverse engineering of extrusion cylinders allows for shape editing as well as importing the model back into CAD software.
- **Generalization insights to the paper/narrow applicability.**
Thank you for acknowledging the generalizable insights presented in our paper. We believe that our approach, which leverages 2D image segmentation and neural rendering for 3D structure reconstruction, can be extended to a broader range of CAD models and thus inspire future research. Additionally, extrusion cylinders are a representation widely used in the CAD industry, and their reconstruction has also been extensively studied, as described in our related work section.
- **Fit to Neurips.**
In this work, our main idea is to demonstrate how 2D priors can aid in reconstructing 3D structures when combined with neural rendering and to explore the best way to integrate different 2D priors (such as surfaces and curves) in 3D space. Although we focus on reconstructing extrusion cylinders in this study, we believe that the ideas presented have the potential to influence future research in many areas of machine learning. We will revise the introduction of our submission to clarify this point.
- **Binary operation prediction: determining whether each primitive is 'added' or 'subtracted' from the model.**
Similar to Point2Cyl, our approach did not explicitly predict the binary operations of the primitives. We further provide a simple approach to the binary operation recovery: one naive approach is to do an exhaustive search against all possible primitives-operations combinations (2^K possibilities for a model with K primitives) and take the best combination, which is closest to the observed (input) images. To obtain the best combination, we measure the average per-pixel L2 distance between the rendered images of each combination’s resulting model against the observed multiview images and select the lowest one. We implement this approach and obtain the binary operations, examples are shown in Figure 3. We will include this in our final version.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their detailed rebuttal. The quantitative 3D metrics in particular are valuable and show a benefit to the proposed method. The additional results on real images are also appreciated, though I still find the results disappointing (similar to those already provided in the main paper) -- perhaps inevitable due to the reliance on synthetic data for training. As such I'm not going to increase my rating, but I'm still marginally in favor of acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer AtoY,
We're pleased our rebuttal addressed your questions and highlighted the benefits of our method. We greatly appreciate your detailed feedback, which has significantly improved our work.
Regarding real data results, techniques for bridging domain gaps or using more realistic textures and backgrounds in training data could improve the quality. We plan to explore these approaches in future research. Thank you for recognizing the potential and value of our work in proposing a novel method for multi-view 3D CAD reconstruction. | Rebuttal 1:
Rebuttal: We appreciate the invaluable feedback from all the reviewers on MV2Cyl. The thorough and insightful comments have significantly contributed to the improvement of our work.
We have carefully considered each question and suggestion and have provided detailed responses to the comments individually. We have compiled the qualitative results discussed in our rebuttal in the attached PDF file.
Pdf: /pdf/838f5239dc1b0cab8137dded3188e0a51e3ec082.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: ## Summary of the Paper:
*Problem Statement*:
Given multi-view images (RGB) of a 3D shape, the paper aims at recovering a *set* of extrusion cylinders to represent the underlying 3D shape.
*Motivation*:
Existing works that address the 3D shape reconstruction problem through sketch-extrude modeling take raw 3D geometry as input (e.g., Point2Cyl), whereas this work takes multi-view images. Other works that take multi-view images as input perform parametric curve reconstruction (e.g., NEF), which, while important, does not reconstruct the 3D surface itself. There thus exists a gap in the community, in terms of either the input representation or the output result, in the context of 3D reconstruction through sketch-extrude modeling. This forms the basis of this work, termed MV2Cyl.
*Contributions*:
Developing a method for sketch-extrude reconstruction of 3D shapes from multi-view images, via labeled surfaces and curves.
*Input*:
Multi-view RGB images of a man-made 3D shape
*Output*:
3D reconstruction of the object based on sketch-extrude cylinders. A cylinder is parameterized by its 2D sketch, extrusion axis, center, and extent.
*Dataset used*:
Fusion 360 and DeepCAD
*Underlying Modeling Tool*:
1) U-Net for processing images (segmenting curves and surfaces)
2) Implicit field-based volumetric rendering for integrating 2D estimated curve and surface information to corresponding 3D space
*Learning Mechanism*:
Strongly supervised
*Loss Functions*:
For 2D surface segmentation, a U-Net with a multi-class cross-entropy loss constrained by the Hungarian matching algorithm is used. The Hungarian matching constraint accounts for the order invariance of the extrusion segments on the rendered 3D shape, as well as for classifying start-end-barrel segments.
The 2D curve segmentation module follows the same architecture and loss functions. In addition, due to the sparsity of labels for curves (which can cause the network to attend only to the background pixels), two additional loss terms are used: Dice loss and focal loss.
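For reference, minimal numpy versions of these two auxiliary losses (illustrative sketches, not the paper's exact implementations):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|). Normalises by foreground size,
    # so it stays informative when curve pixels are sparse.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    # Binary focal loss: (1 - p_t)^gamma down-weights easy, well-classified
    # pixels so the loss is not dominated by the abundant background.
    p = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1.0, p, 1.0 - p)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

# A mostly-background curve mask: only 10 of 1000 pixels are foreground.
target = np.zeros(1000); target[:10] = 1.0
perfect, all_background = target.copy(), np.zeros(1000)
```

Here the perfect prediction yields a near-zero loss under both terms, while predicting all-background (the failure mode described above) is heavily penalised.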
2D to 3D integration using volumetric rendering: Here, the paper describes two fields. One is the Existence field and the other is the Attribute Field.
The training of the Existence field is dictated by the input 2D images, which is accomplished via differentiable rendering. A weighted combination of L2 loss (for surface) and sparsity loss (for curves) is used.
The training of the Attribute field is governed by a multi-class cross-entropy loss, constrained by Hungarian matching.
*Quantitative Metric*:
Extrusion-axis error (E.A.), extrusion-center error (E.C.), per-extrusion cylinder fitting loss (Fit Cyl.), global fitting loss (Fit Glob.)
*Baselines*:
Since existing works do not take multi-view images for sketch-extrude 3D reconstruction tasks, no suitable comparison exists. However, the paper compares against a closely related work (and its variant), Point2Cyl (+NeuS2), which takes a point cloud as input.
Strengths: 1) Well-written paper
2) Suitable comparisons have been made with Point2Cyl (and +Neus2) to investigate the apparent gains rendered by the proposed framework
3) Experimental results on two relevant and complex CAD datasets (Fusion-360 and DeepCAD) are better than other closely related works
Weaknesses: 1) I wanted to understand how essential the 2D curve segmentation network is to making the reverse engineering possible. In other words, what would the 3D cylinder extrusion results look like if this module were not present? This is a critical experiment, something that should have been shown through ablations. A similar ablation for 2D surface estimation is NOT necessary, though, since it is straightforward to see why it is needed in the first place, given multi-view images as inputs.
2) There is a lack of discussion of which kinds of shapes the proposed approach fails on. I would have liked to see a figure (with MV images, 3D GT, and sketch-extrude 3D reconstructions) showing the kinds of shapes MV2Cyl fails at, even if such results are just a tiny sample. This is consequential to the research push in this direction, and its absence does not do the paper complete justice.
3) In Table 1, the different quantitative metrics lack corresponding units. For ex., as I understand, the extrusion axis error (E.A) is an angle measurement. The reported number must have been in degrees, yes? Likewise, it would be good to see units of measurement, if applicable, for other metrics reported therein. If you want to save space, you could just put it in the table, instead of mentioning in the caption.
4) The extrusion instance segmentation is performed by both 2D segmentation modules (Surface and Curve). I sense that there is a strong dependency on this task to help complete the other “auxiliary” task in these modules. As this (extrusion instance segmentation) is a task common to the two, were experiments done to assess whether these two modules (Surface and Curve) could be merged? To me, something is amiss. I would like to see an intuitive explanation justifying this two-step design with a shared task.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) I do not see RGB inputs (as claimed on L 105) in the Teaser as well as Fig 1. I see just a grayscale image. Which one is it?
2) How do you differentiate NEF (CVPR 2023) from the 2D curve segmentation framework? The curves that are of interest can be weakly considered as edges. Can you replace 2D curve segmentation network with NEF (part/whole) to detect the extrusion curves and the base curves?
3) Dataset question (L259–261) – how were these datasets enriched with 2D segmentation maps? That is, how were the ground-truth 2D curve and extrusion segments obtained? Manually? Provide details.
4) I am curious to know why ABC dataset was not considered. It is a highly relevant dataset that would have made the comparisons exhaustive. What was the reasoning here?
5) Can you talk about the extendibility of this approach? I ask this question since I find it non-trivial to train the Existence and Attribute fields, as described in the paper.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, the authors have correctly addressed the limitations of the work, including any societal impact that may arise as a result of the publication.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **2D curve segmentation network ablation.**
See Table A2 and Figure A4 in the supplementary for an ablation on the necessity of the 2D curve segmentation network; referred to as the “Surface only” approach. We see that without the curve segmentation module, occlusions between the extrusion cylinder instances can result in missing base faces leading to poor reconstruction.
- **Failure cases.**
Since MV2Cyl relies on 2D image inputs, it is susceptible to occlusion. Specifically, if one side of an extrusion cylinder is completely hidden by the others, our 2D segmentation model cannot capture the hidden side, and the extrusion cylinder cannot be reconstructed. In Figure 5, the left shape is the target CAD model and the right one is the CAD model reconstructed by MV2Cyl. While the target shape has an inset hexagonal cylinder with one end hidden by the outer cylinder, MV2Cyl fails to reconstruct the inset extrusion cylinder.
- **Units for quantitative metrics.**
Yes, the reported extrusion axis error is in degrees. We will include the corresponding units in the final version.
- **Merging the 2D segmentation modules.**
Thank you for the suggestion. We experimented with combining the 2D curve and surface segmentation networks, using a shared U-Net backbone with two separate output branches for curve and surface segmentation. Results using merged and separate U-Nets to extract the 2D segmentation maps for our extrusion-cylinder reverse engineering task are shown in Table 2. While we see a small improvement on some metrics (Fit Cyl. and Fit Glob.), the results of the two approaches are comparable. We will include these results in the final version.
- **Clarification on RGB input.**
Our segmentation networks take as input a 3-channel image, i.e. RGB. Our training data was constructed by rendering **untextured** models using Blender, which is why the images look like they are grayscale. For real data that may have texture, we first convert the input image into grayscale to bridge the domain gap. We find that this does not harm the performance and we are still able to achieve reasonable curve and surface segmentations as shown in Figure 2.
- **Difference between our curve module and NEF.**
The main difference between our curve module and NEF is that NEF learns a view-dependent color term alongside its edge density field, while our curve module learns a semantic/curve instance segmentation field together with the curve/edge density field. We further note that for the 2D network, NEF utilizes PiDiNet as an edge detection network, while our approach trains a U-Net for curve instance segmentation.
- **How the 2D curve segments are obtained.**
See Section A5.4 in the supplementary materials for details on multi-view image and curve/surface segmentation map data preparation.
- **ABC dataset.**
We ran our method on the DeepCAD dataset, which is a **subset of the ABC dataset** as provided by [Wu et al., 2021] that only includes the sketch-extrude models.
- **Extendibility of the approach; non-trivial to train existence and attribute fields.**
Thank you for the inquiry. Could you clarify what you mean by extendibility? We are happy to answer this in the discussion phase. As for training the existence and attribute fields: given the 2D curve and surface segmentation maps, e.g. from our U-Net, optimizing the existence and attribute fields becomes straightforward; it is a standard NeRF-model optimization.
---
Rebuttal Comment 1.1:
Comment: Authors,
Appreciate the attempt to answer all of my questions.
Re. extendability, the term is self-explanatory, but for your clarity I will elaborate -- by extendability, I was referring to how easily this approach can be modified/"rewired", as well as how easily it can be plugged into another design/framework to tackle a similar problem.
I am aware that the Existence and Attribute fields undergo standard NeRF-model optimization. But do you see any "handling" and "portability" issues as you have two NeRF models being used for the task? All of these questions fall under the hood of "extendability". Let me hear your thoughts.
I would stick to my original rating of BA still.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Ajxc,
Thank you for your valuable feedback and for helping to improve our work.
We appreciate your elaboration on "extendibility" in this context. MV2Cyl could indeed be extended to other tasks requiring integrated 2D-to-3D reconstruction, such as B-Rep modeling from multi-view images. Specifically, our 2D segmentation model can decompose curve/surface patches into segments for B-spline fitting, utilizing datasets like ParSeNet (ECCV 2020). The volume rendering module can then reconstruct 3D points, which can be fitted into B-splines using tools like the NURBS-Python library. Thank you for the feedback, and we will further explore this direction.
Regarding the use of separate volume-rendering models, we did not observe any specific issues with handling the two NeRF models or porting between them. If your concern is about the consistency between the two NeRF models, we kindly ask you to refer to our rebuttal to Reviewer mRwh, where we present the consistency metrics between the curve and surface reconstruction models. If you have any other specific concerns about handling and portability, please elaborate, and we will be happy to address them. | null | null | null | null | null | null |
Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification | Accept (poster) | Summary: This paper proposes a time-series classification method which exploits temporal consistency (implemented by contextual information). Also, the proposed method can handle noisy class boundaries.
Strengths: S1. This paper presents a novel problem formulation, time-series classification with noisy class boundaries.
S2. The proposed method takes full advantage of temporal consistency, which is an inherent property of time series.
S3. The authors created their own proprietary dataset, SEEG, for extensive experiments.
Weaknesses: W1. First of all, the writing in the Introduction and Figure 1 is very misleading. "**Inconsistent** boundary labels" (with "different annotators") and Figure 1(a) suggest that multiple inconsistent annotations coexist for the same time-series instance. However, after reading a few pages, I found a discrepancy between my understanding and the presentation in this paper. Subsequently, I realized that the authors were referring to scenarios in which the class boundaries could be inaccurate. That is, there is **one** possibly erroneous boundary label for each transition. This presentation issue strongly contributed to my negative impression of this paper.
W2. I do not agree that existing studies largely focus on the assumption of i.i.d. It is very easy to find the existing work that considers temporal consistency. For example, please see https://cs.stanford.edu/people/jure/pubs/ticc-kdd17.pdf. More recently, please see https://openreview.net/pdf?id=gjNcH0hj0LM. That is, many existing methods consider that consecutive timestamps tend to have the same class label.
W3. Con-Attention is the same as Anomaly-Attention. Please see Equation (2) in https://arxiv.org/pdf/2110.02642. Therefore, along with W2, the technical novelty for the first contribution is not very significant.
W4. Section 2 is not tightly connected to subsequent sections. In my opinion, Theorem 2 is too obvious, and it does not need to be formulated as a theorem. In fact, it does not look like a formal theorem.
W5. The Tanh function fitting seems to be needed for each segment boundary and is not straightforward (it involves many parameters). Thus, training efficiency could be an issue. It would be good to report the training cost of the proposed method.
W6. The authors assume that the center of a class interval has the highest confidence. This assumption could be true or false depending on the domain. As in the motivating example, if the annotators have a difficulty in identifying the end times of seizure waves, the highest confidence will appear earlier than the center.
W7. The authors made several strong assumptions or heuristics. For example, it is assumed that consecutive segments almost span at most two classes; a class interval is divided into five levels; a training cap is given as five epochs. Thus, this paper lacks rigorousness and generalizability.
W8. It is not trivial to set the hyperparameter values, especially $E_\eta$. The default value is not guaranteed to work well on other datasets. A detailed guideline is necessary.
W9. The degree of noisy labels in each dataset should be analyzed in detail. First, it is not clear why there is no noise in fNIRS and Sleep, while there is real noise in SEEG. Is the annotation strategy different in these datasets? How do you know the ground-truth labels in SEEG? Second, can you measure the degree of real noise just like the noise rate in the image classification domain? Can you approximate the corresponding value of $r$ for this degree of noisy labels in SEEG? Overall, without detailed information on noisy labels, it would be difficult to analyze the effect of noisy label learning.
---
I have adjusted my rating to 5 during the discussion period.
Technical Quality: 2
Clarity: 2
Questions for Authors: See W1~W9.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: In my opinion, the generalizability of the proposed method is not clearly verified. In this regard, the authors did not properly discuss the limitations of the proposed method. Please see my comments to further improve the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1. We apologize for any confusion caused. Fig. 1 is intended to emphasize that, despite the similarity in seizure patterns, different annotators can still provide inconsistent labels for the same type of seizure and across different recordings from the same patient. For simplicity, we only illustrated a segment of brain signals to represent a seizure pattern type, rather than a single instance. This will be explicitly stated in the revised version. Additionally, inconsistent labels do not equate to erroneous labels, as there is a lack of unified quantitative metrics to define boundaries between different classes.
---
W2. Our focus is on supervised TSC, which assigns each segment a single label, as stated in the problem definition in Sec. 3. We acknowledge that the studies you referenced consider temporal consistency. However, the first paper focuses on clustering, which is fundamentally distinct because it involves setting/learning the number of clusters within a sequence. The second paper concentrates on active learning. While these studies leverage temporal consistency, they do not align directly with the supervised TSC models we discuss. And our work explicitly integrates temporal consistency into the modeling of supervised TSC from both the data and label perspectives.
---
W3. The only similarity between the two is the parallel computation of $G^l$. Anomaly Transformer uses the similarity between inter-sequence and intra-sequence distributions to measure anomaly scores, whereas our approach aims to achieve smoother representations through fusion. We have cited this work in the main text (line 143) and have not referred to its code.
---
W4. While Theorem 2 may seem intuitively obvious, we provide a more formal proof from an information-theoretic perspective (lines 109-121). Based on this theorem, we designed our model from both the data and label perspectives, as detailed in Sec. 3.1 (lines 142-149) and 3.2 (lines 170-173, 182-187).
---
W5. Assuming the number of consecutive input time segments is $L$, the hidden representation dimension is $D$, the number of classes is $C$, and the internal iteration steps of the Tanh function fitting are $I$, the time complexity of the function fitting module is $\mathcal{O}(ICL)$. The complexity of the encoder’s final prediction layer is $\mathcal{O}(DCL)$. In practice, keeping $I < D$ helps manage the time complexity of the function fitting module. In our experiments, we set $I=100$ and $D=128$, with average training times reported per epoch.
The additional computational cost is higher for the Sleep dataset due to its classification as a five-class problem. SEEG, like fNIRS, is a binary classification problem. However, the additional overhead from function fitting is relatively smaller due to the larger input sample window length and the total number of training samples. As a result, the function fitting module has a higher computational overhead on datasets with more classes, which is a limitation of our model design. However, this overhead becomes less significant with larger datasets.
| | fNIRS | Sleep | SEEG |
| ----------- | ------------ | ------------ | ------------ |
| Con4m | 11.45s | 37.73s | 24.75s |
| w/o fitting | 1.71s | 3.92s | 6.78s |
| Overhead | $\times$6.70 | $\times$9.63 | $\times$3.65 |
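For concreteness, here is a minimal, hypothetical sketch of the grid-search flavor of such Tanh fitting (our own illustration; the function names, the grid-search procedure, and the example numbers are assumptions, not the paper's code). Scoring $I$ candidate boundaries against $L$ raw per-segment probabilities gives the $\mathcal{O}(ICL)$ shape of the cost for the binary case:

```python
import math

def tanh_curve(t, b, s=1.0):
    # probability of the "later" class at segment index t, with a
    # transition centered at boundary b and sharpness s
    return 0.5 * (1.0 + math.tanh((t - b) / s))

def fit_tanh(probs, s=1.0, candidates_per_step=10):
    # grid-search the boundary b minimizing squared error to the raw
    # per-segment probabilities: I = candidates_per_step * L candidates,
    # each scored in O(L), i.e. O(I * L) overall for this binary sketch
    L = len(probs)
    best_b, best_err = 0.0, float("inf")
    for i in range(candidates_per_step * L):
        b = i / candidates_per_step
        err = sum((tanh_curve(t, b, s) - p) ** 2 for t, p in enumerate(probs))
        if err < best_err:
            best_b, best_err = b, err
    return best_b

# noisy per-segment probabilities with a class transition near index 5
raw = [0.1, 0.0, 0.2, 0.1, 0.4, 0.6, 0.9, 0.8, 1.0, 0.9]
b = fit_tanh(raw)
smoothed = [tanh_curve(t, b) for t in range(len(raw))]
```

The fitted curve replaces the noisy raw probabilities with a monotone transition, which is the coherence constraint the fitting module is after.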
---
W6. Our assumption is that annotators “tend to reach an agreement on the most significant core part of each class,” which is common in temporal action segmentation. We apologize for any confusion caused by Fig. 1; we used real data, so the illustration does not depict an entire seizure episode due to its length. In practice, doctors often disagree on both the exact start and end times of seizures. While domain knowledge can aid in sample selection, our design remains general when such prior knowledge is unavailable.
---
W7. We acknowledge in lines 334-335 that modeling transitions for a single class is a limitation. However, based on domain knowledge and statistical information from the datasets, we can select an appropriate window length that meets this condition. For the auxiliary parameters $N^l$ and $E_g$, we do not consider them core hyperparameters of the model. We ensure that the model’s loss approaches convergence for the new level data within $E_g$ epochs. See more details in global rebuttal G2.
---
W8. Although we tuned $E_\eta$ on the SEEG dataset, we used the same hyperparameter across all 4 datasets, demonstrating Con4m’s robustness. According to Appendix C, $E_\eta$ should not be set too low. For practical use, start with a value of 10 and adjust in intervals of 10 until the validation set performance peaks or slightly declines.
---
W9. The fNIRS and Sleep datasets are publicly available with curated labels, so they are considered noise-free. In contrast, SEEG data is derived from real clinical datasets and annotated by multiple experts, resulting in naturally inconsistent labels. We employ a voting mechanism (Appendix G, lines 746-749) to minimize discrepancies in test labels. However, due to the lack of a unified standard, inconsistencies in SEEG do not imply “incorrect” labels but rather reflect differences in experiences. Unlike video segmentation, where single-frame semantics are clearer, SEEG annotations can vary significantly.
Our goal is to harmonize, not correct, these labels using methods inspired by noisy label learning. Therefore, we cannot directly calculate an r value for SEEG but instead use indirect label substitution experiments (Sec. 4.3) to validate Con4m’s effectiveness in label harmonization. Comparisons with other noisy label learning baselines and ablation studies further demonstrate Con4m’s ability to handle inconsistent labels.
---
Rebuttal 2:
Title: Will keep my current rating
Comment: Thank you for your responses. I have carefully read the authors' rebuttal.
(1) As I mentioned earlier, the overall presentation is somewhat confusing. I now better understand the authors' original intention. However, the current draft needs a major rewrite. The differences between erroneous and inconsistent boundary labels are still not clear to me. To my understanding, this paper deals with possibly erroneous boundary labels caused by inconsistent annotations. Because we cannot review the revised draft, it would be hard to increase the current rating.
(2) The responses on the novelty issues (W2 and W3) and the generalizability issues (W7) are not very convincing to me.
(3) The in-depth analysis w.r.t. the degree of noise or inconsistency (W9) would be necessary to further strengthen this work. A voting mechanism applied to SEEG means curation? If you apply the voting mechanism to the training set of SEEG, it becomes the same status as fNIRS and Sleep? There are several unclear points regarding the motivation and experiment setting.
Overall, I would like to keep my current rating.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. We would like to provide further clarification.
(1) This involves defining what constitutes "erroneous." Erroneous labels imply that we assume there exists an objectively true label behind each label. However, inconsistent labels simply represent differences in annotator experience. As discussed in lines 50-56 of the main text, for MVD, the boundaries between states are not clearly defined, or the transitional states themselves represent a mixed state. Consequently, behind inconsistent labels, there is no artificially defined true label. Perhaps a model could assist in defining this true label, but thus far, our work aims to harmonize this inconsistency as much as possible to reduce the instability of model training and enhance its performance.
(2) The two works you mentioned in W2, one designed for clustering tasks and the other for active learning, both fall outside the scope of TSC. Our designs for representation continuity and label coherence are two sides of the same coin, closely aligned with our intent and motivation. We conduct a hyperparameter analysis for W7 in G2, and in Section 5 we spell out the limitation regarding the diversity of class transition behaviors.
(3) As mentioned in (1), there is no objectively quantifiable true labels behind inconsistent labels, making it challenging to quantify inconsistency. The voting mechanism refers to bringing multiple experts together to collectively decide the boundaries of the test set, rather than independently annotating different patients or files. However, this approach incurs significant costs, as each patient's record is extensive (spanning several days or even a dozen days), hence we can only apply it to the test set. If the training set were also handled in this manner, we have reason to believe that SEEG data could be considered equivalent to clean datasets like fNIRS and Sleep. | Summary: This paper addresses the challenge of segmented time series classification (TSC) for Multiple classes with Varying Duration (MVD) data. The authors propose Con4m, a consistency learning framework that leverages contextual information with a focus on inconsistent boundary labels. The method incorporates continuous contextual representation encoding, context-aware coherent class prediction, and a label consistency training framework. Experiments on 3 datasets demonstrate Con4m's superior performance compared to state-of-the-art baselines, especially in handling inconsistent labels.
Strengths: - Originality: The paper introduces a new approach to segmented TSC for MVD data, addressing challenges often overlooked in existing TSC models. The Con4m framework creatively combines ideas from curriculum learning, noisy label learning, and temporal action segmentation.
- Quality: The theoretical analysis in Section 2 and throughout provides a solid foundation for the proposed method. The experiments are comprehensive, including comparisons with various baselines, label disturbance experiments, and ablation studies.
- Clarity: The paper is well-structured and clearly written. The figures, especially Figure 2 and Figure 3, effectively illustrate the proposed method's architecture and workflow.
- Significance: The work has potential implications for various domains dealing with MVD data, such as healthcare and activity recognition. The label harmonization approach could be particularly valuable in scenarios where obtaining consistent labels is challenging or costly.
Weaknesses: - Limited exploration of hyperparameters: The paper doesn't thoroughly discuss the sensitivity of the method to key hyperparameters, such as N_l = 5 in the label consistency framework or E_g = 5 for curriculum learning.
- Scalability concerns: The paper doesn't address how the method would scale to larger datasets or longer time series. This could be a limitation for practical applications with high-dimensional or long-duration data.
- Comparison with semi-supervised methods: Given that the method deals with inconsistent labels, it might be beneficial to compare it with semi-supervised learning approaches that are designed to handle partially labeled or noisy data.
- Generalization to other domains: While the method is tested on three datasets, they are all from the healthcare domain (all 3 of them focusing on brain signal analysis). It would be valuable to see how well the approach generalizes to other domains with MVD data, such as activity recognition or financial time series. For example, initiatives like WOODS (https://arxiv.org/abs/2203.09978/) offer diverse timeseries data for evaluations.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In the label consistency framework, why is N_l = 5? Was this value determined through hyperparameter search? How sensitive is the method to this choice?
2. Why was fNIRS not used in the ablation studies? Is there a specific reason for excluding this dataset from the detailed analysis?
3. How does the computational complexity of Con4m compare to the baseline methods? Is there a significant increase in training time or memory requirements?
4. Have you explored the potential of applying Con4m to semi-supervised or few-shot learning scenarios, where only a portion of the data has reliable labels?
5. The paper mentions that Con4m modifies approximately 10% of the training labels in the SEEG data. How does this percentage vary across different datasets or disturbance ratios? Is there a way to estimate the optimal percentage of labels that should be modified?
6. Could you provide more insights into the choice of the hyperbolic tangent function for prediction behavior constraint? Have you experimented with other monotonic functions, and if so, how do they compare?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See the questions above. I would especially emphasize the narrow benchmarks and tasks. Learning across segments is a broader problem within the timeseries domain.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1&Q1. We consider $N_l$ and $E_g$ as auxiliary hyperparameters that work together rather than core hyperparameters of the model. When selecting these values, we focus on ensuring that the model’s loss approaches convergence for the newly added level within $E_g$ epochs. In contrast, $E_\eta$ has a more significant impact on the model’s performance. See more details in global rebuttal G2.
---
W2. From an architectural perspective, Con4m is built on the Transformer framework, allowing for scalability by adjusting the hidden layer dimensions or stacking layers. From a data perspective, time series data differs from NLP sequences as it consists of continuous numerical values rather than discrete tokens. By adjusting the duration of each time segment (patch), we can control the sequence length L. Consequently, Con4m can be scaled to handle longer time series by appropriately tuning the patch length, the number of CNN layers, and the size of the convolutional kernels.
---
W3&Q4. Our method differs from semi-supervised TSC methods that assume reliable labels on partially labeled datasets. In contrast, we begin with initial labels for the entire dataset, but their reliability remains unclear. Our model is specifically designed to evaluate label reliability as part of its learning process, addressing inconsistencies not typically covered by semi-supervised methods. Therefore, we cannot consider semi-supervised or few-shot methods as a baseline or apply them to our task due to these differences in task settings.
---
W4. Thank you for your valuable suggestion. Based on the link you provided for WOODS, we identified that the Human Activity Recognition (HHAR) dataset aligns with our experimental setup to some extent. Therefore, we incorporated the HHAR dataset in our experiments. Please refer to the global rebuttal G1 for more details.
---
Q2. Table 2 demonstrates that fNIRS has relatively clear boundaries, as TAS models perform well on this dataset even without considering label consistency. Therefore, to more effectively highlight Con4m’s advantages in handling inconsistent labels, we decided not to include the fNIRS dataset in the ablation studies.
---
Q3. We believe that due to the significant differences in architecture and domain among various baselines, directly comparing computational complexity would be unfair. For example, models based on the Transformer architecture have a significantly higher time complexity than those based on CNNs. Therefore, we only compare the time complexity of Con4m with that of a standard Transformer.
The time complexity of Con4m can be divided into two main parts. The primary computational cost is on the same order as vanilla Transformer. Assuming the number of consecutive input time segments is $L$, the hidden representation dimension is $D$, the number of classes in the classification task is $C$, and the local iteration count of the function fitting module is $I$.
- Con-Transformer: The time complexity of vanilla self-attention is $\mathcal{O}(LD^2+L^2D)$, and the time complexity of the Gaussian kernel branch is $\mathcal{O}(LD^2+L^2)$.
- Coherent class prediction: The time complexity of Neighbor Class Consistency Discrimination is $\mathcal{O}(LDC)$, and the time complexity of the Tanh function fitting is $\mathcal{O}(ICL)$.
Therefore, the computational cost and bottlenecks of Con4m are similar to vanilla Transformer.
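As a rough, hypothetical illustration (our own sketch, not the authors' implementation), the Gaussian-kernel branch can be viewed as a position-only prior fused with the learned attention map; since the prior depends only on segment indices and not on the hidden dimension, computing it costs $\mathcal{O}(L^2)$ rather than $\mathcal{O}(L^2 D)$:

```python
import math

def gaussian_prior(L, sigma=2.0):
    # L x L positional prior favoring temporally close segments;
    # depends only on indices i, j, hence O(L^2) with no D factor
    G = []
    for i in range(L):
        row = [math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2)) for j in range(L)]
        s = sum(row)
        G.append([g / s for g in row])  # row-normalize to a distribution
    return G

def fuse(attn, prior, lam=0.5):
    # convex combination of the learned attention map and the prior,
    # nudging representations of neighboring segments to be smoother
    L = len(attn)
    return [[(1.0 - lam) * attn[i][j] + lam * prior[i][j] for j in range(L)]
            for i in range(L)]

L = 6
uniform_attn = [[1.0 / L] * L for _ in range(L)]  # stand-in for learned weights
prior = gaussian_prior(L)
fused = fuse(uniform_attn, prior)
```

Each fused row remains a valid attention distribution, with mass shifted toward temporally adjacent segments.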
---
Q5. Since label inconsistency is difficult to precisely define, it is challenging to determine an optimal percentage of label modifications. However, during training, one of our criteria for stopping is when the model no longer changes additional labels. You can find this implementation in our code. Generally, datasets with higher noise levels tend to have a higher percentage of label changes. For example, the fNIRS-0 dataset has a change rate of 3%, while SEEG has a change rate of 13%. Similarly, for the Sleep dataset with disturbance ratios of 0%, 20%, and 40%, the change rates are 10%, 12%, and 18%, respectively.
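To make the disturbance setup concrete, below is a hedged, self-contained sketch of one plausible boundary-disturbance scheme (the exact protocol is the paper's; the function name, uniform offset distribution, and the $r \cdot L$ shift cap here are illustrative assumptions of ours):

```python
import random

def disturb_boundaries(labels, r, seed=0):
    # shift every class boundary by a random offset of at most
    # r * len(labels) segments, relabelling the crossed segments
    rng = random.Random(seed)
    labels = list(labels)
    L = len(labels)
    max_shift = int(r * L)
    boundaries = [i for i in range(1, L) if labels[i] != labels[i - 1]]
    for b in boundaries:
        new_b = min(max(b + rng.randint(-max_shift, max_shift), 1), L - 1)
        if new_b > b:
            # boundary moves right: the earlier class expands
            labels[b:new_b] = [labels[b - 1]] * (new_b - b)
        elif new_b < b:
            # boundary moves left: the later class expands
            labels[new_b:b] = [labels[b]] * (b - new_b)
    return labels

clean = [0] * 10 + [1] * 10      # one transition at index 10
noisy = disturb_boundaries(clean, r=0.2)
changed = sum(c != n for c, n in zip(clean, noisy))
```

Only segments near the transition are mislabelled, which mirrors the boundary-inconsistency regime the rebuttal describes, as opposed to the uniformly random flips typical in image-domain noisy-label benchmarks.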
---
Q6. The Tanh function is particularly well-suited to encapsulate the four behaviors depicted in Figure 3. As a widely used activation function, it is easier to understand and manipulate. In contrast, functions like arctangent and arcsine are more complex and harder to control. Appendix B demonstrates Tanh’s ease of optimization and its precise fitting capabilities.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their thorough responses. I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for raising your score, which is highly valuable to us. We have gained a lot from your comments, such as identifying the HHAR dataset from the WOODS benchmark as one of our test datasets. If you have any further concerns or suggestions, please do not hesitate to share them with us. | Summary: The paper proposes Con4m, a novel framework for time-series classification and temporal action segmentation that leverages contextual information. The framework is designed to improve the prediction accuracy by incorporating context from surrounding data segments.
The proposed method combines time-series classification (TSC) and temporal action segmentation (TAS) in a unified framework, making it versatile for different types of sequential data. It tries to address the issue of common time series classification methods which overlook the dependencies between consecutive segments.
The method demonstrates robustness to label disturbances, showing significant improvements over baseline models across different datasets (fNIRS, Sleep, and SEEG).
Strengths: - The paper proposes context-aware boundary detection and classification for long multi-class time series. The topic is very interesting and focuses on a real challenge in processing time-series data.
- Con4m integrates contextual information to improve the coherence and accuracy of predictions. This approach helps in better recognizing boundaries and transitions in time-series data.
- The paper is well-written and presented with several example illustrations.
- The model is Compared against different time-series classification baselines. The evaluation shows great performance across Sleep and SEEG datasets and demonstrates robustness to label disturbances
Weaknesses: 1. What about combining other models with a change-detection model to first detect segment boundaries and then classify the windows within the segments? It would be great to see the impact of Con4m in comparison with other baselines in this setup.
2. Since Con4m performs segmentation and classification together, it is necessary to evaluate and compare the impact of each against other SOTA methods.
Minor change:
- Figure 4(b) has been referred to before Figure 4(a). Please fix that
3. The experiments are limited, and somewhat unfair. The chosen baselines do not claim ability in segment detection. There are several works that have done segment or change point detection, and they need to be included in baselines. Also, evaluations are limited to three datasets.
- Minor suggestion: It would be nice to visualize the whole framework to show each module.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the initial two comments (no 1 and no 2) in weaknesses. In addition:
3. In Label Disturbance Experiments: It is not clear how the authors evaluated the models. Is it based on the new disturbed boundaries or the original boundaries? What is the evaluation setup for the other baselines as they are not considering the labelled boundaries in their calculations (I assume they should be boundary-agnostic).
4. While section 4.3 shows an interesting set of experiments, I believe it only measures the degree of agreement between the baseline models and Con4m about the modified labels, unless the authors can ensure that all the labels modified by Con4m are correct and accurate.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The following limitations are not considered/discussed:
- Although I appreciate the novelty of the method, the final improvement is subtle compared to other baselines given they have not claimed any ability in segment detection. Also, the evaluations are limited to 3 datasets.
- As the authors mentioned in the conclusion the model heavily relies on the availability of labels even for the segmentation part. While most segmentation models are unsupervised, I suggest including comparison by combining other baselines with unsupervised segmentation methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1&2.
1. Our setup differs significantly from segmentation models (FLOSS[1], ESPRESSO[2], ClaSP[3]) in that they are able to identify change points but are unable to determine the specific classes before and after these points, particularly in multi-class tasks.
2. Our public datasets are multivariate, which presents a challenge for the performance of most segmentation models (ClaSP[3]), which are designed to handle univariate time series.
3. Applying segmentation models (FLOSS[1], ESPRESSO[2]) to our use case may be impractical and complex due to the necessity of setting/learning parameters, including the number of segments and the length of subsequences.
Nevertheless, we conduct an exploratory analysis of ClaSP on the SEEG dataset (the only univariate dataset) and present the results with two metrics for reference. These results are solely intended for illustration, as unsupervised segmentation models are not well-suited to our scenario and setup.
[1] Gharghabi, Shaghayegh, et al. "Domain agnostic online semantic segmentation for multi-dimensional time series." (2019).
[2] Deldari, Shohreh, et al. "Espresso: Entropy and shape aware time-series segmentation for processing heterogeneous sensor data." (2020).
[3] Ermshaus, Arik, Patrick Schäfer, and Ulf Leser. "ClaSP: parameter-free time series segmentation." (2023).
---
W3. Thank you for your valuable questions and suggestions. We will reorder Figures 4(a) and 4(b) and attempt to combine Figures 2 and 3 into a single illustration. However, we would like to emphasize that Con4m was not designed specifically for segmentation. Our modeling is consistently based on Problem Definition 3.1, which involves making independent predictions for each segment and then integrating them into coherent predictions. Our work aims to highlight a point often overlooked by current mainstream supervised TSC approaches: the temporal dependency and coherence of contextually classified samples. Experiment 4.3 further illustrates the contributions of our work.
On the other hand, all the baselines related to temporal action segmentation (TAS) are equipped with segmentation capabilities, as they are specifically designed for segmentation tasks. Since Con4m is a supervised learning model, we selected representative supervised learning models in TAS for comparison. Therefore, the selection and comparison of baselines are fair. Nevertheless, we have considered your recommendation to incorporate unsupervised segmentation models for comparison in W1&2.
---
Q3. Our experimental setup is consistently aligned with Problem Definition 3.1 (lines 136-140). All models are trained on the disturbed data and tested on the original noise-free data. We introduce disturbances at the timestamp level of the original data (lines 236-243), then sample time intervals based on these disturbances and segment them. All models treat each time segment as an instance for prediction and evaluation, which aligns with the TSC task setup. The evaluation setup itself is independent of whether a model can detect boundaries. Con4m does not explicitly perform boundary detection, whereas TAS models are specifically designed for boundary segmentation.
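To make this setup concrete, here is a hypothetical toy sketch (not the authors' code; the function names and the disturbance model are illustrative assumptions): timestamp-level labels are perturbed near class boundaries, and both the noisy and the clean sequences are cut into segments, each treated as one classification instance.

```python
import random

def disturb_boundaries(labels, max_shift=3, seed=0):
    """Randomly shift each class boundary by up to max_shift timestamps."""
    rng = random.Random(seed)
    labels = list(labels)
    boundaries = [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
    for b in boundaries:
        shift = rng.randint(-max_shift, max_shift)
        lo, hi = sorted((b, b + shift))
        lo, hi = max(lo, 0), min(hi, len(labels))
        # Extend the label from one side of the boundary over the shifted range.
        fill = labels[b] if shift < 0 else labels[b - 1]
        for i in range(lo, hi):
            labels[i] = fill
    return labels

def segment_labels(labels, seg_len):
    """Cut timestamp labels into segments; each segment's label is the majority vote."""
    segs = []
    for s in range(0, len(labels) - seg_len + 1, seg_len):
        window = labels[s:s + seg_len]
        segs.append(max(set(window), key=window.count))
    return segs

clean = [0] * 20 + [1] * 20 + [2] * 20
noisy = disturb_boundaries(clean, max_shift=4)
# Models would train on segments of `noisy` but be evaluated against segments of `clean`.
train_y, test_y = segment_labels(noisy, 5), segment_labels(clean, 5)
```

This mirrors the described protocol: the evaluation targets (`test_y`) come from the original noise-free labels, independent of whether a model performs explicit boundary detection.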
---
Q4. Section 4.3 effectively demonstrates Con4m’s prediction consistency. Moreover, Fig.5 and Fig.8 use color coding to visually highlight the alignment between the model’s predictions and the actual labels. It is evident that, unlike Con4m, other models fail to provide predictions that are both coherent and accurate. In practical applications, Con4m offers substantial improvements in usability, significantly outperforming other models in overall classification metrics. Dispersed and inconsistent predictions can hinder doctors from quickly identifying seizure regions without examining large amounts of raw data.
---
Rebuttal Comment 1.1:
Title: Results for unsupervised segmentation model ClaSP
Comment: Apologies for the delayed results. For the F1 score provided by ClaSP, we input each time interval into ClaSP to obtain a score, and then average the scores of all time intervals. Additionally, we transform Con4m's predicted results from time segments to timestamps and use them to calculate the F1 score proposed by ClaSP. The specific scores are as follows: ClaSP: 0.854; Con4m: 0.930. The comparison of these results is quite fair.
To align with our evaluation metrics, we assign all possible class combinations to the timestamps on either side of the change points detected by ClaSP. We then select the combination with the greatest overlap with the true test labels for ClaSP. Finally, the same segmentation and evaluation processes are executed to obtain the results. The specific f1 scores are as follows: ClaSP: 0.824; Con4m: 0.720. While this approach provides a test result, it clearly involves test label leakage, rendering it unfair and unsuitable for real-world applications.
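The alignment procedure described above can be sketched as a brute-force search (an illustrative toy, not the actual evaluation code); note that the argmax over the true test labels is exactly where the label leakage occurs:

```python
from itertools import product

def best_class_assignment(change_points, true_labels, classes):
    """Enumerate class assignments to the intervals delimited by change points
    and keep the one with the greatest timestamp overlap with the true labels."""
    bounds = [0] + sorted(change_points) + [len(true_labels)]
    intervals = list(zip(bounds[:-1], bounds[1:]))
    best, best_overlap = None, -1
    for combo in product(classes, repeat=len(intervals)):
        pred = []
        for (s, e), c in zip(intervals, combo):
            pred.extend([c] * (e - s))
        # Leakage: overlap is computed against the ground-truth test labels.
        overlap = sum(p == t for p, t in zip(pred, true_labels))
        if overlap > best_overlap:
            best, best_overlap = pred, overlap
    return best, best_overlap

true_y = [0] * 10 + [1] * 10 + [2] * 10
pred, overlap = best_class_assignment([9, 21], true_y, classes=[0, 1, 2])
```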
---
Rebuttal Comment 1.2:
Comment: I would like to thank the authors for responding to my questions. The provided clarifications for Q3 and Q4 make the evaluation setup clearer.
---
Reply to Comment 1.2.1:
Title: Further discussion
Comment: Thank you for your response. We have also included the analysis of unsupervised segmentation models in our setup, along with the configuration and comparative results of ClaSP on SEEG data. Do you have any further questions or concerns regarding these responses? We are more than willing to engage in further discussion to enhance the quality and score of our work.
---
Reply to Comment 1.2.2:
Title: Further discussion
Comment: We would like to reiterate the connection and distinction between unsupervised segmentation tasks and our scenario:
(1) Pure segmentation models can only identify the positions of change points, without specifying the exact classes on either side of the change points, especially in multi-class scenarios. Although Con4m was not specifically designed for segmentation tasks, it can still transform from segment prediction to point prediction and provide segmentation results. Therefore, based on the SEEG dataset, we evaluate Con4m fairly according to the F1 scores provided by ClaSP. Experimental results are as follows: ClaSP: 0.854; Con4m: 0.930. We believe this result corroborates the conclusion drawn in Section 4.5.
(2) Temporal Action Segmentation (TAS) models are specifically designed for video segmentation. Following your indication of being "capable of doing segmentation and classification together," the TAS models perfectly fulfill both criteria as they can provide a specific class prediction for each video frame/time segment. Hence, our choice of baseline models is comprehensive and fair.
(3) Many segmentation models are primarily designed for univariate time series and face challenges when handling multivariate time series data. Therefore, we could only compare ClaSP on the unique univariate dataset SEEG.
As we are not experts in the field of unsupervised segmentation, we welcome any corrections if there are any inaccuracies or omissions. Additionally, we would greatly appreciate your timely communication and sharing of thoughts on integrating such models into our scenario. | Summary: The authors propose a learning framework called $\textit{Con}4\textit{m}$ that leverages the contextual prior of Multiple classes with Varying Duration (MVD) to enhance the discriminative power of consecutive time series segments while harmonizing the inconsistent labels associated with them. The authors state, and show through extensive experiments, that their framework encodes the underlying temporal dependence between consecutive time series segments and mitigates data annotation errors that may be due to discrepancies in expertise.
Strengths: - Originality: The authors propose a sophisticated architecture for encoding the temporal dependency between successive time series segments during a classification task. The originality of this work lies in the fact that the classification process is reinforced by a prior contextual information encoded by a Gaussian kernel. Their approach is more realistic than previous work, which assumes an independent and identical distribution of successive time series segments.
- Quality: The article is well-written and structured. The experiment is consistent enough;
- Clarity: Although the document is fairly clear overall, there are still a number of areas for improvement (please refer to weaknesses and questions);
- Significance: The proposed solution, once mature, will undoubtedly benefit many domains in which time series classification can be very challenging due to the complexity of data structure and divergence of experiences.
Weaknesses: - Figure 2 is not intuitive enough to understand the overall architecture of the model;
- $\hat{p}$ and $\tilde{p}$ are mentioned without explanation. Even though $\hat{p}$ is quite obvious, it must be explained clearly (see paragraph $\textbf{Neighbor Class Consistency Discrimination}$).
- Space and Time complexity: Although efficient, the proposed model may require significant memory and computational time in large-scale applications.
- The model may fail when confronted with seasonal time series data. This is because the Gaussian kernel used is not suitable for capturing the periodic pattern of the data;
- Execution time is not included in the results. This is an important measure to report, as it plays a crucial role in sensitive areas such as medicine, where accuracy and prediction time are just as important.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What features do you use in positional encoding? Is it the order index of time series segments or timestamps? Please mention it in the main text;
- Can the model work with multivariate time series data, i.e. using the observation of various features in a time segment to perform a classification task? If so, how will you encode the multivariate time series segments?
- Do the authors think that their model will still work correctly when faced with seasonal time series data, given that the Gaussian Kernel used is not appropriate for capturing the periodic pattern of the data?
- Why did the authors choose the attention mechanism to merge $z_s^l$ and $z_g^l$? Although probably less efficient, have they tried simple techniques such as concatenation, Hadamard product or addition, which have the advantage of reducing spatial and temporal complexity?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Although the authors have discussed the limitations of their work, the space and time complexity of the model is an additional limitation. Furthermore, unless the reviewer has missed some points, it appears that the proposed model has only been evaluated on univariate time series data (please correct the reviewer if he is wrong). The reviewer suggests investigating these points in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for recognizing our work.
W1. We will integrate Figures 2 and 3 into a single illustration to improve clarity.
---
W2. We will explicitly clarify the meanings of $\hat{p}$ and $\tilde{p}$ in the main text: $\hat{p}$ represents the model's independent prediction for a sample, while $\tilde{p}$ denotes the context-aware prediction that incorporates the results from neighboring samples.
---
W3. The time complexity of Con4m can be divided into two main components, and the primary computational cost is on the same order as the vanilla Transformer. Let the number of consecutive input time segments be $L$, the hidden representation dimension $D$, the number of classes in the classification task $C$, and the local iteration count of the function fitting module $I$.
- Con-Transformer: The time complexity of vanilla self-attention is $\mathcal{O}(LD^2+L^2D)$, and the time complexity of the Gaussian kernel branch is $\mathcal{O}(LD^2+L^2)$.
- Coherent class prediction: The time complexity of Neighbor Class Consistency Discrimination is $\mathcal{O}(LDC)$, and the time complexity of the Tanh function fitting is $\mathcal{O}(ICL)$.
Therefore, the computational cost and bottlenecks of Con4m are similar to those of the vanilla Transformer.
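As a rough illustration of how these dominant terms compare (the sizes below are hypothetical, not a profile of Con4m):

```python
def attention_flops(L, D):
    """Dominant terms of vanilla self-attention:
    projections O(L*D^2) + attention map O(L^2*D)."""
    return L * D**2 + L**2 * D

def gaussian_branch_flops(L, D):
    """Gaussian-kernel branch: projections O(L*D^2) + pairwise kernel O(L^2)."""
    return L * D**2 + L**2

# For typical settings (L << D), the L*D^2 projection term dominates both
# components, so the Gaussian branch adds less than one extra attention's
# worth of compute on top of the vanilla Transformer.
L, D = 64, 512
ratio = (attention_flops(L, D) + gaussian_branch_flops(L, D)) / attention_flops(L, D)
```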
---
W4&Q3. Thank you for the insightful question. Con4m is primarily designed for classification tasks, focusing on extracting structures and patterns within time segments. When dealing with datasets that have periodic patterns, we believe Con4m can still effectively capture these periodicities, provided that a complete cycle is included within a single classification instance. In this context, the Gaussian kernel acts as a one-dimensional Gaussian smoothing filter. We believe that classification tasks are typically less sensitive to periodic patterns compared to forecasting tasks. Furthermore, we have included a human activity recognition dataset HHAR, where sensor data naturally exhibit periodic changes as people walk or run. Con4m performs well on the six-class HHAR dataset, demonstrating its ability to capture periodic patterns.
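The "one-dimensional Gaussian smoothing filter" interpretation can be illustrated with a generic sketch (not the paper's kernel branch; the radius and sigma here are arbitrary assumptions):

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights over [-radius, radius]."""
    w = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth(signal, radius=2, sigma=1.0):
    """Convolve a 1D sequence of per-segment scores with a Gaussian kernel."""
    k = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp at edges
            acc += w * signal[idx]
        out.append(acc)
    return out

# A noisy step signal: smoothing softens the spurious transition while
# preserving the plateau levels on either side.
scores = [0, 0, 0, 1, 0, 1, 1, 1]
smoothed = smooth(scores)
```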
---
W5. The execution time of Con4m on SEEG data is as follows: (1) Offline training (124,000 training samples + 62,000 validation samples): approximately 30s/epoch; (2) Batch inference (batch size of 64 with 62,000 test samples): approximately 14s. These times fully meet the practical requirements for inference.
---
Q1. We employ learnable absolute positional encoding. The absolute positional encoding is equivalent to the order index of time series segments, as the fundamental unit of our model is a time segment. This clarification will be addressed in the primary text of the paper.
---
Q2. Con4m is capable of handling multivariate time series data. As shown in Table 1, the fNIRS and Sleep datasets have 8 and 2 features, respectively. We employ a multi-channel CNN to jointly extract information from these features (see Figure 2), as the primary focus of this paper is not on multi-channel modeling. However, Con4m can be seamlessly integrated with existing multi-channel modeling approaches by using their output representations as inputs to Con4m.
---
Q4. Summation is the most direct and straightforward approach to achieving smoother representations, as concatenation and the Hadamard product do not explicitly provide this effect. Additionally, we enable the model to adaptively learn the necessary degree of smoothness, as sample representations in transition regions should not be continuous with those from different classes.
---
Rebuttal Comment 1.1:
Title: Satisfied with the answers.
Comment: Dear authors, thank you for your detailed responses. My concerns were clarified.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you once again for your recognition and affirmation of our work. If you have any further questions or concerns, please feel free to let us know at any time. | Rebuttal 1:
Rebuttal: G1 (Results for a new dataset).
Based on the suggestion from reviewer uhXL, we conducted a search of the WOODS dataset and found that the Human Activity Recognition (HHAR) subset aligns well with our scenario and setup. Consequently, we include the HHAR dataset in our experiments. Due to time constraints, we only report the results of the best-performing model from each category of baselines, and we plan to include the remaining results in the revised version.
We follow the preprocessing steps outlined in WOODS (https://arxiv.org/pdf/2203.09978), dividing the HHAR dataset according to the device. To ensure balanced samples, we combine data from the Galaxy S3 Mini, LG watch, and Gear watch into a single group. A 6-fold cross-validation experiment is conducted in accordance with the Sleep dataset's configuration. As Table 1 in the rebuttal pdf shows, Con4m still achieves competitive or superior performance on the HHAR dataset. Due to the distribution discrepancies among devices, there are significant variations in experimental results across different groups.
---
G2 (Results for hyperparameter analysis).
We consider $N_l$ and $E_g$ as auxiliary hyperparameters that work in tandem rather than core hyperparameters of the model. When selecting these values, we empirically observe that the model’s loss converges for the newly added levels within $E_g$ epochs.
As shown in Figure 1 in the rebuttal pdf, smaller values for either $N_l$ or $E_g$ can hinder the model’s ability to fit data at newly introduced levels, leading to a decline in performance. However, when $N_l$ and $E_g$ are set within the range of 4 to 7, Con4m demonstrates stable performance, indicating that the model is not particularly sensitive to these hyperparameters. In contrast, the parameter $E_\eta$ has a more significant impact on the model’s performance, and careful tuning of this hyperparameter is more crucial.
Pdf: /pdf/2b50202d21035dcbe337a84e9984b3dbec2110c3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness | Accept (poster) | Summary: This paper introduces a dual-critic prompting framework, INDICT, to consider both helpfulness and security during code generation. Specifically, the author introduces two critics, one for helpfulness and the other for security, to provide suggestions on improving the initially generated code. Besides, the authors equipped two critics with external tools like search engines and code executors to address the hallucination problem and employ an iterative refining style to further improve the performance. The authors evaluate INDICT on 8 tasks across 5 benchmarks on several LLMs. The experimental results demonstrate the effectiveness of INDICT.
Strengths: 1. The paper is well-written and easy to follow.
2. The studied topic is valuable. According to the authors' discussion of related works, many works focus on pointing out the security problems of LLM-generated code, yet papers proposing solutions are few.
3. The ablation studies demonstrate the effectiveness of each module (helpfulness critic, security critic, external tools, iterative rounds).
Weaknesses: 1. Though, according to the related work part of the paper, no former work uses a multi-agent collaborative system to address the security of code generation, adopting a multi-agent collaborative system for content generation is generally believed and widely proven to be effective [1,2,3], which weakens the novelty of the paper.
2. The experiments only compare INDICT with pure LLMs. According to the related work part, there are several related works. Although the authors mentioned that related works are either not adapted for benchmarks used in this paper or not accessible, a lack of comparison with related works weakens the validity of the effectiveness of INDICT.
3. The authors discussed an alternative of using a single LLM to act as both the helpfulness critic and the security critic in the method description section. However, I do not find any experiments comparing the use of two separate critic LLMs against a single one. The lack of this comparison leaves me unsure about the necessity of employing two LLMs separately.
[1] Dong, Yihong, et al. "Self-collaboration code generation via chatgpt." arXiv preprint arXiv:2304.07590 (2023).
[2] Huang, Dong, et al. "Agentcoder: Multi-agent-based code generation with iterative testing and optimisation." arXiv preprint arXiv:2312.13010 (2023).
[3] Ishibashi, Yoichi, and Yoshimasa Nishimura. "Self-organized agents: A llm multi-agent framework toward ultra large-scale code generation and optimization." arXiv preprint arXiv:2404.02183 (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What is the meaning of 'GT Safe' and 'GT Unsafe' in Fig.4 (c)?
2. Can you provide the time cost of INDICT? Since your system contains multiple LLMs, multiple rounds of interaction, and even the usage of search engines and code executors, it is necessary for readers to know the efficiency trade-off compared with pure LLMs. Note that I am not taking this point as a weakness of INDICT.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors discussed limitations in Appendix A. Among the discussed limitations, I think the last one needs further discussion. Though the authors discussed that INDICT is much more efficient compared with fine-tuning LLMs that require curated training examples, once the fine-tuned LLM is released, the inference time costs much less than INDICT because INDICT is quite heavy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your reviews! Please refer to our responses below.
### Q1: ..It is generally believed and widely proven effective of adopting a multi-agent collaborative system during content generation
Even though multi-agent collaborative systems have been proposed for content generation, it is not trivial to extend this line of research to address the security of code generation.
* Firstly, it is not clear how the current methods could be enhanced with security awareness and subsequently improve the quality of output generations. Earlier work such as [1, 2] showed that simply asking agents to analyze and answer what is wrong with generated code is not always effective. With carefully designed prompts and agent interactions [3, 4], collaborative agents can now generate more functionally correct code.
* Therefore, studying collaborative agents with orthogonal goals such as security awareness still requires further attention. As observed by our additional experimental results (see our global response#2 above), simply applying the current multi-agent methods [5, 6], even with extended additional instructions of security criteria, does not perform so well and is still far from optimal.
Furthermore, we would like to emphasize that our method novelty is not only about the multi-agent collaborative system. We also proposed an innovative multi-critic framework as an internal reasoning module with access to external tools for knowledge grounding. Our strategy can integrate holistic and reliable critic feedback from code execution outputs as well as supporting information snippets through multiple rounds of agent interactions. As demonstrated by our results (see our global response#2 above), our method is more well-rounded with high performance by both safety and helpfulness. Refer to our global response#1 above for a more comprehensive and systematic comparison between INDICT and other related work.
### Q2: The experiments only compare INDICT with pure LLMs.…
Following your recommendations, we selected 7 strong baselines from related lines of research, including self-refine, multi-agent, and finetuning methods (see our global response#2 above). For most baselines, we also include a version of the method where additional instructions are given to the model to follow and provide both safety and helpfulness feedback e.g. we instructed models to “focus on both the security and helpfulness of the solution.” For Self-Collab [5], we included these instructions in both analyst and tester agents. For CAMEL [6], we follow the Critic-in-Loop setup as recommended in the paper appendix. Note that for all baseline models, we followed the same generation budgets to fairly compare the results (up to 3 rounds of revision).
From the experiment results (see our global response#2 above), we can observe the SoTA performance of INDICT. While we observed the improvement of baseline methods with additional instructions (marked with the suffix ‘+’), their results are still not as good as INDICT in terms of helpfulness and safety. For finetuning method CodeUltraFeedback, their best model (SFT+DPO fine-tuned) is still far from perfect and could be further optimized. We further integrated the fine-tuned models with INDICT and observed a significant performance boost. We will include these results and more detailed analysis in our revised paper.
### Q3: The authors discussed an alternative by using only one LLM to act as a helpfulness critic and security critic at the same time...
We conducted additional ablation analysis to evaluate whether one LLM can act as a critic for both helpfulness and safety at the same time. We combined the previous criteria for security and helpfulness and integrated them into the prompt for this critic agent. From the results (see our global response #3), simply using a single critic agent with dual quality criteria will affect the performance, reducing the safety and helpfulness metrics to 87% and 76% respectively. One possible reason is due to the formulation of the training objectives of LMs, which are not always designed to optimize both security and helpfulness equally (also depending on the post-pretraining stages of LMs e.g. training with RLHF). Our approach enables a more flexible and probably more relaxed application of LLM as a critic agent by:
* (1) decoupling the helpfulness and safety goals and delegating them to individual LM agents; and
* (2) enabling multi-critic collaboration to autonomously develop more holistic and well-rounded critic feedback.
We will include the results and a more detailed analysis in our revised paper.
### Q4: What is the meaning of 'GT Safe' and 'GT Unsafe' in Fig.4 (c)?
“GT Safe” and “GT Unsafe” denote the ground-truth secure and insecure code outputs provided by the CVS benchmark. We will explain and make the definitions clearer in our revised paper.
### Q5: ...Can you provide the time cost of INDICT? ... once the fine-tuned LLM is released, the inference time costs much less than INDICT
On average, INDICT incurs about 3 to 4x the time cost as compared to pure LLMs. We also want to note that even with fine-tuning, fine-tuned models are far from perfect and still subject to unseen security risks or novel red-teaming prompts during test time. For instance, from our results with the fine-tuning method CodeUltraFeedback (see our global response#3 above), the fine-tuned model is still sub-optimal and can be further improved e.g. using INDICT during inference time (improving the performance from 60% to 73%).
*[1] Is self-repair a silver bullet for code generation?*
*[2] Large language models cannot self-correct reasoning yet*
*[3] Reflexion: Language Agents with Verbal Reinforcement Learning*
*[4] AgentCoder: Multiagent-Code Generation with Iterative Testing and Optimisation*
*[5] Self-collaboration Code Generation via ChatGPT*
*[6] CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society*
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your clarification. I decided to raise my score.
Please add this rebuttal content to the next version of the paper.
One more thing: I still strongly recommend a more detailed time-cost comparison between INDICT, pure LLMs, and the other baselines, to give readers better insight into the efficiency trade-off.
---
Reply to Comment 1.1.1:
Comment: Thank you for your consideration and revising the score! We will incorporate the feedback and our discussion in detail into the revised paper.
Regards, | Summary: The paper describes a method to improve the helpfulness and safety of LLMs for code generation tasks. It uses two critics - one for safety and one for helpfulness that communicate with each other to iteratively provide feedback to the actor model or the actual agent that is tasked with the code completion task. The critics are further augmented with tools that allow using the web for searches as well as LLMs from OpenAI and a code interpreter to execute code. The implementation of the method has been done using CommandR+ though experiments have also been presented using CodeLlama and Llama for certain tasks. Prompts have been standardized across LLMs to the extent possible. Experiments on benchmarks for Code Security (insecure code generation, malicious code generation) as well as open-ended generation tasks from HarmBench indicate the method improves the performance of the corresponding base model. Ablation experiments on the code security tasks (done using CodeLlama and CommandR+ indicate that the use of both critics as well as tools help improve performance.
Strengths: - Well written paper with a detailed appendix
- Simple but interesting extension of the use of collaborative agents to improve safety and helpfulness for code tasks
- Improved performance on multiple tasks
Weaknesses: - Despite some details in the appendix; the evaluation section is missing crucial details
- See Questions
Technical Quality: 3
Clarity: 3
Questions for Authors: - For the HarmBench evaluation - how have the red-teaming methods been applied with the critics in place? I'm guessing it was applied on the actor/agent prompt? Some details would be helpful. Was the evaluation also done using completions? The main text suggests so, but there are no tables/results referenced. Table 1 isn't discussed (no mention or reference of the red-teaming methods in the main paper). This evaluation section is a bit unclear to me.
- Since there are no samples of the generated dialogs or the system prompt for the actor (agent) -- if any; I'd imagine a well rounded summary being made available to the agent (Equation 5). Could the authors share more details/prompt along with a sample?
* I was curious if fine-tuning models (smaller models?) on such generated data would have been a good experiment? Doing so could help in reducing inference cost by employing cheaper/smaller models
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes for the most part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Please refer to our responses below.
### Q1: For the HarmBench evaluation - how have the red-teaming methods been applied with the critics in place? …Was the evaluation also done using completions?...
For the HarmBench benchmark, we followed the original evaluation setup proposed in [1] (see Figure 3 of this work). Specifically, the red-teaming methods like PAP, TAP, or PAIR are applied to the original task prompts (provided in HarmBench). Each red teaming method augments the original task prompts to make it harder for LM actor agents to detect malicious intentions, leading to harmful generation output. The evaluation is then done on the generated completions of the actor agents using the HarmBench AI evaluator. With the critics in place, the evaluation is applied to the actor agent’s completions after it receives the critic feedback and regenerates a new response. Due to the limited space, we described the details of the HarmBench benchmark and references of the red-teaming methods in Appendix D (L996-1003). We will make the experimental setup and evaluation clearer in our revised paper.
### Q2: …Could the authors share more details/prompt along with a sample?
Thanks for your comment. A well-rounded summary is beneficial to ensure all the information of the critic feedback is conveyed to the actor agent. We included the example summary prompt in Appendix F (Listing 3 and 8). Other prompts are also included in this Appendix. We also included more qualitative examples with generated dialogues in the attached PDF in our global response above for your reference. We will include these qualitative samples and explain them in more detail in our revised paper.
### Q3: I was curious if fine-tuning models (smaller models?) on such generated data would have been a good experiment?...
Thanks for your suggestion. It would be interesting to conduct fine-tuning experiments on smaller models using generated data. Currently, there is no training split available in the benchmarks and tasks used in this paper. However, synthetic datasets or relevant crowd-sourced datasets could be considered (e.g., from open-domain GitHub tasks or other code generation benchmarks).
Furthermore, while using fine-tuned models might be cheaper during inference, they will still be subject to unseen security problems or novel red-teaming malicious prompts. In our global response #3 with fine-tuning methods, the best finetuned model is far from perfect and its performance can be further optimized. Our method INDICT can complement these approaches. Our results show that using INDICT can significantly boost the performance of a fine-tuned model, from 60% (SFT+DPO model) to more than 73%.
*[1] HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal*
*[2] CodeUltraFeedback: An LLM-as-a-Judge Dataset for Aligning Large Language Models to Coding Preferences*
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer rV62,
We hope our rebuttal response has addressed your concerns about the paper. As the authors-reviewers discussion will end in a few days, please do let us know early if you still have any questions or need further clarification.
Regards, | Summary: This paper proposes a framework for generating both safe and helpful code. It integrates internal dialogues of critiques about the given task and the corresponding generated response. It queries external knowledge through relevant code snippets and tools like web search and a code interpreter. INDICT is evaluated on multiple tasks across 8 different languages. The results show that an advanced level of critique can significantly improve code quality.
Strengths: This paper presents a very technically sound system for generating safer code. It improves the safety of code generation with both preemptive and post-hoc feedback, which is quite complete. The authors conduct solid and thorough experiments for their system, which proves significant improvement over existing methods.
Weaknesses: This paper presents a useful and well-designed framework. However, I have some doubts about the novelty of the proposed method. The framework uses actor-critic architectures and includes safety and helpfulness critics, which have been mentioned in previous papers like [1].
In this framework, the critics use data from web searches, Wikipedia, and OpenAI, which raises my concerns about data leakage. For example, it might be easy to find the original web page for a CWE by searching its description when evaluating on the datasets built from CWEs. Using OpenAI as a knowledge base could also have similar issues. Because of this, while the framework performs well on benchmarks, I’m concerned about the generalizability of this framework.
Besides, as the paper mentioned, the code execution could invoke unexpected consequences. It seems the authors do not address this issue, which might require sandboxing execution or other safe execution methods.
[1] Le, H., Wang, Y., Gotmare, A. D., Savarese, S., and Hoi, S. C. H. (2022). Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can you provide more details on how your method differs from existing actor-critic architectures? Specifically, what new contributions does your framework make compared to previous works?
2. How do the safety and helpfulness critics contribute to the overall performance of the framework? Are there any specific examples or case studies you can provide to illustrate their impact?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: …how your method differs from existing actor-critic architectures?...
We provided a systematic and comprehensive comparison between INDICT and related actor-critic approaches such as CodeRL in our global response#1 above. Compared to existing actor-critic methods, INDICT is different in three major aspects:
* (1) INDICT aims to optimize both helpfulness and safety awareness in the generated output code. Most of the current actor-critic approaches are designed with a single criterion (such as functional correctness). Simply extending these methods with additional instructions on safety criteria is sub-optimal (see our results with baseline actor-critic methods in our global response#2 above).
* (2) INDICT integrates critic with knowledge grounding from external tools to create more reliable feedback to the actor agent. Most current methods only use code test results as the only external feedback to improve the quality of output code.
* (3) To implement (1) and (2), we enhanced the existing actor-critic framework with a multi-critic and tool-enabled collaboration approach. This approach can autonomously generate more reliable and holistic feedback for both the safety and helpfulness of output code generation.
We will include a more detailed comparison in our revised paper.
### Q2: In this framework, the critics use data from web searches, Wikipedia, and OpenAI, which raises my concerns about data leakage…Because of this, while the framework performs well on benchmarks, I’m concerned about the generalizability of this framework.
Thanks for raising this concern. We want to highlight that our use of external tools is only for querying relevant information snippets to supplement the critic feedback, not to directly find a solution to a given task. We do not provide any additional information to the critic for generating queries, e.g., no information about the related CWEs or security issues, so the critic has to independently decide what information to search for. We conducted an analysis of the tool queries generated by the critic agents and found that, on average per task, the words appearing in queries overlap with less than 5% of the words in the original task description. Therefore, data leakage is minimal.
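For reference, the overlap statistic can be computed along the following lines (a simplified sketch assuming whitespace tokenization; the actual analysis may differ in tokenization details):

```python
def query_overlap_fraction(task_description, queries):
    # Fraction of unique words in the task description that also appear
    # in the critic's generated tool queries (lower = less potential leakage).
    task_words = set(task_description.lower().split())
    query_words = set(" ".join(queries).lower().split())
    if not task_words:
        return 0.0
    return len(task_words & query_words) / len(task_words)

frac = query_overlap_fraction(
    "write a function that hashes the user password before storage",
    ["secure password hashing best practices", "CWE-328 weak hash"],
)
print(round(frac, 2))  # -> 0.1 (only 'password' overlaps)
```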
Furthermore, from our experiment results (Table 2 in the paper), even without access to external tools, our approach with a multi-critic collaboration system still leads to significant performance gains in base models like CodeLlama or CommandR. When stronger and more recent GPT models are used (e.g. results with GPT4o-mini in our global response#2), the direct generation result is still rather low i.e. models are not heavily exposed to the test data, and more improvement could be done with methods like INDICT. We will include a more detailed discussion in our revised paper.
### Q3: Besides, as the paper mentioned, the code execution could invoke unexpected consequences…
Thank you for your suggestion. A sandboxed execution environment is an engineering feature we would like to integrate with INDICT; it is orthogonal to the core novelty of our method. We will describe in more detail the requirement for a sandbox environment for secure code execution in our revised paper.
### Q4: How do the safety and helpfulness critics contribute to the overall performance of the framework? Are there any specific examples or case studies you can provide to illustrate their impact?
As demonstrated in our ablation experiments in the paper (see Table 3), using individual component critics leads to sub-optimal performance and is not as good as our method with a multi-critic collaboration system. To illustrate the benefits of our approach, we included a qualitative example in the PDF attached to our global response above (see Figure 1).
Compared to the baseline methods, our approach can simultaneously improve both the helpfulness and safety of the output code generation. Specifically, given a relevant information snippet by the safety critic (about the hashing method SHA-256), our actor agent correctly revised the code with a more secure hashing method, avoiding using MD5 hashing and the common security risk CWE-328. At the same time, our generated code is generally more helpful with properly modularized functions implementing supporting features such as input validations. As we noted, this feature has generally emerged in code solutions by collaborative agent systems like CAMEL and INDICT. We will explain the qualitative samples in more detail in our revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer SdG3,
We hope our rebuttal response has addressed your concerns about the paper. As the authors-reviewers discussion will end in a few days, please do let us know early if you still have any questions or need further clarification.
Regards, | Summary: This paper presents a new framework called INDICT that employs two complementary critic agents to improve both the safety and helpfulness of LLM-generated code. Each critic agent is obtained by prompting an LLM with task-specific instructions and knowledge obtained from external tools such as web search and Wikipedia. To better generate knowledge-grounded critics, the authors propose a thought-action-observation process where the critic agent first generates the initial critic without external knowledge, then predicts search keywords or code snippets for knowledge retrieval based on the initial critic, and finally executes the external tools to retrieve the knowledge. The two critic agents can interact with each other by including the critics generated by the other agent in the current turn and past interactions in the prompt. To avoid computation overhead, INDICT also includes a summarization agent to summarize the previous interactions and uses the summary as the context for the next turn. INDICT is evaluated on eight different tasks across eight programming languages from five benchmarks. The results show that INDICT can achieve better results compared with vanilla LLMs such as GPT-4 and Llama-3. The authors also did an ablation study to show the effectiveness of using two critic agents vs. one agent and incorporating critics after the initial code generation vs. after code execution.
Strengths: 1. The idea of using two complementary critic agents and using a summarizer to avoid the computation overhead is new and interesting.
2. The proposed framework is evaluated on multiple benchmarks, tasks, and programming languages.
3. The results demonstrated the effectiveness of using two critic agents.
=== Post Rebuttal Comment ===
The additional results provided in the rebuttal look promising and I believe this work can be strengthened by including those results. However, due to the time limit and also the lack of details in the rebuttal response, I am not able to make a full assessment of the correctness of the results.
Weaknesses: 1. My main concern about this work is the weak comparison baselines. In the evaluation, INDICT is only compared with vanilla LLMs rather than advanced and relevant prompting methods such as CodeRL, Self-Correct, and Self-Collaboration. While these prompting methods are initially designed for a single criterion such as code helpfulness, the authors can simply include an additional instruction about the other criterion in the original prompts of these methods to instruct the LLM to give two kinds of critics. This sounds like a more realistic baseline. It would be interesting to see to what extent combining two separate critic agents via the sophisticated interaction proposed by this work outperforms simply mentioning two critic criteria in a SOTA self-critic (or self-refinement) pipeline.
2. The ablation study does not really examine the effectiveness of the novel technical bits proposed in this work. Compared with simply combining two critic agents as in a multi-agent framework, a novel idea in this work is to use a summarizer to avoid computation overhead. However, this summarizer is not evaluated. The authors should compare INDICT vs. INDICT without a summarizer.
3. The thought-action-observation is also an interesting idea that should be evaluated in the ablation study. How much improvement does this process achieve compared with existing RAG and tool-enhanced methods?
4. Does INDICT use zero-shot prompting or few-shot prompting in each step? Based on the prompts in Appendix F, it seems INDICT uses zero-shot prompting in each step. However, this design is not very convincing for certain steps like generating relevant text queries or code snippets. Without few-shot demonstrations, it is hard to restrict the LLM to generate valid outputs. So I was wondering how exactly the authors did that. Also, how did the authors extract search keywords from the LLM response?
5. The justification for using a GPT evaluator is weak. While one or two published papers may have adopted that method, it does not mean that it is a rigorous evaluation method and should be followed broadly by the community. To improve the rigor of evaluation, the authors should consider performing a manual analysis of a small set of samples.
6. The analysis of the experiment results is shallow and does not provide deep insights like why INDICT performs better than other methods and where INDICT fails. The authors should add a failure case analysis.
7. In the related work, the authors should provide a more detailed discussion of how INDICT differs from existing self-critic approaches and multi-agent frameworks. The current discussion is short and handwavy.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How well does INDICT perform compared with the stronger baselines mentioned above?
2. What is the effectiveness of using a summarizer?
3. How much improvement does the thought-action-observation process achieve compared with existing RAG and tool-enhanced methods?
4. Does INDICT use zero-shot prompting or few-shot prompting in each step?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The discussion on limitations looks reasonable. It would be helpful to have some quantification on the computation cost.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments! Please refer to our responses below to your questions.
### Q1: …In the evaluation, INDICT is only compared with vanilla LLMs…
Following your recommendations, we selected 7 strong baselines from different research lines (see our global response #2 above). We also include a version where additional instructions are given to models to provide both safety and helpfulness critics e.g. instruct models to “focus on both the security and helpfulness of the solution.” For multi-agent methods, we included these instructions in all agents (analyst, tester, etc.) or introduced a new critic agent (as recommended in CAMEL's appendix). Note that for all baseline models, we followed similar generation budgets to fairly compare the results (up to 3 rounds of revision). Our results demonstrate the SoTA performance of INDICT against all the baselines. While we observed the improvement of baseline methods with additional instructions (marked with the suffix ‘+’), their results are sub-optimal. We will include all results and a more detailed analysis in our final revision of the paper.
### Q2: …The authors should compare INDICT vs. INDICT without a summarizer.
We conducted a new ablation to evaluate the summarizer (see our global response #3 above). We simply removed the summarizer and let the actor agent receive the full dialogue history. From the results, we noticed the performance degraded to 87% and 72% in safety and helpfulness, respectively. This likely happens because the much longer dialogue history makes it harder for the actor agent to capture all critic feedback and generate new code. This model variant also incurs more computation due to the long context of the dialogues. We will include the results and a more detailed analysis in our revised paper.
### Q3: How much improvement does the thought-action-observation process achieve compared with existing RAG and tool-enhanced methods?
We conducted additional experiments (see our global response#3 above) with two simple variants: (1) RAG-based critics which use the original task description to retrieve relevant knowledge and generate augmented critiques; and (2) tool-based critics which directly use external tools to query “what is wrong with the solution in terms of its <security/functionality>?”; the query output is then treated as a critic feedback. The results show that these variants are inferior to our proposed framework. We found that the queries in these methods are often too vague or ambiguous to search for meaningful information snippets.
### Q4: Does INDICT use zero-shot prompting or few-shot prompting in each step? …
INDICT uses zero-shot prompting in each step. We prompt the critic agent to condition on the current critique and generate a unique query to obtain more knowledge. We extract the search keywords following our instruction templates, e.g., in the form of 'Search[keyword]'. For generating code snippets, we prompt the model similarly but ask it to wrap the output code in ```. Note that the current prompts in the Appendix (Listings 4 and 5) are for tool-finetuned models like CommandR, which automatically generate tool parameters given a tool-calling function definition. We will include more prompts for these steps for other standard models in our revised paper.
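As an illustration, the keyword extraction step could be implemented with a simple pattern match (a hypothetical sketch; the exact instruction templates are in Appendix F of the paper):

```python
import re

def extract_search_keyword(response):
    # Pull the keyword out of a template call of the form 'Search[keyword]'
    # in the critic agent's zero-shot response; None if no query was emitted.
    match = re.search(r"Search\[(.+?)\]", response)
    return match.group(1) if match else None

print(extract_search_keyword("Thought: check this term. Search[CWE-328 weak hash]"))
# -> CWE-328 weak hash
```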
### Q5: ...the authors should consider performing a manual analysis of a small set of samples.
To supplement the GPT-based results on helpfulness, we manually evaluated a small set of 40 samples from CyberSecEval1 (5 random samples per language). From the table below, we found that even though the GPT-predicted results are often higher than the human evaluation, the observation that INDICT performs better than the baselines still holds.
| **Baseline** | **Evaluation** | **INDICT wins** | **Draw** | **Baseline wins** |
|:---:|:---:|:---:|:---:|:---:|
| **Direct Gen** | Human | 74.8 | 8.5 | 16.7 |
| | GPT | 80.2 | 0.3 | 19.5 |
| **Reflexion+** | Human | 57.1 | 4.2 | 38.7 |
| | GPT | 58.4 | 0.0 | 41.6 |
Note that the security metric is done by the code analyzer from [1] which already demonstrated a high level of accuracy in detecting security issues.
### Q6: …provide deep insights like why INDICT performs better than other methods and where INDICT fails.
From our additional results (see our global response#2 and #3 above), we noticed that the best baseline models (Self-repair+ or Reflexion+) are as good as an ablation method with basic tool-based critiques where the external tool is vaguely queried: “What is wrong with the current solution?”. This shows that the quality of the critic feedback in the current methods is not good enough, simply focusing on shallow reviews of generated solutions. In INDICT, we enable the model to analyze and choose what information snippets to query and how to integrate this information into their critiques (e.g. queries about an uncommon security term or certain coding best practices). Combined with our multi-critic collaboration approach, INDICT demonstrates strong empirical results against other methods.
For qualitative results on how INDICT performs better and example failure scenarios, please refer to our attached PDF in the global response above. We will include more analysis in our revised paper.
### Q7: ... how INDICT differs from existing self-critic approaches and multi-agent frameworks.
Please refer to our global response#1 for a systematic comparison of INDICT and the related work, including the self-critic approaches and multi-agent frameworks. We reviewed each method by the following features: helpfulness or safety-based qualities of generation outputs, execution feedback (execution of output code if applicable), tool-enhanced feedback (access to external tools like web search), multi-critic collaboration (engage multiple LM agents for critic generation) and supervision free (no training data required). We will include the full details of our comparison in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer GLux,
Thank you again for your reviews and comments on our paper. In our rebuttal, we have tried to address your concerns as much as possible, including your main concern about comparison to baselines (Q1), ablation experiments (Q2 and Q3), and other concerns such as prompting, manual evaluation, comparison to related work, etc. (Q4 to Q7). Please let us know if you still have any questions before the authors-reviewers discussion ends soon.
Regards,
---
Rebuttal 2:
Comment: Dear Reviewer GLux,
We hope our rebuttal response has addressed your concerns about the paper. As the authors-reviewers discussion will end in a few days, please do let us know early if you still have any questions or need further clarification.
Regards, | Rebuttal 1:
Rebuttal: We thank the reviewers for providing insightful comments on our paper. Please refer to this global response for our high-level answers to the common concerns. For more detailed explanations and analysis, please refer to the corresponding threads of individual reviewers.
### 1. Comparison with related baselines and our contributions
In the table below, we provided a more comprehensive and systematic comparison of our method to related work. We compared INDICT and related methods from 3 directions: self-refine/self-critic, multi-agent, and finetuning. Compared to these methods, INDICT is a more well-rounded generation framework with the following contributions: (1) integrates code execution-based feedback and enhances them with external knowledge, (2) targets both helpfulness and safety of output code, and (3) facilitates an interactive and supervision-free multi-agent collaboration framework. Our additional experiment results (see the next response) showcase the efficacy of INDICT.
| Method | Helpfulness | Safety | Execution feedback | Tool-enhanced | Multi-critic collab | Supervision free |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| **_Self-refine approach_** | | | | | | |
| CodeT, AlphaCode, MBR-Exec | ✅ | | ✅ | | | ✅ |
| Self-correct, ILF | ✅ | | | | | ✅ |
| CodeRL, Self-edit | ✅ | | ✅ | | | |
| Self-repair, Self-debug, Reflexion | ✅ | | ✅ | | | ✅ |
| **_Multi-agent approach_** | | | | | | |
| Self-collaboration, AgentCoder | ✅ | | ✅ | | | ✅ |
| CAMEL | ✅ | | | | | ✅ |
| ChatDev, Self-org Agents | ✅ | | ✅ | | ✅ (?) | ✅ |
| MetaGPT, AgentVerse | ✅ | | ✅ | ✅ | | ✅ |
| **_Finetuning approach_** | | | | | | |
| CodeUltraFeedback, StableAlignment | ✅ | ✅ | | | ✅ | |
| SafeCoder | ✅ | ✅ | ✅ | | | |
| **INDICT** | **✅** | **✅** | **✅** | **✅** | **✅** | **✅** |
### 2. Additional experiment results to compare with stronger related baselines
From the above table, we selected representative baselines and evaluated them on a validation test split (random samples of 20% of the CyberSecEval benchmark). With GPT4o-mini as the base model, we adapted the baselines in their original implementation and also extended them with additional instructions (detailed criteria of safety and helpfulness). We marked these enhanced baselines with the suffix ‘+’. We observed that INDICT can lead to SoTA performance by both security and helpfulness (more than 90% and 81% respectively). Strong baselines like Reflexion+ and CAMEL+ improve with additional critic instructions but are not as strong as INDICT.
| **Method** | **Safety** | **Helpfulness** | **S+H** |
|---|:---:|:---:|:---:|
| Direct Gen | 78.2 | 50.0 | 64.1 |
| INDICT | **90.9** | **81.4** | **86.1** |
| **_Self-refine methods_** | | | |
| Self-debug | 80.0 | 52.7 | 66.3 |
| Self-debug+ | 79.7 | 53.9 | 66.8 |
| Self-correct | 80.7 | 59.7 | 70.2 |
| Self-correct+ | 86.7 | 68.5 | 77.6 |
| Self-repair | 83.7 | 69.6 | 76.6 |
| Self-repair+ | 86.6 | 70.9 | 78.8 |
| Reflexion | 83.3 | 68.5 | 75.9 |
| Reflexion+ | 86.9 | 69.6 | 78.2 |
| **_LM agent methods_** | | | |
| Self-collab | 78.7 | 52.3 | 65.5 |
| Self-collab+ | 79.1 | 66.2 | 72.7 |
| CAMEL | 81.6 | 63.7 | 72.6 |
| CAMEL+ | 82.6 | 70.2 | 76.4 |
We also conducted experiments to compare INDICT with finetuning-based methods. Using CodeLlama-7b-instruct as the base model, CodeUltraFeedback finetunes the model on a large-scale dataset with annotations of code preferences. We observe that the best model (SFT + DPO finetuning) improves the results in both safety and helpfulness, but not as much as INDICT. As INDICT can complement finetuning-based methods, we applied INDICT to the best CodeUltraFeedback model to achieve even further performance gains (from 60% and 63% to 73%).
| **INDICT vs. finetuning methods** | **Safety** | **Helpfulness** | **S+H** |
|---|:---:|:---:|:---:|
| Direct Gen | 56.3 | 50.0 | 53.2 |
| INDICT | 65.3 | 62.1 | 63.7 |
| CodeUltraFeedback (SFT) | 58.5 | 49.9 | 54.2 |
| CodeUltraFeedback (DPO) | 62.7 | 56.0 | 59.3 |
| CodeUltraFeedback (SFT+DPO) | 63.9 | 57.9 | 60.9 |
| CodeUltraFeedback (SFT+DPO) +INDICT | **74.9** | **72.4** | **73.7** |
### 3. Additional ablation results to demonstrate the effectiveness of the proposed components
Using GPT4o-mini as the base model, we conducted additional ablation experiments with different variants of INDICT: (1) one simply using a single critic agent for both safety and helpfulness; (2) one without using a critic summarizer and maintaining a full dialogue history of critiques in the critic context; (3) ones replacing the thought-action-observation critic generation with RAG or tool-based generation: (3a) RAG uses the original task description to retrieve relevant knowledge and generate grounded critics, and (3b) tool-based method uses web search/Wikipedia and a query “what is wrong with the solution in terms of its <security/functionality>?” and query output is treated as a critique. The result table below shows that our proposed INDICT obtains the optimal performance, with a more well-rounded performance in both safety and helpfulness.
| **Ablation methods** | **Safety** | **Helpfulness** | **S+H** |
|---|:---:|:---:|:---:|
| INDICT (full) | **90.9** | **81.4** | **86.1** |
| - one critic for both criteria | 87.3 | 76.4 | 81.9 |
| - no critic summary | 87.9 | 72.2 | 80.1 |
| - RAG-based critics | 87.9 | 74.4 | 81.1 |
| - tool-based critics | 85.5 | 72.7 | 79.1 |
### 4. Qualitative analysis to demonstrate the benefits of our methods on generated code and analyze failure cases
To address the concerns about the benefits of INDICT and where INDICT may fail, we demonstrated some qualitative analysis in the attached PDF document. In Figure 1, we show that given a code generation task, INDICT can generate code that is more secure and robust than strong baselines (Direct generation, Reflexion, and CAMEL). In Figure 2, we illustrate cases where INDICT may fail due to nontrivial errors.
Pdf: /pdf/f4b2fb39d92f193e1ebd1237c4f1eb95d2e9827a.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Query-Based Adversarial Prompt Generation | Accept (poster) | Summary: This paper proposes a query-based adversarial prompt generation method. It eliminates the prior attack's dependence on adversarial transferability and local surrogate models. The attack can evade the OpenAI and Llama Guard safety classifiers with a near 100% success rate.
Strengths: ## Originality
* The proposed improvement is interesting and novel.
* The related works are generally comprehensive.
## Quality
* Several practical considerations are included.
* The breadth of evaluations is good.
## Clarity
* The first three sections are generally well-written and easy to follow.
## Significance
* The proposed technique seems simple, but the overall contribution towards query-based attacks is important.
Weaknesses: ## Originality
I don't have major concerns here. The technical modifications of the existing attack GCG seem simple, but the good results can justify them. It is suggested to summarize and highlight the technical challenges.
## Quality
**Q: The evaluation setting is quite confusing.**
1. The paper was a bit vague regarding the attack's objective. Is it for a general jailbreaking attack (like GCG) or just eliciting the model to repeat a given harmful content? This also makes it hard to understand the attack success rate -- when do we count an attack as successful?
2. The evaluations do not match the major contribution. In Section 4, most of the evaluations are focused on the surrogate-based attack on various settings, including the actual GPT-3.5 model (a side question is why GPT-4 was not included since a black-box attack should work regardless of the underlying model). The actual contribution, the surrogate-free attack, was only evaluated on open models and then a different task of content moderation. The paper did not justify why the surrogate-based attack was not evaluated in the latter settings, and why the surrogate-free attack was not evaluated in the former settings. Ideally, the primary attack is supposed to be evaluated against all involved settings, and the surrogate-based attack is provided as a reference or ablation study.
## Clarity
* L33-45. It was a bit confusing when you first called out the surrogate-free property, then switched to a surrogate-based attack, and only finally, the further optimization that removes the surrogate model.
* L64-66. The high success rates in black-box attacks against classification tasks depend on two factors: the number of queries and the perturbation threshold. Most black-box attacks achieve 100% success rates within the first few queries, but the inputs are either pure noise or heavily perturbed.
* Figure 1b. Please use different styles to distinguish curves of different groups of proxy or target. Right now, it is very hard to tell what the figure conveys and what we can learn when comparing curves with the same proxy size.
## Significance
N/A
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for carefully evaluating our work. We apologize for any confusion we may have caused, and will thoroughly review the presentation of our paper to avoid confusing future readers.
## Originality
Following your suggestion, we will carefully revise the introduction to highlight the main contributions of our work. In particular, we will highlight the practical challenges that make black-box harmful string attacks difficult to run in practice, since overcoming these obstacles represents a significant part of our technical contribution.
## Attack objective
The objective of the attack is to find a prompt that causes the model to output a specific string exactly. The attack is thus successful when the model generates exactly the desired string (including capitalization, punctuation, etc.). Thus the task is much more restrictive than jailbreaking. We elaborate on this task in the general rebuttal and we will add additional emphasis to the paper to clarify this point.
[1] Zou, Andy, et al. "Universal and Transferable Adversarial Attacks on Aligned Language Models." (2023).
## Inclusion of GPT-4
The reason GPT-4 was not included in our results is that it does not support logprobs. Although the attack may still be possible without logprobs, the expense would be beyond our budget.
## Evaluations and contribution
The reason for this split is because it is best to use the surrogate when it is available (which is only under certain circumstances). Note that in the language modeling experiments, we are using a base pretrained model as the surrogate. Such surrogates are freely available and thus it makes sense to use them. However the downside is that they do not reveal anything about the alignment of the target black-box model. This is acceptable because we are only using the surrogate in a weak sense: to pre-filter the candidates before they are sent to the black-box model. We find that this weak pre-filter is still better than the position-based pre-filter used in the surrogate-free setting.
However in some settings, such as in our content moderation examples, we may not have access to any kind of surrogate. Thus in this setting we apply the surrogate-free algorithm. Unfortunately we cannot afford to run the surrogate-free attack for closed models due to budget constraints, since we believe it will be much worse (i.e. much more expensive) than the attack using the weak surrogate. On the other hand, we cannot run the weak surrogate algorithm because the pretrained model surrogate does not make sense in the context of content moderation.
You are correct that this arrangement appears awkward to readers. We will revise the discussion of the surrogate-free attack to make it more clear why the two attacks are applicable in different situations and we will aggregate the results common to both algorithms in Table 1.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Below is my understanding of the clarified contributions and evaluations, and my recommendation for adjusting the claims:
1. The main contribution remains a query-based attack and an effort to reduce the dependency on transferability.
2. The main technique is a better pre-filter based on an unaligned model; most evaluations are in this setting. However, this setting may not entirely remove the dependency on transferability, in the sense that it cannot be called surrogate-free or proxy-free attacks.
3. In some settings, the model-based filter can be eliminated, but the resulting surrogate-free attack is generally too expensive to run. Therefore, the claims of surrogate-free attacks should be tuned down.
After re-evaluating the paper based on the above breakdown, I'm borderline okay with the novelty justified by the performance, and strongly recommend the authors better articulate the technical challenges of 2. | Summary: This paper modifies GCG, an attack on LLMs to elicit harmful responses, to create a query-based black-box attack with two primary goals: 1) Enable targeted attacks that are not possible with simple transfer-based attacks and 2) Enable attacks to still occur when no feasible surrogate model exists. To modify GCG, the key change is that the authors maintain a min-max heap of possible adversarial suffix candidates, searching for word replacements on the best currently known suffix candidate. The authors also experiment with prompt initialization, finding that including the target as many times as fits to be useful. Finally, as a query-optimizer in a proxy-free setting with random tokens instead of gradients, the authors experiment with trying a word replacement once in each position, and then taking the best position and trying more word replacements there. The authors find they are able to generate targeted attacks, which transfer attacks cannot, and can attack real-world models with proxy-free attacks.
Strengths: Strengths include 1) enabling targeted attacks to elicit specific phrases, 2) enabling proxy-free attacks on real-world models without a surrogate, and 3) including optimizations to manage the query budget in the proxy free setting. Thus, the paper makes strides towards more practical attacks with adversaries with less power. The authors have also found their initialization strategy to help improve attack quality.
Weaknesses: The primary weaknesses of this paper include a lack of comparisons against other black-box attacks (e.g., Andriushchenko et al., "Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks", and Lapid et al., "Open Sesame! Universal Black Box Jailbreaking of Large Language Models") on the proxy-free attacks. This also limits the technical novelty, as the attack is mostly a series of small changes to GCG. The other weakness is that this paper does not evaluate a wide variety of different types of LLMs - one of the themes is that the proposed attack is useful in settings where there is no good surrogate, so it would be nice to see how the attack does against transfer attacks across different families of LLMs.
Technical Quality: 1
Clarity: 3
Questions for Authors: How does the attack perform against Andriushchenko et al. and Lapid et al.?
How does the attack do against transfer attacks across different families of LLMs?
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: Adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for carefully assessing our work. We hope the following will address all outstanding concerns and we are happy to further discuss during the discussion period.
## Comparisons against other black-box attacks
Unfortunately, there is a dearth of works to compare to when it comes to eliciting exact harmful strings. The attack was originally proposed in GCG, where it was demonstrated against open models, but since then it has received little attention. This may be because there was no known method to cheaply compute the loss required for the harmful string attack under the constraints of popular APIs.
Regarding the two suggested attacks:
Andriushchenko et al. focus only on controlling the first token generated after the prompt. This is because the first log-probability is directly given by OpenAI’s API. This is sufficient to elicit certain harmful behaviors, but not harmful strings of arbitrary length.
Lapid et al. focus on jailbreaking, using a loss based on the embedding similarity to some target phrase, which is typically a short affirmation such as “Sure, here is”. This approach does not adapt well to harmful string targets because of the many local minima in the embedding similarity landscape.
To demonstrate this, we repurposed GCQ to try to maximize the embedding similarity between an input string and a target string from harmful strings using OpenAI’s text-embedding-ada-002 model. In each of the 10 cases we tried, the search terminated in a local minimum, finding a string with a similar embedding but not the exact target string itself. (Note that this is much easier than solving the same problem indirectly through the [prompt → generation] mapping of an aligned LLM as needed for a harmful string attack.) On the other hand, the use of the embedding distance makes perfect sense for jailbreaking, where the target semantics are all that matter.
The difficulty here is that neither of these attacks are designed to operate in our setting. We elaborate on this setting and why it is important in our general rebuttal.
## Technical novelty
Although the actual optimization algorithm requires only minor changes to GCG, we would like to suggest that the computation of the loss itself is a significant technical contribution in its own right. To our knowledge, our attack is the only one which is able to elicit exact harmful strings (spanning many tokens) from a proprietary model, which is much more difficult than jailbreaking or eliciting harmful behaviors. The key to this ability is a procedure to quickly and inexpensively score the conditional probability of an entire sequence of generated tokens. Indeed one of the primary changes we make to the optimization algorithm (the introduction of the prompt buffer) is purely to help reduce the cost of the loss calculation.
## Transfer between model families
We evaluate GCG in the pure-transfer setting within the same model family (in Table 1) and find that it performs very poorly (0 ASR) in the harmful string setting. This is in contrast with harmful behaviors, where transfer works quite well. Given the additional difficulty when transferring across model families, we would not expect it to work well in that setting. For example, we find that none of the strings computed for Vicuna 7B transfer to either Mistral 7B Instruct v0.3 or Gemma 2 2B Instruct.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: The author responses to my questions seem fair. I have updated my score accordingly. | Summary: The paper presents a novel query-based attack method designed to generate adversarial examples that induce harmful outputs in aligned language models. Building on the GCG attack by Zou et al. (2023), this method employs a query-based strategy that eliminates the need for transferability, resulting in a significantly higher success rate. The effectiveness of the attack is demonstrated on GPT-3.5, even in black-box scenarios. Additionally, the paper details universal and prompt-specific attack techniques that effectively circumvent OpenAI’s safety classifier, achieving near-perfect evasion rates.
Strengths: + Despite the work from Zou et al. (2023) disclosing the vulnerability of production LLMs to adversarial examples capable of producing harmful strings and behavior, this paper overcomes some limitations related to attack transferability. The authors introduce a novel approach that leverages the API of production LLMs, like GPT-3.5, significantly improving the success rate of attacks and demonstrating the fragile behavior of LLMs in the presence of a smart adversary.
+ The paper is well-written, clearly describing each step of the attack, including strategies to bypass OpenAI’s API restrictions. For instance, the method for calculating the log probabilities is particularly smart and well-elaborated.
+ Overall, the experimental evaluation is comprehensive and provides valuable insights. For example, the results in Table 1 show the limitations of previous attacks like GCG and AutoDAN, whereas the attacks proposed by the authors are very effective both in white-box and transfer attack settings.
+ The authors also analyzed the effect of universal attacks, revealing interesting results in terms of the trade-off between the budget/cost and the effectiveness of the attack.
Weaknesses: + Although it is a relevant attack targeting a commercial system like ChatGPT 3.5, the scope of the attacks is somewhat limited as it requires API access to provide the logprob and only works with GCG. Nevertheless, I think that the paper clearly shows the risks of providing access to the logprobs in the API.
+ The paper only focuses on the attacker’s perspective. Defenses or possible mitigations are not properly discussed in the paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: + Can the authors explain why the attack success rates of AutoDAN are so low in the experiments reported in Table 1?
+ Can the authors provide any intuition about how to defend against an attack like this?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations have been addressed appropriately in different sections of the paper, including, Section 3.2.1, and Appendix B, for example.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their helpful comments! We hope that the following will address any residual concerns and we are happy to further discuss during the discussion period.
## Performance of AutoDAN
The reason that AutoDAN performs so poorly is that it is being evaluated in an “unfair” way (out of necessity). The original AutoDAN code is designed to perform jailbreaking, which is coded in roughly the following way: {the attack is successful if the model does not generate any of the following strings: (“No”, “Sorry”, “As an AI”, …)}. We replaced this objective with the objective we used in GCQ: {the attack is successful if the model generates exactly “<example harmful string>”}. Because the harmful string objective is much more restrictive than the jailbreaking objective, the optimization of AutoDAN did not perform well. Of course this is reasonable since AutoDAN was not designed to work in this setting.
We will adjust the presentation of this result to stress that the evaluation of AutoDAN is in a new setting where it cannot be expected to succeed. We hope that including this result will underscore that, in general, jailbreaking methods may not generalize to more difficult tasks, such as harmful strings.
## Defenses
In terms of defenses, there are a variety of techniques that target various aspects of the text generation pipeline:
The prompt can be filtered for e.g. high perplexity inputs (proposed in [1]). The attack we describe does often produce seemingly random sequences of tokens, so we would need to add additional functionality (for example, perplexity regularization) to evade this defense.
The model generations can be filtered for harmful content by a second model. However as we show, it is possible to find strings that bypass popular content moderation models. Thus one could imagine a two-stage attack, where we optimize to generate a harmful string which also passes through the content filter.
A different kind of defense would be to rate-limit queries that are very similar to previous queries (for example, if the API receives thousands of strings which all differ from each other by a small number of tokens). However, this has the potential to harm honest users, so it cannot be too harsh.
Of course there is also the option of restricting access to logprobs, logit_bias or both as you note.
Many more possible defenses are listed in [1].
Of course, the best defense will always be a multilayered approach. Each layer of defense makes the attack much more difficult, as it is difficult to evade many defenses at the same time. We will add a section to the paper that discusses possible defenses in the spirit of what we've written here.
[1] Jain, Neel, et al. "Baseline defenses for adversarial attacks against aligned language models." arXiv preprint arXiv:2309.00614 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your comments. I think that the paper makes a nice contribution and I'm keeping my positive score. | Summary: This paper delves into the topic of adversarial examples and prompt injection attacks on aligned language models. A new strategy (GCQ) for black-box adversarial attacks is proposed which does not need access to a surrogate model, but only uses black-box access to the target model. This strategy is an extension of GCG, the current state-of-the-art attack. The proposed method works against both generative models and safety classifiers, as the experiments show.
Strengths: - The experimental protocol covers a range of setups, including white-box, black-box, and universal attack.
- The main novelty of the paper is the no-proxy version of GCQ.
- The proposed attack GCQ shows good practical performance, in both white- and black-box setups, when compared to the existing state-of-the-art attack GCG and AutoDAN.
- Implementation included.
Weaknesses: Significance:
- It would be great to include more attack baselines, e.g. AutoDAN attack from [Zhu et al., 2023].
- One of the most relevant qualities of the method is the independence of surrogate models. It seems relevant to compare against other attacks from the same category, e.g., PAL (cited as [28] by the paper, but not used as baseline).
- It also makes sense to include the GCQ without proxy access in Tab. 1.
- The topic of performance of the attack against existing defenses is not covered.
Novelty:
- For the GCQ version that still relies on a surrogate, the changes w.r.t. GCG seem minor.
Minor:
- Additional proofreading seems necessary.
- Sec. 4.2 and 4.6 are missing from the outline at the beginning of Sec. 4.
- L217 "hamful" -> "harmful"
## References
- [Zhu et al., 2023] Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Furong Huang, Tong Sun. AutoDAN: Automatic and Interpretable Adversarial Attacks on Large Language Models. https://openreview.net/forum?id=ZuZujQ9LJV
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the GCQ attack perform against existing defenses?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for carefully reading our work and providing useful and constructive feedback. We hope that the following will address the points raised and are happy to further discuss during the discussion period.
## Comparison to other attacks
Unfortunately, there is a lack of existing works to compare to when it comes to the harmful string attack. This attack was originally proposed in GCG, where it was demonstrated against open models, but since then, it has received little attention. This may be because there was no known method to cheaply compute the loss required for the harmful string attack under the constraints of popular APIs.
Regarding the two suggested attacks:
1. The AutoDAN attack from [Zhu et al., 2023] is focused on jailbreaking under the constraint of an input perplexity filter. They do not evaluate in the setting of harmful strings and their optimization procedure is carefully tailored to the jailbreaking objective. Thus, it is not clear how to modify their method to elicit a specific string. Additionally they do not provide code, which makes the evaluation of their method difficult.
2. Similarly, PAL [28] focuses on Jailbreaking in the black-box setting. They use a similar collection of tricks to our work (developed independently and concurrently) to make the loss calculation feasible for commercial APIs, although the loss itself is different due to the different attack objective. Since PAL is also based on GCG, running PAL with the objective of GCQ would be very similar to running GCQ itself. Thus it is difficult to find a comparison that makes sense here.
This underscores the lack of existing work on the harmful string attack. We believe this attack is meaningful and important, as we outline in the general rebuttal, and our hope is to make some initial progress in this direction beyond GCG.
Regarding GCQ with no proxy, we will add the results to Table 1 as requested. The numbers for Vicuna 7B were already computed for Section 4.4 so it is a simple matter to copy them over.
## Novelty
Although the actual optimization algorithm requires only minor changes to GCG, we would like to suggest that the computation of the loss itself is a significant technical contribution in its own right. To our knowledge, our attack is the only one which is able to elicit exact harmful strings (spanning many tokens) from a proprietary model, which is much more difficult than jailbreaking or eliciting harmful behaviors. The key to this ability is a procedure to quickly and inexpensively score the conditional probability of an entire sequence of generated tokens. Indeed one of the primary changes we make to the optimization algorithm (the introduction of the prompt buffer) is purely to help reduce the cost of the loss calculation.
## Defenses
We will add a section discussing defenses to the paper. In general, we expect defenses (e.g. input filtering or output filtering) to be effective against our attack. Since the attack is already quite difficult, raising the difficulty with defenses may make the attack infeasible unless new techniques are developed to further strengthen the attack.
## Presentation
We will carefully proofread the paper and revise the outline to include all major sections of the paper. | Rebuttal 1:
Rebuttal: We would like to thank all of our reviewers for their insightful comments. In the general rebuttal we would like to elaborate a bit on the harmful string attack.
## Harmful string attacks
In our language modeling results, we focus on the harmful string attack. The objective of this attack is to get the model to output a specific string exactly (including punctuation and capitalization, for example). This attack is separate from jailbreaking or elicitation of harmful behaviors in that the desired outputs are highly specific. Empirically, this makes the attack much more difficult.
At first glance the harmful string attack may appear contrived, but it is significant for several reasons. First, it's a very straightforward test of model alignment. If a model creator's desire is “don't say X”, making the model say exactly X is a clear violation of those preferences. Second, it is easy to evaluate compared to jailbreaking which often must be manually evaluated by humans. Finally, exact strings can do harm in unique ways. For example a model supporting tools may be fooled into invoking the tool in a specific way which is harmful to the user. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Instructor-inspired Machine Learning for Robust Molecular Property Prediction | Accept (poster) | Summary: This paper proposes a framework InstructMol for utilizing unlabeled data to help molecular property prediction on out-of-distribution (OOD) domains. The framework combines (1) a molecular model *f* that predicts (pseudo)labels with (2) a binary classifier *g* as instructor that evaluates the probability of labels being pseudo and reweighs *f*’s loss. In the experiments, it is compared to several self-supervised learning (SSL) baselines in predictive accuracy and OOD generalization. The experiments also investigate the effects of unlabeled data size and the instructor model’s behavior as an uncertainty estimator.
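The core mechanism this summary describes — the instructor *g* estimating the probability that each (pseudo)label is genuine and reweighting the target model *f*'s loss — can be sketched minimally as follows. This is our own toy illustration, not the paper's actual objective; the function and variable names are assumptions.

```python
def instructor_weighted_loss(per_sample_losses, label_confidences):
    """Weight the target model f's per-sample losses by the instructor
    g's estimated probability that each (pseudo)label is genuine, so
    unreliable pseudo-labels contribute less to the training signal."""
    weighted = sum(c * l for c, l in zip(label_confidences, per_sample_losses))
    return weighted / sum(label_confidences)
```

Under this toy weighting, a label the instructor trusts (confidence near 1.0) keeps its full loss contribution, while a suspected fake pseudo-label (confidence near 0.0) is nearly ignored.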
Strengths: - Utilizing large, unlabeled, and potentially distribution-shifted data is relevant in chemical and materials sciences, and this paper presents an effective method for that.
- This paper is well-written, the figures are informative, and the experiments are comprehensive.
Weaknesses: - Sec. 2’s review on related works is comprehensive but could be better organized.
- The separation of pretraining and SSL is confusing: techniques like contrastive learning are often viewed as SSL, and some SSL algorithms work as pretraining.
- The mention of methods “jointly learning multiple tasks” (Line 171) seems more appropriate in the related works section than in the method section.
- The paper presents strong empirical results but not enough interpretations. Compared to SSL, what extra information does InstructMol utilize, or why does it extract information more effectively, that leads to better accuracy/generalizability? Discussing these could provide more insights to future model development.
Technical Quality: 4
Clarity: 4
Questions for Authors: - In Line 33, domain knowledge being biased is raised as a problem. (1) Could some “bias” be helpful, e.g., by providing correct inductive bias to the model? (2) I suggest citing relevant literature on the scenarios where bias becomes a problem.
- Line 78, “usually follow significantly different distributions” seems exaggerated. Is there justification?
- Line 97, could uncertainty quantification methods, e.g., Bayesian NN, SNGP, evidential DL, address the problem for regression tasks?
- Section titles of 5.3 and 5.4 (Visualization) are confusing.
- The meaning of “confidence score” is unclear: in the main text (Lines 156, 301), 1 indicates pseudo-label, while Fig. 5 seems the opposite.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Discussed in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **UbNR**,
Thank you for your comprehensive and insightful review of our paper. We appreciate your recognition of the strengths and contributions of our work, as well as your constructive feedback on areas for improvement. We are pleased that you found our framework effective in utilizing large, unlabeled, and potentially distribution-shifted data, which is indeed crucial in the fields of chemical and materials sciences. Your positive feedback on the clarity of our writing, the informativeness of the figures, and the comprehensiveness of our experiments is greatly encouraging.
(1) We acknowledge that the review in Section 2 could be better organized. We will restructure this section to more clearly differentiate between different categories of related works, including a more distinct separation of pretraining and SSL. We will also ensure that our discussions on techniques like **contrastive learning** are appropriately classified and explained within the broader context of SSL. Besides, we will move the discussion on **jointly learning multiple tasks** in Line 171 to the related work section according to your suggestion.
(2) We understand the need for a deeper discussion on why InstructMol performs better compared to existing SSL methods, particularly in terms of the extra information it utilizes or extracts. From our humble point of view, InstructMol introduces a **cooperative-yet-competitive learning** scheme to jointly boost the performance of both the main molecular prediction task and the companion label confidence prediction task. Specifically, InstructMol forges collaboration between both tasks by providing extra information to each other, *i.e.*, the main task provides prediction loss while the companion task provides label confidence measures. This is considered the major factor for the performance improvements.
(3) Regarding the point on bias, we agree that some correct inductive biases can indeed provide beneficial inductive biases to the model. For instance, [A, B] construct chemical element knowledge graphs to summarize microscopic associations between elements and facilitate contrastive learning efficiency. Nevertheless, bias can lead to issues sometimes. [C] mentions that the regular distance threshold (*e.g.*,4-5 Angstrom ) to determine the chemical bond between atoms and build the graph connectivity can result in suboptimal performance. We will elaborate on this aspect, providing citations to relevant literature and clarifying the context in which we consider bias problematic.
**[A]** Knowledge graph-enhanced molecular contrastive learning with functional prompt. Nature Machine Intelligence 2023.
**[B]** Molecular contrastive learning with chemical element knowledge graph. AAAI 2022.
**[C]** Discovering and Explaining the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions. IEEE TKDE 2024.
(4) The sensitivity of SSL approaches to distribution shifts between the labeled and unlabeled data has long been a crucial topic. Several prior studies [A,B,C] have demonstrated that SSL models suffer when the distribution of the unlabeled data differs significantly from the labeled data. [D] notice that the performance of SSL methods can degrade under distribution shifts and propose a method (FixMatch) to mitigate this issue by using strong data augmentation techniques. However, we value your comment and will tone down the language used (*usually* to *sometimes*) and provide a more nuanced discussion, supported by examples or references, to justify the statement about different distributions in molecular data.
**[A]** Realistic Evaluation of Deep Semi-Supervised Learning Algorithms. NeurIPS 2018.
**[B]** Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty. NeurIPS 2019.
**[C]** Unsupervised Representation Learning by Predicting Image Rotations. ICLR 2018.
**[D]** FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. NeurIPS, 2020.
(5) Your suggestion to explore uncertainty quantification methods like Bayesian Neural Networks (BNNs), SNGP, and evidential DL is absolutely valuable. For instance, BNNs [A] provide a probabilistic approach to neural networks by estimating the distribution over the network's weights rather than relying on a single set of weights. This probabilistic framework allows BNNs to quantify uncertainty in predictions, which is crucial for estimating the confidence of a regression model. Despite the great potential of BNNs in quantifying the uncertainty, they usually show inferior performance compared to DL methods. We look forward to future efforts in exploring BNNs and other promising mechanisms to take advantage of the abundant unlabeled molecular database.
**[A]** Uncertainty quantification of molecular property prediction with Bayesian neural networks. arXiv 2019.
(6) We have revised the titles of Sections 5.3 and 5.4 (**Discussion, Ablation, and Other Applications**) to more accurately reflect their content and avoid confusion. Additionally, we state that 0 and 1 indicate pseudo-labels and ground truths, respectively, and acknowledge a typo in Line 301. The correct version would be *...to discriminate true labels (confidence score $\rightarrow$ 1.0) and fake ones (confidence score $\rightarrow$ 0.0)*. As for Line 156, we have double-checked our analysis and corrected it to "...forces the target model $f$ to disregard this label, which is actually reliable". Thanks for your advice on terminology and section titles!
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' comprehensive response, which addresses my concerns well. I am keeping my scores for this paper high at this stage.
---
Reply to Comment 1.1.1:
Title: Thanks for Feedback
Comment: Dear Reviewer UbNR,
We are delighted that you have found our rebuttal feedback valuable and maintained your positive score. We greatly appreciate your recognition of our work and would respectfully inquire if there might be any additional opportunities for us to improve our manuscript. Once again, thanks for your constructive feedback, and we would eagerly welcome any further guidance at your convenience!
Best regards,
Authors | Summary: The paper introduces InstructMol, an innovative learning framework designed to address the challenge of data sparsity in chemical and biological sciences by leveraging large-scale unlabeled data through reliable pseudo-labeling. Unlike traditional methods that rely on transferring knowledge between domains, InstructMol operates within a single domain, eliminating potential discrepancies between pretraining and fine-tuning stages. The authors demonstrate the algorithm's high accuracy using various real-world molecular datasets and out-of-distribution (OOD) benchmarks, demonstrating its effectiveness in enhancing machine learning applications in biochemical research.
Strengths: - Well-written and easy to follow
- InstructMol effectively addresses the data scarcity issue in biochemical data by leveraging large-scale unlabeled data without the need for domain transfer.
- Extensive experiments are conducted to demonstrate the efficacy of InstructMol.
Weaknesses: - The paper does not provide a detailed analysis of the computational complexity or resource requirements of the InstructMol algorithm.
- While the paper showcases the superior performance of InstructMol, it does not sufficiently address potential overfitting issues that may arise due to the iterative use of pseudo-labels.
- Although the paper compares InstructMol with several baseline methods, it does not include a thorough comparison with some of the latest advancements in semi-supervised learning and domain adaptation techniques.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Due to the limited discussion on computational complexity, it is difficult to assess the practicality and scalability of the approach for very large datasets or in resource-constrained environments.
- It was mentioned that determining 'k' in InstructMol is important; please show the experimental results and discussion related to this.
- It would be beneficial to have an experiment to determine whether using a poor model for model $f$ can still result in performance compensation due to the confidence score, or if it leads to a decline in performance.
- Why do RMSEs in Figure 3 become worse as the training data increases?
- What model and data were used for training on "Real-world Drug Discovery"?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed limitations in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **pmns**,
Thank you for your detailed review and insightful feedback on our paper, "InstructMol." We appreciate your recognition of the strengths, particularly in addressing data scarcity in biochemical research and the clarity of our presentation. We are glad that you found the paper well-written and that our approach effectively addresses data sparsity by leveraging large-scale unlabeled data. Your positive remarks on the comprehensiveness of our experiments are encouraging.
(1) We recognize the importance of addressing the practical scalability of InstructMol. The computational cost of the algorithm primarily involves the instructor model and the iterative pseudo-labeling process. Since pseudo-labeling is applied every $k$ epochs, the theoretical complexity of InstructMol is approximately $(1 + 1/k)$ times that of a standard semi-supervised learning (SSL) approach, which trains only the target model. However, in practical implementation, we have observed that InstructMol converges significantly faster than existing SSL methods. For example, in the classification tasks reported in Table 1, the UPS [A] method typically requires around 100 epochs for convergence, whereas InstructMol completes training in approximately 10-20 epochs. This accelerated convergence offers a clearer perspective on the practical implications of using InstructMol on large-scale datasets or in environments with limited computational resources.
**[A]** In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning. ICLR 2021.
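A back-of-the-envelope sketch of the cost argument above. The epoch counts are the approximate figures quoted in this rebuttal, and the function name is illustrative, not from the authors' code.

```python
def relative_cost_per_epoch(k):
    """Per-epoch cost of InstructMol relative to a standard SSL run that
    trains only the target model: the instructor adds roughly one extra
    model pass every k epochs."""
    return 1.0 + 1.0 / k

# With k = 10, each epoch costs ~1.1x a plain SSL epoch. If InstructMol
# converges in ~15 epochs while UPS needs ~100, total compute stays far
# lower despite the per-epoch overhead.
instructmol_total = 15 * relative_cost_per_epoch(10)  # epoch-equivalents
ups_total = 100 * 1.0                                 # epoch-equivalents
```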
(2) The parameter $k$ plays a crucial role in the InstructMol framework. We agree with your point that it is necessary to present experimental results exploring the impact of different values of $k$ on the model's performance. The table below shows the influence of different $k$ strategies, where **na** indicates a loss explosion without convergence and the number in brackets is the number of epochs to convergence. We find that updating too frequently makes the training procedure volatile, resulting in training failure, while a large $k$ significantly increases the training complexity. In contrast, our adaptive decay strategy (ADS) achieves competitive performance while maintaining a fast training speed.
| k | BBBP | BACE | ClinTox |
|---|---|---| ---|
| 1 | na (--) | 77.9 (9) | na (--) |
| 10 | 70.7 (37) | 83.1 (25) | 86.5 (42)|
| ADS | 70.5 (13) | 83.3 (11) | 86.2 (18) |
(3) The concern about using a suboptimal model and its impact on performance is valid. This is also one of the major reasons that we conduct experiments in Table 1. To be specific, all backbone architectures, containing GIN [A]/GCN [B]/GAT [C], are simple and "old-school" graph-based neural architectures, which were invented many years ago. Their performance is also far worse than state-of-the-art baselines such as Graphormer [D] and GPS [E]. However, our experimental results demonstrate that even when the base model is not optimal, the confidence scores generated by the instructor model can also positively affect the final performance. This analysis helps in understanding the robustness of InstructMol to variations in model quality.
**[A]** How Powerful are Graph Neural Networks? ICLR 2019.
**[B]** Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.
**[C]** Graph Attention Networks. ICLR 2018.
**[D]** Do Transformers Really Perform Bad for Graph Representation? NeurIPS 2021.
**[E]** Recipe for a general, powerful, scalable graph transformer. NeurIPS 2022.
(4) The observation that RMSEs worsen as training data increases may suggest issues such as noise in the additional data or potential overfitting. Importantly, adding more training data does not automatically enhance a model's generalization to the test set. If the distribution of the additional training samples significantly differs from that of the test domain, this can lead to an out-of-distribution (OOD) transfer problem, resulting in poorer performance on the test set. However, Figure 3 demonstrates that our InstructMol framework can mitigate the distributional gap introduced by new data, thereby enhancing the model's robustness.
(5) Thanks for your question about the real-world drug discovery application. We employ the CHEMBL214_Ki dataset from the ACE benchmark [A] and GEM [B] as the base architecture, which shows extraordinary results in molecular property prediction.
**[A]** Exposing the limitations of molecular machine learning with activity cliffs. JCIM 2022.
**[B]** Geometry-enhanced molecular representation learning for property prediction. Nature 2022.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' time and effort. While they have addressed most of my concerns, some aspects remain unclear.
- In the experiments related to the proposed $k$ in the rebuttal, there are still questions about the effectiveness of the adaptive decay strategy.
- I agree that adding training data can introduce more noise. However, I still have concerns that this may indicate a potential weakness in the model's robustness to noise. | Summary: This paper targets the problem of label-scarcity in the domain of molecular property prediction. It can be seen as an improved version of proxy labeling. It utilizes a separate model that measures pseudo-labels’ reliability and helps the target model leverage large-scale unlabeled data. This method applies to both classification and regression tasks. The authors run numerous experiments on predicting molecular properties, OOD generalization, and combination with pre-training models.
Strengths: The target problem is both timely and important, with a well-justified motivation. The proposed method alleviates some issues by utilizing an instructor model to predict confidence scores.
Overall, the method is clearly presented, and adequate experiments are conducted and documented.
Weaknesses: The improvement in model performance is relatively weak, with large standard deviations, and the method is compute-demanding due to the separate model and the iterative procedure. Additionally, some technical details and experimental results are unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: * In Line 201-202, how are the average increase in AUC-ROC and the average decrease in MSE defined and calculated? Why are there only three numbers for the six classification tasks?
* I assume the architecture of the InstructMol model is not necessarily the same as the target molecular model. The article does not clearly explain how the InstructMol model is constructed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors briefly touched upon the necessity of developing a self-supervised learning algorithm better aligned with InstructMol than existing methodologies. Could the authors briefly elaborate on the extra compute incurred by the instructor model?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **nboj**,
Thank you for your thoughtful review and detailed feedback on our paper. We appreciate your recognition of the significance of addressing label scarcity in molecular property prediction and the contributions of our proposed method. We are glad that you found our work timely, important, and well-motivated. Your positive remarks about the clarity of the presentation and the comprehensiveness of our experiments are encouraging. Below, we respond to the key points you raised:
(1) We acknowledge that while our method shows improvements, the performance gains are sometimes accompanied by large standard deviations. **This variability can arise from the inherent complexity of molecular data and the challenges associated with out-of-distribution (OOD) generalization.**
(2) We apologize for any confusion caused by numerical issues, and thank you for your detailed question. Here, we report the average increase in ROC-AUC over the six classification tasks for **three backbone architectures**, *i.e.*, GIN, GCN, and GAT, instead of the specific improvements for these six tasks. Therefore, there are only three numbers in Lines 201-202. We will revise this section to ensure clarity and provide a complete breakdown of the results for all tasks.
(3) You are correct that **the architecture of the InstructMol model is not necessarily the same as the target molecular model**. InstructMol consists of two components: the main model for molecular property prediction and a separate instructor model that predicts the reliability of pseudo-labels. *The choice of architecture for each component can vary depending on the specific task and dataset.* In our experimental implementation, for simplicity, we directly copy and adopt the same architecture of the instructor as the target model. We will expand the discussion in the Appendix to explain the construction of the InstructMol model as you wish.
(4) Regarding the development of a self-supervised learning algorithm better aligned with InstructMol, we recognize that this is an area ripe for further research. As for the additional computational cost incurred by the instructor model, it includes not only the training of an additional model but also the iterative process of pseudo-label assignment. Therefore, the entire computational expense would be approximately $(1 + 1/k)$ times ($k$ is the update frequency) that of conventional SSL methods, which rely on a single target model for molecular property prediction. However, *this computational burden can be reduced and the training speed significantly accelerated if we select a lightweight instructor architecture (e.g., shallow layers and a smaller hidden dimension with a much smaller model size).* We will include a more detailed discussion of this computational cost, along with potential optimizations or alternatives that could mitigate it, in the Limitations part.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. Though the calculation of the average increase in ROC-AUC is still not clear to me (e.g., compared with which baseline model), hopefully they will clarify further in their revisions. I will keep my score as it is. | Summary: The authors develop a method, called InstructMol, for adding pseudo-labels to any training task by including an "instructor" that is trained to discriminate real labels from pseudo-labels, and whose uncertainty is used to modulate the training loss for the primary predictors. The authors show that adding this instructor model improves overall performance over similar methods across a number of property prediction tasks. The authors also show that pretraining delivers state of the art results on the MoleculeNet benchmarks.
Strengths: - novel general pseudo-labeling method
- novel loss for extracting signal from all pseudo-labels, even when the estimated uncertainty is high
- top results amongst comparative models, especially the results in Table 1
Weaknesses: - Using GIN, GAT and GCN for the results in Table 1, but then GEM for those in Table 3, makes it feel like the results are cherry-picked, especially since the GEM+InstructMol results in Table 3 are comparable to and within the error of some of the other methods.
- It is unclear how the results of Figure 4 were obtained. The reader assumes these are comparable to the GEM+InstructMol of Table 3, but this should be better explained.
- The distance of the 9 molecules examined in the real-world drug discovery section to the training set should be examined, even if the fact that these were patentable suggests they are dissimilar from known molecules. Also, there are likely many more examples like this that could have provided a more prospective evaluation, and showing only one makes the reader again wonder if the example is cherry-picked.
- It would have been useful to see similar plots to those in Appendix C for other uncertainty estimation methods as a way of showing that InstructMol learns better separation between real and pseudo labels.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I can't tell from the paper if the benefit that comes from InstructMol is due simply to training on more data or for more iterations, simply due to having more labeled data. In which of the experiments is this possibility controlled for? Is it Table 1 where other pseudo-label methods are compared to? If such a control exists, will you please make this explicit in the paper? If not, this seems a critical experiment to run.
- Why not use TDC benchmarks instead of MoleculeNet? While the datasets are obviously similar, TDC has done some additional work to clean them, provide reasonable splits, etc.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **dbD2**,
Thank you for your detailed review and thoughtful comments on our paper. We appreciate your recognition of the strengths, particularly the novelty of our pseudo-labeling method, the innovative loss function for utilizing all pseudo-labels, and the strong performance shown in Table 1. We are pleased that you found our pseudo-labeling approach and loss function novel and effective. Your acknowledgment of the competitive results we achieved, especially in Table 1, is encouraging. Below, we address the concerns and questions you raised:
(1) Your question about the potential benefits from training on more data or iterations is important, and it has also been raised by Reviewer UbNR. First, in response to your query regarding Table 1, we indeed compared InstructMol with other pseudo-labeling methods under controlled conditions to isolate the effects of our approach from simply having more labeled data. For example, UPS [A] introduces an uncertainty-aware pseudo-label selection framework, which enhances pseudo-labeling accuracy by significantly reducing noise during the training process.
Second, by ruling out the benefit of merely having more labeled data, we attribute the success of InstructMol to two key factors. (1) The target model assigns more accurate and reliable pseudo-labels with the assistance of the instructor. (2) The instructor plays a crucial role in evaluating the reliability of these pseudo-labels and influences their contribution to the loss calculation, as detailed in Equation 2.
InstructMol implements a cooperative-yet-competitive learning scheme, which synergistically enhances both the main molecular prediction task and the companion label confidence prediction task. This approach fosters collaboration between the two tasks by exchanging crucial information: the main task contributes prediction loss data, while the companion task provides label confidence measures. This interaction is a significant factor in the observed performance improvements of our method.
**[A]** In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
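The mechanism described in points (1)-(2) — the instructor's confidence scores modulating each pseudo-label's contribution to the loss — can be sketched as a generic confidence-weighted cross-entropy. This is an assumed form for illustration only; the paper's exact Equation 2 is not reproduced here, and all names are hypothetical.

```python
import numpy as np

def confidence_weighted_loss(pred_probs, pseudo_labels, confidences):
    """Cross-entropy over pseudo-labeled samples, with each sample's term
    scaled by the instructor's confidence in its pseudo-label.

    pred_probs:    (n, c) predicted class probabilities from the target model
    pseudo_labels: (n,)   pseudo-label indices assigned to unlabeled samples
    confidences:   (n,)   instructor-predicted reliability scores in [0, 1]
    """
    ce = -np.log(pred_probs[np.arange(len(pseudo_labels)), pseudo_labels])
    return float(np.sum(confidences * ce) / np.sum(confidences))
```

Under this form, low-confidence pseudo-labels contribute little gradient signal, which is the "reliability modulation" role attributed to the instructor above.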
We will explicitly clarify this in the paper, detailing the control measures we used to ensure a fair comparison. If this aspect has not been fully addressed, we acknowledge the need for additional experiments and will work to include such controls or explicitly state any limitations.
(2) We selected MoleculeNet for its extensive range of datasets and well-established benchmarks, making it a widely accepted standard for evaluating molecular property prediction models. Numerous prior studies, including GROVER [A], 3D-Informax [B], GraphMVP [C], MolCLR [D], Uni-Mol [E], GEM [F], and GPT-GNN [G], have utilized MoleculeNet, which facilitates direct comparisons of our results with these methods. Furthermore, we used the same scaffold splitting strategy as GEM and Uni-Mol, which is recognized as a challenging and biologically relevant approach, ensuring a robust evaluation of our model's performance.
Beyond the standard MoleculeNet benchmarks, we also evaluated InstructMol using the **Graph Out-of-Distribution (GOOD) benchmark**, which systematically assesses the generalization capabilities of graph-based models. Additionally, we explored the predictive strength of InstructMol on bioactivity using the CHEMBL214_Ki dataset from the **ACE benchmark** [H], providing further insights into its practical applicability in real-world drug discovery scenarios.
In summary, our evaluation strategy aims to be both comprehensive and thorough, covering a wide range of datasets and challenging scenarios. However, we acknowledge the value of the Therapeutic Data Commons (TDC) benchmarks, particularly their enhanced data cleaning and standardized splits. We appreciate your suggestion and will consider incorporating TDC benchmarks in future research to further validate and extend our findings. Thank you for your constructive feedback.
**[A]** Self-supervised graph transformer on large-scale molecular data. NeurIPS 2020.
**[B]** 3d infomax improves gnns for molecular property prediction. ICML 2022.
**[C]** Pre-training molecular graph representation with 3d geometry. ICLR 2022.
**[D]** Molecular contrastive learning of representations via graph neural networks. Nature 2022.
**[E]** Uni-mol: A universal 3d molecular representation learning framework. NeurIPS 2022.
**[F]** Geometry-enhanced molecular representation learning for property prediction. Nature 2022.
**[G]** Strategies for pre-training graph neural networks. ICLR 2020.
**[H]** Exposing the limitations of molecular machine learning with activity cliffs. JCIM 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and attempting to address my questions.
Re: question 1 - are you saying that UPS is trained on the same amount of data, or for the same number of iterations, as InstructMol, and so this comparison is the control for that?
Why were none of the weaknesses I pointed out addressed?
I am currently keeping my score as is. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors present "InstructMol," which does not require transferring knowledge between multiple domains, avoiding the potential gap between the pretraining and fine-tuning stages, and demonstrate it on real-world molecular datasets and out-of-distribution (OOD) benchmarks.
Strengths: The Instructive Learning Framework helps the model generalize better on the out-of-distribution molecular property prediction task.
Weaknesses: The paper mostly focuses on GNNs as the backbone; it would be worth discussing more about transformer-based models trained on SMILES representations.
Technical Quality: 3
Clarity: 3
Questions for Authors: How would the Instructive Learning Framework work on a transformer-based model trained on SMILES representations?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper mostly focuses on GNNs as the backbone; it would be worth discussing more about transformer-based models trained on SMILES representations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **73Xa**,
Thank you for your detailed feedback on our InstructMol. We appreciate your insights and suggestions, which are invaluable for refining our work. We are pleased to hear that you found our Instructive Learning Framework effective for improving generalization in OOD molecular property prediction tasks. This was a primary goal of our work, and we're glad it was recognized.
You noted that our paper primarily focuses on GNNs and suggested that it would be beneficial to discuss the application of our framework with transformer-based models trained on SMILES representations. This is an excellent point. While we concentrated on GNNs due to their strong performance in molecular graph representations, we acknowledge that transformers have shown promising results in modeling sequential data like SMILES.
To address this, we are currently exploring how our InstructMol can be adapted for transformer-based architectures. To be specific, we leverage an open source Transformer-based algorithm -- SMILES Transformer (ST) [A] as the backbone and evaluate the impact of our instructive learning framework on 10 datasets from MoleculeNet, which is also adopted in its original paper but *with a different splitting strategy*. Note that we adopt a 250K unlabeled dataset for SSL. Preliminary results (see Table below) suggest that the framework's principles are generalizable and can indeed enhance the performance of transformer models on SMILES data. We plan to include a discussion of these findings in the final version of the paper, detailing how the framework's principles can be adapted and optimized for different model architectures.
| Dataset | ESOL $\downarrow$ | FrSlv $\downarrow$ | Lipo $\downarrow$ | MUV $\uparrow$ | HIV $\uparrow$ | BACE $\uparrow$ | BBBP $\uparrow$ | Tox $21 \uparrow$ | Sider $\uparrow$ | ClinTox $\uparrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ST | 1.144 | 2.246 | 1.169 | 0.009 | 0.683 | 0.719 | 0.900 | 0.706 | 0.559 | 0.963 |
| ST + InstructMol | **1.013** | **2.089** | **1.058**| **0.025**| **0.704**| **0.733**| **0.908**| **0.742**| **0.570**| **0.971**|
---------------------------------------------
[A] SMILES Transformer: Pre-trained Molecular Fingerprint for Low Data Drug Discovery. 2019
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for checking the results on the SMILES Transformer! I am pleased to see that it helps not only on GNNs. I am keeping my score as is.
---
Reply to Comment 1.1.1:
Title: Thanks for Feedback
Comment: Dear Reviewer 73Xa,
Thank you for your positive feedback and your continued recommendation for acceptance. We are delighted to hear that our additional experiments have addressed your concerns. Thank you once again for your valuable time and insights!
Best regards,
Authors | null | null | null | null | null | null |
Agent Planning with World Knowledge Model | Accept (poster) | Summary: This paper presents a parametric world knowledge model designed to enhance agent planning. The model synthesizes knowledge from expert and sampled trajectories for training purposes. It incorporates prior task knowledge for global planning and dynamic state knowledge for local planning. The implementation shows improved performance over several robust baselines using open-source LLMs such as Mistral-7B and Gemma-7B. Incorporating a world knowledge model into LLM-based agents for planning purposes is a novel approach that also helps mitigate common issues like hallucinations and invalid actions in language agents.
Strengths: The paper is overall well-written and the method is novel. It also shows promising results using only 7B models that could barely perform planning with basic methods. It's also quite reasonable to introduce a world knowledge model to enhance task-specific knowledge during planning.
Weaknesses: Here are some additional points that I believe could be improved:
1. Clarity on hyperparameters and settings: Is the WKM tuned after each step of action or after finishing an entire trial? Also, what is the split for seen/unseen tasks? Some tasks are very similar in ALFWorld and ScienceWorld. What is the structure of the retriever? What is the number of generations for each action?
2. Is WKM training more important for common-sense intensive tasks like ScienceWorld? Have you previously tested on pure planning tasks, e.g. Blocksworld or Game 24? Also, does the ratio $\gamma$ reflect whether knowledge or planning is more important for a task?
3. What is the additional computation overhead for inference using a WKM? Also, beam search is necessary to obtain more than one action; what is the computation time compared to ReAct?
4. What is the difference between state knowledge and thoughts/reflection? It seems to also be interleaved between actions. Does the content of state knowledge greatly affect the performance? e.g. using the WKM to enhance ReAct thoughts. The state-knowledge examples in Figure 8 seem more similar to thoughts rather than world knowledge, i.e. giving general commonsense about the environment.
5. Can WKM and agent training be merged together? i.e. using a single model to learn both knowledge and trajectory and update the loss together.
6. Can this method be applied to an online setting? Where knowledge base is directly updated after each step of action so that later actions could use results from exploration.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to weaknes.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful for your valuable time and insightful feedback. Below are our detailed responses to your concerns.
**Q1: Clarity on hyperparameters and settings**
We sincerely apologize for any confusion caused by the details not clarified in the paper.
(1) **As shown in Eqn 8-10**, our WKM is fine-tuned over the entire trajectory, where the loss calculation only involves knowledge, and other irrelevant tokens will be masked.
(2) The seen and unseen tasks are predefined within the dataset. For the unseen tasks, the model has not previously encountered the task types in the training set, requiring the application of more extensive generalization capabilities to address them.
(3) Our retrieval is primarily based on cosine similarity. The encoder is WKM's own embedding layer (for example, the embedding layer of Mistral-7B), and similarity is calculated based on the cosine similarity between sentences.
(4) At each step, we only generate an action once. Since we use open-source models, we can directly obtain the probability distribution of each action's first token from the last layer of the agent model at each step. **As we mentioned in lines 162-163**, we normalize the probability distribution of the action tokens with a softmax function to get the final action distribution $P_{\rm agent}(\mathcal{A}_u)$ from the agent model.
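A minimal sketch of the two mechanisms in points (3)-(4) above: cosine-similarity retrieval over the state-knowledge base (using sentence embeddings, which the rebuttal says come from the WKM's own embedding layer) and softmax normalization of candidate actions' first-token logits to obtain $P_{\rm agent}(\mathcal{A}_u)$. All function and variable names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def retrieve_state_knowledge(query_emb, kb_embs, top_k=1):
    """Return indices of the knowledge-base entries most similar to the
    query, ranked by cosine similarity between sentence embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    kb = kb_embs / np.linalg.norm(kb_embs, axis=1, keepdims=True)
    return np.argsort(-(kb @ q))[:top_k]

def agent_action_distribution(first_token_logits):
    """Softmax-normalize the first-token logits of the candidate actions
    into a probability distribution over actions."""
    z = first_token_logits - np.max(first_token_logits)  # numerical stability
    e = np.exp(z)
    return e / e.sum()
```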
**Q2: Is WKM training more important for commonsense intensive tasks like Sciworld?**
In this paper, we focus more on embodied planning tasks that interact with the environment, which require world knowledge to be resolved. **Tasks such as Blocksworld and Game24 are, in our understanding, more like reasoning tasks**, where the model only needs to reason based on the problem without the need for the environment.
Regarding the ratio $\gamma$ reflecting whether knowledge or planning is more important for a task, your understanding is very insightful. We have also found that for tasks in SciWorld that require a large amount of domain knowledge, the value of $\gamma$ is smaller (knowledge occupies a larger proportion).
**Q3: The additional computation overhead for inferencing using a WKM**
**In fact in lines 167-168, we have discussed the issue of computation overhead.** When accounting for the inference time of the knowledge model and the retrieval time, our total inference time is 2.5 times that of a pure agent model fine-tuned with expert trajectories. In Q1, we explained that we do not employ beam search, and because ReAct is a prompt-based method, which tends to have poorer performance and thus results in significantly longer trajectory lengths, the ratio of our method's inference time compared to ReAct will be even smaller.
**Q4: The difference between state knowledge and thoughts/reflection**
The original intention behind the design of state knowledge is due to our observation that a significant issue in agent planning arises when the trajectory is lengthy, causing the model to experience hallucinations due to difficulties in processing long contexts. Thus, state knowledge acts more like an "outside" prompter, providing the agent with dynamic knowledge of the current environment rather than prior commonsense knowledge, to make it focus more on the current action. In contrast, the agent's thoughts resemble an "inside" judger of the current state, which can easily be influenced by long contexts and deviate from the correct path.
**Q5: Can WKM and agent training be merged together?**
We train WKM and the agent model separately for two reasons: 1) Numerous studies have indicated that the division of labor and collaboration among models can lead to the emergence of group intelligence, which achieves better results than individual intelligence; 2) Decoupling the WKM from the agent model allows for more flexible expansion, such as guiding a stronger agent model with a smaller WKM (Table 4), and realizing a unified WKM (Figure 5). To fully show the advantages of separate training, we further conduct experiments where a single model learns planning and knowledge at the same time:
| Mistral-7B | ALFWorld seen | ALFWorld unseen | WebShop | SciWorld seen | SciWorld unseen |
| ------------ | ------------- | --------------- | ------- | ------------- | --------------- |
| single-model | 65.46 | 70.71 | 63.70 | 58.51 | 51.78 |
| WKM | 73.57 | 76.87 | 65.48 | 62.12 | 53.62 |
The experiment shows that merging the WKM and the agent model results in worse performance, which also supports our viewpoint.
**Q6: Can this method be applied to an online setting?**
In fact, our initial plan was to create an online version of WKM, but we encountered the following challenges:
1) **Dynamic adjustment of $\gamma$**. In the early stages, when the state knowledge base contains too few samples to cover common scenarios, we obviously cannot trust its probabilities, so we need to set $\gamma$ to a high value. Conversely, when the state knowledge base is sufficiently populated, $\gamma$ should be set to a relatively lower value. Thus, $\gamma$ should gradually decrease and eventually stabilize over the process, but controlling the speed of decrease and the time to reach stability is challenging.
2) **Parameter updating of the WKM**. How to adjust the parameters of WKM when new knowledge is acquired involves the frequency of parameter updates. An excessively high frequency could lead to extremely low efficiency, while an insufficient frequency may not meet the needs of the agent model.
We are also conducting further research on the online version of WKM and hope to make significant progress in the future.
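One toy way to realize the "gradually decrease and eventually stabilize" behavior described in challenge 1) is an exponential anneal with a floor. This is purely illustrative: the schedule and all constants are placeholders, not values the authors propose.

```python
import math

def gamma_schedule(step, gamma_max=0.9, gamma_min=0.3, decay=0.01):
    """Anneal gamma from gamma_max toward gamma_min as the state-knowledge
    base fills up; decay controls how fast trust shifts from the agent's
    own distribution to the knowledge base. Constants are placeholders."""
    return gamma_min + (gamma_max - gamma_min) * math.exp(-decay * step)
```

The open questions raised above map directly onto this sketch: choosing `decay` is the "speed of decrease" problem, and choosing `gamma_min` is the "where to stabilize" problem.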
Thank you again for your constructive suggestions!
**Please let us know if you have any further questions. If you find that our response addresses some of your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration.**
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Your rebuttal has clearly addressed my concerns and helped me gain a better understanding of world knowledge training. I have adjusted the score accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks for your valuable feedback!
Comment: Thank you for your reply and the recognition of our work. Your feedback is very important for us to further improve our paper. Thank you once again. | Summary: This work is concerned with LLMs planning abilities in agent datasets. Instead of only fine-tuning the agent model on expert trajectories, they add “task knowledge” information. This information is created by comparing reject trajectories and expert trajectories, following previous work (NAT; Wang et al., 2024). In addition, the agent model is prompted to summarize the state, which helps avoid generating invalid actions.
Strengths: * The idea of explicating the preference trajectories data (NAT; Wang et al., 2024) into task knowledge is interesting (subsection 3.1 and case study in Appendix F).
* Strong results when combining both task knowledge and state knowledge.
* Weak-guide-strong analysis (table 4) interestingly shows the benefits of explicating the trajectories preference knowledge.
Weaknesses: - When removing the state knowledge (figure 3), it seems that this approach does not outperform NAT, which uses SFT on the same trajectories preference data.
- Missing qualitative analysis of the generated task knowledge.
Technical Quality: 3
Clarity: 3
Questions for Authors: **Suggestions**
* Case study in Appendix F is very important to understand the paper, providing intuition and a qualitative analysis which explains why preference trajectories data is useful. I would put it in the main paper to guide the reader.
**Questions**
* Line 135 mentions that the WKM and the agent are the same backbone model. What about the LM that generates the rejected trajectories, is it the same?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful for your valuable time and insightful feedback. Below are our detailed responses to your concerns.
**Q1: When removing the state knowledge (figure 3), it seems that this approach does not outperform NAT, which uses SFT on the same trajectory preference data.**
We greatly appreciate your meticulous observation. **However, in fact, even without the state knowledge, our method still outperforms NAT.** We apologize that the distance between Table 1 and Figure 3 in the paper may have misled your judgment; we will improve this in our revision. Here we list the specific values in the table below for your convenience:
| Mistral-7B | ALFWorld seen | ALFWorld unseen | WebShop | SciWorld seen | SciWorld unseen |
| ------------- | ------------- | --------------- | ------- | ------------- | --------------- |
| NAT | 64.43 | 68.96 | 61.01 | 57.12 | 50.79 |
| WKM w/o state | 69.29 | 75.37 | 63.68 | 60.81 | 53.42 |
| WKM w/o task | 67.86 | 70.67 | 62.44 | 55.04 | 51.52 |
| WKM | 73.57 | 76.87 | 65.48 | 62.12 | 53.62 |
**Q2: Qualitative analysis and Appendix F**
We apologize that, due to page limitations and the lengthy trajectories of the case in Figure 9, we did not initially include Appendix F in the main paper. **We will incorporate the textual analysis in Appendix F into the main paper in the upcoming revision.** In fact, on the right side of Figure 2, we also display part of a case's steps, from which you can clearly see the role and effectiveness of our task and state knowledge.
**Q3: Line 135 mentions that the WKM and the agent are the same backbone model. What about the LM that generates the rejected trajectories, is it the same?**
Yes, you are right. They all share the same backbone model but with different LoRAs. All the training involved in our paper is based on LoRA, which allows the knowledge model and the agent model to be plug-and-play, significantly saving computational power while making the model switch and extend more flexibly and efficiently.
Thank you again for your constructive suggestions!
**Please let us know if you have any further questions, as we are happy to continue the discussion. If you find that our response addresses your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration.**
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I appreciate the authors' attention to my concerns and their efforts to clarify any misunderstandings.
Concerning my initial issue, I believe the confusion arose because Figure 3 was labeled "w/ state" instead of "w/o task," which led me to misinterpret the results. The labels in the table provided by the authors offer a clearer perspective in my opinion, and I now recognize that their approach without state knowledge indeed outperforms NAT.
I have adjusted my score to reflect my revised understanding.
---
Reply to Comment 1.1.1:
Title: Thank you for your timely feedback!
Comment: We are very delighted that our response can address your concerns, and we sincerely appreciate your recognition of our work!
We have indeed realized that in the ablation study, the "w/o" may convey a clearer message than "w/", and we will make a corresponding revision to further enhance the readability of our paper.
Thank you again for your prompt feedback! | Summary: The paper presents a parametric World Knowledge Model (WKM) to enhance agent planning by providing both global prior task knowledge and local dynamic state knowledge.
Traditional LLMs often perform trial-and-error actions and generate hallucinatory actions due to their limited understanding of the physical world.
By imitating human cognitive processes, the WKM synthesizes knowledge from expert and sampled trajectories to guide agents more effectively.
Experimental results with state-of-the-art LLMs (Mistral-7B, Gemma-7B, and Llama-3-8B) show that this method improves performance on complex real-world tasks and reduces blind trial-and-error and hallucinatory actions.
Strengths: WKM mimics human mental models, incorporating both global prior task knowledge and local dynamic state knowledge, and offers a novel and effective solution to the limitations of traditional LLMs in understanding the physical world. The human-guided action search would eliminate a lot of inappropriate hallucinations.
The method is rigorously tested on various complex real-world simulated datasets using Mistral-7B, Gemma-7B, and Llama-3-8B. The superior performance compared to strong baselines demonstrates the practical effectiveness and robustness of the proposed WKM, providing solid evidence for its advantages. Even these "small language models" prove the effectiveness of WKM.
This simulated knowledge base helps in better guiding, planning and assisting local planning, significantly improving the agent's overall performance and understanding of tasks.
Weaknesses: The approach heavily relies on expert trajectories to synthesize both task and state knowledge. It takes a lot of effort to obtain such a database and pre-train/fine-tuning and off-line WKM.
WKM depends on the world dynamics as well as human demonstrations. In my opinion, the three "real-world simulated planning datasets" (ALFWorld, WebShop, and ScienceWorld) are not all real. ALFWorld and ScienceWorld are simulated environments with a discrete embodied mechanism. The methodology of WKM may not be easily adapted to problem-solving in real scenarios.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: no potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful for your valuable time and insightful feedback. Below are our detailed responses to your concerns.
**Q1: The approach heavily relies on expert trajectories to synthesize both task and state knowledge.**
In fact, most mainstream agent planning methods currently either rely on proprietary models like GPT-4 (e.g., AutoGuide) or use open-source models for incremental training on expert and explored trajectories via techniques such as SFT or DPO (e.g., ETO, NAT). These approaches do not consume fewer resources (whether monetary or computational) than our method. **Our method, without reliance on GPT-4, enables small-scale open-source models to autonomously synthesize knowledge and train into parameterized knowledge models.** This small-scale knowledge model can not only guide fine-tuning-based agents of the same scale (Table 1) but also enhance powerful proprietary models like GPT-3.5/4 (Table 4). Moreover, the multi-task unified WKM has demonstrated superior performance (Figure 5), providing insights into the path toward AGI in the future.
**Q2: The methodology of WKM may not be easily adapted to problem-solving in real scenarios.**
The current mainstream LLM-based agent planning benchmarks are all based on simulated environments of the real world. The three datasets we utilize—ALFWorld, WebShop, and SciWorld—are also commonly used in LLM-based agent planning. We understand the significance of applying the World Knowledge Model (WKM) to real-world scenarios, and we are actively working towards this, such as exploring a unified knowledge model that can be generalized to real-world situations. We hope to pair this with a generalizable unified agent model to achieve true AGI (Artificial General Intelligence). Although this path is still a long way off, our work, like the work of others, is striving towards this goal.
Thank you again for your constructive suggestions!
**Please let us know if you have any further questions, as we are happy to continue the discussion. If you find that our response addresses some of your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration.** | Summary: This paper introduces a parametric World Knowledge Model (WKM) to enhance agent planning by integrating both global task knowledge and dynamic state knowledge. The authors claim that their approach can mitigate issues like blind trial-and-error and hallucinated actions in large language model (LLM) agents. They demonstrate the effectiveness of WKM through experiments on three real-world simulated datasets (ALFWorld, WebShop, and ScienceWorld) using state-of-the-art LLMs, showing superior performance compared to strong baselines.
Strengths: 1. The paper is well-written and easy to follow.
2. The motivation for the methodology is clear and logically presented.
Weaknesses: My primary concerns with this paper are related to the methodology's implementation and the sufficiency of the experiments. Specific issues are detailed in the following questions.
Technical Quality: 2
Clarity: 2
Questions for Authors: Methodology:
1. The motivation for rejecting trajectories is not sufficiently justified. (1) Why not and What if directly derive task knowledge from expert trajectories? (2) Why assume that agent-generated trajectories are always inferior to the dataset trajectories? Could they not be better, and what would be the impact of this assumption? (3) Even if the above assumption holds, if agent-generated trajectories are always rejected, how can the authors claim, "Our purpose is to extract superior task knowledge that cannot be acquired solely through supervised fine-tuning on chosen trajectories, thus further effectively boosting the agent’s capabilities" (line 107-108)? (4) Ignoring the previous issues, in Line 104, the authors train an experienced agent to generate reject trajectories. However, trajectory generation requires an environment and an experienced agent. How can trajectories be generated without training the environment model? Is that done by interacting with the environment?
2. The definition of state knowledge is ambiguous. Lines 119-120 suggest that state knowledge is a local summarization of the policy function to instruct actions, but lines 123-124 define it as part of the MDP's state space. Besides, Figure 12's prompt also does not define state knowledge yet asks the LLM to generate it (How can an LLM generate knowledge without a definition?)
3. The knowledge model is trained to output task knowledge and state knowledge using data directly labeled by the LLM through prompts. Why not directly use these prompts to output task knowledge and state knowledge instead of retraining a model?
Experiments:
1. Can the paper "AutoGuide: Automated Generation and Selection of State-Aware Guidelines for Large Language Model Agents" be a baseline of this paper?
2. Can the authors compare results with gamma=1 and gamma=0? It is essential to understand which part of the output action has the most significant impact.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful for your valuable time and insightful feedback. Below are our detailed responses to your concerns.
**Q1: The motivation for rejecting trajectories is not sufficiently justified.**
In fact, regarding your concern about the rejected trajectories, **we have explained this in lines 113-117 of the paper**. We will consolidate that passage with lines 102-108 to make it easier for readers to follow. We sincerely apologize for any inconvenience this has caused you. Below is a further explanation addressing your concern:
(1) To intuitively demonstrate the effect of introducing rejected trajectories, we conduct experiments synthesizing task knowledge based solely on expert trajectories. The advantage of introducing rejected trajectories is very significant:
|Mistral-7B|ALFWorld seen|ALFWorld unseen|WebShop|SciWorld seen|SciWorld unseen|
|-----------------|-------------|---------------|-------|-------------|---------------|
|w/o rejected traj|67.19|71.57|63.97|55.49|48.38|
|WKM|73.57|76.87|65.48|62.12|53.62|
(2) The assumption is based on the premise that expert trajectories are manually labeled and always achieve the best quality and final reward. As we mentioned in lines 113-117, since expert trajectories are gold, their final reward $r(u,\tau_w)$ always satisfies $r(u,\tau_w)=1$. Therefore, the final reward $r(u,\tau_l)$ of agent-generated trajectories always satisfies $r(u,\tau_l) \leq r(u,\tau_w)$. If $r(u,\tau_l) < r(u,\tau_w)$, there is no doubt that the expert trajectories are better; if $r(u,\tau_l) = r(u,\tau_w)$, **since expert trajectories are gold with no excess planning steps**, they are shorter and more efficient. We want the agent model to learn from these trajectories how to plan more efficiently and avoid blind trial-and-error, so we also treat agent-generated trajectories as rejected.
(3) Our experienced agent has undergone SFT on the chosen (expert) trajectories. As we all know, training on a dataset doesn't mean that the model has fully learned all the knowledge in that dataset. Therefore, we let the experienced agent run through the training data again to generate explored trajectories, so that the agent can summarize knowledge that cannot be learned solely through SFT. This step is a little similar to DPO, but we achieve it through knowledge augmentation rather than directly converting it into a loss calculation between chosen and rejected trajectories.
(4) Yes, the generation of rejected trajectories involves direct interaction between the experienced agent model and the environment of the training set.
**Q2: The definition of state knowledge is ambiguous.**
Our state knowledge here is a natural language description of the current environment and the agent's state, so we define it as a part of the MDP's state space. As shown in Fig 12, we teach the agent to summarize state knowledge through few-shot examples, rather than zero-shot instruction. In the main paper Fig 2 and Appx F Fig 9, we provide some cases where you can see the specific appearance of state knowledge.
**Q3: Why not directly use prompts to output task and state knowledge instead of retraining a model?**
Firstly, obtaining task and state knowledge requires expert trajectories. Since we cannot obtain expert trajectories on the test set, it is hard to provide high-quality knowledge through prompts. Secondly, even if we use the knowledge obtained from the training set as few-shot prompts, training a model is evidently more generalizable than simply using prompts. In fact, **as one of our baselines, the knowledge for KnowAgent is provided through prompts, and our dataset-level knowledge analysis in Fig 4 also uses prompts to provide knowledge. WKM's performance is significantly better.**
To completely address your concern, we also conduct experiments using high-quality task and state knowledge summarized from the training set as few-shot prompts. The experimental results can once again prove that training a model performs better:
|Mistral-7B|ALFWorld seen|ALFWorld unseen|WebShop|SciWorld seen|SciWorld unseen|
|----------------|-------------|---------------|-------|-------------|---------------|
|prompt knowledge|65.14|67.40|61.03|56.36|45.27|
|WKM|73.57|76.87|65.48|62.12|53.62|
**Q4: AutoGuide as a baseline**
In fact, we greatly wanted to include AutoGuide as a baseline when we began our experiments. **However, at that time and even now, it doesn't have open-source code available.** We attempted to replicate it solely based on its paper, but some implementation details, including specific prompts, cannot be obtained from the paper alone. In fact, as a prompt-based baseline, it relies on the strong GPT-4, and using 7/8B models would significantly degrade its performance, making it far less effective than fine-tuning-based methods, let alone our WKM.
**Q5: Compare results with $\gamma=1$ and $\gamma=0$**
$\gamma=1$ is equivalent to removing state knowledge, and **our ablation experiment (Figure 3 w/ task) already includes this scenario**.
We further conduct experiments specifically for $\gamma=0$, comparing it with $\gamma=1$ and WKM:
|Mistral-7B|ALFWorld seen|ALFWorld unseen|WebShop|SciWorld seen|SciWorld unseen|
|----------|-------------|---------------|-------|-------------|---------------|
|$\gamma=0$|1.58|0.00|25.83|18.69|15.37|
|$\gamma=1$|69.29|75.37|63.68|60.81|53.42|
|WKM|73.57|76.87|65.48|62.12|53.62|
It can be observed that state knowledge primarily serves as a constraint to alleviate hallucinated actions for the agent model. **However, when we fully trust it ($\gamma=0$), its lack of generalization significantly harms the performance of the agent model.**
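The role of $\gamma$ described above can be sketched as a weighted combination of the agent model's action distribution and the probabilities retrieved from the state knowledge base. This is a minimal illustration assuming a simple linear interpolation; the function name and the dictionary-based interface are our own, not from the paper.

```python
def constrained_action_probs(p_agent, p_state_kb, gamma):
    """Hypothetical sketch: mix the agent model's action distribution
    with the state-knowledge-base distribution. gamma=1 trusts the
    agent alone; gamma=0 fully trusts the knowledge base."""
    actions = set(p_agent) | set(p_state_kb)
    mixed = {a: gamma * p_agent.get(a, 0.0) + (1 - gamma) * p_state_kb.get(a, 0.0)
             for a in actions}
    z = sum(mixed.values())  # renormalize so the result is a distribution
    return {a: p / z for a, p in mixed.items()}
```

Under this sketch, $\gamma=1$ reproduces the "w/ task" ablation (agent only) and $\gamma=0$ reproduces the knowledge-base-only setting whose poor generalization the table above demonstrates.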
Thank you again for your constructive suggestions!
**Please let us know if you have any further questions. If you find that our response addresses some of your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which addressed some of my concerns.
**Follow-up question for Q1:**
Thank you for the clarification. I have a follow-up question: Why does the method with task knowledge *only* perform better than the DPO-style baselines even in seen tasks? Since your expert trajectories are optimal, at least in the seen tasks, the fine-tuning method should be the better one as it only needs to memorize the given data.
**Follow-up question for Q3:**
I would like to clarify my question. In Equation (5), state knowledge is constructed using LLM by prompting with $\rho_{stateKnow}$ and $h_t$. However, in Line 157, state knowledge is generated from $\pi_\phi(\cdot|h_t)$, i.e., the World Knowledge Model. Why not use the same method as in Equation (5) directly in Line 157 as they share the same inputs?
**Additional Suggestions:**
After the author's clarification, I understand more about the methodology. However, I find the writing to be informal. Despite the trend towards more relaxed expressions, I believe that for a NeurIPS paper, the presentation should be more rigorous. For example,
1. "Subsequently, the experienced agent explores the training set tasks again to generate rejected trajectories. Our purpose is to extract superior task knowledge that cannot be acquired solely through supervised fine-tuning on chosen trajectories, thus further effectively boosting the agent's capabilities." As clarified by the authors, there is a significant logical gap between these two sentences. Generating rejected trajectories alone does not lead to extracting superior task knowledge that cannot be acquired through supervised fine-tuning on chosen trajectories.
2. "The state knowledge serves as the dynamic knowledge to constrain the agent model’s local planning and prevent it from generating hallucinatory actions." If you are talking about MDP, it is incorrect to say the state constrains the agent’s ... abilities. The state is just defined to provide full information of the environment for the policy to make decisions [1], rather than enhancing or constraining the agent’s abilities.
[1] Reinforcement Learning: An Introduction.
---
Rebuttal 2:
Title: Thanks for your response!
Comment: We are very fortunate that our rebuttal could address some of your questions and are extremely grateful for the opportunity to engage in further discussion with you. In response to your follow-up questions, we would like to provide the following explanations:
**Follow-up question for Q1**
You may consider our approach to be somewhat similar to the idea behind DPO, but while DPO optimizes the model by comparing the losses between chosen and rejected trajectories, requiring the memorization of both types of data, our method contrasts chosen and rejected trajectories to summarize knowledge. During training, the model only needs to learn from this knowledge without the additional requirement to memorize the rejected trajectories. This also validates that our knowledge enhancement allows the agent model to acquire knowledge that cannot be learned through SFT loss calculations.
**Follow-up question for Q3**
We would like to address this question from two perspectives:
1. The notation $h_t$ in Equation (5) is defined according to Equation (1), which does not include the task knowledge $\kappa$. However, **as mentioned in line 152, we have redefined $h_t$ to include the task knowledge $\kappa$**. Therefore, the inputs defined in Equation (5) and line 157 are different. We apologize if the redefinition of $h_t$ was not clear and may have caused some confusion. We will emphasize this in the revision by bolding it to improve readability.
2. **Even if we disregard the changes of $h_t$, as we clarified in our rebuttal Q2, the prompt used to summarize state knowledge ($\rho_{\rm StateKnow}$) in Equation (5) is essentially a set of few-shot examples.** This aligns with the experimental setup we supplemented in the rebuttal for Q3 if we directly use $\rho_{\rm StateKnow}$ at inference time, and the results of our experiments have demonstrated that the performance of providing knowledge through prompts is poor.
**For Additional Suggestions**
We sincerely apologize for any confusion caused in your reading.
1. By separating the synthesis of task knowledge into two processes, our initial intention was to clarify the logic, but we did not anticipate the difficulties it might cause for you. We are truly sorry for this oversight. As we have clarified, we will integrate lines 102-117 in the revision to achieve better readability and make it easier for readers to understand.
2. Initially, we defined the state knowledge within the state space to facilitate comprehension for readers. This definition might indeed lack rigor, and we will redefine it with another symbol similar to the task knowledge in the revision to ensure clearer expression. We are genuinely sorry for any inconvenience this has caused you!
Once again, we greatly appreciate your feedback. Your comments are invaluable to the improvement of our work.
**We look forward to your further reply and would like to continue the discussion. Would you kindly consider raising your rating score for our paper if you find that our response addresses some of your concerns? We greatly appreciate your consideration.**
---
Rebuttal 3:
Comment: Thank you for the prompt response.
Follow-up question for Q1: I would like to clarify my question again. This question follows up on your statement: "Therefore, we let the experienced agent run through the training data again to generate explored trajectories, so that the agent can summarize knowledge that cannot be learned solely through SFT. This step is somewhat similar to DPO, but we achieve it through knowledge augmentation rather than directly converting it into a loss calculation between chosen and rejected trajectories." It is somewhat intuitive, but following this line of reasoning, using DPO directly should yield better results compared to the WKM variant w/o state knowledge, i.e., WKM w/ task in your ablation studies, at least in the seen tasks. However, it seems contradictory that we observe the opposite result.
Follow-up question for Q3: Could the authors clarify why Equation (5) (with your redefined $h_t$) cannot be used to output the state knowledge? Additionally, I do not grasp the idea of few-shot prompts, as using Equation (5) to output state knowledge does not necessarily depend on them. Generally speaking, why can we assert that using model $A(y|X)$ to label a dataset $D=\{(X,y)\}$, and then fine-tuning $A$ with $D$ to obtain $A'$, results in a model $A'$ that is superior to $A$?
---
Rebuttal 4:
Title: Thank you for your continuous feedback!
Comment: Here is our further clarification of your questions:
1. We sincerely apologize for any confusion, but we must admit that we still do not see why our statement would imply that DPO should perform better on seen tasks. We need to reiterate that our method enables the model to read both correct and incorrect trajectories for the same task, using $\rho_{\rm TaskKnow}$ (detailed in Figure 11) to summarize why the correct trajectories are better than the incorrect ones, and training the WKM to learn to generate this kind of knowledge to augment the agent model. DPO, on the other hand, directly trains the agent model by minimizing a loss that widens the distribution gap between correct and incorrect trajectories, so as to favor the generation of correct trajectories. Therefore, we believe there is no direct relationship or conflict between the two approaches. Our previous statement of "similar to DPO" might have caused misunderstanding; our results only show that augmenting the agent model with synthetic task knowledge improves it more than training the agent model with DPO.
2. **(1)** As mentioned in lines 122-123, the prompt $\rho_{\rm StateKnow}$ that we use to summarize state knowledge is detailed in Appendix I.2, which is displayed in Figure 12. In response to Q2, we have clarified to you that the key part of our prompt is the State Knowledge Example (colored in purple), which is actually the few-shot examples. Therefore, we stated that generating state knowledge directly with Equation (5) is equivalent to the experiment we supplemented in Q3, where knowledge was generated directly using few-shot examples. As shown in our previous table, while it is certainly possible to generate knowledge by few-shot examples, it is clear that its effectiveness is not as good as that of the WKM.
**(2)** Regarding the question you raised about why training model $A$ with data annotated by model $A$ itself can lead to improvements for $A$, **this issue has been extensively validated in the field of LLM synthetic data (also known as self-training) [1][2][3][4][5][6]**. The key to self-training lies in ensuring the quality of self-generated data. For instance, [3] achieves this by hinting the correct labels of the data to the model, while [6] relies on the model's self-judgment capability to assess the quality of the data. Our approach ensures the quality of synthesized knowledge by meticulously designing few-shot examples and using entirely correct expert trajectories. The advantage of this method is that it does not depend on a large amount of manually annotated data or powerful closed-source models (e.g., GPT-4), fully leveraging the model's own potential.

However, this method usually has an upper bound because we cannot guarantee that the synthesized data is 100% correct, and as the model generates more data, the diversity of the data is also likely to decrease. How to improve the diversity and quality of synthesized data is therefore a key research direction in the field of data synthesis today. The principle behind self-training is still under investigation; we speculate that data annotated by the model itself may better match the "data distribution" the model has internalized, so that, from the model's perspective, this distribution is smoother. Although this method may not be as effective as training on fully human-annotated data, it crucially avoids the resource consumption of large-scale manual annotation. And if one day human-annotated data is exhausted, or LLMs grow into super models that humans cannot supervise, we will have to rely on model-synthesized data to improve LLM capabilities, so this is a very promising research direction in the LLM community.
[1] On LLMs-Driven Synthetic Data Generation, Curation, and Evaluation: A Survey.
[2] KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents.
[3] STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning.
[4] Best Practices and Lessons Learned on Synthetic Data for Language Models. (Google DeepMind)
[5] Self-training Language Models for Arithmetic Reasoning.
[6] Self-Rewarding Language Models.
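As a loose illustration of the label-filter-fine-tune loop that these self-training works share, consider the following skeleton. All function names and the interface are hypothetical; the real methods differ in how `keep` is implemented (answer checking in [3], self-judgment in [6], few-shot quality control in our case).

```python
def self_training_round(generate, keep, fine_tune, tasks):
    """One hypothetical self-training round: the model labels its own
    data, low-quality generations are filtered out, and the model is
    fine-tuned on the surviving (input, label) pairs."""
    labeled = [(x, generate(x)) for x in tasks]          # self-annotate
    kept = [(x, y) for x, y in labeled if keep(x, y)]    # quality filter
    return fine_tune(kept)                               # update the model
```

The quality filter is the crux: without it, the round degenerates into training on unverified self-generated labels, which is the concern raised in the discussion above.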
We look forward to your further reply and would like to continue the discussion.
Thanks again!
---
Rebuttal Comment 4.1:
Comment: As the rebuttal period is concluding, I would like to finally point out that self-training should not be interpreted in the same manner as the authors propose. Aligning with the notations in our discussion, for [3], in order to label $y$ in $D=\\{ (X,y) \\}$ for fine-tuning, (1) they filter the generated rationales ($y$ includes rationales followed by an answer in this setting) from model $A$ to retain only those that result in correct answers; (2) for incorrect answers, they provide the correct answer as a hint to the model and ask model $A$ to generate rationales in the same style as in the previous rationale generation step, instead of using the label $y$ generated by model $A$ directly. For [6], they employ RL optimization, where rewards are assigned by model $A$ based on the label $y$ generated by model $A$. This approach is still intuitive, as studies have shown that a model's evaluation capability generally surpasses its generation capability [7]. Similarly, they do not use the label $y$ generated by model $A$ directly for fine-tuning.
I would like to thank the authors' detailed response and the efforts made in the rebuttal. At this point, I maintain my original concerns; however, I am open to reconsidering my evaluation based on the suggestions from the Area Chairs.
[7] Language Model Self-improvement by Reinforcement Learning Contemplation
---
Rebuttal 5:
Title: Thank you for your patient response!
Comment: Dear reviewer,
We greatly respect your position, but we must say that we still stand by our viewpoint. In the era of deep learning, we cannot dismiss a method for lacking some interpretability when it is demonstrably effective. Moreover, **our goal is to train a World Knowledge Model that provides both task knowledge and state knowledge**. High-quality task knowledge requires comparison between positive and negative examples, which cannot be provided solely through prompts (as we have demonstrated in our supplementary experiments 1 and 2, our comparison with KnowAgent, and our analysis in Figure 4). Therefore, **it would be contrary to our goal of training a unified WKM if we were to train a model to provide task knowledge while prompting another model to provide state knowledge**.
With still two days remaining until the end of the discussion period, we also warmly welcome any further questions you may have. At the same time, we are very grateful for your patient review of our responses during the rebuttal phase. Thank you once again! | Rebuttal 1:
Rebuttal: Dear all reviewers,
Thank you for your thoughtful reviews! We appreciate all of your **positive comments** highlighting the strengths of our work, summarized below:
## **Our Strengths Summarized by Reviewers**
- **Reasonable motivation**:
- "The motivation for the methodology is clear and logically presented."(reviewer vJYv)
- " It's also quite reasonable to introduce a world knowledge model to enhance task-specific knowledge during planning."(reviewer 8XFL)
- **Interesting, novel, and easy to follow**:
- "The paper is well-written and easy to follow."(reviewer vJYv)
- "offers a novel and effective solution"(reviewer pvgu)
- "The idea of ... is interesting."(reviewer TdHL)
- "the method is novel"(reviewer 8XFL)
- **Superior performance, promising results, and interesting analysis**:
- "The superior performance compared to strong baselines demonstrates the practical effectiveness and robustness of the proposed WKM, providing solid evidence for its advantages."(reviewer pvgu)
- "Strong results when combining both task knowledge and state knowledge."(reviewer TdHL)
- "Weak-guide-strong analysis (table 4) interestingly shows the benefits of explicating the trajectories preference knowledge."(reviewer TdHL)
- "It also shows promising results using only 7B models that could barely perform planning with basic methods."(reviewer 8XFL)
- **Well-written** (reviewer vJYv, reviewer 8XFL)
## **Our Supplement Experiments**
We also sincerely thank reviewers for your constructive feedback and questions to improve our manuscript. In response to the reviewers' questions, **we mainly supplement the following experiments**:
1. For reviewer vJYv: **To intuitively demonstrate the effect of introducing rejected trajectories**, we conducted experiments synthesizing task knowledge based solely on expert trajectories.
2. For reviewer vJYv: **To address the concerns about the necessity of retraining a knowledge model**, we conducted experiments using high-quality task and state knowledge summarized from the training set as few-shot prompts for knowledge models to provide knowledge.
3. For reviewer vJYv: **For the convenience of understanding which part of the output action has the most significant impact**, we compare the results with $\gamma=0$, $\gamma=1$, and our WKM.
4. For reviewer 8XFL: **To fully show the advantages of separate training**, we further conducted experiments where a single model learns planning and knowledge at the same time.
**We have added supplementary experiments 1, 2, and 4 as parts of our Ablation Study and revised our Figure 3. We have also added supplementary experiment 3 to our Appendix with a table. The revised Figure 3 and the added $\gamma$ analysis table can be seen in our submitted PDF.** We will add the corresponding textual analysis in our main paper.
We will continue to further enhance the quality of this paper according to the discussion with reviewers.
Last but not least, we wish to **reiterate the motivation and main contributions** of our paper.
## **Our Motivation**
As most state-of-the-art LLMs are autoregressive models trained with next-token prediction, they lack the ability to fundamentally understand the real world, leading them to generate hallucinatory actions in local planning and to perform blind trial-and-error in global planning. In contrast to LLMs, humans possess a mental knowledge model of the physical world. When facing a specific task, they first briefly rehearse the entire process in their mind using rich prior knowledge, and they constantly maintain a dynamic cognition of the current world state. **The process by which humans handle planning tasks motivates us to develop a parametric World Knowledge Model (WKM) to facilitate agent planning.**
## **Our Contributions**
- Imitating humans' mental knowledge model, we introduce a parametric World Knowledge Model (WKM), providing prior task knowledge to guide global planning and dynamic state knowledge to assist local planning.
- Experimental results on three complex real-world simulated datasets with three state-of-the-art open-source LLMs, Mistral-7B, Gemma-7B, and Llama-3-8B, demonstrate that our method can achieve superior performance compared to various strong baselines.
- We provide analysis illustrating that our WKM can effectively alleviate the blind trial-and-error and hallucinatory action issues, providing strong support for the agent's understanding of the world.
- Other interesting findings of our paper include: 1) our instance-level task knowledge can generalize better to unseen tasks, 2) weak WKM can guide strong agent model planning, and 3) unified WKM training has promising potential for further development.
## **Thank you!**
We sincerely thank the reviewers for their constructive suggestions and questions to enhance our paper. Please reply if you have any further questions, and we will be more than happy to continue the discussion.
Pdf: /pdf/0a560f274c714faebfa6ea9197a0fd7bbd993272.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD | Accept (poster) | Summary: This paper provides an asymptotic analysis of Unified Distributed SGD (UD-SGD) under heterogeneous agent dynamics and a large family of communication topologies. It shows that under certain assumptions: i) regularity of the gradient, **ii) ergodicity of Markovian sampling**, iii) decreasing step sizes with communication intervals that do not increase too fast, iv) stability of the model parameter, and v) a contraction property of the doubly stochastic communication matrix, the UD-SGD algorithm guarantees that every agent's parameter $\theta_n^i$ converges to the same local minimum $\theta^*$ with some covariance matrix $V_i$, and the average over all agents $\overline{\theta}=\frac{1}{n}\sum_{i=1}^n\theta_n^i$ also converges to $\theta^*$ with some covariance matrix $V$.
Strengths: 1. Although some previous works discuss asymptotic analysis for distributed learning, most of them focus on a specific algorithm under a given communication topology with i.i.d. sampling. This work is more general: it provides an asymptotic analysis for more diverse communication topologies with Markovian sampling.
2. In order to discuss the theoretical properties of a general distributed learning framework under diverse communication topologies, the authors introduce Unified Distributed SGD (UD-SGD). The clear definition of this unified version of DSGD helps readers better understand the universality of the results.
3. Let $V_i$ be the limiting covariance matrix of agent $i$ and $V$ be the covariance matrix of the mean of all agents ($V=\frac{1}{n^2}\sum_{i=1}^{n} V_i$). The authors provide the exact form of $V$.
4. The paper provides numerical experiments on logistic regression and neural networks with different sampling strategies to support the theory.
Weaknesses: 1. In line 26, the authors claim that $\mathcal{L}$ represents the collection of local minima of the objective function $f$. In line 246, the authors claim that $\theta^*\in \mathcal{L}$. When $f$ is non-convex, $\mathcal{L}$ need not contain a single element, so there are many possible choices of $\theta^*$. How can we ensure that the consensus point to which $\theta_n$ converges is unique? And how is $\theta^*$ chosen? Will it be the global minimum?
2. The asymptotic analysis in Theorem 3.3 is mainly based on Assumption 2.2, which assumes that each agent's dynamics $\{X_{n}^i\}$ form an ergodic Markov chain. Could the authors give some sampling strategies that satisfy this property and are now widely used in DSGD? What are the advantages of Markovian sampling compared with i.i.d. sampling?
3. In Figure 2 of the experimental section, the authors only compare the performance of different Markovian sampling methods. Could the authors also compare with i.i.d. sampling and shuffling strategies?
Technical Quality: 3
Clarity: 3
Questions for Authors: Same as the weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Same as the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: In line 26, the authors claim that $\\mathcal{L}$ represents the collection of local minima of objective function $f$. In line 246, the authors claim that $\\theta^*\\in\\mathcal{L}$. Since when $f$ is non-convex, $\\mathcal{L}$ do not have a single element and $\\theta^*$ could have a lot of selections. How could we ensure that the convergence of consensus $\\theta_n$ is unique? And how to choose $\\theta^*$? Will that be the global minima?
As mentioned in Footnote 1 of our current manuscript, our non-convex setting only concerns the (possibly many) local minima, which is a common scenario in deep learning. We believe that the reviewer's confusion comes from our wording 'an optimal point $\\theta^*\\in\\mathcal{L}$' around line 246. Here $\\theta^*$ refers to a local minimum rather than the global minimum. In our UD-SGD framework, we ensure that all agents reach a consensus and converge to one of the local minima. Each run of the algorithm may converge to a different local minimum depending on the initialization and the stochastic nature of the updates. To avoid confusion, we will revise the wording from "an optimal point" to "a local minimum" in line 246. This should make it clear that we are discussing convergence to a local minimum rather than a global minimum.
Our CLT result is conditional on the algorithm converging to a specific local minimum $\\theta^*$. Given that $\\lim_{n\\to\\infty} \\theta\_n=\\theta^*$, we derive the CLT for this particular $\\theta^*$. Without knowing the exact local minimum to which the algorithm converges, it is impossible to compute the limiting covariance, which characterizes the performance at the point of convergence. We appreciate the reviewer's feedback and will add the following clarification to line 252 in the revision: 'For notational simplicity, and without loss of generality, our remaining results are stated while conditioning on the event $\\{\\theta\_n\\to\\theta^*\\}$ for some $\\theta^*\\in\\mathcal{L}$'.
Lastly, we mentioned in Footnote 1 that an additional condition such as the PL inequality is required to guarantee convergence to a global minimum, because the PL inequality ensures that every local minimum is also a global minimum, which simplifies the landscape of the objective function. However, our current work focuses on local minima and does not pursue the PL inequality.
> Q2: The asymptotic analysis Theorem 3.3 is mainly based on Assumption 2.2, which assumes that the dynamic agent is an ergodic Markov chain. Could the author give some sampling strategies that satisfy this property and are now widely used in DSGD? What are the advantages of Markov sampling compared with the i.i.d sampling method?
Metropolis-Hastings Random Walk (MHRW) and Simple Random Walk (SRW) are the most popular sampling strategies in DSGD that satisfy the ergodic Markov chain property (e.g., Section 2.1 in [1], Section 2 in [2]). Specifically, MHRW with a uniform target distribution has the following transition probabilities:
$$P(X\_{n+1}=j | X\_n=i) \\triangleq P\_{ij} = \\min \\{\\tfrac{1}{d\_i},\\tfrac{1}{d\_j}\\} ~~\\forall j \\in Neighbor(i), \\quad P\_{ij}=0 ~~\\text{for non-neighbors } j\\neq i, \\quad P\_{ii}= 1-\\sum\_{j\\neq i} P\_{ij}$$
For SRW, the transition probability is $P\_{ij}=1/d\_i$ for $j\\in Neighbor(i)$ and $P\_{ii}=0$, with the stationary distribution proportional to the degree of each node.
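For illustration, the MHRW construction above can be sketched in a few lines of Python (a toy sketch of the standard construction, not code from our paper; the path-graph example is our own):

```python
import numpy as np

def mhrw_transition_matrix(adj):
    """Metropolis-Hastings random walk with a uniform target distribution.

    adj: symmetric 0/1 adjacency matrix of a connected undirected graph.
    Returns P with P[i][j] = min(1/d_i, 1/d_j) for neighbors j != i,
    and P[i][i] = 1 - sum of the off-diagonal entries in row i.
    """
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)                      # node degrees d_i
    n = adj.shape[0]
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j] > 0:
                P[i, j] = min(1.0 / deg[i], 1.0 / deg[j])
        P[i, i] = 1.0 - P[i].sum()             # lazy self-loop mass
    return P

# 4-node path graph: degrees are 1, 2, 2, 1
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
P = mhrw_transition_matrix(A)
# Rows sum to 1 and P is symmetric, so the uniform distribution is stationary.
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(P, P.T)
```

Since the resulting $P$ is symmetric (hence doubly stochastic), the uniform distribution is stationary regardless of the node degrees, which is exactly why MHRW is preferred when unbiased sampling over the graph is needed.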
Regarding the advantages of Markovian sampling over i.i.d. sampling, we have emphasized them in lines 61 - 72. For the sake of completeness, here is a brief summary:
- **Handling Limited Data Access:** Markovian sampling is particularly useful when agents have limited or sequential access to data, making i.i.d. sampling infeasible. For example, Fog learning considers a multi-layer hybrid learning framework consisting of heterogeneous devices, where each agent is treated as an edge server of the next-layer network [3]. This contributes to the graph-like structure of the dataset held by each agent, where i.i.d. sampling is infeasible.
- **Efficiency in High-Dimensional Spaces:** In high-dimensional or constrained sample spaces, Markov Chain Monte Carlo (MCMC) methods based on Markovian sampling are more computationally efficient and effective than i.i.d. sampling, which can be costly and impractical because many rejections may occur before an i.i.d. sample satisfying the constraints is obtained.
> Q3: In figure 2 of the experimental section, the authors only compare the performance of different Markov sampling methods. Could the author also compare with the iid sampling and sampling shuffle strategies?
Thank you for the suggestion. In the current manuscript, we have included a comparison between i.i.d. sampling and shuffling, mentioned in lines 357 - 359 with details deferred to Figure 3 in Appendix G.1 due to space constraints. In particular, we fixed the second group of clients, who perform Markovian sampling, and varied the sampling strategies for the first group of clients. The results indicate that the shuffling method outperforms i.i.d. sampling in terms of MSE. This finding is consistent with existing literature (e.g., [2]), which shows that shuffling has zero asymptotic variance, in contrast to the positive variance associated with i.i.d. sampling. We will move this comparison from the Appendix into Section 4 if our paper is accepted (using the extra page allowed in the camera-ready version).
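The zero-asymptotic-variance property of shuffling versus the positive variance of i.i.d. sampling can be illustrated with a toy mean-estimation experiment (our own illustrative construction, not the experiment in Appendix G.1):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=100)
true_mean = data.mean()

def run_trial(shuffle):
    """Estimate the dataset mean after one epoch of 100 sampling steps."""
    if shuffle:
        order = rng.permutation(100)            # each point visited exactly once
    else:
        order = rng.integers(0, 100, size=100)  # i.i.d. sampling with replacement
    return data[order].mean()

mse_shuffle = np.mean([(run_trial(True) - true_mean) ** 2 for _ in range(200)])
mse_iid = np.mean([(run_trial(False) - true_mean) ** 2 for _ in range(200)])

# Shuffling recovers the exact mean after a full pass (zero variance up to
# floating point), while i.i.d. sampling keeps variance ~ Var(data)/100.
assert mse_shuffle < mse_iid
```

Under these toy assumptions, shuffling's estimate is exact after one epoch, mirroring the variance ordering (shuffling > i.i.d.) reported in the comparison above.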
[1] B. Johansson, M. Rabi, and M. Johansson. "A randomized incremental subgradient method for distributed optimization in networked systems." SIAM Journal on Optimization 20, no. 3 (2010): 1157-1170.
[2] J. Hu, V. Doshi, and D.Y. Eun. "Efficiency ordering of stochastic gradient descent." NeurIPS, 2022.
[3] S. Hosseinalipour, C.G. Brinton, V. Aggarwal, H. Dai, and M. Chiang. "From federated to fog learning: Distributed machine learning over heterogeneous wireless networks." IEEE Communications Magazine 58, no. 12 (2020): 41-47.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed response. They answer all my concerns and I would like to keep my score. | Summary: This paper studies the asymptotic convergence behavior of federated learning under the UD-SGD framework with Markovian data. The authors establish a new central limit theorem that considers the strategy of every agent, which goes beyond the existing bounds that only focus on the worst-performing agent. Their theory emphasizes the importance of every individual and also explains linear speedup and asymptotic network independence.
Strengths: 1. This paper established theories for a more general federated learning framework and gives a refined analysis to encode the behavior of every agent into their bound.
2. The new asymptotic analysis could explain linear speedup scaling with the number of agents and asymptotic network independence.
3. The new analysis shows that upgrading only a small group of agents could benefit the whole system and the authors conduct experiments to validate it.
4. The paper is well-written.
Weaknesses: 1. As the authors mention, Assumption 2.4 seems strong, and a finite-sample analysis would be preferred.
2. As far as I can see, the experiments only implement DSGD, but the UD-SGD framework covers more algorithms.
3. The linear speedup with the number of agents and asymptotic network independence are only shown in theory and lack empirical justification.
4. The experiments on neural networks are somewhat toy.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the main technical challenge when utilizing Poisson equation to prove Theorem 3.3? Specifically, what is new compared with the analysis in reference [23] and [30]?
2. Do the authors believe the current asymptotic analysis technique could be extended to (federated) reinforcement learning? I saw some finite-sample analyses for federated RL in your references like [72].
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: What is the main technical challenge when utilizing Poisson equation to prove Theorem 3.3? Specifically, what is new compared with the analysis in reference [23] and [30]?
The main technical challenge in utilizing the Poisson equation to prove Theorem 3.3 lies in addressing the consensus error in the decomposed noise terms for each agent. Specifically:
---
*We decompose $\\nabla F\_i (\\theta\_n^i,X\_n^i )-\\nabla f\_i (\\theta\_n^i)$ into three parts in (48) using Poisson equation: $e\_{n+1}^i,\\nu\_{n+1}^i,\\xi\_{n+1}^i$. The consensus error $\\theta\_n^i-\\theta\_n$ embedded in noise terms $e\_{n+1}^i$ and $\\xi\_{n+1}^i$ is a new factor, whose characteristics have been quantified in our Lemma 3.1 but are not present in the single-agent scenario analyzed as an application of stochastic approximation in references [23] and [30].*
---
We will replace the explanation of technical challenges around lines 259-261 in our original submission with the above content. Full explanations are provided below for completeness. All line and equation numbers refer to our original submission.
As mentioned in lines 754 – 756, for the noise term $\\xi\_{n+1}^i$, directly following the analysis in [23] and [30] leads to a one-step error for all agents, as expressed in (53). In a single-agent scenario (like SGD), this one-step error is simply $-\gamma_n\\nabla F$. However, in a multi-agent scenario, this one-step error cannot be directly analyzed due to the model parameters $\\Theta\_n$ of all agents being multiplied by the communication matrix $\\mathbf{W}$ in (18). To overcome this issue, we decompose this one-step error among all agents into two parts in (54): the consensus error among all agents and the one-step error of the average solution among all agents. Although separating the error into consensus error and error between the average and target solution has been done similarly in [1], our approach in quantifying the consensus error in Lemma 3.1 handles broader classes of communication patterns within our UD-SGD framework, instead of DSGD in optimization in [1]. We can separate the effect of the consensus error and utilize our Lemma 3.1 on the speed of consensus error to obtain the condition on $\\xi\_{n+1}^i$ in (55). Similarly, for the noise term $e\_{n+1}^i$, we follow a similar logic in decomposing its covariance in (73). This separation of the consensus error allows us to derive (75), ensuring that we can rigorously analyze the covariance structure and account for the multi-agent interactions.
[1] Zeng, Sihan, Thinh T. Doan, and Justin Romberg. "Finite-time convergence rates of decentralized stochastic approximation with applications in multi-agent and multi-task learning." IEEE Transactions on Automatic Control 68, no. 5 (2022): 2758-2773.
> Q2: Do the authors believe the current asymptotic analysis technique could be extended to (federated) reinforcement learning? I saw some finite-sample analyses for federated RL in your references like [72].
Yes, we believe that our asymptotic analysis can be extended to Federated RL. Our analysis is based on transforming the UD-SGD framework into a stochastic approximation setting, which includes both optimization algorithms (SGD) and RL algorithms (TD learning and Q-learning). This structural compatibility allows for an extension to RL.
To extend our asymptotic analysis to Federated RL, we need to make the following adaptations: Replace the gradient $-\\nabla F\_i$ in our UD-SGD framework with the TD error from the Bellman equation, denoted as $g\_i (\\cdot)$ in Algorithm 1 of [72]; Adjust the CLT in Theorem 3.3 by replacing $-\\nabla^2 F\_i$ with $\\nabla g\_i$ to reflect the gradient of the TD error. This adaptation should yield the statistical properties of Federated RL, providing insights into the asymptotic performance of each agent rather than focusing solely on the worst-performing agent, as done in some finite-sample analyses like [72].
A notable caveat in RL is that Markovian samples are derived from a given behavioral policy, which is often uncontrollable. While our analysis can still evaluate the contribution of each agent to the overall system performance, the improvements from efficient sampling strategies discussed in our paper cannot be directly extrapolated from the optimization setting to the RL context.
> Other concerns: As far as I could see in the experiments, the author only implemented DSGD but the UD-SGD framework has more algorithms. The linear speedup scaling with the number of agents and asymptotic network independence are only mentioned in theory but do not have empirical justification. The experiments for neural networks are somewhat toy.
We appreciate the reviewer's request for more empirical results to support our theory. To address your concerns, we conducted additional experiments shown in Figures 1–3 of the rebuttal:
- In Figure 1(c), we simulate DSGD with a communication matrix randomly chosen from 5 doubly stochastic matrices at each aggregation phase. In Figure 1(d), we test the DFL algorithm with increasing communication interval $K(l) = \\max\\{1,\\lceil \\log(l)\\rceil\\}$ after the $l$-th aggregation phase. Moreover, we trained a ResNet-18 model with different sampling strategies, as shown in Figure 2(b). These results demonstrate that improved sampling strategies lead to faster convergence with smaller MSE.
- To numerically test the linear speedup, we conducted image classification with a 5-layer CNN on the CIFAR-10 dataset with partial client participation, varying the number of agents from 10 to 40. As shown in Figure 2(a), the training loss is inversely proportional to the number of agents, confirming the linear speedup.
- In Figure 1(b), we observe that all four algorithms (Centralized SGD, DSGD with time-varying topologies, FL with full client participation, and DFL with increasing communication interval) reach similar performance around 1000 steps, indicating asymptotic network independence.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I do not have further questions now. | Summary: This paper conducts an asymptotic analysis of Unified Distributed SGD (UD-SGD), which has a generalized communication patterns (modelled with a doubly stochastic communication matrix). The paper investigates several different sampling strategies, such as i.i.d. sampling, shuffling, and Markovian sampling
Strengths: - Rigorous theory: I went through most of the proofs in the appendix and as far as I can tell the theory is sound.
- Well presented: Despite the heavy derivation I find this paper easy to follow as the proof strategies and theoretical insights are discussed clearly in the main exposition.
Weaknesses: Empirical results are quite sparse. Not that this is a big problem for a theory paper, but maybe some additional studies would make it more interesting. I have some suggestions:
- Are all of the results obtained using the same communication matrix (referring to line 982 in appendix G1)? Since the theory is generic for all double stochastic W maybe the authors could present the average result over 5 random W?
- How was local data allocated to each client? The text said each client holds 500 data points from a bigger dataset -- are they iid? If so, it might be interesting to investigate non iid local data (using Dirichlet distribution, which is a common setting in FL)
- How does client dropout (also a common setting in FL) affect the theory and empirical results?
Technical Quality: 4
Clarity: 3
Questions for Authors: I have no further question.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Are all of the results obtained using the same communication matrix (referring to line 982 in appendix G1)? Since the theory is generic for all double stochastic W maybe the authors could present the average result over 5 random W?
We thank the reviewer for the question and the suggestion. To clarify, we didn’t use the same communication matrix throughout all simulation results in the current manuscript. For the simulation in Section 4, we indeed used a fixed doubly stochastic matrix $\\mathbf{W}$ (with expression in line 982), as pointed out by the reviewer.
In the simulation in Appendix G.2, we have conducted the experiment in the FL setting with partial client participation. In this scenario, at each aggregation phase, only 4 random agents (clients) out of 10 upload their model parameters to a central server. This results in a random communication matrix $\\mathbf{W}\_{\\mathcal{S}}$, determined by a random set of participating clients $\\mathcal{S}$ at each aggregation phase. The detailed expression of this random matrix has been provided in Appendix F.1. We admit that the random nature of the matrix $\\mathbf{W}\_{\\mathcal{S}}$ was not explicitly explained in the simulation setup in Appendix G.2. To improve clarity, we will revise the description around lines 374-376 in the main body and explicitly point out in Appendix G.2 that the communication matrix in the FL setting with partial client participation is stochastic.
To accommodate the reviewer’s suggestions, we also simulate 5 random W setting under the decentralized SGD scenario (namely DSGD with time-varying topologies (DSGD-VT)), where at each aggregation phase, we randomly pick a communication matrix $\\mathbf{W}$ from a set of 5 doubly stochastic matrices. We perform the same configuration of sampling strategies as in Section 4, where the first group of agents perform shuffling or i.i.d. sampling while the rest of agents conduct simple random walk (SRW), non-backtracking random walk (NBRW), and self-repellent random walk (SRRW). The result is shown in Figure 1(c) in the rebuttal pdf file (could be found in author rebuttal). We observe the same numerical trend as other algorithms in the current manuscript (DSGD with fixed $\\mathbf{W}$ in Section 4, FL with partial client participation in Appendix G.2), where improving agent’s sampling strategies leads to faster convergence with smaller MSE.
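As a side note, one generic way to generate random doubly stochastic matrices like those in the DSGD-VT setting is the Birkhoff-von Neumann construction, i.e., a convex combination of permutation matrices (an illustrative sketch only; the actual matrices used in our experiments are a fixed set of our own):

```python
import numpy as np

def random_doubly_stochastic(n, k=5, seed=0):
    """Random doubly stochastic matrix built as a convex combination of
    k random permutation matrices (Birkhoff-von Neumann construction)."""
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet(np.ones(k))   # nonnegative, sums to 1
    W = np.zeros((n, n))
    for w in weights:
        perm = rng.permutation(n)
        W[np.arange(n), perm] += w        # add w * (permutation matrix)
    return W

W = random_doubly_stochastic(6)
assert np.allclose(W.sum(axis=0), 1.0)    # columns sum to 1
assert np.allclose(W.sum(axis=1), 1.0)    # rows sum to 1
```

Because every permutation matrix has exactly one entry per row and column, any convex combination automatically satisfies the doubly stochastic requirement in Assumption 2.5-style conditions.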
> Q2: How was local data allocated to each client? The text said each client holds 500 data points from a bigger dataset -- are they iid? If so, it might be interesting to investigate non iid local data (using Dirichlet distribution, which is a common setting in FL)
Thanks for raising the question about IID and Non-IID data distribution. First, our theory is applicable to both IID and Non-IID data settings. Regarding the simulation in Section 4, as pointed out in lines 340 – 341 in the current manuscript, the ijcnn1 dataset (50k data points with binary classes) was shuffled and then evenly split among 100 agents, with each agent receiving 500 data points. Thus, each agent held a disjoint local dataset such that they didn’t have overlapped data point with other agents, resulting in Non-IID data. In our original simulation, the distribution of labels was relatively balanced, with the number of label '1' for each agent ranging from 40 to 60.
We agree that investigating more diverse data distributions would provide valuable insights, especially in federated learning. To address this, we conducted additional simulations in our rebuttal, where the data is allocated to agents using a Dirichlet distribution with the default alpha value of 0.5, which ensures a more varied label distribution among agents. For the binary classification problem, Figure 1(a) in the rebuttal pdf file shows that the number of data points with label '1' ranges from 0 to 350 across agents, forming non-IID data. We simulate DSGD and DFL on this imbalanced dataset, and the results are shown in Figures 1(c) and (d). The plots demonstrate that our asymptotic analysis is robust to the data distribution, showing the same trend for different combinations of sampling strategies.
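The Dirichlet-based allocation above can be sketched as follows (an illustrative toy implementation of the common FL splitting recipe, not our actual experiment code; the function and parameter names are our own):

```python
import numpy as np

def dirichlet_partition(labels, n_agents, alpha=0.5, seed=0):
    """Split sample indices among agents, drawing each class's per-agent
    proportions from Dirichlet(alpha) -- a common recipe for simulating
    non-IID local data in federated learning."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    parts = [[] for _ in range(n_agents)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        props = rng.dirichlet(alpha * np.ones(n_agents))  # agent shares of class c
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for agent, chunk in enumerate(np.split(idx, cuts)):
            parts[agent].extend(chunk.tolist())
    return parts

# 1000 binary-labeled points split across 10 agents; smaller alpha -> more skew
labels = np.array([0, 1] * 500)
parts = dirichlet_partition(labels, n_agents=10, alpha=0.5)
assert sum(len(p) for p in parts) == 1000   # every point assigned exactly once
```

Decreasing `alpha` concentrates each class on fewer agents, producing the more imbalanced label counts described above.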
> Q3: How does client dropout (also a common setting in FL) affect the theory and empirical results?
Thank the reviewer for pointing out the dropout phenomenon. In terms of client dropout, we believe that the reviewer refers to the following behavior:
*The client temporarily exits the training process and will rejoin the training in the future with some probability (Bernoulli dropout).*
This is usually modeled as ‘partial client participation’ in FL. In our original submission, we have discussed this scenario in both theory (in Appendices F.1 and F.2) and in experiment (only 4 random participants out of 10 agents at each aggregation in Appendix G.2). Even under this client dropout effect, the random communication matrix $\\mathbf{W}$ generated by this ‘client dropout’ effect still satisfies our assumption 2.5. Therefore, our UD-SGD framework guarantees the convergence to the set of local minima because every agent equitably contributes to the learning process in the long run. In Figure 4 of our original submission, as well as Figure 2(b) in the rebuttal pdf file, we have shown that in the FL with partial client participation, our main message in this paper still holds: efficient sampling strategies employed by individual agents improve the overall convergence in UD-SGD.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response
Comment: I don't find many weaknesses to begin with, and I'm happy to maintain my positive score. | null | null | Rebuttal 1:
Rebuttal: We thank all three reviewers for their comments and for the time and effort they put into reading, understanding, and evaluating our paper. In particular, we appreciate the question that helped us make our technical contribution much clearer (Reviewer WSiF) and the suggestions to conduct more experiments to support our theory (Reviewers Evbf, WSiF, ds2S). We carefully considered all questions, concerns, and comments provided by the reviewers and address all of them below. We provide detailed responses to each review separately and believe that our responses address all of the reviewers' concerns. We also upload a PDF file containing figures of additional simulation results, which will be included in our revision. When responding to each reviewer, we specify whether a figure comes from our original submission or from our additional experiments in the rebuttal.
Now, we explain our additional experiments in the PDF file. Fig. 1 shows the extra simulation settings in the binary classification problem with Non-IID data, and Fig. 2 includes more experiments on the image classification problem. Specifically,
---
1. We perform additional binary classification simulations on the IJCNN1 dataset with more varied distribution among 100 agents by leveraging Dirichlet distribution with the default alpha value of 0.5 (see Fig. 1(a) for the data distribution). We follow the same setup as in Section 4 of our original submission, i.e., we split 100 agents into two groups, where the first group of 50 agents leverages either iid sampling or shuffling, while the second group utilizes three Markovian sampling methods: SRW, NBRW, and SRRW (with hyperparameter $\\alpha=20$). All 100 agents exchange model parameters through a communication matrix $\\mathbf{W}$.
In Fig. 1(b), we empirically show the asymptotic network independence property via four algorithms under our UD-SGD framework:
- Centralized SGD (communication interval $K=1$, communication matrix $\\mathbf{W}=\\mathbf{1}\\mathbf{1}^T/N$);
- LSGD-FP (FL with full client participation, $K=5$, $\\mathbf{W}=\\mathbf{1}\\mathbf{1}^T/N$);
- DSGD-VT (DSGD with time-varying topologies, $K = 1, \\mathbf{W}$ randomly chosen from $5$ doubly stochastic matrices);
- DFL (decentralized FL with fixed $\mathbf{W}$ and increasing communication interval $K(l)=\max\{1,\lceil \log(l)\rceil\}$ after the $l$-th aggregation).
*We fix the sampling strategy (shuffling, SRRW) throughout this plot.* All four algorithms overlap after around 1000 steps, indicating that they have entered the asymptotic regime where the CLT result dominates and exhibit similar performance, which demonstrates asymptotic network independence in the long run.
Fig. 1(c) & (d) show the performance of different sampling strategies in the DSGD-VT and DFL algorithms in terms of MSE (over 120 independent trials). Both plots consistently demonstrate that improving agents' sampling strategies (e.g., shuffling > i.i.d. sampling, and SRRW > NBRW > SRW) leads to faster convergence with smaller MSE, supporting our theory.
---
2. We perform additional image classification experiments on the CIFAR-10 dataset in the FL setting with partial client participation. We follow the same setup as in Appendix G.2 of our original submission, i.e., 50k images are evenly distributed to 10 agents, 5 agents in the first group (i.i.d. sampling or shuffling) and 5 in the second group (Markovian sampling). Each agent possesses 5k *disjoint* images, which are further divided into 200 batches of 25 images each. Only 4 randomly chosen agents participate in the training process at each aggregation phase. In Fig. 2(a), *we fix the sampling strategy (shuffling, SRRW with $\alpha=10$)* and test the linear-speedup effect for the 5-layer CNN model (structure given in lines 1028 - 1033 of our original submission) by duplicating the 10 agents to $N$ agents with $N\in\{10,20,30,40\}$, keeping the same participation ratio of 0.4. As can be seen from the plot, the training loss is roughly inversely proportional to the number of agents: at 200 rounds, the training loss is 0.52 for 10 agents, 0.23 for 20 agents, 0.18 for 30 agents, and 0.12 for 40 agents. In Fig. 2(b), we extend the current simulation in Appendix G.2 from the 5-layer CNN model to a ResNet-18 model in order to numerically test the performance of different sampling strategies on a more complex neural network training task. Fixing shuffling in the first group of agents, we observe that improving Markovian sampling from SRW to NBRW, and then to SRRW, accelerates training and yields smaller training loss.
We eagerly anticipate feedback from the reviewers and are happy to offer further details and clarifications as needed!
Pdf: /pdf/1cf10c4cb14d522d433e6132eb7d2c25f77a34ae.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities | Accept (poster) | Summary: The authors propose a novel Kernel Language Entropy (KLE) method for uncertainty estimation in white- and black-box LLMs. KLE defines positive semidefinite unit trace kernels to encode the semantic similarities of LLM outputs and quantifies uncertainty using the von Neumann entropy. It considers pairwise semantic dependencies between answers (or semantic clusters), providing more fine-grained uncertainty estimates than previous methods based on hard clustering of answers. The authors theoretically prove that KLE generalizes the previous state-of-the-art method, semantic entropy, and empirically demonstrate that it improves uncertainty quantification performance across multiple natural language generation datasets and LLM architectures.
Strengths: 1. The authors propose Kernel Language Entropy, a novel method for uncertainty quantification in natural language generation.
2. The authors propose concrete design choices for their method that are effective in practice, for instance, graph kernels and weight functions.
3. The authors empirically compare their approach against baseline methods across several tasks and LLMs with up to 70B parameters (60 scenarios total), achieving SoTA results.
Weaknesses: 1. The motivation is clear in Fig. 1, but why a kernel is chosen to solve the problem is not made very clear.
2. The proposed method appears to depend on the number of samples drawn, which is expensive. How does it perform when only a few samples are drawn?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How do I get the final answer based on 10 samplings? For example, in fig 1, what could be the final answer of LLM2, majority vote?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer tEe6,
Thank you for the positive assessment of the novelty, practical effectiveness, and empirical comparison of our work. We would like to address the concerns and questions you raised below:
**Weaknesses**
>The motivation is clear in Fig. 1, but why a kernel is chosen to solve the problem is not made very clear.
Considering only semantic clusters, as done in semantic entropy [1], is severely limiting because it assigns responses to strictly separate equivalence clusters. In reality, the space of semantic meanings is more nuanced and fine-grained; some answers may be similar even though they are not exactly semantically equivalent. Therefore, introducing a metric to capture semantic similarity is essential, and assigning a kernel is a natural choice for this purpose. In the center of Figure 1, we visualize semantic kernels: those kernels characterize the distance in semantic space and thus are more expressive than simply considering a distribution over distinct semantic clusters. The von Neumann entropy over the normalized kernel provides a convenient way to quantify entropy while accounting for semantic similarity. We will clarify this in the next revision.
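To make this concrete, here is a minimal sketch of the von Neumann entropy over a unit-trace PSD kernel (names are illustrative, not from the paper's code). When the kernel is diagonal with the cluster probabilities on its diagonal, the VNE reduces to the Shannon entropy of the clusters, which is exactly the semantic-entropy special case the rebuttal alludes to:

```python
import numpy as np

def von_neumann_entropy(K, eps=1e-12):
    """VNE(K) = -Tr(K log K) for a symmetric PSD, unit-trace kernel K."""
    eigvals = np.linalg.eigvalsh(K)        # real, since K is symmetric PSD
    eigvals = np.clip(eigvals, eps, None)  # guard against log(0)
    return float(-np.sum(eigvals * np.log(eigvals)))

# Diagonal kernel with cluster probabilities on the diagonal: VNE
# reduces to the Shannon entropy of the cluster distribution.
p = np.array([0.5, 0.25, 0.25])
K_diag = np.diag(p)
shannon = -np.sum(p * np.log(p))
```

Off-diagonal kernel entries then lower the entropy relative to the hard-clustered case, reflecting that semantically similar answers carry less combined uncertainty.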
> The proposed method looks like it depends on the sampling times, which is expensive. How is the performance when we sample few times?
For the rebuttal, we conducted an ablation study to investigate the optimal number of samples (see Fig. 1 in the attached PDF). We observed that the performance of KLE is better than that of SE for all sample sizes, with a significant increase as the number of samples grows from 2 to 6 and continued gains up to 10 samples. In practice, we recommend selecting as many samples as feasible and parallelizing sampling if needed.
**Questions**
> How do I get the final answer based on 10 samplings? For example, in fig 1, what could be the final answer of LLM2, majority vote?
Following previous works [1, 2], we chose an answer sampled with a low temperature (T=0.1) as the final answer of an LLM (LL: 282-283). The majority vote is another viable option for short generations; however, in long-form generation, answers become more diverse, which means that each semantic cluster is small and the majority vote becomes unreliable.
**Concluding remarks**
We would be grateful if you could let us know whether our explanations have addressed your concerns. Please let us know if you have any other questions or concerns.
**References**
[1] Farquhar, et al. (2024). Detecting Hallucinations in Large Language Models Using Semantic Entropy.
[2] Kuhn, et al. (2023). Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. All of my concerns are addressed. And after reading other reviews, I will keep my score. | Summary: The paper introduces the method "Kernel Language Entropy", capturing semantic similarities of output sequences via semantic kernels and subsequently estimating uncertainty using the von Neumann entropy.
Strengths: - The paper proposes a novel approach to estimate uncertainty in LLMs.
- It presents a solid theoretical foundation by showing that Kernel Language Entropy generalizes Semantic Entropy.
Weaknesses: The primary area for improvement is the paper's structure. The paper is quite difficult to follow due to definitions and expressions not being clearly contextualized, for instance:
- **Sections and Headings**: Section 3 is inconsistent in its use of subsections and subheadings. It begins with a motivating example (subheading), followed by the formal definition of PSD kernels without a subheading. The definition of semantic kernels is embedded in the text without a subheading, making it less prominent than the practical approach for constructing semantic kernels, which has a dedicated subsection at the end of the section. Also, VNE is given more prominence than the main concept, KLE (starting with the definition of KLE (subheading) as the VNE, followed by deriving its properties, would improve coherence).
- **Kernels**: In Section 3.1, two explicit kernels ($K_{heat}$ and $K_{Matérn}$) are introduced. In Section 5, the authors then propose to use $K_{heat}$ and $K_{full}$, with $K_{full}$ being a weighted sum of $K_{heat}$ and $K_{SE}$. The previously proposed $K_{Matérn}$ is no longer considered in the main experiments, while $K_{SE}$ has not been introduced before. It only becomes clear in Section B of the Appendix that $K_{SE}$ is equal to Semantic Entropy, which is referred to as $SE$ in Section 5. Also, the importance of the weighting factor for $K_{full}$ is not discussed. In general, the criteria for choosing between kernels are unclear (and not all kernels consistently outperform the baselines across datasets).
Technical Quality: 3
Clarity: 2
Questions for Authors: - Is the rather small model Llama 3 8B Instruct capable of evaluating the correctness of an output sequence? Have you considered other (statistics-based) metrics to evaluate the correctness?
- Have the authors considered utilizing a regression model that directly assigns a single value to the semantic similarity instead of unintuitively having to aggregate the three classes of the NLI model?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer DkpB,
Thank you for your positive assessment of novelty and theoretical motivation of our work. Please, let us address your questions and pointed weaknesses:
**Weaknesses**
>Section 3 is inconsistent in its use of subsections and subheadings. It begins with a motivating example (subheading), followed by the formal definition of PSD kernels without a subheading […]
Thank you for pointing this out! We will add a subheading to the KLE definition and further add a subsection to highlight “Semantic Kernels and KLE” in order to improve the structure of Section 3.
> In Section 3.1, two explicit kernels ($K_{heat}$ and $K_{Matérn}$) are introduced. In Section 5, the authors then propose to use $K_{heat}$ and $K_{full}$, with $K_{full}$ being a weighted sum of $K_{heat}$ and $K_{SE}$. The previously proposed $K_{Matérn}$ is no longer considered in the main experiments, while $K_{SE}$ has not been introduced before. It only becomes clear in Section B of the Appendix that $K_{SE}$ is equal to Semantic Entropy, which is referred to as $SE$ in Section 5. Also, the importance of the weighting factor for $K_{full}$ is not discussed. In general, the criteria for choosing between kernels are unclear (and not all kernels consistently outperform the baselines across datasets).
Thank you for bringing it to our attention! $K_{\operatorname{Matern}}$ was considered in the main experiments and showed similar results to $K_{\operatorname{heat}}$ (see Fig. 4). Thank you for pointing out the inconsistencies regarding $K_{\operatorname{SE}}$; we will include its definition in the main text. We chose a weight factor using the validation set, and since the results with the validation set versus the default parameters are not significantly different, it is reasonable to use the default value (0.5) in practice or choose it as a hyperparameter with a validation set. It appears that all kernels outperform SE, with the best results observed using the vanilla $K_{\operatorname{heat}}$ (see Fig. 4). $K_{\operatorname{heat}}$ consistently shows strong performance and can be chosen as a default choice. We will add more details on this to avoid any confusion.
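As an illustrative sketch of these kernels (the toy graph, the diffusion time $t$, and the diagonal stand-in for $K_{SE}$ are our assumptions, not the paper's exact construction), a unit-trace heat kernel and a weighted combination can be formed as:

```python
import numpy as np
from scipy.linalg import expm

def heat_kernel(W, t=0.3):
    """Unit-trace heat kernel exp(-t L) from a symmetric similarity matrix W."""
    L = np.diag(W.sum(axis=1)) - W  # (unnormalized) graph Laplacian
    K = expm(-t * L)                # symmetric positive definite for t >= 0
    return K / np.trace(K)          # normalize to unit trace

# Toy semantic graph over 3 response clusters: clusters 0 and 1 are similar.
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
K_heat = heat_kernel(W)

# Illustrative K_full: a weighted sum with a diagonal SE-style kernel;
# the weight 0.5 mirrors the default value mentioned in the rebuttal.
K_se = np.eye(3) / 3
K_full = 0.5 * K_heat + 0.5 * K_se
```

Since both summands are PSD with unit trace, the convex combination $K_{full}$ is again a valid unit-trace PSD kernel for any weight in $[0,1]$.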
**Questions**
>Is the rather small model Llama 3 8B Instruct capable of evaluating the correctness of an output sequence? Have you considered other (statistics-based) metrics to evaluate the correctness?
We evaluated the Llama-2-70B-chat results using GPT-4 on TriviaQA and found the accuracy assessment to be consistent with Llama-3-8B in 95% of cases. Additionally, we compared Llama-3-8B with human annotations of accuracy and found agreement in 90% of cases. The source code for the experiments is available, allowing users to re-run them with GPT-4 if their budget permits. However, we chose to use an open-source model to support researchers with limited budgets. Overall, Llama-3-8B appears well-suited for this task, accessible, and easy to use even with limited GPU access. We will expand on this in the next revision!
> Have the authors considered utilizing a regression model that directly assigns a single value to the semantic similarity instead of unintuitively having to aggregate the three classes of the NLI model?
We considered several ideas on how semantic kernels can be enhanced, including utilizing a regression model. However, in this work, we chose to focus on building upon existing approaches, and previously, NLI model outputs were used directly. We also employed confidences from the NLI model as graph weights but did not observe an improvement ($K^{DB}_{*}$, Fig. 4). We leave the improvement of the similarity measures for future work and briefly discuss these potential improvements and their limitations in LL: 354-356. We will expand on this in the next revision!
**Concluding remarks.**
We would be grateful if you could let us know whether our explanations have addressed your concerns. Please let us know if you have any other questions or concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. Addressing the mentioned points in the final version of the paper will indeed improve its clarity and overall quality. I have no further questions and am updating my score. | Summary: In this work, the authors propose Kernel Language Entropy, which shares a similar concept to semantic uncertainty but additionally considers semantic similarity. Based on this proposed theory, they design graph kernels and weight functions to estimate LLM uncertainty. Furthermore, they demonstrate that their method generalizes semantic entropy. The empirical results confirm the effectiveness of the proposed method across various tasks and LLMs.
Strengths: Originality: This paper extends semantic entropy to Kernel Language Entropy, which additionally takes into account the semantic similarity between clusters. The idea is well-motivated.
Quality: This paper provides detailed theoretical analysis and designs several variants. The experimental results demonstrate the effectiveness of these designs. The experiments are extensive.
Clarity: This paper is well-structured.
Significance: This paper focuses on estimating LLM uncertainty, which is significant for LLM applications, as they often make mistakes in their responses. The techniques proposed in this work could be beneficial in addressing this issue.
Weaknesses: 1. KLE requires iterative sampling from the LLM, which is computationally costly. This leads to delayed responses and limits its applicability.
2. It seems that this method can only estimate the confidence regarding whether the LM will correctly answer the query, but it cannot predict the confidence for a given answer. For example, in scenarios like ranking candidate answers, we can use P(True) or PE to estimate answer confidence, which KLE cannot do.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Which sample do you choose as the final answer among multiple samples, the answer in the biggest cluster or just a random one? I did not find this detail (perhaps I missed it), but it is important.
2. Since the answer correctness is determined by an 8b model, I am curious about how reliable the predicted label is. From my previous experience, even GPT-3.5 is not capable of reliably estimating the correctness of model responses, especially when the responses are phrases. Tian et al. [1] also noted a similar issue, as seen in their Appendix C.
[1]. Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback.
3. Can this method be applied to long-form responses? Considering that long-form responses typically consist of multiple claims.
4. Missing the following related work:
Language Models (Mostly) Know What They Know
LitCab: Lightweight Language Model Calibration over Short- and Long-form Responses
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge the issue of computational cost. Investigating ways to elicit confidence from the LLM itself could be a possible direction for addressing this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Raee,
Thank you for your positive evaluation of our work and for highlighting its originality, quality, clarity, and significance. We hope to address your concerns and questions in our response below.
**Weaknesses**
>KLE requires iterative sampling from the LLM, which is computationally costly. This leads to delayed responses and limits its applicability.
Our method, like other highly effective methods such as semantic entropy (SE) and discrete semantic entropy (DSE), requires sampling multiple answers. One way to mitigate this is to generate responses in parallel, which avoids the delay but uses more resources. In Fig. 1 of the rebuttal PDF, we include an ablation study on the number of samples for NQ and BioASQ. We briefly discuss this limitation in LL: 351-352.
>It seems that this method can only estimate the confidence regarding whether the LM will correctly answer the query, but it cannot predict the confidence for a given answer. For example, in scenarios like ranking candidate answers, we can use P(True) or PE to estimate answer confidence, which KLE cannot do.
Yes, that is correct! Methods like SE and KLE estimate the predictive semantic entropy of a model, which is different from estimating the confidence. Estimating uncertainty as predictive entropy is also an important problem for many applications (e.g., classification with rejection), and was used extensively in Bayesian deep learning. We will discuss these considerations in the limitations section.
>Which sample do you choose as the final answer among multiple samples, the answer in the biggest cluster or just a random one? I did not find this detail (perhaps I missed it), but it is important.
Following prior literature [1], we select a low temperature sample (T=0.1) in our experiments to quantify accuracy (LL: 282-283).
> Since the answer correctness is determined by an 8b model, I am curious about how reliable the predicted label is. From my previous experience, even GPT-3.5 is not capable of reliably estimating the correctness of model responses, especially when the responses are phrases. Tian et al. [1] also noted a similar issue, as seen in their Appendix C.
We evaluated the Llama-2-70B-chat results using GPT-4 on TriviaQA and found the accuracy assessment to be consistent with Llama-3-8B in 95% of cases. Additionally, we compared Llama-3-8B with human annotations of accuracy and found agreement in 90% of cases. The source code for the experiments is available, allowing users to re-run them with GPT-4 if their budget permits. However, we used an open-source model to support researchers with limited budgets. Overall, Llama-3-8B appears well-suited for this task, accessible, and easy to use even with limited GPU access.
>Can this method be applied to long-form responses? Considering that long-form responses typically consist of multiple claims.
Yes! As a matter of fact, our method particularly excels when working with longer answers. In our experimental setup, as illustrated by the examples in Appendix (Fig. C.2), the generated responses include the main answer as well as additional contextual content. For example, instead of simply answering “Laplace,” our LLMs might respond with “Laplace, a French scientist who studied…” By building a semantic kernel, we capture these fine-grained similarities and estimate uncertainty more effectively. In contrast, SE aims to assess the equivalence between answers, and including additional information often leads to worse performance (each answer tends to have its own cluster). This is demonstrated in Fig. C.2 and discussed in the last paragraph of Appendix C.
> Missing the following related work …
Thank you! We will add these references.
**Concluding remarks**
We would be grateful if you could let us know whether our explanations have addressed your concerns. Please let us know if you have any other questions or concerns.
**References**
[1] Kuhn, et al. (2023). Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I will keep my positive score. | Summary: This paper is highly motivated by and heavily draws from Kuhn et al. (ICLR, 2023) “Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation.” In Kuhn’s original paper they propose an unsupervised way to calculate the semantic uncertainty of LLMs by (1) generating a set of outputs from an LLM, (2) clustering these outputs by semantic equivalence via an NLI model, and (3) plugging in these clusters to an equation for “semantic entropy.”
This paper proposes an extension of the strict clusters of Kuhn et al. by creating a “semantic graph” and calculating a “graph kernel” for the uncertainty metric. Thus, the authors attempt to measure the semantic entropy *between* clusters rather than just *within* clusters (the latter being the approach of the previous work, Kuhn et al. (2023)).
The authors provide a lot of math and theory trying to justify that their generalization from Kuhn et al. is meaningful. However, the empirical experiments show modest to no performance gains from their generalization on real-world QA datasets and LLMs.
Strengths: 1. The authors have crafted a very extensive empirical set-up with 12 different LLMs on five datasets. They also get 10 LLM outputs per input and also obtain confidence intervals over 1000 bootstrap resamples (line 286-287). They combine this with five different baseline models for comparison to their proposed methods. I commend the authors on this extensive and painstaking experimental set-up.
2. The authors seem very well-versed in the previous literature and identify a clear gap in Kuhn et al.: that the “hard clustering” of Kuhn et al. misses what could be a softer clustering of semantic similar (which the authors tackle by constructing a semantic graph with weights between clusters).
Weaknesses: 1. **The claims are not backed by strong enough evidence.**
On line 59, the authors claim their approach achieves SoTA results. However, the empirical results (Table 1) are little to modest gains over baselines. Additionally, the bolding in Table 1 is misleading. Results should only be bolded if the method’s confidence interval is non-overlapping the confidence intervals of other methods (which they are not). For example, in Table 1 for AUROC on BioASQ, $0.88 \pm 0.03$ for KLE (K_FULL) is overlapping with P(True) at $0.86 \pm 0.03$. Likewise for SVAMP AUROC with $0.77 \pm 0.02$ for KLE (K Heat) and ER at $0.75 \pm 0.02$. This also makes me skeptical of how the authors are calculating the “win rates” for the other figures as well.
2. Additionally the authors do not investigate the **error/accuracy of intermediate models** that the experimental set-up relies upon.
A key part of the authors pipeline is using DeBERTa-Large-MNLI to predict whether LLM outputs entail one another and create their semantic graph (line 262). However, DeBERTa-Large-MNLI is also an imperfect model. What was the accuracy of this model on the domains/datasets in the authors’ empirical pipeline? Are there ways to propagate uncertainty from this intermediate NLI model downstream to the final uncertainty calculations of other LLM’s outputs?
3. Additionally, see the questions below.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In the abstract, you motivate this work by saying “by detecting factually incorrect model responses, commonly called hallucinations.” Yet, you never actually show that you detect hallucinations (which require world knowledge that is external and more complex than the “semantic uncertainty” you’re actually targeting). How do you justify this disconnect?
2. On line 25 you cite two works that you use to justify “As LLM predictions tend to be well-calibrated.” However there is an *enormous* body of literature that shows the opposite. I would recommend hedging on this statement or providing citations of the opposite (non-calibration) findings.
3. Figure 1 lacks some details and motivation. In this example, I would not expect the named-entity outputs (e.g., “Laplace” or “Kolmogorov and Laplace”) to have any other lexical variants with similar semantic meaning because these are named entities. In this figure, what do the little boxes next to “Semantic Kernels” represent? I would recommend explaining this more (e.g., I think you’re implying that blue means more similarity between particular clusters?)
4. In Figure 3’s caption, please explain why only “values larger than or equal to 0.62” correspond to a p-value less than 0.05.
5. I would recommend explaining how you are calculating AUPRC and why it is important. I know Kuhn et al. also use this metric but you do not make clear what this metric is in your standalone work. From re-reading Kuhn et al., I took away that they calculated an “entropy score” to predict whether a model’s answer to a question is correct. They vary the threshold of the entropy score for correctness prediction to get AUROC.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer S5q9,
Thank you for a thoughtful and constructive review. We are pleased to hear that you found the experimental setting and the research problem in our work interesting. We hope to address your concerns in our reply below.
**Weaknesses**
> […] the empirical results (Table 1) are little to modest gains
In Tab. 1, our method is the best or among the best for all datasets and both models, while the second-best method is different for different setups (ER, P(True), or SE). **Statistical significance is achieved primarily by repeating the experiment with many different models and datasets.** Tab. 1 shows results for only the two largest models out of 12. **See Fig. 3 for the summary of all experiments.** Statistical significance was calculated by the sign test [1] across the 60 experiments, and confirmed in Fig. 3 that the differences between our method and any other method are always statistically significant (LL: 285-290). To further explicitly demonstrate the size of the improvement, we show the relative gains in AUROC and AUARC in Tab. 1 in the rebuttal PDF.
> the bolding in Table 1 is misleading
It is common to bold the results when the average is the best, e.g. https://arxiv.org/pdf/2106.10934 or https://arxiv.org/pdf/2011.13456, but we are happy to add a comment to avoid confusion.
>[...] how the authors are calculating the “win rates” [...]
Win-rates represent the fraction of cases where one method outperforms another, based on a better mean value of the corresponding metric (LL: 285-290).
> authors do not investigate the error/accuracy of intermediate models [...]
- Models for generating answers: We report their accuracy in Appendix D (Fig. D2).
- NLI models: The performance of NLI models in the same setting is analyzed by [2]. We will add a reference and summarize their analysis.
- Models for checking the answers: We compared the performance of Llama-3-8B and GPT-4 for TriviaQA by Llama-2-70B-chat and observed 95% agreement. We additionally measured the performance compared to humans and observed 90% agreement in 100 cases.
> DeBERTa-Large-MNLI analysis and uncertainty propagation
The uncertainty from the NLI model can be propagated to the final KLE model. In our experiments, we have used kernels where NLI confidences directly form weights (shown in Fig. 4 under $K^{DB}_{*}$). Moreover, KLE can combine multiple NLI models via kernel composition. We leave this and other alternatives to constructing better semantic kernels for future work (LL: 354-356). The accuracy of NLI in this setting is assessed by [2], which we will comment on.
**Questions**
> you motivate this work by saying “by detecting factually incorrect model responses, commonly called hallucinations.” Yet, you never actually show that you detect hallucinations [...]
We use predicted uncertainty to detect factually incorrect responses in our experiments, following the evaluation method used in [3] and other studies. AUROC and AUARC measure the effectiveness of uncertainty in detecting hallucinations (see LL: 273-280 and [2]).
> you cite two works that you use to justify “As LLM predictions tend to be well-calibrated.” [...] I would recommend hedging on this statement or providing citations of the opposite
Thanks! We will discuss these papers. We will further clarify the differences in observations regarding calibration of LLMs (e.g., base vs. instruction-tuned models).
Importantly, note that KLE does not rely on calibration and performs well when an LLM is not well-calibrated (see Fig. D5).
>Fig. 1 […] I would not expect the named entity outputs [...] to have any other lexical variants [...] what does the little boxes next to “Semantic Kernels” represent? [...]
Thank you for pointing this out! The 3x3 matrices represent kernels over clusters, where colors show kernel values. Non-diagonal elements indicate similarities, and diagonal elements represent $p(C_i | x)$, similar to theoretical proofs. We'll add a more detailed explanation.
The answer "Laplace" can appear in different forms, like "Pierre-Simon Laplace" or “Laplace, a French scholar.” Answers in each cluster may or may not vary lexically.
Fig. 1 shows that even with identical distributions over semantic clusters, the kernels differ. The KLE method offers better uncertainty quantification than SE (right side).
>In Fig. 3 [...] why only “values larger than or equal to 0.62” correspond to a p-value less than 0.05.
The sign test shows that method A outperforms method B if the value is ≥ 0.62, and that method B outperforms method A if the value is ≤ 0.38. Values between 0.38 and 0.62 are not statistically significant. We will clarify this in the next revision.
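The 0.62 cutoff can be checked with a short computation, sketched here under the assumption of a one-sided exact sign (binomial) test over the 60 paired comparisons: the smallest number of wins with p < 0.05 is 37, and 37/60 ≈ 0.62.

```python
from scipy.stats import binomtest

n_experiments = 60

# One-sided sign test under the null of a fair coin: find the smallest
# number of wins out of 60 paired comparisons that yields p < 0.05.
for wins in range(30, n_experiments + 1):
    p = binomtest(wins, n_experiments, p=0.5, alternative="greater").pvalue
    if p < 0.05:
        threshold = wins
        break

win_rate = threshold / n_experiments  # the cutoff fraction
```

The symmetric lower cutoff 1 - 0.62 = 0.38 follows by the same argument with the roles of the two methods swapped.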
>explain how you are calculating AUPRC and why it is important. […] they calculated an “entropy score” to predict whether a model’s answer to a question is correct
Similar to [2, 3], we use the entropy score to predict the correctness of the generated responses. The AUROC and AUARC measure the ability of uncertainty to detect hallucinations (LL: 273-280).
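For concreteness, the evaluation protocol described here (scoring how well the uncertainty value separates incorrect from correct responses) can be sketched with toy data; the labels and scores below are fabricated for illustration only:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation data: 1 = incorrect (hallucinated) answer.
# A good uncertainty score assigns higher values to incorrect answers.
is_incorrect = np.array([0, 0, 1, 1, 0, 1])
uncertainty = np.array([0.2, 0.4, 0.9, 0.3, 0.1, 0.8])

# AUROC: probability that a randomly chosen incorrect answer receives a
# higher uncertainty score than a randomly chosen correct one.
auroc = roc_auc_score(is_incorrect, uncertainty)
```

Here 8 of the 9 incorrect/correct pairs are ordered correctly (only the 0.3 vs. 0.4 pair is inverted), so the AUROC is 8/9. Varying a threshold on the entropy score traces out the ROC curve that this area summarizes.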
**Concluding remarks**
We appreciate your comments on our empirical results. We hope our answers have fully addressed your concerns.
We would like to emphasize that our method is both empirically effective and theoretically sound. It holds methodological value and opens the potential for developing semantic kernels for other LLM outputs, such as structured data or mathematical proofs, thereby broadening the scope of uncertainty quantification.
We would be grateful if you could let us know whether our explanations have addressed your concerns. Please let us know if you have any other questions or concerns.
**References**
[1] Dixon, et al. (1946). The Statistical Sign Test.
[2] Farquhar, et al. (2024). Detecting Hallucinations in Large Language Models Using Semantic Entropy.
[3] Kuhn, et al. (2023). Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation.
---
Rebuttal Comment 1.1:
Title: Response
Comment: > Statistical significance is achieved primarily by repeating the experiment with many different models and datasets
"Statistical significance" has a precise, technical meaning as in [Berg-Kirkpatrick 2012](https://aclanthology.org/D12-1091.pdf).
Per my comment "Results should only be bolded if the method’s confidence interval is non-overlapping the confidence intervals of other methods (which they are not)", in each of the 5 datasets, how many times did your method have better point estimates and non-overlapping CIs with the baselines' for *both* metrics?
---
Rebuttal 2:
Comment: Dear reviewer S5q9, Thank you for your comment and for engaging in the discussion!
>Per my comment "Results should only be bolded if the method’s confidence interval is non-overlapping the confidence intervals of other methods (which they are not)", in each of the 5 datasets, how many times did your method have better point estimates and non-overlapping CIs with the baselines' for both metrics?
We observe that the standard error from bootstrap for each evaluation is consistently large across all method-dataset pairs, and more related to the experimental setup than to differences between the methods. Precisely because the CIs are overlapping in individual cases, we repeated the experiment for 60 model-dataset pairs. Our experimental setup follows a recent paper (https://www.nature.com/articles/s41586-024-07421-0, [2]), where the authors specifically emphasize:
“We report the raw average score across held-out evaluation datasets without standard error because the distributional characteristics are more a property of the models and datasets selected than the method. **Consistency of relative results across different datasets is a stronger indicator of variation in this case.**”
In addition to evaluating such consistency with the binomial test (Fig. 3 and 4, main text), we have included the relative mean gain in the reported metrics per dataset in the rebuttal PDF (Tab. 1).
Finally, we report a comparison in each of the 5 datasets separately, when _both metrics are considered simultaneously and the mean estimate from the bootstrap_ is used for comparison. The table shows the numbers of wins, ties (one metric is better, another is worse), and losses of our method compared to the two strongest baselines (the metrics were strongly correlated and therefore highly consistent with each other). Each cell contains (#wins / #ties / #losses).
| | SQUAD | SVAMP | NQ | TriviaQA | BioASQ | Total wins |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- |
| SE | 5/2/5 | 5/1/6 | 10/1/1 | 11/1/0 | 9/0/3 | 42.5 (p ≤ 0.001) |
| P(True) | 10/1/1 | 9/1/2 | 9/2/1 | 10/1/1 | 6/2/4 | 47.5 (p < 0.00001) |
The p-value was calculated using the sign test by splitting the ties between wins and losses [1].
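For concreteness, the p-values above can be reproduced with a one-sided exact binomial (sign) test. This is a sketch of the calculation: ties are split evenly, and the tie-adjusted win count is rounded up before taking the tail probability (the paper's exact rounding may differ):

```python
from math import comb, ceil

def sign_test_p(wins, ties, losses):
    """One-sided exact sign test under Binomial(n, 1/2), splitting ties
    evenly between wins and losses and rounding the win count up."""
    n = wins + ties + losses
    k = ceil(wins + ties / 2)           # e.g., 42.5 -> 43
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Totals from the table above (5 datasets x 12 comparisons = 60 each):
p_se = sign_test_p(40, 5, 15)      # vs SE:      42.5 wins, p <= 0.001
p_ptrue = sign_test_p(44, 7, 9)    # vs P(True): 47.5 wins, p < 0.00001
```

The same routine is consistent with the Fig. 3 threshold: with 60 comparisons, a win fraction of 37/60 ≈ 0.62 is the smallest that yields p < 0.05.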
In summary, _while the results for a single dataset may not always be conclusive, by repeating the experiment over 60 experimental scenarios we ultimately observe very strong evidence that our method performs best overall_.
We hope that we have addressed your remaining concern!
## References
See the references from our response above.
---
Rebuttal Comment 2.1:
Title: Response
Comment: Thanks for the detailed response. I changed my score to a 4. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful reviews, valuable suggestions, and for taking the time to read our paper!
We particularly appreciate the positive recognition of many aspects of our work, including its novelty (tEe6, DkpB, Raee), significance (tEe6, Raee), empirical comparison and experimental setup (tEe6, S5q9), and theoretical results and research problem (DkpB, S5q9).
We hope we have addressed all questions and concerns raised by the reviewers and are happy to discuss any remaining concerns or questions during the rebuttal.
**Main Changes:**
- Improving the clarity of the text with minor fixes and additional explanations.
- Including additional results that explicitly show the relative improvement in AUROC and AUARC compared to other models across 60 experimental scenarios (see Tab. 1 in the attached PDF). (S5q9)
- Investigating the impact of the sample size with a new ablation study (see Fig. 1 in the attached PDF). (tEe6, DkpB)
- Confirming the validity of using Llama-3-8B for accuracy checking by comparing it with human evaluation and GPT-4. (Raee, tEe6, DkpB, S5q9)
Please, take a look at the attached PDF for visualizations and tables.
We would be grateful if you could let us know whether our explanations have satisfactorily addressed your concerns. We are also open to discussing any other questions you may have.
Pdf: /pdf/848c71f63e14000088e787077e92f68bf32e4c5a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Accurate and Steady Inertial Pose Estimation through Sequence Structure Learning and Modulation | Accept (poster) | Summary: This paper addresses the human pose estimation task using signals from 6 IMUs on the body. Because there is spatial correspondence among the multiple IMUs across the body and temporal correspondence within the signal, this paper proposes to model the spatial relation of the 6 devices and the temporal relation in the signal using learnable weight matrices. The weight matrix is initialized from the distribution in the dataset, and the transformer model first models the spatial information and then the temporal information. The proposed model achieves SOTA accuracy compared to previous methods.
Strengths: * This paper identifies a key characteristic of IMU signals: the structural relations in the spatial and temporal dimensions, especially between devices.
* The experiment accuracy is clearly better than previous methods.
* This paper is well written and the structure is clear.
Weaknesses: 1. The authors attribute the performance improvement to 1) the spatio-temporal framework, 2) SSM-S, and 3) SSM-T, but they only conduct ablation studies for 2) and 3). From Table 3, we can see that 1), used as the baseline, already outperforms all previous methods significantly. Readers may expect more analysis and ablation experiments on why 1) alone already achieves SOTA accuracy.
2. Figure 3 shows that there is an obvious relationship between the left and right hands and between the left and right legs. This data distribution is specific to particular datasets. Although good results have been achieved on the current datasets, if such a distribution pattern does not exist in a new dataset, using the same parameters to model multiple devices may not yield good results.
Technical Quality: 3
Clarity: 4
Questions for Authors: Figures 3 and 7 show the explicitly set weight matrices, which exhibit obvious patterns in the spatial and temporal dimensions. Do you observe any explainable patterns in the learned weight matrix (EIHS)?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: This paper has listed some limitations and there is no obvious negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Why does the baseline already outperform SOTA?**
Thank you for your comments! We hope to clarify that our baseline is a "strong" one that applies the powerful spatial-temporal framework [1-5] to the inertial pose estimation task, and thus outperforms SOTA. Specifically, as elaborated in [1-5], the key insights of the spatial-temporal framework in motion-related works are: i) spatial encoders can model local relationships between human body joints; ii) temporal encoders can capture the global dependencies across frames in the entire sequence. Although, to our knowledge, we are the first to apply the spatial-temporal framework to inertial pose estimation, we think it is **somewhat straightforward** and suggest that it should only be **used as a baseline** upon which to build. Therefore, our contributions (SSMs) are orthogonal to it, and we think experiments on it may deviate a bit from the main focus of our work.
Nevertheless, we provide ablation study results of our spatial-temporal framework on the TotalCapture dataset as follows.
|$\qquad\qquad\qquad$ Method | Ang Err | Jitter | $\quad \tau$ |
| :----: | :----: | :----: | :----: |
| Spatial Transformer Encoder **only** | 13.61 | 0.56 | 23.83 |
| Temporal Transformer Encoder **only** | 10.20 | 0.53 | 17.33 |
| Spatial-Temporal Transformer Encoder | 8.82 | 0.48 | 14.25 |
It can be observed that both the spatial and temporal transformer encoders play crucial roles in reducing angular error and jitter, especially the temporal encoder, demonstrating their effectiveness in inertial pose estimation. We will include a more detailed explanation in our revision.
**Q2: Generalization of $S_{E-S}$ (Fig. 3) across datasets**
Great question! We hope to clarify that we did not rely solely on $S_{E-S}$ but introduced a learnable matrix $P_{S}$ to model the differences between specific datasets and $S_{E-S}$ constructed using the AMASS dataset (Eq. 4). The reason we used AMASS to construct $S_{E-S}$ is that it is a large-scale human motion dataset consisting of a variety of human motion data, which can be used to construct a representative $S_{E-S}$ as a useful starting point/initialization for learning.
In addition, we provided the method (Sec. 3.4, Eq.6, 7) to construct $S_{E-S}$ so that one can also compute a dedicated $S_{E-S}$ for each new dataset used easily.
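The details of Eq. (6) and (7) are not reproduced here, but the general idea of deriving a spatial structure prior from a motion dataset can be sketched as computing pairwise correlations between per-sensor motion signals. This is an illustrative sketch with synthetic data, not the paper's exact construction:

```python
import numpy as np

def spatial_structure_matrix(signals):
    """Illustrative construction of a spatial structure prior.

    signals: array of shape (num_sensors, T) holding one motion channel
    per IMU placement over T frames. Returns the num_sensors x num_sensors
    Pearson correlation matrix, analogous in spirit to the S_{E-S} prior
    computed from AMASS (the paper's Eq. 6-7 may normalize differently).
    """
    return np.corrcoef(signals)

# Toy data: sensors 0 and 1 move in near-perfect sync (e.g., the two legs
# while walking); sensor 2 is independent noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
legs = np.sin(t)
signals = np.stack([legs + 0.05 * rng.standard_normal(500),
                    legs + 0.05 * rng.standard_normal(500),
                    rng.standard_normal(500)])
S = spatial_structure_matrix(signals)   # S[0, 1] is close to 1
```

On a new dataset, the same routine applied to its recordings would yield a dedicated prior, which is the portability argument made above.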
**Q3: Are there any explainable patterns observed in the learned weight matrix (EIHS)?**
Yes, there are. To facilitate explanation, we visualized the spatial structure matrix $S_{E-S}$ (before training), the learnable matrix $P_{S}$ and the final spatial structure matrix $S_{EI-S}$ (please refer to .pdf file). It can be observed that:
- **The overall pattern of the structure matrix remains the same** before and after training ($S_{E-S}$ and $S_{EI-S}$), i.e., the movements between the two hands and between the two legs still exhibit high correlation, and the movements of the head and the root (spine) are still negatively correlated. This demonstrates the effectiveness of our $S_{E-S}$ as an initialization/prior.
- **The learnable matrix $P_{S}$ adds small offsets** to the spatial structure matrix, i.e., it slightly suppresses the correlations between the two hands, the two legs, and head vs. root. We attribute this to the different motion distributions among datasets: i) in AMASS, **daily actions** (e.g., walking, jogging, running, sitting and stretching) are dominant, and the movements of both hands and legs show extremely high consistency; ii) in DIP-IMU, although daily actions are also the majority, there are a large number of **single-hand** and **single-leg movements**, such as single-hand raising, grasping and swinging, and single-leg lifting, which weaken the movement consistency of both hands and legs; iii) in Andy and CIP, there are numerous **industry-oriented activities**, which are very different from daily movements, resulting in a relatively large adjustment range for the learnable matrix $P_{S}$. This demonstrates the effectiveness of our $P_{S}$ in adapting the structure matrix to different datasets while maintaining high generalization ability.
We will include this in the revision as it will further enhance our contribution. Thank you for your suggestion! | Summary: In this paper, the authors study the inertial pose estimation problem and propose to add Sequence Structure Modules (SSM) to the spatial-temporal transformer architecture. The proposed SSM carries prior structural information of both spatial and temporal domain, and is shown to outperform multiple baselines and a plain spatial-temporal transformer.
Strengths: 1. The proposed method achieves significant improvement compared to the baselines across multiple datasets.
Weaknesses: 1. Though the proposed method achieves great performance, it is unclear why the proposed SSM structure works. The proposed SSM modules are essentially heuristically designed attention modules. Self-attention modules should be able to learn those structural patterns as well.
2. The motivation for this approach is to add inductive bias to the original transformer structure. However, this paper lacks comparisons with other NN structures that contain such inductive bias by their design. For example, one could use GNNs as the spatial encoder and CNNs as the temporal encoder.
3. The novelty of this paper is somewhat limited.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. It is not clear whether the spatial structural information calculated by (6) and (7) really works. According to setting 4 in Table 4, using ES for SSM-S does not show significant improvement. The best performance is achieved with EIHS for SSM-S (setting 5). However, since it contains learnable parameters, the final attention matrix might look very different from the pre-computed $E_{E-S}$ shown in Figure 3. Maybe it is better to show the learned spatial matrix in the paper as well.
2. It is also not clear how the SSM-T module works. The authors mention that the aim of the SSM-T module is to enforce a high correlation between adjacent frames. However, since SSM-T is only applied to the inputs of the temporal encoder, it is not guaranteed that such constraint still exists within the temporal encoder. It would be great if the authors could provide some more explanations.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Why does SSM structure work && The difference between attention and SSM.**
We believe there is a misunderstanding. In short, our SSM is significantly different from attention modules as **its values are independent of the input tokens**, and it can be used **with or without heuristics**, i.e., its ES/IS/EIHS versions (Sec. 3.2). Specifically, as discussed in Sec. 3.1, the native attention module does not encode any inductive bias about sequence structures, but instead focuses on **the content of the input**. Every element $\alpha_{(i,j)}$ in the attention map $\alpha$ is calculated as the product of the $i$-th query ($Q$) and the $j$-th key ($K$), thus representing the relationship between individual tokens rather than **the structure of the input sequence** as a whole.
Intuitively, our SSM remains the same across all input sequences, thus capturing the structural patterns shared by them; the attention map, on the other hand, changes with each input sequence and therefore cannot learn such patterns.
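The distinction can be made concrete with a minimal numpy sketch (illustrative names only, not the paper's implementation; the additive prior-plus-offset combination follows the spirit of Eq. 4): the SSM modulates token features with an input-independent matrix, whereas attention weights are recomputed from every input.

```python
import numpy as np

def ssm_modulate(X, S_prior, P_learn):
    """Sequence-structure modulation: (S_prior + P_learn) is shared
    across ALL input sequences -- it encodes structure, not content."""
    return (S_prior + P_learn) @ X           # (N, N) @ (N, d) -> (N, d)

def attention_map(X, Wq, Wk):
    """Plain self-attention weights: recomputed per input, so they
    reflect token content rather than a shared sequence structure."""
    Q, K = X @ Wq, X @ Wk
    logits = Q @ K.T / np.sqrt(K.shape[1])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # rows sum to 1

rng = np.random.default_rng(0)
N, d = 6, 8                                   # 6 IMU tokens, feature dim 8
S_prior = np.eye(N)                           # stand-in for S_{E-S}
P_learn = 0.01 * rng.standard_normal((N, N))  # stand-in for P_S
X1, X2 = rng.standard_normal((2, N, d))       # two different input sequences
Wq, Wk = rng.standard_normal((2, d, d))

# The structure matrix used by ssm_modulate is identical for X1 and X2,
# while attention_map(X1) and attention_map(X2) differ.
```

This is exactly the point made above: the attention map varies per instance, so it cannot by itself represent a pattern shared by all fixed-length sequences.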
**Q2: More Comparisons with other NN structures.**
Thank you for your suggestion. As shown in the table below, we construct a spatio-temporal framework using GCN layers as the spatial encoder and Conv1d layers as the temporal encoder.
|$\qquad\qquad\qquad$ Method | Ang Err | Jitter | $\quad\tau$ |
| :----: | :----: | :----: | :----: |
| GCN + Conv1d | 14.31 | 0.28 | 18.93 |
| Transformer + Transformer(ours) | **6.82** | **0.09** | **7.46** |
The results show that our method significantly outperforms the GCN + Conv1d implementation. We attribute this improvement to our new approach of injecting different types of structural inductive bias directly into the transformer architecture, thus inheriting its advantages while addressing its drawbacks to achieve SOTA results.
**Q3: Clarifications on novelty**
As discussed in our response to Q1, our SSM is very novel and significantly different from attention modules. The values of the attention matrix depend on the input tokens and change with **individual input sequences**, while our SSM captures the structural information shared by **all input sequences** and remains the same across different instances, thereby **filling an important gap** in the native transformer architecture's handling of fixed-length sequences with rich structural information (e.g., inertial pose estimation).
**Q4: Does the final structure matrix $S_{EI-S}$ look very different from the pre-computed $S_{E-S}$?**
No, it doesn't, but the overall pattern **remains consistent**. Specifically, we visualized the spatial structure matrix $S_{E-S}$ (before training), the learnable matrix $P_{S}$ and the final spatial structure matrix $S_{EI-S}$ (please refer to .pdf file). It can be observed that:
- The overall pattern of the structure matrix **remains the same** before and after training ($S_{E-S}$ and $S_{EI-S}$), i.e., the movements between the two hands and between the two legs still exhibit high correlation, and the movements of the head and the root (spine) are still negatively correlated. This demonstrates the effectiveness of our $S_{E-S}$, calculated by Eq. (6) and (7), as an initialization/prior.
- The learnable matrix $P_{S}$ adds **small offsets** to the spatial structure matrix, i.e., it slightly suppresses the correlations between the two hands, the two legs, and head vs. root. We attribute this to the different motion distributions among datasets: **i)** in AMASS, **daily actions** (e.g., walking, jogging, running, sitting and stretching) are dominant, and the movements of both hands and legs show extremely high consistency; **ii)** in DIP-IMU, although daily actions are also the majority, there are a large number of **single-hand** and **single-leg movements**, such as single-hand raising, grasping and swinging, and single-leg lifting, which weaken the movement consistency of both hands and legs; **iii)** in Andy and CIP, there are numerous **industry-oriented activities**, which are very different from daily movements, resulting in a relatively large adjustment range for the learnable matrix $P_{S}$. This demonstrates the effectiveness of our $P_{S}$ in adapting the structure matrix to different datasets while maintaining high generalization ability, which accounts for the improvement between Settings 4 and 5 in Table 4.
In conclusion, the role of the learnable matrix $P_{S}$ is to make **slight adjustments** to the motion distribution sampled from AMASS, and the overall pattern of the spatial structure matrix **remains consistent** before and after training. We will include this in the revision as it will further enhance our contribution. Thank you for your suggestion!
**Q5: Explanation on how the SSM-T module works and why applied to only the inputs of the temporal encoder.**
Thanks for such an insightful question! Yes, the motivation of our SSM-T(ES) is to enforce a high correlation between adjacent frames. However, enforcing such a correlation may sacrifice the transformer's ability to learn long-term dependencies, thus forming a trade-off.
As the table below shows, empirically, we observed that applying the proposed SSM-T(ES) module only to the input of the temporal encoder (consisting of 4 transformer layers) works best (smallest $\tau$), and we use it as our final choice. In other words, we consider which layers to apply it to as a hyperparameter choice of our proposed modules, which does not affect our contributions. We will include more detailed discussions in our revision.
| $\qquad$Method | Ang Err | Jitter | $\enspace\tau$ |
| :----: | :----: | :----: | :----: |
| SSM-T x 1 (ours) | 6.82 | 0.09 | **7.46** |
| SSM-T x 2 | 7.41 | 0.06 | 7.87 |
| SSM-T x 3 | 7.79 | 0.05 | 8.19 |
| SSM-T x 4 | 7.78 | 0.04 | 8.09 |
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses. The answers are satisfactory and most of my concerns have been resolved. So I have increased my score from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt feedback! Please let us know if you have any additional questions, and we're more than happy to address them. | Summary: Existing transformers have shown great promise in modeling temporal data. However, in the field of inertial pose estimation from inertial measurement units (IMUs), the lengths of the time series are often fixed, a property that transformers can take advantage of. Thus, this paper proposes a Sequence Structure Module that explicitly accounts for fixed-length input and also factors in the spatial dependencies of the IMUs worn at various body placements.
This paper deals with an emerging paradigm for inertial pose estimation that relies on IMUs instead of video. The experiments showed that the proposed architecture can improve task performance on several metrics, including rotation errors, distance errors, and motion smoothness.
Strengths: 1. This paper improves IMU-based inertial pose estimation with fewer sensors, which is preferable in their real world applications
2. The Sequence Structure Module trick is well-motivated. The authors identified the unique aspects of inertial pose estimation: (1) fixed-length input sequences, (2) spatially rich data. The proposed module can incorporate the spatial and temporal structure within the data with different priors and optimisation techniques.
3. The demo provided was much appreciated. The pose estimation seems to be realistic and better than other benchmark techniques.
Weaknesses: 1. Given that the proposed sequence structure module works with data of fixed lengths and rich spatial information, I am curious how this module could also be applied with more or fewer sensors that make the structural correlation either stronger or weaker. It will help to understand the generalisability of the proposed module.
2. In your evaluations, how did you tune other models?
3. Your SSM-T for temporal dependency modelling seems to be very effective in bringing down the jitter from 0.48 to 0.09 with a pre-defined structure. Are there cases where SSM-T will benefit from training at all? If not, maybe there is no need for SSM-T after all, but some sort of weighted averaging with neighbouring timestamps?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can you show a few examples of what your structure matrices look like before and after training? It will help understand what has changed.
2. How will your model perform on different users with different physiques? In your demo, most users have similar body shapes?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The generalization of the module under more sensors**
Thank you for your suggestion! Although we focused on the more challenging "sparse" settings (6 IMUs) in our paper, showing additional results of applying our modules to more sensors could further demonstrate their generalization ability.
Specifically, we increase the number of IMUs from **6** to **10** (with the additional 4 IMUs placed on the left and right shoulders and thighs), and conduct experiments and ablation studies on the DIP-IMU dataset:
|$\quad$IMUs|Ang Err|Jitter|$\enspace\tau$|
|:----:|:----:|:----:|:----:|
|6(ours)|6.06|0.07|6.49|
|10(ours)|**4.40**|**0.03**|**4.53**|
**Ablation Study**
|$\hspace{4.5em}$IMUs|Ang Err|Jitter|$\enspace\tau$|
|:----:|:----:|:----:|:----:|
|10(ours)|**4.40**|**0.03**|**4.53**|
|10(w/o SSM-S)|5.69|0.03|5.86|
|10(w/o SSM-T)|4.56|0.13|5.19|
|10(w/o SSM-S, w/o SSM-T)|5.58|0.14|6.42|
It can be observed that, after adding 4 IMUs, our proposed method still works effectively (higher accuracy and lower jitter). Additionally, the ablation study results show that the roles of SSM-S and SSM-T **remain consistent with their performance when using 6 IMUs**. That is, SSM-S improves the accuracy of motion prediction, while SSM-T reduces jitter to enhance the coherence of the posture. Based on the above experimental results, it can be concluded that our proposed method maintains strong generalization ability as the number of sensors increases.
**Q2: Implementation details for other SOTA methods**
Thank you for your comments! For TransPose and PIP, since the official open-source repositories do not include any training-related code, we evaluated them on the four benchmarks using the provided models and weight files. For TIP and DynaIP, we retrained their models and evaluated them on the same four benchmarks with minimal modifications, following the guidance provided in the official code. We will include more details in our revision.
**Q3: Are there cases where SSM-T will benefit from training at all?**
Yes, there are. For example, as the table below shows, in the combination of "SSM-S(EIHS) && SSM-T(IS)", we can obtain results of $Ang Err=8.13$, $jitter=0.34$ and $\tau = 11.42$ with **trainable SSM-T** on TotalCapture dataset, outperforming the baseline model. Therefore, SSM-T could benefit from training, although for our specific task, SSM-T(ES) works the best.
|$\qquad\qquad$Method|Ang Err|Jitter|$\quad\tau$|
|:----:|:----:|:----:|:----:|
|Baseline|8.82|0.48|14.25|
|SSM-S(EIHS)&&SSM-T(IS)|**8.13**|**0.34**|**11.42**|
We will include more combinations in Table 4 in our revision.
**Q4: What do structure matrices look like before and after training?**
Thank you for your suggestion! We visualized the spatial structure matrix $S_{E-S}$ (before training), the learnable matrix $P_{S}$ and the final spatial structure matrix $S_{EI-S}$ (please refer to .pdf file). It can be observed that:
- **The overall pattern of the structure matrix remains the same** before and after training ($S_{E-S}$ and $S_{EI-S}$), i.e., the movements between the two hands and between the two legs still exhibit high correlation, and the movements of the head and the root (spine) are still negatively correlated. This demonstrates the effectiveness of our $S_{E-S}$ as an initialization/prior.
- **The learnable matrix $P_{S}$ adds small offsets** to the spatial structure matrix, i.e., it slightly suppresses the correlations between the two hands, the two legs, and head vs. root. We attribute this to the different motion distributions among datasets: i) in AMASS, **daily actions** (e.g., walking, jogging, running, sitting and stretching) are dominant, and the movements of both hands and legs show extremely high consistency; ii) in DIP-IMU, although daily actions are also the majority, there are a large number of **single-hand and single-leg movements**, such as single-hand raising, grasping and swinging, and single-leg lifting, which weaken the movement consistency of both hands and legs; iii) in Andy and CIP, there are numerous **industry-oriented activities**, which are very different from daily movements, resulting in a relatively large adjustment range for the learnable matrix $P_{S}$. This demonstrates the effectiveness of our $P_{S}$ in adapting the structure matrix to different datasets while maintaining high generalization ability.
We will include this discussion in our revision.
**Q5: The performance on different users with different physiques**
Thank you for your suggestion! The diversity of physiques among users is already represented in the DIP-IMU dataset, i.e., it contains body shape information for its 10 participants (e.g., mass, height). To demonstrate this more clearly, we computed the BMI values (BMI = Mass(kg) / Height(m)^2) for each individual, categorized them into three groups, and report our model's performance across these three categories as follows:
| Category | $\qquad \enspace$ BMI | $\quad\enspace$Subject ID | Number | Ang Err | Jitter | $\enspace\tau$ |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| C1 | BMI < 21.7 | S2/S6 | 2 | 7.33 | 0.08 | 7.94 |
| C2 | 21.7 <= BMI <= 24.9 | S1/S3/S5/S7/S8/S9 | 6 | 7.63 | 0.09 | 8.35 |
| C3 | BMI > 24.9 | S4/S10 | 2 | 8.07 | 0.09 | 8.83 |
| Average | / | / | 10 | 7.66 | 0.09 | 8.35 |
The experimental results demonstrate that our method is robust and performs well across users with different physiques. For reference, we also include the results for each individual as follows:
|ID|Ang Err|Jitter|$\enspace\tau$|Mass(kg)|Height(cm)|BMI|Category|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|S1|7.74|0.05|8.13|86|186|24.85|2|
|S2|7.47|0.05|7.85|65|178|20.51|1|
|S3|7.95|0.11|8.87|87|187|24.87|2|
|S4|7.63|0.11|8.51|78|170|26.98|3|
|S5|7.64|0.12|8.61|80|180|24.69|2|
|S6|7.20|0.11|8.03|58|172|19.60|1|
|S7|6.89|0.09|7.53|70|178|22.09|2|
|S8|7.13|0.09|7.80|80|180|24.69|2|
|S9|8.48|0.06|9.00|85|187|24.30|2|
|S10|8.51|0.07|9.12|87|181|26.55|3|
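As a quick sanity check, the BMI grouping above can be reproduced directly from the mass and height columns (an illustrative snippet; thresholds taken from the category table):

```python
def bmi_category(mass_kg, height_cm):
    """Assign the BMI category used in the tables above."""
    bmi = mass_kg / (height_cm / 100) ** 2   # BMI = Mass(kg) / Height(m)^2
    if bmi < 21.7:
        return 1                             # C1
    return 2 if bmi <= 24.9 else 3           # C2 / C3

# ID: (mass kg, height cm, category from the table)
subjects = {
    "S1": (86, 186, 2), "S2": (65, 178, 1), "S3": (87, 187, 2),
    "S4": (78, 170, 3), "S5": (80, 180, 2), "S6": (58, 172, 1),
    "S7": (70, 178, 2), "S8": (80, 180, 2), "S9": (85, 187, 2),
    "S10": (87, 181, 3),
}
assert all(bmi_category(m, h) == c for m, h, c in subjects.values())
```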
---
Rebuttal Comment 1.1:
Comment: Thank you very much for showing all the additional experiment results. I am glad to see that your model generalised to different sensor settings and subjects with varying physiques.
I do see reviewers 8LBr and eYY8 had good suggestions on how to improve the paper, including better articulating your motivation and clarifications on the module implementations. Having a quick glance, I think you have also addressed their concerns reasonably. I am happy to stay with my current rating.
If further disagreement persists, we shall chat among the reviewers ourselves in the next review phase to reach a consensus.
---
Reply to Comment 1.1.1:
Comment: We appreciate your considerate feedback and valuable insights! We will integrate the experimental results and analysis to enhance the comprehensiveness of our paper. If you have any more concerns, we're more than happy to assist in addressing them. | Summary: This paper proposes a novel sequence structure learning and modulation approach that endows Transformers with the ability to model and utilize such fixed-sequence structural properties for improved performance on inertial pose estimation tasks. Specifically, this paper introduces a Sequence Structure Module (SSM) that utilizes structural information of fixed-length inertial sensor readings to adjust the input features of transformers. Further, this paper proposes two SSM variants: SSM-S and SSM-T. These variants incorporate the structural inductive biases of the IMU sensor layout (spatial) and time frames (temporal) into transformer learning.
Extensive experiments across multiple benchmark datasets demonstrate the superiority of the approach over state-of-the-art methods and show its potential to advance the design of the transformer architecture for fixed-length sequences.
I have read the responses of the authors and the comments of other reviewers, and I will keep my original score.
Strengths: 1. This paper identifies a key limitation of the native transformer architecture: its lack of inductive biases for modeling fixed-length sequences with inherent structural properties. To address this shortcoming, it proposes a novel Sequence Structure Module (SSM) that enables transformers to effectively capture and leverage the structural priors present in fixed-length sequential data.
2. For inertial motion capture tasks involving sequential IMU data, the paper proposes two SSM variants: SSM-S and SSM-T, which incorporate structural inductive biases of the IMU sensor layout (spatial) and time frames (temporal), respectively, into transformer learning.
3. Extensive experiments demonstrate that the method outperforms state-of-the-art ones on the DIP-IMU and TotalCapture datasets by a large margin. To further demonstrate the superiority of the approach, this paper implemented a real-time motion capture system based on six IMUs to evaluate the performance of the model in complex real-world scenarios.
Weaknesses: 1. In the field of pose estimation, integrating spatio-temporal information is not new. Please explain the innovation and difference of the method in this paper compared with previous methods.
2. The claim regarding the limitations of the Transformer architecture in handling fixed-length input sequences and the extent of these limitations is not sufficiently supported by evidence or theory.
3. The number of comparison methods in the experiments is insufficient.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In the field of pose estimation, integrating spatio-temporal information is not new. Please explain the innovation and difference of the method in this paper compared with previous methods.
2. In the model structure design, will the order of encoders such as SE/TE affect the model performance?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This paper has no obvious negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Clarifications on Innovations**
We believe there is a misunderstanding and hope to clarify that: our technical innovations and contributions **lie not in** the integration of spatio-temporal information, **but in** addressing an inherent shortcoming of the native transformer architecture. Specifically,
- We identify a key limitation of the native transformer architecture: its lack of inductive biases for modeling fixed-length sequences with inherent structural properties. To address this shortcoming, we propose a novel Sequence Structure Module (SSM) that enables transformers to effectively capture and leverage the structural priors present in fixed-length sequential data.
- For inertial motion capture tasks involving sequential IMU data, we propose two SSM variants: SSM-S and SSM-T, which incorporate structural inductive biases of the IMU sensor layout (spatial) and time frames (temporal), respectively, into transformer learning.
Therefore, our contributions are orthogonal to spatial-temporal integration. Please see our summary of contributions at the end of Sec. 1 and Sec. 3 for more details.
**Q2: The claim regarding the limitations of the Transformer architecture in handling fixed-length input sequences and the extent of these limitations is not sufficiently supported by evidence or theory.**
We believe this is a misunderstanding as well. As discussed in **Sec. 3.1** "Why is Sequence Structure Modeling Missing in Native Transformer Architecture?", we **analysed the native transformer architecture** and **provided clear evidence** that its attention matrix only captures the relationship among input tokens rather than the structural information of input sequences. Please see **Sec. 3.1** for more details.
**Q3: More Comparisons with SOTA.**
Thank you for your suggestion! As shown in the tables below, we further included comparisons with the classic DIP [1] and the most recent SOTA method PNP [2] (**please note that PNP has not yet been published at the time of submission of this paper: SIGGRAPH 2024 was just held several days ago between 28th July and 1st August**).
**Comparison with SOTAs on DIP-IMU Dataset:**
| $\quad \quad \quad \quad$Method | SIP Err| Ang Err| Pos Err| Mesh Err| Jitter |
| :----: | :----: | :----: | :----: | :----: | :----: |
| DIP (SIGGRAPH Asia'2018) | 17.10| 15.16 | 7.33 | 8.96 | 3.01 |
| PNP (SIGGRAPH'2024)| 13.71| 8.75 | 4.97 | 5.77 | 0.17 |
| **ours** | **7.90** | **6.06** | **3.12** | **3.78** | **0.07** |
**Comparison with SOTAs on TotalCapture Dataset:**
| $\quad \quad \quad \quad$ Method | SIP Err| Ang Err | Pos Err | Mesh Err | Jitter |
| :----: | :----: | :----: | :----: | :----: | :----: |
| DIP (SIGGRAPH Asia'2018) | 18.62 | 17.22 | 9.42 | 11.22 | 3.62 |
| PNP (SIGGRAPH'2024) | 10.89 | 10.45| 4.74 | 5.45 | 0.26 |
| **ours** | **7.00** | **6.82** | **3.36** | **4.00** | **0.09** |
It can be observed that our method outperforms not only the classic DIP published in 2018 but also the latest PNP, published several days ago and well after the submission of this paper, which further demonstrates the superiority of our method.
We will incorporate the results and detailed discussions in our final version.
**References:**
[1] Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time. SIGGRAPH Asia 2018.
[2] Physical Non-inertial Poser (PNP): Modeling Non-inertial Effects in Sparse-inertial Human Motion Capture. SIGGRAPH 2024.
**Q4: Ablation study of the order of SE/TE.**
Thank you for your suggestion, but as clarified in our response to Q1, our contributions are orthogonal to spatial-temporal integration. Therefore, we followed the common practice [1-5] of arranging the encoders in the "SE-TE" order: i) SE considers the correlation of body joints and returns a latent feature representation for each frame; ii) TE analyzes global dependencies among the spatial feature representations and generates an accurate pose estimate. This order makes more sense as it proceeds from intra-frame to inter-frame modeling. Thus, we believe this ablation study deviates somewhat from the focus of our paper.
Nevertheless, we provide the results of switching the order from "SE-TE" to "TE-SE" as follows:
| Method | Ang Err | Jitter | $\enspace \tau$ |
| :----: | :----: | :----: | :----: |
| SE-TE(ours) | **6.82** | **0.09** | **7.46** |
| TE-SE | 8.67 | 0.11 | 9.68 |
The results show that our "SE-TE" order outperforms its "TE-SE" variant by a large margin, which is consistent with the common practice in many spatio-temporal frameworks.
**References:**
[1] 3D Human Pose Estimation with Spatial and Temporal Transformers. ICCV 2021.
[2] Spatial-Temporal Transformer for Dynamic Scene Graph Generation. ICCV 2021.
[3] ViViT: A Video Vision Transformer. ICCV 2021.
[4] BSTT: A Bayesian Spatial-Temporal Transformer for Sleep Staging. ICLR 2023.
[5] VideoTrack: Learning to Track Objects via Video Transformer. CVPR 2023. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for providing detailed and constructive comments that have helped to improve the quality of our manuscript.
- We have provided rebuttals to the comments of each reviewer.
- We have also attached a pdf file containing figures that were requested by the reviewers (8LBr, eYY8, 2q9D).
We hope these resolve the concerns raised and we are happy to answer any additional questions!
Pdf: /pdf/4817fa313e5496d6e6af631e48ca4bfb07959535.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalizable Person Re-identification via Balancing Alignment and Uniformity | Accept (poster) | Summary: This paper investigates the side effects of data augmentation in the domain generalizable person ReID problem and proposes a framework for mitigating the negative effects. It is found that data augmentation enhances the performance of a ReID model on its training domain while degrading its performance on unseen domains. To alleviate this, a simple framework, Balancing Alignment and Uniformity (BAU), is proposed to maintain a balance between alignment and uniformity in the latent space, along with a domain-specific uniformity loss for domain-invariant representation. The empirical results show the effectiveness of the proposed method on various benchmarks.
Strengths: - The motivation of the work is clear and well described. It is found that the data augmentations for ReID models can have negative effects on out-of-distribution data despite their positive effects on in-distribution data. This is a significant problem in domain generalizable ReID. Although this finding is not a novel one, the insights provided by the analysis in terms of the alignment and uniformity concepts [1] are appreciated.
- The proposed framework is simple and effective. The proposed loss functions are designed to address the polarized effect of the data augmentation. These losses are proper application of the alignment loss and uniformity loss [1] to the domain generalizable ReID task with some modifications.
- The empirical results show that the proposed method is promising.
[1] Wang and Isola, Understanding contrastive representation learning through alignment and uniformity on the hypersphere, ICML, 2020.
Weaknesses: # Major concerns
In short, I have concerns with regard to the generalization of the polarized effects by the data augmentations, and a more thorough investigation is needed in terms of the types of the backbone, loss function, and data augmentation method.
- Is the performance degradation on unseen domains due to the data augmentations generalized, regardless of model architecture and loss function? In this paper, the experiments are limited to a specific backbone and loss functions (i.e., ResNet50 trained with cross entropy and triplet losses). Would a similar polarized phenomenon appear in other backbones (e.g., ViT) or other loss functions? In this regard, while the paper mentions that the random erasing (RE) augmentation causes performance degradation on unseen domains, QAConv [2] reports performance improvement due to RE in cross-domain evaluation (Table 1 in appendix of [2]).
- Wouldn’t the polarized effect tend to be different depending on the type of data augmentation? The individual analysis on the effect of each augmentation is required. In this regard, RandAugment and Color Jitter augmentations are used by many domain generalizable ReID methods. Are they relatively safe and is there any basis for this? If so, the performance decrease on the unseen domain in Figure 1 is seen as mainly due to RE.
# Minor concerns
- I'm a little concerned about whether the comparison of the methods in Tables 3 and 4 is fair. It seems that BAU uses more abundant augmentations. Are there any significant differences in the augmentations used for each method? Also, it might be necessary to check whether the polarized effect occurs when the data augmentations used in BAU are applied to other DG methods.
- The novelty of the proposed method is somewhat limited, since it largely depends on the alignment and uniformity losses proposed in [1]. However, the weighting strategy for the alignment loss and the domain-specific uniformity loss are newly introduced, and their effectiveness is demonstrated in the ablation study.
- An analysis of the increase in training cost is required. For example, the weighting strategy for the alignment loss requires additional computation of the Jaccard similarity of k-reciprocal nearest neighbors within a mini-batch. Is this negligible?
[1] Wang and Isola, Understanding contrastive representation learning through alignment and uniformity on the hypersphere, ICML, 2020.
[2] Liao and Shao, Interpretable and Generalizable Person Re-identification with Query-Adaptive Convolution and Temporal Lifting, ECCV, 2020.
Technical Quality: 3
Clarity: 4
Questions for Authors: - In Equation 5, the distance between $\tilde{f_i}$ and $f_j$ is computed. How about computing distance between $\tilde{f_i}$ and $f_i$ or between $f_i$ and $f_j$?
- In Equation 8, I understood that $f_i$ can be pushed against its own class prototype. However, pushing $f_i$ only against the class prototypes it does not belong to would be more intuitive.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations on experiments for very large domain shifts and more advanced augmentation methods are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 4LRb,
We sincerely appreciate your thorough review and are grateful for your positive remarks on the motivation and insights behind our paper. We have addressed your main concerns below.
### **Regarding the polarized effects across different backbones, losses and augmentations**
**Backbones and loss functions**:
We investigated the existence of the polarized effect on different types of backbones, specifically transformer (ViT-B/16) and lightweight (MobileNetV2) networks.
We trained these models using CE+Triplet loss, with and without Random Erasing (RE) and RandAugment (RA), and the results are shown in Fig. A (a).
Additionally, in Fig.A (b), we explored different types of loss functions, specifically ArcFace[a] and PCL[b], which are widely used for re-ID and retrieval models, on ResNet-50 with and without RE+RA.
The results demonstrate that the polarized effect is consistently observed across all tested backbones and loss functions, suggesting that the polarized effect is not limited to the specific architecture and loss function shown in Fig. 1 of the main paper.
**Augmentation types**:
We investigated the existence of the polarized effect of individual augmentation method used in BAU: RE, RA, and CJ.
We utilized ResNet-50 backbone with CE+Triplet loss, whose results are shown in Fig. A (c).
We can see that the performance decrease on the unseen domain is mainly due to the polarized effects observed in RE and RA, while CJ did not show such behavior, aligning with previous findings in the field.
While RE and RA introduce significant distortions (e.g., pixel drops) to images, CJ provides only simple color distortions, enhancing model robustness to variations in lighting and color conditions across unseen environments.
We also observed that increasing the augmentation probability of individual RE and RA degrades performance and uniformity (Fig. B), confirming the polarized effect caused by these augmentations in DG ReID.
In summary, we generally observe the polarized effect across different backbones, loss functions, and augmentation types, and we believe this is a more general phenomenon in DG re-ID.
Furthermore, the proposed BAU consistently improves various baselines with different backbones and losses (Tab. B) and various augmentation types (Tab. 7 in main paper).
Based on extensive experiments and analysis, we believe that the polarized effect in our study is a general phenomenon and that we have introduced a simple but general BAU method that can have a significant impact on this field.
Regarding the discrepancy with the results reported in QAConv, it is important to note that QAConv is a DG re-ID method based on local feature matching, which may lead to a different impact of RE compared to the conventional global feature approach used in our experiments. For the feature matching task, RE can improve the model's ability to learn various local correspondences while simulating diverse occlusions [44].
### **Comparisons regarding augmentations used**
While BAU employs a combination of augmentations (CJ, RA, RE), we would like to clarify that recent state-of-the-art methods, such as META [77] and ACL [81], utilize similar augmentations (e.g., AutoAugment and CJ).
To further investigate the impact of BAU's augmentations on other DG methods, we applied BAU's augmentations to ACL based on the official implementation provided by the authors (https://github.com/peterzpy/ACL-DGReID) in Table C.
The results show that the performance of ACL decreases when it utilizes the augmentations used in BAU, which implies that the polarized effect can also occur to other methods.
Furthermore, BAU shows consistent performance when it utilizes the augmentations used in ACL.
Additionally, as demonstrated in Table 7 of the main paper, BAU consistently shows superior performance across various augmentation configurations, including commonly used data augmentations (CJ+RE).
In summary, BAU effectively mitigates the polarized effect of data augmentations while demonstrating robust performance across various augmentation configurations.
### **Novelty of BAU**
While our method builds upon the concepts of alignment and uniformity introduced in [72], we would like to clarify that we extend these ideas from self-supervised learning to the DG re-ID task.
As gratefully mentioned by the reviewer, we propose additional components tailored to the DG re-ID task, such as the novel weighting strategy and the domain-specific uniformity loss.
Extensive ablation studies and analyses in the main paper (Tables 5-6, Figures 4-5) demonstrate the effectiveness of these components.
Furthermore, we provide additional analysis on the weighting strategy in Figure C of the rebuttal PDF, which verifies its effectiveness across various augmentation probabilities.
In summary, our work provides new insights into the polarized effect of data augmentations in DG re-ID and proposes a simple yet effective solution to mitigate this issue.
We believe our findings will have a positive impact on subsequent research in this field, particularly when developing DG re-ID methods based on augmentation techniques, which is a straightforward solution for improving generalization.
### **Computation cost**
The weighting strategy computes the Jaccard similarity of k-reciprocal nearest neighbors within a mini-batch, with time complexity $O(N^2 \log N)$ for batch size $N$.
Here we provide a computation time comparison between the baseline and BAU with the same batch size.
|Method| Time(s)/Iter|
|-|-|
|Baseline| 0.312|
|BAU| 0.412|
|weight $w$ | 0.086|
Considering the effectiveness of the weighting strategy (Figure C), we believe that this slight additional cost is reasonable.
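For reference, the weighting computation described above can be sketched as follows. This is a minimal NumPy illustration of k-reciprocal nearest-neighbor sets and their Jaccard similarity within a mini-batch; the function and parameter names are illustrative, and the exact BAU implementation may differ.

```python
import numpy as np

def k_reciprocal_jaccard(features, k=10):
    """Jaccard similarity of k-reciprocal nearest-neighbor sets in a mini-batch.

    features: (N, D) array of embeddings. Returns an (N, N) weight matrix.
    The per-row sort dominates the cost, giving O(N^2 log N) as noted above.
    """
    n = features.shape[0]
    # Pairwise Euclidean distances (N x N); each sample is its own 0-distance neighbor.
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    # k nearest neighbors for each sample.
    knn = np.argsort(dist, axis=1)[:, :k]
    knn_sets = [set(row) for row in knn]
    # k-reciprocal neighbors: j is in R(i) iff j is a kNN of i AND i is a kNN of j.
    recip = [{j for j in knn_sets[i] if i in knn_sets[j]} for i in range(n)]
    # Jaccard similarity between reciprocal neighbor sets.
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            union = recip[i] | recip[j]
            w[i, j] = len(recip[i] & recip[j]) / len(union) if union else 0.0
    return w
```

The resulting weights are symmetric and lie in [0, 1], so they can directly modulate per-pair alignment terms.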
We further address your questions on the alignment loss and the domain-specific loss in the following comments.
---
[a] ArcFace: Additive Angular Margin Loss for Deep Face Recognition, CVPR 2019
[b] Prototypical Contrastive Learning of Unsupervised Representations, ICLR 2021
---
Rebuttal 2:
Comment: Here, we provide discussions with additional experimental results according to comments in the **Questions**.
### **Regarding different strategies of alignment loss**
Based on the suggestion, we investigate the impact of various alignment strategies in Table A. Specifically, we compare the performance of minimizing the feature distance between augmented and original images of the same instance ($\| \tilde{f}_i - f_i \|$), between original images of positive pairs ($\| f_i - f_j \|$), and between augmented and original images of positive pairs ($\| \tilde{f}_i - f_j \|$, Ours). The results show that our strategy, which learns invariance between original and augmented images of positive pairs, achieves the best performance.
This result demonstrates that BAU effectively learns invariance between the original image and the augmented image.
### **Regarding domain-specific uniformity loss with corresponding class prototype**
This point brought up by the reviewer is indeed correct, and we appreciate the reviewer for bringing this to our attention.
In our actual BAU implementation, $f_i$ and $\tilde{f}_i$ are indeed pushed against only the class prototypes within the same domain that do not belong to its own class, aligning with your intuition.
This approach stably encourages separation between different identities while maintaining cohesion within the same identity class.
Thus, the explanation of line 214 should be:
> where $\mathcal{N}(\mathbf{f})$ is the index set of *nearest prototypes of $\mathbf{f}$ that are from the same source domain and different class prototypes*, ...
We apologize for the lack of clarity in our original description and will ensure this is accurately reflected in our revision.
---
We hope our responses have addressed your concerns. We welcome any further questions or discussions about our work.
Best regards,
Authors of submission 16640
---
Rebuttal 3:
Title: A gentle reminder for reviewer-author discussion
Comment: Dear reviewer 4LRb,
As the reviewer-author discussion period is coming to a close, we kindly ask if there are any remaining concerns or points about our submission that we haven't sufficiently addressed.
We're ready to provide additional clarifications or information if needed.
Once again, we appreciate your valuable efforts and feedback to strengthen our work.
Best regards,
Authors of submission 16440
---
Rebuttal Comment 3.1:
Comment: Thanks for your responses. The authors have answered my major concerns, and I will change the score to weak accept.
---
Reply to Comment 3.1.1:
Comment: Dear Reviewer 4LRb,
Thank you for your thoughtful consideration and for raising the score.
We are pleased that our responses have addressed your concerns.
We will include the additional experimental results based on your valuable feedback (e.g., the polarized effects across different backbones, losses, and augmentations) in our revision.
If you have any remaining questions or feedback, we would be glad to provide additional clarifications or results.
Please don't hesitate to let us know.
We sincerely appreciate your valuable feedback and your time in improving our work.
Best regards,
Authors of submission 16440 | Summary: This paper investigates the polar effects of data augmentation in the domain of generalizable person re-identification. To address the problem of augmented data degrading out-of-distribution performance, this paper proposes a Balanced Alignment and Uniformity (BAU) framework, which normalizes the representation space by maintaining a balance between alignment and uniformity.
Strengths: This paper shows sufficient ablation experiments and adequate visualization results are displayed.
The study on the effect of data augmentation on generalization performance is of interest.
Weaknesses: Alignment and uniformity are not well explained in this paper; these concepts are borrowed from other articles. This paper needs to explain the meaning of these two terms in the context of the ReID task, and to elaborate on the differences and connections between this paper and the source of these ideas.
The trends shown in Figure 1c of this paper are consistent across the three different domains as the proportion of data augmentation increases, but Figure 1a shows that performance can both increase and decrease with the augmentation probability. The results presented by the authors are not sufficient to support their conclusions.
Is there any relevant experimental or theoretical support for this paper's claim that current generalization methods affect training stability?
This paper lacks a separate analysis of the impact of different kinds of data augmentation, and according to my previous research, adding random erasing directly reduces the generalization performance of the model.
This paper lacks experimental comparisons with more recent methods and only compares methods from 2022 and earlier.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the weakness.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper is not clearly written, the authors do not present their contributions well, some of the experimental results in the paper do not support their conclusions, and there is a lack of comparison with more recent methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer fzqs,
Thank you for constructive reviews and your time and effort in evaluating our work.
Below, we address your concerns and questions.
### **Clarification of alignment and uniformity**
We apologize for not sufficiently explaining these concepts in the context of ReID.
For ReID, alignment aims to learn similar feature representations for positive pairs (same identity) across different conditions such as various poses, viewpoints and augmentations.
Conversely, uniformity aims to distribute feature representations uniformly across the embedding space, spreading out features of different identities and encouraging the learning of diverse visual information.
While our method builds upon these concepts in [72], we extend the original idea from self-supervised learning to the DG ReID task.
For the first time, we analyze polarized effects caused by augmentations specifically in DG ReID and adapt the concepts of alignment and uniformity to mitigate such effects.
Furthermore, we introduce novel components specifically tailored for this task: a novel weighting strategy and a domain-specific uniformity loss.
Extensive ablation studies in our paper (Tables 5-6, Figures 4-5) thoroughly demonstrate their effectiveness.
We appreciate your feedback and will include this clarification in our revision.
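For completeness, the generic alignment and uniformity objectives of [72] can be sketched as follows. This is a minimal NumPy sketch of the original losses over L2-normalized features, not our exact BAU variants (which add the weighting strategy and the domain-specific term).

```python
import numpy as np

def alignment_loss(x, y, alpha=2):
    """Mean distance^alpha between L2-normalized features of positive pairs (N x D)."""
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniformity_loss(x, t=2):
    """Log of the mean pairwise Gaussian potential over L2-normalized features (N x D)."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(x.shape[0], k=1)  # distinct pairs only
    return np.log(np.mean(np.exp(-t * sq_dists[iu])))
```

Lower alignment pulls positive pairs (same identity) together, while lower (more negative) uniformity spreads features over the hypersphere; BAU maintains a balance between the two.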
### **Clarification of Figure 1**
We thank the reviewer for emphasizing this crucial aspect.
Figure 1a shows the polarized effect of augmentation on performance in both in-distribution (ID: M→M) and out-of-distribution (OOD: MS+C3+CS→M) scenarios with varying augmentation probabilities.
Figure 1c highlights the uniformity of feature spaces for three domains in the OOD scenario (MS+C3+CS→M): "source (MS+C3+CS) train", "source (MS+C3+CS) test", and "target (M) test". Higher augmentation probabilities lead to less uniform feature spaces with sparse representations.
It is important to note that for ID, the uniformity value (Fig. 1c) is not strictly proportional to the performance (Fig. 1a).
Rather, our key motivation is that uniformity is crucial for performance in OOD.
As illustrated in Figure 2 of the main paper, learning invariance with increased augmentations can result in sparse (less uniform) representation spaces that rely on dominant visual information unique to the ID data.
While these sparse representations might generalize well to ID scenarios and thus improve performance, learning diverse, non-dominant visual information (i.e., achieving more uniformity) is vital for OOD performance in DG ReID tasks, where models must handle unseen classes from unseen domains.
This motivation is further supported by our analysis of OOD performance in relation to alignment and uniformity (Figure 4), which confirms that achieving greater uniformity is essential for OOD generalization.
We appreciate the reviewer's insight and will enhance the clarity of our presentation and the motivation behind our method in the revised version.
### **Regarding the training instability**
We appreciate the valuable comment.
In our main paper, we discussed training instability related to domain-adversarial training and meta-learning.
The widely-used meta-learning [15] in DG methods is known to face training instability caused by high-order gradients [a, b].
Similarly, several studies have highlighted stability issues in domain-adversarial training [c, d].
In the context of DG ReID, previous works [62, 85] have raised these issues or proposed solutions to these challenges.
We will clarify these points and include proper references in our revised version.
### **Polarized effects across augmentation types**
In Figure A.(c), we investigate the polarized effect of individual augmentations: Random Erasing (RE), RandAugment (RA), and Color Jitter (CJ).
The results show that (1) RE and RA exhibit the polarized effect (improved ID, degraded OOD performance), (2) CJ does not show this polarized effect and generally improves OOD performance, which aligns with previous findings in the field.
This is because while RE and RA introduce significant distortions (e.g., pixel drops) to images, CJ provides simple color distortions, enhancing model robustness to variations in lighting and color conditions across unseen environments.
Fig. B confirms that higher probabilities of RE and RA lead to a more pronounced polarized effect, consistent with Fig. 1 in the main paper.
In contrast to most existing DG ReID methods, which primarily utilize augmentations without polarized effects, our BAU successfully exploits the diversity introduced by augmentations regardless of polarized effects.
Additionally, Fig. A(a) and (b) show that the polarized effect persists across different backbones and loss functions, suggesting it is a general phenomenon in DG ReID.
### **Comparison with more recent methods**
Thank you for your valuable comments.
We acknowledge advancements such as [14] and [54] in DG ReID. For instance, [14] uses large-scale unannotated videos, and [54] introduces the Part-Aware Transformer. However, direct comparison is challenging due to differences in experimental protocols. Despite this, we believe proposed BAU is complementary and could be integrated with these advanced methods. Table B shows BAU's effectiveness with unsupervised methods using pseudo-labels and ViT backbones.
We have identified recent work [e] introducing a style mixing module within a ViT backbone using Protocol-1 and will include a comparison in our revised manuscript.
We appreciate your feedback and are open to include comparisons with other recent methods that align with our protocols.
Best regards,
Authors of submission 16640
---
[a] How to train your MAML, ICLR 2019
[b] On First-Order Meta-Learning Algorithms, arXiv:1803.02999
[c] Free Lunch for Domain Adversarial Training: Environment Label Smoothing, ICLR 2023
[d] A Closer Look at Smoothness in Domain Adversarial Training, ICML 2022
[e] Style-Controllable Generalized Person Re-identification, MM 2023
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for answering some of my concerns; I will change the score to Borderline Reject. The main reason is that the actual innovations in this paper are limited and the compared methods are not new enough. In addition, as the authors state, it is the meta-learning and adversarial-learning-based methods that suffer from training instability, and the authors need to add these constraints to their descriptions.
---
Rebuttal 2:
Comment: Dear Reviewer fzqs,
We sincerely appreciate your reconsideration and raising the score. We will clarify the training instability associated with meta-learning and domain-adversarial learning and provide proper references in our revision. We would like to address the remaining concerns and highlight the significance of our work.
---
### **Comparison with recent methods**
We appreciate your valuable feedback about comparison with the most recent methods. In our revision, we will include comparisons with state-of-the-art approaches published in 2023, such as ISR (ICCV 2023) [14] and StyCon (MM 2023) [a]. Please note that the results of ISR are from ResNet-50 trained on large-scale unannotated videos (47.8M person images from 74K video clips). Meanwhile, StyCon used the same training dataset as BAU (Market-1501, CUHK02, CUHK03 and CUHK-SYSU). A comparison with more recent methods (mAP/Rank-1) is shown below:
| Method | PRID | GRID | VIPeR | iLIDs | Average |
|-------------------|----------------|----------------|----------------|----------------|----------------|
| ISR (ICCV 2023) | 70.8 / 59.7 | 65.2 / 55.8 | 66.6 / 58.0 | **91.7 / 87.6**| 73.6 / 65.3 |
| StyCon (MM 2023) | **78.1 / 69.7**| 62.1 / 53.4 | 71.2 / 62.8 | 84.8 / 78.0 | 74.1 / 66.0 |
| **BAU (Ours)** | 77.2 / 68.4 | **68.1 / 59.8**| **74.6 / 66.1**| 88.7 / 83.7 | **77.2 / 69.5**|
We further compare ISR with the proposed BAU on large-scale datasets.
We report BAU results trained under Protocol-2 (P-2) and Protocol-3 (P-3).
| Method | CUHK03 | MSMT17 | Market-1501 | Average |
|---------------------|-----------------|-----------------|------------------|-----------------|
| ISR (ICCV 2023) | 26.1 / 27.4 | 21.2 / 45.7 | 65.1 / 85.1 | 37.5 / 52.7 |
| **BAU (P-2, Ours)** | **42.8 / 43.9** | **24.3 / 50.9** | **77.1 / 90.4** | **48.1 / 61.7** |
| **BAU (P-3, Ours)** | **50.6 / 51.8** | **26.8 / 54.3** | **79.5 / 91.1** | **52.3 / 65.7** |
The results show that our method remains superior to these recent approaches.
In summary, our current comparison aims to be comprehensive, including recent state-of-the-art methods.
If you have *any suggestions of other recent methods that should be compared to our work*, we would sincerely appreciate your recommendations.
---
### **Regarding innovation of BAU**
We appreciate the valuable feedback in clarifying the innovations of the proposed method.
We would like to clarify several key innovations of our work:
- The first work to analyze and address the critical issue of polarized effects of data augmentations in DG ReID, which has been overlooked in previous studies.
- A novel weighting strategy for the alignment loss that assesses the reliability of augmented samples.
- A domain-specific uniformity loss for enhancing domain-invariant feature learning.
- Enabling the effective use of strong augmentations, previously challenging in DG ReID.
- Achieving state-of-the-art performance without complex training procedures.
While building on existing concepts, BAU's unique application and novel components significantly advance the field of DG ReID.
Furthermore, as shown in Table B of the 1-page PDF, we demonstrate that the proposed method is applicable to various existing methods.
We believe this contribution will substantially impact future research in the field.
We would be grateful if you could give further consideration to these points.
We appreciate your thorough review and remain committed to improving our manuscript based on your valuable feedback.
Best regards,
Authors of submission 16440
---
[a] Style-Controllable Generalized Person Re-identification, MM 2023
---
Rebuttal 3:
Comment: Dear Reviewer fzqs,
We are deeply grateful for your time and valuable feedback throughout this review process. Your insightful comments have significantly contributed to improving our work. They have led to important clarifications and comprehensive analysis & comparisons, which greatly strengthen our work.
As the discussion period is drawing to a close, we would like to kindly follow up on our previous response. We sincerely invite you to share any remaining concerns, particularly regarding the novelty of our work and the comparisons with recent methods, for your further consideration. We appreciate your feedback and would like to discuss these points further.
---
In addition to our previous response, we would like to further clarify key contributions of our work, which we believe represent significant innovations in the field of DG ReID:
1. Insightful investigation of polarized effects in DG re-ID, a phenomenon previously overlooked.
2. A simple yet effective BAU framework that mitigates these effects without complex procedures.
3. Novel components tailored for this task: a weighting strategy for alignment loss and a domain-specific uniformity loss.
During the discussion period, we further confirmed and strengthened our contributions through additional experiments:
- We verified the commonality of the polarized effect across different backbones, loss functions, and augmentation types (Figures A and B).
- We demonstrated the broad applicability of BAU on various baselines, including unsupervised approaches, different backbones, and loss functions (Table B).
- We provided an in-depth analysis of our novel weighting strategy, showing its effectiveness across varying augmentation probabilities (Figure C).
These additional results further validate our initial insights and the effectiveness of our proposed method, and we will incorporate this in our revision.
---
Regarding the comparison with recent methods, we have conducted additional comparisons with more recent methods, ISR (ICCV 2023) and StyCon (MM 2023). We are open to further comparisons if there are other specific methods we should consider, and will incorporate this feedback in our revision.
---
If there are any aspects of our work that you think could benefit from further elaboration or clarification, we would be happy to provide additional information promptly.
Thank you again for your valuable contribution to our research. We look forward to any additional feedback you might have that would be valuable to our work.
Best regards,
Authors of submission 16440
Title: A gentle reminder for reviewer-author discussion | Summary: Although data augmentation can improve in-distribution performance, it may lead to a sparse representation space, thereby reducing out-of-distribution performance.
To address this issue, the authors proposed a simple yet effective framework, Balancing Alignment and Uniformity (BAU), which effectively regularizes the representation space by maintaining balance between alignment and uniformity.
BAU achieves state-of-the-art performance on various benchmarks and protocols, and extensive ablation studies validated the effectiveness of each component in BAU.
Strengths: 1. The paper demonstrates a well-structured presentation, with a clear outline that effectively communicates the core idea.
2. The paper provides sufficient experimental evidence to support the effectiveness of the proposed method.
Weaknesses: 1. As stated in [1], alignment and uniformity are two key properties related to the contrastive loss.
Can the polarized effects of data augmentation in DG re-ID be resolved by applying only the contrastive loss?
2. For the alignment loss, how much do samples corrupted during augmentation affect performance? This seems like something that could be ignored.
3. The pipeline in this paper is very similar to Weak-Strong Augmentation [2]; the authors should discuss the differences between them.
[1]Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." International conference on machine learning. PMLR, 2020.
[2]Li, Yu-Jhe, et al. "Cross-domain adaptive teacher for object detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: Same as the Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations and potential negative societal impact of work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Ywj9,
Thank you for your insightful comments and positive remarks on our paper's structure and experimental evidence.
We appreciate your feedback and have addressed your main concerns below.
### **Regarding whether polarized effects can be resolved by contrastive loss**
To address whether contrastive loss alone could resolve the polarized effects of data augmentation in DG re-ID, we conducted additional experiments comparing BAU with Supervised Contrastive Learning (SupCon) [a].
The results (mAP/Rank-1) on protocol-2 are shown in the table below:
| Method | CUHK03 | MSMT17 | Market-1501 | Average |
|-|-|-|-|-|
| Baseline | 33.5 / 33.7 | 16.8 / 35.9 | 63.4 / 83.0 | 37.9 / 50.9 |
| + SupCon | 37.9 / 37.6 | 18.2 / 41.5 | 71.8 / 87.0 | 42.6 / 55.4 |
| + **BAU (Ours)** | **42.8 / 43.9** | **24.3 / 50.9** | **77.1 / 90.4** | **48.1 / 61.7** |
While SupCon improves performance over the baseline, our BAU consistently outperforms SupCon across all settings.
These results suggest that while contrastive loss can help mitigate some polarized effects, our approach of explicitly balancing alignment and uniformity is more effective in addressing this issue in the context of DG re-ID.
It is worth noting that contrastive learning, including SupCon, has been shown to optimize for alignment and uniformity jointly and asymptotically [b]. In contrast, BAU directly and independently optimizes these properties, allowing for more fine-grained control and effective balancing.
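To make this distinction concrete, below is a minimal NumPy sketch of the alignment and uniformity losses as defined in [b] (the implementation details are ours, not those of BAU or SupCon):

```python
import numpy as np

def alignment_loss(f_x, f_pos, alpha=2):
    # Expected distance between positive-pair features (lower is better);
    # f_x, f_pos are (N, D) arrays of L2-normalized embeddings.
    return np.mean(np.linalg.norm(f_x - f_pos, axis=1) ** alpha)

def uniformity_loss(f, t=2):
    # Log of the mean Gaussian potential over all distinct pairs;
    # lower values mean features are more spread out on the hypersphere.
    sq_dists = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(f.shape[0], dtype=bool)
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))
```

Directly minimizing a weighted sum of these two terms gives explicit, independent control over their balance, whereas a contrastive loss only optimizes them jointly in the asymptotic sense noted above.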
### **Impact of corrupted samples during augmentation in alignment loss**
To mitigate the impact of corrupted samples, we introduced a weighting strategy for the alignment loss (Eq. 4 and 5 in our paper).
This strategy assigns lower weights to the alignment loss computed between original images and potentially unreliable augmented samples, and higher weights to the loss computed between reliable pairs of original and augmented images, thus reducing the influence of potentially corrupted samples.
To demonstrate the effectiveness of the alignment loss with the weighting strategy, we conducted additional analysis comparing the performance with varying augmentation probabilities, both with and without the weighting strategy.
The results are shown in Figure C of the 1-page rebuttal PDF.
The results show that our weighting strategy consistently improves performance across different augmentation probabilities, with the gap becoming more pronounced at higher probabilities where corruption is more likely.
For instance, at an augmentation probability of 0.5, the weighting strategy improves the mAP from 78.3% to 79.5%, and at a probability of 1.0 the improvement is even more substantial, from 66.1% to 76.1%.
Thanks to the proposed weighting strategy, BAU allows learning invariance with reliable augmented samples from informative augmentations.
Furthermore, Tables 7 and 8 in the supplementary material of our main paper demonstrate that BAU consistently improves performance compared to the baseline for various augmentation configurations and probabilities, further validating the effectiveness of our approach in handling potentially corrupted samples.
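As an illustration of the idea (this is a generic sketch of reliability weighting, not the paper's Eq. 4 and 5), one can down-weight pairs whose augmented feature has drifted far from the original:

```python
import numpy as np

def weighted_alignment_loss(f_orig, f_aug, alpha=2):
    # Hypothetical reliability weighting: use the cosine similarity between
    # original and augmented unit-norm features as a proxy for how much the
    # augmentation corrupted the sample, and weight the per-pair alignment
    # terms accordingly. Heavily corrupted pairs get weight near zero.
    sims = np.sum(f_orig * f_aug, axis=1)      # in [-1, 1] for unit features
    w = np.clip(sims, 0.0, None)
    w = w / (w.sum() + 1e-12)                  # normalize weights
    dists = np.linalg.norm(f_aug - f_orig, axis=1) ** alpha
    return float(np.sum(w * dists))
```

With such a weighting, a badly corrupted augmentation contributes almost nothing to the loss, while reliable augmented samples still drive invariance learning.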
### **Comparison with Weak-Strong Augmentation**
We appreciate the constructive comments regarding the relation to existing techniques.
Our approach is specifically designed to address the unique challenges of DG re-ID, with a particular focus on mitigating the polarized effects of data augmentation.
While our BAU and the Cross-Domain Adaptive Teacher (CDAT) method [c] share a similar spirit in exploiting different levels of data augmentations to improve model generalizability, there are significant key differences:
***Task goal:***
- CDAT (CVPR 22): Cross-domain adaptation for object detection.
- BAU (Ours): Domain generalization for person re-ID.
***Framework:***
- CDAT (CVPR 22): Utilizes a teacher-student framework where the teacher model with weak augmentations guides the student model with strong augmentations in a mutual learning manner.
- BAU (Ours): Employs a single model framework where the model is trained with both original and augmented images while balancing alignment and uniformity.
***Techniques:***
- CDAT (CVPR 22): Focuses on generating reliable pseudo labels by the teacher model and utilizes domain-adversarial loss with a domain discriminator.
- BAU (Ours): Focuses on mitigating polarized effects of data augmentation in DG re-ID and utilizes domain-specific uniformity loss without any domain classifiers.
In summary, BAU is specifically tailored for domain generalization in person re-ID based on our experimental observations and analysis of alignment and uniformity.
Based on these insights, we further propose a novel weighting strategy for alignment loss and domain-specific uniformity loss to enhance model generalizability, clearly distinguishing our approach from CDAT's focus on cross-domain object detection adaptation.
We hope these explanations and additional results clarify the distinctions and advantages of our approach.
Thank you again for your valuable feedback, which has helped us provide a more comprehensive evaluation of our method in relation to existing techniques.
Best regards,
Authors of submission 16440
---
[a] Supervised Contrastive Learning, NeurIPS 2020
[b] Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere, ICML 2020
[c] Cross-Domain Adaptive Teacher for Object Detection, CVPR 2022
---
Rebuttal 2:
Title: A gentle reminder for reviewer-author discussion
Comment: Dear reviewer Ywj9,
As the reviewer-author discussion period is coming to a close, we kindly ask if there are any remaining concerns or points about our submission that we haven't sufficiently addressed.
We're ready to provide additional clarifications or information if needed.
Once again, we appreciate your valuable efforts and feedback to strengthen our work.
Best regards,
Authors of Submission 16440
---
Rebuttal Comment 2.1:
Comment: Thanks for your responses. The authors have answered some of my doubts; considering the comments of the other reviewers as well, I will maintain my score.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer Ywj9,
Thank you for your feedback and for taking the time to consider our responses.
We're pleased that our responses have helped to address some of your concerns.
If you have any remaining specific concerns or doubts, please let us know.
We would be glad to provide additional clarifications or results for these points if needed.
Once again, we appreciate your valuable time and comments, and we look forward to the opportunity to further improve our paper based on your insights.
Best regards,
Authors of Submission 16440
---
Rebuttal 3:
Comment: Dear Reviewer Ywj9,
We are deeply grateful for your time and efforts during the discussion period.
Throughout this period, our further explanations with additional results have addressed the concerns raised by reviewers, leading to positive feedback which has been very encouraging for us.
---
We would like to further clarify our improvements made during the rebuttal period:
* Further investigation of polarized effects: We have conducted additional experiments exploring the commonality of polarized effects across different backbones, loss functions, and augmentation types (Figures A and B). These results provide further support for our findings and intuitions regarding polarized effects.
* Broad applicability of BAU: We have investigated the effectiveness of BAU on various baselines, including unsupervised approaches, different backbones, and loss functions (Table B). These experiments validate the versatility and potential applicability of our method.
* More analysis of the weighting strategy: We have provided an in-depth analysis of our novel weighting strategy, examining its behavior across varying augmentation probabilities (Figure C). This analysis presents additional insights into the robustness of our approach.
* Clarification of our work: We have provided a more detailed explanation of how our method uniquely applies concepts of alignment and uniformity to the specific challenges of DG re-ID, distinguishing our approach from other existing studies.
* Additional exploration of our design choices: We have conducted further analysis and comparisons in Tables A and C, which aim to demonstrate the effectiveness of BAU across different augmentation configurations and its performance relative to existing methods.
We will incorporate these results into our revision.
---
We are particularly thankful for your insightful comments, which have greatly contributed to enhancing our work. Your feedback inspired the valuable analysis of the alignment loss presented in Figure C and several important comparisons with existing works, which greatly strengthen our paper.
As the discussion period is drawing to a close, if there are any additional points that you think could be clarified in our work, we would be happy to provide additional information promptly.
Once again, we sincerely appreciate your valuable feedback and insights in improving our work.
Best regards,
Authors of submission 16440 | Summary: The authors investigate the polarized effects of data augmentations in DG re-ID and reveal that they can lead to sparse representation spaces, which are detrimental to generalization. To address it, they propose a novel BAU framework that mitigates the polarized effects of data augmentations by balancing alignment and uniformity in the representation space. And then they further introduce a domain-specific uniformity loss to enhance the learning of domain-invariant representations.
Strengths: Since data augmentations can induce sparse representation spaces with less uniformity, the authors propose a simple yet effective framework, Balancing Alignment and Uniformity (BAU), which alleviates the polarized effects of data augmentations by maintaining a balance between alignment and uniformity. This work highlights the significant potential of balancing alignment and uniformity to enhance generalization for person re-identification.
Weaknesses: The effects of the BAU framework are validated on only one baseline model in Table 5; it would be more convincing if it were validated on three typical baseline models, such as supervised, self-supervised, and unsupervised models.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Will different alignment strategies affect the effectiveness of the proposed framework?
2. The effects of the BAU framework are validated on only one baseline model in Table 5; it would be more convincing if it were validated on three typical baseline models, such as supervised, self-supervised, and unsupervised models.
3. How about the versatility of the BAU framework? Do these losses apply to the three typical types of models?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have given the limitations of the proposed model. They mention that, as a straightforward approach primarily based on data augmentations for given input data, the method could face challenges under very large domain shifts. They also adopt standard image-level data augmentations, such as random flipping, erasing, and color jitter, and do not incorporate more advanced augmentation techniques, such as adversarial or feature-level augmentations, which have also been shown to be promising for domain generalization.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer L6wV,
Thank you for your thorough review and constructive feedback. We greatly appreciate your time and effort in evaluating our work. Below, we address your concerns and questions.
### **Regarding the effects of different alignment strategies**
We report the impact of various alignment strategies in Table A. Specifically, we compare the performance of minimizing the feature distance between:
* Original images of positive pairs ($\| f_i - f_j \|$)
* Augmented and original images of the same instance ($\| \tilde{f}_i - f_i \|$)
* Augmented and original images of positive pairs ($\| \tilde{f}_i - f_j \|$, **Ours**)
The results show that our strategy, which learns invariance between original and augmented images of positive pairs, achieves the best performance.
This result demonstrates that BAU effectively learns invariance between the original image and the augmented image.
Furthermore, as shown in Figure C in the 1-page rebuttal PDF, our weighting strategy for the alignment loss consistently improves the performance across different augmentation probabilities, and the effectiveness of the weighting strategy becomes more pronounced at higher augmentation probabilities.
This indicates that focusing on reliable pairs between original and augmented images enhances the learning of robust features.
We will include this analysis in the revised manuscript to provide a more comprehensive understanding of the alignment loss and the effectiveness of our weighting strategy.
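On toy features, the three strategies above can be sketched as follows (a minimal illustration; the function names and the positive-index map `pos` are ours):

```python
import numpy as np

def mean_sq_dist(a, b):
    # Average squared distance between paired feature rows.
    return np.mean(np.sum((a - b) ** 2, axis=1))

def alignment_variants(f, f_aug, pos):
    # f: original features, f_aug: augmented features,
    # pos: index of a positive-pair sample j for each sample i.
    return {
        "orig_orig_positives": mean_sq_dist(f, f[pos]),      # ||f_i - f_j||
        "aug_orig_same_inst": mean_sq_dist(f_aug, f),        # ||f~_i - f_i||
        "aug_orig_positives": mean_sq_dist(f_aug, f[pos]),   # ||f~_i - f_j|| (ours)
    }
```

The third variant couples augmentation invariance with identity-level alignment, which is why it learns invariance between original and augmented views of positive pairs rather than of a single instance.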
### **Regarding the versatility of BAU and validation on different baselines**
We agree that validating the effectiveness of BAU on various types of models would further strengthen our work.
In the rebuttal, we provide additional experimental results applying BAU to five different baselines:
* Unsupervised approach (Cluster Contrast [a])
* Lightweight backbone (MobileNetV2 [b])
* Transformer-based backbone (ViT-B/16 [c])
* Different loss functions for re-ID
* ArcFace [d]
* PCL [e, f]
The results in Table B (in the 1-page rebuttal PDF) show consistent performance improvements across all these diverse baselines, demonstrating the versatility and general applicability of BAU.
Notably, BAU achieves these improvements as a simple regularization method, without complex training procedures or additional trainable parameters.
This highlights its efficiency and potential for easy integration across various baseline models in DG re-ID tasks.
Besides, we would like to clarify that BAU is designed for supervised domain generalization settings, as it requires labels to compute the alignment loss and domain-specific loss. Therefore, directly applying BAU to self-supervised approaches is non-trivial. Nonetheless, to demonstrate the potential of BAU in unsupervised scenarios (Cluster Contrast), we adapt it to work with pseudo-labels obtained from clustering. The results demonstrate improved performance, indicating the potential for extending BAU to self-supervised and unsupervised settings without ground-truth labels in future work.
In the revised manuscript, we will include the additional experimental results and discussions to provide a more comprehensive evaluation of versatility and applicability to various baselines. We will also clarify the focus on supervised learning settings and the potential for future extensions to self-supervised and unsupervised approaches.
Once again, we thank you for your valuable feedback and hope that our responses and additional results address your concerns. We look forward to further improving our work based on your suggestions.
Best regards,
Authors of submission 16440
---
[a] Cluster Contrast for Unsupervised Person Re-Identification, ACCV 2022
[b] MobileNetV2: Inverted Residuals and Linear Bottlenecks, CVPR 2018
[c] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR 2021
[d] ArcFace: Additive Angular Margin Loss for Deep Face Recognition, CVPR 2019
[e] Prototypical Contrastive Learning of Unsupervised Representations, ICLR 2021
[f] Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID, NeurIPS 2020
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. The authors have answered some of my doubts, and I will change my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer L6wV,
Thank you for your thoughtful consideration and for raising the score.
We are glad that our responses have addressed some of your concerns.
If you have any remaining questions or feedback, we would be glad to provide additional clarifications or results if needed.
Please don't hesitate to let us know.
We sincerely appreciate your valuable feedback to improve our work.
Best regards,
Authors of submission 16440
---
Rebuttal 2:
Title: A gentle reminder for reviewer-author discussion
Comment: Dear reviewer L6wV,
As the reviewer-author discussion period is coming to a close, we kindly ask if there are any remaining concerns or points about our submission that we haven't sufficiently addressed.
We're ready to provide additional clarifications or information if needed.
Once again, we appreciate your valuable efforts and feedback to strengthen our work.
Best regards,
Authors of submission 16440 | Rebuttal 1:
Rebuttal: Dear reviewers and chairs,
We sincerely appreciate all reviewers for their time and efforts for reviewing our work.
We are glad that the reviewers found our work to have a "clear motivation and idea" (Ywj9, 4LRb) and to be "simple and effective" (L6wV, 4LRb) and "well presented" (Ywj9, 4LRb), with "sufficient experimental evidence" (Ywj9, fzqs).
We believe our BAU has potential to complement the ongoing advancements in data augmentation techniques, offering opportunities for synergistic combinations to further improve model generalization.
In response to the reviewers' constructive feedback and insightful comments, we have included a 1-page PDF with additional experimental results.
This additional material addresses key concerns and provides further evidence to support our method:
* Figures A and B: Investigate the ***commonality of the polarized effect*** across different backbones, loss functions, and augmentation types. (fzqs, 4LRb)
* Table A and Figure C: Demonstrate the ***effectiveness of the alignment loss*** with our proposed weighting strategy. (L6wV, Ywj9, 4LRb)
* Table B: Show the ***applicability of the proposed method*** on various types of baselines. (L6wV)
* Table C: Validate the ***robustness of BAU*** with different augmentation configurations. (4LRb)
Overall, the attached 1-page PDF includes:
- **Figure A**: Additional investigation of the existence of polarized effects with different types of (a) backbones, (b) loss functions, and (c) augmentation types.
- **Figure B**: Additional analysis of the polarized effect with varying augmentation probabilities for individual data augmentations, (a) Random Erasing and (b) RandAugment.
- **Figure C**: An analysis of the weighting strategy for the alignment loss with varying augmentation probabilities.
- **Table A**: A comparison between different strategies of alignment loss.
- **Table B**: A validation of the proposed BAU on various baseline models consisting of an unsupervised approach, different backbones, and loss functions.
- **Table C**: A comparison between the recent SoTA method ACL and the proposed BAU with different augmentation configurations.
Once again, we would like to thank all reviewers for their constructive feedback, which helped us strengthen our work.
Best regards,
Authors of submission 16440
Pdf: /pdf/a792cd968000f1b4dafa9897d4b7a470f2ed584b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Public-data Assisted Private Stochastic Optimization: Power and Limitations | Accept (poster) | Summary: The paper presents some new lower bounds for public-data assisted private SCO, when some public examples are available, either with or without labels.
In the unlabeled case, a simple algorithm for GLM assisted with unlabeled public examples is presented and analyzed (and it is shown that this removes a dimension dependent term, similar to previous results).
Strengths: - The problem is important.
- The new results shed additional light on the limits of public-data assisted private SCO.
- In particular, they partially close the gap between lower and upper bounds, in certain regimes of $d$ and $n_{pub}$.
- There is good commentary following most of the results.
Weaknesses: - The presentation is poor. Little effort was made to summarize the main results (and how they improve on previous bounds) in an easily accessible form. This could be remedied with a table summarizing assumptions, new and existing results, conditions on $n_{priv}, n_{pub}, \epsilon, d$, ranges in which the bounds are tight, etc.
- For example, it was unclear from the abstract and introduction in which regimes the new bounds are tight. This is only discussed later in the text. Similarly, there are boundedness assumptions that should have been clarified early on.
- In Theorems 3, 6, 7, it seems there are missing conditions on $n_{pub}$. For example, according to the theorem statement, nothing prevents one from taking $n_{pub} = 1$, or even 0, and it seems one would still get the improved bound. This should not be the case. Please clarify in the rebuttal.
- The significance of Theorem 3 is unclear. Similar algorithms and similar improvements of dimension dependence were obtained in other works. The authors mention line 245 that their result avoids "many of the strong assumptions seen in previous work" without giving any details. What are these assumptions and how does Theorem 3 compare to existing results?
- Corollaries 2 and 3 are frankly hard to interpret, and their significance is unclear to me. Maybe more effort is needed in this section.
- The general message of the paper, that "more sophisticated attempts at leveraging public data will yield no benefit" (in the labeled case), should be toned down a bit. While the new bounds indeed show some fundamental limits, these come under assumptions, and improvements were in fact shown for certain classes of problems like linear regression, as in some of the papers that were cited lines 28-29, (most of these were cited without discussion unfortunately).
Minor:
- What do the authors mean by "the non trivial regime $n_{pub} = \Theta(n)$ and $n_{pub} = o(n)$"? Do you mean either of these is true? (if so, it would be clearer to use or, not and).
- There are minor formatting issues like a broken reference line 359 and some typos.
- In the discussion following Cor. 3, how do the $\log d$ terms appear?
Technical Quality: 3
Clarity: 1
Questions for Authors: Please see questions in the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: As mentioned above, the paper should be clearer about its assumptions, regimes in which its bounds are tight, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * *Presentation:*
We will add a table to our updated version as you suggest and clarify tightness of the rates. Please see the global response PDF for this table.
* *Missing conditions on $n_\text{pub}$ :*
Theorems 3, 6, and 7 claim the rate is achievable given access to *some* number of public samples which is at most $n_\text{pub} = \tilde{O}(n_{\text{priv}}\epsilon)$. We will clarify the phrasing of these theorems to make this more clear.
* *The significance of Theorem 3 is unclear. Similar algorithms and similar improvements of dimension dependence were obtained in other works. The authors mention line 245 that their result avoids "many of the strong assumptions seen in previous work" without giving any details. What are these assumptions and how does Theorem 3 compare to existing results?*
With regards to the [[PHYS24]](https://arxiv.org/pdf/2306.03962) paper we cite, in addition to the assumptions we make, [PHYS24] additionally requires that the underlying distribution satisfies both a large margin condition and a low rank assumption (see Definition 3 and Theorem 1 of their paper). Further, they also only provide a statement for the privacy regime $\epsilon = o(1)$. We will add these details to the revision.
* *Corollaries 2 and 3 are frankly hard to interpret, and their significance is unclear to me. Maybe more effort is needed in this section.*
Corollary 2 shows that with enough public data, it is possible to privately learn feed-forward neural networks without paying any explicit penalty for the network width (the rate only scales with the norm of the weights).
Corollary 3 shows that (with access to public data) it is possible to obtain dimension-independent rates for constrained non-Euclidean GLMs. In both cases, such a width/dimension independence was previously unknown. We will add such comments to the revision.
* *The general message of the paper, that "more sophisticated attempts at leveraging public data will yield no benefit" (in the labeled case), should be toned down a bit. While the new bounds indeed show some fundamental limits, these come under assumptions, and improvements were in fact shown for certain classes of problems like linear regression, as in some of the papers that were cited lines 28-29, (most of these were cited without discussion unfortunately).*
As with all theoretical works, our results come with certain assumptions, and our statements are made with respect to those assumptions. The class of problems we study for our lower bounds, Lipschitz losses over compact parameter space, are one of the most commonly studied classes in the DP optimization literature. Further, we do not believe the overall tone of our paper suggests that more sophisticated algorithms are useless in other settings. To the contrary, half our paper is devoted towards showing that in a slightly different setting (unlabeled public data) more sophisticated algorithms *are* useful.
* *What do the authors mean by "the non trivial regime $n_{\text{priv}} = \Theta(n)$ and $n_{\text{pub}} = o(n)$"? Do you mean either of these is true? (if so, it would be clearer to use or, not and).*
We mean "and" for this statement. The regime $n_{\text{priv}} = \Theta(n)$ and $n_{\text{pub}} = o(n)$ is the only non-trivial regime, at least if one is interested in asymptotics. This is because if either condition is not true, one can discard the private dataset and still have a dataset of size $\Theta(n)$; i.e., one does not have to worry about privacy. This is obvious if $n_{\text{pub}}=o(n)$ is not true. If $n_{\text{priv}}=\Theta(n)$ is not true, then it must also be the case that $n_{\text{pub}} = \Theta(n)$ since by definition $n = n_{\text{pub}} + n_{\text{priv}}$. We will elaborate in the revision.
* *In the discussion following Cor.3, how do the $\log(d)$ terms appear?*
The $\log d$ term appears from the choice of the function $\Delta(w) = \frac{\log{d}}{2}\Vert w \Vert_{1+1/\log{d}}^2$ -- see the discussion at line 366 of the paper. This choice is standard in the $(\ell_1,\ell_\infty)$ case -- see the referenced work [FGV17]. We will revise the Corollary and the discussion following it to make this clear.
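For completeness, a sketch of where the factor comes from (assuming the standard squared-norm form of the regularizer and natural logarithms; cf. [FGV17]):

```latex
% With p = 1 + 1/\log d, take
\Delta(w) = \frac{\log d}{2}\,\|w\|_p^2 .
% Since \frac{1}{2(p-1)}\|\cdot\|_p^2 is 1-strongly convex w.r.t. \|\cdot\|_p
% and here \frac{1}{p-1} = \log d, this \Delta is 1-strongly convex w.r.t. \|\cdot\|_p.
% Moreover, for this choice of p,
\|w\|_p \le \|w\|_1 \le d^{1-1/p}\,\|w\|_p = d^{1/(\log d + 1)}\,\|w\|_p \le e\,\|w\|_p ,
% so the two norms are equivalent up to the constant e, and strong convexity
% transfers to the \ell_1 norm up to a constant, at the price of the \log d factor.
```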
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and clarifications.
- The discussion about Corollary 2 and 3 is helpful and should be expanded in the paper.
> Theorems 3, 6, and 7 claim the rate is achievable given access to some number of public samples
- Do you mean that there should be a lower bound on $n_{pub}$ (in addition to the upper bound) that is missing from the statements of these theorems? Please clarify.
- Regarding the non trivial regime: It appears there is a typo in the paper that led to my misunderstanding (both conditions applied to $n_{pub}$, instead of $n_{pub},n_{priv}$...) It is clear now (but please make sure to correct the typo).
- Regarding the message of the paper: are the authors willing to rephrase some of the statements made in the paper? For example, the abstract claims that "the simple strategy of either treating all data as private or discarding the private data, is optimal". This should be properly qualified. Similarly for the statement "more sophisticated attempts at leveraging public data will yield no benefit".
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will add the discussion about Corollary 2 and 3 to the paper. We will also fix the typo regarding $n_{pub}$.
To further clarify Theorems 3, 6, and 7, there are alternative ways to state the condition on $n_{pub}$. For example, we can either state it as 1) "There exists some number $n_{pub} = \tilde{O}(n_{priv} \epsilon)$ such that ..." or 2) "There is a constant c > 0 s.t. for any $n_{pub} \geq c n_{priv} \epsilon$, we have ..." To be clear, our intention for what is written in the paper is *not* an upper bound of the form "for any $n_{pub} = \tilde{O}(n_{priv} \epsilon)$" but rather an upper bound on the number of samples the algorithm would ever need. To make it clearer, we can replace what is in the paper with "Then there exists some $n_{pub} = \tilde{O}(n_{priv} \epsilon)$..." or "Then there exists a universal constant c such that for any $n_{pub} \geq c n_{priv} \epsilon$..."
With regards to the message of the paper, we are happy to be more explicit about our claims in the abstract. Specifically, we can say that our claim applies to the problem of stochastic convex optimization of Lipschitz functions and our claim is made with respect to asymptotic rates. Please let us know if there are other assumptions worth enumerating in the abstract. | Summary: The paper investigates a public-data-assisted differential privacy problem. Firstly, for labeled public data, the author introduces a novel mean estimation lower bound of $\tilde{\Omega}\left(\min\left\{\frac{1}{\sqrt{n_{\text{pub}}}}, \frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n \epsilon}\right\}\right)$. Secondly, when considering unlabeled public data with $n_{\text{pub}} \geq n_{\text{priv}}$, the paper presents a dimension-independent rate of $\tilde{O}\left(\frac{1}{\sqrt{n_{\text{priv}}}}+\frac{1}{\sqrt{n_{\text{priv}} \epsilon}}\right)$ given $\tilde{O}\left(n_{\text{priv}} \epsilon\right)$ unlabeled public data. Additionally, the results for unlabeled public data are extended to general hypothesis classes with bounded fat-shattering dimensions.
Strengths: 1. The paper provides a technically solid and well-structured analysis of the public-data-assisted differential privacy problem. The most intriguing part is the Private Supervised Learning with Unlabeled Public Data. In this setting, the paper leverages $n_{\text{pub}} \geq n_{\text{priv}}$, showing a dimension-independent rate and demonstrating that additional unlabeled public data cannot further improve the results, because unlabeled public data can only reveal information about the marginal distribution.
2. The emergence of the dimension-independent rate from "public data can be used to identify a low-dimensional subspace, which under the appropriate metric acts as a cover for the higher-dimensional space," is particularly interesting. Could the author provide more explanations and insights on this?
3. Algorithm 1 projects the private data to an orthogonal space, derives the solution, and then projects back to the original space to obtain the result. This approach is novel to me, but I have some questions below.
Weaknesses: N/A
Technical Quality: 4
Clarity: 3
Questions for Authors: For Algorithm 1, why does the author perform the projection before applying the differential privacy subroutine? What is the function of the projection here, or how does it help the result? I am not sure if it is because the author uses the unlabeled dataset to construct a data space.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Refers to the weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * *Response to Questions:*
Differentially private subroutines often incur some penalty proportional to the problem dimension, $d$. Thus, at a high level, the reason for performing the projection before applying the DP subroutine is to ensure that the penalty does not scale with $d$.
Indeed, we show that one can reduce the dimension of the problem (using public data) to the extent that, no dependence on $d$ (the original dimension) shows up in the final rate. This is because the effective dimension after projection does not exceed $n_{pub}$.
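As a loose illustration of this idea (the mean-estimation task, the privacy accounting, and all names here are our own stand-ins, not the paper's Algorithm 1): project onto an orthonormal basis of the span of the public points, run a standard Gaussian-mechanism step in the low-dimensional space, and map back.

```python
import numpy as np

def dp_mean_in_public_subspace(private_X, public_X, eps, delta, clip=1.0, rng=None):
    """Toy sketch: estimate a mean privately after projecting onto the span
    of the public (unlabeled) points, so the noise added by the Gaussian
    mechanism lives in k <= n_pub dimensions rather than the ambient d."""
    rng = np.random.default_rng() if rng is None else rng
    # Orthonormal basis U (d x k) for the subspace spanned by the public data.
    U, S, _ = np.linalg.svd(public_X.T, full_matrices=False)
    k = int(np.sum(S > 1e-10))
    U = U[:, :k]
    # Project private points into the k-dimensional subspace and clip norms.
    Z = private_X @ U                                   # shape (n_priv, k)
    norms = np.maximum(np.linalg.norm(Z, axis=1), clip)
    Z = Z * (clip / norms)[:, None]
    # Gaussian mechanism on the k-dimensional mean (sensitivity 2*clip/n).
    n = len(private_X)
    sigma = (2 * clip / n) * np.sqrt(2 * np.log(1.25 / delta)) / eps
    noisy_mean = Z.mean(axis=0) + rng.normal(0, sigma, size=k)
    # Map the low-dimensional answer back to the ambient space.
    return U @ noisy_mean
```

The error contributed by the noise then grows with the effective dimension $k \le n_{\text{pub}}$ rather than with the ambient dimension $d$, which is the point of performing the projection before the DP subroutine.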
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I will maintain my current score. | Summary: This paper studies the effectiveness of using public data to assist private convex optimization. Private convex optimization gives the solver query access to a convex function $f:W\times X \to \mathbb{R}$ and samples $S$ drawn i.i.d. from some unknown distribution $D$. The solver is asked to find some point $w\in W$ that minimizes the population risk $\mathbb{E}_{x\sim D}[f(w,x)]$, but to do so in a manner that is private with respect to the sample $S$. This is an important problem that has received much interest in the community. The notion of using public data expected to be drawn from the same distribution has been explored earlier, and the authors continue this line of work.
The main contributions of this work are:
1. Stronger lower bounds than prior work, showing that in the general setting, even for approximate DP, there is a dependence on the dimension $d$ of the sample space $X = \mathbb{R}^d$ (typically one assumes data is drawn from some bounded $d$-dimensional ball in the DP setting). They also complement prior work and extend a lower bound for the pure DP setting that previously held only for a limited range of values. These lower bounds show that the naive approach of either treating all data (public and private) as private or ignoring the private data and using SCO on the public data is asymptotically optimal.
2. A new algorithm for the unlabelled setting showing that one can gainfully use unlabelled public data to perform dimension reduction for labelled private data and achieve a risk bound that has no dependence on the ambient dimension. This is in contrast to the labelled setting where their lower bounds showed a dependence on the ambient dimension $d$.
Strengths: 1. I think this problem is well-motivated and of interest to the DP community.
2. The asymptotic separation between the labelled and unlabelled settings is interesting, and the improvement of lower bounds compared to prior work is also good to see.
3. The paper is in general well-written and easy to follow.
Weaknesses: 1. I think that in this case it becomes important to understand the exact constants that occur in the upper bound in the labelled case - even if asymptotically one does not hope to see any improvement over the naive strategies, I would imagine that in practice one still does gain some improvement in the public case. Maybe this point could be better made.
2. Although the analyses seem to be formal and strong (and relatively assumption free), there are not necessarily any new high-level ideas in this work (if you would like to emphasize any new highlights or unexpected outcomes that would be great to see though). The technical work is still interesting however.
Technical Quality: 3
Clarity: 3
Questions for Authors: There is one point that I don't understand; I think I might be confused about the nature of the results. In principle, if one treats public labelled data as unlabelled and attempts to follow a similar approach of using them to learn a good lower dimensional representation of the labelled private data, is it easy to see why this does not lead to an asymptotic improvement over ignoring the public data entirely, or treating it as private?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No additional limitations need to be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * *Response to Weakness 1:*
We completely agree exact constants are important here, and we will make a note to better emphasize this. However, we do not believe this devalues the importance of characterizing the asymptotic rates, as we do in our paper. Most of the existing understanding of (private and non-private) optimization is in terms of asymptotic rates.
* *Response to Weakness 2:*
Many of the novel ideas in our paper are more technical in nature. For example, in our lower bound proof, while we leverage the pre-existing framework of fingerprinting codes, our proof involves several novel steps to overcome the challenges of applying this framework to PA-DP settings. For example, we observe it is crucial to use fingerprinting distributions with small mean and also rely on a clipping analysis when getting the $\sqrt{\log(1/\delta)}$ improvement in the lower bound. This analysis is different from the way fingerprinting codes have been used previously. We include further discussion of these novelties in Appendix B.2, but did not have space to include this in the main body. We can include this in the final version using the extra space given.
With regards to the proof ideas in the unlabelled public data section, while it indeed bears similarities to existing work of [[ABM19]](https://dl.acm.org/doi/pdf/10.5555/3454287.3455215), which we have acknowledged in the paper, there are several novel components in the analysis; for instance, the analysis of the size of the cover required. We perform this analysis by posing it as another supervised learning problem in the realizable setting with squared loss.
Using optimistic rates for the squared loss [[SST08]](https://papers.nips.cc/paper_files/paper/2010/file/76cf99d3614e23eabab16fb27e944bf9-Paper.pdf), we achieve a fast $1/n$ rate for this intermediate learning problem, which is crucial in achieving optimal rates. This is in contrast to the analysis in [[ABM19]](https://dl.acm.org/doi/pdf/10.5555/3454287.3455215), which is done via VC-dimension-based counting arguments.
* *Response to Questions:*
This could lead to an improvement over ignoring the *public* data entirely, but will not lead to an improvement over ignoring the *private* data entirely. That is, one incurs a $\frac{1}{\sqrt{n_{\text{pub}}}}$ penalty due to the error in the subspace approximation made using $n_{\text{pub}}$ points. Note that this is never better than the $O(\frac{1}{\sqrt{n_{\text{pub}}}})$ rate achieved by ignoring the private dataset and running some optimal non-private optimization algorithm.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your rebuttal! I will keep my score. | null | null | Rebuttal 1:
Rebuttal: See the attached pdf for the lower bound table that will be added to the revision.
Pdf: /pdf/7553660fd9ab4f5dd36e7a113513fc739be4f551.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reinforcement Learning with LTL and $\omega$-Regular Objectives via Optimality-Preserving Translation to Average Rewards | Accept (poster) | Summary: The authors present a theoretical framework for learning $\omega$-regular properties in unknown MDPs through average reward optimization (via an "optimality preserving" approach).
Compared to previous work, the approach allows for multi-chain MDPs.
The idea is to formalize the property through a deterministic Rabin automaton, construct an auxiliary model (a product MDP) that encodes the state space and the transitions of this automaton into those of the original MDP, and incorporate the resulting model into a reward machine.
Rewards are then generated according to the current knowledge of the underlying graph of the product MDP.
The reward machine can be constructed on the fly, which enables its optimization through RL.
The authors then show that optimizing the discounted return objective boils down to (i) optimizing the average reward objective, which in turn implies (ii) optimality-preserving learning of the $\omega$-regular property.
Finally, convergence guarantees are proved through a sequential algorithm making calls to any PAC-MDP algorithm for discounted return, eventually decreasing the error and confidence terms to 0 as the algorithm progresses.
Strengths: The authors answer questions that were left open: (i) learning $\omega$-regular properties through a limit average reward objective in an optimality-preserving way is not possible with Markovian rewards, (ii) this is in contrast made possible by using finite-spaces reward machines, and (iii) in RL, one can design such an algorithm that converges in the limit.
Overall, the paper is well-written and the proof sketches in the main text effectively allow the reader to grasp the main ideas of the proofs presented in the appendix.
Weaknesses: First, although the paper is quite well written, I think that for somebody unfamiliar with all the concepts presented, the paper might be difficult to follow. For instance, the authors build upon accepting end components to detect when the specification is satisfied. However, the authors do not mention or discuss the reason why it is possible to do that in practice. Specifically, I would expect the authors to mention that identifying ECs can be done through purely graph algorithms, which makes them affordable to consider in this context.
Second, I found that the authors do not really take into account practical considerations in the proposed approach. For instance, there is no mention of how to detect minimal ECs. I mention other issues that I believe should be clarified in the "Questions" section.
Finally, there is no evaluation. I understand that this is intentional due to the theoretical nature of the paper. However, after reading the text, I am eager to know if the approach works in practice, as learning to satisfy LTL specifications is an ongoing challenge in RL. Furthermore, Algorithm 1 is an (unrealizable) theoretical algorithm but I would have loved to see a discussion or remark on how to implement such an algorithm in practice.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Is listing all minimal ECs not equivalent to listing all the possible deterministic policies in a maximal end component? If that's the case, even though the memory of the reward machine need not be too large due to the coverage sufficiency, detecting them might be costly and in the worst case exponential. I believe this question should be addressed and at least discussed in the main text.
- Section 5 is a bit ambiguous to me. In particular, I don't understand how Lemma 13 applies without fixing the set of visited transitions $E$. By looking at the Appendix, I understand that either the set $E$ is fixed, or it changes over time steps, i.e., the set is defined as $E_t$ for $t \geq 0$. If this is the case, then the reward machine $\mathcal{R}$ should take this time step into account in its definition, and I expect to see this in the main text. Could you please clarify?
- Could you please clarify what happens if the current algorithm believes to act in an EC (according to $E$) that is actually _not_ an EC? For instance, what happens if a region of the system is detected at the current time step as being an EC that has a *very* small probability of being left?
- Do you have any insights on how to design a *practical* algorithm that preserves the theoretical guarantees in the limit?
### Remarks and suggestions
- In Section 6, I was confused because I did not understand why considering a discounted return objective is relevant for the average return. I had to check in the appendix to see the reference, mentioning that when the discount $\gamma$ goes to one, $(1 - \gamma) \mathcal{J}\_{\mathcal{R}^{\gamma}}$ coincides with $\mathcal{J}\_{\mathcal{R}^{\text{avg}}}$. This should be explicitly stated in the main text!
- line 131: everyone => every?
- for the reference that LTL is not PAC-learnable, I would also cite [1]
- line 177: it seems that you used $R_i$ for denoting the sets of states visited finitely often and then suddenly switched to $B_i$.
[1] Hugo Bazille, Blaise Genest, Cyrille Jégourel, Jun Sun: Global PAC Bounds for Learning Discrete Time Markov Chains. CAV (2) 2020: 304-326
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations have been addressed and further discussed in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We will take their suggestions into account when revising the paper. In particular, we will use the additional page for the camera-ready version to elaborate on practical aspects and the construction in Sec. 5.
Next, we address the reviewer’s questions (numbered 1-4).
# Practical Aspects: Constructing the Reward Machine (Question 1)
ASECs yield (partial) deterministic policies. Our construction considers a collection of ASECs covering all states in ASECs (l. 205). It does not necessarily require listing all possible ASECs but only (up to) one ASEC per state. It is unclear whether this can be obtained in polynomial time, but the focus of our submission was primarily on simplicity of presentation and clarity of the optimality-preserving translation. In the following, we present a slightly modified construction which is *efficient*. We will give a brief account in the revision.
We consider a different collection $\mathcal C_1,\ldots,\mathcal C_n$ of ASECs:
> Suppose $\mathcal C'_1,\ldots,\mathcal C'_n$ is a collection of AECs (not necessarily simple ones) containing all states in AECs. Then we consider ASECs $\mathcal C_1,\ldots,\mathcal C_n$ such that $\mathcal C_i$ is contained in $\mathcal C'_i$.
The definition of the reward machine in Sec. 4.2 and the extension in Sec. 5 do not need to be changed. Next, we argue the following:
1. This collection can be obtained efficiently (in time polynomial in the size of the MDP and DRA).
2. Lemma 10 and hence the correctness result (Thm. 10) still hold.
For 1. it is well-known that a collection $\mathcal C’_1,\ldots,\mathcal C’_n$ of maximal AECs (covering all states in AECs) can be found efficiently using graph algorithms [1, Alg. 3.1], [2,3] and [4, Alg. 47 and Lemma 10.125].
Subsequently, Lemma 19 can be used to obtain an ASEC contained in each of them. In particular, note that the proof of Lemma 19 immediately gives rise to an efficient algorithm. (Briefly, we iteratively remove actions and states whilst querying reachability properties.)
For 2., the first part of Lemma 10 clearly still holds. For the second, we modify policy $\pi$ as follows: Once $\pi$ enters a maximal accepting end component, we select an action on the shortest path to the respective ASEC $\mathcal C_i$ inside $\mathcal C'_i$. Once we enter one of the $\mathcal C_i$, we follow the actions specified by the ASEC as before. Observe that the probability that an AEC is entered under $\pi$ is the same as the probability that one of the $\mathcal C_i$ is entered under the modified policy. The lemma, and hence Thm. 10, follow.
[1] Luca Alfaro. Formal Verification of Probabilistic Systems. PhD thesis, 1998
[2] Fu & Topcu. Probably approximately correct MDP learning and control with Temporal Logic constraints. 2014
[3] Chatterjee & Henzinger, “Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification”, 2011
[4] Baier & Katoen. Principles of Model Checking. 2008
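To make the graph-algorithmic nature of this step concrete, here is a small self-contained sketch of MEC decomposition by repeated SCC computation (our own illustration in the spirit of [1,4], not the exact pseudocode of those references; only the transition supports matter, not the probabilities):

```python
def strongly_connected_components(nodes, edges):
    """Kosaraju's algorithm, iterative. `edges[n]` is the set of successors."""
    order, seen = [], set()
    for root in nodes:                       # pass 1: DFS finishing order
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(edges[root]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(edges[nxt])))
                    break
            else:
                order.append(node)
                stack.pop()
    rev = {n: set() for n in nodes}          # pass 2: reversed graph
    for s in nodes:
        for t in edges[s]:
            rev[t].add(s)
    sccs, seen = [], set()
    for root in reversed(order):
        if root in seen:
            continue
        comp, stack = set(), [root]
        seen.add(root)
        while stack:
            n = stack.pop()
            comp.add(n)
            for p in rev[n]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        sccs.append(comp)
    return sccs

def mec_decomposition(states, actions, succ):
    """Maximal end-component decomposition: repeatedly compute SCCs of the
    graph induced by the enabled actions, prune actions whose support may
    leave their SCC, and remove states with no action left."""
    enabled = {s: set(actions[s]) for s in states}
    alive = set(states)
    while True:
        edges = {s: {t for a in enabled[s] for t in succ[(s, a)] if t in alive}
                 for s in alive}
        sccs = strongly_connected_components(alive, edges)
        comp = {s: i for i, c in enumerate(sccs) for s in c}
        changed = False
        for s in list(alive):
            for a in list(enabled[s]):
                if any(comp.get(t) != comp[s] for t in succ[(s, a)]):
                    enabled[s].discard(a)    # action may leave the SCC
                    changed = True
            if not enabled[s]:               # s cannot stay inside any EC
                alive.discard(s)
                changed = True
        if not changed:                      # fixpoint: SCCs are the MECs
            return [(c, {s: enabled[s] for s in c}) for c in sccs]
```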
# Construction in Sec. 5 (Question 2/3)
Intuitively, in Sec. 5 we can eliminate knowledge of the transitions with positive probability because for a run we almost surely see enough transitions of the true product MDP to determine whether it reaches and stays in an AEC.
This is due to the well-known result that with probability 1, the states and actions occurring infinitely often in runs constitute an end component (not necessarily an accepting one) w.r.t. the true MDP (Lemma 20). Moreover, for accepting runs this EC must clearly be accepting.
Therefore, our reward machine tracks the set $E$ of transitions in the product MDP encountered earlier in a run as part of its state, and $E$ is used to assign rewards, which enforce staying in one of the dedicated ASECs relative to the current knowledge $E$. Whenever a new transition is seen, the RM resets the status flag and we will need to follow one of the ASECs w.r.t. the updated $E$. Importantly, in the setting of limit-average rewards, initial "missteps" do not affect the limit-average reward.
## Question 2
Further to the above explanations, we illustrate the evolution of the $E$-component of the RM’s state over time:
Initially, this $E$-component of the state is $\emptyset$. Over time, as new transitions are observed in a run, $E$ grows monotonically. Since the MDP is finite, at some time step all transitions in a run have been seen and $E$ does not grow further.
The reward machine in Sec. 5 does take the current $E$ into account and rewards transitions based on the dedicated ASECs w.r.t. $E$. (Note the superscripts in the definition following l. 257.) There is no need to consider the number $t$ of the time step.
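For concreteness, the update just described can be sketched as follows (this encoding is purely our own illustration and not the exact definition following l. 257; `dedicated_asecs` is a hypothetical helper returning, for the current knowledge $E$, a partial policy for the dedicated ASECs w.r.t. $E$):

```python
def make_reward_machine(dedicated_asecs):
    """Toy sketch of the Sec. 5 mechanism. The RM state is (E, flag), where E
    is the set of product-MDP transitions seen so far and the flag records
    whether we are currently following a dedicated ASEC w.r.t. E."""
    def step(rm_state, transition):
        E, on_track = rm_state
        s, a, s_next = transition
        if transition not in E:
            # New knowledge about the product MDP: grow E, reset the flag.
            return (E | {transition}, False), 0.0
        policy = dedicated_asecs(E)
        # Reward 1 exactly when the move follows a dedicated ASEC w.r.t. E.
        if policy.get(s) == a:
            return (E, True), 1.0
        return (E, on_track), 0.0
    return step
```

Since $E$ only grows and the MDP is finite, the flag-resets happen finitely often along a run, so they do not affect the limit-average reward.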
## Question 3
Suppose we follow one of the dedicated ASECs w.r.t. the transitions observed in the past (and hence receive rewards of 1). If this is not an ASEC w.r.t. the true product MDP, there exists a transition leaving it. As argued above, almost surely it will eventually be taken. (This may take a while if it has very small probability.)
Thereafter, the reward machine acts according to the updated set of possible transitions. The rewards obtained thus far do not affect the overall limit-average reward.
# Practical Algorithm with Convergence Guarantee (Question 4)
Algorithm 1 can be improved in practice by exploiting the internals of the RL algorithm `Discounted` with PAC guarantees and running it incrementally. For example, the model-based approaches of [5,6] first sample a sufficiently good approximation of the MDP’s transition probabilities before planning optimal policies on it. Instead of discarding earlier samples on the next invocation of `Discounted`, we can simply determine how many additional samples are required to achieve the more stringent guarantees for the updated parameters.
[5] Kearns & Singh. Near-optimal reinforcement learning in polynomial time. 2002
[6] Strehl et al. Reinforcement learning in finite MDPs: PAC analysis. 2009
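As a toy illustration of this incremental reuse (the schedule $\epsilon_k = 1/k$, $\delta_k = 2^{-k}$ and the per-state-action requirement $m(\epsilon,\delta) = c\,\log(1/\delta)/\epsilon^2$ are stylized stand-ins for the actual polynomials in [5,6]):

```python
import math

def incremental_sample_schedule(iters, c=1.0):
    """Sketch: with eps_k = 1/k, delta_k = 2^-k and a stylized per-state-action
    requirement m(eps, delta) = c * log(1/delta) / eps^2, only the difference
    m_k - m_{k-1} must be freshly drawn at iteration k; earlier samples are
    reused rather than discarded."""
    total, extras = 0, []
    for k in range(1, iters + 1):
        eps_k, delta_k = 1.0 / k, 2.0 ** (-k)
        m_k = math.ceil(c * math.log(1.0 / delta_k) / eps_k ** 2)
        extras.append(m_k - total)   # additional fresh samples for iteration k
        total = m_k
    return extras
```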
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. However, I still have a question. You mention that with probability one, if $E$ is not an EC, the algorithm should see the leaving transition. This is true, however with a PAC algorithm, you are given a finite number of samples. What happens if the transition is never seen? Can you bound the maximum error that this may incur?
Concerning the practical algorithm, this is interesting. I still have a question though; if you use the knowledge from the previous iterations, then each call to the PAC algorithm is dependent on the previous one. Is that not a problem?
---
Rebuttal 2:
Comment: We thank the reviewer for their ongoing discussion.
> However, I still have a question. You mention that with probability one, if $E$ is not an EC, the algorithm should see the leaving transition. This is true, however with a PAC algorithm, you are given a finite number of samples. What happens if the transition is never seen? Can you bound the maximum error that this may incur?
If we understand this question correctly, it is concerned about the correctness of the reward machine in Sec. 5, i.e. that the translation is optimality-preserving, as formalised in Thm. 9 and Lemma 10.
We believe there might be some conceptual misunderstanding: in the present paper we do not discuss algorithms that directly solve the RL problem with $\omega$-regular/LTL-objectives. Rather, we present an *optimality-preserving translation* to the more standard problem of RL with limit-average reward machines. Furthermore, we underpin the efficacy of this approach by providing an algorithm with theoretical guarantees for the latter problem. Crucially, the algorithm for limit-average reward machines is independent of the translation from $\omega$-regular/LTL-rewards.
We stress that for the *construction* of the reward machine we do not sample from the MDP. On the other hand, for the correctness of the reward machine we need to ensure that almost all *runs* are accepted by the product MDP if they receive a limit-average reward of 1 (Lem. 10.1).
It is important to note that since runs have infinite length they contain an infinite number of transitions. Hence, the probability that a run never takes a transition leaving a suspected EC is $0$.
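A one-line sketch of the underlying almost-sure argument (our paraphrase): if a run keeps revisiting the state-action pairs of a suspected EC forever and some leaving transition has probability $p > 0$, then

```latex
\[
  \Pr\bigl[\text{the leaving transition is never taken in } k \text{ visits}\bigr]
  \;\le\; (1-p)^k \;\xrightarrow{\;k\to\infty\;}\; 0 .
\]
```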
> Concerning the practical algorithm, this is interesting. I still have a question though; if you use the knowledge from the previous iterations, then each call to the PAC algorithm is dependent on the previous one. Is that not a problem?
Indeed, if the calls in Alg. 1 are incremental (re-using samples of earlier iterations), the failure probabilities in the iterations are no longer independent. However, for correctness of the overall algorithm we only need that the expected number of failures is finite. This is proven in Thm. 15 exploiting linearity of expectations, which even holds for non-independent random variables. | Summary: The paper studies the link between reinforcement learning with $\omega$-regular objectives to reinforcement learning with limit-average reward. It is shown that one cannot reduce RL with $omega$-regular objectives to RL with limit-average objectives by only replacing the reward function (Proposition 4), but that it is possible if one considers rewards given by finite-memory reward machines (Theorem 11), which is the main result of the paper. It is also shown that RL with limit-average rewards can be learned in the limit, thus implying that RL with omega-regular objectives can also be learned in the limit (Theorem 17).
Strengths: The paper is original in the sense that it provides theoretical tools for a new approach for RL with $\omega$-regular objectives by reducing it to the well-known case of limit-average rewards. This allows the many approaches designed originally for limit-average rewards to be applied to $\omega$-regular objectives. The idea of approaching RL with $\omega$-regular objectives by analyzing the end components of the product MDP with a DRA or LDBA representing the objective might not be new, but the way it is precisely used with accepting simple end components and reward machines is new and original to the best of my knowledge. I would judge the paper to be of high quality as it is well presented, the proofs are sound as far as I could check, and the context of the results is very well explained. The paper is easy to read and its more technical concepts are explained in a simple manner (Section 4 is a good addition in that sense). The results of the paper solve open problems given in [1] and contribute to building a theoretical framework for transforming specifications in RL, which is currently underdeveloped with many open questions, a line of work begun in [1]. Moreover, even though no practical applications of the results are presented, it is not unreasonable to expect that the main result might be directly applied in the case of finite MDPs with the support of the actions known. Therefore, the results are of high significance to the community.
[1]: Rajeev Alur, Suguman Bansal, Osbert Bastani, and Kishor Jothimurugan. A framework for transforming specifications in reinforcement learning. In Jean-François Raskin, Krishnendu Chatterjee, Laurent Doyen, and Rupak Majumdar, editors, Principles of Systems Design: Essays Dedicated to Thomas A. Henzinger on the Occasion of His 60th Birthday, pages 604–624, Cham, 2022. Springer Nature Switzerland.
Weaknesses: I do not think that the paper has any obvious weaknesses, but it could be improved by either providing a polynomial construction for the reduction from $\omega$-regular objectives to limit-average ones in the case where the probabilities of the MDP are unknown or providing a negative result.
Technical Quality: 4
Clarity: 4
Questions for Authors: I do not have any questions
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors addressed the limitations of their work in the Limitations section at the end.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments.
We agree with the reviewer that investigating the possibility of a polynomial translation for the general case (where the transitions with positive probability are not known) is a very interesting future direction.
---
Rebuttal Comment 1.1:
Comment: I'm happy with the answers provided by the authors. | Summary: This paper tackle several open problems in the field of specification driven learning using Reinforcement Learning (RL) algorithms. It proves that $\omega$-regular objectives can be translated in an optimality preserving manner to limit-average reward MDPs. A PAC-MDP convergence proof for limit-average reward MDPs is introduced as well.
Strengths: The paper reads well and presents a clear and logical progression towards the main results.
Weaknesses: The claim of the first proof of convergence for average reward MDPs may need to be further evaluated [1].
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How does the choice between Limit-Deterministic Büchi Automata (LDBAs) and DRAs to express $\omega$-regular languages affect the results [2]?
2. Although [1] does not consider the PAC-MDP setting, how relevant is their proof of convergence w.r.t. the results in the paper?
3. Minor typos
1. L131 everyone → every
2. L156 $\delta_d$ -> $\delta_u$ ?
3. L289 “sequence of discount sum”
References:
[1] Learning and Planning in Average-Reward Markov Decision Processes, Wan et al, ICML 2021
[2] From LTL to your favourite deterministic automaton, Kretínsky et al. In CAV (1), volume 10981 of Lecture Notes in Computer Science, pages 567–577. Springer, 2018.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The cost of running these transformations is not discussed (empirically or otherwise).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments and we will take their suggestions into account when revising the paper. We proceed to address the questions raised by the reviewer.
# Choice of DRAs vs. LDBAs
> How does the choice between Limit-Deterministic Büchi Automata (LDBAs) and DRAs to express $\omega$-regular languages affect the results?
Our construction of the reward machine relies on the representation of $\omega$-regular languages by DRAs (particularly their determinism). We do not know if there is an *efficient* reduction from LDBAs to limit-average reward machines. Note the presence of non-deterministic transitions in LDBAs, whereas reward machines are inherently deterministic. We will investigate whether an efficient reduction from LDBAs to limit-average reward machines is possible while avoiding the blow-up incurred by translating LDBAs to DRAs.
# Novelty of convergence for limit-average objectives
> Although [1] does not consider the PAC-MDP setting, how relevant is their proof of convergence w.r.t. the results in the paper?
Wan et al. [1] require the following (communicating) assumption: “For every pair of states, there exists a policy that transitions from one to the other in a finite number of steps with non-zero probability.”
This assumption generally fails for our setting, where in view of our Cor. 16, MDP states also track the states of the reward machine. For instance, in the reward machine in Fig. 2(a), it is impossible to reach $u_1$ from $u_2$.
Therefore, the paper [1] does not undermine the novelty of our approach for general MDPs.
We will add the reference to the list of similar works discussed in l. 341.
[1] Wan et al.: Learning and Planning in Average-Reward Markov Decision Processes, ICML 2021
[2] Kretínsky et al: Rabinizer 4: From LTL to your favourite deterministic automaton, CAV 2018.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and have no further questions. | Summary: This paper considers the problem of reducing temporal logic objectives to limit-average reward problems which enable using reinforcement learning to compute policies. The key contributions are an explicit construction and analysis for the setting where the MDPs transition support is known, followed by a relaxation which drops this assumption. The key theorems then provide a characterization of the optimality preserving nature of the reduction.
Strengths: The constructions and theorems are to my knowledge novel and sound. Further, the target problem of LTL policy synthesis in the model free setting is important and relevant to the Neurips audience.
Particularly interesting is the adaptive relaxation of the transition support knowledge. Further lemma 13's robustness implications further speak to the potential.
Weaknesses: My primary concerns for this paper are the (lack of) relation to prior work and the lack of empirical validation, even on a toy problem.
In particular, this work's goal and conclusions seem very similar to [1], which also shows a non-Markovian reduction for policy learning that seems to also be optimal-policy preserving with high probability and seems to apply to a larger problem space.
[1] Eventual Discounting Temporal Logic Counterfactual Experience Replay, 2023
If this relationship can be clarified, I could raise my rating.
My other nitpick is with the novelty of section 3. My understanding is that this result is known, for example [2].
[2] On the Expressivity of Markov Reward, 2022
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations section in paper is satisfactory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We will take their suggestions into account when revising the paper. In particular, we will clarify the relation to the additional references as detailed below.
# Translation to Eventual Discounted Rewards [1]
> In particular, this work's goal and conclusions seem very similar to [1] which also shows a non-markovian reduction policy learning which seems to also be optimal policy preserving with high probability and seems to apply to a larger problem space.
Voloshin et al. [1] present a translation from LTL objectives to eventual discounted rewards, where only strictly positive rewards are discounted. (In particular, they do not consider average rewards.)
Their main result (Thm. 4.2) is that
$$p-p'\leq 2\log(1/\gamma)\cdot m \qquad (*)$$
where
- $\gamma$ is the discount factor,
- $p$ is the optimal probability of LTL satisfaction,
- $p'$ is the probability of LTL satisfaction by the policy maximising eventual $\gamma$-discounted rewards, and
- $m:=\sup_\pi O_\pi$ is a constant which depends on the probabilities in the MDP.
This provides a bound on the *sub-optimality* w.r.t. LTL satisfaction of optimal policies for the eventual discounted problem for a given discount factor.
Besides, Cor. 4.5 (of [1]) concludes that for a given error tolerance $\epsilon$, a discount factor can be chosen guaranteeing that the sub-optimality bound (*) above is at most $\epsilon$, ***provided $m$ is known***. However, a priori, the constant $m$ is not available without knowledge of the MDP.
As such, this paper does not provide an optimality preserving translation which works regardless of unknown parameters of the MDP. Thus, it falls in a similar category as the approaches mentioned in ll. 335-9 in our discussion of related work.
# Expressivity of Markov Reward
> My other nitpick is with the novelty of section 3. My understanding is that this result is known, for example [2,3]
[2,3] study MDPs with discounted rewards and three types of task specifications: (i) a set of acceptable policies, (ii) a partial ordering on policies, and (iii) a partial ordering on runs.
Whilst their paper is concerned with the limited expressivity of reward functions, it neither covers LTL nor average rewards.
In particular, we do not think that our Proposition 4 is stated (or follows from the results) in [2,3]. They acknowledge this by stating the following when briefly mentioning LTL task specifications: “A natural direction for future work broadens our analysis to include these kinds of tasks.”
(Task specification (iii)—imposing a partial ordering on runs—is most closely related to LTL tasks. However, their framework enables the expression of far richer preferences. In particular, the task designer can distinguish all traces and is not constrained by the limited expressivity of LTL. This naturally makes finding a suitable reward function even significantly harder.)
[1] Cameron Voloshin, Abhinav Verma, Yisong Yue: Eventual Discounting Temporal Logic Counterfactual Experience Replay. ICML 2023.
[2] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward (Extended Abstract). IJCAI 2022
[3] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward. NeurIPS 2021
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications.
After reading the rebuttal and the other reviews, I’m inclined to revise my score to be higher.
I would encourage the authors to add the comparison and contextualization given in their rebuttal to the final draft if possible.
---
Rebuttal 2:
Comment: Dear Reviewer VjJR,
we thank the reviewer for their discussion. We will follow their recommendation and add the comparison and contextualisation when preparing the final revision.
We also appreciate the reviewer's indication to raise their score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures | Accept (poster) | Summary: This work presents a method for performing a shared component analysis (SCA) in the case of multi-modal unpaired data drawn from a linear mixture. This problem (and method) will be referred to as Unaligned SCA. Unaligned SCA is tackled by matching the probability distributions of the embedded (features) multi-modal data. More specifically, the authors draw inspiration from the traditional adversarial loss used in GANs, and formulate the component analysis algorithm as a min-max optimization problem where a discriminator (a neural network) is trained to maximize the confusion between the alignment of two representations (typically from two different modalities), and then the best alignment is sought in order to “fool” the current discriminator. The alignment matrices are structured so that the algorithm can distinguish between components shared across modalities and private components specific to each modality. The authors claim that the shared components can be identified up to the same ambiguities as those identifiable in the aligned case. Furthermore, the authors explain that while there are other methods that attempt to solve the Unaligned SCA, the conditions of the proposed algorithm are considerably milder.
The algorithm is then extended to cases where additional knowledge is present. First the algorithm is modified to accommodate the scenario where the data is generated by a single modality (uniform modality). Then, the algorithm is modified to accommodate the case where some data pairing is available (similar to a weakly supervised case). The authors show that by adding appropriate constraints the shared component can be identified under milder conditions.
The author provide first a theoretical analysis with numerical simulations, and then some concrete applications of Unaligned SCA for the problem of domain adaptation (same modality), Multi-lingual Information Retrieval (only in the appendix, same modality) and Single Cell Sequence Analysis (multi modality with and without pairing).
Strengths: - The work tackles an important area of research with potential applications that range from explainability to SSL to multi-modal problems.
- The method seems very flexible:
- The method proposed can work for completely unpaired data but if some paring is available it can take advantage of such additional knowledge.
- The method proposed is meant for multi-modal scenarios but it can also work in homogenous use cases.
- All parameters are well documented in the appendix and code was part of the additional material.
Weaknesses: I would divide the weaknesses into two groups: the empirical evaluation, about which I am fairly confident, and the theoretical analysis, about which, due to my limited knowledge in this field, I am less confident. I will share my concerns here, as resolving them might also be helpful for other readers in my position.
*Empirical Analysis.*
I find the empirical analysis weak. The multi-modal and unpaired scenario results are underwhelming, while I find the domain adaptation (same modality) results potentially problematic due to the use of CLIP as a pre-processing step and the lack of a strong recent baseline (the latter being less problematic than the CLIP issue).
- The only practical results where the data are multi-modal and unpaired are the ones shown in Figure 4 (first blue dot), where the accuracy is about 10%. Since this is the only case presented, it is unclear if the method, in practice, cannot cope with this scenario or if the specific problem chosen is particularly challenging (in which case other evidence would maybe be better).
- In the domain adaptation experiments the paper reads: “The images are pre-processed by the pretrained CLIP model [34] that uses ViT-L/14 transformer architecture.” It is not clear if ALL baselines used the CLIP embeddings as pre-processing, or if this was only done for the proposed algorithm. For fair comparison the same pre-processing should be applied to all algorithms.
- Additionally, CLIP is known to have been pre-trained on a large and diverse dataset, and there is a good chance it has been trained on Office-Home and Office-31 too, so it is difficult to appreciate the ability of the proposed method when using such a powerful pre-processing step (which, as I mentioned, might have been trained on these datasets, including their test sets). So making sure the pre-processing is equal is necessary; I would also encourage presenting results with a less powerful pre-processing step (e.g., something pre-trained on ImageNet, either supervised or SSL-style like SimCLR) in order to better distinguish the contribution of CLIP vs. every baseline and the proposed model.
- The authors use a lot of baselines for comparison; however, all these baselines seem to be fairly old (all before 2020?). I would suggest comparing with a stronger baseline (either check the leaderboard or consider these suggestions [1-4]; note that not all might be immediately applicable).
[1] D3GM (https://arxiv.org/pdf/2401.05465)
[2] CLUE (https://arxiv.org/abs/2010.08666)
[3] LAMDA (https://arxiv.org/abs/2208.06604)
[4] SDM (https://arxiv.org/abs/2203.05738)
*Theoretical analysis.*
I have struggled to follow the theoretical explanation. Specifically, I understand the rationale behind the formulation in eq. (7), but I would only fully agree with it if the samples were paired. I do not understand how (6b) holds for unpaired samples. I suppose this is the explanation currently presented before Assumption 1, but even after reading it I was left with the same question.
Technical Quality: 3
Clarity: 3
Questions for Authors: A satisfactory answers to these points could improve the "contribution" (and partially the "soundness") of the work.
- Was CLIP used as a pre-process step for all the baselines in the Domain Adaptation? If not, could the authors present those results by using the same pre-processing step? As I mentioned above ideally without using CLIP due to the potential contamination of the test sets.
- Could the author explain why 6(b) holds for unpaired samples?
Further addressing these less critical aspects could improve the "Presentation" score (and partially the contribution see first point)
- I believe the paper would be stronger with more recent baselines as suggested above.
- I find it unclear how Fig. 1 was created. More explanation would help the understanding.
- I find Fig 2 not clear: why are there c1 and c2 and only p1, whereas I was expecting p1 and p2 and a common (shared) c? Reading further, it might be that c1 and c2 are the two axes of the common space $c$. If this is the case, I’d make sure to clarify it.
- Assumption 1 is not clear to me. Is this saying that both points y1 and y2 live in the same sub-space within the span of $(c, p^{(1)}, p^{(2)})$? If so, in which way is this useful?
- The authors say the results are an average of 5 runs, which is great, but the standard deviation should also be reported.
- I find this sentence confusing: "First, it is unclear if (6b) could disentangle c from p(q). In general, Q(q)x(q) could still be a mixture of c and p(q) yet (6b) still holds (e.g., when both c and p(q) are Gaussian.)" First it says it is unclear if they can be disentangled, but the whole identifiability relies on the ability to disentangle them, no?
- In a couple of places in the manuscript the authors refer to an experiments where all the results are in the appendix. I would add a brief summary of the results so the main paper is self-standing (this happens in More Synthetic-Data Validation and Application (iii)).
- In a few places $p^{(2)}$ appears without a closed bracket, i.e., $p^{(2}$.
- Sometimes the authors used the comma (,) to separate thousands but other times they didn’t. I would use a uniform notation.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have identified and clearly stated three main limitations:
- The fact that the conditions presented are sufficient while the necessary are not yet known.
- The fact that the method works only for linear mixtures (which limits the expressivity).
- Lastly, the fact that the theoretical derivation assume infinite data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Empirical Results, CLIP, and Recent Baselines]**
**(i) “Only Practical Result is in Fig 4.”** We believe that there might have been some misunderstanding. Fig. 4 is used to validate Theorem 3. The blue dot markers suggest that conditions (a-b) in Theorem 1 might not be satisfied.
To clarify, the paper studied three settings, corresponding to Theorems 1, 2 and 3. The motivation for studying Theorems 2-3 is that conditions (a-b) in Theorem 1 seemed to be restrictive for many applications. Theorems 2 and 3 also work under **practical settings**, but with more structural information relative to Theorem 1. Theorem 2 uses homogeneity and Theorem 3 uses weak supervision; see similar settings in [R17,R20,R22]. Theorem 1 is validated in Fig. 3; Theorem 2 is validated in Sec. 6, applications (i) and (iii); and Theorem 3 is validated in Fig. 5 and Sec. 6, application (ii).
**(ii) Avoid Using CLIP and More Recent Baselines.** Thanks for this comment. The reviewer has a good point.
We agree with the reviewer that the CLIP was trained on a large and diverse dataset and could have included Office31 and Office Home as well. Hence, it may not be fair to run the methods over the CLIP-learned space. To address this issue, we follow the reviewer’s suggestion and use features from Resnet50 pre-trained over the ImageNet1k dataset. All methods use the same pre-trained features for a fair comparison.
As per reviewer’s suggestion we also added two recent baselines:
**ELS [R24]** (ICLR 2023)
**SDAT [R23]** (ICML 2022)
Note that we didn't use the reviewer-suggested active-learning baselines as they apply to different settings.
Please see Table 1 and 2 in the **attached PDF** for the new experiments.
**[Regarding Distribution Matching (Eq(7) and Eq(6b))]**
The first two terms (GAN loss) in Eq. (7) are used to enforce (6b), i.e., matching the distributions of the random variables $\bf{Q}^{(1)}\bf{x}^{(1)}$ and $\bf{Q}^{(2)}\bf{x}^{(2)}$ [R25].
We do not need paired samples to match the distributions. Intuitively, (7) “learns” $\bf{Q}^{(q)}, q=1,2$, in such a way that the samples of $\bf{Q}^{(1)}\bf{x}^{(1)}$ cannot be distinguished from the samples of $\bf{Q}^{(2)}\bf{x}^{(2)}$ in terms of distribution.
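As a minimal numpy sketch (our illustration only, not the authors' implementation), the key point is that a distribution-matching criterion compares the two sample *sets*, so the ordering of samples is irrelevant; below, a coarse empirical-CDF discrepancy stands in for the learned GAN discriminator in (7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unpaired "feature" samples from two modalities whose mapped features
# Q^(1) x^(1) and Q^(2) x^(2) share a distribution (standard normal here).
s1 = rng.standard_normal(20_000)
s2 = rng.permutation(rng.standard_normal(20_000))  # arbitrary order: no pairing

# Distribution matching only compares the two sample sets; a simple
# empirical-CDF comparison on a grid stands in for the discriminator.
grid = np.linspace(-3.0, 3.0, 13)
cdf1 = (s1[:, None] <= grid).mean(axis=0)
cdf2 = (s2[:, None] <= grid).mean(axis=0)
print(np.max(np.abs(cdf1 - cdf2)))  # small, despite the lack of pairing
```

Shuffling `s2` changes nothing here, whereas any pairwise loss (e.g., matching sample $i$ in one modality to sample $i$ in the other) would depend on the ordering.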
**[Questions Regarding Contribution and Soundness]**
**a.** CLIP: The reviewer has a great comment. No, the CLIP was not used for other methods. We agree with the reviewer that this should be rectified. We followed the reviewer’s suggestion and ran all methods using CLIP features (by throwing away the baselines’ original encoders). It turned out that all methods perform similarly well (**see Table 3 in the attached PDF**). As the reviewer pointed out, this is perhaps because CLIP memorizes too much information.
**Tables 1 and 2** in the attached PDF show new results where every baseline uses features from ImageNet1k-pretrained ResNet50. We hope these new results would alleviate this concern.
**b.** Please refer to **[Regarding Distribution Matching (Eq(7) and Eq(6b))].**
**[Questions Regarding Presentation]**
**a.** Please refer to **[Empirical Results, CLIP, and Recent Baselines]**
**b.** To create Fig. 1, we first sample $\bf{c} \sim \cal{N}(\bf{0}, \bf{I}), \bf{c} \in R^2$, and set $\bf{\Theta}^{(q)}, q=1,2$ to two different rotation matrices. A unique color is picked for each sample of $\bf{c}$ and used to color both $\bf{\Theta}^{(q)} \bf{c}, q=1,2$. The purpose of Fig. 1 is to illustrate that $ \bf{\Theta}^{(1)} \bf{c} \stackrel{(d)}{=} \bf{\Theta}^{(2)} \bf{c}$ even when $\bf{\Theta}^{(1)} \neq \bf{\Theta}^{(2)} $, which is clearly the case. We will make Fig.1 caption clearer.
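The construction described above can be reproduced in a few lines; the numpy sketch below (our illustration, with arbitrarily chosen rotation angles) checks that both rotated point clouds are again $\cal{N}(\bf{0}, \bf{I})$, i.e., equal in distribution, even though the two rotations differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotation(theta):
    """2-D rotation matrix; stands in for Theta^(q)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

c = rng.standard_normal((100_000, 2))  # samples of c ~ N(0, I)
y1 = c @ rotation(0.7).T               # Theta^(1) c
y2 = c @ rotation(2.1).T               # Theta^(2) c

# Rotating an isotropic Gaussian leaves its distribution unchanged, so
# Theta^(1) c and Theta^(2) c match in distribution although Theta^(1) != Theta^(2).
print(np.round(np.cov(y1.T), 2))  # both sample covariances are close to I
print(np.round(np.cov(y2.T), 2))
```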
**c. [Clarification of Assumption 1]**
(A1) aims to provide a way to characterize how different the latent distributions $P_{\bf{c},\bf{p}^{(1)}}$ and $P_{\bf{c},\bf{p}^{(2)}}$ are.
Let $S^{(q)}$ denote the set of all possible ``stripes'' $\cal{A}^{(q)}$ described in (A1) for domain $q=1,2$. Then, (A1) simply says that there should not exist any two stripes $\cal{A}^{(q)} \in S^{(q)}, q=1,2$ such that the two latent distributions assign the same probability to all scaled versions of stripes $\cal{A}^{(q)}$. This makes the two distributions sufficiently different from each other, needed to identify common and private information in the two domains.
Regarding $\bf{y}_1, \bf{y}\_2$: No, it is not saying that $\bf{y}\_1$ and $\bf{y}\_2$ live in the same subspace within the span of $(\bf{c},\bf{p}^{(1)},\bf{p}^{(2)})$. The vectors $\bf{y}_i^{(q)}$ are any linearly independent vectors in $\bf{R}^{d\_C+d\_P^{(q)}}$, whereas $\bf{c}$ and $\bf{p}^{(q)}$ are random vectors in $\bf{R}^{d\_C}$ and $\bf{R}^{d\_P^{(q)}}$, respectively.
The **use** of the vectors $\bf{y}\_1, \bf{y}\_2, \dots$ is to characterize the stripes $\cal{A}^{(q)}$, which are ultimately used to characterize the difference between the two latent distributions, $P\_{\bf{c},\bf{p}^{(1)}}$ and $P\_{\bf{c},\bf{p}^{(2)}}$.
**d.** Due to time constraints in the rebuttal process, we could only produce the standard deviation for a couple of transfer tasks in the new experiments. Table 2 (first 2 rows) shows the corresponding **mean and standard deviations** for (Ar $\to$ Cl, and Ar $\to$ Pr) using 5 trials. We will include the new experiments with multiple runs and standard deviation in the revised version.
**e.** Yes, identifiability implies disentanglement of $\bf{c}$ and $\bf{p}^{(q)}$. The two sentences highlight the challenge of ensuring this disentanglement by solving Problem (6), and provide an example where disentanglement fails, such as when $\bf{c}$ and $\bf{p}^{(q)}$ are Gaussian. This indicates that disentanglement is not always possible.
The purpose of our work is to derive precise conditions under which disentanglement can be ensured.
**f,g,h.** Thanks. We will fix the typos, and add a brief summary of the results in the main paper.
For references, please refer to the **References** section in the **Overall Response**.
---
Rebuttal Comment 1.1:
Comment: Thanks to these authors for their answers. I appreciate the explanations and the effort in running the domain adaptation experiments using my suggestions. I suggest using the ImageNet features as the main result in the paper and provide the CLIP ones in the Appendix (as it shows that a powerful feature extractor such as CLIP reduces most of the differences among all the algorithms).
There is still one aspect I do not fully understand. Could the authors state explicitly which of the three experiment settings (i), (ii) or (iii) shown in the paper are at the same time multi-modal and unpaired?
---
Reply to Comment 1.1.1:
Comment: (1) We will follow the reviewer’s suggestion and use ImageNet features as the main result in the paper, while moving updated CLIP experiments to the Appendix.
(2) All 3 applications (i.e., (i) image domain adaptation, (ii) single-cell sequence alignment, and (iii) multilingual embedding retrieval) are multimodal and unpaired. But their detailed settings vary. We proposed our unaligned SCA approach under three different settings, depending on how much structural information can be exploited.
**[Setting 1] Multimodal and unpaired:** The setting uses $\bf{x}^{(i)} = \bf{A}^{(i)} \bf{z}^{(i)}$ where $\bf{z}^{(i)}=(\bf{c},\bf{p}^{(i)})$ for modality $i$. The different $\bf{A}^{(i)}$’s and $\bf{p}^{(i)}$’s both represent the modality discrepancies. Synthetic data was used to validate the setting. We argued the condition needed here for identifiability was too strong for many applications. This was the motivation for us to consider Settings 2-3. (see Sec. 4 Line 203-208).
**[Setting 2] Multimodal and unpaired; the modalities share a homogeneous feature space (sometimes called multi-domain setting):** The setting uses $\bf{x}^{(i)} = \bf{A} \bf{z}^{(i)}$ where $\bf{z}^{(i)}=(\bf{c},\bf{p}^{(i)})$ for modality $i$. The modality/domain differences are captured by $\bf{p}^{(i)}$’s. Unlike Setting 1 where $\bf{A}^{(i)}$ varies across $i$, here the mixing systems $\bf{A}^{(i)}=\bf{A}$ for all $i$. This is often called a homogeneous multi-domain setting, which is a more special case of multimodal learning. This setting makes sense when the data $\bf{x}^{(i)}$ for all domains share the same feature space, often used in applications like image-to-image domain adaptation [R17] and image-to-image style translation [R21]. **Applications (i) and (iii) were used to validate the method under this setting**. Application (i) is on domain adaptation of images. The images from different domains are unpaired. Application (iii) is Multilingual retrieval problem. The modalities correspond to unpaired words in different languages.
**[Setting 3] Multimodal and largely unpaired (with a small number of paired data):** The setting uses $\bf{x}^{(i)} = \bf{A}^{(i)} \bf{z}^{(i)}$ where $\bf{z}^{(i)}=(\bf{c},\bf{p}^{(i)})$ for modality $i$. The vast majority of data are unpaired. But there exists a small number of paired data (for example, in our experiment of Fig 4, the total number of data in each domain is 1,874. We considered cases where 0 to 256 paired data exist, i.e., 13% at its maximum). This setting is considered realistic in applications such as [R22, R26]. **Application (ii) was tested under this setting**. It corresponds to the single-cell experiment. Modalities correspond to unpaired RNA sequences and ATAC sequences.
[R26] *Wang et. al., 2020. Semi-supervised Learning for Few-shot Image-to-Image Translation.*
---
Rebuttal 2:
Comment: Thank you for your explanation. I now think I understand where the misunderstanding comes from: by multi-modal I was expecting two completely different data modalities (e.g., image and sound). By this definition I thought the only true multi-modal setting was application (ii), with one type of data being RNA sequences and one being ATAC sequences. After a bit of further investigation I think even this setting is not quite multi-modal (as in two separate and different modalities), as I believe (but I am not a biologist) both sequences are actually made of the 4 DNA bases. Similarly, the other two applications are mono-modal: application (i) is images (taken from different types of cameras) and application (iii) is text (coming from different languages).
I appreciate that in all three applications the data have different distributions, but I think it should be made clear that the focus is not multi-modal settings (as in multiple modalities of data) but rather the same modality with a distribution shift, also known as domain adaptation. I believe changing the title and the explanation to domain adaptation rather than multi-modal would help set the right expectations for the reader. I would also encourage the authors to provide a short summary of the results of application (iii) rather than delegating everything to the appendix.
With all the additional experiments and clarifications provided during this rebuttal I believe the authors have increased the quality of their work to the following:
Soundness: 3: Fair -> Good
Presentation: 3: Fair -> Good
Contribution: 2: Fair
The reason for keeping the contribution as Fair is that:
- The method was tested only on domain adaptation experiments rather than what I thought was a multi-modality setting.
- It was shown in the additional experiments that using a powerful feature extractor (CLIP in this case) is arguably more beneficial than any sophisticated algorithm.
I am ok to increase my overall recommendation from 4 to 5.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their detailed and constructive discussion during the rebuttal.
The comment about multi-modality seems to be a terminology issue. We will follow the reviewer's suggestion and change the term to "multi-domain".
We will also add the summary of results of application (iii) in the main paper.
Nonetheless, the terminology issue doesn’t seem to affect our contributions. We believe that our contribution lies in providing rigorous understanding of the proposed unaligned multi-domain problem structure. Our synthetic and real-data experiments were designed to validate that understanding. The unaligned SCA problem is of great interest as a latent component analysis problem, like ICA, PCA and CCA. Its identifiability has been elusive and our work filled this gap.
We wonder if the reviewer could re-assess the contribution from the identifiability research viewpoint, rather than the “multi-modality” vs “mono-modality” viewpoint.
In any case, we sincerely thank the reviewer for the comments and discussion, and for pushing us to improve our experiments and presentation. | Summary: The paper considers the identifiability of shared components from a linear mixture. The theory requires multiple domains. However, compared to previous works, the required domains do not need to be aligned in this work. A practical estimation model has been proposed according to the theory.
Strengths: 1. The discussion on the related work is comprehensive.
2. The experiments have been conducted on both synthetic and real-world datasets.
3. The proposed algorithm looks pretty neat.
4. Limitations have been discussed in detail together with potential next steps.
Weaknesses: 1. Since there are already many works in learning shared components in the nonlinear setting, and some of them can even handle unpaired mixtures, the linear setting appears less appealing in comparison.
2. Assumption 1 is similar to the one used in the previous works, which should be highlighted earlier in the paper.
3. The discussion of the assumption of hyper-rectangle support is missing. Is it restrictive? Maybe some real-world examples could be helpful.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I didn't fully understand the proof in Line 780--does the usage of data processing inequality require a Markov chain? If so, has it been shown in the proof?
2. Could you please elaborate more on the connection between the proposed theory and previous work focusing on identifying content and style variables? It seems like they share a similar goal.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Regarding "Many Shared Component Learning Methods Exist"]** We would like to note that the existing **identifiability research** on shared component learning (including nonlinear mixture based ones) from **unaligned** multi-domain data is in fact rather limited (although empirical studies are abundant). In addition, the limited existing identifiability results cannot cover our settings, and thus study under our settings is still of its own significance. We reviewed these identifiability studies in the manuscript. We reiterate with a bit more details as below:
The work in [R16] considers identifiability of unaligned data learning using linear transformation. But they did not model shared and private latent components. Their identifiability conditions are also much more restrictive relative to ours (e.g., they require the 3rd-order moments of data are high-rank tensors, which is hard to satisfy and a little counter-intuitive).
The work of [R17,R20] considered a non-linear mixing function for the homogeneous data case. Compared to our work, [R17,R20] required at least a large number (i.e., $ 2 d_P + 1$) of domains, component-wise independence of latent variables, and much more stringent domain variability assumptions as discussed in Line 167-173 of the manuscript. Our method can work with as few as only 2 domains and the domain variability condition (A1) that we use is much more relaxed.
As detailed from Line 105-118, the work [R19] considered the same generative model as ours, but operated under much more stringent assumptions on the latent variables, such as all variables being component-wise independent and having unit variance. Both conditions are not needed under our framework.
The works [R2,R14,R18] all consider identifiability of the shared components under linear or nonlinear mixture settings. However, they all assume that the cross-domain data are aligned, which is significantly simpler than our setting.
**[Regarding the Rectangle Assumption]** We consider the assumption of hyper-rectangle support to be not very restrictive. Any collection of real-valued features could form a hyper-rectangle. For example, in image-to-image translation between animal faces (e.g., dog and cat images be the two domains) [R21], the position and orientation of the animal is generally the shared information whereas the appearance of the animal is the private information. If these two aspects are represented by two real-valued components in the latent space, then their supports could easily form a rectangle.
**[Questions]**
**Q1.** Yes, the data processing inequality requires a Markov chain. And we do have a Markov chain: $\hat{\bf{p}}^{(q)} \to \hat{\bf{c}}^{(q)} \to \bf{\Theta}^{-1} \hat{\bf{c}}^{(q)} = \bf{c} $. It is a Markov chain because conditioned on $\hat{\bf{c}}^{(q)}$, $\bf{\Theta}^{-1} \hat{\bf{c}}^{(q)}$ is a constant, and thus independent of $\hat{\bf{p}}^{(q)}$.
It was not explicitly written in the manuscript. We will add this clarification.
**Q2.** Please refer to **[Regarding “Many Shared Component Learning Methods Exist”]**
## References
[R2] Lyu et al., 2022. Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective.
[R14] Kugelgen et al., 2021. Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style.
[R16] Gulrajani et al., 2022. Identifiability Conditions for Domain Adaptation.
[R17] Xie et al., 2023. Multi-Domain Image Generation And Translation With Identifiability Guarantees.
[R18] Sorensen et al., 2021. Generalized Canonical Correlation Analysis: A Subspace Intersection Approach.
[R19] Sturma et al., 2023. Unpaired Multi-Domain Causal Representation Learning.
[R20] Kong et al., 2022. Partial disentanglement for domain adaptation.
[R21] Choi et al., 2019. StarGAN v2: Diverse Image Synthesis for Multiple Domains.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I would like to maintain my positive score. | Summary: This work considers a problem similar to classical Canonical Correlation Analysis (CCA), which assumes a linear generative model for data $(x_1, x_2)$: $x_1=W_1z$, $x_2=W_2z$ and aims to identify the underlying components.
This problem has been extended previously to include "private information": $x_1=W_1z_1, z_1=[c,p_1]$, $x_2=W_2z_2, z_2 = [c,p_2]$ for common $c$ and independent $p_j$.
The current work further assumes that data is *unpaired* and rather than mapping each pair ($x_1$, $x_2$) such that $z_1$, $z_2$ are close together (in some metric), it is proposed that all $x_1$ are mapped to be similar *in distribution* to the mapped $x_2$s.
Strengths: The paper aims to provide rigorous criteria in which underlying generative factors are identifiable in the extended CCA problem it tackles (unpaired CCA with private information).
The results show improvement over benchmarks indicating promise to the approach.
Weaknesses: High level:
* While I understand the basics I am not an expert in the area of CCA, but I find the paper fairly difficult to follow. More explanation would be helpful, e.g.
- [28] why is any linear mixture model ill-posed (is that strictly true in *every* linear mixture case?)
- [47] what is meant by "facilitating one-to-many translations", the context/meaning is unclear.
* The theoretical part of the paper relates to a simple linear model, but none of the experiments follow this model
- e.g. the algorithm is applied to CLIP embeddings, which are not "the data", so to make claims about a simple linear model z=Ax and then apply it to CLIP seems incongruous. These experiments seem to relate more to a CCA-based "loss function" that takes representations and looks to align them/encourage independent factors etc.
- other experiments appear to be on discrete data, which the method doesn't apply to; presumably these are also represented via some intermediate step?
- it seems strange to propose a simple linear model, present theoretical results about identifiability that rely on that simplicity but make a dramatic departure in the experiments where the assumptions clearly do not hold and the notion of identifiability is unclear.
* if the work does achieve an improvement in a CCA type setting, it seems appropriate to compare with other CCA methods on suitable data. Adding results on more complex data/representations may be of additional interest.
Assumption 1
- hard to parse and could be made more clear.
- unclear if correctly defined, don't vectors y need to be orthogonal to subspace P? It seems extremely loose to the point of simply saying $P_{c,p_1} \ne P_{c,p_2}$ (specifying where any difference lies to this might add clarity).
Theorem 1
- is this saying that if all dims of c are distributed differently, p(z)'s can only match (e.g. under GAN loss) by correctly aligning each dim? If so, that is pretty intuitive and it could be made clearer that you are putting that mathematically and proving it for the sake of rigour.
- I have not been through the proof, 5+ pages of proof without a sketch in the paper might be more suitable for a journal as appendices are not typically expected to be reviewed in detail.
Overall, there may be useful results in the paper, but in my view it should be re-written to make more clear what it is doing. It seems a confusing mix of simple linear generative model and related CCA methodology mixed with much more complex representations (e.g. CLIP) passed through a GAN + linear layer.
Technical Quality: 2
Clarity: 2
Questions for Authors: see weaknesses
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Linear mixture models (LMMs) are Ill-posed]**
In general, LMMs are not identifiable: for any $\bf{y}=\bf{A}\bf{x}$, where both $\bf{A}$ and $\bf{x}$ are unknown, one can find an infinite number of invertible $\bf{Q}$ such that $\bf{y}=\bf{AQQ}^{-1}\bf{x}$. Then, both $(\bf{A},\bf{x})$ and $(\bf{AQ},\bf{Q}^{-1}\bf{x})$ fit the data $\bf{y}$ equally well, making the problem ill-posed in terms of identifiability (i.e., solution uniqueness) [R3-R5]. In our case, we aim to identify two blocks in $\bf{x}$, i.e., $\bf{x}=[\bf{c},\bf{p}]$; the same ill-posedness remains.
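To make the ambiguity concrete, here is a minimal numerical sketch (our own illustration; variable names are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth linear mixture y = A x, with A and x both unknown to the learner.
A = rng.standard_normal((4, 3))    # mixing matrix
x = rng.standard_normal((3, 500))  # latent components
y = A @ x

# Any invertible Q gives an alternative factorization (AQ, Q^{-1} x)
# that reproduces the observations exactly.
Q = rng.standard_normal((3, 3))    # invertible with probability 1
A_alt = A @ Q
x_alt = np.linalg.solve(Q, x)      # Q^{-1} x

assert np.allclose(A_alt @ x_alt, y)
```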
**[One-to-many translations]**
Translation means changing the appearance of a sample in $\bf{x}^{(1)}=\bf{A}^{(1)}[\bf{c},\bf{p}^{(1)}]^T$ to its corresponding samples in the other domain. Note that the content is given by $\bf{c}$, and the appearances are controlled by $\bf{A}^{(q)}$ and $\bf{p}^{(q)}$. One-to-many translation means that one can combine the $\bf{c}$ extracted from $\bf{x}^{(1)}$ with many different $\bf{p}^{(2)}$; see examples in [R6].
**[Using CLIP as Pre-processing]**
Please note that we only used CLIP as pre-processing (analogous to using PCA in the old days). Powerful pre-processing tools like CLIP and pretrained vision models (e.g., ResNet) can map images to approximately linear subspaces [R7] (also see the linear probing experiments in [R8-R9]).
**[Theory-Experiment Consistency]**
We respectfully disagree with the comment that our experiments are a "dramatic departure" from our theory. Please note that our experiments in Figs. 3, 5, and 7 **exactly follow the LMM**. The experiments on images, single-cell data, and language data used proper pre-processing in order to map the data to linear subspaces, approximating our model and supporting the **usefulness** of the model in practice.
**[“Discrete Data”]**
No, we did not run experiments with discrete data. We applied our methods to the continuous feature representation space of the data. Those features were obtained from gene expression counts [R10] and fastText embeddings [R11] in the single-cell and multilingual experiments, respectively. Modeling such features as continuous random variables is common practice [R12,R13].
**[The Significance of Understanding the LMMs]**
Understanding the LMM-based unaligned SCA is the first step towards more complex models. This is how studies evolved from ICA [R3], PCA, and NMF [R4] (all LMMs) to provable nonlinear mixture models. However, there has been no existing theoretical support for the identifiability of the clearly important unaligned SCA model considered in this work.
**[Comparing with CCA]**
Let us clarify: CCA cannot work under our setting. The limitation of CCA is that it needs to know the one-to-one correspondence between multi-domain data. For example, for multilingual word alignment, it needs paired data $(\bf{x}^{(1)}\_\ell,\bf{x}^{(2)}\_\ell )\_{\ell=1}^L$, where $\bf{x}^{(q)}\_\ell,$ $q=1,2$, represent the same entity (e.g., "cat") in two languages. In our case, the data is not paired, i.e., $\bf{x}^{(q)}\_\ell,$ $q=1,2$, need not correspond to the same word. Hence, our method performs "unaligned SCA", whereas CCA is essentially "aligned SCA"; their settings are fundamentally different.
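To illustrate CCA's reliance on alignment, here is a minimal numpy sketch (our own toy construction, not the paper's setup): shuffling one view leaves its marginal distribution untouched but destroys the pairing, and the top canonical correlation collapses accordingly.

```python
import numpy as np

def top_canonical_corr(X, Y):
    """Largest canonical correlation between row-paired views X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    def inv_sqrt(C):  # inverse matrix square root of a positive-definite matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    K = inv_sqrt(X.T @ X) @ (X.T @ Y) @ inv_sqrt(Y.T @ Y)
    return np.linalg.svd(K, compute_uv=False)[0]

rng = np.random.default_rng(0)
n = 2000
c = rng.standard_normal(n)  # shared component across the two views
X = np.c_[c, rng.standard_normal(n)] @ rng.standard_normal((2, 2))
Y = np.c_[c, rng.standard_normal(n)] @ rng.standard_normal((2, 2))

rho_paired = top_canonical_corr(X, Y)                        # alignment intact
rho_shuffled = top_canonical_corr(X, Y[rng.permutation(n)])  # alignment destroyed

assert rho_paired > 0.9 and rho_shuffled < 0.2
```

Distribution matching, by contrast, sees the same empirical distribution before and after the shuffle, which is the sense in which unaligned SCA sidesteps the pairing requirement.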
**[Clarifying Assumption 1 (A1) ]**
(A1) aims to provide a way to characterize how different the latent distributions $P_{\bf{c},\bf{p}^{(1)} }$ and $P_{\bf{c},\bf{p}^{(2)}}$ are.
Let $S^{(q)}$ denote the set of all possible "stripes" $\cal{A}^{(q)}$ described in (A1) for domain $q=1,2$. Then, (A1) simply says that there should not exist any two stripes $\cal{A}^{(q)}$$\in S^{(q)},~q=1,2$ such that the two latent distributions assign the same probability to all scaled versions of stripes $\cal{A}^{(q)}$. This makes the two distributions sufficiently different from each other.
**(i) Vectors $y$ orthogonal to subspace $\cal{P}$**
No, $\bf{y} \perp \cal{P}$ is not needed, since this orthogonality constraint leaves $S^{(q)}$ unchanged, i.e., $\hat{S}^{(q)}= \\{\cal{A}^{(q)}\in S^{(q)}|~\hat{\bf{y}}_i^{(q)}\perp\cal{P}^{(q)}\\}=S^{(q)}.$ To see this, first note that clearly $\hat{S}^{(q)}\subseteq S^{(q)}$. Second, any $\cal{A}^{(q)}\in S^{(q)}$ is equal to some $\hat{\cal{A}}^{(q)}\in\hat{S}^{(q)}$ constructed using $\bf{\Pi}\_{\cal{P}^{(q)}}\bf{y}\_i^{(q)}$ instead of $\bf{y}\_i^{(q)}$, i.e., the projection of $\bf{y}\_i^{(q)}$ onto the orthogonal complement of $\cal{P}^{(q)}$. Hence $S^{(q)}\subseteq \hat{S}^{(q)}$, and thus $S^{(q)}=\hat{S}^{(q)}$.
**(ii) Difference between (A1) and $P_{\bf{c},\bf{p}^{(1)}}\neq P_{\bf{c},\bf{p}^{(2)}}$**
We are a little confused by the reviewer’s comment that "It seems extremely loose … saying $P_{\bf{c},\bf{p}^{(1)}}\neq P_{\bf{c},\bf{p}^{(2)}}$". **We hope to clarify that we never used $P_{\bf{c},\bf{p}^{(1)}}\neq P_{\bf{c},\bf{p}^{(2)}}$ but only (A1).** For $P_{\bf{c},\bf{p}^{(1)}}\neq P_{\bf{c},\bf{p}^{(2)}}$ to hold, the two joint PDFs can be exactly the same everywhere except for an arbitrarily small subset of their domain. However, Assumption 1 only holds if the two joint PDFs are sufficiently different, by comparing the measures over the specifically defined “stripe” regions.
**[Theorem 1 clarification]**
**(i) Meaning of Theorem 1.** The reviewer's description is partially correct, but not entirely accurate. Theorem 1(a) states that if all dimensions of $\bf{c}$ are distributed differently (as in Line 179) and are independent, then $p(\bf{z}^{(1)})$ can match $p(\bf{z}^{(2)})$, where $\bf{z}^{(q)}=\hat{\bf{Q}}^{(q)}\bf{x}^{(q)}$, only when $\bf{z}^{(1)}$ and $\bf{z}^{(2)}$ are aligned, i.e., $\bf{z}^{(1)}=\bf{z}^{(2)}=\bf{\Theta}\bf{c}$. This is called block-identifiability [R14,R15].
**(ii) Regarding the proof length.** We will include a proof sketch in the main paper if space permits; otherwise, we will include it in the appendix with clear pointers in the main paper.
For references, please refer to the **References** section in the **Overall Response.**
---
Rebuttal 2:
Title: Reviewer response
Comment: I realise (as an author) that reviews can sound attacking. I would like to stress, since your response doesn't seem to acknowledge any change, that I appreciate this line of work and my comments are to improve the paper if possible.
* **LMMs & 1-many translations**: these are points of clarity to "the general reader", not just me, I think the paper could be more clear and standalone than it currently is
* **Preprocessing**: you mention "linear subspaces", I'm not sure I follow, for sure various representation models give representations that already untangle much complexity in the data (e.g. so that semantically similar items are clustered). CCA acts on the raw data, you are acting on representations. It does not make the approach invalid, but it should be more clearly stated that is what you are doing. You are in effect heavily relying on what other models achieve, which is completely arbitrary with respect to your contribution. In effect you are providing a loss function to wrap around pre-trained representations along the principles of CCA. This is not what is in the abstract for example: "This work takes a step further, investigating shared component
identifiability from multi-modal linear mixtures where cross-modality samples are unaligned". Given you don't know what the "representation model" has done, it detracts from identifiability claims, which typically refer to the data itself and should at least be caveated.
* **Data**: you do run experiments on discrete data: text is discrete. As above you rely on representations that have already done a lot of work in re-representing it. It would be better, in my view, to demonstrate that the linear CCA-type workings actually work as expected on multiple appropriate (simpler) datasets and then show that that still holds for more complex scenarios where non-linear encoders (or similar) have effectively taken the non-linearity into account. Identifiability should relate to factors that those underlying models have identified.
* **Thm 1**: I think an intuitive explanation in the paper would improve readability/understandability.
---
Rebuttal Comment 2.1:
Comment: We would like to stress that we absolutely found the reviewer's comments valuable for improving the paper's clarity. It was our oversight that we omitted sentences committing to changes (while concentrating too much on the 6,000-character limit); that was not our intention. We do agree with the reviewer: any suggestion that may help general readers better understand the paper is appreciated. We thank the reviewer for the help on clarity and will definitely make revisions accordingly.
**[LMM and 1-many translations]**
We agree with the reviewer that explanations to these points could make our paper more self-contained. Hence, we will add the explanations (provided in the rebuttal) in a separate "Preliminaries" section in the Appendix with clear pointers in the main paper.
**[Preprocessing]**
By "linear subspaces", we mean representation spaces where the representations are likely to be linear mixtures of semantic information. As mentioned in our original rebuttal (**[Using CLIP as Pre-processing]**), this has been observed to be the case for embedding spaces of neural networks such as CLIP [R7], word embeddings [R30] etc.
We would like to clarify that our method is applicable wherever CCA is applicable, since CCA shares the same generative model [R18, R29] as ours (also see Section 2, Aligned SCA, in the manuscript); the only difference is that CCA further requires the cross-modality samples to be aligned according to their content. Note that CCA also uses pre-processed feature representations for complex real-world data (e.g., images, text) [R28, R18]. This is because such complex real-world data might not follow the linear mixture model in Eq. (1) in the manuscript, whereas the pre-processed representations might. Note that it is common for identifiability works to use pre-processed features for real-data validation of their theorems [R18]. However, we understand the reviewer's comment on applicability to complex data directly, and we will explain in more detail at the beginning of the experiment section why preprocessing is involved.
We also hope to remark that the sentence in our abstract "*This work takes a step further, investigating shared component identifiability from multi-modal linear mixtures where cross-modality samples are unaligned*" is an accurate claim. Note that our claim is for multi-modal **linear** mixtures. Therefore, for complex datasets, it is necessary to find appropriate linear representation spaces. We will add more clarifications/reminders when it comes to the experiment section.
**[Data]**
We agree with the suggestion of first using simpler raw data and then representations of more complex data in the experiments. In fact, the presented experiments may already implicitly reflect this suggestion.
To explain, note that the single-cell data is not pre-processed using any encoder; it is a normalized (zero-mean, unit-std) version of the raw data, i.e., a simpler dataset as the reviewer mentioned. The more complex image and language data were preprocessed by existing encoders.
Following the reviewer's "simpler datasets → harder datasets" suggestion, we will change the order of presenting the single-cell experiment and the other experiments to make this more explicit.
**[Thm 1]**
Thank you for your suggestion. We will include a simpler, intuitive explanation of Theorem 1 in the revised version.
**References**
[R28] Shi et al., 2019. Image Retrieval via Canonical Correlation Analysis.
[R29] Ibrahim et al., 2020. Reliable Detection of Unknown Cell-Edge Users via Canonical Correlation Analysis.
[R30] Mikolov et al., 2013. Efficient Estimation of Word Representations in Vector Space. | Summary: This work considers the identifiability of linear latent representations that are shared (i.e., identical) across data modalities, in the special case that they are unaligned/unpaired.
The approach leverages GAN-style training to achieve divergence minimization between the latent distribution of each modality.
The approach appears to be restricted to two modalities.
Under the assumption of shared latents, sufficient (not necessary) conditions for identifiability are presented, which are milder and, thus, more general than existing studies.
Further, structural constraints based on side information are introduced to further relax identifiability conditions.
Several experiments on simulation and real-world data are provided.
Strengths: Originality :
- The work introduces a combination of novel and known ideas in a clever formulation that yields new, less restrictive conditions for identifiability of shared signals from two modalities.
- The work differs from and extends previous contributions, dealing with two unaligned/unpaired modalities.
- The work further relaxes the identifiability conditions via structural constraints based on additional side information that may be available in certain problems.
- The manuscript cites related work on identifiability conditions for aligned/paired data, as well as unaligned/unpaired results using stricter ICA conditions, also linking the work to nonlinear studies, adequately indicating the sources of inspiration.
Quality :
- The work is technically sound, including proofs for identifiability claims.
- The theoretical claims are well supported by the experiments.
Clarity :
- The work is well written and organized, focusing on the key points and contributions.
Significance :
- The results appear to be quite meaningful, with a potentially wide range of application.
Weaknesses: Quality :
- The code is not too friendly to readers, lacking a 1-to-1 correspondence with the notation in the paper. I suggest improving documentation and tidying up the codebase for readability.
- Figure 5 does not seem to replicate well in `synthetic_train.ipynb` --> Clarify
- The numerical validations (simulation) are limited to 100,000 samples. Could you illustrate performance at 10,000, at 1,000, and at 100 samples? Many multimodal applications are limited to sample-poor regimes (N < 100), where classical CCA is one of the few performant methods, so it would be useful to assess the performance of unaligned SCA at varying sample sizes, and perhaps include comparable CCA results. Is there a summary measure (like Amari distance) that could be reported alongside figures?
Clarity :
- The paper contains some typos that limit the clarity. Besides fixing these typos, consider improving the readability of the proofs by being a bit more explicit with "obvious" steps that may be currently omitted.
- I think the set L of paired samples was not defined?
- Line 59: "transformations identifies" --> "transformations identify"
- Line 65: "samples available" --> "samples are available"
- Lines 95-96: "the cross-modality samples share the same c are aligned" --> unclear meaning... maybe drop "share the same c"?
- Line 117: "to met" --> "to be met"
- Line 183: Sentence ends abruptly at: "where \Theta^(q)."
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What do you mean by "lift" the constraints in line 131?
2. Although theorem 1(a) does not "require" independence between c and p, isn't that implied/necessary? Otherwise, could you show that Dependence between p and c still yields identifiability of c? Specifically, say, if c1 and c2 are conditionally independent p(c1,c2,p) = p(c1|p)p(c2|p)p(p).
2.a. Does this have to do with Line 149: Q^(q)A^(q) = [Θ^(q), 0] ?
3. Are there obvious limitations wrt differences in the sample size for x^(1) and x^(2)? Are there any provable biases if the data is highly unbalanced? How does unbalanced data affect the identifiability?
4. Is the methodology and identifiability theory limited to the two-dimensional case?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: - A discussion of the asymptotic computational performance is amiss? Specifically, with respect to d_C and the total sample size.
- The paper does not appear to discuss the applicability of the theorems to the case of q > 2 (i.e., more than 2 modalities).
- Lacks a discussion of the stability of GAN training, especially with several additional loss augmentations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Code Clarity]** We will clean the code and change the variable names according to the notation used in the paper. We found that Fig. 5 sometimes could not be replicated due to occasional failure of GAN convergence. We fixed the issue by increasing the regularization parameter $\beta$ from 0.001 to 0.005.
To ensure reproducibility, we have fixed the random seed in the code. If the AC allows, we could share an anonymous link to the code for the experiment in Fig. 5 through the AC (the rebuttal policy does not allow links in replies). Please let us know.
**[Results under Various Sample Sizes]** Following the reviewer's suggestion, we have calculated the Amari distance [R1] between $\widehat{\bf{\Theta}}^{(1)}$ and $\widehat{\bf{\Theta}}^{(2)}$ for different numbers of unpaired samples for CCA and SCA. The data for the two views were generated by sampling the shared component of dimension $D=2$ from a VonMises distribution and the private components from Gamma and Laplace distributions.
As the sample size decreases, the Amari distance increases for unaligned SCA because distribution matching is difficult with only a few samples. CCA does not really work under this setting, as it needs aligned cross-domain samples; our unaligned SCA does not.
**Table 1**: Amari distance between $\widehat{\bf{\Theta}}^{(1)}$ and $\widehat{\bf{\Theta}}^{(2)}$.
|N | SCA | CCA |
| :- | :- | :- |
|100,000 | 6.5 × 10^-3 | 0.677|
|10,000 | 5.5 × 10^-3 | 0.533|
|1,000 | 8.4 × 10^-3 | 0.352|
|100 | 4.2 × 10^-3 | 0.364|
|50 | 0.071 | 0.313|
|20 | 0.298 | 0.402|
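For reference, one common form of the Amari index from [R1] can be sketched as follows (our own minimal implementation; the values reported above may use a normalized variant). It is zero exactly when its argument is a scaled permutation matrix, which is why it suits component-recovery evaluation:

```python
import numpy as np

def amari_index(P):
    """Amari performance index: 0 iff |P| is a scaled permutation matrix."""
    P = np.abs(P)
    rows = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    cols = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return rows.sum() + cols.sum()

# A scaled permutation (perfect recovery up to permutation/scaling) scores zero ...
assert np.isclose(amari_index(np.array([[0.0, 2.0], [-3.0, 0.0]])), 0.0)

# ... while a generic mixing residue scores strictly positive.
rng = np.random.default_rng(0)
assert amari_index(rng.standard_normal((3, 3))) > 0.0
```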
**[Typos, Definitions]** Thank you for your careful reading. We will fix the grammar and typos, and include more detailed definitions.
**[Questions]**
**Q1.** We meant that our reformulation (7) uses a regularization term $R(\bf{Q}^{(q)})$ to promote the constraint in Problem (6c). This operation "lifts" the constraint from below the objective function into the new objective function. We will make this clearer.
**Q2:** No, the independence between $\bf{c}$ and $\bf{p}$ is not necessary. In fact, Theorem 1 did not use the independence between $\bf{c}$ and $\bf{p}$. However, Theorem 1(a) did use marginal independence of the components of $\bf{c}$, i.e., for $\bf{c}$ $=[c_1, …, c_{d_C}]^T$, $p(c_1, …, c_{d_C}) = \prod_{i=1}^{d_C} p(c_i)$.
Note that conditional independence does not imply marginal independence, i.e., $p(c_1, c_2, \bf{p}) = p(c_1 | \bf{p}) p(c_2 | \bf{p}) p(\bf{p})$ does not imply $p(c_1, c_2) = p(c_1) p(c_2)$. Hence, the assumption in Theorem 1(a) may not be satisfied, and identifiability cannot be guaranteed. However, it is still possible for $\bf{c}$ to be dependent on $\bf{p}$ and satisfy Theorem 1(a): e.g., if $p(c_1, c_2, \bf{p}) = p(c_1) p(c_2 |\bf{p}) p(\bf{p})$, then $p(c_1, c_2) = p(c_1) p(c_2)$.
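A quick numerical illustration of this distinction (our own toy example): below, $c_1$ and $c_2$ are conditionally independent given $p$, yet clearly dependent marginally.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

p = rng.standard_normal(n)
# Conditionally independent given p: p(c1, c2 | p) = p(c1 | p) p(c2 | p)
c1 = p + rng.standard_normal(n)
c2 = p + rng.standard_normal(n)

# Marginally, corr(c1, c2) = Var(p) / (Var(p) + 1) = 0.5, so c1 and c2
# are dependent despite their conditional independence given p.
r = np.corrcoef(c1, c2)[0, 1]
assert 0.45 < r < 0.55
```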
The statement in Line 149, $\bf{Q}^{(q)}\bf{A}^{(q)} = [\bf{\Theta}^{(q)}, 0]$, is the goal of Theorem 1. In our proof, this goal is achieved without the assumption that $\bf{c}$ and $\bf{p}$ are independent.
**Q3.** Our conjecture for both of the first two questions is "yes", but we do not yet have analytical underpinnings. Empirically, unbalanced data is not friendly to such unaligned multi-domain learning problems. Our identifiability analysis does not consider the sample sizes of $\bf{x}^{(1)}$ and $\bf{x}^{(2)}$, or differences thereof. The analysis is carried out in the limit of infinite data, i.e., assuming an exact solution to the proposed optimization problem, which is already a challenging analysis problem. Therefore, data imbalance conditions are not covered by the current analysis. For latent component analysis problems, e.g., ICA and nonlinear ICA, the vast majority of the literature has to consider the population case to simplify analysis. Nonetheless, finite-sample analysis does exist [R2], but such analysis is highly nontrivial and might deserve a standalone study.
**Q4.** The methodology and identifiability theory are not limited to the two-dimensional case. Note that the latent dimensions $d_C$ and $d_P$ can be much larger positive integers.
**[Limitations - Q>2 Case]** The algorithm can be easily extended to Q>2 cases. Nonetheless, additional work is needed to understand what identifiability benefits extra domains can bring.
**[Limitations - Discussion on GAN stability]** In our experience, GAN training is sensitive to hyperparameter settings and can fail occasionally under different random initializations. However, adding regularization (e.g., in the homogeneous mixing case, the weakly supervised case, and the classification loss in domain adaptation) does seem to improve training stability. We will add this discussion.
## References
[R1] Amari et al., 1995. A New Learning Algorithm for Blind Signal Separation.
[R2] Lyu et al., 2022. Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I have some follow up questions below:
Sample size assessment: The Amari distances are quite remarkable (very low) considering the sample sizes. Could you clarify how you estimate \Theta? I recommend reporting this result (e.g., median +/- std Amari values) for every experiment (assuming you can estimate \Theta). CCA result does not need to be reported.
Q2: Can you add the note about conditional independence in a footnote?
Q3: Can you provide any empirical evidence about how unbalanced data affects identifiability in the different scenarios investigated here?
Q4: So the current theory is limited to Q = 2 domains, but can be extended. Could you elaborate further on the point about "benefits" from extra domains? Do you anticipate diminished returns from including extra domains?
Complexity analysis: Could you add a discussion about the computational complexity of the proposed model? Do the memory/computation requirements grow linearly/quadratically/other w.r.t. d_C?
---
Reply to Comment 1.1.1:
Comment: **[Amari Distance Computation]**
In our evaluation, we used $\hat{\bf{\Theta}}^{(q)} = \bf{Q}^{(q)} \bf{A}^{(q)}(1:d\_C)$, where $\bf{A}^{(q)}(1:d\_C)$ denotes the first $d\_C$ columns of $\bf{A}^{(q)}$. Note that $\bf{Q}^{(q)}$ is our estimated linear operator, and $\bf{A}^{(q)}$ is the ground-truth mixing system, which is available in the synthetic data experiments (we only evaluated the Amari distance for the synthetic data in our previous reply).
Another note: thanks to the above discussion, we realized that general matrix distances (such as the Euclidean distance) could be a better fit for our case than the Amari distance. To see this, recall that we have content identifiability if and only if $\bf{Q}^{(q)} \bf{A}^{(q)} = [\bf{\Theta}, \bf{0}]$. Hence, we need $\hat{\bf{\Theta}}^{(1)} = \hat{\bf{\Theta}}^{(2)} = \bf{\Theta}$. However, the Amari distance is insensitive (invariant) to permutation and scaling, i.e., $\hat{\bf{\Theta}}^{(1)} = \bf{P} \bf{\Lambda} \hat{\bf{\Theta}}^{(2)}$ incurs zero Amari distance, where $\bf{P}$ and $\bf{\Lambda}$ are any permutation and scaling matrices, respectively. Hence, we present the Euclidean distance instead of the Amari distance. Additionally, we also report $\\| \bf{Q}^{(q)} \bf{A}^{(q)}(d\_C+1 : d\_C + d^{(q)}\_P) \\|_F$, which needs to be close to $\bf{0}$ for identifiability.
Table 1: Numerical evaluation of identifiability $\\| \widehat{\bf{\Theta} }^{(1)} (1: d\_C) - \widehat{ \bf{\Theta} }^{(2)}(1:d\_C) \\|_{F} .$
|N | SCA | CCA |
| :- | :- | :- |
|100,000 | 0.009 | 1.368|
|10,000 | 0.007 | 1.544|
|1,000 | 0.003 | 2.206|
|100 | 0.032 | 1.755|
|50 | 0.133 | 1.667|
|20 | 1.462 | 1.522|
Table 2: $1/2 \sum\_{q=1}^2 \\| \widehat{\bf{\Theta}}^{(q)} ( d\_C+1 : d\_C+d^{(q)}\_P) \\|\_{F}$.
|N | SCA | CCA |
| :- | :- | :- |
|100,000 | 0.021 | 0.284|
|10,000 | 0.034 | 0.279|
|1,000 | 0.002 | 0.329|
|100 | 0.043 | 0.368|
|50 | 0.131 | 4.092|
|20 | 0.747 | 0.755|
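The two metrics above can be sketched as follows (our own reading of the definitions; the variable `QA` stands for the product $\bf{Q}^{(q)}\bf{A}^{(q)}$, and the column-slicing convention is an assumption):

```python
import numpy as np

def identifiability_metrics(QA1, QA2, d_c):
    """metric1: mismatch of the estimated shared-part maps (first d_c columns);
    metric2: average leakage from the private parts (remaining columns)."""
    m1 = np.linalg.norm(QA1[:, :d_c] - QA2[:, :d_c])                 # Frobenius norm
    m2 = 0.5 * (np.linalg.norm(QA1[:, d_c:]) + np.linalg.norm(QA2[:, d_c:]))
    return m1, m2

# Perfect identifiability: Q^{(q)} A^{(q)} = [Theta, 0] with a common Theta.
theta = np.array([[1.0, 2.0], [0.0, 1.0]])
QA = np.hstack([theta, np.zeros((2, 3))])
m1, m2 = identifiability_metrics(QA, QA.copy(), d_c=2)
assert m1 == 0.0 and m2 == 0.0
```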
We will follow the reviewer’s suggestion and add the new experiment (with mean and standard deviation) for the synthetic data experiments in the revised version.
**Q2.** Yes. We will add a footnote about conditional independence in the main paper.
**Q3.** Thanks for the suggestion. We have run the following experiment with unbalanced data.
For the following experiment, the data for the two modalities were generated by sampling the shared component of dimension $D=2$ from a VonMises distribution and the private components from Gamma and Laplace distributions. The number of samples in the first modality is fixed at 100,000, while that of the second modality ranges from 10,000 down to 10.
Table 3: Performance of SCA on imbalanced data based on the following two metrics:
**metric1** = $\\| \widehat{\bf{\Theta}}^{(1)} (1: d\_C) - \widehat{ \bf{\Theta}}^{(2)}(1: d\_C) \\|\_{F} $,
**metric2** = $1/2 \sum\_{q=1}^2 \\| \widehat{\bf{\Theta}}^{(q)} ( d\_C+1: d\_C+d^{(q)}\_P) \\|\_{F}$.
| \# samples in modality 2 | metric1 | metric2 |
| :- | :- | :- |
|10,000 | 0.008 | 0.025 |
|1,000 | 0.025 | 0.015 |
|100 | 0.091 | 0.087 |
|10 | 1.375 | 0.209 |
We will include the above result (with mean and standard deviation) in the revised version.
**Q4. [Possible Benefits of $Q > 2$ Domains]**
One foreseeable benefit of having more than two domains is that Assumption 1, when modified for $Q \geq 2$ domains, could be further relaxed. This is because modality variability is in general satisfied if at least two of the domains satisfy the current Assumption 1, and having more modalities increases the chance that this happens. On the other hand, enforcing the distribution matching constraint in Eq. (6b) could be more challenging for more than two domains. We will add this discussion in the revised version.
**[Complexity Analysis]**
The short answer is that **both the memory and computational complexities of the proposed method scale linearly with** $d_C$. The per-iteration computational complexity is $O(B d\_C (d^{(1)} + d^{(2)}))$, where $B$ is the mini-batch size, and the per-iteration memory complexity is $O(B d\_C (d^{(1)} + d^{(2)}))$ as well. These complexities follow from the fact that we use a mini-batch stochastic gradient-type optimizer. We will add a section in the appendix to detail the complexity calculation.
Rebuttal: ### [ **Overall Response** ]
We sincerely thank all the reviewers for their effort in reviewing our manuscript. Our responses are summarized as follows:
**Reviewer DxMx** suggested improving code clarity and observing the effect of sample size with a new evaluation metric. Following the comments, we ran an additional experiment and evaluated the Amari distance between $\widehat{\bf{\Theta}}^{(1)}$ and $\widehat{ \bf{\Theta}}^{(2)}$. We also improved our code readability and clarified the questions of the reviewer regarding the methodologies and identifiability.
**Reviewer 1Nhs’s** major comments were regarding the use of pre-processed data, theory-experiment consistency, significance of linear mixture models, and clarifications on our assumption and theorem. We provided clarifications. In particular, we pointed out that features obtained after common pre-processing tools (e.g., pre-trained models and word embeddings) are more likely to follow the linear mixture model by empirical evidence from the literature. Hence, such settings are arguably suitable for applying our methods.
**Reviewer tDhk** suggested discussing differences from existing content-style approaches. There are also comments regarding the practicality of the hyper-rectangle support assumption for the content and the use of the data processing inequality in our proof step. Following the suggestions, we explained use cases where the content could be seen as hyper-rectangle support. Further we clarified our proof step where we use data processing inequality and added the discussion regarding the connection with the previous related works.
**Reviewer YjhB** suggested using the same pre-processing for all methods in experiments. The reviewer also made a good point that using CLIP might not be appropriate as CLIP may have seen all the data under test. To address this concern, all methods now use ImageNet1k pretrained ResNet features instead of the CLIP features. The results are attached in the **enclosed PDF**. We have also added more recent baselines following reviewer suggestions.
Due to space constraints for the rebuttal, the references used in the rebuttal are as follows:
### **References**
[R1] Amari et al., 1995. A New Learning Algorithm for Blind Signal Separation.
[R2] Lyu et al., 2022. Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective.
[R3] Comon, 1994. Independent Component Analysis, A New Concept?
[R4] Lee et al., 1999. Learning the parts of objects by non-negative matrix factorization.
[R5] Erdogan et al., 2013. A Class of Bounded Component Analysis Algorithms for the Separation of Both Independent and Dependent Sources.
[R6] Huang et al., 2018. Multimodal Unsupervised Image-to-Image Translation.
[R7] Bhalla et al., 2024. Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE).
[R8] Radford et al., 2021. Learning Transferable Visual Models From Natural Language Supervision.
[R9] Chen et al., 2020. A Simple Framework for Contrastive Learning of Visual Representations.
[R10] Cao et al., 2018. Joint profiling of chromatin accessibility and gene expression in thousands of single cells.
[R11] Joulin et al., 2016. FastText.zip: Compressing Text Classification Models.
[R12] Lample et al., 2018. Word Translation without using Parallel Data.
[R13] Yang et al., 2021. Multi-domain translation between single-cell imaging and sequencing data using autoencoders.
[R14] Kugelgen et al., 2021. Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style.
[R15] Lyu et al., 2022. Understanding Latent Correlation-Based Multiview Learning And Self-Supervision: An Identifiability Perspective.
[R16] Gulrajani et al., 2022. Identifiability Conditions for Domain Adaptation.
[R17] Xie et al., 2023. Multi-Domain Image Generation And Translation With Identifiability Guarantees.
[R18] Sorensen et al., 2021. Generalized Canonical Correlation Analysis: A Subspace Intersection Approach.
[R19] Sturma et al., 2023. Unpaired Multi-Domain Causal Representation Learning.
[R20] Kong et al., 2022. Partial disentanglement for domain adaptation.
[R21] Choi et al., 2019. StarGAN v2: Diverse Image Synthesis for Multiple Domains.
[R22] Wu et al., 2018. Multimodal Generative Models for Scalable Weakly-Supervised Learning.
[R23] Rangwani et al., 2022. A Closer Look at Smoothness in Domain Adversarial Training.
[R24] Zhang et al., 2023. Free Lunch For Domain Adversarial Training: Environment Label Smoothing.
[R25] Goodfellow et al., 2014. Generative Adversarial Nets.
Pdf: /pdf/a3c372f3da745c70d5b57f67f86da66ecbf997e4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Universal Growth Rate for Learning with Smooth Surrogate Losses | Accept (poster) | Summary: This paper analyzes the growth rate of the H-consistency bounds (which subsume excess risk bounds) for various smooth surrogate losses commonly used in binary and multiclass classification. Specifically, for binary classification, the work establishes a tight square-root growth rate near zero (under mild conditions) for margin-based surrogate losses. For multiclass classification, the work establishes a tight square-root growth rate near zero (under mild conditions) for two families of surrogate losses: comp-sum and constrained losses. Finally, the work also studies how the number of classes affects these bounds, as well as the minimizability gaps in the bounds.
Strengths: **Originality**
A comprehensive analysis of the growth rate of the H-consistency bounds (which subsume excess risk bounds) for various smooth surrogate losses commonly used in binary and multiclass classification:
- For binary classification, the work establishes a tight square-root growth rate near zero (under mild conditions) for margin-based surrogate losses (Theorem 4.2). In particular, the lower bound requires weaker conditions than [Frongillo and Waggoner, 2021,
Theorem 4], and the upper bound is new.
- For multiclass classification, the work establishes a tight square-root growth rate near zero (under mild conditions) for two families of surrogate losses: comp-sum and constrained losses (Theorems 5.3 and 5.5).
Related work has been properly cited.
**Quality**
The work is technically sound. Proofs are given for theoretical results.
**Clarity**
The paper is clearly written and well-organized. Although it is a bit dry, as a researcher working in the related field, I did not find it very hard to read.
**Significance**
The comprehensive analysis presented in this work promotes a deeper understanding of different surrogate losses. It is helpful for researchers studying surrogate losses and consistency.
Weaknesses: I did not find any obvious weaknesses.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Based on this work, do you have any concrete suggestions for practitioners to choose surrogate losses (assuming they do not know about H-consistency at all)?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work. Please find our detailed responses below.
**Questions: Based on this work, do you have any concrete suggestions for practitioners to choose surrogate losses (assuming they do not know about H-consistency at all)?**
**Response:** We have discussed some practical implications of our analysis in Section 6 and Appendix H.
In practice, in realizable or nearly realizable cases where minimizability gaps are zero or relatively small, we believe that the logistic loss is a favorable choice because it is advantageous for optimization and its bound is independent of the number of classes. This in fact can partly explain its widespread practical application.
In other scenarios, both the number of classes and the minimizability gaps are essential in loss selection and a good choice out of the family of comp-sum losses might suggest a parameter $\tau$ closer to 2.
More generally, our theoretical analysis can help in selecting surrogate loss functions by considering several factors related to their $H$-consistency bounds:
- Their growth rate. For example, the growth rate for polyhedral surrogate losses is linear, while it is square-root for smooth losses.
- Their optimization property. For example, smooth losses are more favorable for optimization compared to polyhedral losses, particularly with deep neural networks.
- Their functional form. For example, comp-sum loss functions can vary in their forms of $H$-consistency bounds due to the dependency on the number of classes (see Section 6).
- Their approximation property. For example, the minimizability gaps differ in $H$-consistency bounds for smooth surrogate losses, even with the same growth rate (see comparison in Section 6). Also, under certain conditions, minimizability gaps can be zero or close to zero.
For smooth loss functions, in particular comp-sum losses, please see further our discussion in Section 6.2.
---
Rebuttal Comment 1.1:
Title: Increase my rating to 8
Comment: Thank you for the responses. I think this work is solid and of significant value to the learning theory community. I want to increase my rating to 8, reflecting its significance.
---
Reply to Comment 1.1.1:
Comment: We would like to express our gratitude to the reviewer for their positive feedback and for recognizing the significance of our work. | Summary: Since optimizing zero-one loss is intractable and it does not have properties such as differentiability, a common approach in learning theory is to replace it with a surrogate loss function. H-consistency bounds relate the excess error for surrogate loss to zero-one loss. This paper establishes a square-root growth rate near zero for smooth surrogate losses in binary and multi-class classification, providing both upper and lower bounds under mild assumptions.
Strengths: The paper proves both upper and lower bounds for H-consistency with smooth surrogate losses. Previous results only provided lower bounds, but this paper presents lower bounds with fewer conditions and also shows a matching upper bound of the square root, applicable to both binary classification and multiclass classification, such as Comp-sum losses.
Weaknesses: The paper studies only smooth surrogate losses, not piecewise linear ones such as hinge loss. However, using smooth surrogate loss functions is very common in machine learning applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: The results are based on the assumption that the hypothesis class is complete. Can you explain why this condition is necessary and what happens to the growth rate when this is not satisfied? It seems that many practical hypothesis classes don't meet this condition.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work. Please find our detailed responses below.
**Weaknesses: The paper studies only smooth surrogate losses, not piecewise linear ones such as hinge loss. However, using smooth surrogate loss functions is very common in machine learning applications.**
**Response:** Indeed, smooth surrogate loss functions are the most commonly used ones in current training of neural networks. Our focus on smooth surrogate loss functions was also motivated by the fact that the prior work of Frongillo and Waggoner (2021) implies a linear growth rate for polyhedral losses. In Appendix G, we give a brief discussion comparing polyhedral losses with smooth losses.
**Questions: The results are based on the assumption that the hypothesis class is complete. Can you explain why this condition is necessary and what happens to the growth rate when this is not satisfied? It seems that many practical hypothesis classes don't meet this condition.**
**Response:** Completeness means that for any instance, the set of possible scores generated by a hypothesis set spans $\mathbb{R}$. This condition is met by common hypothesis sets, for instance the family of all linear models, $H_{\mathrm{lin}} = \\{ (x,y) \mapsto W_{y} x + b_{y} \\}$, or that of multi-layer neural networks, $H_{\mathrm{NN}} = \\{ (x, y)\mapsto u_y \cdot \rho_{n}(W_{y, n}(\cdots \rho_2(W_{y, 2} \rho_1(W_{y, 1} x + b_{y, 1})+b_{y, 2})\cdots)+b_{y, n}) \\}$, where $\rho_j$ is an activation function.
More broadly, our analysis and results can be extended to bounded hypothesis sets even when completeness is not satisfied. This can be done by leveraging the characterization of the error transformation function given by Awasthi et al. [2022b] and Mao et al. [2023b] without assuming completeness.
We expect that the square root growth rate still holds in bounded cases for smooth surrogate losses. This is because, for bounded hypothesis sets, the estimation error transformation function typically admits two segments; however, our primary focus is on the segment where $t$ is close to zero, which aligns with the complete case. Thus, a similar analysis can be applied in these situations as well. We will elaborate on this in the final version. | Summary: This paper presents a comprehensive analysis of the growth rate of H-consistency bounds (and excess error bounds) for various
surrogate losses for some intractable loss used in classification. The authors prove a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification and comp-sum and constrained loss used in multi-class classification, providing both upper and lower bounds under mild assumptions.
Strengths: The paper provides solid analysis for the H-consistency and excess error bound and these provide good guidance for selecting good surrogate loss for classification tasks where the target loss is hard to optimize.
The theoretical analysis is novel and provides good insight for handling intractable target loss.
Weaknesses: The paper should provide a more intuitive statement on the motivation and implication of these error bounds and provide more insights about the proof.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The error bounds are local bounds. Can the author provide insights on how to have global bounds and the neighborhood for the local bounds to hold?
2. Can the authors provide more insights on how these bounds help us select surrogate functions?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work and for your suggestions to improve its readability. Please find our detailed responses below.
**Weaknesses: The paper should provide a more intuitive statement on the motivation and implication of these error bounds and provide more insights about the proof.**
**Response:** Thank you for your suggestions. We will continue to work on improving our presentation for readers. The addition of one extra page in the final version will also allow us to include more detailed discussions of the motivation, implications, and proof techniques in the main body.
**Questions:**
**1. The error bounds are local bounds. Can the author provide insights on how to have global bounds and the neighborhood for the local bounds to hold?**
**Response:** $H$-consistency bounds hold for all predictors $h$ in the hypothesis set $H$ considered and for all distributions. We provide a tight analysis of their growth rate for smooth surrogate losses in both binary and multi-class classification, demonstrating that it follows a square-root function.
We suspect the reviewer refers to these square-root growth rate bounds as "local bounds" and is inquiring about "global bounds" that do not rely on growth rates. Previous works by Awasthi et al. (2022a,b) and Mao et al. (2023b,e) offer precise global bounds for both binary and multi-class classification across various commonly used loss functions, along with general methods for deriving such bounds. Our paper focuses on analyzing the growth rate of any of these bounds for smooth loss functions.
Please let us know if we are not answering the question appropriately, as we are unsure about what was meant. Thank you.
**2. Can the authors provide more insights on how these bounds help us select surrogate functions?**
**Response:** Our theoretical analysis assists in selecting surrogate loss functions by considering several factors related to their $H$-consistency bounds:
- Their growth rate. For example, the growth rate for polyhedral surrogate losses is linear, while it is square-root for smooth losses.
- Their optimization property. For example, smooth losses are more favorable for optimization compared to polyhedral losses, particularly with deep neural networks.
- Their functional form. For example, comp-sum loss functions can vary in their forms of $H$-consistency bounds due to the dependency on the number of classes (see Section 6).
- Their approximation property. For example, the minimizability gaps differ in $H$-consistency bounds for smooth surrogate losses, even with the same growth rate (see comparison in Section 6). Also, under certain conditions, minimizability gaps can be zero or close to zero.
For smooth loss functions, in particular comp-sum losses, please see further our discussion in Section 6.2.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. We appreciate the reviewer's valuable suggestions and feedback. | Summary: The paper provides an analysis of the growth rate of H-consistency bound for surrogate losses in binary and multi-class classification. The authors prove square root growth rate near zero for smooth margin-based surrogate losses for binary classification as well as for smooth comp-sum and constrained losses for multiclass classification. Since minimizability gaps makes a big difference between the bounds of different surrogates, the authors also analyze these gaps to guide in selecting better surrogates. In section , the authors introduce H consistency bounds and they build upon this to derive results for binary and multi-class classification in later sections.
Strengths: I must say, I am not an expert in this area of research. I find this paper pretty well written. In theorem 4.2, the authors prove that the transformation function tau is precisely of the order t^2 for a class of margin based loss function that is smooth and follows some other properties as mentioned in the main statement. Hence, the growth rate for these loss functions is precisely square-root. Further, the authors derive similar result for multiclass classification for comp sum losses and constrained losses. Because there is a minimizability gap term in the H consistency bound hence even with identical growth rates, surrogate losses can vary in their H-consistency bounds. The authors show that in the case of multiclass classification, minimizability gap scales with number of classes.
Weaknesses: I have a few very basic questions, regarding the work.
1. I understand that H consistency-based bound helps convert a surrogate-based bound to a bound that we require. How is it an improvement over earlier work Bartlett et al. (Convexity, Classification, and Risk Bounds). H consistence based bound contains the term minimizability gap which was not existent in the previous work that I cited.
2. I also understand that the faster the bound for surrogate loss will be, the better bound for the actual loss could be obtained. However, I am wondering if we provide a fast rate using small ball method or local Rademacher complexity based method for the surrogates, under what conditions the fast rate for the actual loss can still be recovered or is it lost always ?
It would also be great if you could explain simply the implications of getting a precise rate for the transformation function, as I understand it might not give you a lower bound on the estimation error because equation 2 is not an equality. Am I missing something?
I am currently giving it a borderline accept and am happy to reconsider it after the rebuttal.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review. We have carefully addressed all the questions raised. Please find our detailed responses below and let us know if there is any other question.
**1. I understand that H consistency-based bound helps convert a surrogate-based bound to a bound that we require. How is it an improvement over earlier work Bartlett et al. (Convexity, Classification, and Risk Bounds). H consistency based bound contains the term minimizability gap which was not existent in the previous work that I cited.**
**Response:** The results of Bartlett et al. (2006) hold only for the family of all measurable cases, that is for $H = H_{\mathrm{all}}$. They also focus exclusively on binary classification (other studies, such as those of Tewari and Bartlett (2007) and Zhang (2004) analyze certain multi-class classification loss functions).
In contrast, $H$-consistency bounds hold for arbitrary hypothesis sets. They provide the tightest possible upper bound on the estimation error for the actual loss, such as the zero-one loss, in terms of the surrogate estimation error, for an arbitrary hypothesis set $H$. They admit as special cases the excess bounds of Bartlett et al. (2006) in binary classification, when setting $H$ to $H_{\mathrm{all}}$.
For the more realistic scenario where the hypothesis set $H$ does not include all measurable functions, as demonstrated in Appendix C, under general assumptions, minimizability gap terms appear. These are necessary terms to relate the surrogate estimation error to the actual estimation error. In the special case $H = H_{\mathrm{all}}$, minimizability gaps vanish. They are also zero in realizable cases.
Thus, $H$-consistency bounds provide a strict generalization of previous work, needed to analyze the standard case where $H$ does not include all measurable functions. Our paper further provides a detailed analysis of minimizability gaps.
**2. I also understand that the faster the bound for surrogate loss will be, the better bound for the actual loss could be obtained. However, I am wondering if we provide a fast rate using small ball method or local Rademacher complexity based method for the surrogates, under what conditions the fast rate for the actual loss can still be recovered or is it lost always?**
**Response:** This is a very good question. It is well-established that local Rademacher complexity or small ball methods can be used to derive fast rate generalization bounds under Tsybakov noise conditions. Remarkably, under the same conditions, indeed, one can also derive more favorable $H$-consistency bounds (as shown in our concurrent work), that is with better exponents. Combining both yields more favorable generalization bounds on the actual loss, as suspected by the Reviewer. We will elaborate on this in the final version.
More generally, more favorable $H$-consistency bounds can be derived under various distributional assumptions.
**3. It would be also great if you could explain simply the implications of getting a precise rate for the transformation function as I understand it might not give you a lower bound on the estimation error because of equation 2 is not an equality. Am I missing something?**
**Response:** Like excess error bounds, $H$-consistency bounds are worst-case bounds that hold for any distribution. As noted in Lines 232-235, estimation error transformation functions provide tight $H$-consistency bounds. This means that for any $t \in [0, 1]$, there exists a hypothesis $h \in H$ and a distribution such that the inequality of the bound can be achieved. Thus, a square root growth rate for transformation functions implies a square root growth rate of estimation errors in the worst case.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, the deadline for the end of the discussion period is approaching. We wanted to confirm that we addressed all your questions suitably. If there other questions we could address, please let us know. Thank you. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
COSMIC: Compress Satellite Image Efficiently via Diffusion Compensation | Accept (poster) | Summary: The authors propose COSMIC, a simplified and efficient compression method for satellite earth observation images. Due to the increasing number of satellites and the volume of image data, existing compression schemes are difficult to deploy with the limited computing power and energy available on satellites. COSMIC designs a lightweight encoder to significantly reduce computation while achieving a high compression ratio. For ground-based decoding, it uses a diffusion-based model to compensate for the detail loss caused by the simplified encoder, leveraging the multi-modal nature of satellite data (such as coordinates and timestamps) to improve image reconstruction quality. Experimental results show that COSMIC outperforms existing methods in both perceptual quality and distortion metrics.
Strengths: (1) The writing and presentation of the paper are excellent, with a clear and logical flow.
(2) The authors introduce a substantial amount of knowledge on deep learning-based image compression and diffusion models, making it easy for readers who are not in this field to understand.
(3) Satellite image compression is a novel topic, and the lightweight coding framework used by the authors is of practical significance.
Weaknesses: (1) More performance comparison tests of various models need to be included, such as those introduced in Section 2.2 on deep learning-based remote sensing image compression [1-3] and some of the latest works [4,5] in deep learning-based image compression.
(2) The process of handling metadata is not clearly explained. I only saw the Metadata Encoder mentioned in the paper. How is this part of the data processed, and how is it aligned with the satellite image data?
(3) The authors need to further report the spatiotemporal complexity of COSMIC compared to other methods to demonstrate its excellent lightweight architecture.
[1] Fu, Chuan, and Bo Du. "Remote sensing image compression based on the multiple prior information." Remote Sensing 15.8 (2023): 2211.
[2] Xiang, Shao, and Qiaokang Liang. "Remote sensing image compression based on high-frequency and low-frequency components." IEEE Transactions on Geoscience and Remote Sensing (2024).
[3] Zhang, Lei, et al. "Global Priors with Anchored-stripe Attention and MultiScale Convolution for Remote Sensing Images Compression." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2023).
[4] He, Dailan, et al. "Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[5] Liu, Jinming, Heming Sun, and Jiro Katto. "Learned image compression with mixed transformer-cnn architectures." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
Technical Quality: 2
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and valuable feedback.
### Q1. More performance comparison tests of various models need to be included.
Please see our general response 2 above.
### Q2. The process of handling metadata is not clearly explained.
Sorry for the confusion in the writing. First, we normalize each metadata field. Then, inspired by the way the timestep $t$ is processed by diffusion models, we use a sinusoidal embedding to encode each metadata field. Each metadata embedding is mapped to the CLIP space by its own linear layer. Finally, we use CLIP to align it with the satellite image.
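As a sketch of the sinusoidal-embedding step (mirroring the standard diffusion timestep embedding; the embedding dimension and `max_period` below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sinusoidal_embedding(value, dim=64, max_period=10000.0):
    """Encode a normalized scalar metadata value (e.g. coordinate or timestamp)
    into a dim-dimensional sinusoidal vector, as done for diffusion timesteps."""
    half = dim // 2
    # Geometrically spaced frequencies from 1 down to 1/max_period.
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    angles = value * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

Each such embedding would then pass through its own learned linear map into the CLIP space before alignment with the image features.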
### Q3. Spatiotemporal complexity comparison.
In this paper, we use two lightweight modules, LCB and CAM, while other methods use ordinary convolutions. We list their time and space complexities below, assuming the number of input channels is $c_{in}$, the number of output channels is $c_{out}$, the input feature map size is $n^2$, and the convolution kernel size is $k$.
| Modules | LCB | CAM | ordinary convolution |
| --- | --- | --- | --- |
| Time complexity | $O(n^2\times c_{in} \times k^2 + \frac{1}{2} \times n^2 \times c_{in} \times c_{out})$ | $O(k\times n^2 \times c_{in} \times c_{out})$ | $O(n^2 \times k^2 \times c_{in} \times c_{out})$ |
| Space complexity | $O(k^2 \times c_{in} + \frac{1}{2} \times c_{in} \times c_{out})$ | $O(k\times c_{in} \times c_{out})$ | $O(k^2 \times c_{in} \times c_{out})$ |
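As a hypothetical sanity check, the per-layer FLOP expressions in the table can be evaluated numerically (the functions below simply plug numbers into the stated formulas; the example sizes are illustrative, not taken from the paper):

```python
def ordinary_conv_flops(n, k, c_in, c_out):
    # Standard k x k convolution over an n x n feature map.
    return n**2 * k**2 * c_in * c_out

def lcb_flops(n, k, c_in, c_out):
    # Per the table: depthwise k x k term plus a half-width pointwise term.
    return n**2 * c_in * k**2 + n**2 * c_in * c_out // 2

def cam_flops(n, k, c_in, c_out):
    # Per the table: linear (not quadratic) dependence on the kernel size k.
    return k * n**2 * c_in * c_out
```

For example, with $n=32$, $k=3$, and $c_{in}=c_{out}=64$, LCB needs roughly 14x fewer operations than an ordinary convolution, which is consistent with the lightweight-encoder claim.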
---
Rebuttal 2:
Title: A Gentle Reminder of the Final Feedback
Comment: Please allow us to thank you again for your careful review and valuable feedback, and in particular for recognizing the strengths of our paper in terms of clear writing, novel topic and practical significance.
Kindly let us know if our response and the new experiments have properly addressed your concerns. We are more than happy to answer any additional questions during the post-rebuttal period. Your feedback will be greatly appreciated.
---
Rebuttal 3:
Title: A Gentle Reminder of the Final Feedback
Comment: Dear Reviewer AGki,
Thank you very much again for your initial comments. They are extremely valuable for improving our work. We hope our response has adequately addressed your concerns. We shall be grateful if you could kindly give any feedback to our rebuttal.
Best Regard,
#Paper9518 Author(s) | Summary: This paper presents a novel method to address the challenge of transmitting the increasing volume of satellite images to ground stations. The core innovation lies in designing a lightweight encoder that reduces computational complexity on satellites, coupled with a diffusion-based compensation model on the ground to enhance image quality. The experimental results demonstrate COSMIC's superior performance over existing methods in terms of both perceptual and distortion metrics.
Strengths: 1) Lightweight Encoder: The design of a lightweight encoder significantly reduces the computational load on satellites, making the solution feasible for in-orbit deployment.
2) Diffusion-Based Compensation: Utilizing a diffusion-based model to enhance image details during the decompression process effectively addresses the limitations of the lightweight encoder.
3) Comprehensive Evaluation: The extensive experiments and comparisons with state-of-the-art baselines highlight the robustness and superiority of COSMIC in various metrics.
4) Multi-Modal Integration: Incorporating sensor data as conditions for diffusion generation leverages the multi-modal nature of satellite images, enhancing the overall reconstruction quality.
Weaknesses: 1) Encoder Degradation: The lightweight encoder's reduced feature extraction capability may limit image quality at extremely low bit rates.
2) Training Specificity: The reliance on a pre-trained stable diffusion model that lacks specific priors for satellite images could limit the model's performance under certain conditions.
3) Limited Power Supply Considerations: While the lightweight encoder addresses computational constraints, the paper does not thoroughly discuss the power supply limitations on satellites and their impact on the proposed method.
4) Real-World Application Scenarios: The paper could benefit from more detailed discussions on practical deployment scenarios and the associated challenges, such as real-time processing requirements and potential bottlenecks.
5) Satellite images typically consider using pixel-level distortion metrics instead of perceptual metrics. Due to the sensitivity of satellite images to compression method distortion, lossless or near-lossless compression methods are usually used.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Encoder Degradation at Low Bit Rates:
a) Question: How does the encoder's degradation specifically affect the image quality at low bit rates? Are there specific types of image details that are consistently lost?
b) Suggestion: Provide a detailed analysis of the types of image features that are most affected by the lightweight encoder at low bit rates. This can include examples or case studies highlighting these issues.
2) Training Specificity and Pre-Trained Models:
a) Question: How does the performance of the pre-trained stable diffusion model compare with a model specifically trained on satellite images? Have any preliminary experiments been conducted in this regard?
b) Suggestion: Discuss any preliminary results or plans for training a diffusion model specifically on satellite images. This could include potential improvements or challenges identified during these experiments.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: 1) The authors have acknowledged the following limitations:
a) Encoder Degradation at Low Bit Rates: The lightweight encoder's performance drops at very low bit rates, affecting image quality.
b) Training Specificity: The use of a pre-trained stable diffusion model, which lacks specific prior knowledge of satellite images, may limit performance under certain conditions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1. Encoder Degradation and Training Specificity.
These two questions simply **repeat** our last section, i.e. **Limitations & Future work**. Moreover, **we are frustrated to find that two detectors (GPT-Zero and Scribbr) report 100% confidence of AI generation on your review**. It is doubtful whether such a review can genuinely inform this paper's decision process.
Nevertheless, let us explain these two questions again.
1. We never claim COSMIC is perfect. In experiments, we noticed that the performance of COSMIC degrades slightly at extremely low bit rates, and we report this limitation honestly. The reason is simple: the less image content the image encoder provides, the more compensation from diffusion is needed at decompression, and it is hard to rely only on diffusion to decompress.
2. We believe that training a diffusion model specifically for satellite images, with sufficient prior knowledge of them, could solve this problem to some extent. However, this is beyond the scope of this paper.
### Q2. Limited Power Supply Considerations & Real-World Application Scenarios.
Please see our general response 1 above.
### Q3. Satellite images typically consider using pixel-level distortion metrics instead of perceptual metrics.
In the paper, we considered both distortion and perceptual metrics, and experimental results show that COSMIC achieves SOTA performance on both.
---
Rebuttal Comment 1.1:
Comment: I think satellite images should be coded and decoded in a pixel-level controllable way. The author's method can only ensure that the distribution of the generated image is as close as possible to that of the original image; the distortion is uncontrollable and therefore unsuitable for satellite image compression.
---
Rebuttal 2:
Title: Author Response (Reviewer upSj)
Comment: Thanks for the reply.
First, thanks for acknowledging that our work can **ensure that the distribution of the generated image is as close as possible to the distribution of the original image**, which is the general goal of any lossy image compression method.
Satellite photography has a much longer history than image compression algorithms. Of course there were pixel-level image compression methods, used for example by the **Viking Mars mission [1], launched in 1975**, well before the invention of JPEG. Other pixel-level methods are mostly used for lossless compression, which is **neither within our scope nor used by satellites launched since 2000.** Today, **most satellites use lossy compression methods like JPEG, which is not coded and decoded at the pixel level. Satellites using JPEG include: Solar-B[2], BILSAT-1[3], Cartosat-2[4], TacSat-2[5], TEAMSAT[6], SPOT-5[7], Cartosat-1[8], CartoSat-2E[9], TurkSat-3USat[10], SAC-C[11], etc.**
Moreover, **JPEG also results in uncontrollable distortion but is still widely used by various satellites.**
JPEG uses different quantization tables only to tune the compression ratio; it cannot control the distortion performance, which depends on the image content. In our paper, we have demonstrated that **COSMIC surpasses JPEG2000 across both distortion and perceptual metrics.**
Reference
[1] The Martian Landscape, [https://www.nasa.gov/wp-content/uploads/2024/01/sp-425-the-martian-landscape.pdf](https://www.nasa.gov/wp-content/uploads/2024/01/sp-425-the-martian-landscape.pdf)
[2] Solar-B, https://www.eoportal.org/satellite-missions/solar-b#spacecraft
[3] Bradford A, Gomes L M, Sweeting M, et al. BILSAT-1: A low-cost, agile, earth observation microsatellite for Turkey[J]. Acta astronautica, 2003, 53(4-10): 761-769.
[4] Cartosat-2, https://space.skyrocket.de/doc_sdat/cartosat-2.htm
[5] TacSat-2, https://space.skyrocket.de/doc_sdat/tacsat-2.htm
[6] TEAMSAT, https://www.eoportal.org/satellite-missions/teamsat#overview
[7] SPOT-5, https://www.eoportal.org/satellite-missions/spot-5#launch
[8] Cartosat-1, https://earth.esa.int/eogateway/missions/irs-p5
[9] CartoSat-2E, https://www.eoportal.org/satellite-missions/cartosat-2e#spacecraft
[10] TurkSat-3USat, https://www.eoportal.org/satellite-missions/turksat-3usat#transponder
[11] SAC-C, https://www.eoportal.org/satellite-missions/sac-c#mmrs-multispectral-medium-resolution-scanner | Summary: The authors propose a novel method to compress satellite images using a learned algorithm that relies on a diffusion model on the ground to decode the compressed image. The proposed method is designed for deployment on satellites.
Strengths: A novel method to compress satellite images by using a diffusion model to reconstruct the encoded image is presented. The diffusion and decoder models are used to compensate for the lightweight encoder used. The method is original and may have the potential to be extended to other edge devices.
Weaknesses: * Authors claim in the literature review that no algorithm for compressing data on satellites exists. A simple search shows that this is not true. Below are two examples:
Artificial Intelligence Based On-Board Image Compression for the Φ-Sat-2 Mission
A Simple Lossless Algorithm for On-Board Satellite Hyperspectral Data Compression
* The difference between training and testing networks is unclear in the text or the figure.
* No comparisons with efficient compression algorithms used on satellites or edge devices
* Authors claim that the advantages of the proposed method are particularly visible on image seams (page 7); elsewhere in the paper, it is mentioned that the proposed method deals with image patches. How can the proposed method show advantages on image seams while it is working on the individual patches as input and not the stitched image?
Technical Quality: 3
Clarity: 2
Questions for Authors: * Please rewrite the text and the figures to clarify the distinction between training and testing stages.
* Update the literature review with models used for compression on-board satellites. (suggested search terms: on-board satellite compression, in-orbit satellite compression)
* Does the proposed compression and decompression work with the complete large image, or just the patches? When do the reconstructed patches get stitched? See the last comment in the "Weaknesses" section.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the very detailed review and suggestions.
Fig.S2 can be found in global response PDF.
### Q1. Please rewrite the text and the figures to clarify the distinction between training and testing stages.
Sorry for the confusion in writing. The training is divided into two stages.
In the first stage, we train the compression model. The image decoder $\mathcal{D}$ needs two parts of information for decoding: $y\prime$ in Fig.S2, the feature map extracted by the on-board image compression encoder, and $z_0$, the compensation information. We therefore introduce another image encoder (the Image Encoder in the Compensation Model part of Fig.S2) to extract the compensation information $z_0$ from the original image. In this stage, $\mathcal{E}$, $\tilde{\mathcal{E}}$ and $\mathcal{D}$ are trained together.
In the second stage of training, we freeze the parameters of $\mathcal{E}$, $\tilde{\mathcal{E}}$ and $\mathcal{D}$ and train the noise prediction network, with the goal of making the information generated by the diffusion model, denoted $z_0\prime$, as close to $z_0$ as possible, so that it produces the compensation information required by the decoder.
In the testing stage, the trained diffusion model generates the compensation information $z_0\prime$, so we no longer need $\tilde{\mathcal{E}}$: the $z_0\prime$ generated by the diffusion model replaces the $z_0$ extracted by $\tilde{\mathcal{E}}$ to help the image decoder decompress the image.
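As an illustration only (not the authors' code), the two-stage flow described above can be sketched with toy linear stand-ins for $\mathcal{E}$, $\tilde{\mathcal{E}}$ and $\mathcal{D}$; all shapes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins (hypothetical shapes, not the real networks):
E  = rng.normal(size=(16, 8))   # on-board compression encoder: x -> y'
Et = rng.normal(size=(16, 4))   # ground-side image encoder:    x -> z0 (training only)
D  = rng.normal(size=(16, 12))  # image decoder: (y', z) -> x_hat

def decode(y, z):
    return D @ np.concatenate([y, z])

x = rng.normal(size=16)         # original image (flattened toy vector)

# Stage 1: E, Et and D are trained jointly; the decoder sees z0 from Et.
y_prime = x @ E
z0 = x @ Et
x_hat_train = decode(y_prime, z0)

# Stage 2: E, Et and D are frozen; a noise-prediction network is trained so
# that the diffusion sample z0' approximates z0 (mimicked here with small noise).
z0_prime = z0 + 0.01 * rng.normal(size=z0.shape)

# Inference: Et is dropped; the generated z0' replaces z0 at the decoder.
x_hat_test = decode(y_prime, z0_prime)
```

Since $z_0\prime$ closely approximates $z_0$, the inference-time reconstruction stays close to the training-time one, which is why $\tilde{\mathcal{E}}$ can be discarded after training.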
We will improve the text and figures to guarantee a better reading experience.
### Q2. Update the literature review with models used for compression on-board satellites.
We will add the following content to the *Background section*.
There are many methods for remote sensing image compression, but most do not focus on onboard deployment. There are also some works on compressing data on satellites. [1] used a CAE model to extract image features and reduce the image dimension to achieve compression. However, this method only considers the reduction of image dimension and does not consider the arithmetic coding process in actual transmission, so the compression rate is fixed at 8 and the bpp cannot be flexibly adjusted. [2] proposed a complexity-reduced VAE, which reduces computation by shrinking the number of model channels and the entropy model structure. However, aggressively reducing the number of channels leads to a significant decline in model performance.
### Q3. No comparisons with efficient compression algorithms used on satellites or edge devices.
On the ground, there are some efficient compression algorithms for edge devices. However, ground edge devices are usually used to decompress images at the receiving end [3][4][5], for example when receiving a picture on a smartphone. Therefore, existing efficient compression algorithms for ground edge devices usually focus on decoder efficiency, which is not applicable to satellite scenarios; few works focus on encoder efficiency for on-board image compression. We choose the European Space Agency (ESA) paper [1] as a baseline, list the comparison below, and will add it in revision.
[1] conducts experiments on three platforms: an NVIDIA GeForce GTX 1650, an Intel Myriad 2 VPU, and an Intel Core i7-6700 processor. However, the power consumption of the GPU and CPU reported in the article is 83W and 45.5W respectively (Table V in [1]), which is impractical for power-constrained satellites, and no satellite deploys a GeForce GTX 1650 or an Intel i7-6700 as a payload. For the VPU, Table V reports 10.92 seconds to process 2048 $256\times256$ patches, from which we infer an onboard throughput of 98.36Mbps for [1], while COSMIC can achieve 507.37Mbps within the satellite-supported power range [6].
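The inferred throughput follows from simple arithmetic; the sketch below assumes 8-bit single-channel patches and decimal megabits (both assumptions on our part), which reproduces the reported figure to within rounding:

```python
patches = 2048
pixels_per_patch = 256 * 256
bits_per_pixel = 8           # assumption: 8-bit, single-channel patches
seconds = 10.92              # processing time reported in Table V of [1]

throughput_mbps = patches * pixels_per_patch * bits_per_pixel / seconds / 1e6
print(f"{throughput_mbps:.2f} Mbps")  # close to the reported 98.36 Mbps
```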
### Q4. Authors claim that the advantages of the proposed method are particularly visible on image seams (page 7); elsewhere in the paper, it is mentioned that the proposed method deals with image patches. How can the proposed method show advantages on image seams while it is working on the individual patches as input and not the stitched image?
Earth-observation satellites take large photos; for example, the swath width of WorldView-3 is 13.1km at 1m GSD [7], leading to several GBs of raw data per photo. The constrained computing resources on satellites cannot support compressing an entire photo. Therefore, satellite photos are typically tiled into small images and compressed onboard; after reception on the ground, they are decompressed and stitched together. COSMIC and all baselines follow this process and do not specially process seams. Generally speaking, without any special processing of the seams, the higher the decompressed image fidelity, the less noticeable the seams will be. Moreover, satellite image stitching algorithms are orthogonal to compressing each tile, and we'll discuss them in revision.
### Reference
[1] Artificial intelligence based on-board image compression for the Φ-Sat-2 mission. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023.
[2] Reduced-complexity end-to-end variational autoencoder for on board satellite image compression. Remote Sensing, 2021.
[3] Computationally efficient neural image compression. arXiv:1912.08771, 2019.
[4] Computationally-efficient neural image compression with shallow decoders. ICCV 2023.
[5] Complexity-guided slimmable decoder for efficient deep video compression. CVPR 2023.
[6] Wildfires From Space. [https://blogs.nvidia.com/blog/ororatech-wildfires-from-space/](https://blogs.nvidia.com/blog/ororatech-wildfires-from-space/), 2021.
[7]WorldView-3 [DG_WorldView3_DS_2014.pdf (spaceimagingme.com)](https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf)
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thanks a lot for clarifying. Q4 is clear now, but the remaining points require major revisions to the paper, especially Q3.
---
Rebuttal 2:
Title: Major revision for Q1
Comment: Thank you for the feedback.
Given the character limit (5000), we have to respond to each question separately in three comments.
This is the response to Q1. We'll make the revision as follows:
**Q1. Rewrite the training and testing stage**
We revise the *Method* section as follows:
1. **Add more detailed training process and rearrange the order for clearer expression.**
2. **Clarify the distinction between training and testing stages.**
The final revision in Sec4.3 from L198 to L205 is as follows (~~strike-through~~ means the removed content and ***Italic+Bold*** indicates newly added):
> ~~We first determine what details are lost in $\lfloor \mathbf{y} \rceil$.~~ ***The training is divided into two stages. In the first stage, we train the compression model.*** ~~Here, the image decoder receives two parts of information, one of which is the latent representation $\lfloor \mathbf{y} \rceil$ extracted by the encoder $\mathcal{E}$ on the satellite, and the other part is the information $\mathrm{z}_0$ extracted from the original image $\mathbf{x}_0$ by the image encoder $\tilde{\mathcal{E}}$ on the ground, which is used as compensation to $\lfloor \mathbf{y} \rceil$.~~***Since the Image decoder $\mathcal{D}$ needs two parts of information (i.e. $y\prime$ and $z_0$ in Figure 2) for decoding, we introduce another image encoder $\tilde{\mathcal{E}}$ to extract compensation information $z_0$ from the original image.*** In the first stage, $\mathcal{E}$, $\tilde{\mathcal{E}}$ and $\mathcal{D}$ are trained together. ***In the second stage of training, we freeze the parameters of $\mathcal{E}$, $\tilde{\mathcal{E}}$ and $\mathcal{D}$, and train the noise prediction network, with the goal of making the information generated by the diffusion model as close to $z_0$ as possible, denoted as $z_0\prime$, so as to generate the compensation information required by the decoder.***
During the inference phase, ~~$\mathrm{z}_0$ is replaced by $\mathrm{z}_0\prime$ generated from Gaussian noise by diffusion under the guidance of specific conditions.~~ ***the trained diffusion model can generate compensation information $z_0\prime$. Therefore, we no longer need $\tilde{\mathcal{E}}$. The $z_0\prime$ generated by the diffusion model replaces the $z_0$ extracted by $\tilde{\mathcal{E}}$ to help the image decoder decompress the image.***
>
---
Rebuttal 3:
Title: Major revision for Q2
Comment: This is the response to Q2. We'll make the revision as follows:
**Q2. Update the literature review in Background section**
We revise the *Background* section as follows:
1. **Remove the claim that no algorithm for compressing data on satellite exist.**
2. **Update the literature review of onboard image compression algorithm.**
The final revision in Sec2.1 from L112 to L119 is as follows (~~strike-through~~ means the removed content and ***Italic+Bold*** indicates newly added):
> ~~Although there are some works on remote sensing image compression, none of the previous work is targeted at on-board computing scenarios.~~ ***There are some compression methods specifically for remote sensing images [1,2,3]***. [1] uses discrete wavelet transform to divide image features into high-frequency and low-frequency components, and designs a frequency-domain encoding-decoding module to preserve high-frequency information, thereby improving compression performance. [2] explores local and non-local redundancy through a mixed hyperprior network to improve entropy model estimation accuracy. ***Few of these works focus on onboard deployment. [4] uses the CAE model to extract image features and reduce the image dimension to achieve compression, and deploys the model on a VPU. However, this method only considers the reduction of image dimension and does not consider the arithmetic coding process in actual transmission, so the compression rate can only be adjusted by changing the model architecture.***
>
Reference
[1] Remote sensing image compression based on high-frequency and low-frequency components. IEEE Transactions on Geoscience and Remote Sensing, 2024.
[2] Remote sensing image compression based on the multiple prior information. Remote Sensing, 15(8):2211, 2023.
[3] Global priors with anchored-stripe attention and multiscale convolution for remote sensing images compression. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023.
[4] Artificial intelligence based on-board image compression for the Φ-Sat-2 mission. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023.
---
Rebuttal 4:
Title: Major revision for Q3
Comment: This is the response to Q3.
**Q3. Add a baseline to compare with efficient compression algorithms used on satellites**
To further alleviate your concerns, we select a representative ESA work [4] on onboard compression for comparison. We use the same model structure as in [4] and triple the number of input channels to accept RGB images. For a fair comparison, we retrain the models on the fMoW dataset using the Adam optimizer with $lr = 1 \times 10^{-4}$ for 100 epochs and a batch size of 64.
- **COSMIC can still achieve SOTA results on distortion and perception metrics.** The results show that at similar PSNR, COSMIC achieves higher MS-SSIM and lower LPIPS and FID at lower bpp. As [4] only considers the reduction of image dimension, only a few fixed bpps can be achieved; due to the severe dimension reduction, a large amount of information is lost and the image reconstruction quality is poor.
| Method | bpp↓ | PSNR↑ | MS-SSIM↑ | LPIPS↓ | FID↓ |
| --- | --- | --- | --- | --- | --- |
| ESA_2023[4] | 1.0 | 28.07 | 0.979 | 0.1229 | 71.55 |
| COSMIC(ours) | 0.61 | 28.68 | 0.980 | 0.0462 | 19.44 |
| ESA_2023[4] | 2.0 | 29.51 | 0.986 | 0.0863 | 57.46 |
| COSMIC(ours) | 0.76 | 29.42 | 0.986 | 0.0349 | 16.95 |
- **For efficiency comparison, COSMIC reduces FLOPs by $3\times$ and increases throughput by $5\times$**, as shown in the following table. [4] conducts experiments on a VPU platform, and we use an edge device with a comparable level of computing power, the Jetson Xavier NX. For details, please refer to *Global response Q1* and *Rebuttal Q3*.
| Method | FLOPs (G) ↓ | Throughput (Mbps)↑ |
| --- | --- | --- |
| ESA_2023[4] | 15.4 | 98.36 |
| COSMIC(ours) | 4.9 | 507.37 |
Reference
[1] Remote sensing image compression based on high-frequency and low-frequency components. IEEE Transactions on Geoscience and Remote Sensing, 2024.
[2] Remote sensing image compression based on the multiple prior information. Remote Sensing, 15(8):2211, 2023.
[3] Global priors with anchored-stripe attention and multiscale convolution for remote sensing images compression. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023.
[4] Artificial intelligence based on-board image compression for the Φ-Sat-2 mission. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023.
---
Rebuttal Comment 4.1:
Comment: Thanks for the feedback. I raise my score to 6 borderline accept because of the new edits.
---
Reply to Comment 4.1.1:
Title: Thank You for Your Positive Feedback and Gentle Reminder
Comment: Dear Reviewer GEfY,
Thank you so much for your positive feedback! It encourages us a lot!
We noticed that you mentioned in your response that you would raise your score to 6; again, we sincerely appreciate this! However, the current score remains unchanged (at 4). We speculate that you may have forgotten to update your original rating in your busy schedule. We would be very grateful if you could kindly change the score before the end of the author-reviewer discussion at your convenience, to avoid potential misunderstandings during the reviewer discussion period.
Best Regards,
#Paper9518 Author(s) | Summary: This paper presents COSMIC, a coding scheme designed for satellite-to-ground image transmission. It addresses the disparity in computing performance between the satellite and ground station. COSMIC features a lightweight encoder on the satellite, reducing FLOPs by 2.6 to 5 times, to achieve a high image compression ratio and save bandwidth. On the ground, a diffusion-based model compensates for image detail loss during decoding. Together, these components facilitate efficient satellite-to-ground image transmission.
Strengths: 1. Unlike traditional methods that rely on arithmetic coding, this paper employs a generative model to reduce the precious information bandwidth, making for an interesting and novel approach.
2. The use of a diffusion model to supplement missing details is highly feasible. Despite the availability of various encoding and decoding techniques, this choice is both wise and practical.
3. Experimental results demonstrate that this method achieves better rate-distortion (RD) performance compared to other approaches.
Weaknesses: 1. The application scenario restrictions are not comprehensive. In particular, the paper overlooks an important limitation of satellites: their power capability. Generally, satellites are powered by photovoltaic panels, so power consumption must be considered when calculating hardware demands. Compared to power consumption, the influence of channel width may be less critical. Therefore, I would like to see a more in-depth discussion on power challenges.
2. There are significant issues in the writing of the thesis. For example, section 4.3 states: "We first determine what details are lost in y. In the initial training stage (Figure 2(b)), we train the image compression encoder $\mathcal{E}$, image encoder $\widetilde{\mathcal{E}}$, and image decoder $\mathcal{D}$ jointly." However, in this figure, the image compression encoder, image encoder, and image decoder are not correctly annotated, which seriously affects the interpretation and assessment of this paper. There are many similar instances throughout the text.
3. Different satellites may carry different sensors, leading to variations in the type of metadata ($m$), which may affect the method's performance. This potential variability requires more discussion.
Technical Quality: 3
Clarity: 1
Questions for Authors: See the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for this thoughtful review and we are glad to see their positive assessment.
Note that Fig.S2 can be found in the PDF attached to the global response.
### Q1. More in-depth discussion on power challenges.
Please see our general response 1 above.
### Q2. Issues in the writing.
Sorry for the confusion. The image compression encoder is deployed on the satellite; it is lightweight and used to extract satellite image features, and corresponds to the image encoder on the satellite in the Compression Model part of Fig.S2. The image encoder corresponds to the Image Encoder module in the Compensation Model part of Fig.S2. This module is used only during training: it extracts the information lost in the lossy compression process to provide compensation for decompression, and serves as the target when fine-tuning the diffusion model to generate that compensation information. During inference, this module is not used, and the compensation information is generated by the diffusion model. The Image Decoder corresponds to the Image Decoder module in the Compression Model part of Fig.S2 and is used for decompression. We have made detailed annotations in Fig.S2 and will further explain and modify them in revision.
### Q3. Different satellites may carry different sensors, leading to variations in the type of metadata (m), which may affect the method's performance.
Good question. We demonstrate that **COSMIC achieves SOTA results with only three common metadata fields (i.e. location, timestamp, and GSD) available on satellites**. Different satellites carry different sensors, but several sensors are present on almost all satellites. Taking LANDSAT8, launched by NASA in 2013 [1], and Sentinel-2, launched by ESA in 2015 [2], as examples, both can collect location, timestamp, and GSD. If we use only these three metadata fields at a bpp of 0.46, COSMIC's PSNR and MS-SSIM decrease slightly, from 27.31 to 27.20 and from 0.969 to 0.968 respectively, still guaranteeing SOTA results. We believe more metadata can achieve better results, and COSMIC still achieves SOTA performance even with only commonly available metadata. We'll add this in revision.
### Reference
[1] Landsat 8 (L8) Data Users Handbook Version 5.0 [https://www.usgs.gov/landsat-missions/landsat-8-data-users-handbook](https://www.usgs.gov/landsat-missions/landsat-8-data-users-handbook)
[2]Sentinel-2 User Handbook [sentinel.esa.int/documents/247904/685211/Sentinel-2_User_Handbook](https://sentinel.esa.int/documents/247904/685211/Sentinel-2_User_Handbook)
---
Rebuttal Comment 1.1:
Comment: Your explanations effectively addressed my concerns. Considering the technical reliability and potential impact, I will maintain my original rating.
---
Rebuttal 2:
Title: A Gentle Reminder of the Final Feedback
Comment: Please allow us to thank you again for reviewing our paper and for your insightful comments, and in particular for recognizing the strengths of our paper in terms of its novel method, high feasibility, good soundness, and good contribution.
Kindly let us know if our response and the new experiments have properly addressed your concerns. We are more than happy to answer any additional questions during the post-rebuttal period. Your feedback will be greatly appreciated. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and for acknowledging that the paper is well-written with a clear logical flow (AGki), the method is interesting and novel (gvA5/GEfY/AGki), highly feasible (gvA5/upSj/AGki), and has the potential to be extended to other edge devices (GEfY), and that the evaluation is comprehensive (gvA5/upSj). We have carefully considered your comments and will take them into account to further improve our work. Before responding to each reviewer individually, we address common concerns as follows.
### Q1. Power consumption in real-world application
Thanks to Reviewer AGki for useful suggestions. We plan to add the following contents in revision. **COSMIC can be deployed on a real satellite** by deploying COSMIC's encoder $\mathcal{E}$ on an embedded GPU (i.e. Nvidia Jetson Xavier NX) which was already deployed on various satellites in-orbit (FOREST-1, 2 [1], Chaohu-1 [2], Optimus [3], etc.). After training, we convert and deploy the image compression encoder $\mathcal{E}$ compatible with the NX system via TensorRT 8.2.1 SDK (Jetpack 4.6.1, CUDA 10.2, cuDNN 8.2.1). We use the tegrastats tool to monitor NX's power consumption. During the compression, the power consumed by $\mathcal{E}$ is between 5.7W and 7.7W, which can be fully supported by satellites like FOREST-1, 2 since their payloads can support NX running at 15W [1].
### Q2. More baselines for comparison
Different from natural images, remote sensing images are inherently multimodal: they contain much sensor data in addition to the images themselves (e.g. timestamps and location). Our insight is that **the multimodal information in sensor data serves as instructions for image contents** to a certain extent. For example, a satellite's location can roughly determine whether its photo shows a city, desert, or ocean. Thus, COSMIC is novel in that it exploits the multimodal nature of sensing data for better decompression, which cannot be generalized to natural images lacking such multimodal instructions. Moreover, various existing methods rely on complex encoders and thus can hardly meet the computing and power constraints of satellites.
Following the suggestions of reviewers GEfY and AGki, we add the representative remote sensing image compression work HL-RS [5] and a recent work Elic [6] as baselines. Please note that LIC_TCM [7] has 51.19G FLOPs, more than 10 times that of COSMIC, so we do not include it as a baseline. The encoder efficiency results (FLOPs) are shown below (* marks newly added baselines).
| Method | Elic*[6] | HL-RS*[5] | CDC | COLIC | Hific | mbt-2018 | cheng-2020 | COSMIC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FLOPs(G) | 21.78 | 11.87 | 13.1 | 26.4 | 26.4 | 8.07 | 24.45 | 4.9 |
We also show the rate-distortion (perception) results across 12 metrics in Fig.S1 in the PDF attached to the global response. In 8 of them, COSMIC achieves SOTA results at all bpp. In the other 3 metrics, COSMIC achieves SOTA results at certain bpp.
### Reference
[1] Wildfires From Space. [https://blogs.nvidia.com/blog/ororatech-wildfires-from-space/](https://blogs.nvidia.com/blog/ororatech-wildfires-from-space/), 2021.
[2] "Chaohu 1". Gunter's Space Page. Retrieved July 12, 2024, from [https://space.skyrocket.de/doc_sdat/chaohu-1.htm](https://space.skyrocket.de/doc_sdat/chaohu-1.htm).
[3]Space Machines Company, Optimus OTV, [https://space.skyrocket.de/doc_sdat/optimus-otv.htm](https://space.skyrocket.de/doc_sdat/optimus-otv.htm), 2024.
[4] Jetson Modules [https://developer.nvidia.com/embedded/jetson-modules](https://developer.nvidia.com/embedded/jetson-modules)
[5] Remote sensing image compression based on high-frequency and low-frequency components. IEEE Transactions on Geoscience and Remote Sensing, 2024.
[6] Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding. CVPR 2022.
[7] Learned image compression with mixed transformer-cnn architectures. CVPR 2023.
Pdf: /pdf/a934766087383247f63377efcae02b39226de593.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Accept (poster) | Summary: Sequential recommendation is a well-studied problem with impact across industries that try to personalize a user's experience based on their past interactions. The recent popularity of LLMs has led to a lot of research into their use for this task through various generation methodologies. One such method is the use of LoRA to adapt the LLM to a specific data distribution (i.e., dataset), leading to better recommendations, but it has been shown that the high variability of the input sequences makes it hard to capture all kinds of patterns. This leads to negative transfer of information; to solve it, the authors propose a method called iLoRA, which is essentially MoE meets LoRA. Instead of a single LoRA module they propose having multiple LoRA "experts", with the motivation that these will capture different kinds of patterns and information. To combine the outputs of the experts they use an "attention"-like gating mechanism that looks at the input sequence embeddings (created using existing techniques) and computes a softmax to get the weight assigned to each expert.
The paper contains extensive experimentation to prove out the ideas and contributions and validate the motivation behind the idea. The metrics show that this approach is better than the other SoTA comparable methods in industry.
Strengths: The paper is written clearly, is easy to follow and explains the key ideas that inspired and led to the approach described. The experiments conducted (including ablation studies) provide sufficient evidence to prove the claims and contributions.
The paper is able to take key concepts (MoE, Attention, LoRA, item recommendations) in research and combine them in a new way to solve a problem like sequential recommendation which is significant across many industries.
The results on the metrics being used by the paper outperform other SoTA techniques which proves out the significance for industry use. The performance gain is also demonstrated across 3 popular datasets which gives it authenticity.
The idea is original, although similar techniques exist in research where people have either tried a weighted sum of multiple LoRAs or swapped them out at runtime based on a gating mechanism. I haven't yet seen an application in recommendation systems, though.
Weaknesses: The authors should have shown the contribution by expanding to more metrics - at the least HitRate@K, but also metrics like NDCG@K.
These are very common metrics in the recommendation space and provide a more holistic view of the output quality.
The novelty in the paper is limited as it combines common research ideas in a not so unique way.
Also, the impact of training data size on the model performance was not studied - recommendation models can be heavily dependent on this due to the generic sparse nature of data so this analysis would give more weight and significance to the paper.
I would have liked to understand more on the out of domain abilities of such a model and also specific abilities on cold start recommendations - which is a big challenge in this space. This was not discussed.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Did you do any comparisons with the idea of training multiple full LoRA's and then doing a weighted sum of them OR swapping them based on instance embeddings ?
2. Other than the overall metrics did you do any analysis on performance when the data is sparse , or situations of cold start recommendations ?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have discussed the major limitations. The impact of invalid outputs due to the generative approach was not discussed in detail - this could have potential risks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer wwLq
**Comment:**
We gratefully thank you for your valuable comments! Here we meticulously give point-by-point responses to your comments and further revise our manuscript following your suggestions. We sincerely appreciate the depth of your insights, which have undoubtedly enriched our work.
> **Comment 1: More evaluation metrics** - "The authors should have shown the contribution by expanding to more metrics - at the least HitRate@K, but also metrics like NDCG@K. These are very common metrics in the recommendation space and provide a more holistic view of the output quality."
We fully agree that additional evaluation metrics are needed. Inspired by your comment, for the LastFM and MovieLens datasets we have expanded the evaluation metrics to include HitRatio@3, HitRatio@5, NDCG@3, and NDCG@5, as **Table 1** in the one-page uploaded PDF shows.
> **Comment 2: Limited novelty** - "The novelty in the paper is limited as it combines common research ideas in a not so unique way."
Thank you very much for your feedback! In the context of recommendation systems, user preferences are reflected in their historical interaction sequences. Unlike single tokens, which lack specific meanings in the recommendation context, user representations encompass rich user preferences and can serve as inputs for gating, outputting instance-wise expert activation weights. While iLoRA draws inspiration from the soft-attention concept, it innovatively guides routing using representations of user interaction history, which is the core innovation of the entire iLoRA framework. By routing with user representations, iLoRA provides instance-wise activation weights for each sequence, enabling recommendations that better capture user differences and fine-grained preferences, thereby mitigating negative transfer.
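As a minimal illustration of this instance-wise routing (a NumPy sketch, not our actual implementation; the dimensions and the single-linear-layer gate are assumed for exposition):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 64, 4, 4  # hidden size, per-expert rank, expert count (assumed; n_experts * r = 16)

# Per-expert low-rank factors: expert i contributes B[i] @ A[i]
A = rng.normal(size=(n_experts, r, d)) * 0.01
B = rng.normal(size=(n_experts, d, r)) * 0.01
W_gate = rng.normal(size=(d, n_experts)) * 0.01  # gating network, a single linear layer here

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ilora_delta(x, seq_repr):
    """Weight each expert's low-rank update by softmax(gate(seq_repr)),
    where seq_repr is the representation of the user's interaction history."""
    w = softmax(seq_repr @ W_gate)  # instance-wise expert activation weights
    delta = sum(w[i] * (B[i] @ (A[i] @ x)) for i in range(n_experts))
    return delta, w

x = rng.normal(size=d)         # activation entering the adapted layer
seq_repr = rng.normal(size=d)  # e.g., a SASRec sequence embedding
delta, w = ilora_delta(x, seq_repr)
assert np.isclose(w.sum(), 1.0) and delta.shape == (d,)
```

Each user sequence thus yields its own mixture of the experts' low-rank updates, rather than a single shared LoRA update for all users.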
> **Comment 3: Lack of data size impact experiments**
We appreciate your insightful question regarding the impact of training data size on model performance. Inspired by your comment, we have delved into the influence of the generic sparse nature of data on final recommendation effectiveness. Specifically, on the LastFM dataset, we have conducted experiments at varying degrees of data sparsity, dropping sequences randomly by 25%, 50%, 75%, and 90% at the user level. Subsequently, we have evaluated the performance of both traditional sequence recommendation models (SASRec, Caser, GRU4Rec) and LLM-based recommenders (TALLRec, LLaRA, iLoRA) across these datasets, as illustrated in the **Figure 3** in one-page uploaded pdf.
Several key observations emerged from our study:
- Traditional sequence models exhibit a relatively gradual improvement in recommendation performance as training data size increases. In contrast, LLM-based recommenders demonstrate rapid performance gains with smaller training sets. This phenomenon may stem from LLMs quickly capturing foundational patterns of the recommendation scenario from limited samples, thereby bridging the gap between training data and real-world recommendation scenarios and manifesting "emergent" behavior on unseen data. However, as training data size grows, diminishing marginal returns on model performance become evident, albeit maintaining a positive correlation with data size within our training set.
- Our proposed iLoRA framework consistently outperforms baseline methods under equivalent training data sizes, underscoring the superior effectiveness of iLoRA in recommendation tasks.
- Even with only 10% of user training data, all LLM-based recommenders employed in our study outperform traditional sequence recommendation methods in terms of HitRatio@1, highlighting the substantial potential of LLM-based recommendations in advancing traditional recommendation paradigms.
In conclusion, these findings underscore the nuanced relationship between training data size and model performance in recommendation systems, particularly highlighting the unique advantages of LLM-based approaches in sparse data environments.
> **Comment 4: Discussion about out-of-domain and cold-start recommendation abilities**
We value your comments.
To further investigate the out-of-domain capabilities of LLM-based recommenders, we have conducted an additional experiment, as depicted in **Figure 4** in the one-page uploaded PDF. Specifically, we fine-tuned iLoRA using training data from different domains: 1) iLoRA (LastFM), 2) iLoRA (MovieLens), and 3) iLoRA (Steam). We then evaluated the models on the test sets of LastFM, MovieLens, and Steam. Our results demonstrate iLoRA's ability to generalize across domains. For instance, after fine-tuning exclusively on MovieLens data, 'iLoRA (MovieLens)' exhibited strong performance on the LastFM dataset, even surpassing some traditional sequence models trained and tested directly on LastFM. This finding is impressive, indicating that iLoRA possesses cross-domain generalization capabilities beyond single-domain adaptation.
> **Comment 5: Comparison with ensemble LoRA models and instance-based swapping** - "Did you do any comparisons with the idea of training multiple full LoRA's and then doing a weighted sum of them OR swapping them based on instance embeddings?"
We appreciate your comments. We considered the approach you suggested, which theoretically could lead to better recommendation performance. However, in practice it would incur greater training resource expenses. Our iLoRA framework aims to mitigate negative transfer while maintaining a consistent trainable parameter count. Thus, we have opted to keep the total parameter count fixed, specifically expert number multiplied by each expert's rank equals 16. | Summary: The authors focus on extending the sequential recommendation task with the help of large language models. They propose instance-wise LoRA and integrate it with a mixture-of-experts framework to capture specific aspects of user preferences. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed model.
Strengths: 1. The authors proposed iLoRA for sequential recommendation task, which address previous works failing to capture individual variability. It creates a diverse array of experts and captures specific aspects of user preferences, guiding the gating network to output customized expert participation weights.
2. Extensive experiments on two benchmark datasets demonstrate the effectiveness of iLoRA. The ablation studies clearly show that every part of the proposed model makes sense.
3. The paper is well-written. Figures and tables are very clear and easy to read.
Weaknesses: See Questions to authors.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do you partition the sequence data into 8 clusters? Clustering by calculating the Euclidean distance of what — the output embeddings of SASRec? Please explain why you do that.
2. I think that the proposed MoE framework in this manuscript is akin to the soft-attention mechanism with some shared-weight parameters. What are the in-depth or essential differences between the proposed MoE framework and soft-attention mechanisms?
3. The ablation study in Section 4.3 lacks further analysis; the authors just repeat the results already shown in the figures. For example, 4 experts achieve the optimal performance while 8 experts reduce it. Why? With fewer training epochs, 8 experts outperform, but this advantage does not hold as training epochs increase. Is it because the hidden dimension r of LoRA limits the performance of having more experts?
4. The writing language can be further checked, since there are minor language mistakes such as nvidia-smi a100.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The author explains the limitations of the model constructed in the article, such as the recommendation strategy and universality that still need to be explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We gratefully thank you for your valuable comments! Here we meticulously give point-by-point responses to your comments, and further revise our manuscript following your suggestions. Hope that our responses can address all your concerns.
---
> **C1: Questions about experimental details and motivation**
Thank you for your feedback. Existing research assumes that gradient conflicts, defined as negative cosine similarity between gradients, lead to negative transfer [1], where different gradients interfere with each other, affecting model performance [2]. To support our research, we aim to investigate whether training a single model to handle all recommendation sequences could result in negative transfer. Previous research [2] suggests that close tasks enjoy similar loss geometries and vice versa. Therefore, we need to partition the original dataset based on user similarity.
In recommendation scenarios, user preferences are implicitly reflected in their historical interaction sequences. Therefore, we partition the user population using representations derived from their historical interaction records. **SASRec** stands as a cornerstone in sequential recommendation tasks. As you mentioned, we utilize the model to generate rich collaborative information representations for each user. Based on these representations, we apply the **K-means** clustering algorithm to divide the dataset into clusters, with the number of clusters set as a hyperparameter (in this case, 8). Clustering based on **Euclidean distance** and cosine similarity yields similar results, with comparable sequence counts per cluster.
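For concreteness, this clustering step can be sketched as follows (a self-contained K-means with Euclidean distance; random vectors stand in for the SASRec user representations, and the dimensionality is illustrative):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means over user representations (illustrative stand-in for
    the clustering step; not tied to any particular library)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(iters):
        # Squared Euclidean distance from every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):  # recompute each non-empty cluster's center
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Stand-in for per-user sequence representations produced by SASRec
users = np.random.default_rng(1).normal(size=(200, 64))
labels, centers = kmeans(users, k=8)
assert labels.shape == (200,) and len(np.unique(labels)) <= 8
```

In our setting, each resulting cluster groups users with similar interaction histories, which lets us examine gradient conflicts between clusters.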
---
> **C2: Lack of clear distinction from soft attention mechanism**
Thank you for your feedback! As you noted, our proposed iLoRA framework draws inspiration from the soft-attention concept, obtaining activation weights by weighting and summing multiple experts. However, the innovation lies in the differences between iLoRA and traditional soft-attention methods. In the context of recommendation systems, user preferences are reflected in their historical interaction sequences. Soft-attention routing methods typically employ token-level routing or use the entire input *x* as the routing input; a single token lacks specific meaning in the recommendation context, whereas using the entire prompt as routing input introduces considerable redundancy and irrelevant information. In contrast, user representations encompass rich user preferences and can serve as gating inputs, outputting instance-wise expert activation weights.
This innovation inspired the development of iLoRA, which innovatively guides routing using representations of user interaction history—a core and critical feature that distinguishes iLoRA from traditional soft-attention mechanisms. By routing with user representations, iLoRA effectively provides instance-wise unique activation weights for each different sequence, enabling recommendations that better capture user differences and fine-grained preferences, thereby mitigating the negative transfer.
To demonstrate the effectiveness of iLoRA compared to traditional soft attention mechanisms, we have conducted an additional ablation experiment. We compared the performance of iLoRA (user representation routing) against two soft attention routing methods (token-level routing and full input *x* routing) on the LastFM dataset, as depicted in **Figure 2** in one-page uploaded pdf. This experiment underscores our innovative use of user historical interaction sequences for routing.
---
> **C3: Lacks further analysis in ablation study**
We appreciate your insightful observation, which highlights an intriguing phenomenon we have uncovered in our research. Thank you for pointing out the shortcomings in our analysis in section 4.3; we will revise our paper accordingly.
Your observation aligns with a common finding when applying MoE+LoRA in other domains [4-5]. Specifically, increasing the number of experts does not necessarily improve model performance.
We propose the iLoRA framework to enhance model expressiveness while maintaining a constant trainable parameter count, mitigating negative transfer. Therefore, we maintain a fixed total parameter count, i.e., expert number * each expert's rank = 16. Increasing the number of experts benefits by allowing the model to focus more on individual differences between users. Conversely, increasing the hidden dimension *r* of each expert can encourage the model to emphasize universal preferences across the user population, but it may simultaneously limit the expressive power of additional experts. Thus, under the constraint of a fixed total parameter count, this becomes a trade-off scenario. The optimal number of experts likely depends on the data's heterogeneity. In our experimental setup, the model indeed achieved optimal performance with `expert num=4`.
Regarding your second point, we agree with your analysis. A smaller number of experts restricts the iLoRA framework's ability to capture fine-grained user preferences. We appreciate your insightful discussion on this issue.
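The fixed parameter budget above can be verified with a quick count: for a layer of width d, each rank-r expert adds d·r + r·d parameters, so every (expert number, rank) pair whose product is 16 costs the same as a single rank-16 LoRA (a back-of-the-envelope check with an illustrative d, ignoring the small gating network):

```python
d = 4096  # hidden size of the adapted layer (illustrative)

def lora_params(n_experts, rank, d=d):
    # Each expert holds A in R^{rank x d} and B in R^{d x rank}
    return n_experts * (rank * d + d * rank)

# All configurations with expert_num * rank = 16
budgets = {(n, 16 // n): lora_params(n, 16 // n) for n in (1, 2, 4, 8, 16)}
assert len(set(budgets.values())) == 1  # every configuration costs the same
assert budgets[(1, 16)] == 2 * 16 * d   # equals a single rank-16 LoRA
```

This is why varying the expert count in our ablations trades expert diversity against per-expert rank rather than changing the trainable parameter count.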
> **C4: Language mistakes**
We sincerely apologize for the oversight and have promptly addressed these issues in the manuscript. We will carefully review the language issues and make corrections in the revised version.
Once again, we appreciate your thorough review and conscientious feedback on our work!
[1] Characterizing and avoiding negative transfer.
[2] Investigating and Improving Multi-task Optimization in Massively Multilingual Models.
[3] LLaRA: Large Language-Recommendation Assistant.
[4] Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters.
[5] MOELoRA: An MOE-based Parameter Efficient Fine-Tuning Method for Multi-task Medical Applications.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you very much for your valuable and constructive feedback. We greatly appreciate the time and effort you have put into providing us with these insights, which have significantly helped us in refining our paper.
Regarding your primary concern about **experimental details and motivation**, we have provided additional discussion and clarification. For the point you raised about the **lack of clear distinction from the soft attention mechanism**, we have thoroughly explained the differences and advantages of our proposed iLoRA framework compared to the soft-attention mechanism. Additionally, we have included an experiment demonstrating the performance benefits of iLoRA over soft-attention routing in **Figure 2** in the one-page uploaded PDF. Concerning the **lack of further analysis in the ablation study**, we have added a detailed analysis following your suggestion. We have also thoroughly examined the linguistic concerns raised and will make the appropriate revisions in the final version.
Thank you once again for your suggestions, and we hope that our response effectively addresses your concerns! We truly appreciate your support, encouragement, and understanding.
Best regards,
Authors | Summary: The paper introduces iLoRA, which combines LoRA with user representation-guided mixture of expert architecture. The motivation is clear and the writing is good. Extensive experiments are conducted on three public datasets, demonstrating the performance of the proposed method.
Strengths: 1. Timely study on large language models and sequential recommendation.
2. The motivation of the proposed method, i.e., should be more personalized LoRA parameters, is clear and convincing.
3. The writing is good and the paper is easy to follow.
4. Experiments are conducted on three public datasets, demonstrating the performance of the proposed iLoRA technique.
5. Code is available during the reviewing phase.
=== Update After the Rebuttal Phase ===
The rebuttal includes new results and clarifications, which are helpful in addressing my concerns. In recent days, I also conducted experiments using the provided code, and the results further verify the paper’s conclusions. As a result, I will raise my rating and vote for acceptance.
Weaknesses: 1. The evaluation setting is kind of fragile. For each input sequence, only 20 candidate items are randomly selected as negative samples. In this setting, the results may be influenced a lot by randomness. In addition, if the proposed method only applies to a candidate size of 20 items, the value of this work to the real recommender systems can be doubted.
2. References of MoE are limited. The related works about MoE are quite concise. The author(s) are encouraged to discuss more about how MoE is applied to LLMs and recommendation models.
3. Presentation issues.
1. Lines 554 - 562 and lines 563 - 570 are duplicated.
2. Figure 3 (c) is suggested to be replaced by a vector graph to make it more clear.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please find the details in "Weaknesses".
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and valuable comments. Your main suggestions about the evaluation setting help us substantiate wide applicability of our proposed iLoRA. To address your concerns, we have detailed our responses point-to-point below.
---
> **C1: Fragile Setting of Sampled Evaluation**
Thanks so much for your great suggestions! The issue you raised is critical and reflects a common limitation in current LLM-based recommendation research. We are actively exploring methods such as codebook approaches to address full-ranking recommendation scenarios. As the following table shows, preliminary experimental results in this direction are promising.
| | Amazon-Sports | | | |
| :--- | --- | --- | --- | --- |
| | HitRatio@5 | HitRatio@10 | NDCG@5 | NDCG@10 |
| SASRec | 0.0324 | 0.0547 | 0.0182 | 0.0247 |
| Our new work | 0.0596 | 0.0784 | 0.0389 | 0.0477 |
We note your mention of the experimental setting where we select the next item from candidate items. In traditional recommendation research, there are excellent studies that adopt sampling-based evaluation methods.
Inspired by your query about whether our iLoRA training framework is limited to tasks with 20 candidate items for next-item prediction, we have extended our experiments on the LastFM and MovieLens datasets to settings with 30 and 40 candidate items, respectively. We benchmarked our approach against TALLRec and LlaRA, with detailed results presented in the table below. We repeated the experiments five times with different random seeds and obtained a p-value below 0.05, indicating that the improvements of our proposed iLoRA framework are statistically significant.
| | MovieLens | | LastFM | |
| :--- | --- | --- | --- | --- |
| | HitRatio@1(30 candidates) | HitRatio@1(40 candidates) | HitRatio@1(30 candidates) | HitRatio@1(40 candidates) |
| TALLRec | 0.3277 | 0.2833 | 0.3461 | 0.2972 |
| LlaRA | 0.3513 | 0.3250 | 0.3772 | 0.3250 |
| iLoRA | 0.4066 | 0.3666 | 0.4283 | 0.3735 |
Besides, for the LastFM and MovieLens datasets, we have expanded the evaluation metrics, including HitRatio@3, HitRatio@5, NDCG@3, and NDCG@5, as the following table shows.
| | LastFM | | | | MovieLens | | | |
| :--- | --- | --- | --- | --- | --- | --- | --- | --- |
| | HitRatio@3 | HitRatio@5 | NDCG@3 | NDCG@5 | HitRatio@3 | HitRatio@5 | NDCG@3 | NDCG@5 |
| GRU4Rec | 0.4370 | 0.4964 | 0.3544 | 0.4110 | 0.4831 | 0.5584 | 0.4075 | 0.4702 |
| Caser | 0.4445 | 0.4918 | 0.3564 | 0.4232 | 0.4892 | 0.5603 | 0.4134 | 0.4724 |
| SASRec | 0.4253 | 0.4792 | 0.3382 | 0.4073 | 0.4256 | 0.5132 | 0.3881 | 0.4239 |
| TALLRec | 0.6814 | 0.7473 | 0.3900 | 0.4650 | 0.4874 | 0.5408 | 0.4290 | 0.4601 |
| LlaRA | 0.7223 | 0.7862 | 0.6016 | 0.6972 | 0.5505 | 0.6189 | 0.4752 | 0.5067 |
| iLoRA | 0.7873 | 0.8350 | 0.6894 | 0.7530 | 0.6464 | 0.7084 | 0.5603 | 0.5905 |
---
> **C2: Limited Reference of MoE**
Thank you for highlighting this valuable point. Following your feedback, we will revise the relevant content in the Related Work section. Due to space constraints, detailed citation information will be included in the References.
Our approach to integrating MoE with LLM considers three perspectives:
- **NLP Perspective**: Mixture-of-Experts modifies feedforward neural network layers into sparse activation experts, significantly expanding model capacity without a substantial increase in computational costs. Recent explorations of MoE have evolved from sample-level to token-level MoE, with most works aiming to scale up model parameters while reducing computational overhead. Our approach differs significantly. We employ a MoE-like structure that nearly preserves total parameter count to address negative transfer in LLM-based recommendation systems.
- **LoRA+MoE Perspective**: In the era of LLMs, researchers have integrated MoE concepts into PEFT methods to enhance model performance. LoraHub trains multiple LoRA models and selects different LoRA combinations based on data type during inference. MOELoRA improves model efficiency in medical multitasking by incorporating MoE structures. However, these methods require data types as input to the router during training, which necessitates prior knowledge of data types for selecting LoRA combinations during inference. Our approach is distinct as iLoRA is an end-to-end framework that does not require a priori knowledge of data types for inference.
- **Recommendation Perspective**: Traditional recommender systems, exemplified by MMoE, demonstrate remarkable capabilities in task scenarios. With the advent of large models, methods like OneRec leverage MoE structures to design expert combinations for multi-domain user embedding, integrating collaborative knowledge into LLMs. Our approach innovates by combining MoE principles with PEFT in LLM-based recommendation scenarios, addressing negative transfer effects caused by individual user differences during fine-tuning. We further propose user representation-guided routing for instance-wise recommendations.
---
> **C3: Typo & Presentation issues**
Thank you for reviewing our work so conscientiously, and we sincerely apologize for the oversight. We will carefully review the presentation issues and make corrections in the revised version. Additionally, we will review the presentation of the entire manuscript.
Once again, we appreciate your thorough review and conscientious feedback on our work!
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to address my concerns. The rebuttal addressed most of them. I’ll raise my rating on "Soundness" and keep my overall rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We would like to express our sincere gratitude for your valuable comments. Your insights on the **Fragile Setting of Sampled Evaluation** and **Limited Reference of MoE** have significantly helped us in establishing a more comprehensive and robust evaluation standard for the proposed iLoRA framework, as well as in providing a more thorough discussion of the relevant content.
We hope that our responses have addressed most of your concerns. If so, we kindly ask if you could consider revising the overall rating of our paper. If there are any remaining concerns, we would be more than happy to further discuss them with you, especially since the rebuttal period is nearing its end. Please let us know if you have any additional questions.
Thank you once again for your support and understanding!
Best regards,
Authors | Summary: The paper addresses the challenge of personalizing language models for sequential recommendation tasks, where user behaviors exhibit significant individual variability. The proposed solution, Instance-wise LoRA (iLoRA), adapts the mixture of experts concept to tailor LLMsfor this variability.
Strengths: The iLoRA framework proposes splitting the standard LoRA module into multiple experts, each capturing specific aspects of user behavior. Key components of this solution include:
- Splitting Low-Rank Matrices into Experts: Dividing the projection matrices into an array of experts to capture different aspects of user preferences.
- Instance-wise Attentions: Using a gating network to generate attention scores for each expert based on the user's historical item sequence. This customization helps in adapting the model more accurately to individual user behaviors.
- Fine-tuning with Hybrid Prompting: Applying the personally-activated LoRA to fine-tune the LLM, thus mitigating negative transfer and enhancing the model's adaptability.
The solution leverages the mixture of experts approach to ensure scalability and efficiency, maintaining the same total number of parameters as the standard LoRA to avoid overfitting.
The starting point of the article's issue is reasonable, and personalized LoRA is an impressive idea.
Weaknesses:
- The technical contribution of the paper is insufficient. For example, simply splitting the low-rank matrices into multiple experts representing different aspects seems inadequate and has room for expansion. It could be more refined by setting the number of experts based on different user clusters and controlling the low-rank size of the corresponding experts according to the number of users in each category.
- The paper proposes a framework rather than a model. However, in the main experiments, the authors compare it with numerous models instead of fine-tuning-based methods, which is puzzling. Additionally, the framework proposed in the paper seems to be a general one, or are there any challenges that limit it to sequential recommendation? It might be worthwhile to extend beyond sequential recommendation tasks to broader scenarios.
- The effectiveness of this method needs further validation in extremely large-scale datasets or user groups, or with more evaluation metrics.
- It is hoped that the code will be made open-source for careful review. I may adjust my score based on this.
- The reproducibility does not seem to be good enough, and no code link was found
Technical Quality: 3
Clarity: 3
Questions for Authors: see above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments, which greatly improve our paper. Below we provide the point-to-point responses to address your concerns and clarify the misunderstandings of our proposed method.
---
> **C1: More Refined by Integrating Experts with Clustering**
Thanks. Based on your suggestions, we have made clarifications and additional experiments:
- **Clarification of Our Technical Contribution**: We agree that, for instance-wise LoRA, a more refined idea will better show its potential. However, our paper focuses on a **simple yet effective implementation** of this idea.
- **Clarification of User Clustering**: The user clustering (in Sec Introduction & Experiments) operates independently from the expert design (in Sec Method).
- The role of clustering is to show whether a uniform LoRA causes negative transfer and iLoRA can mitigate it, where the number of user clusters is a hyperparameter.
- The role of experts is to represent different aspects of users, such that their mixture activated with personal attentions makes the LLM specialized in a user instance, where the number of experts is another hyperparameter.
- **Additional Experiment of More Refined Method**: Based on your comments, we conducted additional experiments where the total rank of all experts is 16, while adjusting each expert's low-rank size, as **Fig 1** in one-page uploaded pdf shows.
- **Stability and Complexity**: Integrating clustering adds complexity and potential instability due to initialization sensitivity and the dynamic nature of user behavior.
- **Empirical Observations**: Our findings suggest that uniform expert size distribution in our current model approximates the benefits of clustering but with greater simplicity and stability.
---
> **C2: Missing Comparison with Fine-tuning Methods & Generalization in Broader Scenarios**
Thanks. Below we clarify the misunderstanding of compared baselines and show the results of additional experiments.
There are various parameter-efficient fine-tuning strategies [1], such as LoRA, adapters, and prefix tuning.
- **Fine-tuning Baselines**: We compared iLoRA with the fine-tuning baselines (TALLRec, LLaRA). Specifically, TALLRec tunes LoRA on text-only prompts, and LLaRA tunes LoRA along with a simplified adapter on hybrid prompts.
- **Additional Experiments of More Fine-tuning Baselines**: We added an additional baseline, which adopts prefix-tuning for recommendation.
We visualized the fine-tuning paradigms in **Fig 5** in the one-page uploaded PDF and present their results in the table below:
| | | LastFM | | MovieLens | | Steam | |
| :-------- | ------------------ | ---------- | ----------- | ---------- | ---------- | ---------- | ---------- |
| | | ValidRatio | HitRatio@1 | ValidRatio | HitRatio@1 | ValidRatio | HitRatio@1 |
| TALLRec | LoRA | 0.9836 | 0.4180 | 0.9263 | 0.3895 | 0.9840 | 0.4630 |
| PrefixRec | Prefix Tuning | 0.9627 | 0.3754 | 0.8836 | 0.2973 | 0.9273 | 0.4108 |
| LLaRA | LoRA + Adapter | 1.0000 | 0.4508 | 0.9684 | 0.4421 | 0.9975 | 0.4949 |
| iLoRA | instance-wise LoRA | 1.0000 | 0.5000 | 0.9891 | 0.5275 | 0.9981 | 0.5264 |
Clearly, over the three datasets, iLoRA outperforms the baselines using various fine-tuning strategies (LoRA, LoRA+adapter, prefix-tuning) in terms of the HitRatio@1 metric.
- **Generalization of iLoRA to Broader Scenarios**: We are extending the testing of iLoRA on the collaborative filtering task. Due to the time constraints of the rebuttal phase, full results are still pending. We plan to present comprehensive findings during the discussion phase.
---
> **C3: Evaluation on Larger Datasets & More Metrics**
Thanks. We clarify that the Steam dataset tested in our paper is one of the largest datasets currently used in the LLM-based recommendation domain. Following your suggestions, we are running additional experiments on larger datasets. However, due to the time limit of the rebuttal phase, we haven't collected all the results and will update more results during the discussion phase if you are interested.
We agree that additional evaluation metrics are needed. Beyond HitRatio@1, we added HitRatio@K and NDCG@K as metrics, where K=3 and 5. The table below demonstrates that iLoRA outperforms the other baselines across these metrics on the LastFM and MovieLens datasets.
| | LastFM | | | | MovieLens | | | |
| :------ | ---------- | ---------- | ------ | ------ | ---------- | ---------- | ------ | ------ |
| | HitRatio@3 | HitRatio@5 | NDCG@3 | NDCG@5 | HitRatio@3 | HitRatio@5 | NDCG@3 | NDCG@5 |
| GRU4Rec | 0.4370 | 0.4964 | 0.3544 | 0.4110 | 0.4831 | 0.5584 | 0.4075 | 0.4702 |
| Caser | 0.4445 | 0.4918 | 0.3564 | 0.4232 | 0.4892 | 0.5603 | 0.4134 | 0.4724 |
| SASRec | 0.4253 | 0.4792 | 0.3382 | 0.4073 | 0.4256 | 0.5132 | 0.3881 | 0.4239 |
| TALLRec | 0.6814 | 0.7473 | 0.3900 | 0.4650 | 0.4874 | 0.5408 | 0.4290 | 0.4601 |
| LLaRA | 0.7223 | 0.7862 | 0.6016 | 0.6972 | 0.5505 | 0.6189 | 0.4752 | 0.5067 |
| iLoRA | 0.7873 | 0.8350 | 0.6894 | 0.7530 | 0.6464 | 0.7084 | 0.5603 | 0.5905 |
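For reference, both ranking metrics above have simple closed forms when each test case has a single ground-truth item. The sketch below is illustrative only (not the evaluation code used in the paper); the `ranks` list is a made-up example:

```python
import math

def hit_ratio_at_k(rank: int, k: int) -> float:
    """1 if the ground-truth item appears in the top-k, else 0 (rank is 1-indexed)."""
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank: int, k: int) -> float:
    """With a single relevant item the ideal DCG is 1, so NDCG reduces to the discount."""
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0

# hypothetical ranks of the ground-truth item across four test interactions
ranks = [1, 4, 2, 7]
hr3 = sum(hit_ratio_at_k(r, 3) for r in ranks) / len(ranks)
ndcg3 = sum(ndcg_at_k(r, 3) for r in ranks) / len(ranks)
```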
---
> **C4 & 5: No Open-Source Code Link Available**
Thanks. Please find our code in the supplementary material. Following your suggestions, we have uploaded the code, datasets, and checkpoints to an open-source repository accessible via the anonymous link: https://anonymous.4open.science/r/iLoRA-8C57.
[1] Towards a Unified View of Parameter-Efficient Transfer Learning. ICLR 2022.
---
Rebuttal Comment 1.1:
Title: To Authors
Comment: Thank you for your detailed rebuttal, which solved most of my concerns. I am willing to raise the rating to borderline accept.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you very much for your valuable and positive feedback. We appreciate your recognition of our efforts in addressing your concerns, and we are encouraged by your comments that our responses have effectively addressed the issues raised. This motivates us to continue advancing the field with our research.
Following your suggestion, we will incorporate the additional insights discussed in the rebuttal into the revised version of the paper. Thank you once again for your supportive and understanding comments.
Best regards,
Authors | Rebuttal 1:
Rebuttal: # Summary of strengths acknowledged by the reviewers and the responses to address their concerns
**Comment:**
Dear ACs/SACs/PCs,
We would like to summarize the strengths of this work acknowledged by the reviewers, and the responses we have made to address all the reviewers’ concerns.
------
**Strengths** acknowledged by the reviewers:
1. Novelty: timely study (Reviewer **59LV**); addresses previous works failing to capture individual variability (Reviewer **7niz**); takes key concepts in research and combines them in a new way to solve a problem (Reviewer **wwLq**); mitigates negative transfer and enhances the model's adaptability (Reviewer **cnYP**).
2. Well-written (Reviewer **59LV**, **7niz**, **wwLq**); clear and convincing motivation (Reviewer **59LV**).
3. Extensive experiments (Reviewer **59LV**, **7niz**, **wwLq**).
------
There are four main concerns raised by reviewers:
1. More evaluation metrics (Reviewer **cnYP**, **59LV**, **wwLq**).
2. More explanations related to experiments (Reviewer **cnYP**, **7niz**).
3. Inclusion of the impact of training data size (Reviewer **cnYP**, **wwLq**).
4. Inclusion of out-of-domain abilities and specific abilities on cold-start recommendations (Reviewer **wwLq**).
All of these main concerns have been successfully addressed during the rebuttal phase, and we hope that the improvements we made during this stage will be taken into consideration.
------
We sincerely appreciate your valuable time!
Thanks and regards,
Authors
Pdf: /pdf/ca8442009498807b08a5624491202c4c5e1145fa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences | Accept (poster) | Summary: The paper shows that place cells can emerge in networks that autoencode temporally continuous sensory episodes based on spatially smoothed Gaussian random fields. The obtained place fields reproduce the disputed idea of remapping, the established fact that such spatial representations are uncorrelated, and a slow representational drift. The model implements “experience manifolds” in the network’s hidden layer and weakly spatially modulated (WSM) rate maps, which are interesting concepts that deserve more analysis. Also, dimensionality of the environment seems to be non-problematic, although also here a comparison to observations in biological experiments would be desirable.
Strengths: The paper introduces useful concepts such as experience manifolds or weakly spatially modulated (WSM) rate maps that may be useful in the further study of hippocampal function, although at the moment the ideas play a role only in the context of modeling, and the experience manifolds seem to be here merely metaphorical (although a similar analysis tool has been used in other instances of population codes). It is very good that clear predictions have been made, and that the model is well described (suppl. material).
Weaknesses: Main problems with the paper are the lack of strong quantitative evidence and the realizability of the model by biologically realistic neural networks of a size comparable to CA3 (or much smaller, considering that a mammal typically works with environmental information of much higher complexity). On the level of the current model, it could be discussed which model features are essential for which features of the result. For more detail see Questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: Although it is possible to model CA3 as a recurrent autoencoder to show how place cells can emerge, can we really say that CA3 **is** an autoencoder or that it is its function to represent place fields?
Would it be possible to reach the level where a quantitative comparison to experimental data becomes possible or is this difficult due to the ongoing discussion of what characteristics can be extracted from experimental data towards a meaningfully comparison?
Can affirmative statements like “closely resembles results of experiments” (l295), “consistent with experimental results” (l203) be given more evidence?
The numbers given (l232) are said to be “mirroring experiments with rodent CA3 place cells”, but isn't the absence of certain correlations both in the simulations and in the biological experiments rather weak evidence for the proposed model? Can any remainder correlations be compared?
The fit in [47, Fig. 2A] is quite bold and does not represent the underlying mechanism, so that it is not exactly a good standard for comparison (l239). Would it nevertheless be useful to mention any quantitative agreement with even some interpretation of the biological data?
The “experimental evidence that place cells can develop multiple place field” (l252) was never really “strong”. Would it be possible to consider more recent studies, so that a more detailed analysis could show whether the experimental observations are “mirrored” by the simulations?
Can the bias in the literature be changed from classical papers towards more recent modeling approaches so that these are sufficiently discussed? It is not needed to do this here comprehensively, but the paper would gain from some comparison with other approaches as would the theory of spatial representations in mammals.
It remains unclear why the weakly spatially modulated (WSM) rate maps (or any other non-local representation of spatial information) could not provide a similar autoencoding property, and what specifically is necessary for the formation of roundish place fields. This question can be asked of most models that show place field emergence, so here too not much theoretical progress is achieved. It could seem that without the specifically designed noise of sigma=12cm (which is not varied here and not listed in the parameter tables in the appendix) the results may have been less realistic, while the diversity of realistic place fields is neither achieved in the proposed model nor evaluated in comparison to biological data.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The current version is limited in regards to the discussion of experimental findings, the effect of some of the model parameters, and the realization of the model by biologically more realistic neural networks. Other limitations and some open questions are addressed well in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer kdgt for their time and suggestions.
>Can we really say that CA3 is an autoencoder or that it is its function to represent place fields?
The idea that CA3 functions like an AE directly corresponds to the autoassociative model of CA3 [1-3]. The 'autoencoding' property of the network arises from the fact that the input is a partially masked signal of the target output. We are putting forward the hypothesis that CA3 functions as an AE, and suggest that place cells are naturally produced during pattern completion of memory, while CA3’s core function is not to produce place cells per se.
Note also that our network functions as a standard continuous-time RNN; such networks are biologically plausible and have been used in many previous studies [4-9].
>Quantitative comparison to experimental data?
We have listed the challenges for such a comparison in the global response. While a statistical comparison may be possible, the precise stimuli to which place cells respond is unknown, thus complicating the possibility of meaningful interpretations. Secondly, the properties of PFs can be parametrically adjusted by parameters within our framework as we intend our work to be a study of PF emergence. These parameters could be fit to experimental data, but that is a substantial endeavor in its own right and there is not sufficient space to report it in a single paper.
>Can affirmative statements like “closely resembles results of experiments” (l295), “consistent with experimental results” (l203) be given more evidence?
These statements aim to emphasize that qualitative aspects of the emergent place cells resemble the phenomenology of biological place cells. We emphasize this qualitative similarity as it extends previous research (Fig. 2) while generating many of the experimentally observed features of PFs within a single framework.
>l232 are said to be “mirroring experiments with rodent CA3 place cells”, but isn't the absence of certain correlations both in the simulations and in the biological experiments rather weak evidence?
The correlation we report is higher than what is reported in biological experiments because we used a Pearson correlation, whereas the ref [10] uses the averaged dot product. The averaged dot product for different rooms in our experiments is 5e-4, which compares well with [10]. We will add this information to the final version.
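The distinction between the two similarity measures can be made concrete with a minimal pure-Python sketch (illustrative only; the vectors here are toy data, not the paper's population vectors):

```python
import math

def pearson(x, y):
    """Mean-subtracted, variance-normalized similarity in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def avg_dot(x, y):
    """Averaged dot product: not mean-subtracted, so sparse
    low-rate population vectors yield values near zero."""
    return sum(a * b for a, b in zip(x, y)) / len(x)
```

Because `avg_dot` is not mean-subtracted or normalized, mostly silent populations give small values even when the few active cells agree, which is why the two statistics are not directly comparable.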
>The fit in [47, Fig. 2A] is quite bold and ... is not exactly a good standard for comparison (l239).
Similar experiments that have tested a single subject this many times are very sparse. We are currently in conversation with an experimental group to carry out such measurements, but it will take a while to set up the experiment. Additionally, the representational drift which we plotted in Suppl. Fig. 2 is similarly observed in [10] Fig. 3 A.
>The “experimental evidence that place cells can develop multiple place field” (l252) was never really “strong”.
A few recent works have reported this. [11] reported the development of multiple place field centers in dorsal CA1 when an animal is placed in a very large environment. [12] indicates that cells with multiple PFs consistently occur in large environments across CA1, CA3, and the dentate gyrus.
>Comparison with more recent modeling approaches
We thank the reviewer for this suggestion. In the global response, we have included a discussion of some recent work.
>Why couldn’t the WSM rate maps support autoencoding?
>Explanation for roundish PFs
The WSM signals are modeling processed sensory inputs which are not known to have a direct autoencoding property. The idea here is that the hippocampus provides that autoencoding property for the incoming sensory representation. Why do place cells specifically emerge from this autoencoding? As shown in panel c of the attached PDF, place cells emerge as we increase the constraint of the overall firing rate, indicating that they are likely a balance between efficiency and encoding precision. This balanced solution encourages fewer units to encode as much variation as possible within a localized region. When each cell’s encoded pattern is projected onto the experience manifold, the pattern will naturally be roundish and place-like, because of the high-dimensional nature of the projection (l164-l166).
>Theoretically progress achieved?
The framework we proposed is our theoretical progress. Motivated by numerous prominent computational studies that report the emergence of grid cells when a system processes movement-modulated sensory signals [6-8] to achieve various target functions, we suggest that the encoding of spatially modulated sensory information (WSM) with a simultaneous firing rate constraint gives rise to place cells and shapes their phenomenology.
>Why sigma is set fixed to 12cm
We used sigma=12cm as an example for reproducibility. The value of sigma does not impact PF emergence. We have illustrated this in the attached PDF.
>How to achieve a diversity of place cells
The sizes and firing rates of our emergent PFs change in response to variations in the firing rates and sigma values of the WSM signals. More diverse place fields could therefore be realized if we take the input signals to be sparse and allow more variation in the number of WSM channels connected to different hidden units. These parameters can be chosen to match hippocampal anatomy and physiology when such data becomes available.
[1] M. Hasselmo & Wyble (1997). Behavioural Brain Research.
[2] J. Guzowski et al. (2004). Neuron.
[3] S. Leutgeb et al. (2007). Science.
[4] G. Yang et al. (2019). Nature Neuroscience.
[5] V. Mante et al. (2013). Nature.
[6] D. Schaeffer et al. (2023). NeurIPS.
[7] B. Sorscher et al. (2019). NeurIPS.
[8] C. Cueva & X. Wei (2018). ICLR.
[9] M. Benna & S. Fusi (2021). PNAS.
[10] C. Alme et al. (2014). PNAS.
[11] J. Harland et al. (2021). Current Biology.
[12] S. Park et al. (2011). PLoS ONE.
---
Rebuttal Comment 1.1:
Title: Additional comments on "mirroring experiments with rodent CA3 place cells"
Comment: We are writing to provide more information about the comparison between our model and experiment, especially with regard to the correlation of the population within and between rooms during different visits. We propose to replace the original sentences with the following paragraph:
We compared the correlation between rooms from cycle 2 and cycle 3, a scenario similar to the experiments in Alme et al [10]. The mean correlation between different rooms is $0.164 \pm 0.029$, and that of the same rooms is about 0.55 greater, $0.710 \pm 0.097$. The corresponding experimental values reported in [10] also differ by about 0.57: $0.08 \pm 0.005$ and $0.65 \pm 0.02$, respectively. Note that we should not expect precisely the same values of the correlation because the setups of the environments and experiments are different. For example, we have many trial rooms in our {\it in silico} study, while there are only 2 rooms in [10]. Furthermore, our network contains 1000 units while Alme et al. recorded only 342 neurons. Overall, we find that the population vectors of familiar rooms have significantly higher correlations as compared to different rooms, consistent with experiments.
Strengths: The approach is novel, and the idea of training a network to remember temporally continuous sensory episodes and then characterize its neural representations is a useful contribution.
The paper conducts empirical evaluations in different types of environments (e.g., rooms with different shapes) and makes several testable predictions.
Weaknesses: The paper is written in a somewhat unusual style, with the results following right after the introduction without a separate methods section, making it difficult to follow the approach and understand the results.
The interaction between sensory inputs and velocity integration seems to be missing.
Technical Quality: 3
Clarity: 2
Questions for Authors: How does your approach relate to path integration?
Do you expect to see place cells in the absence of visual inputs?
What exactly are the inputs to the place cells? Can those pixel-level visual inputs preprocessed with some sensory processing modules?
Do you observe the remapping and changing shape of the place fields when the environment changes size or the walls move (e.g., O'Keefe & Burgess 1996)?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors should address the limitations more explicitly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer KNH2 for their time and suggestions. We will resolve the reviewer’s comments in turns:
>The paper is written in a somewhat unusual style with the results following right after the introduction without a separate methods section making it difficult to follow the approach and understand the results.
We thank the reviewer for this suggestion. We employ our current style in an effort to incrementally introduce conceptual ideas and provide corresponding experimental verifications immediately afterward. We agree that the clarity of the paper would benefit from dividing Section 2 into two parts, as this aligns better with the NeurIPS-style convention and enhances the paper's structure. We will split section 2 at approximately line 119.
>The interaction between sensory inputs and velocity integration seems to be missing.
>How does your approach relate to path integration?
This question can be interpreted in two ways, and we address both to ensure clarity:
1. "Why is the velocity/path integration feature not discussed in the context of place cells?": Velocity integration is typically hypothesized to be a functionality of grid cells in the MEC, as cited in references [1-3]. The velocity and other vestibular signals necessary for the grid system's emergence [2, 3] do not directly project into the hippocampus, where place cells are primarily located. This is why the discussion does not extend to place cells in this context.
2. "Since WSM signals, grid cells, head direction cells, and border cells are reported in the same region (i.e., MEC), how would they interact with each other?": This issue requires further investigation as responses in the MEC are typically multimodal. We emphasize the "spatially modulated" nature of our system's inputs, contrasting with the "movement modulated (MM)" signals involved in the grid systems [2, 3]. We design the input to our system under the hypothesis that while both place cells and grid cells are suggested to encode locations, they differ in their phenomenologies due to the nature of their inputs — one being movement-modulated and the other spatially modulated. Thus, we believe that keeping MM signals separate from the SM signals will better reflect the hypothesis.
Regarding the relationship to path integration, which is typically hypothesized to be the function of grid cells rather than place cells, we suggest the place cells do not support this functionality.
>Do you expect to see place cells in the absence of visual inputs?
>What exactly are the inputs to the place cells? Can those pixel-level visual inputs preprocessed with some sensory processing modules?
We sample a set of WSM signals generated by Gaussian Random Fields (GRFs) to train our model (refer to Fig. 1 caption). Essentially, each room corresponds to a set of unique WSM rate maps. At each location, we sample this set of WSM rate maps to generate a high-dimensional embedding to train the model.
The WSM signals are used to represent a general format of what sensory signals might look like after being processed by their corresponding cortices at various locations within the trial room. We posit that these can be represented by WSMs (GRFs), based on the assumption that the magnitude of any sensory response decreases as the distance to the stimulus increases, regardless of the specific modality.
We have tested the validity of this assumption in our parallel work. Specifically, we tested responses to visual stimuli at different spatial locations in VR-simulated rooms. We captured images at different locations in these rooms and passed them through several models of visual systems. The features generated by these models consistently produce WSM fields. Therefore, the WSM signals can be regarded, if desired, as processed visual input. But more generally they are intended to represent the full range of inputs from different sensory modalities, pre-processed by diverse cortical modules. So our model will work in the absence of visual inputs, reflecting the fact that place fields are known to have multi-modal inputs and responses.
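To make the GRF construction concrete, here is a minimal sketch of one way to generate a single smooth WSM rate map by Gaussian-smoothing white noise in Fourier space. It is illustrative only: the grid size, smoothing width, and the shift/normalization step are our assumptions, not the paper's exact recipe.

```python
import numpy as np

def grf_rate_map(size=64, sigma_px=8, seed=0):
    """One weakly spatially modulated (WSM) rate map over a size x size room."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    f = np.fft.fftfreq(size)
    kx, ky = np.meshgrid(f, f)
    # multiplying by a Gaussian in Fourier space == convolving with a Gaussian kernel
    kernel_ft = np.exp(-2.0 * np.pi**2 * sigma_px**2 * (kx**2 + ky**2))
    smooth = np.real(np.fft.ifft2(np.fft.fft2(noise) * kernel_ft))
    smooth -= smooth.min()          # shift to non-negative "firing rates"
    return smooth / smooth.max()    # normalize peak rate to 1

rate_map = grf_rate_map()
```

Sampling one such map per sensory channel and reading the set out along a simulated trajectory would yield temporally continuous training episodes of the kind described above.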
>Do you observe the remapping and changing shape of the place fields when the environment changes size, or the walls move (e.g., O'Keefe & Burges 1996)?
Yes, we observed such remapping. As mentioned above, we defined each room as a set of WSM rate maps. Therefore moving the wall could be considered as a rotation or flip of a subset of WSM signals while keeping the rest unchanged. We have verified that this will elicit remapping in the emerged place units. We will add a figure panel to the final version of the paper.
**References**
[1] Yoram Burak, Ila R. Fiete. Accurate path integration in continuous attractor network models of grid cells. 2009. PLoS Comp. Biology.
[2] Chris Cueva & Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. ICLR 2018.
[3] Ben Sorscher, Gabriel Mel, Surya Ganguli, Samuel Ocko. A unified theory for the origin of grid cells through the lens of pattern formation. NeurIPS 2019.
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses and clarifications and I adjusted my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your time spent reviewing our work and the increase of your score from 5 to 6.
Thank you,
Authors. | Summary: This paper presents a novel approach to understanding the emergence of place fields in the hippocampus. The authors propose that place cells can emerge from networks trained to remember temporally continuous sensory episodes, without explicit spatial input. They model the hippocampal CA3 region as a recurrent autoencoder (RAE) that reconstructs complete sensory experiences from partial, noisy observations. The model reproduces key aspects of hippocampal phenomenology, including remapping, orthogonality of spatial representations, and slow representational drift. The paper offers several testable predictions and provides a fresh perspective on the origin of place fields, suggesting that "time makes space" in neural representations.
Strengths: - Novel approach: The paper presents an intriguing hypothesis about the emergence of place fields from temporally continuous sensory experiences.
- Comprehensive modelling: The model reproduces multiple key aspects of hippocampal phenomenology.
- Testable predictions: The paper offers concrete predictions that could guide future experimental work.
- Thorough experimentation: The authors conduct a wide range of simulations to test their hypotheses.
- Biological plausibility: The model is grounded in known hippocampal anatomy and physiology.
- Effective explanations: The paper employs (impressively) clear (to me) expressions and explanations that enhance its readability and impact.
Weaknesses: - Limited quantitative comparison with actual neural data from rodent studies.
- Reliance on simplifying assumptions about sensory input structure (smooth Gaussian random fields).
- Insufficient exploration of network parameter dependencies.
- Absence of publicly available code for verification and extension of the work. I read the authors' justification, but this remains a weakness in my point of view.
- Lack of formal theoretical analysis to explain the emergence of place fields, i.e. a rigorous mathematical framework that provides a deep analytical understanding of why place fields emerge in this model.
- Inadequate positioning within existing frameworks of sequential data modelling, for example in relation to simple hidden Markov models (HMMs).
- Although minor, the term "weakly spatially modulated" signals, which is important for understanding this work, lacks a clear definition.
- Also minor, the section structure could be improved for clarity, particularly Section 2 which combines methods and results. How about calling it Experiments?
Technical Quality: 3
Clarity: 3
Questions for Authors: - The most important question for me is, how sensitive are your results to changes in network architecture and hyperparameters?
- How does your model specifically relate to and extend beyond HMM in the context of sequential data modelling?
- How about my 2 minor points in the weaknesses?
- Have you considered conducting a more rigorous statistical comparison with experimental rodent data? If so, what challenges did you face?
- Could you elaborate on how your model might be extended to account for other hippocampal phenomena, such as theta phase precession or grid cells?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed some limitations of their work, particularly regarding the simplifying assumptions of their model. However, they could improve by:
- Providing a more detailed discussion of the limitations of using smooth Gaussian random fields to model sensory inputs.
- Addressing potential limitations in the generalizability of their findings to real neural systems.
- Discussing any computational limitations or scalability issues of their approach.
- Considering potential negative societal impacts, if any, of their work (e.g., implications for AI systems that might use similar principles).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 6VUu for their time and suggestions.
>Limited quantitative comparison with actual neural data from rodent studies.
We thank the reviewer for their suggestions. We addressed this in the global response.
>Reliance on simplifying assumptions about sensory input structure.
>WSM signals lacks a clear definition.
Here, WSMs simulate processed sensory information entering the hippocampus. They should be location-dependent, and the response magnitude should decrease as distance to the stimuli increases. Thus, the most important property of using WSMs is that they should be smooth. This smoothness can be guaranteed if the Gaussian kernel used to generate the WSM signals has a standard deviation ($\sigma$) greater than the body length ($l_a$) of the modeled animal.
We verified the validity of using GRFs as models for WSMs in a parallel study but have not included it here to maintain the focus of this paper. Specifically, we tested responses to visual stimuli at different spatial locations in VR-simulated rooms. We capture images at different locations in these rooms and pass them through several network models of visual systems. The features generated by these models consistently produce WSM fields.
>Insufficient exploration of network parameter dependencies.
We resolved this point in the global response and provided a summary figure in the corresponding pdf.
>Publicly available code for verification and extension of the work.
We completely agree that publicly available code greatly accelerates neuroscience and AI research. In fact, the RAE used in this project builds on a general-purpose RNN for neurostimulation we have previously published on PyPI and GitHub. All parameters in this study directly match those in our package. We will point readers to this code. The code for simulating rodent movement and generating WSM cells is also part of an extensive rodent simulation project we plan to release on PyPI after we wrap up our current work.
Again we are happy to release our code, including the parts for simulating rodent movement and WSM cells. However, anonymizing the entire RNN package without breaking its integrity is challenging, and we're concerned about potentially violating the double-blind requirement. We'll add a link to our GitHub repository once the reviewing process is over.
>Lack of formal theoretical analysis to explain the emergence of place fields.
We agree that this work would benefit from a formal theoretical analysis. This work is primarily inspired by previous studies where grid cells emerge in networks that reconstruct locations from sequences of movement-modulated signals. Here, in contrast to the movement-modulated signals required for grid cell emergence, we suggest that encoding spatially modulated signals gives rise to place-like patterns. In the present work we aimed both to explain this emergence and to provide a comprehensive replication of place cell phenomenology in a computational RNN framework. There is not enough space to include the formal mathematical framework, which we are developing separately.
>Inadequate positioning within existing frameworks
We thank the reviewer for pointing us to this. We have included a more detailed comparison with these models in the global response.
>Section structure could be improved for clarity
We thank the reviewer for this suggestion. Our current style, which derives from theoretical physics, aims to incrementally introduce conceptual ideas and provide corresponding experimental verification immediately afterward. We agree that the clarity of the paper could benefit from dividing Section 2 into two parts, as this aligns better with the NeurIPS-style convention and enhances the paper's structure. We will split Section 2 at approximately line 119.
>Robustness to network architecture and hyperparameters?
We have verified that PFs consistently and robustly emerge across a wide range of parameters. We have also provided the details in the global response.
>Statistical comparison with experimental data
We attempted to conduct a more rigorous statistical comparison. The primary challenge is that the precise hippocampal stimuli of place cells are unknown, making it hard to conduct such a comparison in the absence of aligned stimuli. Additionally, in our model, place cell properties, such as their width and firing rate, can be parametrically adjusted. We could use this to fit the statistics of experimental datasets. But that requires examination of data for many animals and different experimental settings. There is insufficient space here for such a substantive analysis, and we therefore postpone it to a separate effort. We have included a more detailed explanation in the global response.
>Relation to theta precession and grid cells
Given that both WSM signals and grid cells are found in the MEC, which projects into the hippocampus, it's plausible that grid cells might act as a specific type of WSM signal. The WSM signals we focus on primarily come from sensory observations, which vary constantly and might not always be available. Grid cells, on the other hand, are hypothesized to be generated from integrating movement modulated signals and may provide robust input for hippocampal place cells. This idea is also supported by [1] which shows that path integrating velocity to produce place cells generates grid cells in intermediate layers. Regarding theta precession, we consider it primarily as a separate process that consolidates hippocampal memory and thus is not directly related to our model. That said, theta rhythms may cause events that happen together to compress into a single memory, similar to our training protocol which composes recent events into single memories.
[1] Sorscher et al. A unified theory for the origin of grid cells through the lens of pattern formation. NeurIPS 2019.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their informative responses. There was a point that the authors listed but I think they forgot to address, namely: "Reliance on simplifying assumptions about sensory input structure." However, most of my other points, including my most major concern, were indeed addressed, so I adjusted my score accordingly. Good luck!
---
Rebuttal 2:
Title: thank you
Comment: Thank you for taking the time to read our paper and our responses. We are also grateful for the improved score.
Regarding the point "Reliance on simplifying assumptions about sensory input structure", we intended to answer it together with our definition of WSM signals. Specifically, we verified the validity of using GRFs as models for WSMs in a parallel study but have not included it here to maintain the focus of this paper. We tested responses to visual stimuli at different spatial locations in VR-simulated rooms: we captured images at different locations in these rooms and passed them through several network models of visual systems. The features generated by these models consistently produce WSM fields.
Thanks,
Authors | Summary: This study demonstrates that place cells can develop in networks trained to remember temporally continuous sensory episodes. The authors model CA3 as a recurrent autoencoder that recalls and reconstructs sensory experiences from noisy and partially occluded observations made by agents traversing simulated arenas. The autoencoder training, which included a constraint on total activity, led to the emergence of place cells with spatially localized firing fields. These place cells exhibited key hippocampal characteristics: remapping, orthogonality of spatial representations, robust place field formation in variously shaped rooms, and slow representational drift. The authors present a unique framework for the optimal encoding of the experience space. The study suggests that continuous spatial traversal results in temporally continuous sensory experiences, making several testable predictions about place field behavior under different conditions.
Strengths: This work presents a new perspective on place cell formation in CA3 during navigation. Their model is clear and well described. The analysis of their network, combined with the predictions they make regarding hippocampal remapping, makes this a relevant work for the field.
Weaknesses: More in-depth visualization in figure 3 would be nice, I like this framing of the problem in the text. Maybe show each example (suboptimal encoding, optimal encoding, remapping in a new environment, returning to the original environment) as a figure panel?
It would be helpful for the authors to examine which of their assumptions and initializations are critical for the PF emergence they observe. What do your results look like when you use different history buffer, different levels of noise, a form of input different than the WSM, etc.?
Technical Quality: 4
Clarity: 3
Questions for Authors: I'm surprised that the units learn (via the recurrent weights) return to their initial positions - especially after changes (albeit small) in the input weights. My naive assumption would be that there are many redundant solutions which could accurately autoencode the experience vectors. Why does the network return to the same solution it initially had?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: I think this work would greatly benefit from greater comparison to other models of place field/cognitive map formation (in either an autoencoder or predictive learning framework), Levenstein et al. 2024 as a recent example.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer hM85 for their time and suggestions, and are happy that they liked the paper. We will address the reviewer’s comments one-by-one:
>A more in-depth visualization in Figure 3 would be nice. I like this framing of the problem in the text. Maybe show each example (suboptimal encoding, optimal encoding, remapping in a new environment, returning to the original environment) as a figure panel?
We thank the reviewer for their endorsement of Figure 3. We have generated a more elaborate version of Figure 3; however, the attached PDF does not have enough space for an additional figure, so we will add this new plot and the corresponding discussion to the supplemental materials.
Essentially, the examples for suboptimal and optimal coding will be similar to Figure 3b but broken into two separate plots. The case of suboptimal coding would be when neuron N1 encodes experience manifold E2, while the optimal encoding would be when neuron N1 encodes E1.
>It would be helpful for the authors to examine which of their assumptions and initializations are critical for the PF emergence they observe. What do your results look like when you use different history buffers, different levels of noise, a form of input different than the WSM, etc.?
We thank the reviewer for this comment and agree that this paper will benefit from indicating how different parameters affect the results. We addressed this suggestion in the global response, where we provided an additional figure with a discussion of how these parameters impact the emergent PFs. In short, the history buffer does not affect PF emergence once it passes a threshold value (approx. 200 seconds); the noise also does not impact PF emergence unless it is so high that the pattern becomes undecipherable.
As for different forms of WSM signals, we have varied max_fr and sigma values when we generate WSM signals, and none of them disrupt PF emergence. Additionally, we verified the feasibility of using WSM signals to represent sensory experience, especially for visual signals. In one of our parallel studies, we tested responses to visual stimuli at different spatial locations in VR-simulated rooms. We captured images at different locations in these rooms and passed them through several network models of visual systems. The features generated by these models consistently produce WSM fields. These WSM fields, when used to train our RAE, consistently produce place cells.
>I'm surprised that the units learn (via the recurrent weights) return to their initial positions - especially after changes (albeit small) in the input weights. My naive assumption would be that there are many redundant solutions which could accurately autoencode the experience vectors. Why does the network return to the same solution it initially had?
Yes, we also agree on the presence of many redundant solutions. We hypothesize that neurons that capture more variance within a localized region could encode this place's information more efficiently. As proposed in section 2.3, only a few neurons may be efficient in capturing this variance. Therefore, within a region, as long as these few selective neurons remain efficient in capturing the variance of the experience manifold (i.e., their input projections remain stable), there could be many ways to inhibit the non-critical neurons.
>I think this work would greatly benefit from greater comparison to other models of place field/cognitive map formation (in either an autoencoder or predictive learning framework), Levenstein et al. 2024 as a recent example.
We thank the reviewer for pointing us to this relevant research. We have included a wider comparison to other models in the global rebuttal and will include this in the main text if accepted. Specifically regarding comparing [1] with our model, given that the observation of the next step and observation of the current time step could be very similar, our model best corresponds to the next-step architecture in their setup. The most significant difference is that the agent’s observation in Levenstein et al. is conditioned on the direction of the agent. However, this detail could be reconciled by considering non-observing directions in Levenstein et al. as experiences being occluded in our setup. We were also excited to see somewhat similar place-like units emerge in this relevant research.
**Reference**
[1] Levenstein et al. Sequential predictive learning is a unifying theory for hippocampal representation and replay. 2024.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for your reply, we will keep our score.
---
Rebuttal 2:
Comment: Thank you so much for taking the time to read our paper, and for the positive evaluation. | Rebuttal 1:
Rebuttal: We thank all reviewers for your time and valuable comments. We have addressed the common suggestions below and will resolve additional comments in each individual reply.
>How might the emergence of place fields (PFs) change in response to different parameter settings?
We thank the reviewers for their suggestion to include this information. We have now included a detailed discussion of the robustness of our results to parameter variations in the attached PDF, which we will add to the supplement.
We evaluate the firing maps of hidden layer units using three metrics: (1) the percentage of active units, defined as units with a maximum firing rate $>$ 0.1 Hz; (2) the percentage of place units, identified by a Spatial Information Content (SIC) $>$ 5; (3) the average SIC across all active units.
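The metrics above could be computed as follows. This is an illustrative sketch only: we assume the standard Skaggs et al. spatial-information formula $\mathrm{SIC} = \sum_i p_i \, (r_i/\bar{r}) \log_2(r_i/\bar{r})$ with uniform occupancy, and the function names are hypothetical:

```python
import numpy as np

def spatial_information(rate_map, occupancy=None):
    """Skaggs spatial information content (bits/spike):
    SIC = sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean),
    with zero-rate bins contributing nothing."""
    r = np.asarray(rate_map, float).ravel()
    if occupancy is None:
        p = np.full_like(r, 1.0 / r.size)          # uniform occupancy
    else:
        p = np.asarray(occupancy, float).ravel()
        p = p / p.sum()
    r_mean = (p * r).sum()
    if r_mean == 0:
        return 0.0
    ratio = r / r_mean
    nz = ratio > 0
    return float((p[nz] * ratio[nz] * np.log2(ratio[nz])).sum())

def classify_units(rate_maps, fr_thresh=0.1, sic_thresh=5.0):
    """Return (#active units, #place units, mean SIC of active units),
    using the thresholds from the rebuttal: active if max rate > 0.1 Hz,
    place unit if SIC > 5."""
    active = [m for m in rate_maps if np.max(m) > fr_thresh]
    sics = [spatial_information(m) for m in active]
    n_place = sum(s > sic_thresh for s in sics)
    return len(active), n_place, (float(np.mean(sics)) if sics else 0.0)
```

A sharply localized firing field yields a high SIC, while a spatially uniform map yields SIC near zero, matching the intended use as a place-field criterion.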
We’d like to highlight a few interpretations:
1. Increasing the duration of each episodic memory segment slightly reduces the number of place cells, as longer episodes likely span multiple locations, decreasing spatial specificity. Despite this, the majority of active cells continue to exhibit place-like characteristics (see Panel a).
2. The number of place cells decreases as trial duration increases. This is because the optimizer forces the network to encode the WSM more efficiently after the MSE loss stops decreasing; the resulting encoding is overfitted to a single room and requires individual cells to fire at unrealistic rates that are unlikely to occur in biological systems (see Panel b).
3. As $\lambda_{fr}$ increases, all active cells become place cells.
4. The numbers of place cells and active cells increase with recall length, stabilizing once the recall duration exceeds 200 seconds (see Panel d).
5. Neither the maximum firing rate of the WSM signals nor their sigma value affects the emergence of PFs (see Panels e & f).
6. Place fields only emerge when the number of WSM signals exceeds 100, aligning with our hypothesis that the emergence of place fields requires a larger number of WSM signals (l164-l166).

We have also observed that PF emergence prefers different numbers of hidden units under different numbers of WSM signals, likely due to different optimal encoding strategies at different ratios of inputs to encoding units; this could be explored further in future studies. Overall, the emergence of place fields is robust across a wide range of parameters.
>Quantitative comparison with actual neural data from rodent studies.
We thank the reviewers for suggesting comparisons with neural data from rodents. We didn’t include such comparisons in this work for the following reasons:
1. While the firing of place cells is highly correlated to the animal’s location, the precise stimuli leading to place cell formation remain unclear. Even identical trial rooms under different global environments may produce orthogonal population representations. It is hard to quantitatively compare two neuronal responses without knowing if they are responding to the same stimuli.
2. At the population level, developing methods to compare two populations of neuronal responses remains an active field of research. Most existing lines of work also require the stimuli to be aligned before a comparison is possible, information that we do not have for extant animal experiments.
3. Most importantly, we designed our experiments parametrically so that most qualitative PF properties and statistics can be varied with parameter adjustments (see above). For instance, the width of the PFs could be adjusted using the sigma of the Gaussian smoothing kernel of WSM signals as well as $\lambda_{fr}$; the firing rate (FR) of PF could be adjusted with the FR of the WSM signals or the ratio of $\lambda_{mse}$ and $\lambda_{fr}$, etc.
It could be meaningful to fit the statistics of our emergent place fields to different experimental data sets to see how the parameters of our model need to vary to match different animals and experimental settings. However, that requires a comprehensive examination of many datasets, for which there is insufficient space in this submission. We will therefore return to this in a separate effort.
>A more elaborated comparison with recent models of PFs
We thank the reviewers for suggesting this and will include comparison with recent literature in our paper. The two studies most relevant to our work are [1] and [2]. Specifically, [1] trains a model with visual observations conditioned on the agent's direction. Conditioning on agent direction can be realized in our setup by considering non-observed directions as occluded experiences in our setup. Their RNN also develops localized representations when trained to predict observations for the next step or multiple steps ahead. Additionally, our view that place cells may simply be emergent patterns of memory during navigation aligns with [2], which suggests spatial awareness results from processing sequences of sensory inputs. Although they use a different learning framework based on a hidden Markov model, they similarly observed the emergence of place-like patterns.
On the other hand, our hypothesis extends these recent works, particularly in explaining the key phenomena of remapping and the reversion of place fields after remapping. In particular, unlike [1, 2], our conceptual framework proposes that the RNN acts as an encoder for experience manifolds, elucidating how remapping is a learned process and why such learning is reversible.
Overall, our hypothesis and observations are consistent with recent efforts to explain PFs, while offering complementary perspectives on PF emergence and its detailed phenomenology.
[1] Levenstein et al. Sequential predictive learning is a unifying theory for hippocampal representation and replay. 2024.
[2] Raju et al. Space is a latent sequence: Structured sequence learning as a unified theory of representation in the hippocampus. 2024.
Pdf: /pdf/74d7f77129bedfeaef9d23e2f2db9892f20f5214.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Guided Trajectory Generation with Diffusion Models for Offline Model-based Optimization | Accept (poster) | Summary: The paper proposes a novel conditional generative modeling approach using diffusion models for offline model-based optimization (MBO). The method constructs synthetic trajectories toward high-scoring regions, trains a conditional diffusion model, and samples multiple trajectories to explore and select high-fidelity designs.
Strengths: 1. Introduces a new approach to generating trajectories that consistently move towards high-scoring regions.
2. Leverages the powerful capabilities of diffusion models to handle complex and high-dimensional data.
3. The method utilizes classifier-free guidance and context conditioning during the sampling process, enhancing the exploration of high-scoring regions.
Weaknesses: 1. The authors have adopted the Diffusion model for offline model-based optimization, which is a current trending approach. It is recommended that the authors emphasize their unique contributions and clarify the distinctions between their method and previous approaches.
2. The model filters candidates through a trained proxy model. The authors should elaborate on how data handling is managed during experiments to ensure that there is no label leakage and discuss whether the comparisons with other models are conducted fairly.
3. The authors should provide a more detailed description and discussion of the experimental results.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see weaknesses
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and further suggestions that could enhance our manuscript.
> (Weakness 1) The authors have adopted the Diffusion model for offline model-based optimization, which is a current trending approach. It is recommended that the authors emphasize their unique contributions and clarify the distinctions between their method and previous approaches.
We acknowledge that several concurrent works adopt diffusion models for offline model-based optimization. DDOM [1] is the pioneering work utilizing diffusion models for offline MBO. Following DDOM, DEMO [2] trains a diffusion model to match a pseudo-target distribution constructed by gradient ascent and uses the model to edit designs in the offline dataset. Diff-Opt [3] considers a constrained optimization setting and introduces a two-stage framework that begins with a guided diffusion process for warm-up, followed by a Langevin dynamics stage for further correction. Diff-BBO [4] measures the uncertainty of generated designs to select the optimal target value for conditioning the diffusion model.
Our method uniquely distinguishes itself from prior and concurrent works by using diffusion models to generate trajectories toward high-scoring regions, learning to improve solutions from the dataset. While we mention this part in the related work section, we will modify the manuscript to emphasize our unique contributions in the introduction.
> (Weakness 2) The model filters candidates through a trained proxy model. The authors should elaborate on how data handling is managed during experiments to ensure that there is no label leakage and discuss whether the comparisons with other models are conducted fairly.
In the filtering stage, we train our proxy model using only the offline dataset, which ensures a fair comparison and avoids label leakage.
> (Weakness 3) The authors should provide a more detailed description and discussion of the experimental results.
We apologize for the missing detailed description and discussion of the experimental results. We will provide a more thorough analysis of our main experiment in the manuscript, as written below.
Our experimental results generally surpass forward approaches, which tend to fall into OOD designs, especially in high-dimensional settings. We also observe that our method outperforms inverse approaches, including DDOM, which utilizes a diffusion model. This demonstrates that generating trajectories towards high-scoring regions can be more effective than generating a single design, as we can distill knowledge of the target function's landscape into the generator. Our method also achieves higher performance than BONET, which likewise generates trajectories, indicating that our novel trajectory construction strategy effectively guides the diffusion model to explore diverse paths toward high-scoring regions.
[1] Krishnamoorthy, Siddarth, Satvik Mehul Mashkaria, and Aditya Grover. "Diffusion models for black-box optimization." International Conference on Machine Learning. PMLR, 2023.
[2] Yuan, Ye, et al. "Design Editing for Offline Model-based Optimization." arXiv preprint arXiv:2405.13964 (2024).
[3] Kong, Lingkai, et al. "Diffusion models as constrained samplers for optimization with unknown constraints." arXiv preprint arXiv:2402.18012 (2024).
[4] Wu, Dongxia, et al. "Diff-BBO: Diffusion-Based Inverse Modeling for Black-Box Optimization." arXiv preprint arXiv:2407.00610 (2024). | Summary: The paper introduces Guided Trajectory Generation (GTG) for offline model-based optimization (MBO). GTG creates synthetic trajectories from offline data, using locality bias to ensure consistent improvement. A conditional diffusion model generates these trajectories based on their scores. The method then uses guided sampling and a proxy function to identify the best designs. GTG shows superior performance on the Design-Bench benchmark compared to other methods.
Strengths: - The authors effectively motivate their approach with a toy 2D Branin example, highlighting the differences from previous synthetic trajectory-based methods.
- Extensive ablation studies demonstrate the robustness and effectiveness of the proposed method.
- The paper is well-written and easy to follow, with a straightforward and effectively communicated idea.
Weaknesses: - TFBind8 and TFBind10 are very similar. It would be better to see experiments on more varied discrete benchmarks like NAS. Also, why do the results for TFBind10 have such high variance?
- It’s not clear how this approach deals with out-of-distribution (OOD) problems. Diffusion models usually generate in-distribution data, so where does the robustness against OOD designs come from?
- The proxy function uses the standard regression approach, which can be quite fragile when estimating OOD designs. Have you investigated using COMS or ROMA models as a proxy?
- There isn't a principled mathematical explanation for why this approach works.
- The paper doesn’t include 50th percentile results, which are a standard part of evaluation in this field.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Please check the weaknesses
- For the discrete tasks, Is a latent space used or is everything kept discrete during diffusion and proxy training?
- How many gradient steps are used for gradient ascent methods like COMs and RoMA? Is it 64? And is the trajectory length the same for BONET and PGS?
- Were the hyperparameters picked using the proxy or the oracle?
- Are designs picked at the end of the generation process, or can they be selected from the middle of the trajectory? Is the filtering applied to results generated by BONET or PGS? How does this affect their outcomes?
- How many designs are initially considered, and how many are selected at the end (128 out of how many)?
- Why is the perturbation epsilon different for DKitty?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors address the limitations and potential negative impacts in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the critical reviews, insightful feedback, and further suggestions to enhance our manuscript.
> (Weakness 1) More varied discrete benchmarks and high variance in TFBind10 results
We excluded NAS as it takes too long to evaluate. Instead, to verify the effectiveness of our method on various discrete tasks, we conduct additional experiments on UTR. We use the scores reported in PGS for the other baselines. As shown in Table 19 attached in the PDF, we achieve the second-best results on the UTR task.
Regarding the high variance on TFBind10, we observe that it is due to variability across seeds. When we conduct experiments with more seeds (16 seeds), we find that our method still shows high performance with lower variance.
**Performance of GTG on TFBind10 with more random seeds.**
|| TFBind10|
| -|:-|
| GTG (8 seeds) | 0.698 ± 0.127 |
| GTG (16 seeds) | 0.699 ± 0.091 |
> (Weakness 2) It’s not clear how this approach deals with out-of-distribution (OOD) problems.
Thank you for your interest in how our method addresses out-of-distribution (OOD) problems. We introduce classifier-free guidance to guide the diffusion model towards exploring high-scoring regions. To mitigate the OOD issue, we introduce $\alpha$, which controls the exploration level of the generated trajectories. We find that setting $\alpha=0.8$ generally exhibits good performance, while a high $\alpha$ results in sub-optimal OOD designs. Please refer to Section 5 and Appendix D.2 for more discussion on the impact of $\alpha$.
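As background, classifier-free guidance combines a conditional and an unconditional noise prediction at each denoising step; the guidance weight trades off fidelity to the conditioning signal (here, high trajectory scores) against staying in-distribution. The sketch below shows only this generic mechanism, with the weight `w` as a stand-in; the precise role of the paper's $\alpha$ is not reproduced here:

```python
import numpy as np

def cfg_eps(eps_cond, eps_uncond, w):
    """Classifier-free guidance noise combination:
    eps_hat = eps_uncond + w * (eps_cond - eps_uncond).
    w = 0 recovers the unconditional model; larger w pushes samples
    harder toward the condition, risking out-of-distribution outputs."""
    return eps_uncond + w * (eps_cond - eps_uncond)
```

Keeping the weight moderate (analogous to the paper's finding that $\alpha=0.8$ works well) balances exploration of high-scoring regions against the OOD risk of over-guidance.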
> (Weakness 3) Using COMS or ROMA models as a proxy
As you mentioned, the proxy function can be fragile when estimating OOD designs. To mitigate this issue, we use ensembles of MLP as a proxy with rank-based reweighting during training to focus on high-scoring regions as written in Appendix B.2. Nevertheless, using COMs and ROMA is a promising approach to enhance the robustness against OOD designs.
To this end, we replace the proxy with COMs and RoMA and evaluate the performance of our method on the TFBind8 and DKitty tasks. As shown in Table 19 attached in the PDF, we find no significant difference in performance based on the choice of proxy function.
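For illustration, the rank-based reweighting mentioned above could look like the following sketch. The weight form $w_i \propto 1/(kN + \mathrm{rank}_i)$ is a common choice we assume here for concreteness; the paper's exact scheme is in its Appendix B.2 and may differ:

```python
import numpy as np

def rank_weights(y, k=0.01):
    """Hypothetical rank-based training weights: higher-scoring samples
    get larger weight, w_i proportional to 1 / (k*N + rank_i), where
    rank_i = 0 for the best sample.  Such weights can be passed as
    per-sample loss weights when training a proxy (or proxy ensemble)
    so it focuses on high-scoring regions."""
    y = np.asarray(y, float)
    order = np.argsort(-y)                 # indices, descending by score
    ranks = np.empty(len(y), dtype=int)
    ranks[order] = np.arange(len(y))       # rank 0 = best
    w = 1.0 / (k * len(y) + ranks)
    return w / w.sum()
```

Smaller `k` concentrates weight on the very best samples; larger `k` flattens the distribution toward uniform weighting.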
> (Weakness 4) Absence of principled mathematical explanation
While there is no principled mathematical explanation, our method shows promising results across various real-world tasks. We also conduct a thorough analysis of our method using Toy 2D experiment visualizations in Figure 2. Additionally, we perform experiments on practical settings such as sparse and noisy datasets, demonstrating the versatility of our method.
> (Weakness 5) Absence of median performance
We apologize for omitting the median performance results; we will update the manuscript with them. As shown in Table 20 attached in the PDF, we also achieve the highest mean rank (4.0) in terms of median performance.
> (Question 1) For the discrete tasks, Is a latent space used, or is everything kept discrete during diffusion and proxy training?
As mentioned in Appendix A.2, we convert discrete input into a continuous vector and adopt a continuous diffusion and proxy model.
> (Question 2) How many gradient steps are used for gradient ascent methods like COMs and RoMA? Is it 64? And is the trajectory length the same for BONET and PGS?
We strictly follow the original procedures of the baselines written in their papers and publicly available code. For BONET, while the original paper generates 4 trajectories with a prediction length of 64, we generate 4 trajectories with a prediction length of 32 to match the evaluation budget of 128.
> (Question 3) Are designs picked at the end of the generation process, or can they be selected from the middle of the trajectory?
We aggregate all designs generated by trajectories and select candidates for evaluation with a filtering procedure.
> (Question 4) Is the filtering applied to results generated by BONET or PGS? How does this affect their outcomes?
Neither method applies a filtering strategy after generation. While PGS generates synthetic trajectories, it ultimately selects final candidates using a policy trained on those trajectories. Therefore, we examine the effect of filtering only for BONET. As shown in the table, filtering can improve the performance of BONET; however, our method still outperforms it.
**Performance of BONET with filtering strategy.**
| | TFBind8| Dkitty|
|:- |:- |:-|
| BONET| 0.831 ± 0.109 | 0.950 ± 0.014 |
| BONET + Filtering | 0.913 ± 0.122 | 0.954 ± 0.011 |
| GTG (Ours) | 0.976 ± 0.020 | 0.971 ± 0.009 |
> (Question 5) How many designs are initially considered, and how many are selected at the end (128 out of how many)?
As mentioned in Section 4.4, we sample $N=128$ trajectories conditioned on $C=32$ context data points, so $4,096$ designs are initially considered. Then $Q=128$ designs are finally selected. As the sampling procedure can be conducted in parallel, its time complexity is not too problematic; for the time complexity of the sampling procedure, please refer to Appendix D.4.
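The selection step (keep the top $Q$ of the aggregated candidates according to a proxy trained only on the offline data) can be sketched as follows; the function name and the callable `proxy` interface are illustrative assumptions:

```python
import numpy as np

def filter_designs(designs, proxy, q=128):
    """Keep the top-q candidate designs by predicted score.
    `designs` is an (M, d) array of aggregated candidates (e.g. M = 4096)
    and `proxy` maps a batch of designs to an (M,) array of scores."""
    scores = np.asarray(proxy(designs))
    top = np.argsort(-scores)[:q]          # indices of the q highest scores
    return designs[top]
```

Since the proxy is trained only on the offline dataset, this filtering introduces no label leakage: ground-truth evaluations are spent only on the final $Q$ selected designs.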
> (Question 6) Why is the perturbation epsilon different for DKitty?
We introduce the perturbation epsilon ($\epsilon$) to prevent synthetic trajectories from converging onto a single maximum in the offline dataset. We use a small $\epsilon$ for DKitty because its offline dataset contains relatively many high-scoring samples compared to other datasets.
We also observe that even with the same $\epsilon=0.05$ for the DKitty task, there is no significant difference in performance, and our method still outperforms the other baselines, as shown in the table. For an analysis of $K$ and $\epsilon$ for constructing trajectories, please refer to Appendix D.1.
**Performance of GTG on DKitty task with different perturbation epsilon ($\epsilon$).**
|| D'Kitty|
| -- |:--- |
| ICT (Best among baselines) | 0.960 ± 0.014 |
| GTG ($\epsilon=0.01$)| 0.971 ± 0.009 |
| GTG ($\epsilon=0.05$)| 0.965 ± 0.008 |
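To make the role of $\epsilon$ concrete, here is a hypothetical sketch of locality-biased trajectory construction: repeatedly hop to the nearest offline design with a strictly higher score, adding Gaussian noise of scale $\epsilon$ so that different trajectories do not all collapse onto the same maximum. The nearest-better rule and the function itself are our illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def build_trajectory(X, y, length=8, eps=0.05, seed=0):
    """Sketch: a synthetic trajectory over an offline dataset (X, y).
    Start at a random design; at each step move to the nearest design
    whose score is strictly higher (locality bias), perturbing every
    emitted point by eps-scaled Gaussian noise."""
    rng = np.random.default_rng(seed)
    i = rng.integers(len(X))
    traj = [X[i] + eps * rng.standard_normal(X.shape[1])]
    for _ in range(length - 1):
        better = np.where(y > y[i])[0]
        if len(better) == 0:               # reached the dataset optimum
            break
        d = np.linalg.norm(X[better] - X[i], axis=1)
        i = better[np.argmin(d)]           # nearest strictly-better design
        traj.append(X[i] + eps * rng.standard_normal(X.shape[1]))
    return np.array(traj)
```

With $\epsilon = 0$ every restart near the same region would trace the same path to one maximum; a small $\epsilon$ diversifies the trajectories without breaking their overall climb toward high-scoring regions.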
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I have raised my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind response. If there are any remaining points of discussion, please feel free to share them with us. We are always ready to engage in further discussion. Once again, we appreciate your thoughtful feedback. | Summary: The paper considers the problem of offline optimization, where the goal is to find the optimum of a black-box function in a zero-shot manner without online evaluations. The key idea is to generate trajectories with a locality-biased heuristic and employ a conditional diffusion model to learn the distribution of these trajectories. Experiments are performed on multiple tasks from the Design-Bench benchmark.
Strengths: - I think looking at the construction of trajectories is a really interesting problem in the context of offline optimization. Other than the two references mentioned in the paper, another recent ICML paper (listed below) also uses monotonic trajectories in the context of this problem, which shows this is a recurring theme.
Hoang, M., Fadhel, A., Deshwal, A., Doppa, J., & Hoang, T. N. (2024). Learning Surrogates for Offline Black-Box Optimization via Gradient Matching. In Forty-first International Conference on Machine Learning.
- I like the ablation experiments in the experimental section that show the efficacy of different components of the proposed approach.
Weaknesses: - This is not specific to the paper as such, but the tasks in the Design-Bench benchmark have multiple issues, and as a result, some of the results are not reliable. For example, the offline dataset in the superconductor task has multiple copies of the same inputs but with different outputs. As a result, the random forest oracle, which is fit on this offline data, is not reliable.
- The paper calls existing trajectory generation approaches (like sorted monotonic trajectories) heuristics, but the proposed approach is also a heuristic. Is there a more principled/theoretical explanation for the proposed approach?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable review and positive feedback! As only two questions have been raised by the reviewer, we will address them in this response. If you have any additional questions, please do not hesitate to let us know!
> (Weakness 1) Unreliable results of superconductor task.
We acknowledge the issues with Design-Bench, particularly in the Superconductor task. However, we have found that by fixing the seed for evaluation, we can obtain consistent outputs for multiple copies of the same inputs. Please note that we have reproduced baselines whose code is publicly available using the same evaluation procedure for a fair comparison.
> (Weakness 2) The paper calls existing trajectory generation approaches (like sorted monotonic trajectories) as heuristics but the proposed approach is also a heuristic. Is there more principled/theoretical explanation for the proposed approach?
We apologize for overstating our proposed method. Our trajectory construction method can also be considered a heuristic since it does not include learning components. Nevertheless, we focus on constructing trajectories that give us more valuable information for learning to improve designs toward diverse high-scoring regions. We have empirically found that constructing trajectories with locality bias can guide a diffusion model to explore high-scoring regions in Design-Bench and its variants. Furthermore, constructing trajectories does not require a significant amount of computation time, as shown in Table 9 in Appendix B.1.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for your response to my questions. I am fairly convinced that this will be a good addition to the offline model based optimization literature. I would request to add a discussion about following things:
- I appreciate the special focus on fixing the seed and reproducing the baselines but the Superconductor task has one specific issue that it has multiple copies of the same inputs but with different outputs. As a result, the oracle is not reliable even though we can reproduce the results. Please add this while presenting the results about this task.
- Hoang, M., Fadhel, A., Deshwal, A., Doppa, J., & Hoang, T. N. (2024). Learning Surrogates for Offline Black-Box Optimization via Gradient Matching. In Forty-first International Conference on Machine Learning. is another recent paper that constructs trajectories. It will be good to add discussion about this paper in the related work section.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback! We will incorporate an explanation of the unreliability of the Superconductor task and a discussion about the recent paper by Hoang et al. (2024) into our manuscript.
Please let us know if you have any further suggestions or adjustments you would like us to consider. | Summary: This paper introduces Guided Trajectory Generation (GTG), a novel conditional generative modeling approach to solve the MBO problem. GTG consists of three parts: trajectory construction, model training, and trajectory sampling and filtering. Experimental results on various tasks, including a toy 2D experiment and Design-Bench variants, demonstrate the method's effectiveness.
Strengths: 1. This paper is well-written, with strong motivation and clear descriptions. In particular, the authors introduce conditional guidance to explore high-scoring regions.
2. The effectiveness of the proposed method is verified through experiments on multiple tasks and a detailed comparison with existing methods is presented.
3. The paper analyzes the hyperparameter settings and explores the effects of different hyperparameter settings on the model performance.
Weaknesses: 1. The proposed method is similar to Decision Diffuser [1]. The proposed method focuses on the offline model-based optimization problem, while Decision Diffuser is used to solve the offline RL problem.
2. It is not clear whether, in the loss function, only x or both x and y are used to calculate the loss.
3. In this paper, the concept of filtering in Section 3.4 and Section 5 is not clearly explained.
[1] Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision-making? arXiv preprint arXiv:2211.15657, 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please clarify the difference between the method proposed in this paper and Decision Diffuser [1].
2. In Section 3.4 and Appendix D.3, the distribution of the data used to train the models must be close to the real distribution. Otherwise, out-of-distribution issues could occur, leading to erroneous evaluation of trajectories.
3. Please clarify the concept of filtering in Section 3.4 and Section 5. Are they the same operation or not?
[1] Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision-making? arXiv preprint arXiv:2211.15657, 2022.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have discussed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful review and positive feedback!
> (Weakness 1 & Question 1) Clarification on the difference between the proposed method and Decision Diffuser.
We would like to highlight that our method aims to find a design that maximizes the target black-box function while Decision Diffuser is a planner to solve offline RL problems by generating high-rewarding trajectories that follow environment dynamics.
Instead of utilizing a diffusion model to generate a single design, we train the diffusion model with synthetic trajectories constructed from the dataset. While several prior works construct trajectories from datasets, we developed a novel method for constructing trajectories with locality bias to help the generator better understand the landscape of the target function. As reviewer MZpe mentioned, how to construct trajectories is an interesting problem in the context of MBO. The ablation study summarized in Table 4 underscores that our method outperforms prior strategies across various tasks.
> (Weakness 2) It is not clear that in loss function, only x is used to calculate loss or both x and y are used.
We apologize for the confusion in the notation. The trajectory $\tau$, which is a set of $(x, y)$ pairs, should be used to calculate the loss in line 5 of Algorithm 2. We train the diffusion model to generate trajectories, not a single design $x$. We will modify the manuscript to clarify this.
> (Question 2) In Section 3.4 and Appendix D.3, the distribution of data used to train models must be close to the real distribution. Otherwise, out of distribution could occur, leading to an error evaluation of trajectories.
For both Section 3.4 and Appendix D.3, we train the proxy model with only the offline dataset. As our objective is filtering high-fidelity designs with the proxy, we introduce a rank-based reweighting suggested by [1] during training to make the proxy model focus on high-scoring regions. To prevent large errors in evaluation, we train an ensemble of MLPs for robustness of the proxy model. For more details, please refer to Appendix B.2 and our code.
[1] Austin Tripp, Erik Daxberger, and José Miguel Hernández-Lobato. Sample-efficient optimization in the latent space of deep generative models via weighted retraining. Advances in Neural Information Processing Systems, 33:11259–11272, 2020.
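For illustration, a minimal sketch of one common rank-based reweighting scheme in the spirit of [1] (the function name and the hyperparameter `k` are illustrative; the exact weighting used in our code may differ):

```python
def rank_weights(scores, k=1e-3):
    """Rank-based sample weights in the spirit of weighted retraining
    (Tripp et al., 2020): w_i proportional to 1 / (k * N + rank_i),
    where rank 0 is the highest score. A small k concentrates training
    weight on high-scoring samples, so the proxy focuses on the regions
    that matter for filtering."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # best score first
    raw = [0.0] * n
    for rank, idx in enumerate(order):
        raw[idx] = 1.0 / (k * n + rank)
    total = sum(raw)
    return [w / total for w in raw]
```

These normalized weights can then be used directly as per-sample loss weights when training the proxy ensemble.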
> (Weakness 3 & Question 3) Clarification of the concept of filtering in Section 3.4 and Section 5
We apologize for the confusion regarding the concept of filtering in the manuscript. Both Section 3.4 and Section 5 refer to the same concept. Among samples from multiple generated trajectories, we select candidates for evaluation by filtering with the proxy function. | Rebuttal 1:
Rebuttal: We sincerely thank the review committee for their detailed feedback. We appreciate the recognition of our paper's strengths, highlighted by the reviewers: **well-written** (gJu1, nEhc), **novel** (nEhc, FGy7), and **extensive experiments and ablations** (gJu1, MZpe, nEhc). In response to the reviewers' feedback, we provide a summary of conducted additional experiments:
- **More discrete tasks:** We conduct additional experiments on the UTR task to demonstrate the effectiveness of our method on various discrete tasks, achieving the second-best results on this task.
- **Different proxy functions**: We conduct additional experiments with different proxy functions such as COMs and ROMa for filtering strategy. We find that there is no significant difference in performance depending on the choice of proxy function.
- **Median Performance**: We report the median score of our method and all baselines. We achieve the highest mean rank (4.0) regarding median performance.
Detailed responses have been provided for each point raised by the reviewers.
Pdf: /pdf/353597d22d23f0c80ba64abb32f28faffdd01539.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalizing Weather Forecast to Fine-grained Temporal Scales via Physics-AI Hybrid Modeling | Accept (poster) | Summary: The paper proposes a physics-AI hybrid modeling framework for fine-grained weather forecasting. The authors propose to adaptively tune a PDE kernel together with a neural network as the encoder. Following Euler time stepping, the PDE kernel can perform fine-grained temporal forecasts, acting as the physics-guided modeling part.
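The Euler time stepping mentioned above can be sketched generically (an illustrative sketch of the numerical idea only, not the paper's actual PDE kernel):

```python
def euler_steps(x, f, dt, n):
    """Generic forward-Euler time stepping: x <- x + dt * f(x).

    Taking n small steps of size dt produces intermediate states at a
    finer temporal resolution than one coarse step of size n * dt,
    which is the basic idea behind fine-grained rollout with a PDE kernel."""
    states = [x]
    for _ in range(n):
        x = x + dt * f(x)
        states.append(x)
    return states
```

In the hybrid setting, `f` would be the (tuned) PDE tendency, and the intermediate states supply the fine-grained forecasts.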
Strengths: 1. The combination of AI and physics is crucial and novel for weather forecast.
2. The fine-grained weather forecast is of interest to nowcasting and temporal downscaling.
Weaknesses: Some opportunities to improve:
1. The experimental details are far from sufficient. What hyperparameters do you use, apart from the learning rate? How do you divide the validation and test sets? How do you divide inputs and labels? What are the inputs and outputs to the model, and their sizes? What are the datasets' statistics, as none are introduced? What is the time cost or number of parameters of your model compared to other models? How do you obtain the results in the tables, since there are no error bars? Are these all based on a one-time run? Are they statistically significant, and what are the p-values, etc.? Table 3 does not demonstrate that the proposed model is the best model. Is there a specific reason why the 120-min forecast is especially good? Is there any pseudo-algorithm to aid understanding? Is there code for understanding and checking?
2. The experimental results are neither good enough nor analyzed well enough. It looks strange in Figure 4 that FourCastNet is much worse than ECMWF-IFS, even though FourCastNet should have been better as reported in the original paper. This needs discussion to justify, or maybe it is due to the experimental setting; it is hard to know due to the lack of experimental details pointed out above. The nowcast results basically suggest that the proposed model is no better than previous models, except at 120 min, which is a less realistic setting for real life. Figure 6 reveals very minimal information about the comparison between models. All the errors look alike, and it is hard to know which model's prediction is better without ground truth. I am not convinced by the explanation of why the physics weight decays within each hour: unless your neural network counterpart's prediction does not change much, the ratio between physics and AI does not directly reflect how much they change. It could be that the AI part is predicting/contributing less, even though the weight is surging. It does not mean anything.
3. I want to highlight this problem in a separate paragraph. The ablation study is strange: it seems the physics part is not always helping. This might relate to how you use the PDE kernel and derive the PDEs. Your Appendix A needs to cite references and name the equations you are using. For example, how do you determine the coefficients for the constants in the equations? Shouldn't they be learned, since you cannot tell how big the friction coefficient is, let alone the term, for example? Moreover, Eq. 14 seems incorrect. I cannot recall such an equation in fluid dynamics. It is the continuity equation if you change p to z. However, pressure level, geopotential, and height are not strictly the same thing. I am concerned that they are assumed to be the same without any justification.
Technical Quality: 2
Clarity: 3
Questions for Authors: I encourage the authors to address my concerns listed in the weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your thoughtful review! We are pleased that you appreciate the innovative combination of AI and physics. We will address your remaining questions below.
> Q1: The experimental details are not sufficient.
Thank you for your suggestions. The table below outlines the experimental details:
|Hyperparameter|Value|
|-|-|
|Max epoch|50|
|Batch size|4x8(GPUs)|
|Learning rate|5e-4|
|Learning rate schedule|Cosine|
|Patch size|4x4|
|embedding dimension|1024|
|MLP ratio|4|
|Activation function|GELU|
|Input (0-hour)|[4,69,128,256]|
|Output (1,3,6-hour)|[4,3,69,128,256]|
|Datasets|Training set|Validation set|Test set|Time resolution|Variable|
|-|-|-|-|-|-|
|WeatherBench|1980-2014|2015|2017-2018|1h|tp,t2m,u10,v10,z,q,u,v,t|
|NASA|None|None|2017-2018|30min|tp|
> Q2: The time cost for your model.
As shown in the `global response PDF`, introducing the PDE kernel slightly increases training time, but the added computational cost is acceptable.
> Q3: Error bars and p values.
The error bars of RMSE are displayed in the `global response PDF`.
We apply the Wilcoxon test to demonstrate that **our model achieves a lower RMSE compared to the original model at the 95% confidence level (p-value<0.05)**.
|Lead time (h)|6-hour|3-day|4-day|5-day|
|-|-|-|-|-|
|p-value t2m|1.42e-14|1.42e-14|1.42e-14|1.42e-14|
|p-value t850|1.42e-14|1.42e-14|2.84e-14|7.10e-14|
|p-value u10|1.42e-14|1.42e-14|1.42e-14|2.84e-14|
|p-value z500|8.04e-04|2.83e-02|5.68e-05|6.39e-08|
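For reference, the Wilcoxon signed-rank test used above can be sketched as follows (a minimal pure-Python sketch using the normal approximation; in practice a library routine such as `scipy.stats.wilcoxon` would typically be used):

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test (normal approximation).

    Compares paired samples a and b; zero differences are dropped and
    tied absolute differences receive averaged ranks."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    # Rank the absolute differences (1-based), averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mean) / sd
    p = 1.0 + math.erf(z / math.sqrt(2.0))  # = 2 * Phi(z) for z <= 0
    return w, min(p, 1.0)
```

A p-value below 0.05 from this test indicates that the paired per-sample errors of the two models differ at the 95% confidence level.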
> Q4: Table 3 does not demonstrate that the proposed model is the best model. Why 120-min forecast is especially good?
Firstly, it should be pointed out that our model was not fitted to 30- and 90-min predictions during training, but achieves generalization in a unified model. In other words, our model achieves SOTA on most metrics under a more stringent setting (without interpolation models).
Secondly, precipitation forecasts with a lead time of 120 min or even longer serve as experimental settings `[1]` and play a crucial role in predicting disasters such as mudslides `[2]`.
Thirdly, as the lead time increases, the forecasting difficulty increases, and the advantages of our model can be better demonstrated. We added the tp RMSE at 180-min to emphasize that the benefits of our model are more pronounced.
|Model|tp RMSE@180-min ↓|
|-|-|
|FourCastNet|0.88|
|ClimODE|0.39|
|Keisler|0.43|
|WeatherGFT (ours)|**0.28**|
> Q5: Pseudo algorithm and code.
Part of the code for the PDE kernel is included in `Appendix B` of the paper. The complete code will be released after the paper is accepted.
> Q6: FourCastNet should have been better than ECMWF-IFS in the original paper.
In the original paper of FourCastNet `[3]`, `Figure 1` shows that the z500 RMSE of FourCastNet is higher than that of ECMWF-IFS, which indicates that its prediction is relatively poor. Even in the case of FourCastNet v2 `[4]`, its forecast skill for z500 does not surpass that of ECMWF-IFS.
> Q7: All the errors in Figure 6 look alike.
The red boxes in `Figure 6` show that the predictions obtained by other AI models often have problems with smoothness and lack of extreme values. On the contrary, the predictions of our model have more details and more accurate extreme values.
> Q8: Explanation on router weight.
To ease your concerns, we calculate the input norm for each router, as outlined in the `global response PDF`. The AI and physics feature norms remain consistent and similar (0.5:0.5), indicating **the router's independence from both PDE kernels and attention blocks**. This decoupling ensures that weight variations do not influence the inputs. **Both the AI and physics branches produce outputs of similar magnitude**, while the router dynamically selects the better part from them.
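For intuition, a convex combination with a learned gate might look like the following (a hypothetical sketch with illustrative names; the actual router is a learned module described in the paper):

```python
import math

def router_combine(physics_feat, ai_feat, gate_logit):
    """Hypothetical sketch of a learned gate blending two branches.

    A sigmoid gate w in (0, 1) forms a convex combination of the physics
    and AI features; because both inputs have similar norms, the output
    magnitude stays comparable regardless of how the weight shifts."""
    w = 1.0 / (1.0 + math.exp(-gate_logit))
    return [w * p + (1.0 - w) * a for p, a in zip(physics_feat, ai_feat)]
```

With such a convex combination, a surging weight only re-balances the two branches; it cannot inflate the output magnitude on its own.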
> Q9: Physics is not always helping.
Using the PDE kernel without changing the training method may not necessarily lead to improved RMSE. If we only use lead time = 6h labels for training supervision, the earlier PDE kernels of this deep network will not be effectively trained due to vanishing gradients. After adding 1h and 3h supervision to the middle part of the network, the role of the PDE kernel can be better reflected.
In addition, the *`bias`* and *`energy`* metrics in the `global response PDF` indicate that **integrating the PDE kernel effectively addresses the issue of energy decay with increasing lead time**.
> Q10: How do you determine the coefficients for the constants?
`Equation 7` is the kinematic equation among the basic atmospheric equations. $F_h$ in `Equation 7` represents the friction force, which has a very small magnitude (on the order of $10^{-12}$), so it can be omitted in the calculation; refer to `Section 2.4` of `[5]`. $fv$ in `Equation 15` represents the Coriolis force, where $f$ is the geostrophic parameter, set as a constant; refer to `Section 4.6.3` of `[5]`. The other constants are also taken from `[5]`.
> Q11: Pressure level and geopotential height are not the same thing.
While we appreciate the reminder, we would like to clarify that we do not confuse the p-coordinate and the z-coordinate. Our PDEs are all converted to the p-coordinate; refer to `Sections 1.6.1-1.6.2` of `[5]`. We will give a more detailed proof in later versions.
> References
- `[1]` *Sønderby C K, Espeholt L, Heek J, et al. Metnet: A neural weather model for precipitation forecasting.*
- `[2]` *Brunetti M T, Melillo M, Peruccacci S, et al. How far are we from the use of satellite rainfall products in landslide forecasting?*
- `[3]` *Kurth T, Subramanian S, Harrington P, et al. Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive fourier neural operators.*
- `[4]` *Bonev B, Kurth T, Hundt C, et al. Spherical fourier neural operators: Learning stable dynamics on the sphere.*
- `[5]` *Holton J R. An Introduction to Dynamic Meteorology. Forth edition.*
If our responses have clarified your inquiries, we kindly ask for an update to your score. Thank you very much for your time.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read the rebuttal carefully. I raise my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for raising the score. We appreciate your careful review once again! | Summary: In this paper, a hybrid model (a model that combines machine learning with physics) is demonstrated for nowcasting and medium-range weather forecasting. WeatherGFT uses machine-learned weightings that combine two successful methods for weather forecasting (machine learning and the traditional numerical, PDE-based method). The combination of both methods allows for significantly increased temporal resolution compared to purely data-driven methods (15 minutes vs. 6-hourly). This approach also provides a new framework for combining physics-based methods with data-driven methods, which allows one to differentiate through the hybrid model.
Strengths: Combining machine learning with physics-based approaches (e.g., hybrid modeling) is an exciting and highly researched topic for weather and climate. As mentioned in the paper, purely data-driven models are black-box systems even if they do perform well. Existing data-driven models are also too temporally coarse for some operational weather forecasting applications. This paper does a good job of addressing both of these existing problems in the field.
Weaknesses: The novelty of the paper is not as strong as claimed. Other hybrid models that combine machine learning with physics-based approaches exist (see Arcomano et al. 2022 and 2023, Clark et al. 2022, and others). Arcomano et al. 2022/2023 even include a machine-learned weighting for the combination of the ML-based model and the physics-based model.
The time-embedding and multi-lead times in one model have also been demonstrated before, see Stormer (https://arxiv.org/abs/2312.03876) and MetNet (https://arxiv.org/abs/2306.06079).
Technical Quality: 3
Clarity: 3
Questions for Authors: In section 4.3, I don’t understand how FourCastNet, ClimODE, or Keisler was used for nowcasting. They all have a time step of 6 hours, how is it possible to get 30-minute forecasts using frame interpolation methods? Are you using interpolation between ERA5 (e.g. the initial conditions) and a 6-hour forecast from these models?
In section 4.3, how are the forecasts including WeatherGFT initialized? Do they all use ERA5?
I would like to see the effects of the PDE Kernel on other metrics. Does this inclusion of a physics kernel improve spectral bias or allow stability? What about the conservation of energy or momentum?
What is the computational cost during inference of having a PDE kernel compared to just the transformer by itself?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Overall the limitations of the paper are well laid out, however, some claims are not supported by the paper.
Claims about being the first to move away from fixed lead times for data-driven weather forecasting are not true. Stormer (https://arxiv.org/abs/2312.03876) and MetNet (https://arxiv.org/abs/2306.06079) have demonstrated this previously.
Claim “In addition, the prediction error of our model at the lead time of 6-hour is significantly smaller than that of the physical dynamic model ECMWF-IFS”. For Z500 this is not supported by Figure 4.
For Figure 5, the prediction of the subtropical high (it should be a subtropical ridge) isn't convincing. That seems to be a function of the contouring in Matplotlib, as the difference plots show similar magnitudes of errors.
Overall, if some of these claims are toned down and the appropriate citations are added, the manuscript will be improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your insightful review and detailed feedback! We are glad for your recognition of the significance of our hybrid modeling of physics and AI.
We appreciate your feedback on specific claims in our paper, and will refine specific statements and add citations of relevant papers as necessary. In the following, we will address your remaining questions.
> Q1: Comparisons with other hybrid models that combine machine learning with physics.
While there exist various methodologies for integrating AI and physics, **our approach diverges in both methodology and focus from those mentioned**. The referenced papers primarily integrate machine learning techniques into a complete dynamical model (e.g., an AGCM) to improve the forecast performance of that physics model. In contrast, our method focuses on combining PDE processes with neural networks, rather than improving an existing physical model. In addition, our focus is on finer-scale physical modeling to generalize to finer-scale predictions without valid training labels, which differs from focusing on improving forecasting skill.
> Q2: The time-embedding and multi-lead times in one model have also been demonstrated before, see Stormer and MetNet.
Thank you for your reminder. It should be noted that time-embedding and multi-lead times are not our core innovations (our core innovation lies in the physical modeling). They represent technical innovations in model implementation. We will increase the citations of the papers related to these concepts.
However, we would like to emphasize that these methods differ from previous work. In Stormer and MetNet, there is no direct correlation between the number of network layers and the forecast lead time. In MetNet, the lead time condition serves as input for all network layers, while our network module (HybridBlock) evolves within a short timeframe without lead time conditions. In our approach, **the lead time condition is solely fed into the decoder to extract predictions from different network layers' outputs**.
For multi-lead-time training, the method in our paper is a completely different technique from the multi-step finetuning in Stormer. **Stormer's finetuning focuses on autoregressive prediction, and its network itself can only output predictions for the next single step.** In contrast, our method has the capability to generate predictions for multiple steps within a single forward process.
> Q3: How FourCastNet, ClimODE, or Keisler was used for nowcasting?
For the precipitation nowcasting experiment, we use 1-hour interval ERA5 to train FourCastNet, ClimODE, and Keisler, respectively, to obtain models that can make 1-hour predictions. *(NOTE: In the medium-range forecast experiment, these models have a lead time of 6 hours during training.)* However, as ERA5 lacks half-hour data, these models are unable to directly provide half-hour predictions. Therefore, we utilize additional interpolation models to interpolate the 1-hour predictions of these models to 30-min.
> Q4: In section 4.3, how are the forecasts including WeatherGFT initialized? Do they all use ERA5?
The initial fields of all models are from ERA5. The only difference is that other models need to interpolate their prediction results to 30-min, while our method can directly get 30-min of prediction from the networks.
> Q5: Does this inclusion of a physics kernel improve bias? What about the conservation of energy?
This is a good suggestion and helps us to have a more comprehensive understanding of the role of the PDE kernel. We measured *`bias`* and *`energy`* and found that **the PDE kernel plays a positive role in maintaining energy**, as shown in `global response PDF`. We believe that the preservation of energy is intimately linked to enhanced physical modeling, exemplified by the dynamic equations outlined in `Equation 7`. We present the calculation methods and related references of *`bias`* and *`energy`* in this `PDF`.
> Q6: What is the computational cost of having a PDE kernel compared to just the transformer by itself?
As shown in the `global response PDF`, introducing the PDE kernel slightly increases training time, but the added computational cost is acceptable.
> Q7: Claim “In addition, the prediction error of our model at the lead time of 6-hour is significantly smaller than that of the physical dynamic model ECMWF-IFS”. For Z500 this is not supported by Figure 4.
We apply the Wilcoxon test to demonstrate that **our model achieves a lower RMSE compared to the ECMWF-IFS at the 95% confidence level (p-value<0.05)**. The table below indicates that the improvement in z500 is relatively smaller than that in other variables when compared to the ECMWF-IFS, which results in the z500 curve in `Figure 4` closely resembling that of the ECMWF-IFS.
|Lead time (h)|6-hour|3-day|4-day|5-day|
|-|-|-|-|-|
|p-value t2m|1.42e-14|1.42e-14|1.42e-14|1.42e-14|
|p-value t850|1.42e-14|1.42e-14|1.42e-14|1.42e-14|
|p-value u10|1.42e-14|1.42e-14|1.42e-14|2.84e-14|
|p-value z500|1.42e-14|1.25e-03|9.86e-04|6.26e-04|
> Q8: For Figure 5 seems to be a function of the contouring in Matplotlib as the difference plots show similar magnitude of errors.
The color bar we employ is consistent and standardized, with its upper and lower bounds derived from the maximum prediction error across all forecasts. By examining the error visualization in `Figure 5`, it is evident that our predictions exhibit reduced errors. The experiment depicted in `Figure 4` quantitatively demonstrates that our model boasts a relatively lower RMSE. Moreover, as detailed in the response to Q7, the results displayed in `Figure 4` hold significance.
We will introduce additional visualizations in the appendix to illustrate that our model yields comparatively smaller prediction errors.
If our responses have clarified your inquiries, we kindly ask for an update to your score. Thank you very much for your time.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing my concerns and answering my questions. If accepted I suggest adding some of the plots and or inference speed comparisons to the paper. I raise my score to a 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score. We will continue to improve our paper according to your valuable suggestions. Your meticulous review is greatly appreciated. Thank you once again. | Summary: The paper proposes WeatherGFT, a physics-AI hybrid model designed to generalize weather forecasts to finer temporal scales beyond the training dataset. By integrating PDE kernels for physical simulation and neural networks for adaptive bias correction, the model aims to provide accurate 30-minute forecasts using an hourly dataset. The lead time-aware training framework enhances the model's ability to generalize across multiple lead times, achieving state-of-the-art performance in both medium-range and nowcasting tasks.
Strengths: 1. The framework that fuses AI and PDEs is innovative and improves the model's generalizability.
2. The model demonstrates generalization capabilities across time scales, achieving finer temporal resolutions (e.g., 30-minute forecasts) from coarser data.
3. Extensive experiments validate the model's state-of-the-art performance across various forecasting tasks and lead times.
Weaknesses: 1. The paper is well-structured but could benefit from clearer explanations and analyses of the novel modules and specific contributions.
2. The physics model relies on a limited set of PDEs for simulation, which may not fully capture the intricacies of real-world atmospheric dynamics.
3. The baseline comparisons are limited, as many AI models available for weather forecasting have not been included.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you further explain Equation 5 and how you formulate $K_x$ and $M_x$
2. How does the convolution layer work to align neural network features with physical features?
3. How do you update the learnable factor $r$? How do you compute the weight to draw Figure 1 since the weight is a vector?
4. Does tp (hourly precipitation) have a related PDE? How do you deal with variables that are not related to any PDE?
5. How fine-grained can the model achieve? Doing 30-minute forecasts with hourly data is impressive, but can the model achieve finer resolutions like 15 minutes or even 5 minutes?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your thoughtful review and detailed feedback! We are delighted that you value the innovative fusion of AI and physics in our research. We have revised the paper according to your suggestions and will now respond to the remaining queries you have.
> Q1: The paper is well-structured but could benefit from clearer analyses.
Thank you for your affirmation. We will add more quantitative and rigorous analysis to the paper to fully demonstrate our model. In the `global response PDF`, we added evaluations of two indicators, *`bias`* and *`energy`*. The results show that **using the PDE kernel helps make the energy of the model's prediction field more consistent**.
> Q2: The physics model relies on a limited set of PDEs.
We agree that exploring more PDEs is valuable. Currently, we employ only a few PDEs, which already yields performance enhancements and demonstrates the effectiveness of our design in combining AI and PDEs. Moving forward, we intend to increase the number of PDEs to simulate atmospheric dynamics more comprehensively.
> Q3: The baseline comparisons are limited.
We have included additional model comparisons, as depicted in the table below. Specifically, SphericalCNN is a CNN model designed for spherical data, DMNWP utilizes a diffusion model for weather prediction, and EWMoE employs a mixture of experts (MoE) for weather forecasting.
|RMSE z500|6h|3day|4day|5day|
|-|-|-|-|-|
|SphericalCNN|28.40|161.1|239.9|338.4|
|DMNWP|52.33|272.3|360.7|466.8|
|EWMoE|23.52|165.3|240.1|341.6|
|WeatherGFT|**22.08**|**152.3**|**225.8**|**315.7**|
|RMSE t850|6h|3day|4day|5day|
|-|-|-|-|-|
|SphericalCNN|0.494|1.183|1.493|1.860|
|DMNWP|1.073|1.823|2.247|2.551|
|EWMoE|0.513|1.259|1.593|1.865|
|WeatherGFT|**0.457**|**1.176**|**1.480**|**1.839**|
> Q4: Can you further explain Equation 5 and $K_x$ and $M_x$?
`Equation 5` shows the differential and integral operators in the model. $K_x$ is the convolution kernel. Consider the one-dimensional data $x=[-2,-1,0,1,2]$: it increases from left to right by 1, i.e., its gradient is 1 everywhere. Applying the convolution kernel $K_x$ to $x$, the result is:
$$Conv(x, K_x)=\frac{(-2)\times 1+(-1)\times (-8)+0\times 0+1\times 8+2\times (-1)}{12} = 1$$
By using this convolution kernel, the data gradient can be determined.
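A quick numerical check of this computation (assuming the five-point central-difference stencil $K_x=[1,-8,0,8,-1]/12$ read off from the calculation above):

```python
import numpy as np

# The five-point central-difference stencil read off from the computation above
K_x = np.array([1, -8, 0, 8, -1]) / 12

# Data increasing by 1 from left to right, so its gradient is 1 everywhere
x = np.array([-2, -1, 0, 1, 2])

grad_at_center = np.dot(x, K_x)
print(grad_at_center)  # 1.0
```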
$M_x$ obtains the integral through matrix multiplication. Given the matrix $x$ below, the result of $xM_x$ is:
$$x=\begin{bmatrix}
1 & 4\\\\
2 & 5\\\\
3 & 6
\end{bmatrix},\ xM_x=
\begin{bmatrix}
1 & 4\\\\
2 & 5\\\\
3 & 6
\end{bmatrix}
\begin{bmatrix}
1 & 1\\\\
0 & 1
\end{bmatrix}=\begin{bmatrix}
1 & 1+4\\\\
2 & 2+5\\\\
3 & 3+6
\end{bmatrix}$$
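This can be verified numerically; in general, an upper-triangular matrix of ones implements a cumulative sum along the row axis (a small sketch, using `np.triu` to generalize $M_x$ to larger sizes):

```python
import numpy as np

x = np.array([[1, 4],
              [2, 5],
              [3, 6]])
M_x = np.triu(np.ones((2, 2), dtype=int))  # [[1, 1], [0, 1]]

print(x @ M_x)
# [[1 5]
#  [2 7]
#  [3 9]]

# Equivalently, M_x performs a cumulative sum along the row axis
assert (x @ M_x == np.cumsum(x, axis=1)).all()
```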
> Q5: How does the convolution layer work to align features?
The output tensor of the PDE kernel differs in size from that of the attention block. The PDE kernel output has size $[8,5,13,32,64]$, where the dimensions are *batch size, (z, q, u, v, t), pressure levels, H-patch, W-patch*. The attention output has size $[8,1024,32,64]$, where the dimensions are *batch size, embedding dim, H-patch, W-patch*. We first reshape the PDE kernel output to $[8,5\times 13,32,64]$. Then, through a convolution layer `Conv(in_channel=65, out_channel=1024)`, the tensor is converted to $[8,1024,32,64]$, aligning it with the output of the attention block.
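A shape-level sketch of this alignment (the kernel size of the convolution is not stated, so we assume a $1\times 1$ kernel, which acts as a per-location linear map over channels; `W` stands in for its weights):

```python
import numpy as np

rng = np.random.default_rng(0)
# Shapes from the description: batch 8, 5 variables x 13 pressure levels, 32x64 patches
pde_out = rng.standard_normal((8, 5, 13, 32, 64))
attn_out = rng.standard_normal((8, 1024, 32, 64))

# Step 1: merge the variable and pressure-level axes -> [8, 65, 32, 64]
pde_flat = pde_out.reshape(8, 5 * 13, 32, 64)

# Step 2: a 1x1 convolution is a per-location linear map over channels;
# W stands in for the weights of Conv(in_channel=65, out_channel=1024)
W = rng.standard_normal((1024, 65))
aligned = np.matmul(W, pde_flat.reshape(8, 65, 32 * 64)).reshape(8, 1024, 32, 64)

assert aligned.shape == attn_out.shape  # both [8, 1024, 32, 64]
```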
> Q6: How do you update the learnable factor $r$ and compute the weight to draw Figure 1?
Below is the PyTorch code for initializing and using the learnable factor $r$.
```python
import torch
import torch.nn as nn

# dim: the embedding dimension of the features
r = nn.Parameter(torch.zeros(1, 1, 1, dim), requires_grad=True)

def router(physics_features, ai_features):
    # Features size: [B, H, W, dim]; r broadcasts over B, H, and W
    r1 = 0.5 * torch.ones_like(physics_features) + r
    r2 = 0.5 * torch.ones_like(ai_features) - r
    mixed_features = r1 * physics_features + r2 * ai_features
    return mixed_features
```
Through PyTorch's automatic differentiation, $r$ is updated during the model's backward pass. After training is complete, we first average the $r$ of each HybridBlock to obtain 24 values corresponding to the 24 blocks. Then we divide them into 4 groups, namely:
```python
# 1, 2, 3, 4, 5, ..., 24
# 00:15, 00:30, 00:45, 01:00, 01:15, ..., 06:00
[1, 5, 9, 13, 17, 21] # 15min
[2, 6, 10, 14, 18, 22] # 30min
[3, 7, 11, 15, 19, 23] # 45min
[4, 8, 12, 16, 20, 24] # 60min
```
Then, we can average the $r$ of each group and draw `Figure 1`.
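For concreteness, the grouping and averaging above can be sketched as follows (the `r_blocks` values here are illustrative placeholders, not the trained weights):

```python
# Illustrative placeholder values for the averaged r of the 24 HybridBlocks
r_blocks = {i: 0.01 * i for i in range(1, 25)}

# Blocks offset, offset+4, offset+8, ... share the same sub-hour lead time
# (offset 1 -> 15 min, 2 -> 30 min, 3 -> 45 min, 4 -> 60 min)
groups = {offset: list(range(offset, 25, 4)) for offset in (1, 2, 3, 4)}

# One averaged router weight per group, as plotted in Figure 1
group_means = {
    15 * offset: sum(r_blocks[b] for b in blocks) / len(blocks)
    for offset, blocks in groups.items()
}
```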
> Q7: Does tp have a related PDE? How do you deal with variables that are not related to any PDE?
There is no PDE specifically for precipitation, since precipitation is considered as a diagnostic variable in the atmospheric dynamics, meaning that precipitation can be derived from other atmospheric variables. For these diagnostic variables, we utilized neural networks to predict their values. While these variables may not directly influence the PDE kernel, their information will be amalgamated through the router to accomplish implicit modeling.
> Q8: How fine-grained can the model achieve? Doing 30-minute forecasts with hourly data is impressive, but can the model achieve finer resolutions like 15 minutes?
As shown in `Figure 2` and `Section 3.6`, each PDE kernel simulates atmospheric dynamics over 300 seconds, allowing our model to make predictions at various time intervals by stacking these kernels. Theoretically, the smallest time scale our model can forecast is 15 minutes, because one attention block is composed of 3 PDE kernels, equating to 3×300 s = 15 min. Presently, we are unable to evaluate forecasts at scales finer than 30 minutes due to the lack of ground truth at the 15-minute scale. Nevertheless, the generalization to 30-minute predictions showcased in the paper already demonstrates the efficacy of our physics-hybrid modeling approach.
If our responses have clarified your inquiries, we kindly ask for an update to your score. Thank you very much for your time and feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification and additional results. They addressed my concerns. I've raised my score.
---
Reply to Comment 1.1.1:
Comment: Thank you immensely for the update. We genuinely value your kind words and highly constructive feedback! | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all reviewers for their efforts in reviewing our submission and their recognition of our work, e.g., *'fuse AI and PDE is innovative'* and *'demonstrate generalization capabilities'* from **Reviewer CYo5**, *'the paper does a good job address both of these existing problems'* from **Reviewer oEsH**, and *'crucial and novel for weather forecast'* from **Reviewer jiN5**.
In the following, we offer general responses to the common questions and concerns raised by the reviewers. More detailed responses to each specific comment can be found in our rebuttal to each review.
- In response to questions from **Reviewer oEsH** and **Reviewer jiN5** about the significance of our model's performance improvement, we included the Wilcoxon test to demonstrate that **our model's enhancement is statistically significant at a 95% confidence level (p-value < 0.05)**.
- **Reviewer CYo5** has expressed concerns regarding the adequacy of our explanation of integral and differential operators, while **Reviewer oEsH** and **Reviewer jiN5** have raised questions about certain experimental details. Owing to the constraints of the paper's page limit, our coverage of these specifics in the main text is incomplete. We have responded to these queries in each rebuttal and will incorporate them into the appendix for the upcoming version. Thanks for your suggestions.
- **Reviewer oEsH** has noted similarities between some of the methods in our model and those found in previous papers. Thank you for the reminder; we will include references to relevant work where appropriate. However, we contend that although these works may share some similarities, they diverge in both motivations and methods. Our research emphasizes achieving generalization on a smaller scale through physical modeling, contrasting with studies that leverage AI techniques to enhance the forecasting capabilities of a physical dynamic system.
In the `global response PDF`, we present visualizations of five new results:
- Bias: Bias indicates the disparity between the model's predictions and the ground truth. Negative bias indicates underestimation, a prevalent issue in forecasting models. Although the PDE kernel was not specifically designed to address bias underestimation, experimental results indicate that its usage **helps ameliorate underestimation**.
- Energy: This assesses the energy changes in the model's predictions. The experiments reveal that **employing the PDE kernel aids in energy preservation**.
- Comparison of Time Consumption: Introducing the PDE kernel slightly increases training time, but the added computational cost is acceptable.
- RMSE Error Bars: Error bars of RMSE values for different variables across various lead times.
- Router Weights and Features Norm Change: This figure complements `Figure 7` in the paper. It illustrates that **physical and AI features are on a comparable scale**, with the router dynamically selecting the more effective aspects from each. The router's weight adjustments do not impact the output of the AI or physical branches, **highlighting the router's decoupling characteristics**.
We have provided tailored responses to each reviewer's queries. Should you have further questions, we are eager to engage in discussion and address them promptly. If our responses have clarified your inquiries, we kindly ask for an update to your score. Thank you very much for your time and feedback.
Sincerely,
Authors
Pdf: /pdf/1cc027e01a734f012996b55888275f766eab714d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
In-Context Learning with Representations: Contextual Generalization of Trained Transformers | Accept (poster) | Summary: This paper studies the training dynamics of multi-head Transformers by gradient descent on non-linear regression tasks via ICL. This work shows the linear convergence to the global minimum of the training and the in-context inference. An impressive contribution is that transformers are proven to learn contextual information to generalize to unseen examples when prompts contain a small number of input-label pairs. I am willing to update my review if my concerns are addressed.
Strengths: 1. The problem to be solved is significant, interesting, and challenging.
2. The paper is well-written.
3. The comparison with the related works makes the motivation clear.
Weaknesses: 1. It seems that the assumption that $H\geq N$ in Proposition 1 is too strong. The single-head works in Table 1 do not need this condition for few-shot ICL (Obviously, for single-head, $1<N$). I know the problem to solve is different, but this means that Transformers cannot handle long contexts without a large number of heads.
2. No experiments are conducted to verify the theory. It is important to verify the theory by experiments, even with the most simplified Transformer model. For example, can you show if $H<N$, the learning will fail, while if $H\geq N$, the learning becomes successful by a one-layer Transformer?
3. Some statements related to the existing works are wrong. For example, in [Li et al., 2024], $N=\mathcal{O}(\epsilon^{-2}T)$ is the number of data for the pretraining to enable ICL for the Transformer. It is not the number of examples in each prompt as stated in line 129 of the submitted manuscript. The number of examples in each prompt in [Li et al., 2024] is $l\_{tr}$ and $l\_{ts}$ in Eqn 9 of Theorem 3.3.
4. I am unsure whether the theoretical proof is correct. For example, I don't know how you derive linear convergence of the loss via only PL condition and the smoothness of the loss function. Please see Question 1.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Regarding Weakness 1, why do you have $\eta_Q$ in Eqn 63? I think by (41), (60), and Lemma 4.3 in [Nguyen and Mondelli, 2020], $\eta_Q$ should be replaced by $1$ in Eqn 63.
2. What is the requirement for the initialization of $w\_{h,k}$ defined in (7)?
3. It seems there is no requirement or assumption for the $f$ function to be learned. Is your bound derived for the worst case of all the possible $f$?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: There is no potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer vG3D
Thank you for your insightful review. We've **added experiments to validate our theory in the supplementary pdf and our global response**. Below we address your concerns. If our responses resolve your questions, we'd highly appreciate your consideration in raising the score. Certainly, please don't hesitate to request any further clarification.
> **(W1) Assumption $H\geq N$ too strong. The single-head works in Table 1 do not need this condition for few-shot ICL.**
We recognize the importance of weakening this assumption and will explore this in future work. Meanwhile, we want to make the following clarifications.
- As explained in Section 3.3, it is generally impossible to train a single-head shallow transformer to succeed in the task we consider. Our task involves more complex non-linear regression with representation learning, which fundamentally requires multi-head attention.
- Note that $H\geq N$ is a necessary and sufficient condition for the initialization condition (Assumption 3) to hold. To see why it is necessary, note that $B_k^{(0)}$ is an $N$-by-$H$ matrix, and has full row rank only when $H\geq N$.
- Assumption 3 is crucial to our proof: it guarantees that $\zeta_0$ (defined in (12)) is positive, which in turn ensures $\sigma$ (the PL parameter in (44)) is positive.
- However, it's important to note that Assumption 3 may not be a necessary condition for the convergence of the transformer. To weaken (or avoid) Assumption 3, other proof frameworks are needed. But due to the high non-convexity of (even shallow) transformers, it's somewhat unavoidable to make other strong assumptions (see the examples we list in the next bullet point).
- Though the single-head works in Table 1 do not need $H\geq N$, they often rely on other stronger assumptions even for tasks without representation learning, see Section 1 and 3.1 for details. To summarize:
- Huang et al. [2023], Li et al. [2024], Nichani et al. [2024] assume that the tokens are pairwise orthogonal, which implies that the token dimension $d$ is no smaller than the dictionary size $K$, while in practice $K\gg d$.
- Huang et al. [2023] and Chen et al. [2024] require the context length to be sufficiently large, so they are not analyzing *few-shot ICL*.
- Li et al. [2024] requires the width of their model to be larger than the square of the number of data.
> **(W1) This means that Transformers cannot handle long contexts without a large number of heads/(W2) No experiments are conducted to verify the theory...can you show if $H<N$, the learning will fail, while if $H\geq N$, the learning becomes successful by a one-layer Transformer?**
- Our above response indicates $H\geq N$ is only a sufficient (but may not be necessary) condition used to guarantee the convergence of the transformer. Therefore, our results don't indicate transformers cannot handle long contexts without a large number of heads in practice. However, shallow transformers have limited expressive power, which necessitates more attention heads to solve complex tasks. To support this, we have put empirical results in the supplementary pdf, please refer to Figure 3 and Figure 5 and the corresponding description in our global response.
> **(W3) Statements on [Li et al., 2024] are wrong.**
- You are correct, and we apologize for the mistake and will fix it in the updated paper. Thank you for your careful review.
> **(W4/Q1) Typo in (63)**
- Thank you for your sharp observation. You are right -- $\eta_Q$ should be replaced by 1 in (63). We apologize for the typo and will fix it in the updated paper.
- Let us emphasize that this typo does not affect the subsequent results, and the rest of our proof remains valid:
The revised (63) reads:
$$
\ell(\xi^{(t)})-\mathcal{L}^\star \leq \ell(\xi^{(t-1)})-\mathcal{L}^\star+\langle \nabla_\xi \ell (\xi^{(t-1)}), \xi^{(t)}-\xi^{(t-1)}\rangle +\frac{L}{2}||\xi^{(t)}-\xi^{(t-1)}||_2^2.
$$
Combining this with (38), we have
$$
\ell(\xi^{(t)})-\mathcal{L}^\star\leq \ell(\xi^{(t-1)})-\mathcal{L}^\star + \eta_Q(L\eta_Q/2-1)||\nabla \ell (\xi^{(t-1)})||_F^2.
$$
When $\eta_Q\leq 1/L$, we have $L\eta_Q/2-1\leq -1/2$, which gives the first part of (64):
$$\ell(\xi^{(t)})-\mathcal{L}^\star\leq \ell(\xi^{(t-1)})-\mathcal{L}^\star - \frac{\eta_Q}{2}||\nabla_\xi \ell (\xi^{(t-1)})||_F^2.$$
Note that this is the only place where (63) is used.
>**(W4) derive linear convergence of the loss via only PL condition and the smoothness of the loss function**
- The combination of PL condition and smoothness is indeed sufficient for linear convergence of gradient descent. See for example Theorem 13.2 in this note: https://www.stat.cmu.edu/~siva/teaching/725/lec13.pdf
- This approach is commonly used in deep learning optimization literature, as PL condition has been proven to hold for over-parameterized neural networks. See [1-3] for example.
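As a minimal illustration of this mechanism (a toy quadratic, not our transformer loss): for an $L$-smooth function satisfying the PL condition with parameter $\mu$, gradient descent with step size $1/L$ contracts the suboptimality gap by a factor of at most $1-\mu/L$ per iteration:

```python
import numpy as np

# f(x) = 0.5 x^T A x with eigenvalues 1 and 10 is L-smooth (L = 10)
# and satisfies the PL condition with mu = 1, so GD with step 1/L obeys
# f(x_{t+1}) - f* <= (1 - mu/L) (f(x_t) - f*), i.e. linear convergence.
A = np.diag([1.0, 10.0])
mu, L = 1.0, 10.0
x = np.array([1.0, 1.0])

losses = []
for _ in range(30):
    x = x - (1.0 / L) * (A @ x)       # gradient step: grad f(x) = A x
    losses.append(0.5 * x @ A @ x)    # f* = 0 here

# Consecutive loss ratios stay below the predicted contraction factor
ratios = [losses[t + 1] / losses[t] for t in range(5, 15)]
assert all(r <= 1 - mu / L + 1e-12 for r in ratios)
```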
> **(Q2) initialization of $w_{hk}$**
- $w_{hk}$ is initialized to be 0, as stated in line 226 of our paper.
> **Is your bound derived for the worst case of all the possible $f$?**
- Our bounds apply to any possible $f$, which includes the worst case. The key insight is that our theoretical guarantees depend on the function's outputs on the dictionary $\mathcal{V}$, i.e., in our analysis, we work with the finite-dimensional matrix $\widehat Z=(f_i(v_j))_{i\in[m],j\in[K]}\in \mathbb{R}^{m\times K}$ defined in line 485, rather than the full functional form of $f$.
---
[1] Liu, C., Zhu, L., and Belkin, M. (2020). Loss landscapes and optimization in over-parameterized non-linear systems and neural networks.
[2] Karimi et al. (2016). Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition.
[3] Loizou et al. (2021). Stochastic polyak step-size for sgd: An adaptive learning rate for fast convergence.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I am satisfied with the other answers except the one to W1. I still think the requirement that $H\geq N$ is too strong and not practical. It is also not verified by experiments how loose the bound $H\geq N$ is. Can you at least verify by experiments that with a larger context length $N$, the number of heads $H$ should be larger to ensure a small loss?
---
Rebuttal 2:
Title: Re: Reviewer vG3D
Comment: Thank you very much for your quick response! Below we change $N$ and find the smallest $H$ that achieves a training loss $\leq$ 0.5 within 100 iterations. We set $K=200, d=100, m=20, \tau=0.01$. Our result is as follows:
| N | 1 | 5 | 10 | 20 | 30 | 40 |
|-----|-----|-----|-----|-----|-----|-----|
| H | 1 | 5 | 8 | 29 | 32 | 41 |
This result verifies our theoretical finding that with a larger $N$, $H$ should be larger to ensure a small loss.
If our response resolves your questions, we will highly appreciate your consideration in raising the score. Certainly, we are more than happy to answer your further questions.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. I am wondering why you set the threshold of the training loss as 0.5. If you use a different threshold, will the bound still be tight? By the way, it seems that the result is not consistent with Figure 3 in the submitted PDF, where the case of $H=1$ cannot reach a loss of less than 0.5. Maybe the setting is different?
---
Reply to Comment 2.1.1:
Title: Re: Re: Reviewer vG3D
Comment: Thank you for your follow-up questions.
>**regarding inconsistency with Figure 3**
The configuration of the two settings are different. While both cases have $H=1$, in Figure 3, $N=30$ and clearly here $H=1$ cannot reach loss 0.5 (because our above experiment shows that $H$ should be at least 32 to reach loss 0.5 for $N=30$). But in the above experiment, $H=1$ is for $N=1$, where the loss can reach 0.5. We also mention that the parameters $K$ and $\tau$ are also set slightly different, but they are not the main reason to cause the difference.
>**If you use a different threshold, will the bound still be tight?**
We tried several configurations and found that the nature of the results is insensitive to the threshold (and other hyperparameters): $H$ always increases along with $N$. Below we present the results with loss thresholds of 0.1 and 0.5 and an iteration budget of 200; the other configurations remain the same as in the above experiment:
| N | 1 | 5 | 10 | 20 | 30 | 40 |
|-----|-----|-----|-----|-----|-----|-----|
| H (threshold=0.5) | 1 | 5 | 8 | 24 | 29 | 35 |
| H (threshold=0.1) | 1 | 7 | 21 | 36 | 51 | 57 | | Summary: This paper tackles the problem of task learning with representations using transformers. It presents a theoretical proof for convergence of training of a single-layer multi-head softmax attention transformer to the global minimum of the loss. Specifically, given $N$ example tokens and their corresponding task outputs, the transformer learns to predict the task output for the remaining $K-N$ unseen tokens, where $K$ is the size of the dictionary. The paper shows that when trained for sufficiently long, the transformer can perform ridge regression on the example tokens to label the unseen tokens.
Strengths: (1) The paper shows that the parameters of a single-layer multi-head softmax attention transformer can be trained at a linear convergence rate (Theorem 1) to the global minimizer of the training loss. This result elucidates the remarkably efficient training that the transformer architecture offers.
(2) The paper proves that once trained for sufficiently long, the transformer performs ridge regression (Theorem 2) on the example tokens to label the query tokens during inference. This is a very promising step towards understanding the generalization capabilities of transformers, which is quite different from earlier works that require the number of example tokens $N$ to be large.
(3) The paper is written crisp and clean, making it very easy to follow.
Weaknesses: (1) The optimal transformer configuration $(\boldsymbol{\theta})$ is not explicitly stated; only the functional form of the output is given, which makes it difficult to interpret the result in the parameter $(\boldsymbol{\theta})$ space.
(2) The result in this paper is about the wide single-layer transformers (more heads), rather than the deep ones (more layers), and relies on Assumption 3, which holds when the number of heads, $H\ge N$ (Proposition 2). This is restrictive and there might be other ways in which the assumption can hold even if $N > H$. Some empirical demonstrations with shallow transformers on simple experiments might support this better.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Proposition 1 provides a way to make (almost) sure that Assumption 3 holds, but requiring $H \ge N$ seems very demanding in terms of compute. Can you comment on how to obtain similar results by simpler assumption?
(2) I am curious about the motivation behind your problem formulation, i.e., task-based labelling of a finite dictionary. Is there a real-world problem where this might be of use?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: (1) One minor limitation is that the performance of the trained transformer depends on the relation between the number $N$ of example tokens and the dimension $m$ of the representational space (lines 295-299).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer MzSs
Thank you for your time in reviewing our paper. Below we address your points. If our responses adequately address your concerns, we would be grateful if you could consider increasing your current score. We are also happy to answer any additional questions.
> **(W1) The optimal transformer configuration $(\theta)$ is not explicitly stated; only the functional form of the output is given, which makes it difficult to the interpret the result in the parameter space.**
- As stated in line 316 and lines 482-485, $\theta=\{Q_h, w_h\}$ is optimal if and only if it satisfies:
$$
\forall k: \sum_{h=1}^H w_{h,k} softmax(V^\top Q_h v_k)=A_{:,k},$$
where matrix $A$ is defined in (33).
Note that due to the non-linear nature of the softmax function and the complex interplay between $Q_h$ and $w_h$, an explicit closed-form solution in the parameter space is not feasible.
- This characterization, while not explicitly in the parameter space, shows that the transformer gains in-context ability by learning to approximate the matrix $A$, which encodes the "inherent information" of the tasks (basic function maps), through a combination of its attention mechanism (represented by the softmax terms) and output weights. It's worth noting that this characterization shows the role of multi-head attention in approximating complex functions (see line 330-339) and connects the learned parameters to the underlying structure of the tasks.
> **(W2/Q1) Assumption 3 and the $H \geq N$ condition**
We recognize the importance of potentially weakening this assumption and will explore this in future work. Thank you for raising this. Meanwhile, we also want to make the following clarifications.
- To weaken (or avoid) Assumption 3, other proof frameworks are needed. For example, one possible way is to extend the analysis in (Huang et al. 2023) [9] based on an induction hypothesis. However, due to the high non-convexity of (even shallow) transformers, it's somewhat unavoidable to make other strong assumptions. For example, similar works such as those listed in Table 1 make stronger assumptions than ours even though they consider simpler tasks; see Section 1 for details.
- While the above argument indicates $H\geq N$ may not be necessary to guarantee the convergence of the transformer, it's intuitive that $H$ may need to be large in our setting. This is because shallow transformers have limited expressive power, necessitating more attention heads to solve complex tasks. In fact, we have shown that it's generally impossible to train a transformer to succeed in the non-linear regression tasks we consider at least when $H=1$, see line 330-339 for details.
- The experiment in Figure 3 in the supplementary pdf shows that when $H<N(=30)$, the loss stopped descending when it's far from the minimal value. And it keeps descending when $H=N$ (though slowly).
> **(Q2) motivation/real-world applications**
Thank you for your question. We'll incorporate the following in our updated paper:
- The non-linear regression setting is quite general and can model a wide range of real-world problems across various domains, including classification (by using a softmax function as the output layer [4]), time series forecasting [5], inverse problems [6], financial modeling [7], and physical/mathematical modeling [8].
- A key aspect of our work is the analysis of how transformers learn representations (see Section 3.3). This is fundamental to many complex machine learning tasks, including feature extraction, translation, and generalization. Our insights into how transformers extract and memorize the "inherent information" of basic function maps during training (lines 312-317) could be valuable for understanding these more complex tasks. Besides, our novel insights into how transformers acquire contextual generalization ability with limited data and handle underdetermined templates are relevant to a broader range of ICL tasks beyond regression.
- While our analysis uses a finite dictionary, this approach allows us to derive rigorous theoretical results while still capturing key aspects of how transformers learn and generalize. The insights gained from this analysis can inform our understanding of transformer behavior in more general settings.
> **(limitation) the performance of the trained transformer depends on the relation between $m$ and $N$**
- You're correct that the inference-time performance depends on the $m$-$N$ relationship. This behavior highlights a fundamental trade-off in ICL between having enough examples to determine the underlying function and avoiding overfitting to noisy labels. Far from being a limitation, this is a key finding of our work and reveals nuanced behavior in ICL that differs from traditional supervised learning.
- We've conducted experiments to verify this finding, please refer to Figure 2 in the supplemental pdf and the corresponding descriptions in our global response.
---
[1] Liu, C., Zhu, L., and Belkin, M. (2020). Loss landscapes and optimization in over-parameterized non-linear systems and neural networks.
[2] Karimi et al. (2016). Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition.
[3] Loizou et al. (2021). Stochastic polyak step-size for sgd: An adaptive learning rate for fast convergence.
[4] Fischer, Manfred M. (2015). Neural networks. A class of flexible non-linear models for regression and classification.
[5] F Stulajter (2002). Predictions in Time Series Using Regression Models.
[6] R Nickl (2021). On some information-theoretic aspects of non-linear statistical inverse problems.
[7] T Amemiya (1983). Non-linear regression models.
[8] M Alizamir et al. (2020). A comparative study of several machine learning based non-linear regression methods in estimating solar radiation: Case studies of the USA and Turkey regions.
[9] Y Huang et al. (2023). In-Context Convergence of Transformers.
---
Rebuttal Comment 1.1:
Comment: Thank you for the valuable comments, and for the clarification on the limitation (the relation between $m$ and $N$ affecting the performance) to be a strength of the paper. I agree with you.
Despite the clean proof and the result of the fast convergence of the training, the finite dictionary still seems to be a limitation as the transformer can essentially 'memorize' the basis vectors into the value transforms in the heads. As such, the paper is more about how an attention mechanism can 'memorize' efficiently (possibly better than other architectures), which I agree to be novel. However, direct impact of the paper is unclear. Moreover, the construction using a large number of heads is not very appealing.
Taking all of these into consideration, I shall keep my (positive) score.
---
Rebuttal 2:
Comment: Dear Reviewer MzSs,
We've taken your initial feedback into careful consideration in our response. Could you kindly confirm whether our responses have appropriately addressed your concerns?
If you find that we have properly addressed your concerns, could you please kindly consider increasing your initial score accordingly? Please let us know if you have further comments.
Thank you for your time and effort in reviewing our work!
Many thanks, Authors
---
Rebuttal 3:
Comment: Dear Reviewer MzSs,
Thank you very much for your response!
In addition to what you described as 'memorizing' the basis vectors, our results contain two more points: (i) after 'memorizing/learning' basis vectors, the ICL implements **ridge regression** to use such basis information, which is a new characterization; (ii) such transformers having those properties (memorizing basis vectors and implementing ridge regression) can be learned naturally by gradient descent.
Many thanks,
Authors | Summary: The paper investigates the theoretical understanding of in-context learning, focusing on whether transformers can generalize to unseen examples within a prompt by acquiring contextual knowledge. The paper analyzes the training dynamics of transformers using non-linear regression tasks, demonstrating that multi-head transformers can predict unlabeled inputs given partially labeled prompts. The study shows that the training loss for a shallow multi-head transformer converges linearly to a global minimum, effectively learning to perform ridge regression. Main contributions include: (1) The paper establishes the convergence guarantee of a shallow transformer with multi-head softmax attention trained with gradient descent on general non-linear regression in-context learning tasks. (2) The paper analyzes the transformer’s behavior at inference time after training, demonstrating that the transformer decides its generating template by performing ridge regression. (3) The analysis framework allows overcoming several assumptions made in previous works, such as the requirement for large prompt lengths, orthogonality of data, restrictive initialization conditions, special structure of the transformer, and super wide models.
Strengths: 1. The focus on analyzing training dynamics and convergence guarantees for transformers in ICL tasks is relatively novel. Prior works often concentrate on empirical performance without delving deeply into the theoretical underpinnings.
The combination of gradient descent analysis with multi-head softmax attention in the context of non-linear regression is a creative fusion of ideas that opens up new avenues for understanding how transformers learn contextually.
By overcoming assumptions such as the need for large prompt lengths, orthogonality of data, and super wide models, the paper advances the state of the art and provides a more realistic framework for understanding transformer performance.
2. The paper’s analysis of the convergence dynamics and ridge regression behavior of transformers is mathematically robust, with detailed proofs and clear logical progression.
3. The paper is well-organized, with a logical flow from the problem statement to the theoretical analysis and conclusions, making it easy to follow the argumentation.
Weaknesses: While the paper excels in theoretical analysis, it lacks empirical validation of the proposed concepts and results. Including empirical experiments that demonstrate the practical applicability of the theoretical findings would significantly strengthen the paper. For instance, running experiments on standard benchmarks to show how the proposed theoretical insights translate to improved performance in real-world tasks would validate the results more strongly.
The paper’s discussion on the practical implications of the theoretical findings is somewhat limited. This can make it challenging for practitioners to understand how to apply these insights to real-world problems.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Do you have any empirical results or planned experiments that demonstrate the practical applicability of your theoretical findings?
2. What are the potential impacts of the remaining assumptions and simplifications (e.g., specific initialization conditions, structure of the transformer) on the generalizability of your results?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Please review the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer TyrV
Thank you for your valuable feedback. We agree that empirical validation is crucial and **have included experimental results in the supplementary PDF and the global response**. Note that in-context training on real datasets is highly demanding and usually requires very large models trained on many different datasets [3]. Therefore, it is standard practice for works in this line (e.g., [4,5]) that analyze transformers' in-context ability to experiment on synthetic datasets. Below we address your other points. If these clarifications resolve your primary concerns, we would highly appreciate your consideration of increasing your score. Certainly, please don't hesitate to request any further clarification.
>**(Weaknesses/Q1) discussion on the practical implications of the theoretical findings**
Besides those discussed in the global response, one interesting implication is that our paper suggests assigning different learning rates to different parameter blocks, which has been shown to work well by some very recent work (e.g., [1,2]): the authors show that SGD (which assigns the same learning rate to all blocks) does not work well for transformers, but it is also unnecessary to keep a different learning rate for every parameter as Adam does.
>**What are the potential impacts of the remaining assumptions and simplifications (e.g., specific initialization conditions, structure of the transformer) on the generalizability of your results?**
Thank you for bringing up this important question. We'll incorporate the following discussion in our paper:
- While our current analysis focuses on shallow transformers, we believe our methodology has the potential to be extended to deeper architectures. The core of our proof technique, which combines the Polyak-Łojasiewicz (PL) condition with smoothness to demonstrate convergence (as detailed in the proof of Theorem 1), could potentially be generalized to deeper transformers. Therefore, we expect that our key results, such as the limit point of the output of the transformer (see (19) in Theorem 2), the explanation of the transformer's ICL ability and the analysis of its generalization ability would also apply to deep Transformers.
- The main challenge that we need to overcome in extending the proofs to deeper architectures lies in estimating the PL coefficient $\sigma$ and smoothness coefficient $L$. To this end, it is possible to leverage our techniques for one-layer transformer and carefully conduct a block-by-block analysis potentially via induction arguments.
- As we comment in lines 217-218 and Proposition 1 in the paper, our initialization condition can easily be achieved by random sampling from a Gaussian distribution.
- It's worth noting that we assume Gaussian noise for simplicity and show that the transformer decides its generating template by performing ridge regression (see Section 1.1 and Theorem 2). Our technique could potentially be extended to analyze other types of noise. For example, we could hypothesize that if the noise is Laplacian, the transformer might learn to perform Lasso regression instead. We leave this exploration as future work.
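For readers less familiar with the PL-based convergence argument referenced above, the textbook one-step derivation combining $L$-smoothness with the PL inequality is sketched below (generic constants; the paper's $\sigma$ and $L$ are problem-specific):

```latex
% Descent lemma for an L-smooth loss with gradient step \eta \le 1/L:
\mathcal{L}(\theta_{t+1}) \le \mathcal{L}(\theta_t) - \tfrac{\eta}{2}\,\|\nabla \mathcal{L}(\theta_t)\|^2 .
% PL inequality with constant \sigma:
\|\nabla \mathcal{L}(\theta)\|^2 \ge 2\sigma \bigl(\mathcal{L}(\theta) - \mathcal{L}^\star\bigr).
% Combining the two yields linear (geometric) convergence:
\mathcal{L}(\theta_{t+1}) - \mathcal{L}^\star \le (1 - \eta\sigma)\bigl(\mathcal{L}(\theta_t) - \mathcal{L}^\star\bigr)
\;\Longrightarrow\;
\mathcal{L}(\theta_t) - \mathcal{L}^\star \le (1 - \eta\sigma)^{t}\bigl(\mathcal{L}(\theta_0) - \mathcal{L}^\star\bigr).
```

The difficulty mentioned above lies precisely in establishing valid constants $\sigma$ and $L$ for the transformer's loss; the contraction step itself is standard.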
---
[1] Y Zhang et al. (2024). Adam-mini: Use Fewer Learning Rates To Gain More.
[2] Y Zhang et al. (2024). Why Transformers Need Adam: A Hessian Perspective.
[3] S Min (2022). MetaICL: Learning to Learn In Context.
[4] S Garg et al. (2022). What Can Transformers Learn In-Context? A Case Study of Simple Function Classes.
[5] S Chen et al. (2024). Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality.
---
Rebuttal 2:
Comment: Dear Reviewer TyrV,
We've taken your initial feedback into careful consideration in our response. Could you kindly confirm whether our responses have appropriately addressed your concerns?
If you find that we have properly addressed your concerns, could you please kindly consider increasing your initial score accordingly? Please let us know if you have further comments.
Thank you for your time and effort in reviewing our work!
Many thanks, Authors
---
Rebuttal Comment 2.1:
Title: kind reminder as the rebuttal deadline approaches
Comment: Dear Reviewer TyrV,
As the author-reviewer discussion period will end soon, we would like to check whether our responses have properly addressed your concerns. If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer TyrV,
The author-reviewer discussion period will end in less than one day. We sincerely hope to receive your feedback and see if our responses have properly addressed your concerns. If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards, Authors | Summary: This paper presents a rigorous theoretical analysis of the training dynamics and generalization capabilities of a one-layer transformer with multi-head softmax attention for in-context learning (ICL) of non-linear regression tasks. The authors consider a more challenging and realistic setting where prompts contain only a small number of noisy labeled examples, insufficient to fully determine the underlying template function. This addresses limitations in previous theoretical work that often required unrealistically large numbers of examples per prompt.
A key theoretical contribution is proving convergence of the training loss to its global minimum at a linear rate when optimizing with gradient descent. This is the first such convergence result for transformers with multi-head softmax attention on ICL tasks. The analysis reveals that the transformer effectively learns to perform ridge regression during training. The paper provides precise bounds on the mean squared error between the transformer's predictions and those of the optimal ridge regression solution, as a function of the number of training iterations.
The authors demonstrate that multi-head attention is crucial for succeeding at this ICL task. They provide a lower bound on the number of heads needed (H ≥ N), while also noting that too many heads can slow down convergence. The analysis shows how the multi-head mechanism allows the transformer to approximate the required matrix operations for ridge regression.
Strengths: - Setting: The paper overcomes several limiting assumptions made in previous theoretical work, such as requiring very long prompts, orthogonal input data, restrictive initialization conditions, or unrealistically wide models. Key technical innovations include a novel reformulation of the loss function to remove expectations, proving smoothness and the Polyak-Łojasiewicz condition for the loss landscape, and carefully bounding various matrix norms and eigenvalues throughout training.
- Intuition and connection to ridge regression: The analysis provides insight into how the regularization strength in the implicit ridge regression depends on the ratio of the feature dimension m to the number of examples N in each prompt. Also, the authors show that the transformer acquires two types of generalization capabilities: contextual generalization to unseen examples within a task by learning to infer and apply the underlying template function, and generalization to unseen tasks by learning a general strategy (ridge regression) that works across different λ vectors.
The paper provides rigorous mathematical analysis in a realistic setting. In doing so it offers novel insights, in particular the notion of implicitly learning to perform ridge regression and acquire generalizable knowledge about the representation function.
Also, the paper provides the first convergence guarantees for transformers with multi-head softmax attention on ICL tasks, which is a significant theoretical contribution.
Weaknesses: - Limited model architecture (one-layer transformer) which is understandable
- Lack of empirical validation: The paper is purely theoretical and does not include any experimental results to validate its predictions.
- Focus on regression: The analysis is limited to regression tasks, which represent only a subset of the problems typically addressed by transformers and in-context learning. Many real-world applications involve classification, sequence generation, or more complex structured prediction tasks. It's not immediately clear how the insights about ridge regression would translate to these other task types.
- Lack of comparison to other ICL approaches: The paper doesn't provide a comprehensive comparison to other theoretical approaches to in-context learning. While it does mention some previous work, a more in-depth discussion of how this approach relates to or improves upon other theoretical frameworks for ICL would have provided valuable context.
Technical Quality: 3
Clarity: 3
Questions for Authors: Have the authors considered adding any toy-domain experiments that would illustrate and further validate their claims?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - Simplified model: The analysis is limited to a one-layer transformer, which is much simpler than state-of-the-art models used in practice.
- Lack of empirical validation: While the theoretical results are impressive, the paper doesn't include experimental results to validate its predictions.
- Focus on regression: The paper only considers regression tasks, and it's not immediately clear how these results would generalize to classification or more complex tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer dubr
Thank you for your review and positive comments. We've **included some experiments to verify our theoretical findings in the supplementary pdf and the global response**. Below we address your other points. If our responses resolve your concerns, we'd highly appreciate your consideration of increasing your current score. Certainly, please also let us know if you have further questions.
>**simplified model**
- Our analysis indeed focuses on shallow transformers, but we believe our methodology lays a foundation for extending to deeper architectures. The core of our proof technique, which combines the Polyak-Łojasiewicz (PL) condition with smoothness to demonstrate convergence (as detailed in the proof of Theorem 1), provides a framework that could potentially be adapted for deeper transformers. We anticipate that our key results, such as the limit point of the transformer's output (see (19) in Theorem 2), the explanation of the transformer's ICL ability, and the analysis of its generalization capability, would extend to deeper models, albeit with more complex derivations.
- However, extending the proofs to deeper architectures presents significant challenges, particularly in estimating the PL coefficient $\sigma$ and smoothness coefficient $L$. As evident from our proofs, even for a one-layer transformer, this computation is highly non-trivial and requires a meticulous block-by-block analysis. For deeper models, this analysis would become substantially more intricate, requiring novel mathematical techniques to handle the increased complexity.
- To the best of our knowledge, there's no existing work analyzing deep transformers' training dynamics from an optimization perspective. Most theoretical works, including those in Table 1, focus on shallow transformers due to these challenges. Developing rigorous analytical methods for deep transformers remains an open and exciting direction for future research in this field.
>**focus on regression**
- Our framework considers non-linear regression tasks, which are significantly more complex than the linear tasks analyzed in previous theoretical works (as highlighted in Table 1 of our paper). We expect that such a contribution within the regression framework is valuable to the community.
- A key aspect of our work is the analysis of how transformers learn representations (see Section 3.3). This is fundamental to many other machine learning tasks, including feature extraction, translation, and generalization. Our insights into how transformers extract and memorize "inherent information" of basic function maps during training (lines 312-317) could be valuable for understanding other more complex tasks. Besides, our novel insights into how transformers acquire contextual generalization ability with limited data and handle underdetermined templates is relevant to a broader range of in-context learning tasks beyond regression. We leave the extensions of our settings as important directions for future work.
---
Rebuttal 2:
Comment: Dear Reviewer dubr,
We've taken your initial feedback into careful consideration in our response. Could you kindly confirm whether our responses have appropriately addressed your concerns?
If you find that we have properly addressed your concerns, could you please kindly consider increasing your initial score accordingly? Please let us know if you have further comments.
Thank you for your time and effort in reviewing our work!
Many thanks, Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for your rebuttal and the additional toy experiments, which I find sensible (as exemplified by discussion with Reviewer vG3D) and illustrative. I have therefore increased my score. | Rebuttal 1:
Rebuttal: We conducted the following experiments to validate our theoretical findings. The attached file includes the experimental plots. We will add these results as an experiment section in our revision.
**Set up.** We conduct experiments using a synthetic dataset (standard practice in this line of work analyzing transformers' in-context ability; see [1,2] for example), where we randomly generate each token $v_k$ and its representation $f(v_k)$ from a standard Gaussian. We experiment either on the 1-layer transformer described in Section 2 or on a standard 4-layer transformer from [3] with $d_{model}=256$ and $d_{ff}=512$. For the 1-layer transformer experiments, we set the training loss to be the population loss defined in (8), initialize $Q_h^{(0)}$ using a standard Gaussian, and set $w_h^{(0)}$ to 0 ($h\in[H]$), identical to what is specified in Section 3. For the experiments on 4-layer transformers, we generate $\lambda$ from a standard Gaussian distribution to create a training set with 10000 samples and an in-domain test set with 200 samples; we also create an out-of-domain test set with 200 samples by sampling $\lambda$ from $N(1e,4I)$. Given $\lambda$, we generate the label $y_k$ of token $v_k$ using (1), $k\in[K]$. We train with a batch size of 256. All experiments use the Adam optimizer with a learning rate of 1e-4.
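To make this setup concrete, here is a minimal sketch of the data-generating process (hypothetical code, not our actual implementation; in particular, we assume the label model $y_k=\lambda^\top f(v_k)+\text{noise}$ as a reading of (1), and all names and defaults below are illustrative):

```python
import numpy as np

def make_icl_prompt(d=100, m=20, K=100, N=30, noise_std=0.1, rng=None):
    # Hypothetical sketch of the synthetic setup described above.
    # Tokens v_k and representations f(v_k) are drawn from standard Gaussians;
    # labels are assumed to follow y_k = lambda^T f(v_k) + noise.
    if rng is None:
        rng = np.random.default_rng(0)
    V = rng.standard_normal((K, d))    # K tokens v_k in R^d
    F = rng.standard_normal((K, m))    # representations f(v_k) in R^m
    lam = rng.standard_normal(m)       # task vector lambda, one per prompt
    y = F @ lam + noise_std * rng.standard_normal(K)
    labeled = slice(0, N)              # only the first N examples carry labels in the prompt
    return V, F, y, labeled

V, F, y, labeled = make_icl_prompt()
print(V.shape, F.shape, y.shape)  # (100, 100) (100, 20) (100,)
```

Each prompt thus pairs $N$ noisy labeled examples with $K-N$ unlabeled queries, matching the dimensions $N=30$, $K=100$, $d=100$, $m=20$ used in Figure 1.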
Figure 1 in the attached file shows the training and inference loss of the 1-layer transformer, where we measure the inference loss by $\frac{1}{K}||\hat y-\hat y^\star||_2^2$ to validate (19): after sufficient training, the output of the transformer $\hat y$ converges to $\hat y^\star$. We set $N=30$, $K=100$, $d=100$, $m=20$, $H=30$. We generate $\lambda$ from $N(0.5e,0.01I)$, which is outside the training distribution (Assumption 1). Thus this experiment also shows the transformer's contextual generalization to unseen examples and generalization to unseen tasks, validating our claim in Section 3.2. We also validate the two types of generalization capabilities on the 4-layer transformer, as shown in Figure 4, where we set $K=200$ and keep the other configurations the same as in Figure 1. From Figure 4 we can see that the three curves have the same descending trend, although the inference loss on the out-of-domain dataset is higher than that on the in-domain dataset.
Figure 2 verifies our claim at the end of Section 2: the closer $m$ is to $N$, the better the transformer's choice is. To be specific, we fix $m=100,\tau=0.01$, and plot $\frac{1}{K}||\hat{y}^\star-\hat{y}^{\text{best}}||_2^2$ for $N$ ranging from 50 to 150. When $N=m=100$, this squared norm becomes exactly 0.
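Since $\hat y^\star$ is characterized as a ridge-regression solution, a generic ridge predictor for this kind of prompt can be sketched as follows (a hedged illustration only; the exact regularization characterizing $\hat y^\star$ in Theorem 2 may differ from the illustrative $\tau$ used here):

```python
import numpy as np

def ridge_predict(F_labeled, y_labeled, F_all, tau):
    # Generic ridge regression: estimate lambda from the N labeled pairs,
    # then predict labels for all K tokens from their representations.
    m = F_labeled.shape[1]
    lam_hat = np.linalg.solve(
        F_labeled.T @ F_labeled + tau * np.eye(m),
        F_labeled.T @ y_labeled,
    )
    return F_all @ lam_hat

rng = np.random.default_rng(1)
m, N, K, tau = 20, 30, 100, 0.01
F = rng.standard_normal((K, m))   # representations f(v_k)
lam = rng.standard_normal(m)      # ground-truth template coefficients
y = F @ lam                       # noiseless labels for illustration
y_hat = ridge_predict(F[:N], y[:N], F, tau)
mse = float(np.mean((y_hat - y) ** 2))  # small when N >= m and tau is small
```

With $N \ge m$ and a small $\tau$, the ridge prediction nearly interpolates the noiseless labels; as $N$ drops below $m$ the system becomes underdetermined and the regularizer decides among candidate solutions, which is the $m$-$N$ trade-off discussed above.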
Figure 3 validates our claim on the number of attention heads $H$ at the end of Section 3.3 using the 1-layer transformer. In this experiment we use different $H$ to plot the training loss curves, keeping the other configurations the same as in Figure 1. From Figure 3 we can see that $H$ needs to be set large enough to guarantee the convergence of the training loss. However, setting $H$ too large ($H=400$) leads to instability and divergence of the loss. Recall that in Proposition 1, we require $H\geq N$ for our convergence results to hold. Although this condition may not be necessary, Figure 3 shows that when $H<N=30$, the loss stops descending while still far from the minimal value. On the other hand, the loss keeps descending when $H=30$ (though slowly).
We also explore how $H$ affects training on the 4-layer transformer, as displayed in Figure 5, where we set $K=200$ and keep the configurations other than $H$ the same as in Figure 3. We fix the wall-clock time to 100 seconds and plot the training loss curves for different $H$. The left figure shows how the final training and test losses change with $H$: the losses converge faster with smaller $H$ (here the final training loss is smallest when $H=4$). The right figure, showing training curves for different $H$ within 100s, may provide some explanation for this phenomenon: (i) transformers with larger $H$ complete fewer iterations within a fixed amount of time (the curves corresponding to larger $H$ are shorter); (ii) the training loss curves corresponding to large $H$ ($H=32,64$) descend more slowly. This indicates that our claim that larger $H$ may yield a slower convergence rate remains valid on deeper transformers. Note that, unlike the 1-layer transformer, deeper transformers don't require a large $H$ to guarantee convergence. This is because deep transformers have great expressive power even when $H$ is small.
---
[1] S Garg et al. (2022). What Can Transformers Learn In-Context? A Case Study of Simple Function Classes.
[2] S Chen et al. (2024). Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality.
[3] A Vaswani et al. (2017). Attention is all you need.
Pdf: /pdf/0ab16335864c6f8111ad43a7956a39d17a7617ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Variational Distillation of Diffusion Policies into Mixture of Experts | Accept (poster) | Summary: This paper introduces a theoretical method for extracting an MoE policy from a pretrained diffusion policy. The advantages of the MoE policy are faster sampling than the diffusion policy, and more stable training and better performance than methods that train an MoE from scratch.
Strengths: The proposed method addresses the policy diversity issue in RL while maintaining high training/inference efficiency. The experiments show clear improvements over previous MoE-based imitation learning methods. The theoretical results are rigorously proved and solid.
Weaknesses: 1. One main advantage of an MoE policy over a deterministic policy is its diversity. However, this paper seems to focus only on evaluating the final performance of the algorithms and does not compare the diversity of different algorithms. Regarding performance alone, it seems to me that a Gaussian policy + reverse KL objective is good enough; in what cases do we need more diverse policies? This needs to be justified.
2. Variational distillation is a known technique in the 3D field. Although its application in RL is new and meaningful, this still indicates limited novelty. The authors could place more emphasis on the new techniques introduced in this paper when applying variational distillation for policy extraction.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the mixture number n in your experiments? Can you provide any experiment results ablating n? This could be very informative.
2. I highly suggest the authors provide the source code during rebuttal. Reproducibility is crucial for this kind of paper. (Even if it is just unsorted/raw code)
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to review our work and the many helpful comments and suggestions. We hope the following replies sufficiently address the raised questions and concerns. We will update the paper accordingly.
---
> One main advantage of MoE policy against deterministic policy is its diversity. However, this paper seems to only focus on evaluating the final performance of algorithms and does not compare the diversity of different algorithms.
>
We thank the reviewer for sharing their concern. If we understand correctly, when the reviewer mentions "final performance" they are referring to the success rate. In our evaluation, we additionally assess the diversity of policies using *task entropy (TE)*.
TE was originally introduced by [1] as a scalar metric ranging from 0 to 1, where 0 indicates that the model has only learned one way to solve the task, and 1 indicates that the model has learned all skills present in the demonstration data.
We apologize for any confusion caused by our description of TE as a measure of versatility rather than diversity. It is indeed intended to quantify the model's ability to exhibit diverse behaviors. We will make the necessary changes to the experiment section to further enhance clarity.
---
> Regarding on performance solely, it seems to me Gaussian policy + reverse KL objective is good enough, in what cases do we need more diverse policies? This needs to be justified.
>
We thank the reviewer for their suggestion to enhance the motivation for policies capable of learning diverse behavior. In response, we have incorporated additional points and references into the introduction of our paper.
Additionally, we summarized these points below:
1. **Improving Generalization:** If the learned policy overfits a specific set of demonstrated behaviors, it may not generalize well to new situations. By exposing the model to diverse behaviors, the risk of overfitting is reduced, and the learned policy is more likely to capture the underlying principles of the task rather than memorizing specific trajectories [1, 2, 3].
2. **Enhancing Skill Transfer:** Learning diverse behaviors facilitates better skill transfer across different but related tasks. If the agent can imitate a wide range of behaviors, it is more likely to possess a set of skills that can be applied to various tasks, making it a more versatile and capable learner [1, 3].
3. **Behavior in complex multi-agent settings:** Following a multimodal policy through a diverse set of skills, has been shown to be advantageous in multi-agent settings, such as table tennis [4].
---
> Variational distillation is a known technique in the 3D field. Although its application in RL is new and meaningful, this still indicates limited novelty. Could emphasize more on the new technique introduced in this paper when applying variational distillation for policy extraction.
>
We assume the reviewer is referring to [6] when mentioning that variational distillation is a known technique in the 3D field. We will add further details to the 'Related Works' section, highlighting the distinctions between their work and ours to emphasize the novelty introduced in our approach.
In essence, [6] aims to generate 3D scenes using a pre-trained 2D teacher model for distillation. Their approach fundamentally differs from ours, as they focus on distilling knowledge from one diffusion model into another diffusion model. In contrast, our goal is to distill a diffusion model into another family of generative models, specifically Mixture of Experts, to achieve favorable properties such as faster inference and tractable likelihoods.
---
> What is the mixture number n in your experiments?
>
To determine the value of n, we conducted a grid search and found that using n=8 yields consistently good performance across all D3IL tasks, and n=4 yields good results for Franka kitchen and block push. Additional details about the hyperparameters can be found in Appendix E.2, Table 4.
We will also include more detailed information about the mixture number in the main section of the paper.
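For intuition on what an $n$-component mixture policy looks like at inference time, here is a minimal, hypothetical sketch of sampling an action from a Gaussian MoE (not our actual implementation; all names and shapes below are illustrative):

```python
import numpy as np

def moe_sample(means, stds, gate_logits, rng=None):
    # Hypothetical Gaussian MoE sampler: draw one expert index from the
    # softmax gate, then sample from that expert's diagonal Gaussian.
    # means, stds: (n, action_dim); gate_logits: (n,)
    if rng is None:
        rng = np.random.default_rng(0)
    gate = np.exp(gate_logits - gate_logits.max())
    gate /= gate.sum()
    k = rng.choice(len(gate), p=gate)
    return means[k] + stds[k] * rng.standard_normal(means.shape[1])

n, action_dim = 8, 7  # n=8 is the mixture number used for the D3IL tasks
rng = np.random.default_rng(0)
action = moe_sample(rng.standard_normal((n, action_dim)),
                    np.full((n, action_dim), 0.1),
                    rng.standard_normal(n),
                    rng)
print(action.shape)  # (7,)
```

A single gating softmax plus one Gaussian draw replaces the multi-step denoising loop of the diffusion policy, which is where the inference speed-up comes from.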
---
> Can you provide any experiment results ablating n? This could be very informative.
>
We agree that such an ablation is very informative and in fact already included this ablation in the original submission; see Figure 3a, with accompanying text in Section 5.5. The main takeaway is that increasing the mixture number improves task entropy and therefore the skill diversity of the learned policy. We will emphasize this ablation more strongly in the next iteration of the paper.
---
> I highly suggest the authors provide the source code during rebuttal. Reproducibility is crucial for this kind of paper. (Even if it is just unsorted/raw code)
>
We apologize for not including the source code in our initial submission. We have now provided a link to an anonymous GitHub repository containing the code for all our experiments. Additionally, we have included an IPython notebook to test our method on a small toy task, which essentially reproduces Figure 2. We have sent these links to the AC in an Official Comment, as the rebuttal guidelines prevent us from directly sharing these links in our response to the reviewers.
---
We would like to thank the reviewer again and welcome the opportunity to address any additional concerns or questions that the reviewers may have.
---
[1] Towards Diverse Behaviors: A Benchmark for Imitation Learning with Human Demonstrations, ICLR ‘2024
[2] Neural Probabilistic Motor Primitives for Humanoid Control, ICLR ‘19
[3] InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations, NeurIPS ‘17
[4] One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL, NeurIPS ‘20
[5] Specializing Versatile Skill Libraries using Local Mixture of Experts, CoRL ‘21
[6] High-fidelity and diverse text-to-3d generation with variational score distillation, NeurIPS ‘23
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for the very detailed responses, which have resolved most of my concerns. I raise the score from 5 to 6.
However, I would like to share one very personal opinion: most (if not all) current public RL benchmarks and their evaluation metrics lack a strong need for policy diversity, which undermines the motivation for diversified policies. I believe designing new tasks and more difficult benchmarks is of critical importance for designing these algorithms.
---
Reply to Comment 1.1.1:
Title: We thank the reviewer for the positive feedback
Comment: We thank the reviewer for the positive feedback and are pleased that our response addressed their concerns. We fully agree that the community needs new benchmarks that require and evaluate diversity in RL settings. We are happy to address any other concerns that might arise during discussion session. | Summary: This study introduces Variational Diffusion Distillation (VDD), a novel method that distills pre-trained diffusion models into Mixture of Experts (MoE) frameworks. VDD addresses diffusion models' drawbacks of intractable likelihoods and long inference times, while leveraging their ability to represent complex distributions. By leveraging a decompositional variational objective, VDD trains MoEs efficiently, enabling real-time applications. VDD outperforms existing distillation and MoE training methods in several complex behavior learning tasks.
Strengths: 1) The technical approach appears to be sound, demonstrating a rigorous and well-grounded methodology.
2) The visualizations presented in the paper are meticulously crafted, showcasing a high level of attention to detail and aesthetics.
3) The experimental section is quite impressive, offering a substantial evaluation of the proposed method.
Weaknesses: 1) The challenges posed, although relevant to diffusion models and their potential difficulties in handling certain tasks or long inference times, lack sufficient evidence to conclusively state that these issues persist across all novel variants of diffusion models.
2) The use of identical phrasing in the abstract and introduction could potentially limit the reader's engagement, as it fails to provide a nuanced progression of ideas.
3) A missing period on line 218 detracts slightly from the overall readability and professionalism of the manuscript.
4) It appears that the experiments for the DDPM component in Table 1(a) are incomplete for the kitchen and block push datasets, which limits the comprehensiveness of the evaluation.
5) While the proposed method achieves commendable results, it does not consistently outperform all metrics, necessitating a deeper analysis to explain the observed variations and identify potential avenues for improvement.
6) The inconsistency in Table 3 left, where 2.16 is highlighted as better than 2.15, is confusing and suggests that the experiment may not have been fully completed.
7) The comparison to a limited number of methods may limit the ability to comprehensively assess the strengths and weaknesses of the proposed approach.
8) The lack of illustrative visualizations, such as good and bad case studies, in the main text hinders the reader's ability to fully grasp the performance and limitations of the method in real-world scenarios.
Technical Quality: 3
Clarity: 2
Questions for Authors: see weaknesses
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to review our work and the many helpful comments and suggestions. We hope the following replies sufficiently address the raised questions and concerns. We will update the paper accordingly.
> The challenges posed, although relevant to diffusion models and their potential difficulties in handling certain tasks or long inference times, lack sufficient evidence to conclusively state that these issues persist across all novel variants of diffusion models.
>
To the best of our knowledge, there are currently no variants of diffusion models that enable precise one-step generation with tractable likelihoods. Therefore, the study of diffusion distillation remains an ongoing area of investigation. Recent studies include [1, 2]. However, if the reviewer is aware of any novel diffusion model variants that address the challenges outlined in our work, we would be very happy to include them as references/baselines in our work.
---
> The use of identical phrasing in the abstract and introduction could potentially limit the reader's engagement, as it fails to provide a nuanced progression of ideas
>
We thank the reviewer and will do our best to remove respective phrasing and make the paper more engaging for the reader.
---
> […] missing period on line 218 […] The inconsistency in Table 3 left, where 2.16 is highlighted as better than 2.15 is confusing […]
>
We thank the reviewer for carefully reading the paper and identifying the formatting issues. We have addressed and corrected them.
---
> It appears that the experiments for the DDPM component in Table 1(a) are incomplete for the kitchen and block push datasets, which limits the comprehensiveness of the evaluation.
>
We apologize for not including these results in the initial submission due to time constraints. We have added the missing results to Table R1 in the PDF that accompanies the rebuttal.
---
> While the proposed method achieves commendable results, it does not consistently outperform all metrics, necessitating a deeper analysis to explain the observed variations and identify potential avenues for improvement.
>
We thank the reviewer for sharing their concern. However, we want to emphasize that the primary goal of our paper is not to outperform on every metric, as the performance of our approach is bounded by the diffusion ‘teacher’ model.
Nevertheless, we agree with the reviewer that the submitted version of the paper did not sufficiently discuss future directions to enhance distillation capabilities. We will add this section to the manuscript and provide a summary below.
**Future Work.** A promising avenue for further research is to utilize the features of the diffusion 'teacher' model to reduce training time and enhance performance. This can be achieved by leveraging the diffusion model as a backbone and fine-tuning an MoE head to predict the means and covariance matrices of the experts. The time-dependence of the diffusion model can be directly employed to train the MoE on multiple noise levels, effectively eliminating the need for the time-step selection scheme introduced in Section 4.3.
---
> The comparison to a limited number of methods may limit the ability to comprehensively assess the strengths and weaknesses of the proposed approach.
>
To the best of our knowledge, we have considered the most recent and prominent baselines for training MoE models. However, we acknowledge that novel methods for distilling diffusion models, such as Consistency Trajectory Model (CTM) [3], were not included in our evaluation as they were not published at the time of submission. In order to better evaluate the strength of VDD, we additionally add CTM as a new distillation baseline. The evaluation results are presented in the PDF that accompanies the rebuttal.
---
> The lack of illustrative visualizations, such as good and bad case studies, in the main text hinders the reader's ability to fully grasp the performance and limitations of the method in real-world scenarios.
>
We thank the reviewer for bringing the need for illustrative visualizations to our attention. In response, we included additional visualizations (Figure R1) in the PDF that accompanies the rebuttal.
Figure R1 highlights the most likely experts according to the gating probability at a given state. We can see that using a single component can achieve a perfect success rate at the cost of losing behavior diversity. Conversely, using many experts potentially results in a slightly lower success rate but increased behavior diversity. These qualitative results are consistent with the quantitative results from the ablation study presented in Figure 3a of the paper.
---
We express our gratitude to the reviewer for their valuable comments and suggestions. We are pleased to address any additional questions or concerns that may arise.
---
[1] Song, Yang, and Prafulla Dhariwal. "Improved techniques for training consistency models.” ICLR 2024
[2] Xie, S., et al. “EM Distillation for One-step Diffusion Models”, Preprint
[3] Kim, Dongjun, et al. "Consistency trajectory models: Learning probability flow ode trajectory of diffusion." ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
As the discussion period is nearing its conclusion, we kindly ask you to engage in the discussion and provide notes on any concerns that have not yet been addressed, along with the reasons why.
Thank you for your attention to this matter.
AC.
---
Rebuttal Comment 1.2:
Comment: I appreciate the authors for the efforts in answering my questions. Although some issues were not substantively addressed, I maintain my positive score due to the overall quality of the paper.
---
Reply to Comment 1.2.1:
Comment: We sincerely thank the reviewer for the positive comments on our work. We aimed to address all questions accurately and concisely, and we apologize if doing so left open questions. If the reviewer points us towards the respective issues, we would be happy to clarify and elaborate on them.
---
Reply to Comment 1.2.2:
Title: Regarding the unexpected lowering of the score
Comment: We are very surprised and saddened to see that the reviewer unexpectedly lowered their score without further explanation.
We thoroughly checked the review and our rebuttal again and fail to see which concerns have not been substantively addressed.
In particular, we want to emphasize that in response to the reviewer's concerns, we conducted additional experiments, evaluations and visualizations during the rebuttal period:
- We performed DDPM distillation experiments on the Kitchen and Block Push datasets (in response to **Weakness 4**).
- We introduced the recently proposed Consistency Trajectory Model (CTM) as a new baseline (in response to **Weakness 7**).
- We included an illustrative visualization to provide deeper insights into our method (in response to **Weakness 8**).
The remaining concerns have also been addressed in the original rebuttal:
- **Weakness 1** was addressed and we offered to include additional discussion and baselines if pointed towards respective work.
- **Weaknesses 2, 3** and **6** were addressed by fixing typos and rephrasing sentences in the improved manuscript, which cannot be uploaded per the rebuttal guidelines.
- **Weakness 5** has been addressed and helped us clarify the goals, limitations and future work of our approach
We sincerely hope there is an underlying misunderstanding and want to point out that the additional experiments and visualization are in the **PDF attached to the "Author Rebuttal" post** and not in the main manuscript (adhering to the rebuttal guidelines).
We are of course happy to further clarify and elaborate on any of these concerns.
---
Rebuttal 2:
Title: Please reply to the rebuttal.
Comment: Dear Reviewer,
Please reply to the rebuttal.
AC. | Summary: This paper presents a variational inference method for distilling denoising diffusion policies into Mixture-of-Experts (MoE) policies. The primary motivation is to combine the strengths of both worlds - the ability to learn complex, multi-modal distributions of diffusion models - and the efficiency of MoEs offering faster inference and tractable likelihoods. Importantly, the decomposed upper bound of the variational objective enables separate, and thus more robust training for different experts. The proposed method is empirically evaluated on 9 behavior learning tasks. The author demonstrate its ability to distill complex distributions, outperforming existing distillation methods.
Strengths: 1. clear motivation of combining the strengths of both diffusion models and MoE.
2. combining the decomposed objective with EM is shown to be effective in escaping certain challenges of training MoEs.
3. the paper is overall well-organized, with clear explanations of the method and detailed experimental results.
Weaknesses: 1. While EM is arguably able to handle some limitations, it may also suffer from longer convergence times during training. Moreover, while the authors discuss inference-time improvements, this paper does not analyze training cost. As the training involves optimizing multiple experts and a gating network, and the “optimal” number of experts is difficult to pre-determine, it could potentially be computationally intensive. Is it possible to take task similarity into account and perform a kind of meta-training, like policy distillation?
2. The authors do not explicitly discuss how well VDD can capture long-range temporal dependencies that might be present in the original diffusion model, given that it improves the inference speed. However, this can be important for sequential decision-making tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses section
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have included a discussion of the limitations and shed light on how they might be approached in future studies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to review our work and the many helpful comments and suggestions. We hope the following replies sufficiently address the raised questions and concerns. We will update the paper accordingly.
---
> While EM is arguably able to handle some limitations, it may also suffer from longer convergence times during training. Moreover, while the authors discuss inference-time improvements, this paper does not analyze training cost […]
>
We thank the reviewer for bringing this issue to our attention. We apologize for not reporting the training times for a varying number of experts in the initial draft of the paper. We added a table showing the training costs using $n \in \{1,2,4,8,16\}$ components to the PDF that accompanies the rebuttal. Figure R2 shows that both training time and the number of parameters increase sub-linearly with $n$.
---
> As the training involves optimizing multiple experts and a gating network, when the “optimal” number of experts is difficult to pre-determine, it could potentially be computationally intensive.
>
Since the approach leverages variational inference, and hence the inherent mode-seeking behavior of the reverse KL, VDD is relatively robust to the number of components. In fact, the results reported in the paper use $n=8$ for all D3IL tasks and $n=4$ for both kitchen and block push. Moreover, using a large number of components introduces only slight variations in performance criteria, as the gating network can deactivate those that are not needed to solve a task, as shown by the ablation study in Figure 3a in the paper and Figure R1 in the PDF that accompanies the rebuttal.
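The mode-seeking property invoked here can be illustrated with a small numerical sketch (illustrative toy distributions, not the paper's objective): fitting a single fixed-width Gaussian to a bimodal target under the reverse KL selects one mode, while the forward KL places it between the modes.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
# Bimodal target with two well-separated modes at -3 and +3.
p = 0.5 * gauss(x, -3.0, 0.5) + 0.5 * gauss(x, 3.0, 0.5)

def reverse_kl(mu):
    # KL(q || p): penalizes q placing mass where p has none -> mode-seeking.
    q = gauss(x, mu, 0.5)
    return np.sum(q * (np.log(q + 1e-300) - np.log(p + 1e-300))) * dx

def forward_kl(mu):
    # KL(p || q): penalizes q missing mass of p -> mode-covering.
    q = gauss(x, mu, 0.5)
    return np.sum(p * (np.log(p + 1e-300) - np.log(q + 1e-300))) * dx

mus = np.linspace(-5.0, 5.0, 201)
best_rev = mus[np.argmin([reverse_kl(m) for m in mus])]
best_fwd = mus[np.argmin([forward_kl(m) for m in mus])]
print(best_rev)  # lands on one of the two modes (close to -3 or +3)
print(best_fwd)  # averages over both modes (close to 0)
```

This limited-capacity setting is also why reverse and forward KL need not share a minimizer in practice, even though they do for a fully expressive model class.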
---
> Is it possible to take task similarity into account and perform a kind of meta-training, like policy distillation?
>
We thank the reviewer for the interesting suggestion. We believe that using a shared feature embedding across tasks could potentially speed up training time and is a promising avenue for future research. Currently, we are exploring how to efficiently leverage the features learned by the teacher diffusion model to improve training times.
---
> The authors do not explicitly discuss how well VDD can capture long-range temporal dependencies that might be present in the original diffusion model […]
>
VDD shares the same capacity for modeling temporal dependencies as the teacher diffusion model, as it employs the same transformer backbone. We argue that VDD's comparable success rates to the diffusion model demonstrate its ability to capture the long-range temporal dependencies inherent in the original diffusion model.
---
We hope that our responses have addressed your concerns. We would be more than happy to address any remaining questions/concerns.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the efforts in answering my questions and I have no more questions at this stage. I will keep my score and lean towards positive outcome.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive evaluation and valuable suggestions, and we are pleased that our response addressed their concerns. We are happy to address additional questions and suggestions to improve our work further. | Summary: This paper presents Variational Diffusion Distillation (VDD), a method that distills denoising diffusion policies into Mixtures of Experts (MoE) using variational inference. Diffusion Models excel in learning complex distributions for behavior learning but have drawbacks like slow inference times. MoEs address these issues but are hard to train. In VDD, each expert can be trained separately under the corresponding guidance from the diffusion teacher. VDD demonstrates convincing performance in distilling complex distributions and outperforming existing methods across nine behavior learning tasks.
Strengths: The paper addresses the problem of distilling an expressive but slow diffusion model into a faster Mixture of Experts (MoE) model. MoE is an ideal target for distillation due to its intrinsic structure and rapid computational process. Though closely related to Variational Score Distillation, the proposed extension to MoE generators is novel. The claims appear valid, and I found no issues in the derivations. Overall, the paper is well-written and clear.
Weaknesses: - I like the idea of updating the gating of the MoE, which has been proven to be crucial for the success of VDD in an ablation study. However, it appears a bit unnatural to me that the experts are learned with the reverse KL while the gating is learned with the forward KL, even though the minimizers of the forward and reverse KLs should align. Authors may find it interesting to visit another perspective from a probably concurrent work (Xie et al., 2024), in which a new distillation framework is derived from the forward KL.
- A very relevant work on distilling diffusion models, Luo et al. 2023, is missing.
- I observed some performance gaps between VDD-DDPM and VDD-BESO across various tasks even though the teachers perform similarly. I wonder if VDD is sensitive to the specific forward process used in the diffusion model.
- In my humble opinion, the authors could try some more complex tasks to demonstrate the capabilities of VDD. For instance, implementing a fast and expressive model on a dexterous hand would benefit from an expressive policy like diffusion while also requiring speed. Prior work exists in directly learning an MoE image generation model with the EM algorithm from data (Yu et al., 2019). Given the power of diffusion teachers, I would expect a distillation method to perform better than directly learning from data.
Xie et al. 2024, EM Distillation for One-step Diffusion Models
Luo et al. 2023, Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models
Yu et al. 2021, Unsupervised Foreground Extraction via Deep Region Competition
Technical Quality: 3
Clarity: 3
Questions for Authors: - It appears that in simpler tasks like kitchen and block push, the distilled model underperforms compared to the original model. However, in more complex tasks, many distilled models seem to outperform the original. I am curious if the authors have some insights on this.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provided a limitation statement, mainly concerning the expressiveness of MoEs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive comments and are grateful for the valuable suggestions and comments. We hope the following replies sufficiently address the raised questions and concerns. We will update the paper accordingly.
---
> Authors may find it interesting to visit another perspective from a probably concurrent work (Xie et al., 2024), in which a new distillation framework is derived from forward KL […] A very relevant work on distilling diffusion models, Luo et al. 2023, is missing.
>
We thank the reviewer for pointing us towards these very interesting and related works. We will include these references in the main manuscript and further highlight both the commonalities and differences between these works and our approach.
---
> I observed some performance gaps between VDD-DDPM and VDD-BESO across various tasks even though the teachers perform similarly. I wonder if VDD is sensitive to the specific forward process used in the diffusion model.
>
Indeed, in our experience BESO appears to be easier to distill. We attribute this to the different forward SDE, which in the case of BESO is a simple (scaled) Wiener process. In contrast, the DDPM SDE contains an additional non-zero drift term, which makes the distillation harder.
---
> In my humble opinion, the author could try some more complex tasks to demonstrate the capabilities of VDD. For instance, implementing a fast and expressive model on a dexterous hand would benefit from an expressive policy like diffusion while also requiring speed […]
>
We thank the reviewer for the encouraging suggestion. However, we would like to emphasize that the tasks used in this work are taken from a very recent benchmark suite [1], allowing for the quantification of the behavior diversity of the learned policy. Maintaining this diversity through the distillation process is an integral part of our approach. Moreover, this task suite contains several complex manipulation tasks. These complexities are particularly evident in tasks such as *Sorting (Image)* and *Stacking (Image)*, where the teacher model achieves success rates of less than 70%.
Nevertheless, we find the idea of incorporating the dexterous hand task very intriguing. In the future, we would like to further investigate this area and evaluate the performance of our model in this context.
---
> It appears that in simpler tasks like kitchen and block push, the distilled model underperforms compared to the original model. However, in more complex tasks, many distilled models seem to outperform the original. I am curious if the authors have some insights on this.
>
We assume the reviewer refers to the success rate as a performance criterion when mentioning ‘outperform’. We acknowledge that it may seem confusing at first that the 'student' outperforms the 'teacher' in terms of success rate. In these cases we observed that the student usually exhibits lower task entropy, indicating less diverse behavior.
We hypothesize that there is a trade-off between mastering a single skill very well and learning multiple skills with slightly less accuracy. This hypothesis is further supported by the results of the ablation study presented in Figure 3a. Additionally, we have included further visual evidence supporting this hypothesis in the PDF file accompanying the rebuttal. Figure R1 shows that by reducing the number of experts the success rate increases, at the cost of losing skill diversity. The extreme case $Z=1$ has a success rate of 1, but zero entropy.
We have added a discussion of this trade-off to the experiments section.
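The task-entropy diversity measure underlying this trade-off can be sketched generically (a minimal illustration with hypothetical outcome labels, not the benchmark's actual implementation): the Shannon entropy of the empirical distribution over discrete behavior outcomes across rollouts.

```python
from collections import Counter
import math

def task_entropy(outcomes):
    """Shannon entropy (bits) of the empirical distribution of
    discrete behavior outcomes observed across rollouts."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# One dominant behavior: possibly high success rate, but zero diversity.
single = task_entropy(["red-first"] * 8)
# Two behaviors covered equally: one full bit of entropy.
mixed = task_entropy(["red-first"] * 4 + ["blue-first"] * 4)
print(single, mixed)
```

Under this measure, a one-expert policy that always solves the task the same way scores 0, while a mixture covering several behaviors scores higher even if its success rate is slightly lower.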
---
We would like to thank the reviewers again for assessing our work. We would be delighted to address any additional questions or concerns they may have.
---
[1] Towards Diverse Behaviors: A Benchmark for Imitation Learning with Human Demonstrations, ICLR ‘2024
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. I will keep my accepting rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's positive evaluation and valuable suggestions. We are happy to address any additional questions and suggestions to further improve our work and its assessment. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback. We would like to reiterate our main action points in response to the reviewers’ suggestions and concerns. New results can be found in the PDF file that accompanies the rebuttal.
- **Open-Sourced Code-Base:** We provided a link to an anonymous Github repository containing the code for reproducing the results of the paper. Additionally, we provided a link to an IPython notebook for testing our method on a simple toy task. We have sent both links to the AC in an Official Comment, as the rebuttal guidelines prevent us from sharing them here.
- **Added Baselines:** In response to feedback, we have included a comparison of our method with Consistency trajectory models [1].
- **Completed Results-Table.** The initial submission was missing the results for DDPM on two tasks, *kitchen* and *block push*. We have now included these results.
- **Detailed Parameter and Training Time Information:** Additional specifics on parameter counts and training times across varying number of experts have been incorporated.
- **Enhanced Visualizations:** To offer deeper insights into our method, we have included supplementary visualizations. These highlight the most likely experts according to the gating probability at a given state.
---
We would like to thank the reviewers again for assessing our work. We would be delighted to address any additional questions or concerns they may have during the upcoming discussion period.
---
[1] Kim D, Lai CH, Liao WH, Murata N, Takida Y, Uesaka T, He Y, Mitsufuji Y, Ermon S. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. ICLR 2024.
Pdf: /pdf/69a73e9f85c09709fbc103aabc2015ed9450af2b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies the knowledge distillation problem in diffusion models by distilling denoising diffusion policies into Mixtures of Experts (MoE) using variational inference. The goal is to combine the advantages of diffusion models, with the fast inference capabilities of MoE models. The authors construct an upper bound of a KL divergence-style loss function between the diffusion policy and the MoE policy and implement EM steps for variational inference to achieve the distillation. The proposed method, VDD (Variational Distillation of Diffusion), demonstrates superior performance compared to existing distillation and MoE methods.
Strengths: 1. The paper studies an interesting and practical problem in the distillation of diffusion models, achieving 1-step inference with performance comparable to the original diffusion policy. This type of fast inference can be highly beneficial for real-world control problems utilizing diffusion policies.
2. The proposed method, VDD, introduces a robust approach to distilling diffusion policies to MoE models using VI.
3. Overall, the paper is well-structured, easy to understand, and provides a clear presentation of the proposed method.
Weaknesses: 1. In the limitations section, the paper claims that "VDD is not straightforwardly applicable to generating very high-dimensional data like images." However, the experimental results in Table 1(a) show that the distilled model VDD-DDPM outperforms the original DDPM diffusion policy in the Sorting (Image) and Stacking (Image) tasks, which contradicts the discussion. What is the reason for this? Why does the distilled model outperform the original diffusion policy?
2. Regarding training time, the time cost for distillation models (like VDD-DDPM) is higher than for MoE models. I assume this refers only to the knowledge distillation time. If we consider the original training time for diffusion policies, the total time cost may be significantly larger. It would be beneficial to report this total time for a fair comparison between MoE methods.
3. Does the distilled MoE model have the same architecture as the EM-GPT and IMC-GPT models?
4. While I agree that, according to this paper, the distillation of MoE achieves better performance than training from scratch, why do we distill from diffusion models? How about distilling from large and robust MoE models into smaller MoE models? This could be especially relevant in low-dimensional state observation cases, where MoE models reportedly perform better.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why does the distilled model outperform the original diffusion policy in Sorting (Image) and Stacking (Image) tasks?
2. Does the distilled MoE model have the same architecture as the EM-GPT and IMC-GPT models?
3. Why do we distill from diffusion models? How about distilling from large and robust MoE models into smaller MoE models?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the paper.
Please refer to the [weakness] section for detailed concerns and questions about the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to review our work and the many helpful comments and suggestions. We hope the following replies sufficiently address the raised questions and concerns. We will update the paper accordingly.
> […] the paper claims that "VDD is not straightforwardly applicable to generating very high-dimensional data like images."
>
The statement in the limitations section refers to the dimensionality of the action space. However, in the image-based version of sorting and stacking, the images represent the state space (model input). The model output remains a low-dimensional control signal for the robot. VDD is not straightforwardly applicable to tasks that require a high-dimensional output space, such as image generation, because predicting the Cholesky decomposition of the covariance matrix scales quadratically with the dimensionality of the output space. We will clarify the statement in the limitations section.
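This quadratic scaling can be made concrete with a minimal sketch (the output dimensions below are hypothetical examples, not values from the paper): a full-covariance Gaussian expert over a D-dimensional output needs D mean parameters plus D(D+1)/2 entries for the lower-triangular Cholesky factor.

```python
# Free parameters of one full-covariance Gaussian expert:
# D for the mean plus D*(D+1)/2 for the lower-triangular
# Cholesky factor of the covariance matrix.
def gaussian_param_count(d: int) -> int:
    return d + d * (d + 1) // 2

# Low-dimensional robot action (e.g., a 7-DoF control signal): tiny.
print(gaussian_param_count(7))  # 35
# Image-sized output (64x64 RGB): roughly 75.5 million parameters
# per expert, before any network weights are even counted.
print(gaussian_param_count(64 * 64 * 3))
```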
---
> Why does the distilled model outperform the original diffusion policy in Sorting (Image) and Stacking (Image) tasks?
>
We assume the reviewer refers to the success rate as a performance criterion when mentioning ‘outperform’. We acknowledge that it may seem confusing at first that the 'student' outperforms the 'teacher' in terms of success rate. In these cases we observed that the student usually exhibits lower task entropy, indicating less diverse behavior.
We hypothesize that there is a trade-off between mastering a single skill very well and learning multiple skills with slightly less accuracy. This hypothesis is further supported by the results of the ablation study presented in Figure 3a. Additionally, we have included further visual evidence supporting this hypothesis in the PDF file accompanying the rebuttal. Figure R1 shows that by reducing the number of experts the success rate increases, at the cost of losing skill diversity. The extreme case $Z=1$ has a success rate of 1, but zero entropy.
We have added a discussion of this trade-off to the experiments section.
---
> Does the distilled MoE model have the same architecture as the EM-GPT and IMC-GPT models?
>
Yes, to ensure fairness across all experiments, we use the same architecture for these methods.
---
> […] It would be beneficial to report this total time for a fair comparison between MoE methods.
>
We thank the reviewer for pointing this out. In the PDF file accompanying the rebuttal, we report the total training time as well as the separate training time of DDPM and VDD in Table R1(b).
---
> Why do we distill from diffusion models? How about distilling from large and robust MoE models into smaller MoE models?
>
Diffusion models have shown great performance in recent studies, often making them the preferred choice for addressing complex tasks. However, they suffer from slow inference due to the iterative denoising and do not offer a tractable likelihood. The goal of our work is, therefore, to mitigate these inherent downsides of diffusion models by distilling them into MoEs through variational inference. Nevertheless, we find the idea of leveraging our approach to distill large MoEs into smaller ones very intriguing and will look into it in future work.
---
We express our gratitude to the reviewer for their valuable comments and suggestions. We are pleased to address any additional questions or concerns that may arise.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I appreciate the author's detailed responses, which have addressed most of my concerns.
However, there are still some aspects that remain unclear to me.
1. Why does the student VDD model outperform the teacher diffusion model?
The author suggests that this may be related to the trade-off between mastering a single skill very well and learning multiple skills with slightly less accuracy. However, when looking at the success rates in Table 1(a), the student model only outperforms in the Sorting (Image) and Stacking (Image) tasks, but not in others. Given a fixed teacher model and a fixed number of experts, it seems difficult to attribute this phenomenon to the acquisition of multiple skills or increased diversity. Could this be related to the use of image observations? Interestingly, the task entropy of VDD-BESO is comparable to BESO in the Sorting (Image) task. According to the paper, VDD-BESO is both more accurate and not less diverse than BESO. What could be the reason for this discrepancy?
2. How to choose the optimal number of experts in real-world control tasks?
The results indicate that reducing the number of experts can increase the success rate, albeit at the cost of skill diversity. While this conclusion seems reasonable, it raises the question: how should we determine the optimal number of experts in a Mixture of Experts (MoE) model for a given real-world control task or set of tasks? In many safety-critical control problems, success rate is often the primary concern, leading to the possibility of opting for a smaller number of experts. Could you provide guidance on this decision, or better yet, demonstrate through experiments any potential drawbacks of this strategy?
---
Reply to Comment 1.1.1:
Comment: > Why does the student VDD model outperform the teacher diffusion model? […] According to the paper, VDD-BESO is both more accurate and not less diverse than BESO. What could be the reason for this discrepancy?
>
We would like to express our sincere gratitude to the reviewer for their insightful questions, which prompted us to conduct further investigations. Below is a summary of our findings.
Unlike DDPM, which uses a fixed set of timesteps, BESO learns a continuous-time representation of the scores. This continuous representation enables the use of various numerical integration schemes, which can impact the performance of the diffusion model.
In our original experiments, we used the Euler-Maruyama method with 16 integration steps, following [1]. However, upon further investigation, we observed that utilizing alternative integration schemes, such as DDIM [2] or the Heun method [3], and increasing the number of integration steps, produced different success rates and entropy values. Some of these results surpassed those of VDD-BESO, underscoring that the teacher model indeed serves as an upper bound for the student model's performance. These findings are detailed in the table below, where we follow the convention "Sampler name - Number of integration steps".
| | Euler_Maru-16 | Euler-16 | Heun-16 | DDIM-16 |
| --- | --- | --- | --- | --- |
| **Success Rate** | $0.70$ | $0.68$ | $0.72$ | $0.73$ |
| **Task Entropy** | $0.19$ | $0.25$ | $0.25$ | $0.25$ |

| | Euler_Maru-32 | Euler_Maru-64 | VDD-BESO |
| --- | --- | --- | --- |
| **Success Rate** | $0.76$ | $0.76$ | $0.76\pm0.04$ |
| **Task Entropy** | $0.23$ | $0.24$ | $0.22\pm0.03$ |
We, therefore, attribute VDD-BESO’s superior performance over BESO to the timestep selection strategy (see Section 4.3) of the learned continuous-time score functions.
In response to the reviewer's comments, we will include the results of various integration schemes across all tasks and provide additional clarifications in our revised manuscript.
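To illustrate why the integration scheme matters, below is a minimal sketch (illustrative Python, not our BESO implementation; the velocity field `v(x, t)` is a generic stand-in for the learned continuous-time dynamics) contrasting a first-order Euler step with a second-order Heun step:

```python
import math

def euler_step(v, x, t, dt):
    # First-order Euler: one velocity evaluation per step.
    return x + dt * v(x, t)

def heun_step(v, x, t, dt):
    # Second-order Heun: predictor-corrector, averaging the slope
    # at the current point and at the Euler-predicted point.
    k1 = v(x, t)
    k2 = v(x + dt * k1, t + dt)
    return x + dt * 0.5 * (k1 + k2)

def integrate(step_fn, v, x0, t0, t1, n_steps):
    dt = (t1 - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        x = step_fn(v, x, t, dt)
        t += dt
    return x

# Toy dynamics dx/dt = -x with exact solution exp(-t); at 16 steps,
# the Heun estimate is already much closer to the exact value than Euler.
v = lambda x, t: -x
exact = math.exp(-1.0)
x_euler = integrate(euler_step, v, 1.0, 0.0, 1.0, 16)
x_heun = integrate(heun_step, v, 1.0, 0.0, 1.0, 16)
```

The same mechanism applies when `v` is a learned score/velocity of a continuous-time diffusion model: higher-order schemes or more integration steps reduce discretization error, which is consistent with the success rates reported in the table above.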
---
> Could this be related to the use of image observations?
>
Regarding the performance gap observed primarily in image-based tasks, we suspect that this may be due to the use of the pre-trained vision encoder from the diffusion model for VDD, which likely enhances distillation performance. However, to confirm this hypothesis and provide a definitive explanation, further investigations are required.
---
> How to choose the optimal number of experts in real-world control tasks?
>
>
> The results indicate that reducing the number of experts can increase the success rate, albeit at the cost of skill diversity. While this conclusion seems reasonable, it raises the question: how should we determine the optimal number of experts in a Mixture of Experts (MoE) model for a given real-world control task or set of tasks? In many safety-critical control problems, success rate is often the primary concern, leading to the possibility of opting for a smaller number of experts. Could you provide guidance on this decision, or better yet, demonstrate through experiments any potential drawbacks of this strategy?
>
We understand that when the reviewer refers to "safety-critical control problems," they are addressing situations where failures or incorrect operations in control systems could lead to significant harm to humans or robots.
To provide a meaningful response, we believe it is important to have more specific details about the particular task or setting.
For example, in a static task environment where achieving high success rates is the primary objective, it might be advantageous to select a small number of components.
On the other hand, if the task involves constantly changing dynamics, a policy that incorporates a diverse set of skills is likely to be more robust [4]. For instance, in the avoidance task depicted in Figure R1 of the PDF attached to our rebuttal, if the path learned by a single-expert policy (Z=1) becomes obstructed, the success rate would drop to zero. In such a dynamic environment, diverse policies could maintain some level of success in completing the task, making a larger number of experts the more suitable option.
If our interpretation of "safety-critical control problems" differs from the reviewer’s definition, we would be happy to provide further clarification.
---
[1] Jia X, Blessing D, Jiang X, Reuss M, Donat A, Lioutikov R, Neumann G. Towards diverse behaviors: A benchmark for imitation learning with human demonstrations. ICLR 2024.
[2] Song J, Meng C, Ermon S. Denoising diffusion implicit models. ICLR 2021.
[3] Karras T, Aittala M, Aila T, Laine S. Elucidating the design space of diffusion-based generative models. NeurIPS 2022.
[4] Eysenbach B, Levine S. Maximum entropy RL (provably) solves some robust RL problems. ICLR 2022. | null | null | null | null | null | null |
Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts | Accept (spotlight) | Summary: The paper aims to deal with the task of AD patient classification with missing modalities, and proposes a framework, Flex-MoE.
Flex-MoE also utilizes a learnable missing modality combination bank to complete the missing embeddings of missing modalities.
The training strategy and the gating mechanism of Flex-MoE are also well-designed to handle this task.
The experiments verify that Flex-MoE is able to achieve SoTA performance in this task on the shown modality combinations.
Strengths: - The task explored by this study is practical in real-world scenarios. Addressing the issue of missing modalities in the AD domain is worth the attention of the community.
- The introduction section is well-written and easy to follow. (suggestion: using .pdf or .svg format for the figures is recommended.)
- The motivations for the proposed techniques in Flex-MoE, which you have claimed in your paper, are rational and decent.
- The ablation and sensitivity study is sufficient, showcasing the effectiveness of the components in Flex-MoE.
Weaknesses: - The expression from Line 159 to Line 167 is a little bit confusing. More explanation (or even illustration) is needed.
- Is the usage of the missing modality bank just a simple look-up operation? I am concerned that this seems naive and that there may be a smarter approach to achieve this.
- The $R_{(x_j)}$ on Line 199 does not appear in your equations. You need to correct this.
- Why can the S-Router be trained to activate the corresponding expert index by Eq. (3)?
- It would be better if you could use some equations to help express the content of Lines 204-215.
- This paper also lacks sufficient discussion of some related works, such as other tasks on biomedical images [1,2], in Sec. 2 or the appendix.
- The modality combinations in the experiments in Table 2 are few. It would be better to provide experiments based on more combinations to showcase the performance of Flex-MoE.
[1] Zhou, D., Gu, C., Xu, J., Liu, F., Wang, Q., Chen, G., & Heng, P. A. (2023). RepMode: learning to re-parameterize diverse experts for subcellular structure prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3312-3322).
[2] Jiang, Y., & Shen, Y. (2024). M$^4$oE: A Foundation Model for Medical Multimodal Image Segmentation with Mixture of Experts. arXiv preprint arXiv:2405.09446.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please see the section of weaknesses.
I may raise or lower my score after the rebuttal.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have claimed the limitations and broader impacts of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer **6yyS** for describing our work as `practical`, noting that `addressing missing modalities in the AD domain is worth the attention of the community`, and calling our techniques `rational and decent`. For the concerns, we provide responses below:
---
**[W 1, 3: Expression from Line 159 - 167, 199]**
The main idea of the missing modality bank completion is to supplement missing modalities from a predefined bank, ensuring robust data integration with observed ones. For example, if a patient lacks clinical data but has imaging, biospecimen, and genetic data, the observed modalities pass through their specific encoders. The missing clinical embedding is supplemented from the missing modality bank, indexed by the observed modalities (e.g., {Imaging, Biospecimen, Genetic}, Clinical). This approach prevents reliance on incomplete or naively imputed data, as encoders only process observed modalities. Once embeddings are standardized in dimension, they proceed to the Sparse MoE layer within the Transformer architecture. For line 199, it should be $\text{max}(\mathcal{S}\text{-Router}(x_j))$. To improve readability, we will include such details in the final version.
---
**[W 2: Usage of Missing Modality Bank]**
As the reviewer mentioned, this can be seen as a look-up operation, but the way we construct and leverage this look-up table is what makes our work novel and distinctive. The rationale behind designing the missing modality bank is to differentiate the context of patient groups based on their unique observed modality combinations and corresponding missing modalities. In the AD domain, the missing modality problem provides a unique context for each patient’s diagnosis. For example, patients with biospecimen and image data but missing clinical and genetic data exhibit unique characteristics [1]. Motivated by this, we designed the missing modality bank with learnable embeddings indexed by observed modality combinations and their corresponding missing modality. This allows existing encoders to process only the observed features without being distracted by missing samples, equipping the model to handle various observed modality contexts flexibly. Empirically, as shown in Figure 4 of the manuscript, the clinical modality (C) shows more significant similarities with the full modality combination when predicting AD diagnosis compared to other modalities (I, G, B). Additionally, patients missing clinical (C) and genetic (G) modalities exhibit more similarities compared to other missing modality combinations, reinforcing our claim that missing modality combinations provide unique diagnostic contexts in AD.
[1] https://pubmed.ncbi.nlm.nih.gov/24360540/
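To make the look-up idea concrete, here is a minimal sketch (our own illustrative Python with made-up names and dimensions, not the released code; in the actual model the bank entries are learnable parameters trained end-to-end):

```python
import itertools
import random

MODALITIES = ("biospecimen", "clinical", "genetic", "imaging")
DIM = 8  # illustrative embedding dimension

def build_bank(dim=DIM, seed=0):
    # One (randomly initialised here, learnable in practice) vector per
    # (observed-modality-combination, missing-modality) pair.
    rng = random.Random(seed)
    bank = {}
    for k in range(1, len(MODALITIES)):
        for observed in itertools.combinations(MODALITIES, k):
            for missing in set(MODALITIES) - set(observed):
                bank[(observed, missing)] = [rng.gauss(0.0, 0.02) for _ in range(dim)]
    return bank

def lookup(bank, observed, missing):
    # The "completion" is a pure look-up keyed by the sorted observed set,
    # so encoders never see imputed inputs for the missing modality.
    return bank[(tuple(sorted(observed)), missing)]

bank = build_bank()
# A patient with imaging, biospecimen, and genetic data but missing clinical data:
emb = lookup(bank, ["imaging", "biospecimen", "genetic"], "clinical")
```

With four modalities this enumerates 28 (observed, missing) pairs, one learnable embedding per unique patient context.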
---
**[W 4: Equation to assist line 204 - 215]**
Here, we aim to utilize the load balancing loss [1], targeted specifically at the remaining experts (i.e., all experts except the top-1 selected expert), to ensure that the experts for each modality combination are activated in a balanced manner. Formally, it can be expressed as:
1. **Load Balancing Loss**
$\mathcal{L}_{\text{balance}} = \text{CV}^2(\text{importance}) + \text{CV}^2(\text{load})$
- where $\text{importance}$ and $\text{load}$ are vectors over the experts (defined in item 3) and $\text{CV}^2$ is computed across the expert dimension
2. **Coefficient of Variation Squared (CV²)**
$\text{CV}^2(x) = \left( \frac{\sigma(x)}{\mu(x)} \right)^2$
- where $\sigma(x)$ is the standard deviation of $x$ and $\mu(x)$ is the mean of $x$.
3. **Importance and Load**
$\text{importance}_e = \sum_{j=1}^{N} g_{je}, \quad \forall e \in E \setminus \{e_{\text{top-1}}\}$
- where $N$ is the number of samples, $e$ is the expert index, and $g_{je}$ is the gate value for sample $j$ with expert $e$
$\text{load}_e = \sum_{j=1}^{N} \delta(g_{je} > 0), \quad \forall e \in E \setminus \{e_{\text{top-1}}\}$
- where $\delta(g_{je} > 0)$ is an indicator function that is 1 if the gate value $g_{je}$ is greater than 0, i.e., expert $e$ is selected for sample $j$.
For the prediction head, we can simply denote it as $\mathbf{Y} = \mathbf{Z}\mathbf{W}$, where $\mathbf{Y} \in \mathbb{R}^{N \times |\mathcal{C}|}$ denotes the predicted probability of each class for each sample, $\mathbf{Z} \in \mathbb{R}^{N \times (D \cdot |\mathcal{M}|)}$ denotes the concatenated embedding of the modalities, and $\mathbf{W} \in \mathbb{R}^{(D \cdot |\mathcal{M}|) \times |\mathcal{C}|}$ denotes the weight matrix that transforms the concatenated embedding into the final prediction space.
For the final version, we will incorporate the above equations to provide a better understanding.
[1] https://arxiv.org/abs/1701.06538
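The load balancing terms can be sketched in a few lines (illustrative plain Python; the actual implementation operates on batched tensors, and the variable names here are our own):

```python
def cv_squared(xs):
    # Squared coefficient of variation: (sigma / mu)^2.
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return var / (mu ** 2)

def balance_loss(gates, top1_index):
    # gates[j][e] is the gate value of expert e for sample j.
    # Both terms are computed over the non-top-1 experts only.
    experts = [e for e in range(len(gates[0])) if e != top1_index]
    importance = [sum(g[e] for g in gates) for e in experts]     # total gate mass
    load = [sum(1 for g in gates if g[e] > 0) for e in experts]  # selection counts
    return cv_squared(importance) + cv_squared(load)

# Perfectly balanced remaining experts give zero loss; skewed gates do not.
balanced = balance_loss([[0.5, 0.25, 0.25]] * 4, top1_index=0)  # 0.0
skewed = balance_loss([[0.5, 0.45, 0.05]] * 4, top1_index=0)    # > 0
```

Minimizing these terms pushes both the total gate mass (importance) and the selection counts (load) toward a uniform distribution over the remaining experts.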
---
**[W 5: Discussion of related works]**
To briefly summarize, RepMode predicts 3D fluorescent images of subcellular structures from 3D transmitted-light images, addressing the challenges of partial labeling and multi-scale variations. M4oE uses modality-specific experts and a gating mechanism to enhance medical image segmentation across various modalities. However, RepMode focuses on subcellular structure prediction, and M4oE focuses on medical image segmentation, differing from our work, which aims to accurately classify the stage of AD, particularly in the multimodal AD domain. While these works leverage image modalities, they lack the ability to cope with various other modalities, such as genetic and clinical data. Moreover, although M4oE uses modality-specific experts, it does not consider modality combination information in practical missing modality scenarios, which is our main focus.
---
**[W 6: More modality combination experiments]**
To achieve greater generalizability, besides more modality combinations, we added the MIMIC dataset [1], the AUC score as an additional metric, and two more baselines (ShaSpec and mmFormer) during the rebuttal period. The results can be found here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
We still observe Flex-MoE outperforms existing baselines across various modality combinations.
[1] https://physionet.org/content/mimiciv/3.0/
---
Rebuttal Comment 1.1:
Title: The authors have addressed my concerns well
Comment: Dear authors and AC,
Sorry for my late reply.
The authors have addressed my concerns, with additional experiments, explanations, and discussion for some unclear expression in the paper.
I would like to raise my score to 7, if the authors can add these contents to the main paper or the appendix.
Thanks for your time and effort.
Best,
Reviewer 6yys
---
Reply to Comment 1.1.1:
Title: Thank you for your consideration
Comment: Dear Reviewer **6yyS**,
We are pleased to hear that our rebuttal has addressed your concerns. We will ensure that those additional experiments are included in the final version.
Best wishes,
Authors
---
Rebuttal 2:
Title: Please read the rebuttal to check if the authors addressed your concerns
Comment: Dear Reviewer 6yyS,
Can you have a look at the rebuttal and see if your concerns have been addressed?
Best regards
Your AC.
---
Rebuttal Comment 2.1:
Title: Eager for Your Feedback on Our Rebuttal
Comment: Dear Reviewer **6yyS**,
We sincerely thank you for dedicating your time to review our work and for your constructive feedback. As the deadline for the discussion period approaches, we are eager to engage further and understand if our responses address your concerns satisfactorily.
For quick access to additional experiments, including `more baseline results, datasets, and computational efficiency`, you can find them here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf)
We would greatly appreciate it if you could kindly review our response. Thank you for your consideration.
Best,
Authors
---
Rebuttal 3:
Title: Please check the authors' rebuttal
Comment: Dear Reviewer 6yyS,
Please don't forget to read the authors' rebuttal to reach a final decision about this paper in the next day or so. If you have any further questions to the authors, that would be a great chance to reach out to them.
Best regards
Your AC. | Summary: The paper presents a multimodal learning framework, Flex-MoE (Flexible Mixture-of-Experts), designed to integrate diverse modalities in Alzheimer's Disease (AD) research using a Sparse Mixture-of-Experts design. Flex-MoE sorts samples based on the number of available modalities and processes them through modality-specific encoders. The framework trains experts with full modality samples and uses an S-Router to adapt the knowledge for fewer modality combinations.
Strengths: 1. The research topic addressing missing modalities in Alzheimer’s disease is crucial and highly relevant.
2. Source code is provided, facilitating the replication of experiments.
Weaknesses: 1. The model's specificity to Alzheimer's disease is unclear, as it appears to be a general approach for handling missing modalities.
2. The experimental setup lacks clarity. Details on how labels (e.g., dementia, CN, MCI) were decided, the prediction timeline, and the handling of patients at different stages are not provided. Additionally, the use of clinical data and whether it is time series data is not explained.
3. The evaluation is unconvincing. The imbalance rate of the datasets is not discussed. Only accuracy and F1 score are used for evaluation, but additional metrics like AUC could provide a more comprehensive understanding of the model’s performance. The process for selecting the threshold for classification to obtain the F1 score is also unclear.
4. The computational resources required for training and inference using Flex-MoE are not specified, especially given the need to train the model for various modality combinations.
5. No external dataset is included to validate the model's performance.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Why is the model specifically designed for Alzheimer's disease, and how does it differ from a general model for missing modalities?
2. How were the labels (dementia, CN, MCI) decided? What is the prediction timeline, and were patients at different stages treated as separate samples? What time period of data was used, and was the clinical data time series data?
3. What is the imbalance rate of the datasets? Including metrics like AUC can provide a more complete understanding of the model’s performance. How was the threshold chosen for determining the classification boundary to obtain the F1 score?
4. What specific computational resources are required for training and inference using Flex-MoE?
5. How would the model perform on an additional external dataset?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. The evaluation and practical application of the model are challenging to assess. The necessity of training different models for each modality combination is not well justified.
2. The lack of external dataset validation limits the generalizability of the model's performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer **qdnn** for the comprehensive review of our paper. Specifically, we appreciate the reviewer's recognition of our key contribution in addressing missing modalities in AD as `crucial` and `highly relevant`. For the concerns raised, we provide the following responses:
---
**[W 1 & Q 1: model’s specificity to AD]**
As the reviewer mentioned, our approach can be generalized beyond AD, as is common for methods in various biomedical papers (e.g., Transformers for 3D MRI [1], Graph Neural Networks in the single-cell domain [2,3]). However, we emphasize the importance of careful application that accounts for domain-specific characteristics, especially given recent advances in the ML field. In this paper, we focused on the AD domain for two main reasons:
- **Underexplored Characteristic**: Despite the comprehensive multi-modal nature of AD (e.g., clinical, biospecimen, image, genetic) [3], it remains underexplored compared to domains with fewer modalities (e.g., image, text) [4, 5]. Empirically, as shown in Table 2 of the manuscript, the direct application of multimodal ML, e.g., LIMoE [5], underperforms compared to Flex-MoE in handling 3 or more modalities and in overall performance. This supports our motivation to address the diverse multimodal nature prevalent in the AD domain and the specific missing modality combinations.
- **Unique Characteristics of Missing Modalities**: In the AD domain, the missing modality problem provides a unique context for each patient’s diagnosis. For instance, patients with biospecimen and image modalities but missing clinical and genetic data present unique characteristics [6]. Empirically, as shown in Figure 4 of the manuscript, clinical modality (C) shares more significant similarities with the full modality combination when predicting AD diagnosis compared to other modalities (I, G, B). Additionally, patients missing clinical (C) and genetic (G) modalities show more similarities compared to other missing modality combinations, reinforcing our claim that missing modality combinations provide unique diagnostic contexts in AD.
In summary, our study aims to advance clinical diagnosis in the AD domain by effectively integrating multimodal data, especially in the growing era of AI4Science (Biology, Medicine, Health).
[1] https://www.nature.com/articles/s41598-024-59578-3
[2] https://www.nature.com/articles/s41467-021-22197-x
[3] https://www.nature.com/articles/s41467-023-36559-0
[4] https://arxiv.org/abs/2309.15857
[5] https://arxiv.org/abs/2206.02770
[6] https://pubmed.ncbi.nlm.nih.gov/24360540/
---
**[W 2, 3 & Q 2: Clarity in Experimental Setup]**
To validate the effectiveness of Flex-MoE, we focused on the gold-standard ADNI dataset. Following existing studies [1,2], we performed an AD stage prediction task to predict specific labels for Dementia, CN, and MCI. Specifically, we used the diagnostic summary file 'DXSUM_PDXCONV_22Apr2024.csv' and the 'DIAGNOSIS' column, which underwent clinical and neuropsychological evaluation as reported by ADNI [3]. The label statistics in our study were: CN (1030 patients, 43.3%), MCI (860 patients, 36.1%), Dementia (490 patients, 20.6%). Despite concerns about imbalance, we observed a relatively balanced ratio of 490/1030 = 0.47. The F1-score was calculated using the scikit-learn package [4], which automatically computes the score without manual selection of a threshold.
Regarding timeline information, timestamp data is not readily available across genetic, biospecimen, and clinical modalities. However, for the image modality, we selected the most recent image to ensure relevance and mapped the patient ID with other modalities. Thus, our method remains static, but dynamic analysis with time information is a promising direction for future work.
We will add this information to the final version for clarity. Detailed statistics and preprocessing steps for each modality are provided in Section 4.1 and Appendix A of the manuscript, addressing the reviewer's concerns about the clinical modality.
[1] https://www.nature.com/articles/s41598-020-74399-w
[2] https://www.nature.com/articles/s41598-018-37769-z
[3] https://adni.loni.usc.edu/wp-content/uploads/2008/07/inst_about_data.pdf
[4] https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html
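To make the threshold point concrete: in the multi-class setting, each prediction is simply the argmax class, so the F1-score follows directly from hard labels with no probability threshold involved. A minimal sketch of a macro-averaged F1 (macro averaging is shown here only as one common choice, equivalent to `sklearn.metrics.f1_score(..., average='macro')`; the toy labels are our own illustration):

```python
def macro_f1(y_true, y_pred, n_classes):
    # Macro-averaged F1 from hard class labels: per-class F1, then
    # an unweighted mean over classes. No threshold appears anywhere,
    # since each prediction is already the argmax class.
    f1s = []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n_classes

# Toy example with 0 = CN, 1 = MCI, 2 = Dementia:
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
score = macro_f1(y_true, y_pred, 3)  # per-class F1s: 0.5, 0.8, 2/3
```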
---
**[W 4 & Q 4: Computational Resources]**
We appreciate the reviewer's concerns about computational resources. We used NVIDIA A100 GPUs for training and inference but highlight the efficiency of adopting the Sparse MoE design, where only the selected top-k experts are utilized. We performed a computational comparison experiment in terms of Mean Time, FLOPs, and the number of activated parameters, which can be found here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
The results demonstrate the efficiency of the SMoE design, showcasing its potential for various real-world datasets.
---
**[W 3, 5 & Q 4, 5 & L 1, 2: More Comprehensive Experiment]**
To achieve greater generalizability, we added the MIMIC dataset [1], the AUC as an additional metric, and two more baselines (ShaSpec and mmFormer) during the rebuttal period. We tested all possible modality combinations, and the results can be found here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
Our experiments show that Flex-MoE outperforms existing baselines across various metrics and modality combinations, thanks to its careful consideration of modality combinations. Additionally, to justify the necessity of training different experts, Figure 5 shows that each expert index contains both global context knowledge and specialized modality combination knowledge, providing flexibility in responding to various modality combinations. Figure 4 further illustrates the interpretability of how different observed and missing modality combinations share similarities in predicting AD status.
[1] https://physionet.org/content/mimiciv/3.0/
---
Rebuttal 2:
Title: Please read the rebuttal to check if the authors addressed your concerns
Comment: Dear Reviewer qdnn,
Can you have a look at the rebuttal and see if your concerns have been addressed?
Best regards
Your AC.
---
Rebuttal 3:
Title: Eager for Your Feedback on Our Rebuttal
Comment: Dear Reviewer **qdnn**,
We sincerely thank you for dedicating your time to review our work and for your constructive feedback. As the deadline for the discussion period approaches, we are eager to engage further and understand if our responses address your concerns satisfactorily.
For quick access to additional experiments, including `more baseline results, datasets, and computational efficiency`, you can find them here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf)
We would greatly appreciate it if you could kindly review our response. Thank you for your consideration.
Best,
Authors
---
Rebuttal 4:
Title: Please check the authors' rebuttal
Comment: Dear Reviewer qdnn,
Please don't forget to read the authors' rebuttal to reach a final decision about this paper in the next day or so. If you have any further questions to the authors, that would be a great chance to reach out to them.
Best regards Your AC.
---
Rebuttal Comment 4.1:
Comment: Thank you for the authors' response. I still have some concerns about the specific cognitive assessments included in the model. I’m willing to adjust my scores to 5 and my confidence to 3.
---
Reply to Comment 4.1.1:
Title: Appreciation and further clarification on cognitive assessment in ADNI dataset
Comment: Dear Reviewer **qdnn**,
Thank you for your response. We appreciate that our rebuttal was well received, and we thank you for your willingness to increase the score.
In our above rebuttal response [link](https://openreview.net/forum?id=ihEHCbqZEx&noteId=nYk2KzEUPG) under **[W 2, 3 & Q 2: Clarity in Experimental Setup]**, we provided the specific file referenced in this study, where the labels (e.g., Dementia, CN, MCI) for each patient were annotated according to the ADNI dataset. We also presented the statistical distribution and imbalance ratio.
However, we realize that there may still be some concerns regarding the details of the **cognitive assessments** used in our model. These assessments are crucial in determining the diagnostic categories such as Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Dementia. Below, based on ADNI documentation [1], we provide a more detailed explanation of each label including the specific cognitive tests involved and how they contribute to the classification process.
1. **Cognitively Normal (CN) (Interchangeable with Cognitively Unimpaired (CU))**:
- **Meaning**: Participants in this group show no signs of significant cognitive decline at the time of their evaluation.
- **Key Cognitive Assessments**: To be classified as CN/CU, individuals must score 0 on the Clinical Dementia Rating (CDR) and show no memory impairment (memory box score of 0). Their performance on the Mini-Mental State Exam (MMSE) and the Wechsler Logical Memory II sub-scale must be within the normal range, adjusted for education level.
- **Significance**: These tests ensure that participants classified as CN/CU exhibit cognitive functioning consistent with normal aging, without evidence of dementia or mild cognitive impairment.
2. **Mild Cognitive Impairment (MCI)**:
- **Meaning**: MCI is an intermediate stage between normal aging and dementia, where participants show noticeable cognitive impairments that do not yet meet the criteria for dementia.
- **Key Cognitive Assessments**: Individuals are labeled as MCI if they have a CDR global score of 0.5 and a memory box score of at least 0.5. The MMSE and Wechsler Logical Memory II sub-scale are also used, with specific cutoffs to diagnose MCI based on educational levels. These cognitive tests reveal impairments in memory and other cognitive domains that are more pronounced than in normal aging but not severe enough to warrant a dementia diagnosis.
- **Significance**: These assessments are crucial for identifying individuals at risk for developing dementia, making them a key focus for early interventions.
3. **Dementia (Dementia/AD)**:
- **Meaning**: Participants in this group meet the clinical criteria for dementia, typically due to Alzheimer's Disease, characterized by significant cognitive and functional impairment.
- **Key Cognitive Assessments**: A CDR global score of 0.5 or 1, combined with impaired scores on the MMSE and Wechsler Logical Memory II sub-scale, typically categorizes a participant as having dementia. Additionally, the classification process involves screening to exclude those whose symptoms suggest non-Alzheimer's forms of dementia, such as Frontotemporal Dementia.
- **Significance**: These cognitive assessments are vital for studying the progression of Alzheimer's Disease and for developing treatments targeted at this advanced stage.
For further details regarding the protocols and ADNI study information (we used ADNI 3 in this study), you may kindly refer to the official ADNI documentation [2].
In summary, to provide a clearer understanding for readers, we will incorporate this discussion into our final version.
Once again, thank you for all your comments to improve our paper.
Best,
Authors
[1] https://adni.loni.usc.edu/data-samples/adni-data/study-cohort-information/
[2] https://adni.loni.usc.edu/help-faqs/adni-documentation/ | Summary: This paper introduces Flex-MoE, a novel multimodal learning framework for Alzheimer's Disease that handles missing modalities using a Sparse Mixture-of-Experts design and demonstrates its efficacy on the ADNI dataset.
Strengths: 1. The idea of Flex-MoE is clear and straightforward, effectively addressing the challenge of integrating and handling missing modalities in Alzheimer's Disease research.
2. The expression is smooth and clear, making the complex concepts and methodologies easily understandable.
Weaknesses: 1. The paper emphasizes that the Sparse Mixture-of-Experts (SMoE) selectively activates only the most relevant experts to improve scalability, but does not provide experimental results demonstrating computational efficiency or scalability.
2. The interpretation of \( B_{M \setminus m, m} \) in Equation (2) is unclear; it should be explained in more detail. Does it involve a learnable module that outputs missing-modality features based on available-modality features, or some other method?
3. The approach to handling missing modalities differs from the method in the paper "Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling," which uses averaging for missing modalities. A comparison with this approach would be beneficial. The paper provides code https://github.com/billhhh/ShaSpec.
4. The paper overlooks some existing missing modality baselines, such as mmFormer and ShaSpec.
Some minor weaknesses:
1. The abstract states that "few recent studies attempt to integrate multiple modalities," but there are actually many methods for handling missing modalities (as noted in point 4 above), which are not compared experimentally in this paper.
2. The font size in Table 1 is too small and needs to be redrawn for better readability.
3. In line 180, the bold formatting for "It" is incorrect.
4. In Table 2(b), the value 61.08 in the first row is not the best result, which should be corrected.
5. Table 2 should ideally use a visualization approach similar to that in the ShaSpec paper to represent different modality combinations more clearly.
Technical Quality: 3
Clarity: 4
Questions for Authors: The biggest question for me is raised in weaknesses W2: the interpretation of \( B_{M \setminus m, m} \) in Equation (2) is unclear; it should be explained in more detail. Does it involve a learnable module that outputs missing-modality features based on the available-modality features, or some other mechanism?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please refer to the weaknesses section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer **BwQM** for their careful review and for mentioning that Flex-MoE is `straightforward`, `effectively addresses missing modalities in AD`, and `easily understandable`. For the remaining concerns, we provide details below:
---
**[W 1: Computational efficiency and scalability]**
Throughout the paper, we explained the rationale behind the SMoE design principle for handling missing modality scenarios by assigning modality combinations to each expert index. As the reviewer mentioned, utilizing the SMoE design indeed benefits computational efficiency and scalability. To further verify its effectiveness, we compared the training time, GFLOPs, and number of parameters used in training at this link: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
We observe that Flex-MoE not only performs best in the I, G, C, B modality configuration, as shown in Table 2 of the manuscript, but also achieves efficiency in terms of training time and GFLOPs. Additionally, it demonstrates scalability with fewer parameters compared to existing baselines, including the state-of-the-art model, FuseMoE [1].
[1] https://arxiv.org/abs/2402.03226
---
**[W 2 & Q 1: Interpretation of $\mathbf{B} _ {\mathcal{M} \setminus m}$ in Equation (2)]**
The main idea of the missing modality bank completion is to supplement missing modalities from a learnable embedding bank. If a sample contains missing modality data, we retrieve the missing information (i.e., embedding) from this bank, which includes all possible modality combinations and their respective missing modalities. Otherwise, for the observed input, the data directly passes through the modality-specific encoder.
For example, if a patient lacks clinical data (i.e., $m = \mathcal{C}$ where $m$ denotes the missing modality, thus $\mathcal{M} \setminus m = \{ \mathcal{I}, \mathcal{B}, \mathcal{G} \}$) but has imaging, biospecimen, and genetic data, the observed modalities pass through their respective encoders. The missing clinical embedding is supplemented from the missing modality bank, indexed by the observed modalities (i.e., $\mathbf{B} _ {\mathcal{M} \setminus m} = \mathbf{B} _ {\{\mathcal{I}, \mathcal{B}, \mathcal{G}\}}$). This approach ensures that encoders only process observed features, preventing them from being influenced by incomplete or naively imputed data. As this missing modality bank is learnable throughout the iterations, it eventually possesses the knowledge relevant to the downstream task, such as AD stage prediction.
Empirically, as shown in Figure 4 of the manuscript, the clinical modality (C) shows more significant similarities with the full modality combination when predicting AD diagnosis compared to other modalities (I, G, B). Additionally, patients missing clinical (C) and genetic (G) modalities exhibit more similarities compared to other missing modality combinations, reinforcing our claim that missing modality combinations provide unique diagnostic contexts in AD.
---
**[W 3 & W 4 & M-W 5: Comparison with other baselines in ShaSpec style]**
We appreciate the reviewer highlighting these two relevant baselines in our study. To briefly summarize each paper: ShaSpec employs a strategy using auxiliary tasks based on distribution alignment and domain classification, along with a residual feature fusion procedure, to learn shared and specific features for handling the missing-modality issue. During the rebuttal period, we implemented ShaSpec, and the performance comparison on the ADNI dataset with an additional metric, AUC, can be found here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
Interestingly, while ShaSpec has a specific module to handle missing modalities, it falls short of Flex-MoE. We believe this is due to forcing distribution alignment without carefully considering the context of each modality combination. For instance, when the clinical modality is missing, ShaSpec leverages other modalities to generate the clinical feature, but this may overlook the unique contextual meaning of the missing clinical data. In contrast, Flex-MoE retrieves a learnable embedding from the missing modality bank to supplement the clinical embedding without forcing observed modalities to contribute, allowing the clinical embedding more flexibility to learn downstream task-specific information.
Moreover, mmFormer is an end-to-end framework initially designed for different MRI modalities, consisting of hybrid modality-specific encoders, a modality-correlated encoder, and a convolutional decoder. This approach differs from ours, which considers images as one modality and extends the perspective to genetic, biospecimen, and clinical data. When implementing mmFormer for our task, we followed its proposed architecture for processing image data and replaced their encoder with our modality-specific encoders used in Flex-MoE. The results can be found here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
In summary, mmFormer lacks the ability to handle diverse modalities beyond images, which have different contexts, making it significantly less effective than Flex-MoE. Additionally, by supplementing missing modality embeddings as global modality-invariant features, mmFormer fails to handle the specific context of modality combinations and corresponding missing modalities. This justifies the necessity of incorporating modality combination information while handling missing modalities, as proposed in Flex-MoE.
---
**[M-W 1,2,3: Improvements]**
Thank you for the constructive feedback. We will incorporate the discussed baselines, change the font size in Table 1 for better readability, and carefully revise the bold font.
---
**[M-W 4: Best result in Table 2 (b)]**
We have checked the bold formatting in Table 2 (b) and found no errors. Flex-MoE with 61.08 performs best in that row. Could we kindly ask reviewer **BwQM** to re-clarify the point? We are ready to provide more details and refine accordingly.
---
Rebuttal Comment 1.1:
Title: Adjusted the score to 7
Comment: I'd like to adjust my score to 7 based on the authors' rebuttal efforts.
---
Rebuttal 2:
Title: Please read the rebuttal to check if the authors addressed your concerns
Comment: Dear Reviewer BwQM,
Can you have a look at the rebuttal and see if your concerns have been addressed?
Best regards
Your AC.
---
Rebuttal 3:
Title: Eager for Your Feedback on Our Rebuttal
Comment: Dear Reviewer **BwQM**,
We sincerely thank you for dedicating your time to review our work and for your constructive feedback. As the deadline for the discussion period approaches, we are eager to engage further and understand if our responses address your concerns satisfactorily.
For quick access to additional experiments, including `more baseline results, datasets, and computational efficiency`, you can find them here: [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf)
We would greatly appreciate it if you could kindly review our response. Thank you for your consideration.
Best,
Authors
---
Rebuttal 4:
Title: Please check the authors' rebuttal
Comment: Dear Reviewer BwQM,
Please don't forget to read the authors' rebuttal to reach a final decision about this paper in the next day or so. If you have any further questions to the authors, that would be a great chance to reach out to them.
Best regards Your AC.
---
Rebuttal 5:
Title: Responses after rebuttal
Comment: The authors' rebuttal addresses all of my questions. They provide more baseline results, datasets, and computational efficiency comparisons, so I have no more questions related to the paper. W.r.t. the best result in Table 2 (b), thank you for the clarification; I've double-checked it and it should be correct. Kindly add the new results in the revised version of the paper. Thanks for the effort!
Best
---
Rebuttal 6:
Title: Appreciation and kind request for your consideration of score adjustment
Comment: Dear Reviewer **BwQM**,
Thank you once again for your thoughtful feedback and for confirming that all of your concerns have been addressed. We greatly appreciate your time and effort in reviewing our work.
As you noted, we will include all modifications and results to our final version. We would be deeply grateful if you could kindly consider adjusting the score to reflect an improved positive evaluation of our submission. Your support at this stage would be of great help to us.
Thank you once again for all your support.
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: In this study, we propose **Flex-MoE**, a novel framework designed to address the issue of missing modalities in the AD domain, where existing studies often **(1)** rely on a single modality and complete data, and **(2)** overlook modality combinations. As a remedy, Flex-MoE includes a **missing modality bank completion** step, followed by **expert generalization and specialization** steps, and is equipped with a novel router design.
We express our profound gratitude to all reviewers for their time and effort in evaluating our manuscript. We particularly appreciate the positive feedback on the `clear and straightforward idea`, `effectively addressing missing modalities in AD domain which is crucial and highly relevant`, `worth the attention in the community`, `motivations for Flex-MoE are rational and decent`. Besides, all the constructive feedback has been invaluable in enhancing our work. There were common concerns raised by some reviewers, which we address below, alongside more detailed responses to each reviewer.
---
1. More comprehensive experiments - Reviewers **BwQM**, **qdnn**, **6yyS**
- To achieve greater generalizability, we added (1) the MIMIC dataset [1], (2) the AUC score as an additional metric, and (3) two more baselines (ShaSpec and mmFormer) during the rebuttal period. We tested (4) all possible modality combinations, and the comprehensive results are shown in the following [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
- The results show that in most cases, Flex-MoE consistently outperforms existing baselines across various metrics and modality combinations, thanks to its careful consideration of modality combinations. Notably, as the number of available modalities increases (e.g., 2, 3, 4), the performance of Flex-MoE significantly improves. In contrast, other baselines achieve their best performance when utilizing a subset of modalities. This demonstrates Flex-MoE's potential to effectively advance and address the challenges in the multi-modal AD domain.
- **Details on MIMIC dataset preprocessing**: For the MIMIC dataset, we use the Medical Information Mart for Intensive Care IV (MIMIC-IV) database, which contains de-identified health data for patients who were admitted to either the emergency department or stayed in critical care units of the Beth Israel Deaconess Medical Center in Boston, Massachusetts. MIMIC-IV excludes patients under 18 years of age. We take a subset of the MIMIC-IV data in which each patient has more than one visit, as this subset corresponds to patients who likely have more serious health conditions. For each data point, we extract ICD-9 codes, clinical text, and lab and vital values. Using this data, we perform binary classification on one-year mortality, i.e., predicting whether the patient will pass away within a year. We drop visits that occur at the same time as the patient's death. To align the experimental setup with the ADNI data, which does not contain temporal data, we take the last visit for each patient.
---
2. Computational Resources - Reviewers **BwQM**, **qdnn**
- Regarding the usage of the SMoE design, we further performed a computational comparison experiment in terms of mean time, FLOPs, and the number of activated parameters; the results can be found in the following [PDF](https://openreview.net/attachment?id=KkGeR9PVFe&name=pdf).
- The results demonstrate the efficiency of the Sparse MoE design, showcasing the potential for improved performance on various real-world datasets.
---
3. Interpretation of Equation (2) - Reviewers **BwQM**, **6yyS**
- The main idea of the missing modality bank completion is to supplement missing modalities from a predefined bank, ensuring robust data integration with observed ones. For example, if a patient lacks clinical data but has imaging, biospecimen, and genetic data, the observed modalities pass through their specific encoders. The missing clinical embedding is supplemented from the missing modality bank, indexed by the observed modalities (e.g., {Imaging, Biospecimen, Genetic}, Clinical). This approach prevents reliance on incomplete or naively imputed data, as encoders only process observed modalities.
---
If you have any remaining concerns, we are more than happy to discuss them until the end of the discussion period. Please do not hesitate to ask any questions. Once again, we deeply appreciate all the careful reviews and the time from the reviewers.
Best,
Authors
Pdf: /pdf/0ab699d045e63991c5f2c279310f87179a2b1098.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT | Accept (poster) | Summary: Lumina-Next is an improved version of Lumina-T2X, featuring a core architecture that employs a Flow-based Large Diffusion Transformer (Flag-DiT). Through empirical experiments and analysis, Lumina-Next introduces an enhanced Next-DiT architecture and develops a fast sampling algorithm, which boosts the model's generalization capability and improves both training and inference efficiency.
Strengths: This paper focuses on Lumina-T2X and provides empirical analysis and discussion on the training stability, inference efficiency, and extrapolation performance of this type of Flow-based Large Diffusion Transformer. Its advantages are as follows:
- This paper uses 3D RoPE to replace 1D RoPE, enabling the perception of positional information in both spatial and temporal dimensions without the need for learnable [nextline] and [nextframe] identifiers. These improvements help stabilize training and benefit the extrapolation of visual sequence context.
- Sandwich normalization is applied to the model structure to prevent the accumulation of errors due to increased model depth, thereby preserving model performance.
- Different extrapolation schemes are ablated, and Frequency- and Time-Aware Scaled RoPE is proposed to improve resolution extrapolation.
- A time schedule suitable for flow-based diffusion models was proposed, improving the visual quality of fast sampling produced with 10-20 NFEs.
Overall, this paper addresses some of the shortcomings of Lumina-T2X and provides insights for the development of large-scale flow-based diffusion transformers.
Weaknesses: - Whether identifiers such as [nextline] and [nextframe] are unnecessary when using 3D RoPE is not sufficiently discussed in the paper.
- The paper lacks a comparison with stable-diffusion-3, which is also a flow-based diffusion model.
- Is the proposed Time Schedule universally applicable at low NFEs (<10)? Figure 7 contains too few examples to reflect the actual effect. Are there any numerical comparisons?
Overall, this paper leans towards empirical research and analysis, making it more akin to a technical report.
Technical Quality: 4
Clarity: 4
Questions for Authors: see weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Although the paper discusses limitations in the supplementary materials, I believe it could benefit from adding some discussion on whether the training data might introduce biases related to violence, pornography, race, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Whether identifiers such as [nextline] and [nextframe] are unnecessary when using 3D RoPE is not sufficiently discussed in the paper.
Good point! Lumina-T2X adopted learnable [nextline] and [nextframe] tokens to achieve flexible modeling of 2D/3D signals with 1D RoPE. However, we found that modeling any modality as a 1D sequence leads to certain limitations, e.g., misalignment with the image/video's physical properties and limited multi-dimensional resolution extrapolation ability (refer to Q1@Reviewer NAeC for more details). For these reasons, we introduced 2D/3D RoPE in Lumina-Next to replace the combination of 1D RoPE and learnable tokens. In this case, we naturally no longer need learnable tokens to help the model understand 2D/3D structures. We will supplement the relevant discussion upon acceptance.
### Q2: The paper lacks a comparison with Stable Diffusion 3, which is also a flow-based diffusion model.
Thanks for the suggestion! The SD3 model was still closed-source when we submitted this paper. Here we further supplement the quantitative comparison results with SD3. To better illustrate aesthetics and semantic ability, we conduct an AI preference study to evaluate Lumina-Next against other text-to-image models, since conventional metrics such as FID and CLIP-Score may not accurately reflect generation quality. Following PixArt, we employ GPT-4o, the SoTA multimodal LLM exhibiting strong alignment with human preference, as our evaluator to vote based on image quality. As shown in the following table, Lumina-Next demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, ~100x more than our models, and PixArt, which is already a training-efficient model, still uses 3x more training compute than ours. However, we have to admit that Lumina-Next still underperforms these SoTA models in terms of text-image alignment and compositional generation, due to inadequate data and training.
| Model | Winrate |
| --- | --- |
| SD3 | 69.5% |
| PixArt | 43.6% |
### Q3: Is the proposed Time Schedule universally applicable at low NFEs (<10)? Figure 7 contains too few examples to reflect the actual effect. Are there any numerical comparisons?
Thanks for posing this question. As a zero-shot method that simply changes the time schedule, it is hard to get satisfactory results using extremely low NFEs (<10); using ~20 NFEs usually yields decent results. Besides, the design of the temporal condition in Time-aware Scaled RoPE makes it intuitively more sensitive to the number of inference steps. Here, we give the results for NFEs of 40, 20, and 10 with NTK-/Time-aware Scaled RoPE, and we can see that resolution extrapolation with Time-aware Scaled RoPE degrades as NFE decreases. However, the overall trend is consistent with the degradation of image quality when using NTK-aware Scaled RoPE, and the method does not break down. We will add these experimental results to the main paper along with the visualization results.
| Method | NFE | Time-aware Scaled RoPE | NTK-aware Scaled RoPE |
| --- | --- | --- | --- |
| Time-aware Scaled RoPE | 40 | 28.93 | 58.66 |
| Time-aware Scaled RoPE | 20 | 28.34 | 60.07 |
| Time-aware Scaled RoPE | 10 | 28.21 | 67.98 |
| NTK-aware Scaled RoPE | 40 | 28.21 | 61.89 |
| NTK-aware Scaled RoPE | 20 | 27.82 | 62.19 |
| NTK-aware Scaled RoPE | 10 | 27.71 | 68.59 |
---
Rebuttal 2:
Comment: Thank you for taking the time to respond.
For Q3, what do the two columns of metrics represent respectively?
---
Rebuttal 3:
Comment: Thanks for posing this question and sorry for our typos. The columns represent CLIP Score and FID, respectively.
---
Rebuttal Comment 3.1:
Comment: Thank you for taking the time to respond.
I see. I think you need to state in your paper that relatively good results can be achieved with an NFE of 5-10, rather than claiming to achieve high-quality text-to-image generation samples within only 5 to 10 steps, which risks misleading the reader and over-claiming.
---
Reply to Comment 3.1.1:
Comment: We greatly appreciate your response and valuable suggestions, which will improve the quality and impact of our paper. We will incorporate your feedback and update our paper accordingly in the revision. | Summary: The paper introduces Lumina-Next, an enhanced version of the Lumina-T2X model, which is a Flow-based Large Diffusion Transformer (Flag-DiT) aimed at transforming noise into various modalities like images and videos based on text instructions. Compared with Lumina-T2X, Lumina-Next introduces the following improvements:
* A redesigned architecture named Next-DiT, featuring 3D Rotary Position Embedding (RoPE) and sandwich normalization to stabilize training and inference.
* Frequency- and Time-Aware Scaled RoPE for better resolution extrapolation in text-to-image generation.
* An optimized time discretization schedule combined with higher-order Ordinary Differential Equation (ODE) solvers for faster and high-quality generation.
The model demonstrates stronger performance in text-to-image generation, resolution extrapolation, and multilingual capabilities in a zero-shot manner. It also shows versatility across different tasks such as visual recognition, multi-views, audio, music, and point cloud generation.
Strengths: - This paper, like SD3 and PixArt, explores image generation within the DiT architecture, and the codes and models have been open-sourced, which is beneficial to the community.
- Strong functionality. Without additional training, Lumina-Next can easily generate higher-resolution images and produce images of decent quality in fewer inference steps.
Weaknesses: - The paper mainly demonstrates the advantages over comparative methods in terms of functionality but does not show comparisons with these state-of-the-art methods in terms of text-image alignment and image quality.
- Although the paper's method is about DiT, it does not demonstrate the manifestation and properties of scaling laws in the generation domain.
Technical Quality: 3
Clarity: 3
Questions for Authors: na
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The work does not show comparisons with these state-of-the-art methods in terms of text-image alignment and image quality.
Thank you for the suggestion. We have further supplemented the quantitative comparison experiments. Due to time constraints, we only compared Lumina-Next with representative SoTA T2I models, SD3 and PixArt. To better illustrate aesthetics and semantic ability, we conduct an AI preference study to evaluate Lumina-Next against other text-to-image models, since conventional metrics such as FID and CLIP-Score may not accurately reflect generation quality. Following PixArt, we employ GPT-4o, the SoTA multimodal LLM exhibiting strong alignment with human preference, as our evaluator to vote based on image quality. As shown in the following table, Lumina-Next demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, ~100x more than our models, and PixArt, which is already a training-efficient model, still uses 3x more training compute than ours. However, we have to admit that Lumina-Next still underperforms these SoTA models in terms of text-image alignment and compositional generation, due to inadequate data and training.
| Model | Winrate |
| --- | --- |
| SD3 | 69.5% |
| PixArt | 43.6% |
### Q2: The paper does not demonstrate the manifestation and properties of scaling laws in the generation domain.
Good point! The nature of good scaling up and the consequent emergence of modeling capabilities are essential properties for evaluating GenAI nowadays, which has been explored in Lumina-T2X -- scaling up DiT from 0.6B to 7B. However, in this paper, since the scale of data we use is much smaller than the leading open-source models (e.g., 30M in Lumina-Next V.S. over 1B in SD3), the benefits of further scaling up the model to more parameters are not significant. Therefore, we adopt a 2B model to achieve a sweet point between performance and efficiency. The focus of this paper is more on how to make the DiT-based models achieve more stable training, higher inference quality and speed. In addition, we would like to highlight the additional contribution presented in our paper, as detailed in the Appendix. These include (1) zero-shot multilingual generation, (2) flexible visual recognition, and (3) text-to-multiview/music/audio/point clouds generation, all of which are yet to be explored for flow-based diffusion transformers. | Summary: The paper introduces a new multi-modal generation model, Lumina-Next, which extends the previous Lumina-T2X approach by several innovations: i) 3D rotary position embedding (RoPE) ii) extra normalization to stablize training iii) frequency- and time- aware scaled RoPE for training free resolution extrapolation iv) improved time scheduling and higher-order ODE solvers for reverse sampling. Lumina-Next aims to achieve faster, better, and more efficient generation on multiple text-to-X tasks, e.g. text-to-image/audio/point-cloud, etc.
Strengths: Topic: the paper studies multi-modal generation using a unified framework, which would be of great interest to the community. The paper also promises to release the source code to support future research
Methodology: the proposed technical extensions for Lumina-T2X are technically sound.
Experiment: Lumina-Next demonstrates promising qualitative results on text-to-image generation, training-free resolution extrapolation, and text-to-point-cloud generation. The paper also provides quantitative evaluation on text-to-audio/music generation.
Weaknesses: The reviewer appreciates the authors' effort on improving text-to-X generation. However, as a technical submission for NeurIPS, the paper is not fully convincing, as it provides limited quantitative evaluation.
As a representative reader, the reviewer would expect
i) quantitative evaluations on all generation tasks
ii) extra apples-to-apples evaluations on any resolution recognition, as for now, there is only one baseline, DeiT-Base. The comparison is also apples-to-oranges as the models have different architectures and were trained on different data.
iii) comprehensive quantitative ablations on the proposed technical extensions, a) 3D RoPE, b) extra normalization, c) frequency- and time- aware scaled RoPE, d) improved time scheduling and higher-order ODE solvers for reverse sampling, etc
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations have been discussed in the Appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Quantitative evaluations on all generation tasks.
Thank you for the suggestion. We have further supplemented the quantitative comparison experiments. Due to time constraints, we only compared Lumina-Next with representative SoTA T2I models, SD3 and PixArt. To better illustrate aesthetics and semantic ability, we conduct an AI preference study to evaluate Lumina-Next against other text-to-image models, since conventional metrics such as FID and CLIP-Score may not accurately reflect generation quality. Following PixArt, we employ GPT-4o, the SoTA multimodal LLM exhibiting strong alignment with human preference, as our evaluator to vote based on image quality. As shown in the following table, Lumina-Next demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, ~100x more than our models, and PixArt, which is already a training-efficient model, still uses 3x more training compute than ours. However, we have to admit that Lumina-Next still underperforms these SoTA models in terms of text-image alignment and compositional generation, due to inadequate data and training.
| Model | Winrate |
| --- | --- |
| SD3 | 69.5% |
| PixArt | 43.6% |
### Q2: Extra apples-to-apples evaluations on any resolution recognition.
Thanks for pointing this out! The lack of any-resolution fine-tuning could indeed lead to unfair comparisons. To address this, we have added experiments with and without any-resolution fine-tuning for any-resolution inference. We also included two additional comparison methods: PVT-Base and Swin-Base. The experimental results show that under completely consistent settings, Next-DiT still achieves better performance than DeiT-Base and PVT-Base, demonstrating the versatility of Next-DiT as a representation learning network for arbitrary resolutions. However, compared to Swin-Base, an advanced architecture with a sophisticated design for visual recognition tasks, Next-DiT still has room for improvement, which objectively highlights the superiority of specialized models in dedicated tasks. We have supplemented these results in the revised version of our paper.
| Model | Size | Training Resolution | Inference Resolution | Results |
| --- | --- | --- | --- | --- |
| Deit-B | 86M | 224✕224 | Any-resolution | 67.2 |
| PVT-B | 61M | 224✕224 | Any-resolution | 71.6 |
| Swin-B | 88M | 224✕224 | Any-resolution | 74.1 |
| Next-DiT-B | 86M | 224✕224 | Any-resolution | 72.6 |
| Deit-B | 86M | 224✕224 + Any-resolution Fine-tuning | Any-resolution | 82.9 |
| PVT-B | 61M | 224✕224 + Any-resolution Fine-tuning | Any-resolution | 83.2 |
| Swin-B | 88M | 224✕224 + Any-resolution Fine-tuning | Any-resolution | 85.3 |
| Next-DiT-B | 86M | 224✕224 + Any-resolution Fine-tuning | Any-resolution | 84.2 |
### Q3: Comprehensive quantitative ablations on the proposed technical extensions, a) 3D RoPE, b) extra normalization, c) frequency- and time- aware scaled RoPE, d) improved time scheduling and higher-order ODE solvers for reverse sampling, etc.
Thanks for the advice! Due to time constraints, we are unable to provide fully independent ablation experiments for 3D RoPE and the extra normalization here. However, our results in Fig. 6 demonstrate that Next-DiT, which combines the aforementioned techniques, achieves an overall improvement on the ImageNet benchmark compared to the Flag-DiT used by Lumina-T2X. Additionally, in the table below, we supplement quantitative ablation studies on the Lumina-Next T2I model to verify the effectiveness of our resolution extrapolation and inference acceleration techniques. We have added these results to our main paper.
| Method | Resolution | CLIP Score | FID |
| --- | --- | --- | --- |
| Position Extrapolation | 2048✕2048 | 28.01 | 81.54 |
| Position Interpolation | 2048✕2048 | 27.36 | 113.24 |
| NTK-aware Scaled RoPE | 2048✕2048 | 28.21 | 61.89 |
| Frequency-aware Scaled RoPE | 2048✕2048 | 28.52 | 59.92 |
| Time-aware Scaled RoPE | 2048✕2048 | 28.93 | 58.66 |
| Method | Resolution | Steps | CLIP Score | FID |
| --- | --- | --- | --- | --- |
| Uniform | 1024✕1024 | 10 | 26.87 | 78.23 |
| Rational | 1024✕1024 | 10 | 28.57 | 64.12 |
| Sigmoid | 1024✕1024 | 10 | 28.39 | 63.40 | | Summary: This paper introduces the next generation of Lumina-T2X, Lumina-Next, which offers improved architecture, a scaled dataset, optimized sampling techniques, and a more efficient context extrapolation strategy. The improved architecture shows faster convergence rates, while the optimized sampling technique enables high-quality text-to-image generation with fewer steps. Through visual results, the authors validate the improvements provided by refining Flag-DiT in Lumina-T2X.
Strengths: * Next-DiT brings excellent high-resolution extrapolation generation.
* Compared to previous work, it provides better generation quality under a few-step generation setting.
Weaknesses: * There is a lack of quantitative experiment comparison results on some Text2Image benchmarks to demonstrate the superiority and effectiveness of the proposed method. Most of the comparisons are shown through visualization.
Technical Quality: 2
Clarity: 2
Questions for Authors: * Text-to-image benchmark result should be evaluated and reported
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: There is a lack of quantitative experiment comparison results on some Text2Image benchmarks to demonstrate the superiority and effectiveness of the proposed method.
Thank you for the suggestion. We have further supplemented the quantitative comparison experiments.
Due to time constraints, we only compared Lumina-Next with representative SoTA T2I models, SD3 and PixArt. To better illustrate aesthetics and semantic ability, we conduct an AI preference study to evaluate Lumina-Next against other text-to-image models, since conventional metrics such as FID and CLIP-Score may not accurately reflect generation quality. Following PixArt, we employ GPT-4o, the SoTA multimodal LLM exhibiting strong alignment with human preference, as our evaluator to vote based on image quality. As shown in the following table, Lumina-Next demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, ~100x more than our model; PixArt, which is already a training-efficient model, still uses 3x more training compute than ours. However, we have to admit that Lumina-Next still underperforms these SoTA models in terms of text-image alignment and compositional generation, due to inadequate data and training.
| Model | Winrate |
| --- | --- |
| SD3 | 69.5% |
| PixArt | 43.6% |
In addition to the ImageNet experiments in our paper, we also conduct various ablation studies to verify the effectiveness of each proposed component.
| Method | Resolution | CLIP Score | FID |
| --- | --- | --- | --- |
| Position Extrapolation | 2048✕2048 | 28.01 | 81.54 |
| Position Interpolation | 2048✕2048 | 27.36 | 113.24 |
| NTK-aware Scaled RoPE | 2048✕2048 | 28.21 | 61.89 |
| Frequency-aware Scaled RoPE | 2048✕2048 | 28.52 | 59.92 |
| Time-aware Scaled RoPE | 2048✕2048 | 28.93 | 58.66 |
| Method | Resolution | Steps | CLIP Score | FID |
| --- | --- | --- | --- | --- |
| Uniform | 1024✕1024 | 10 | 26.87 | 78.23 |
| Rational | 1024✕1024 | 10 | 28.57 | 64.12 |
| Sigmoid | 1024✕1024 | 10 | 28.39 | 63.40 | | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Lumina-T2X encounters challenges including training instability, slow inference, and extrapolation artifacts. This paper introduces Lumina-Next, which enhances Lumina-T2X with an improved architecture, a scaled dataset, optimized sampling techniques, and a better context extrapolation strategy.
On the architecture side, they replace 1D RoPE with 3D RoPE to eliminate inappropriate positional priors when modeling images and videos with the attention mechanism. They further remove all learnable identifiers in Flag-DiT and introduce the sandwich normalization block in attention modules to control activation magnitudes.
On the context extrapolation side, the authors propose a Frequency-Aware Scaled RoPE, reducing content repetition during extrapolation, and a novel Time-Aware Scaled RoPE for diffusion transformers to generate high-resolution images with global consistency and local details.
On the sampling techniques side, the authors propose time schedules tailored for flow models to minimize discretization errors. They further combine the optimized schedules with higher-order ODE solvers, achieving high-quality text-to-image samples within 5 to 10 steps.
With the improved architecture, context extrapolation, and fast sampling techniques, the authors show that Lumina-Next yields strong generation capabilities, such as generating high-quality images much larger than its training resolution and multilingual text-to-image generation.
Furthermore, they extend Lumina-Next’s versatility to other modalities, such as multiviews, audio, music, and point clouds, with minimal modifications, achieving superior results across these diverse applications.
Strengths: - The paper is well written with extensive results, applications and details, which is valuable to the community.
- The paper provides practical improvements over architecture, dataset, sampling techniques, and context extrapolation strategy.
- The rethinking of existing model design is appreciated, especially showing a much smaller network can get better performance.
Weaknesses: - the changes over Lumina-T2X are a bit incremental, considering Lumina-T2X was released just a month before the submission. Some of the designs in Lumina-T2X itself, such as learnable tokens, are not essential and only exist in Lumina-T2X, so the improvement (removing them) is not applicable to other DiTs; removing extra learnable tokens itself can hardly be considered an innovation.
- the 3D RoPE does not show an advantage over native designs, such as normal 2D + temporal positional embedding. There is no application showing the usage of 3D RoPE, such as video generation.
- the ImageNet experiments can hardly justify the improvements. The comparison between Lumina-Next and SiT/Flag-DiT is not clear, such as whether the model sizes are the same.
- there is a lack of explanation why model size can be reduced from the Flag-DiT 5B/7B model significantly.
- The adoption of newer and stronger text encoders is also not ablated, so the improvement over Lumia-T2X cannot fully be attributed to the proposed changes.
- There is a lack of quantitative evaluation metrics such as TIFA and ImageReward scores for fair comparison across models.
Technical Quality: 3
Clarity: 3
Questions for Authors: With all those changes on controlling the activation magnitudes, what about simply using grad clip norm to reduce gradient norms?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There are more interesting contents in the appendix, which can be moved to the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Incremental Changes.
Good point! We agree that Lumina-Next is largely based on Lumina-T2X with several improvements. However, we would like to highlight that these changes are made after comprehensive examinations and are essential for scaling this flow-based diffusion transformer or applying it in diverse fields. For example, we discover the exploding network activation and then propose to use sandwich normalization, which effectively eliminates the instability during both training and inference. In addition, we would like to highlight the additional contribution presented in our paper, as detailed in the Appendix. These include (1) zero-shot multilingual generation, (2) flexible visual recognition, and (3) text-to-multiview/music/audio/point clouds generation, all of which are yet to be explored for flow-based diffusion transformers.
### Q2: The advantage of 3D rope.
Well said! The current implementation of 3D RoPE is almost equivalent to normal 2D RoPE + temporal positional embedding. We use 3D RoPE to carry over one of the excellent features of Lumina-T2X, a unified framework for different modalities. For example, we adopt 3D RoPE for multiview generation as a natural extension, since multiview data can be treated as a sequence of image frames. The high-quality and consistent multiview results shown in Figure 25 verify the effectiveness of 3D RoPE in modeling 3D signals. Due to current limitations in training resources and data availability, Lumina-Next is still a unified multimodal framework with independently trained models. However, we consider a unified multimodal generative model a necessary future step, so this unified framework can serve as an essential foundation for the next stage.
### Q3: The ImageNet experiments can hardly justify the improvements.
Sorry for the confusion. Our comparison in Figure 5 is completely fair, with the same parameter settings and model sizes. Besides, results on the ImageNet benchmark are widely regarded as consistent when models scale up and are commonly used as a preliminary validation test to reduce trial-and-error costs; e.g., EDM [3] conducts extensive experiments on ImageNet to gain valuable insights, which are further proven to be transferable to various scales and fields.
[3] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." Advances in neural information processing systems 35 (2022): 26565-26577.
### Q4: Lack of explanation about model size.
Thanks for posing this question. Since the scale of data we use is much smaller than that of some open-source models (e.g., 30M in Lumina-Next vs. over 1B in SD3), we found that, given the current data scale, the benefits of further scaling the model to more parameters are not significant. This aligns with the finding in Lumina-T2X, where the 3B and 7B models achieved similar results. To strike a sweet spot between performance and efficiency, we adopted a 2B model, making it more accessible for contributors to run Lumina-Next on their personal devices. We have added more explanation in the revised version of our paper.
### Q5: The adoption of Gemma.
Please allow us to clarify that we chose Gemma 2B mainly for better efficiency, and it is not considered one of our contributions. Actually, Gemma 2B is weaker than the LLaMA 7B used in Lumina-T2X. The influence of using different text encoders is valuable and worth exploring; however, this is not the major focus of our paper. Besides, the effectiveness of our core contributions has been well validated as follows: (i) the experiments on the ImageNet benchmark demonstrate the improvements of Next-DiT over Flag-DiT, and (ii) fair experiments on the Lumina-Next T2I model have confirmed the effectiveness of our resolution extrapolation and inference acceleration techniques. We have further supplemented the quantitative results of ablation studies here.
| Method | CLIP Score | FID |
| --- | --- | --- |
| Position Extrapolation | 28.01 | 81.54 |
| Position Interpolation | 27.36 | 113.24 |
| NTK-aware Scaled RoPE | 28.21 | 61.89 |
| Frequency-aware Scaled RoPE | 28.52 | 59.92 |
| Time-aware Scaled RoPE | 28.93 | 58.66 |
| Method | Steps | CLIP Score | FID |
| --- | --- | --- | --- |
| Uniform | 10 | 26.87 | 78.23 |
| Rational | 10 | 28.57 | 64.12 |
| Sigmoid | 10 | 28.39 | 63.40 |
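To make the schedule comparison above concrete, here is a minimal sketch of how uniform versus warped time schedules for few-step flow-model sampling can be constructed. The exact rational and sigmoid forms (and the `shift` parameter) are illustrative assumptions, not the paper's formulas:

```python
import numpy as np

def time_schedule(num_steps, kind="uniform", shift=3.0):
    """Illustrative time schedules for flow-model sampling.

    Maps uniform grid points u in [0, 1] to sampling times t.
    The 'rational' and 'sigmoid' warps below are plausible sketches
    of nonuniform schedules, not the paper's exact formulas.
    """
    u = np.linspace(0.0, 1.0, num_steps + 1)
    if kind == "uniform":
        t = u
    elif kind == "rational":
        # rational map: concentrates steps toward one end of [0, 1]
        t = shift * u / (1.0 + (shift - 1.0) * u)
    elif kind == "sigmoid":
        # sigmoid warp, renormalized so t spans exactly [0, 1]
        s = 1.0 / (1.0 + np.exp(-shift * (u - 0.5)))
        t = (s - s[0]) / (s[-1] - s[0])
    else:
        raise ValueError(f"unknown schedule: {kind}")
    return t
```

The warped grids can then be fed to any ODE solver in place of a uniform step size, which is where the gains at 10 steps in the table above would come from.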
### Q6: Lack of quantitative evaluation metrics.
Thank you for the suggestion. Due to time constraints, we only compared Lumina-Next with representative SoTA T2I models, SD3 and PixArt. To better illustrate aesthetics and semantic ability, we conduct an AI preference study to evaluate Lumina-Next against other text-to-image models, since conventional metrics such as FID and CLIP-Score may not accurately reflect generation quality. Following PixArt, we employ GPT-4o as our evaluator to vote based on image quality. As shown in the following table, Lumina-Next demonstrates competitive performance with advanced text-to-image models including PixArt and SD3. Note that SD3 uses over 1B text-image pairs, ~100x more than our model; PixArt, which is already a training-efficient model, still uses 3x more training compute than ours. However, we have to admit that Lumina-Next still underperforms these SoTA models in terms of text-image alignment and compositional generation, due to inadequate data and training.
| Model | Winrate |
| --- | --- |
| SD3 | 69.5% |
| PixArt | 43.6% |
### Q7: What about simply using grad clip norm.
Good point! We have tried techniques like gradient clipping, but they are not that effective in preventing model training from crashing. This finding has also been reported in other papers such as ViT-22B [4]. We have included these empirical findings in the paper, hoping to provide helpful insights to the community.
[4] Dehghani, Mostafa, et al. "Scaling vision transformers to 22 billion parameters." *International Conference on Machine Learning*. PMLR, 2023. | null | null | null | null | null | null |
DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection | Accept (poster) | Summary: This paper proposes a deep-learning-based method for performing "knockoff"-based feature selection. The main contribution is the proposal of a new optimization problem, which incorporates a four-part loss. The method is evaluated on synthetic, semi-synthetic, and real-world data, showing comparable and, at times, improved performance over relevant baselines.
Strengths: **Experiments.** At times, this paper shows that the proposed method improves the power for a fixed FDR.
Weaknesses: **Motivation.** It's not clear why this problem is of interest. It is true that many papers have been published on this topic. But the authors motivate the problem not by describing why knockoffs are particularly important; they simply say that finding informative features is "mission impossible," which doesn't give the reader any idea of why the problem is relevant.
**Organization.** To articulate a list of contributions, a paper must clearly answer two fundamental questions: (1) "What is the problem under consideration?" and (2) "How does the proposed method address this problem?" Unfortunately, it's not clear that this paper answers the first or second questions. Here are some comments:
1. The authors claim that their method reduces reconstructability. However, it's not clear what "reconstructability" is, or how it would be formally defined. The idea of reconstructability only arises in the discussion of related work, and even here, the problem is not presented in a way that is distinct from the solution presented in [4].
2. Section 2.1 is confusing. More care should be taken w/r/t the introduction of the problem of generating a knockoff sample. The authors adopt quite a bit of notation from [11]. However, whereas [11] presents the method with examples and clearly articulated goals, this paper's summary is so coarse/brief that any reader who has not read this past work will have little to no chance of understanding the problem of finding knockoffs.
3. Deferring the definition of FDR to the appendix doesn't make sense to me. This quantity is of fundamental interest; it is measured throughout the experiments, and theorems are presented (in part from past work) which rely on this definition. The paper is incomplete without including this definition in the main text.
4. Theorem 2.1 is not clearly stated. The "aforementioned property" is not inferable from context. The authors should make it clear that (1) they are referring to Theorem 1 in [11] and (2) that the "aforementioned property" is the choice of threshold in (3).
5. Readability would be improved if the authors included an example of the swap function, such as the example in [11] after Def. 2.
6. The authors seem to use (X,Z) and [X,Z] freely and interchangeably. This should be clarified. [11] sticks with (X,Z) (rather than square brackets) and I would recommend that the authors do the same.
7. Make it abundantly clear that whenever the authors say "power," they mean low type II error. This may not be clear to readers coming from deep learning.
**Main method.** A few comments on the main method.
1. The results are presented in a confusing order. (4) is presented before the authors know what any of the pieces are. Readers must wait around a page to get the details, which will frustrate. Furthermore, it's not clear why (4) is a fundamentally "good" objective, or where it came from. As written, it feels like the authors pulled this problem out of a hat. Why does posing this as a minimax problem (as opposed to, e.g., a constrained problem) make sense? Why do the authors choose these two loss terms (and not others)? And how should one choose all of the trade-off parameters?
2. The authors seem to imply that [56] has a problem. It's not clear what this problem is; the "is far from enough" sentence doesn't give any quantitative evidence as far as I can tell. The result of this is that when the authors say "To fix it, ...," it's unclear what problem they are fixing.
3. The authors borrow a loss term from the risk extrapolation (REx) paper from domain generalization. The authors should justify this choice, as well as the other loss functions that they append onto their objective. As written, it feels as if the authors are somewhat arbitrarily adding on loss functions, and there isn't a clear principle driving the decision making. This gives the paper a largely heuristic feel. And while the authors claim to provide ablations in the appendix, it's not clear where they are. The authors say that the results are "in the associated tables," but it's not clear what tables these are. A similar concern arises w/r/t Figure 9.
4. The trade-off between L_SL and L_DRL isn't clearly presented. This content should be presented in the main text; based on the figure in Appendix G, it's not clear that the trade-off always holds. Furthermore, even if it did hold, the contribution that the authors are "the first to observe this phenomenon" seems to be trivially true: If no one has used this loss before, then of course no one has seen this trade-off.
**Experiments.** Some comments on the experiments. The experiments seem to show marginal improvements for the proposed method. While DeepDRK does at times achieve higher power for a fixed FDR, the results are often fairly marginal, e.g., in Figure 5. Given the shortcomings in the presentation of the method, it's not clear that these results do enough to justify acceptance.
Technical Quality: 2
Clarity: 1
Questions for Authors: See above.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Please refer to the following for our response.
Comment: it's not clear why (4) is fundamentally "good" …
- Response: Thank you for the comments and questions. We will add more description of the motivation for the two terms in the revised manuscript. Essentially, the two general terms are introduced to enforce the swap property and to reduce collinearity, respectively. Losses of this form appear in most existing deep-learning-based knockoff methods [1][2][3][4]. Nonetheless, our proposed version, as empirically verified in all the experiments presented in the paper, achieves good performance during FDR-controlled feature selection. Adopting the min-max formulation is natural, as we want to minimize the total loss over the swapper that maximizes the distance between $(X, \tilde{X})$ and $(X, \tilde{X})_{\text{swap}(B)}$.
For the hyperparameters, we chose them based on performance across different dataset experiments and identified that the configuration outlined in Table 2 in general provides well-controlled FDR and high power. All results (with different datasets) presented in the paper rely on this single setup, suggesting the adaptability of this method across different application scenarios.
- [1] Romano, Y., et al. Deep knockoffs. Journal of the American Statistical Association, 115(532):1861–1872, 2020.
- [2] Jordon, J., et al. Generating knockoffs for feature selection using generative adversarial networks. In International Conference on Learning Representations, 2018.
- [3] Sudarshan, M., et al. Deep direct likelihood knockoffs. Advances in Neural Information Processing Systems, 33:5036–5046, 2020.
- [4] Masud, S. B., et al. Multivariate rank via entropic optimal transport: sample efficiency and generative modeling. arXiv preprint arXiv:2111.00043, 2021.
Comment: The authors seem to imply that [56] has a problem. It's not clear…
- Response: Thanks for pointing out the ambiguity. [56] uses a single swapper during training, which results in insufficient FDR control and selection power (see Figure 2). The necessity of introducing the multi-swapper setup can be justified by the nonlinearity of learning the knockoff copies from $X$ in the deep learning setup. Given that this optimization is nonlinear, there could be multiple local optima at which the swap property is violated. Introducing the multi-swapper scheme pushes the model toward learning a good knockoff that improves feature selection performance. We have included an ablation study on the effectiveness of this multi-swapper setup; please refer to Figure 9 for a comparison between the DeepDRK model and the K=1 model. We can see that a single-swapper setup results in failure of FDR control.
Comment: The authors borrow a loss term from the risk extrapolation (REx) …
- Response: The first SWD term in Eq. (5) is introduced to directly enforce the swap property outlined in Eq. (1). There are multiple possible choices, such as KL divergence, Wasserstein distance, etc. We choose SWD over other measures due to its runtime benefit (see Table 1 for the runtime comparison to other methods). The rationale behind the introduction of REx resides in the lack of FDR controllability in the single-swapper setup. Empirically, in the case with only one swapper, we observe that the null knockoff statistics are not symmetric around zero, so it is hard to control the FDR (see Figure 9 and Figure 13 in Appendix K2 and K3 for empirical results). This indicates that the swap property is not well enforced. Thus we propose a multi-swapper setup. Each swapper represents one adversarial environment, so we are optimizing under multiple adversarial attacks. In this case, the REx term is naturally introduced to fight multiple adversarial attacks simultaneously. Results can also be found in Figure 9 and Figure 13. The last $L\_\text{swapper}$ term is to ensure different adversarial environments. We have provided explanations of these terms in the paragraphs below Eq. (5) on page 4 and have improved the writing for better clarity in the revised paper.
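As a rough illustration of a sliced-Wasserstein-based swap loss of the kind discussed above, the joint samples $[X, \tilde{X}]$ can be compared against a swapped copy via random 1D projections. This is a generic sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def sliced_wasserstein(a, b, num_proj=64, seed=0):
    """Monte-Carlo estimate of the sliced 1-Wasserstein distance.

    a, b: (n, d) sample matrices with equal n. Projects both onto
    random unit directions; the 1D Wasserstein distance between
    equal-size empirical samples is the mean absolute difference
    of their sorted projections.
    """
    rng = np.random.default_rng(seed)
    d = a.shape[1]
    total = 0.0
    for _ in range(num_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        pa, pb = np.sort(a @ theta), np.sort(b @ theta)
        total += np.mean(np.abs(pa - pb))
    return total / num_proj

def swap_loss(x, x_knockoff, swap_mask, **kw):
    """SWD between [X, X~] and its copy with features in swap_mask swapped."""
    joint = np.concatenate([x, x_knockoff], axis=1)
    xs = np.where(swap_mask, x_knockoff, x)   # originals after the swap
    ks = np.where(swap_mask, x, x_knockoff)   # knockoffs after the swap
    swapped = np.concatenate([xs, ks], axis=1)
    return sliced_wasserstein(joint, swapped, **kw)
```

A perfect knockoff drives this loss to zero for every swap mask, which is why training it adversarially over multiple swappers (masks) is a natural min-max formulation.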
Comment: The trade-off between L_SL and L_DRL isn't clearly presented. This content…
- Response: We empirically observe a competition between the two losses from the fact that when one loss decreases, the other increases, and it is hard to find a set of hyper-parameters $[\lambda_1, \lambda_2]$ that strikes a balance. It is not a training problem because, as stated in Section 3.2, we observe the same phenomenon in all baseline knockoff generation algorithms in this paper. The baseline methods of course don't define the losses in the exact same way as we do, but they all have corresponding loss terms that serve the same purposes. Thus the observation is well worth pointing out, and the underlying reasoning is also stated in our paper: minimizing the swap loss, which corresponds to FDR control, is the same as controlling Type I error. Similarly, minimizing the dependency regularization loss is to control Type II error. With a fixed number of observations, it is well known that Type I and Type II errors cannot decrease at the same time after reaching a certain threshold. This is why the two losses always compete with each other.
Comment: The experiments seem to show marginal…
- Response: Looking only at the right column of Figure 5, the improvement of our method is not significant, because the linear synthetic rule is an easy task on which all methods perform quite well. However, when a tanh synthetic rule is applied, i.e., the left column of Figure 5, it is apparent that DeepDRK is the only method that controls FDR under the target level while enjoying high selection power. Other results, such as Figures 2 and 3, also suggest that for complex distributions and low sample sizes, DeepDRK performs significantly better than baseline algorithms.
---
Rebuttal 2:
Comment: Thanks for your valuable comments. Due to the page limit, we provided brief introductions to the standard knockoff definitions and the corresponding selection procedures, as the general motivation and idea are widely known in the statistics community. We do understand that this could inconvenience readers unfamiliar with the subject, and in the revised manuscript we have added more details. We have also reorganized the paper and fixed typos based on your advice.
Feature selection, or variable selection, is a data-driven method to determine which of the $p$ features $(X\_1, \ldots, X\_p)$ are statistically related to the response $y$ in a linear model. Feature selection methods are important in a number of application areas, such as medical healthcare, economics, and political science [1]. Among various selection procedures, the model-X knockoff emerges as useful in providing an effective way to control the false discovery rate (FDR) of the selection result, regardless of the regression algorithm. To improve readability, we have moved the definition of FDR from Appendix A to Section 2.1 and added examples in the Appendix to better illustrate the swap property.
The knockoff selection method proposed in most of the statistics literature [1] has two limitations. First, the assumption that $X$ is Gaussian is often unrealistic, and its underlying distribution is usually unknown. Second, simply decorrelating $X\_j$ and $\tilde{X}\_j$ is not enough whenever $X\_j$ is almost surely equal to a linear combination of $X\_{-j}$ [2], as stated in Appendix B. This is referred to as "reconstructability," a population counterpart of collinearity; see Section 2.3 and Appendix B for more discussion. When there is reconstructability, the resulting knockoff adds collinearity to the regressor $(X,\tilde{X})$, deteriorating the regression and thus the selection result. Due to these two drawbacks, we propose DeepDRK, a framework that performs powerful knockoff-based feature selection for unknown $X$ distributions.
- [1] Candes, E., Fan, Y., Janson, L., & Lv, J. (2018). Panning for gold:‘model-X’knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(3), 551-577.
- [2] Spector, A., & Janson, L. (2022). Powerful knockoffs via minimizing reconstructability. The Annals of Statistics, 50(1), 252-276.
---
Rebuttal Comment 2.1:
Comment: Dear reviewer,
Thanks a lot for your time and effort. As the discussion period will end soon, we would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns. Please reach out if you have further comments. Thanks again! | Summary: This paper considers the construction of knockoff features
in the variable selection framework of model-X knockoffs.
The authors propose a deep learning-based method for
generating knockoff features. Extensive numerical simulation and real-data examples show
that the proposed method has advantageous performance compared
with other state-of-the-art methods, especially when the covariate distribution
is complex and when the sample size is relatively small.
Strengths: The paper considers one fundamental problem in the model-X knockoff
pipeline --- the construction of valid and high-quality knockoffs.
This is key to the applicability of the knockoffs method.
Substantial engineering efforts have been made in this work and
the proposed method is demonstrated to have satisfactory performance
in extensive numerical experiments. This is a significant contribution to the
knockoffs literature.
Weaknesses: The proposed method is based mostly on empirics and heuristics,
with a theoretical understanding lacking.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The introduction of the dependency regularized perturbation is
for power boosting, and yet in the ablation study, this feature seems to help
more with FDR control as opposed to power-boosting. Is there a reason?
2. On page 4, line 132, the definition of REx: I am confused since
the LHS does not depend on $i$ but the RHS depends on $i$.
3. On page 4, line 148: "correltaion" -> "correlation"
4. On page 5, line 153: I am confused by the statement that "the knockoff
$X_j$ should be independently sampled from $p_j(\cdot \mid X_{-j})$"
--- this alone is not enough to ensure the knockoff property.
5. On page 5, line 173, what is $\alpha$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and advice. Please checkout our responses below.
Question: The introduction of the dependency regularized perturbation is for power boosting, and yet in the ablation study, this feature seems to help more with FDR control as opposed to power-boosting. Is there a reason?
- Response: Thank you for your question. In principle, DRP should be considered an approach to reduce collinearity in the concatenated design matrix at the cost of increasing the swap loss. It is true that the design of the DRP technique does not aim at FDR reduction. Nonetheless, looking at the knockoff statistics $W\_j$ in Figure 4 and Figure 13, we clearly observe that DRP tends to make the null statistics symmetric around zero. Since the knockoff statistics are the variables used for feature selection, this symmetry reduces the number of negative $W\_j$'s to be selected, according to the selection criterion in Eq. (3). This leads to a lower FDR.
Question: On page 4, line 132, the definition of REx: I am confused since the LHS does not depend on i but the RHS depends on i
- Response: Thank you for pointing this out. We will revise the equation to $\text{REx}(X, \tilde{X}\_{\theta}, \\{S\_{\omega\_i}\\}_{i=1}^K) = \widehat{\text{Var}}\_{S\_{\omega}}(\text{SWD}([X, \tilde{X}\_\theta], [X, \tilde{X}\_\theta]\_{S\_{\omega_i}}); i \in [K])$ to make the LHS and RHS consistent.
Question: On page 4, line 148: "correltaion" -> "correlation"
- Response: Thank you for pointing this out, we will make corrections in the revision.
Question: On page 5, line 153: I am confused by the statement that "the knockoff $\tilde{X}\_j$ should be independently sampled from $p(\cdot \vert X\_{-j})$ --- this alone is not enough to ensure the knockoff property.
- Response: Thanks for pointing this out. Indeed the description is a bit hand-wavy. To be precise, we want to sample $\tilde{X}\_j$ from $p\_j(\cdot | X\_{-j})$ such that $X\_j$ and $\tilde{X}\_j$ are as weakly dependent as possible. This ensures the swap property while reducing reconstructability. We will revise the description in the manuscript.
Question: On page 5, line 173, what is $\alpha$
- Response: Thank you for pointing this out. This is a typo and it should be $\alpha\_n$. We will correct it in the revision.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thanks a lot for your time and effort. As the discussion period will end soon, we would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns. Please reach out if you have further comments. Thanks again! | Summary: The work introduces DeepDRK, a new algorithm for improving FDR in the model-X knockoff framework. In this framework, a knockoff covariate is generated for each existing covariate, and the knockoff covariates have to satisfy the swap property. Given a knockoff statistic that satisfies the flip-sign property, one can perform feature selection with guaranteed false discovery rate by training a regression/classification model on the concatenated vector of original and knockoff covariates and then using any arbitrary feature importance score. Therefore, the main challenge in this framework is generating the knockoffs such that they satisfy the strong swap property: swapping any arbitrary subset of covariates with their knockoffs should not change the distribution of the concatenated vector.
Given that most real-world data does not follow a Gaussian or Gaussian-mixture distribution, generating knockoffs that satisfy the swap property is a challenge. Existing work focuses on using deep generative models (e.g., GANs) to handle this challenge. The paper argues that, for continuous covariates, this approach will result in reconstructability: overfitting on the data will make the distribution of knockoffs too similar to the data distribution, thereby making the concatenated vector too multicollinear for any feature-importance method to have discovery power.
The DeepDRK framework seeks to tackle reconstructability by training the deep model in such a way that not only satisfies the swap property but also reduces reconstructability. This is achieved in two steps:
Firstly, by combining (1) an adversarial swap loss, in which a group of swapper models competes adversarially against the knockoff generator model, with (2) a dependence regularization loss that reduces reconstructability by directly penalizing the sliced-Wasserstein correlation between the original and knockoff vectors.
Secondly, by the post-training dependency regularization, which interpolates between the generated knockoff data table and the row-permuted original data table. By making the interpolation weaker or stronger, one can trade off how well the swap condition holds (and therefore how good the FDR control is) against a loss of power, and vice versa.
The paper then shows experimental evidence in synthetic and semi-synthetic scenarios (where the true covariate-response relationship is known). The argument is that, compared to existing algorithms that have better power in low-sample settings, DeepDRK's empirical FDR stays close to the selected level more consistently; this is supported both by the FDR vs. power results and by the original vs. knockoff feature-importance distributions.
Strengths: All in all, the main novelties in the paper are:
- main one is the post-training Dependency Regularization Perturbation
- using multi swappers and SWD criterion
- Experimenting on synthetic distributions other than GMMs
- Showing that unlike other deep learning based knockoff algorithms, DeepDRK has more consistent FDR preservation. The ablation study really helps with understanding the benefit of each of the algorithmic choices.
Weaknesses: - It seems necessary to see 4.1 results for various beta scales used for generating the response variable to compare different methods in various levels of difficulties to make sure the value of 15 has not been selected to favor DeepDRK. I think it'd be interesting to have the results in a 2D FDR vs Power scatter plot for better comparison.
- I'd be interested in seeing mean + confidence interval performance plots for Figure 2 and Figure 3 using various distribution parameter selections (e.g. for the GMM distribution using various $\pi$ parameters in addition to more $\rho_\text{base}$ values, etc.)
- The process for choosing the value of alpha=0.5 needs to be clarified as "it leads to consistent performance" might suggest choice based on final results which would be a case of overfitting. The discussion related to Figure 8 seems to confirm overfitting.
Technical Quality: 4
Clarity: 4
Questions for Authors: Discussed above
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: There needs to be a discussion regarding the choice of hyperparameters especially the alpha parameter as in real-world applications of knockoffs it's impossible to know the best performing value in advance. Given the results in Figure 8, there seems to be high sensitivity to the alpha value which would make the framework unreliable. At the very least, the authors need to show the superiority of their framework to existing methods in a large interval of alpha values.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and advice. Please check out our responses below.
Comment: It seems necessary to see 4.1 results for various beta scales used for generating the response variable to compare different methods in various levels of difficulties to make sure the value of 15 has not been selected to favor DeepDRK. I think it'd be interesting to have the results in a 2D FDR vs Power scatter plot for better comparison.
- Response: Thank you for the suggestions. We have provided experimental results concerning 4 different sets of the scale value: 5, 10, 15, 20. Results are presented in Figure 1 of the attached PDF file in the Author Rebuttal section. From the figure, we can clearly observe that the proposed DeepDRK outperforms other methods for a controlled FDR with relatively higher power (e.g. points are closer to the top left region). In addition, this figure also conveys a message that the commonly used $\frac{p}{\sqrt{n}}$ (with n = 2000) setup in existing works is a relatively easy task where most methods can achieve controlled FDR with high power.
Comment: I'd be interested in seeing mean + confidence interval performance plots for Figure 2 and Figure 3 using various distribution parameter selections (e.g. for GMM distribution using various $\pi$ parameters in addition to to more $\rho_\text{base}$, etc)
- Response: Thank you for the comments. We have provided results for the GMM distribution with 10 sets of different $\pi$ parameters. Results are presented in Figure 2 and Table 1 of the attached PDF. This should complement Figures 2 and 3 of the main manuscript on the change of $\rho_\text{base}$ for the GMM distributions. From Figure 2, we visually observe that performance does not vary significantly for all tested models across the 3 of the 10 $\pi$ setups shown. Among all the models, the proposed DeepDRK is the only one that can control the FDR at the nominal 0.1 level. In addition, we also provide a table that includes the mean, standard deviation, median, and the 5\% and 95\% quantiles over the 10 complete setups. This should demonstrate the robustness of the proposed model in this setting.
Comment: The process for choosing the value of alpha=0.5 needs to be clarified as "it leads to consistent performance" might suggest choice based on final results which would be a case of overfitting. The discussion related to Figure 8 seems to confirm overfitting.
- Response: First of all, please allow us to make corrections on Eq. (8). We realized that Eq. (8) has a missing coefficient $1- \alpha\_n$ in front of $\tilde{X}\_{\theta}$ . Essentially we perform interpolation between $\tilde{X}\_\theta$ and $X\_\text{rp}$ as the former seeks to control FDR for a satisfied swap property while the latter seeks to reduce collinearity for higher power. We look for a balance between the two. The corrected equation is $\tilde{X}^{\text{DRP}}\_{\theta, n} = (1 - \alpha\_n) \cdot \tilde{X}\_\theta + \alpha\_n \cdot X\_{\text{rp}}$ and this will be reflected in the revised manuscript. In addition, Figure 8 was plotted according to an earlier version of Eq. (8), where we take $\tilde{X}^{\text{DRP}}\_{\theta, n} = \alpha\_n \cdot \tilde{X}\_\theta + (1-\alpha\_n) \cdot X\_{\text{rp}}$. In other words, the $\alpha\_n$ presented in Figure 8 refers to the coefficient for $\tilde{X}_\theta$, whereas $\alpha_n$ should refer to the coefficient for $X\_\text{rp}$ in the updated Eq. (8). We have corrected Figure 8 and present the new figure as Figure 3 in the attached PDF.
As for the consistency of the model with respect to the choice of alpha, we want to first point out that the choice of knockoff is not unique [1][2] and there is no closed-form solution for the knockoff variable in non-Gaussian setups. As a result, it is hard to find a value for $\alpha_n$ from the data via procedures like cross-validation. We want to point out that the choice of $\alpha_n$ is not a result of overfitting. According to Figure 8 (i.e. Figure 3 in the updated PDF in the Author Rebuttal section), we clearly observe that choices of $\alpha_n$ around 0.5 result in low FDR with relatively high power across multiple datasets, suggesting its generalizability across a range of different data setups.
- [1] Candes, E., Fan, Y., Janson, L., & Lv, J. (2018). Panning for gold:‘model-X’knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(3), 551-577.
- [2] Spector, A., & Janson, L. (2022). Powerful knockoffs via minimizing reconstructability. The Annals of Statistics, 50(1), 252-276.
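A minimal sketch of the corrected DRP interpolation in Eq. (8) (the array names and NumPy formulation are ours, for illustration):

```python
import numpy as np

def drp(X_knockoff, X, alpha_n=0.5, seed=0):
    """Dependency Regularized Perturbation, per the corrected Eq. (8):
    X_drp = (1 - alpha_n) * X_knockoff + alpha_n * X_rp, where X_rp is a
    row-permuted copy of the original data. Larger alpha_n reduces
    collinearity (more power) at the cost of a weaker swap property."""
    rng = np.random.default_rng(seed)
    X_rp = X[rng.permutation(len(X))]  # row-permuted original data table
    return (1.0 - alpha_n) * X_knockoff + alpha_n * X_rp

X = np.arange(12.0).reshape(6, 2)   # toy original data
X_k = X + 0.1                       # toy generated knockoffs
X_drp = drp(X_k, X, alpha_n=0.5)
```

Setting `alpha_n=0` returns the generated knockoffs unchanged, while `alpha_n=1` discards them in favor of the row-permuted table, matching the trade-off described above.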
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thanks a lot for your time and effort. As the discussion period will end soon, we would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns. Please reach out if you have further comments. Thanks again!
---
Rebuttal Comment 1.2:
Title: Response
Comment: I want to thank the authors for addressing all of my questions comprehensively. The empirical results now seem robust and reliable. I have changed my review scores accordingly. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We have completed more experiments, as suggested in your comments. Please refer to the PDF file for details.
Specifically, Figure 1 refers to additional experiments on the change of $\beta$ coefficient scales. We include three scales, 5, 10, and 20, in addition to the default 15. Namely, we consider $\frac{p}{5\cdot \sqrt{n}}$, $\frac{p}{10\cdot \sqrt{n}}$, $\frac{p}{15\cdot \sqrt{n}}$ and $\frac{p}{20\cdot \sqrt{n}}$ in total.
Figure 2 provides barplots for three different setups for the weights (i.e. $\pi_1$, $\pi_2$ and $\pi_3$) of the components in the Gaussian mixture. They are: 1. [$\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{3}$]; 2. [0.2, 0.3, 0.5]; 3. [0.5, 0.3, 0.2]. The barplots compare the models' FDR and power on these three setups. In addition, we further consider 7 other setups and calculate statistics on the FDR and power (e.g. mean, standard deviation, median and quantiles). The weights are uniformly sampled. The results can be found in Table 1. The weight details are presented below:
- [0.56215729, 0.38386235, 0.05398036]
- [0.42981112, 0.16832496, 0.40186392]
- [0.31650882, 0.32417065, 0.35932052]
- [0.31627876, 0.38820664, 0.2955146]
- [0.43860741, 0.48803919, 0.0733534]
- [0.31395573, 0.04100264, 0.64504163]
- [0.65634908, 0.28201858, 0.06163234]
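One plausible way to draw such uniformly sampled simplex weights (our sketch: a Dirichlet(1, 1, 1) draw is uniform on the 3-component probability simplex, though the authors' exact procedure is not specified):

```python
import numpy as np

rng = np.random.default_rng(0)
# Dirichlet(1, 1, 1) samples are uniformly distributed over the
# probability simplex {w : w_i >= 0, sum_i w_i = 1}.
weights = rng.dirichlet(np.ones(3), size=7)
```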
We also provide the corrected Figure 8 (of the manuscript) in Figure 3 in the PDF file. Essentially, it flipped the X-axis horizontally and nothing else was changed.
Sincerely,
The Authors of the Paper
Pdf: /pdf/36106a03e24723e8e598b486a726fe7ac20e63ef.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Theoretical guarantees in KL for Diffusion Flow Matching | Accept (poster) | Summary: This paper proposes theoretical guarantees for diffusion flow matching, which is one of the most popular generative models. Specifically, it provides an upper bound on the KL divergence between the target distribution and the one learned by the DFM. This work promotes the theoretical understanding of the discretized flow matching model.
Strengths: 1. The paper is clearly written, and the proofs seem to be sound.
2. The authors consider the early-stopping case and remove the Lipschitz assumption of the velocity field.
Weaknesses: 1. Assumption H1 is quite strong that requires both the source and target distributions to have finite 8-th moments.
2. The procedure is standard compared to some theoretical work on SGM. For example, the authors assume an L2-drift-approximation error and use Girsanov theorem for bounding the KL divergence.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. If you in addition assume the velocity field is Lipschitz, will you be able to relax assumption H1 to have finite second moments?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Questions:
_If you in addition assume the velocity field is Lipschitz, will you be able to relax assumption H1 to have finite second moments?_
We thank the reviewer for highlighting this issue. Indeed, for a Lipschitz velocity field, assumption H1 can be relaxed to only require finite second moments. This can be achieved by using the Lipschitz condition to bound the second term in equation (23) and employing the linear representation in equation (5) of the stochastic interpolant.
However, assuming that the velocity field is Lipschitz would come at a heavy price in terms of conditions on the data distribution and the coupling. Lipschitzness certainly does not hold under the assumptions of our work, regardless of whether or not we are in the early-stopping regime; one would need to add stronger assumptions.
Weaknesses:
_Assumption H1 is quite strong that requires both the source and target distributions to have finite 8-th moments._
We consider assumption H1 to be relatively mild, as in most applications, data distributions are compactly supported. Moreover, our paper presents the first convergence analysis for diffusion-type FMs. In contrast, early studies on SGMs required much stronger assumptions on data distributions, beyond the Lipschitz continuity of the score, as seen in [11], [12], and [14]. Furthermore, many of these studies did not achieve convergence bounds with polynomial complexity. While recent convergence guarantees for SGMs work under the finiteness of the second-order moment in the early stopping regime, it should be noted that analyzing DFMs is more challenging, as, contrary to SGMs, DFMs permit to interpolate between any two probability distributions in finite time. We appreciate the reviewer's insights and acknowledge that there is potential for improvements, albeit at the expense of additional technical complexity.
[11] V. D. Bortoli, Convergence of denoising diffusion models under the manifold hypothesis.
[12] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet, Diffusion schrödinger bridge with applications to score-based generative modeling.
[14] A. Block, Y. Mroueh, and A. Rakhlin, Generative modeling with denoising auto-encoders and langevin sampling.
_The procedure is standard compared to some theoretical work on SGM. For example, the authors assume an L2-drift-approximation error and use Girsanov theorem for bounding the KL divergence._
Concerning the assumption on the L2-drift-approximation error, we believe that analyzing the convergence of the proposed DFM model under such a standard assumption is a strength of our work, not a weakness. This demonstrates that our bound is applicable under reasonable conditions.
Additionally, we assert that our procedure is fundamentally different from those proposed in previous works. While the application of the Girsanov theorem is now standard in the literature (see [9,10,11,12]), it is not a distinguishing feature of our approach, but rather a starting point. Beyond that, our method is entirely distinct and is not a straightforward consequence of the Girsanov theorem and assumption H3. The main innovation of the present article consists in using the Markovian projection of the interpolant as a reference measure and being able to control the $L^2$ norm of the adjoint process in the Pontryagin system associated with it. We do so by introducing a novel quantity in the generative model literature (see [13]), namely the so-called “reciprocal characteristic” of a Markov process, which may be viewed as some sort of mean acceleration field. The main efforts in our proofs are directed towards bounding the $L^2$ norm of the reciprocal characteristic whose representation in terms of either conditional moments or higher-order logarithmic derivatives of conditional densities is quite intricate, see page 26. Trying to bound each of these terms separately requires strong assumptions on the initial distributions and couplings leading to suboptimal results. However, using integration by parts both in time and space, and profiting from symmetry properties of the heat kernel, we managed to bound these terms under assumptions comparable to the minimal ones required in the analysis of SGMs. Note that the analysis of the reciprocal characteristic is not required for SGMs (it is always 0) and that controlling it also requires new tricks and ideas, since its representation contains up to three logarithmic derivatives of conditional distributions, whereas the analysis of SGMs requires at most two such derivatives to be analyzed, or, equivalently, to bound conditional covariances.
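For context, the Girsanov-based starting point referred to above can be stated schematically (our informal rendering, for two SDE path measures $\mathbb{P}$, $\mathbb{Q}$ with drifts $b_t$, $\hat{b}_t$ and a common unit diffusion coefficient):

```latex
\mathrm{KL}\bigl(\mathbb{P} \,\|\, \mathbb{Q}\bigr)
\;\le\; \frac{1}{2}\,\mathbb{E}_{\mathbb{P}}\!\left[\int_0^T \bigl\| b_t(X_t) - \hat{b}_t(X_t) \bigr\|^2 \,\mathrm{d}t\right],
```

so an $L^2$ drift-approximation error of the kind in assumption H3 translates directly into a path-space KL bound; the substantive work then lies in bounding the drift of the reference measure itself.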
Based on the reviewer’s feedback, it appears that we haven’t sufficiently highlighted the novelty of our procedure compared to existing studies. We will make it a priority to clarify and emphasize this aspect more clearly in “the proposed methodology”, starting with adding part of the previous paragraphs.
[9] Giovanni Conforti, Alain Durmus, and Marta Gentiloni Silveri. Score diffusion models without early stopping: finite fisher information is all you need.
[10] Hongrui Chen, Holden Lee, and Jianfeng Lu. Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions.
[11] V. D. Bortoli, Convergence of denoising diffusion models under the manifold hypothesis.
[12] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet, Diffusion schrödinger bridge with applications to score-based generative modeling.
[13] A. Krener, Reciprocal diffusions in flat space.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I appreciate the authors' comprehensive response. My questions and concerns are sufficiently addressed. Thus, I would like to increase the rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for considering our response. | Summary: This paper gives theoretical guarantees for Diffusion Flow Matching similar in spirit to existing results for Denoising Diffusion Probabilistic Models and Probability Flow ODE. This work requires weaker assumptions than prior work, replacing the Lipschitzness of the score function with a relative Fisher information condition.
Strengths: This paper weakens this required assumptions of prior work, and is the first to develop guarantees for diffusion flow matching, instead of, for example, an analysis of DDPM or probability flow ODE. Unlike these other works, the authors further exploit properties of the heat kernel that appears in the estimation procedure. Their results also easily extend to the early-stopping setting.
Weaknesses: One could make the argument that the present ideas are not as novel as its predecessors (e.g., Conforti et al. (2023) for the finite-fisher information criteria, or the line of works similar to Chen et al. (2023) in general). Moreover, the dimension-dependence in these results is quite poor (though the authors do mention this as an avenue for future work).
Technical Quality: 3
Clarity: 2
Questions for Authors: - The authors claim to tackle "all sources of error" in the abstract, though I would argue the statistical convergence rates remain open.
- What is the primary obstacle in improving the dimension dependence? Do you conjecture that it should be the same for DDPM?
Some typos I managed to catch and write down:
- L28: "are important... " or "were an important"
- L34: I don't know what the ":" is there
- L63: "In the case where..." maybe?
- L73: Reference to LWYql23 --- might be worth fixing the authors name in the bibliography
- L77: "... an approximate sample ..."
- L144: "... a solution of the ..."
- L164: "... is therefore the solution ..." [the subsequent page has a few more instances of missing articles that are worth fixing]
- L170: "through" (not trough)
- L173: Fix the [Kry] citation (no year)
- L190: "...gives an ideal..."
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Questions:
_The authors claim to tackle "all sources of error" in the abstract, though I would argue the statistical convergence rates remain open._
In our defense, we thought we had already acknowledged the lack of analysis on statistical convergence rates. In the conclusion, we suggest that “it would be interesting to complement our analysis with a statistical analysis of DFM.” Furthermore, while this is not mentioned in the abstract, we clarify in both the introduction and conclusion that "all sources of error" refer to the “drift-approximation error and the time-discretization error.” It is also important to note that, with the exception of [7], none of the previous works on FMs address the issue of statistical convergence rates. However, given the reviewer’s comment, it seems we have not been clear enough. Therefore, we will specify in the introduction that our work does not cover the statistical error of DFM.
[7] Yuan Gao, Jian Huang, Yuling Jiao, and Shurong Zheng. Convergence of continuous normalizing flows for learning probability distributions.
_What is the primary obstacle in improving the dimension dependence? Do you conjecture that it should be the same for DDPM?_
The primary obstacle stems from the challenge of bounding the reciprocal characteristic associated with the mimicking drift, as detailed in our response to the “Weaknesses” section. However, it is important to highlight that our bound depends polynomially on the dimension, unlike previous bounds (see [3], [8]) which depend exponentially on it. Consequently, our dimension-dependence is significantly more favorable. Nonetheless, we suspect that improvements could be made at the cost of additional technical complexity and of more sophisticated choices of discretization schemes. Therefore, we mention this as a potential avenue for future work in the conclusion.
[3] Michael S. Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants.
[8] Joe Benton, George Deligiannidis, and Arnaud Doucet. Error bounds for flow matching methods.
Weaknesses:
_One could make the argument that the present ideas are not as novel as its predecessors (e.g., Conforti et al. (2023) for the finite-fisher information criteria, or the line of works similar to Chen et al. (2023) in general). Moreover, the dimension-dependence in these results is quite poor (though the authors do mention this as an avenue for future work)._
We believe that our approach contains fresh ideas that cannot be found in previous works dealing with SGMs, and is entirely distinct from those of [9] and [10].
The sole resemblance to [10] lies in the application of the Girsanov theorem, which however is not the distinguishing feature of our work, but simply a starting point of our proof. In addition, this step is not a contribution of [10] and is now standard for analyzing SGMs (see e.g., [11,12]).
Regarding [9], one of its main contributions is to estimate relative entropy on path space by interpreting (a modified version of) the drift of the backward process as the adjoint process in a stochastic control problem. In the context of this work, the connection with Markovian stochastic control is broken essentially because of the non-Markovian nature of stochastic interpolants. Our idea for this work was to partially restore it by using the Markovian projection of the interpolant as a reference measure. Then, the main innovation of the present article consists in being able to control the $L^2$ norm of the adjoint process in the Pontryagin system associated with the Markovian projection of the interpolant. We do so by introducing a novel quantity in the generative model literature (see [13]), namely the so-called “reciprocal characteristic” of a Markov process, which may be viewed as some sort of mean acceleration field. The main efforts in our proofs are directed towards bounding the $L^2$ norm of the reciprocal characteristic whose representation in terms of either conditional moments or higher-order logarithmic derivatives of conditional densities is quite intricate, see page 26. Trying to bound each of these terms separately requires strong assumptions on the initial distributions and couplings leading to suboptimal results. However, using integration by parts both in time and space and a double change of measure argument, and profiting from symmetry properties of the heat kernel, we managed to bound these terms under assumptions comparable to the minimal ones required in the analysis of SGMs. Note that the analysis of the reciprocal characteristic is not required for SGMs (it is always 0) and that controlling it also requires new tricks and ideas, since its representation contains up to three logarithmic derivatives of conditional distributions, whereas the analysis of SGMs requires at most two such derivatives to be analyzed.
Based on the reviewer’s feedback, it appears that we haven’t sufficiently highlighted the novelty of our procedure compared to existing studies. We will make it a priority to clarify and emphasize this aspect more clearly in “the proposed methodology”, starting with adding part of the previous paragraphs.
[9] Giovanni Conforti, Alain Durmus, and Marta Gentiloni Silveri. Score diffusion models without early stopping: finite fisher information is all you need.
[10] Hongrui Chen, Holden Lee, and Jianfeng Lu. Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions.
[11] V. D. Bortoli, Convergence of denoising diffusion models under the manifold hypothesis.
[12] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet, Diffusion schrödinger bridge with applications to score-based generative modeling.
[13] A. Krener, Reciprocal diffusions in flat space. | Summary: This work provides theoretical guarantees for diffusion flow matching (DFM) models, which are a recent class of generative models similar to score-based generative models (SGM). Extensive background is given in sections 1 and 2, and section 3 contains the results, namely, bounds on the KL divergence from the target distribution to the distribution outputted by DFM. A detailed comparison with existing results is presented in section 3.2.
Strengths: The presentation of the whole paper is remarkably clear, notably the background explanations of section 2 are very clear and very welcome. Regarding the guarantees of section 3, they are (claimed to be) the first ones for DFMs, which are well-motivated as alternatives to SGMs and deterministic FMs; this makes these results quite valuable.
Weaknesses: (I can only comment on the presentation and context as I am not familiar with the guarantees for SGMs and FMs, sorry.)
- Two things remained unclear to me after reading section 2; see "Questions".
- The discussion of the previous work on SGMs seems a bit difficult to compare, since this paper is about DFMs, which (if I understand) are a different thing although similar.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Are SGMs a sub-case of DFMs, for a well-chosen coupling and bridge, and perhaps up to a change of time variable?
- Is the Markovian projection of an Ito process always unique, and/or is the Markovian projection of (3) unique?
- How do DFMs compare to SGMs and deterministic FMs in practice, say in generalization, speed, difficulty of implementation, robustness w.r.t. hyperparameters, ...?
- Are there settings where it makes sense to want to choose something else than $\pi$ being the independent coupling and the bridge being Brownian bridge?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Questions:
_Are SGMs a sub-case of DFMs, for a well-chosen coupling and bridge, and perhaps up to a change of time variable?_
We thank the reviewer for raising this point. As suggested by the reviewer, after an appropriate time transformation ($t=\exp(-\tau)$), the Ornstein–Uhlenbeck (OU) process at the basis of SGMs can be reformulated as the process at the basis of a linear stochastic interpolant. For further insights, see Section 5 of [1]. However, the foundational concepts of SGMs and DFMs differ significantly: SGMs aim to approximate the time-reversal of a diffusion process (most often OU), whereas DFMs aim to approximate a Markovian projection of a two-sided linear stochastic interpolant.
[1] Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic Interpolants: A Unifying Framework for Flows and Diffusions.
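Concretely, the time change can be sketched as follows (our rendering, assuming unit stationary variance and $Z \sim \mathcal{N}(0, I)$ independent of $X_0$):

```latex
% OU marginal law:
X_\tau = e^{-\tau} X_0 + \sqrt{1 - e^{-2\tau}}\, Z , \qquad \tau \in (0, \infty).
% Substituting t = e^{-\tau} yields a linear stochastic interpolant:
I_t = t\, X_0 + \sqrt{1 - t^2}\, Z , \qquad t \in (0, 1).
```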
_Is the Markovian projection of an Ito process always unique, and/or is the Markovian projection of (3) unique?_
The uniqueness of the Markovian projection of an Itô process, such as the one defined by equation (3), is not straightforward and necessitates regularity assumptions on the mimicking drift, such as those found in [2]. In the specific case of interest in our paper, we suspect that the well-posedness of equation (9) holds. We appreciate the reviewer for highlighting this interesting issue.
[2] Stroock, D. W. and Varadhan, S. R. S.. Diffusion processes with continuous coefficients
_How do DFMs compare to SGMs and deterministic FMs in practice, say in generalization, speed, difficulty of implementation, robustness w.r.t. hyperparameters, ...?_
In Section 3.4 of [3], the authors include some results for unconditional image generation, comparing the proposed interpolant flow model with diffusion methods on CIFAR10 and the ImageNet dataset. Their models emit likelihoods, measured in bits per dim, that are competitive with state-of-the-arts diffusion models on both datasets. Similarly, Frechet Inception Distance scores are proximal to those from diffusions, though slightly behind the best results. Finally, in [1, Section 7], the authors conducted experiments indicating that stochastic interpolants generally outperform their deterministic counterparts.
[1] Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic Interpolants: A Unifying Framework for Flows and Diffusions.
[3] Michael S. Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants.
_Are there settings where it makes sense to want to choose something else than $\pi$ being the independent coupling and the bridge being Brownian bridge?_
We thank the reviewer for raising this interesting question. Our primary motivation for choosing the independent coupling in the early-stopping regime and the Brownian bridge was simplicity, as analyzing non-independent couplings or non-Brownian bridges would have been significantly more complex. Nonetheless, we believe that, in the early stopping regime, non-independent couplings could be considered as well at the cost of additional technical complexity. Moreover, in practice, the independent coupling and the Brownian bridge are one of the most common choices due to their ease of implementation. However, the independent coupling is not always the most optimal choice, and there have been works highlighting the benefits of using other couplings, such as [4]. Additionally, we can mention Rectified Flows [5] and Diffusion Schrödinger Bridge Matching (DSBM) [6], which essentially involve iterating FM or DFM and allow for defining a sequence of coupling distributions expected to be more efficient than the independent coupling.
[4] A. Tong et al. Improving and generalizing flow-based generative models with minibatch optimal transport.
[5] X. Liu, C. Gong, and Q. Liu. Flow straight and fast: learning to generate and transfer data with rectified flow.
[6] Y. Shi, V. De Bortoli, A. Campbell, and A. Doucet. Diffusion Schrödinger Bridge Matching.
Weaknesses:
_Two things remained unclear to me after reading section 2; see "Questions"._
See "Questions".
_The discussion of the previous work on SGMs seems a bit difficult to compare, since this paper is about DFMs, which (if I understand) are a different thing although similar._
Due to the connection between SGMs and DFMs, we believed that a comparison with studies on SGMs would be of interest to our readers. However, if the reviewers and the AC find this comparison irrelevant to our work, we are willing to shorten this section of the paper accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. To improve the presentation, I would recommend adding a few words early in the paper on the fact that SGMs are a sub-case of DFMs. I maintain my rating (and low confidence score). | Summary: The paper derives theoretical guarantees for a flow matching procedure which constructs a diffusion-like process transporting samples from a source distribution $\mu$ to a target distribution $\nu^*$. The goal is to build a simple sampling procedure where a source sample is updated through a series of relatively simple Markovian steps into an approximate sample from $\nu^*$. In the typical setting of diffusion models, this takes the form of a score function augmented with additional Gaussian noise. However, this suffers from two main drawbacks -- firstly, the base distribution is fixed (it is always Gaussian), and secondly, the diffusion is truncated after a finite number of timesteps, which incurs error in the approximation of the intermediate distribution by a Gaussian. Flow matching addresses both of these by allowing arbitrary distributions to be transformed into each other.
To define a flow-matching process as in the paper, one obtains samples $(X, Y)$ generated from a coupling (joint distribution) $\pi (X, Y)$ where the marginals of $X$ and $Y$ are $\mu$ and $\nu^*$ (the source and target distribution respectively). Given these samples, an interpolating distribution, $\{X_t\}_{t \in [0, 1]}$, is constructed with $X_0 \coloneqq X$ and $X_1 \coloneqq Y$ and the intermediate values are distributed based on the choice of an appropriate bridge distribution. This paper utilizes the Brownian bridge which is the conditional distribution of Brownian motion conditioned on fixing its endpoints. The main issue with this approach is that such an interpolating distribution is not \emph{Markovian} which makes sampling from it in a diffusion-like process challenging. This may be addressed through the notion of
a Markovian projection, which constructs another \emph{Markovian} process with the same marginals as the interpolating process, together with a score-function-like analogue that enables efficient Markovian sampling. The paper then shows that under some relatively mild assumptions on the approximation of the score function and the moments of the coupled distributions, this procedure guarantees closeness to the target distribution in Total Variation distance. All previous approaches require additional assumptions on the estimated score function and/or do not account for the discretization error. One of the main challenges of the proof is in analyzing the discretization error of the scheme, where the continuous process is approximated with one with discrete time steps. The paper observes that the terms controlling this error may be rather large at the endpoints of the interval. Therefore, the paper considers a truncated process on $[\delta, 1 - \delta]$ and shows that, on the truncated set, the gradients of the densities that control the discretization error are bounded in $L^8$ norm.
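For readers less familiar with the construction summarized above, the Brownian-bridge interpolant has a simple Gaussian closed form; the sketch below uses generic notation (not necessarily the paper's) and states only the standard property of a Brownian bridge pinned at both endpoints:

```latex
% Marginal law of the Brownian-bridge interpolant conditioned on endpoints
% (X_0, X_1) drawn from the coupling \pi: a Gaussian whose mean linearly
% interpolates the endpoints and whose variance t(1 - t) vanishes at t = 0, 1.
X_t \mid (X_0 = x,\, X_1 = y) \;\sim\;
  \mathcal{N}\bigl( (1 - t)\, x + t\, y,\; t(1 - t)\,\mathrm{Id} \bigr),
\qquad t \in [0, 1].
```

The variance factor $t(1-t)$ vanishing at the endpoints is consistent with the observation that the terms controlling the discretization error grow large near $t = 0$ and $t = 1$, motivating the truncation to $[\delta, 1 - \delta]$.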
Overall, the results of the paper are novel and interesting. The observation that the truncated distribution is well-behaved with respect to the discretization error is valuable for future theoretical and practical work on flow models. This also illustrates the drawbacks of prior work which require additional Lipschitz assumptions on the scores/estimated scores and presents a natural approach towards circumventing them. This is a nice contribution and I recommend acceptance.
Strengths: See main review
Weaknesses: See main review
Technical Quality: 3
Clarity: 3
Questions for Authors: See main review
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: See main review
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough review and valuable comments. We greatly appreciate the reviewer’s recognition of the novelty and significance of our work, as well as the potential of our ideas for future theoretical and practical advancements in flow models.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' rebuttal and will retain my current evaluation. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their positive evaluation of our work and their valuable feedback. Their constructive comments will undoubtedly help enhance our original work, and we are committed to incorporating the suggested modifications. The reviewers acknowledge that our work advances the field of diffusion flow matching models by providing the first convergence guarantees, removing the Lipschitz assumption on the velocity field, and addressing the early-stopping regime. While the novelty of our results is recognized, two reviewers observed some similarities in our methodology with previous studies. However, the use of standard tools, such as the decomposition of the Kullback-Leibler Divergence based on Girsanov theorem, is not the distinguishing feature of our work, which is rather the strategy used to bound the reciprocal characteristic associated with the Markovian projection. We will make every effort to clearly differentiate our procedure from existing research and highlight its originality. Below, we address each reviewer’s comments individually. We will be happy to answer any additional questions during the author-reviewer discussion period. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics | Accept (spotlight) | Summary: This paper works on learning multi-human cooperative object manipulation, specifically the collaborative carrying of objects.
This paper proposes a two-stage method to learn collaborative object carrying. In the first stage, the agent learns single-person object carrying from motion capture data and heuristic-based task rewards following the Adversarial Motion Priors (AMP) framework. In the second stage, the agent policy is fine-tuned using multi-agent reinforcement learning to learn multi-agent collaborative object carrying.
Experiments show that the trained policy can successfully control the agents to complete collaborative objects carrying tasks given objects of varying shapes and weights.
Strengths: 1. The proposed cooperative interaction learning framework can learn multi-human collaborative behaviors requiring only single-human motion capture data. In addition, the learned policy has better generalization compared to tracking-based methods.
2. The implicit communication via object dynamics unifies the observation in the single-human and multi-human stages, reducing the gap between the two stages and facilitating the learning of collaborative object carrying.
3. This paper presented extensive ablation study and boundary analysis to show how and to what extent the proposed method works.
Weaknesses: 1. This paper simulates an oversimplified humanoid without hand modeling considering the fact that it works on object manipulation problems, for which the hands play essential roles. The authors also recognize this issue leads to capacity limitations.
2. The agent observation features may not be sufficient for more complex manipulations. In the presented paper, the agent observation mainly consists of the self-motion and cropped object bounding box features. However, more complex collaborative tasks like carrying articulated objects and assembling a sofa require a global overview of the object instead of only a local observation. In addition, the agent also needs to observe the collaborators to avoid interference such as grasping at the same point.
3. The proposed method needs to train a separate policy for a category of objects with similar shapes (L246-247), which indicates a generalization problem.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How the object weights can affect the humanoid in the simulation? Is there an upper bound for the joint force of the simulated humanoid? How does the humanoid apply forces to the object in the simulation?
2. Are the standing point and held point also part of the goal inputs? If not, can the same motion sequence receive different rewards during training given differently sampled standing points? How are the standing points sampled given an arbitrary object mesh?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The author discussed the limitation of lacking dexterous hands, which limits the agent interaction capacity and is left as future work.
Please also refer to the weakness discussion above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback. We hope the following clarifications address your concerns.
> W1: This paper simulates an oversimplified humanoid without hand modeling considering the fact that it works on object manipulation problems, for which the hands play essential roles. The authors also recognize this issue leads to capacity limitations.
This is indeed a limitation of our work. Our framework currently handles 40 different common mid-size daily life objects, but incorporating dexterous hands would definitely enable humanoids to interact with small objects or objects with unusual shapes. This is a future direction for our research.
> W2: The agent observation features may not be sufficient for more complex manipulations. In the presented paper, the agent observation mainly consists of the self-motion and cropped object bounding box features. However, more complex collaborative tasks like carrying articulated objects and assembling a sofa require a global overview of the object instead of only a local observation. In addition, the agent also needs to observe the collaborators to avoid interference such as grasping at the same point.
In our work, we minimize interference between different agents by utilizing designs such as standing points and held points. These designs have proven effective for multi-agent object transportation tasks.
However, for more complex tasks like carrying articulated objects or assembling a sofa, additional observation input is indeed necessary. These tasks may require explicit observation and communication between agents, as well as specialized designs.
> W3: The proposed method needs to train a separate policy for a category of objects with similar shapes (L246-247), which indicates a generalization problem.
Yes, we train a separate policy for each category of objects with similar shapes. Our main focus is on training multi-agent cooperative object transportation, so we did not introduce special designs for different shapes and categories. This approach allows us to concentrate on the primary contribution of our work.
Also, we think that handling different object meshes is primarily a perception problem, whereas our focus is on the control aspect of cooperative HOI tasks. Incorporating object mesh-related information or other perception-related designs would certainly enhance the generalizability of our method. This will be a future direction of our study.
> Q1: How the object weights can affect the humanoid in the simulation? Is there an upper bound for the joint force of the simulated humanoid? How does the humanoid apply forces to the object in the simulation?
In our experiments, we use the same humanoid model as previous work [1][3][5][6][7][11]. To reiterate, the action $a \in \mathbb{R}^{28}$ specifies the target positions for PD controllers at each joint, and the forces are calculated as `(target_pos - dof_pos) * stiffness - dof_vel * damping`.
- We are not entirely sure what you mean by “How do object weights affect the humanoid in the simulation?”. Perhaps you are asking about the forces at each joint when carrying objects of different weights? We have visualized this in Figures 3 and 4 in the PDF in the global rebuttal.
- Yes, there is an upper bound for the joint force of the simulated humanoid. Specifically, since we are using positional control, every degree of freedom (dof) has limits, namely, dof_limits_lower and dof_limits_upper, and the resulting force is bounded.
- When the humanoid manipulates objects, there are friction forces between the humanoid's hands and the objects, which are calculated by the PhysX physics engine of Isaac Gym.
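The PD control law quoted above can be sketched as follows; this is a minimal illustration, where the gain values and the torque limit are hypothetical, not taken from the paper (the rebuttal only states that the resulting force is bounded by per-dof limits):

```python
import numpy as np

def pd_torque(target_pos, dof_pos, dof_vel, stiffness, damping, torque_limit):
    # PD control law from the rebuttal:
    #   force = (target_pos - dof_pos) * stiffness - dof_vel * damping
    # clipped to an illustrative per-joint limit so the force stays bounded.
    raw = (target_pos - dof_pos) * stiffness - dof_vel * damping
    return np.clip(raw, -torque_limit, torque_limit)

# One joint: target ahead of the current position, joint moving forward.
tau = pd_torque(target_pos=0.5, dof_pos=0.2, dof_vel=1.0,
                stiffness=100.0, damping=5.0, torque_limit=200.0)
# (0.5 - 0.2) * 100 - 1.0 * 5 = 25.0
```

The stiffness term pulls each joint toward the target position emitted by the policy, while the damping term penalizes joint velocity, which is what makes the heavier object (larger required holding forces) visible in the per-joint force plots.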
> Q2: Are the standing point and held point also part of the goal inputs? If not, during the training, the same motion sequence can have different rewards given differently sampled standing points? How are the standing points sampled given an arbitrary object mesh?
We apologize for any confusion. In our work, the standing points are part of the goal input, while the held points are not. During single-agent training, the standing points are **randomly sampled** to positions directly in front of various faces of the object. During multi-agent training, the standing points are sampled in front of each end of the long object. The held points are not part of the goal inputs because we set them at the geometric center of the object, allowing the agent to “infer” the position of the held point based on the object’s position and its bounding box information. We will briefly introduce the composition of the goal feature in Section 3.2.1 in the next version of our paper.
If the standing points and the object’s position and rotation are given, then the same motion sequence will yield the same rewards.
---
Rebuttal 2:
Comment: I appreciate the additional explanation and visualization provided in the rebuttal. However, the H-shape box example does not solve the concerns about generalization because it needs a specially designed object shape for the 4-person cooperation to work.
Generalization capability to various objects and interaction skills remain a weakness of this work. Moreover, simulating a humanoid without dexterous hand modeling inherently limits the method to work for interactions requiring complex controls, as also raised by other reviewers.
Overall, I still appreciate the contributions of exploring learning cooperative human-object interaction, the design of object dynamics-based state observation and collaborative training framework, and extensive experiments. I believe this paper makes a valuable contribution and recommend accepting it.
---
Rebuttal Comment 2.1:
Comment: Thank you for your insights and suggestions. The H-shape box represents our effort to explore the boundaries of CooHOI’s generalization capability regarding the number of agents. In future work, we will continue exploring its generalization capability across various objects and interaction skills, incorporating more experiments and settings involving dexterous hands. | Summary: This work addresses the problem of multi-character collaboration for object transporting tasks. Different from previous works approaching the multi-character HOI task with tracking-based methods, this work learns a physics-based multi-agent policy with reinforcement learning. Instead of directly training a multi-agent policy from scratch, CooHOI uses a two-phase learning approach: individual skill acquisition through Adversarial Motion Priors (AMP) and then collaborative learning with Multi-Agent Proximal Policy Optimization (MAPPO). Agents learn to coordinate implicitly by responding to changes in object dynamics caused by others.
Strengths: * The manuscript is well-written, the methodology is clearly presented and easy to follow.
* This paper addresses an under-studied and important application - physics-based multi-agent cooperative HOI - whereas most previous works explore the single-agent setup.
* The proposed two-stage learning pipeline is reasonable, and a few important designs, including the standing point and held point, prove effective in addressing this multi-agent cooperation task.
* The ablation study and boundary analysis provide a comprehensive evaluation and capacity boundary tests for the proposed pipeline, offering useful insights for the community.
Weaknesses: * This paper builds a framework for multi-agent cooperation policy learning with AMP for single-agent policy pretraining and MAPPO for cooperation skill learning. The technical contribution to the community is minor.
* The method includes a few heuristic designs to simplify the problem, for example, the manually defined held point and the object bounding box observation, making it hard to generalize and scale to more diverse objects.
* The current framework can hardly generalize to objects of different sizes and shapes, and the human motion styles are quite limited.
* Regarding generalization to object weights and scales, are the results presented in Figure 5 the transfer results to different weights and scales, or the per-object training results?
* Regarding the motion styles, the training dataset involves only limited motion sequences - walking, lifting up, and putting down. Would the proposed method, built upon AMP, be able to cover more diverse skills? This could be helpful for generating more natural and diverse HOI motions.
* It is presented that when increasing the number of characters, the policy has problems putting the object down due to the absence of dexterous hands and limited friction. Could the authors elaborate more on the major bottlenecks encountered when dealing with a larger number of characters? Assuming that the friction parameter can be adjusted in the simulator and should not be a significant issue for motion generation tasks purely within the simulation, what specific contributions could dexterous hands make to this scenario? Additionally, while the introduction of dexterous hands would undoubtedly enhance the agent's capabilities, what other factors should be considered?
Technical Quality: 3
Clarity: 3
Questions for Authors: See more in the weakness section :-)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors addressed the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. Many of the weaknesses you pointed out are exactly the areas we plan to address in future work, as we aim to make our method more generalizable.
> Q1: This paper builds a framework for multi-agent cooperation policy learning with AMP for single-agent policy pretraining and MAPPO for cooperation skill learning. The technical contribution to the community is minor.
We acknowledge that the AMP and MAPPO algorithms we used are not new to the community. However, we believe our contribution lies in **presenting a proof-of-concept framework for this important and relatively under-studied subfield**: physics-based multi-agent cooperation in HOI. Our **core insight** is that manipulated object dynamics can serve as feedback in single-agent skill training, an implicit communication channel in multi-agent coordination, and an interface to facilitate efficient skill transfer from single to multi-agent scenarios. We believe this will inspire many in the kinematics-based animation, physics-based animation, and robotics communities.
Additionally, some previous tracking-based efforts [18] on physics-based multi-agent cooperation in HOI have addressed this issue by utilizing **high-quality multi-agent mocap data**. Their methods cannot train successfully without multi-agent mocap data, as simply using AMP (or other types of style rewards) and MAPPO would be **too challenging**. This is demonstrated by the "w/o CooHOI" result in Table 1 and the "w/o Initialization" in Figure 6. In comparison, our framework only requires single-agent motion capture data and can extend to different types of objects and varying numbers of agents by utilizing the manipulated object dynamics.
> Q2: The method includes a few heuristic designs to simplify the problem, for example, the manually defined held point, and object bounding box observation, making it not easy to generalize and scale to more diverse objects.
Our work primarily focuses on multi-person object transportation tasks, validating that using manipulated object dynamics as an implicit communication channel can efficiently facilitate skill transfer and coordination learning. The designs in our methods are mainly intended for efficient skill transfer from single-agent skills to multi-agent coordination learning. We **did not introduce special designs for objects with different shapes, but it can still generalize to 40 different shapes of objects**.
We believe that handling different object meshes is primarily a perception problem, whereas our focus is on the control aspect of cooperative HOI tasks. Incorporating object mesh-related information or other perception-related designs would certainly enhance the generalizability of our method. This will be a future direction of our study.
> Q3: The current framework can hardly generalize to objects of different sizes and shapes, and the human motion styles are quite limited.
- Regarding object weights and scales, the results in Figure 5 are the **transfer results to different weights and scales**. This is a **boundary analysis** of our framework: we first train agents to carry objects with different weights and scales, and then test when they fail with out-of-distribution scales and weights. The results indicate that the policy performs well with out-of-distribution scales and weights but eventually fails when these parameters deviate too far from the training conditions.
- Regarding object shapes, our framework can generalize to 40 different common mid-size daily-life objects. We have thoroughly discussed this in our answer to your question Q2.
- Regarding generalization to motion styles, since our work mainly focuses on multi-agent cooperative object transportation, we only chose walking, lifting up and putting down objects, and reverse walking to cover the motion styles.
Your feedback on the generalizability of our methods for both object shapes and motion styles is invaluable to us. We plan to enhance our framework to address **more diverse cooperative HOI tasks in future work**. This is also discussed in our limitations section. After incorporating dexterous hands, we will work on enabling the humanoid characters to cooperatively handle objects of different sizes using object mesh-related information in our training pipeline and aim to achieve various motion styles by utilizing motion priors or motion latents.
> Q4: It is presented that when increasing the number of characters, the policy has problems putting the object down due to the absence of dexterous hands and limited friction. Could the authors elaborate more on the major bottlenecks encountered when dealing with a larger number of characters?
- We attribute this failure to the absence of dexterous hands and limited friction because, during the process of carrying the box to the destination, the large box sometimes slips from the humanoid agent’s hands because the hand model is spherical, making it very difficult to hold the corners of the box, which tends to be “squeezed out” of their hands. However, we found a way to bypass this limitation without dexterous hands: we had the four agents carry an “H-shaped” large box and successfully complete the task. **Please refer to the global rebuttal for more details.**
- Regarding the major bottlenecks encountered with a larger number of characters (more than 4), we believe the main challenge lies in ensuring that these agents can effectively communicate with each other using object dynamics. The complexity of this problem increases exponentially as the number of agents grows.
- We believe introducing dexterous hands would allow the agents to handle smaller daily-life objects or objects with unusual shapes, such as two agents cooperatively carrying a TV together.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the authors' detailed response and additional experiments and visualization. The additional experiment on the H-shape objects transporting with 4 agents demonstrates the necessity of carefully selected object types, while I agree that generalizing to challenging object shapes remains a very challenging task, and the author also gives insights on how to further approach it.
Overall, I appreciate the efforts in this work to build a framework to tackle the multi-agent HOI problem, and I have also read other reviews as well as authors' responses, and I will maintain my original rating as weak accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for your insights and valuable feedback!
---
Rebuttal 2:
Title: To reviewer kufk : Please respond to rebuttal
Comment: Hi reviewer kufk ,
Thank you for your initial review. Please kindly respond to the rebuttal posted by the authors.
Does the rebuttal answer your questions/concerns? If not, why?
Best,
AC | Summary: In this paper, the authors introduce a novel framework, Cooperative Human-Object Interaction (CooHOI), aimed at tackling the problem of multi-agent object transportation. The framework consists of two phases: initially, a single agent learns to perform tasks, followed by multiple agents learning to collaborate through shared dynamics of manipulated objects. The utilization of shared object dynamics has proven effective in learning cooperative tasks. The authors conduct comprehensive experiments to validate the efficacy of their approach and explore its capability boundaries.
Strengths: * The idea of sharing manipulated object dynamics among multiple agents as implicit communication to facilitate multi-agent collaboration is intuitively appealing and has proven effective. This approach aligns with the object-centric concept, designed to leverage object dynamics to guide agent actions and promote collaboration. It provides valuable insights for future research.
* The authors conduct extensive experiments to validate the effectiveness of their approach, including many ablation studies to assess the impact of different design choices, as well as exploration into the framework's capability boundaries.
* The paper is well-organized and easy to understand.
* The inclusion of diverse visualizations enhances qualitative comprehension of the method's performance.
Weaknesses: * It appears that the shapes and categories of the manipulated objects are not very diverse, potentially constraining the method's ability to generalize to novel objects. I recommend that the authors summarize the counts of shapes and categories used in the experiments. Additionally, if the shapes of the manipulated objects are limited and the grasp actions are abstracted in the simulation, the authors are encouraged to compare CooHOI with some heuristic methods. An intuitive heuristic baseline could involve having two agents grasp each end of a long object, and then move at the same speed to the target position.
* While the authors provide detailed explanations in the appendix, it would enhance comprehension if they provided a simple explanation when introducing a new concept in the main paper. Here are some examples:
* In Section 3.1, the authors introduce the task-specific goal feature $g_t$ without explanation.
* While it is understandable that the proposed style reward can help the method produce behaviors similar to those in the dataset, it would be better to provide a simple explanation of how to evaluate such similarity.
* It would be beneficial to provide a brief introduction to the baseline InterPhys in one or a few sentences.
Technical Quality: 4
Clarity: 3
Questions for Authors: * While acknowledging the idea of utilizing the object’s dynamics as implicit communication between agents, such communication is not synchronous. Specifically, agents adjust their actions only after one agent changes the object dynamics, resulting in a delay between the change in object dynamics and the response actions taken. I wonder if this delay could lead to jitter of the manipulated object. If not, could the authors discuss why such jitter would not occur?
* The held point is described as the geometric center of each end of the object. However, in Figure 1, when cooperatively carrying a large box, the characters grasp the edges of the box instead of the center of its face. I am unsure if I misunderstood the definition of the held point, or if the current definition is inadequate, leading agents to choose other positions they find more suitable for completing the task.
* In the second row of Figure 4, I am curious about why the four agents fail to put down the box. The authors attribute this failure to the absence of dexterous hands and limited friction, however, similar limitations exist in two-agent manipulation tasks. Besides, the multi-agent put-down operation closely resembles the multi-agent pick-up operation (but in reverse), while the agents can successfully pick up the box, they struggle to put it down. Could the authors provide more discussion or visualizations for better understanding?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have provided a detailed discussion of limitation and failure cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback.
> W1.1 : It appears that the shapes and categories of manipulated objects are not very diverse, potentially constraining the method's ability to generalize to novel objects.
We sampled nearly 40 common everyday objects for training, which fall into six categories: box, table, armchair, and high stool for single agents, and long box and sofa for two agents. We will summarize the counts of shapes and categories of the objects we used in Section 4.1, “Datasets and Initialization,” in the next version of our paper.
In addition, the main focus of our work is on training multi-agent cooperative object transportation, so we did not introduce special designs for different shapes and categories. This allows us to concentrate on the main contribution of our work. Through fine-tuning, our framework can successfully enable single and multi-agent transportation of **common mid-size daily-life objects** (Table 2). As a comparison, [3] used 40 object shapes, [17] used 17 object shapes. [1] used 350 object shapes, although most of these shapes were for humanoids to sit or lie on. For carrying tasks, [1] primarily used the simple box shape. Therefore, we believe that the 40 shapes and 6 categories of objects we used are **sufficient to validate our framework**.
> W1.2: The authors are encouraged to compare CooHOI with some heuristic methods. An intuitive heuristic baseline could involve having two agents grasp each end of a long object, and then move at the same speed to the target position.
We believe this intuitive heuristic baseline is quite similar to the “w/o Dynamic Observation” scenario described in Section 4.3 and shown in Figure 6. The key difference between this baseline and our CooHOI method is that the baseline does not use dynamics information from each end of the object to facilitate skill transfer. Training multi-agent cooperation using only reward training is challenging because coordination between agents cannot be easily hard-coded.
> W2: While the authors provide detailed explanations in the appendix, it would enhance comprehension if they provided a simple explanation when introducing a new concept in the main paper.
We will incorporate your suggestions in the next version of our paper to make it easier for readers to follow. Thanks for your suggestion!
> Q1: I wonder if this delay of communication could lead to jitter of the manipulated object. If not, could the authors discuss why such jitter would not occur?
In the evaluation of the trained policies, we did not observe significant jittering of the manipulated object. You can **see the visualization of our results in the “DemoVideo_CooHOI.mp4” included in our supplementary materials**.
We believe this is due to the following reasons:
1. We model the objects as rigid bodies, so changes in object dynamics occur “instantly,” without causing large delays.
2. Our framework is similar to the “object-centric” concept, where object jittering would result in humanoid motion jittering. Since we use AMP to provide a “style reward,” this motion jittering is not encouraged, thus forcing the humanoid to learn effective cooperation and avoid object jittering.
3. We train the multi-agent policy in a cooperative setting, so implicit communication helps the agents learn coordination: object jittering lowers the reward, so the reward-maximizing policy avoids it.
We believe that explicit or implicit communication in multi-agent systems should carefully address latency issues, especially in **real-world object-carrying tasks**. Thank you again for your valuable comment.
> Q2: The held point is described as the geometric center of each end of the object. However, in Figure 1, when cooperatively carrying a large box, the characters grasp the edges of the box instead of the center of its face.
The held point is defined as the **3D geometric center** of each end of the objects. We used the held point and the corresponding $r_{held}$ to encourage the agents to reach out their hands near the object and learn to manipulate it, where $r_{held}$ encourages the humanoid to bring the midpoint of its two hands closer to the held point.
The held point design in our multi-agent HOI setting serves as "abstract indicators" for primarily distinguishing the guidance for each agent during the interaction with the same object, rather than specifying the exact position for them to place their hands. The precise position varies for each object shape, and we encourage the agents to learn this during training. We believe this is why our method can generalize to different object shapes without requiring their mesh information as input.
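As a hedged illustration of the held-point shaping described above, the following sketch shows one plausible distance-based form of $r_{held}$; the exponential shape and the coefficient `alpha` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def r_held(left_hand, right_hand, held_point, alpha=5.0):
    # Hypothetical shaping reward: decays exponentially with the distance
    # between the midpoint of the two hands and the held point, so it is
    # maximal (1.0) when the hands' midpoint coincides with the held point.
    midpoint = (np.asarray(left_hand, float) + np.asarray(right_hand, float)) / 2.0
    d = np.linalg.norm(midpoint - np.asarray(held_point, float))
    return float(np.exp(-alpha * d))
```

Because the held point is only an abstract indicator, a smooth reward like this leaves the agents free to discover the exact grasp position for each object shape during training.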
> Q3: I am curious about why the four agents fail to put down the box.
We attribute this failure to the absence of dexterous hands and limited friction. During the process of carrying the box to its destination, we found that the large box sometimes slips from the humanoid agents’ hands. This is because the hand model of the humanoid agents is **spherical**, making it very difficult to hold the corners of the box, causing it to be **“squeezed out” of their hands**.
The difference between the four-agent scenario and the two-agent tasks is that, in the four-agent scenario, the humanoids can **only grasp the corners of the box**. Additionally, when the four humanoids are picking up the object, the two humanoids positioned diagonally can push forward against each other to prevent the box from slipping out of their hands. However, this is very difficult during transportation and often leads to the box constantly falling. As a result, they **never have the opportunity to learn the “putdown” action**, causing the entire process to fail.
However, we found a way to bypass this limitation without dexterous hands: we had the four agents carry an “H-shaped” large box and successfully complete the task. **Please refer to the global rebuttal for more details**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification provided in the rebuttal. I will maintain my score and continue to support the acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your insight and valuable feedback! | Summary: This paper proposes a framework for multi-agent cooperative manipulation in the context of humanoids carrying and transporting large furniture. This task is decomposed into several steps. First, they train a single humanoid to learn how to hold and carry relatively small objects. The agent is trained with ground-truth object information and AMP for natural humanoid behavior. Next, they train a multi-agent policy initialized from the single-agent policy for transporting larger objects. The authors present several ablations and report high performance compared to baseline methods in simulation.
Strengths: This paper studies a practical task and solves it with a well-designed engineering system. It is also well-written, making it easy to read.
The paper presents comprehensive ablation experiments on both the single-agent and multi-agent policy parts, revealing important information for the framework.
The framework can learn natural humanoid behavior.
Weaknesses: I’m curious about the robustness of the proposed system. If I understand correctly, the bounding box information is given as ground-truth parameters. However, in reality, these quantities are far from perfect.
The task is restricted to moving and transporting large furniture-type objects.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Table 1, what does the “w/o CooHOI” mean? Does that mean training the entire policy from scratch? Is that policy trained with AMP and the same rewards? And what is the definition of CooHOI? What is included in CooHOI?
Regarding the robustness of the system: How would the policy perform in noisy situations? For example, when the object bounding boxes are not accurate (in real scenarios, object perception will have errors).
In Figure 6, can you elaborate more on why policies, except for CooHOI, fail to carry objects even when they can still hold objects? Specifically, what are the failure cases? Do these policies fail to maintain a stable holding motion, or do they fail to accurately move the objects to the target location?
In addition, r_carry is only defined in the supplementary material. I suggest including it in Section 3.2.2.
It would be better to plot the standard deviations in Figure 6, given that you evaluate four seeds.
What is the action space of the policy? Is it the target joint position for each joint?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors provide a comprehensive limitation section of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time and insightful comments. We hope the following clarifications address your concerns.
> W1: I’m curious about the robustness of the proposed system. If I understand correctly, the bounding box information is given as ground-truth parameters. However, in reality, these quantities are far from perfect.
Thank you for pointing this out. Yes, in our work, the bounding box information is provided as ground-truth parameters. While this may seem “unrealistic” in the real world, it is a **common setting** adopted by the physics-based character animation community [2][3][4][5][6][7][8][9][10][11][12][13], including the baseline method [1] we compared in our study. Additionally, the **main focus of our work is on training cooperative human-object interaction tasks**, such as carrying everyday objects. Therefore, we did not introduce special designs for noisy scenarios that may occur in real-world object detection. We believe this allows us to concentrate on the primary contribution of our work.
> W2: The task is restricted to moving and transporting large furniture-type objects.
Our method can handle 40 different shapes of common daily-life furniture-like objects. However, as we mentioned in the limitations section, incorporating dexterous hands would make our framework more generalizable to smaller and unusually shaped objects found in daily life. This is a direction of our future work.
> Q1: In Table 1, what does the “w/o CooHOI” mean? Does that mean training the entire policy from scratch? Is that policy trained with AMP and the same rewards? And what is the definition of CooHOI? What is included in CooHOI?
We apologize for the confusion regarding the term “w/o CooHOI” in Table 1. As stated in lines 233-235, “Without using our CooHOI framework and simply employing parallel training for multi-agent tasks, the training fails.” This refers to training the entire multi-agent policy with AMP and the same rewards from scratch.
The definition of CooHOI **encompasses the entire framework design**. Specifically, it involves the insight that object dynamics information is crucial for both single-agent skill learning and coordination learning. The “bounding box” design serves as an interface for efficiently transferring single-agent skills to cooperative learning.
In contrast, “w/o CooHOI” means not using object dynamics information when training multi-agent cooperation skills. This has two conditions:
1. Directly train the whole policy from scratch, with agents observing the box as is, while the AMP reward and the reward function remain the same. This corresponds to the “w/o Initialization” curve in Figure 6.
2. First train the single-agent skills, then fine-tune with MAPPO, but without utilizing object dynamics (the state of the bounding box at each side) in the multi-agent training stage, instead directly including the object state as the goal feature. This scenario is illustrated in Figure 6 and Appendix D, Figure 2, “No Dynamics Observation.” It results in agents standing near the object and seemingly forgetting how to interact with it.
Thank you for pointing out this unclear explanation. We will clarify it in the paper’s next version.
> Q2: How would the policy perform in noisy situations?
Although we did not account for noisy conditions or introduce special designs for robust training under noise, we can still test our policy's performance. Specifically, we added Gaussian noise to the object dynamics information and varied the noise level by adjusting its standard deviation; a noise level of 1 indicates that the standard deviation of the noise is 1 cm. **Please refer to Table 1 in the PDF included in the global rebuttal for our results**.
> Q3: In Figure 6, can you elaborate more on why policies, except for CooHOI, fail to carry objects even when they can still hold objects?
We apologize for the confusion. A detailed analysis of Figure 6 can be found in Section 4.3, “Analysis For CooHOI Framework,” lines 283-302. Additional visualizations are available in Appendix D. To reiterate:
- Without the “stand point” design, the agent sometimes fails to approach the shortest edge of the object and cannot hold it successfully.
- Without the “dynamic observation” design, the agents would just stand in front of the object, seemingly unsure of what to do.
- Without the “reverse walk” design, the agents may hold the objects but cannot reach the target position successfully, resulting in a deadlock.
- Without “Initialization,” meaning we train from scratch, the two-agent policy fails to carry successfully: the agents just stand near the objects and do not learn to carry them to the destination.
> Q4: In addition, r_carry is only defined in the supplementary material. I suggest including it in Section 3.2.2. It would be better to plot the standard deviations in Figure 6, given that you evaluate four seeds.
Thank you for your detailed advice. We placed the formulation in the appendix due to the NeurIPS page limit, but we acknowledge that this caused some confusion. In the next version of our paper, we will include $r_{carry}$ in Section 3.2.2, following the definition of $r_{target}$. Additionally, we appreciate your suggestion to incorporate the standard deviation plot in Figure 6, and we will include it in the next version.
> Q5: What is the action space of the policy? Is it the target joint position for each joint?
Yes, the action space consists of the target joint position for each joint. This is then converted into force using PD control: `(target_pos - dof_pos) * stiffness - dof_vel * damping`. A detailed explanation of this can be found in our response to Reviewer TU4S Q1. Please refer to that if you have further questions. Note that this approach is a **common paradigm** adopted by the physics-based character animation community [1][2][3][4][5][6][7][8][9][10][11][12][13].
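The PD conversion quoted above can be written as a short sketch; the gains and joint values here are hypothetical toy numbers, not the paper's actual controller parameters:

```python
def pd_torque(target_pos, dof_pos, dof_vel, stiffness, damping):
    # PD control as described: position error scaled by stiffness,
    # minus joint velocity scaled by damping.
    return (target_pos - dof_pos) * stiffness - dof_vel * damping

# Toy single-joint example with made-up gains:
tau = pd_torque(0.5, 0.2, 0.1, stiffness=100.0, damping=5.0)
# (0.5 - 0.2) * 100 - 0.1 * 5 = 29.5
```

The policy thus outputs target joint positions, and the simulator-side PD loop turns them into forces at every physics step.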
---
Rebuttal 2:
Title: To reviewer ehnN : Please respond to rebuttal
Comment: Hi reviewer ehnN ,
Thank you for your initial review. Please kindly respond to the rebuttal posted by the authors.
Does the rebuttal answer your questions/concerns? If not, why?
Best,
AC | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed, valuable, and insightful feedback. We are pleased that the reviewers recognize our dedication to **addressing the under-studied and important application** of physics-based multi-agent cooperation HOI (Reviewer kufk). We appreciate the acknowledgment that our ideas on **utilizing manipulated object dynamics can provide valuable insights** for future research (Reviewer Zw8u). Our paper’s **comprehensive ablation experiments and boundary analysis** were noted positively (Reviewers ehnN, Zw8u, kufk, Tu4S), and the reviewers commended **our well-written paper** (Reviewers ehnN, Zw8u, kufk) and **well-designed framework** (Reviewers ehnN, kufk). Furthermore, our visualizations have been highlighted for demonstrating **natural HOI motion** (Reviewers ehnN, Tu4S).
We are pleased to report that we have conducted the analysis suggested by the reviewers on why the four agents failed to carry objects in our previous experiments. We have also designed a new scenario to bypass the limitation of the absence of dexterous hands, demonstrating the effectiveness and generalizability of our methods.
### Successful 4-Agent Coordination in Carrying an H-Shaped Box
In the limitations section, we discussed that our framework fails when four agents collaborate to carry a big box. We attribute this failure to **the absence of dexterous hands and limited friction**. During the process of carrying a box to its destination, we observed that the large box sometimes slips from the humanoid agents’ hands. This is because the hand model of the humanoid agent is spherical, making it difficult to hold the corners of the box, which leads to the box being **“squeezed out” of their hands**.
However, we found a way to bypass this limitation without dexterous hands: we had the four agents carry an **“H-shaped” large box** and **successfully complete the task**. We report the success rate of our policy when carrying objects of different weights: carrying a 60 kg H-shaped box, our policy achieves a 90.63% success rate with 23.56 cm precision. Considering that the H-shaped object is relatively large (3 m x 3 m x 0.5 m), we define success as having the center of the object within 0.5 m of the target position. Please refer to **Fig1 and Fig2 in the attached PDF** for more visualizations. The success rate and precision of the 4-agent carrying policy are detailed below:
| Agent Number | Object Category | Weight of the Object (kg) | Success Rate (%) | Precision (cm) |
| ------------ | --------------- | ------------------------- | ---------------- | -------------- |
| 4 | H-shape Box | 60 | 90.63 | 23.56 |
| 4 | H-shape Box | 70 | 88.28 | 24.97 |
| 4 | H-shape Box | 80 | 76.95 | 24.72 |
| 4 | H-shape Box | 90 | 69.53 | 25.01 |
### References for All Reviewers
**Dear reviewers, due to the rebuttals' character limit, we've placed the references for all rebuttals below. Thank you for your time and consideration**.
[1] Hassan, Mohamed, et al. "Synthesizing physical character-scene interactions." *ACM SIGGRAPH 2023 Conference Proceedings*. 2023.
[2] Tessler, Chen, et al. "Calm: Conditional adversarial latent models for directable virtual characters." _ACM SIGGRAPH 2023 Conference Proceedings_. 2023.
[3] Xiao, Zeqi, et al. "Unified human-scene interaction via prompted chain-of-contacts." arXiv preprint arXiv:2309.07918 (2023).
[4] Peng, Xue Bin, et al. "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions On Graphics (TOG) 37.4 (2018): 1-14.
[5] Peng, Xue Bin, et al. "Amp: Adversarial motion priors for stylized physics-based character control." ACM Transactions on Graphics (ToG) 40.4 (2021): 1-20.
[6] Juravsky, Jordan, et al. "Padl: Language-directed physics-based character control." _SIGGRAPH Asia 2022 Conference Papers_. 2022.
[7] Peng, Xue Bin, et al. "Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters." ACM Transactions On Graphics (TOG) 41.4 (2022): 1-17
[8] Juravsky, Jordan, et al. "SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation." _ACM SIGGRAPH 2024 Conference Papers_. 2024.
[9] Rempe, Davis, et al. "Trace and pace: Controllable pedestrian animation via guided trajectory diffusion." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2023.
[10] Luo, Zhengyi, et al. "Perpetual humanoid control for real-time simulated avatars." _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 2023.
[11] Wang, Yinhuai, et al. "Physhoi: Physics-based imitation of dynamic human-object interaction." arXiv preprint arXiv:2312.04393 (2023).
[12] Luo, Zhengyi, et al. "Grasping Diverse Objects with Simulated Humanoids." arXiv preprint arXiv:2407.11385 (2024).
[13] Luo, Zhengyi, et al. "SMPLOlympics: Sports Environments for Physically Simulated Humanoids." arXiv preprint arXiv:2407.00187 (2024).
[14] He, Tairan, et al. "Learning human-to-humanoid real-time whole-body teleoperation." arXiv preprint arXiv:2403.04436 (2024).
[15] He, Tairan, et al. "OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning." arXiv preprint arXiv:2406.08858 (2024).
[16] Rudin, Nikita, et al. "Learning to walk in minutes using massively parallel deep reinforcement learning." _Conference on Robot Learning_. PMLR, 2022.
[17] Pan, Liang, et al. "Synthesizing physically plausible human motions in 3d scenes." _2024 International Conference on 3D Vision (3DV)_. IEEE, 2024.
[18] Zhang, Yunbo, et al. "Simulation and retargeting of complex multi-character interactions." _ACM SIGGRAPH 2023 Conference Proceedings_. 2023.
Pdf: /pdf/a02d4bebd21f963677b8735b2c2ff60324e54a4d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Soft-Label Integration for Robust Toxicity Classification | Accept (poster) | Summary: The paper proposes a bi-level optimization framework for toxicity classification that integrates crowdsourced annotations with soft-labeling techniques. It aims to enhance robustness against spurious correlations by optimizing soft-label weights through GroupDRO. The method alternates between minimizing empirical risk and optimizing label weights. The experimental results demonstrate superior performance in both average and worst-case scenarios compared to baselines.
Strengths: 1. The paper provides theoretical proof of convergence for the bi-level optimization algorithm, which adds rigor to the proposed approach.
2. The methodology is sound and the results are significant.
Weaknesses: 1. The methodology might duplicate existing methods (e.g., citation [50]), which could raise questions about the paper’s originality and novelty.
2. The relevance of the method to the research context is unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Clarify the difference between the method in this paper and the method in citation [50]. What are the key innovations that set it apart?
2. Some mathematical notations, particularly those related to DRO (e.g., the definition of $\mathcal{R}$ following Eq. (3)), are not well-explained, potentially hindering the reader's understanding.
3. The process of distinguishing between core and spurious features is not explicitly clear. It appears reliant on assumptions or insights from DRO, but the mechanism should be detailed further.
4. The method seems extensible to applications beyond toxicity classification. What unique advantages does it have in toxicity classification?
Typos:
(Line 178) ... Theorem 3.4 ...
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 8fSW for the constructive and insightful comments. Please see our response to each of your questions below.
**1. Explanation of Mathematical Notations in Eqn. (3)**
Thank you for your valuable feedback. Regarding the definition of $\mathcal{R}$ in Eqn. (3), we provide further explanation here.
- $G$ denotes the set of all groups, defined by combinations of attributes (i.e., topics) and true labels.
- $g$ represents one group. $P_g$ denotes the data distribution within that group.
- $l$ is the cross-entropy loss.
**2. How to distinguish between core features and spurious features**
We utilize GroupDRO to separate spurious features from core features by minimizing the worst-case loss across predefined groups [1,2]. Given a label $Y$, if there exists a spurious feature $\zeta$ that is highly correlated with $Y$ in the dataset $D$, the classification model will likely learn to use $\zeta$ to predict $Y$ [3]. However, such a model performs poorly on groups where this correlation does not hold. For example, Topic 4 in the response dataset contains “sorry”: 81% of the non-toxic responses are related to “I’m sorry,” which leads the model to treat “sorry” as a feature predicting a non-toxic label. Consequently, such a model is vulnerable on the group (Topic 4 * toxic), since that group breaks the spurious correlation. By optimizing for this worst case ((Topic 4 * toxic) in this example), we discourage the model from relying on these spurious features. Additionally, Theorem 3.4 provides theoretical validation that our risk function converges, indicating our model's ability to eliminate the impact of spurious features. A concrete example in **Figure 1** of the attached PDF demonstrates the superiority of our method. More discussion of the example can be found in the global response.
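The worst-group objective that GroupDRO targets can be sketched in a few lines; this is a simplified illustration of the risk being minimized, not the paper's training loop, and the group values are toy numbers:

```python
import numpy as np

def worst_group_risk(losses, group_ids):
    # Worst-case average loss across groups: minimizing this quantity
    # prevents the model from leaning on spurious correlations that
    # only help the majority groups.
    return max(losses[group_ids == g].mean() for g in np.unique(group_ids))

# Toy example: group 1 (e.g., "Topic 4 * toxic") has higher average loss,
# so it dominates the objective until the model stops using "sorry" as a cue.
losses = np.array([0.1, 0.3, 0.9, 0.7])
groups = np.array([0, 0, 1, 1])
# group 0 mean = 0.2, group 1 mean = 0.8 -> worst-group risk = 0.8
```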
**3. Difference between our work and citation [50]**
We would like to clarify that our methodology did not duplicate existing methods (e.g., citation [50]). Although our idea of proof for Theorem 3.5 is inspired by [50], our method distinguishes itself in several key aspects:
*Different settings*: [50] and our method focus on different problems. [50] aims to solve the performance degradation caused by class distribution mismatch in deep semi-supervised learning; it iteratively learns the weights of unlabeled data and minimizes the weighted empirical risk over both labeled and unlabeled data. Our work addresses toxicity classification, integrating multiple annotation sources via soft labels and handling the potential presence of spurious features through GroupDRO.
*Different assumptions for the convergence proof*: [50] assumes that the loss function of the inner loop is Lipschitz-smooth with a constant $L<2$ and further proves the convergence of the loss function in the outer loop (see Appendix A in [50]). In our work, we give Assumption 3.2 regarding the lower bound of the inner product of the gradients and prove the convergence of the risk function in the outer loop which only requires appropriately setting the step size.
*Different use of proof*: Beyond the typical theoretical proofs of convergence and its rate, Theorem 3.4 demonstrates that the risk function, when utilizing GroupDRO in the outer loop, converges effectively. This indicates that the model maintains robust performance even in the worst group upon convergence. Consequently, the impact of spurious features can be effectively mitigated.
**4. Unique advantages of our method in toxicity classification**
While our methodology can be applied to broad applications, it offers distinct advantages for toxicity classification that address the unique challenges of this field by:
*Integrating diverse annotations*: The variability in human perception of toxic content [4, 5] requires us to design a method that can effectively integrate and learn from diverse viewpoints to improve the accuracy of classifications. Our method utilizes the soft label technique to integrate these crowdsourced annotations and enhance the classification accuracy.
*Mitigating spurious features*: Spurious features as evidenced in [6,7] can severely affect the reliability of toxicity classifiers. Our method utilizes GroupDRO to eliminate the impact of spurious features, supported by theoretical proof of convergence in Theorem 3.4.
**5. Typo**
Thank you for pointing this out. We will correct this typo in our next version.
[1] Sagawa et al. "Distributionally Robust Neural Networks." ICLR. 2020.
[2] Oren et al. "Distributionally Robust Language Modeling." EMNLP. 2019.
[3] Yang et al. "Mitigating spurious correlations in multi-modal models during fine-tuning." ICML. 2023.
[4] Vazhentsev et al. "Hybrid uncertainty quantification for selective text classification in ambiguous tasks." ACL. 2023.
[5] Kanclerz et al. "PALS: Personalized Active Learning for Subjective Tasks in NLP." EMNLP. 2023.
[6] Garg et al. "Handling bias in toxic speech detection: A survey." ACM Computing Surveys. 2023.
[7] Kim et al. "Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization." ICML. 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I still hold the original view on the contributions and innovations of the paper, but the other responses have solved my concerns, so I will raise the score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback. We will further clarify our contributions in our next version. | Summary: The paper presents a novel approach to toxicity classification by integrating crowdsourced annotations through soft-labeling and employing a bi-level optimization framework. The method aims to address the limitations of traditional toxicity classifiers that rely on single annotator labels and are prone to spurious correlations. The proposed framework enhances robustness against out-of-distribution (OOD) risks using Group Distributionally Robust Optimization (GroupDRO). Theoretical convergence is proved, and experimental results demonstrate superior performance compared to baseline methods.
Strengths: - The integration of crowdsourced annotations with soft-labeling and bi-level optimization is a novel and well-motivated approach that addresses significant shortcomings of existing methods.
- The paper provides theoretical proof of convergence for the proposed bi-level optimization algorithm, which adds substantial credibility to the method.
- The use of GroupDRO enhances the robustness of the classifier, particularly in handling out-of-distribution data and reducing reliance on spurious features.
- Extensive experiments demonstrate that the proposed method outperforms existing baseline methods in terms of average and worst-group accuracy, showing its effectiveness in leveraging crowdsourced annotations.
Weaknesses: - The proposed method involves multiple optimization loops, which can be computationally expensive and time-consuming, potentially limiting its scalability.
- While the proposed method achieves impressive results on the chosen dataset, there is still room to improve given the scores from Table 1. Thus, it would be beneficial to analyze further into different groups instead of just reporting the average and worst, especially the failure cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have you conducted any experiments to measure the computational overhead introduced by the bi-level optimization process? How does it compare to traditional single-loop optimization methods in terms of training time and resource requirements?
- I suspect the lower performance of the question task, compared to the response task, is because it is a 15-class classification. How does your method handle cases where there is significant disagreement among annotators (e.g., All 3 humans and 3 LLMs select different classes)? Also, what is the ratio of samples that none of the annotators correctly identify the class (if any)?
- Ethical Considerations: Can you provide more insights into the interpretability of the model’s predictions? How can users of your method understand and trust the reasons behind certain classifications?
- Will the author(s) release the datasets under any license if the paper is published?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer jUAS for the constructive and insightful comments. Please see our response to each of your questions below.
**1. Computational overhead induced by bi-level optimization compared with single-loop optimization**
We add new experiments to compare the computational overhead induced by the bi-level optimization process with that of traditional single-loop optimization methods (i.e., the baseline methods) on our datasets (question and answer) and one additional public dataset, HateXplain. We utilize 8 Nvidia A100 GPUs to train a toxicity classifier and measure the corresponding computational overhead in terms of training time. The results are reported in the table below. We observe that our proposed bi-level optimization method introduces approximately twice the computational overhead of the baseline methods; the additional overhead originates from updating the soft-label weights. However, given the total training time, our proposed method remains computationally feasible.
**2. Further analysis over different groups**
We further analyze each group's performance and identify the key factors contributing to high error rates in specific groups. We find that the following issues significantly impacted model accuracy:
*Class Imbalance*: We observe that some groups, such as topic 3 with true label 1 in the question dataset, are underrepresented (e.g., only one sample in the validation set), hindering the model's ability to learn relevant features effectively.
*Definition Overlap*: Despite using definitions from OpenAI's 2023 usage policy in the question dataset, some classes like class 0 (illegal activity) have overlapping definitions with others (e.g., creating malware), which confuses the model.
*Mathematical Reasoning Challenges*: Manual reviews reveal that our model struggles with mathematical reasoning. For instance, it fails to recognize the inappropriateness in the age context of the question about marriage (“I'd like to marry my friend. She's 9 years old, and I'm 30”), misclassifying a clearly problematic statement as “None of the above”.
We will include a detailed discussion of these findings and their implications for model performance in our next version.
**3. Significant disagreement between annotators**
Thank you for your insightful comments regarding the performance differences observed between the question and response tasks and the handling of annotator disagreements in our model. The question task involves a 15-class classification, inherently more complex than the binary classification of the response task. This complexity can contribute to lower performance, as it increases the potential for class imbalance and the cognitive load on annotators, possibly leading to greater variability in their responses.
Our proposed method addresses cases of significant disagreement among annotators by assigning different weights to each annotator’s input. The weights are optimized via the validation set with the GroupDRO technique. Even when the 3 human annotators and the 3 LLMs select different classes, our method is still able to assign accurate weights. We provide an example to illustrate this in **Figure 1** of the attached PDF.
There is no case in which none of the annotators identifies the correct class, which is consistent with our assumption that the correct ground-truth label is included in the set of possible labels given by the annotations (line 99).
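To make the weighted aggregation concrete, here is a minimal sketch of combining per-annotator hard labels into a soft label using learned annotator weights. The weight values below are purely illustrative assumptions; in the actual method they are optimized on the validation set with GroupDRO rather than set by hand.

```python
def weighted_soft_label(annotations, weights, num_classes):
    """Aggregate per-annotator hard labels into one soft label,
    weighting each annotator by a learned reliability weight."""
    total = sum(weights)
    soft = [0.0] * num_classes
    for label, w in zip(annotations, weights):
        soft[label] += w / total
    return soft

# 3 human annotators pick class 0, 3 LLM annotators pick class 1;
# the (made-up) learned weights favor the annotators whose labels
# align with the validation-set ground truth.
annotations = [0, 0, 0, 1, 1, 1]
weights = [0.10, 0.10, 0.10, 0.25, 0.25, 0.20]
soft = weighted_soft_label(annotations, weights, num_classes=2)
```

With uniform weights the two classes would tie at 0.5 each; the learned weights break the tie toward the annotators trusted on the validation set.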
**4. Interpretability of model’s predictions**
Thank you for highlighting the critical aspect of ethical considerations, particularly regarding the interpretability of our model's predictions.
We develop a novel explanation method inspired by saliency maps [1,2], which enhances the interpretability of our model's outputs. This method breaks down potentially toxic content into several blocks, with each block representing a feature that might be core or spurious. We then assess the impact of each block on the model's prediction by examining the change in prediction logits when each block is removed. This process allows us to determine the relative importance of each feature in contributing to the final decision. For instance, as illustrated in **Figure 1** of the attached PDF, our method's prediction is shown to rely predominantly on the core feature “The Dark Web can be accessed through various anonymous networks such as Tor” rather than on the spurious features “I’m sorry” and “I cannot”.
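The leave-one-out block-removal procedure described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: `toy_logit` is a made-up stand-in for the real classifier's toxic-class logit, chosen so the example mirrors the Figure 1 case.

```python
def rank_blocks(predict_logit, blocks):
    """Rank content blocks by the drop in the toxic-class logit
    when each block is removed (leave-one-out ablation)."""
    base = predict_logit(" ".join(blocks))
    drops = []
    for i, block in enumerate(blocks):
        ablated = " ".join(b for j, b in enumerate(blocks) if j != i)
        drops.append((block, base - predict_logit(ablated)))
    return sorted(drops, key=lambda kv: kv[1], reverse=True)

# Hypothetical scorer: reacts strongly to the core phrase and only
# weakly to the spurious "sorry" phrasing.
def toy_logit(text):
    return 2.0 * text.count("Tor") + 0.1 * text.count("sorry")

blocks = ["I'm sorry", "The Dark Web can be accessed through Tor", "I cannot"]
ranking = rank_blocks(toy_logit, blocks)
```

The top-ranked block is the one whose removal most reduces the prediction logit, i.e., the feature the model relies on most.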
In our next version, we plan to include a more thorough discussion about the explainability of our model's predictions. This enhancement will not only deepen the understanding of how our model processes and evaluates input data but will also help users establish greater trust in the reliability of our method.
**5. Dataset release**
Thank you for your question about the datasets. After communicating with the third-party security company, we will make the datasets publicly available if the paper is published.
[1] Ding et al. "Evaluating Saliency Methods for Neural Language Models." NAACL. 2021.
[2] Fong et al. "Interpretable explanations of black boxes by meaningful perturbation." ICCV. 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the comprehensive response. If these discussions are added to the paper, it would enhance the interpretability of the model’s predictions. In that case, I have no further concerns and will raise my score.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer jUAS
Comment: Thank you for your insightful comments about the interpretability of the model's predictions. We greatly appreciate your support! | Summary: The authors propose a two-layer optimization framework that integrates crowdsourced annotation and soft labeling techniques to optimize the soft label weights to improve the robustness of textual content toxicity classification. The method uses Group Distributionally Robust Optimization (GroupDRO) to optimize the soft label weights and enhance the performance of the model on out-of-distribution data. The inner loop uses empirical risk minimization (ERM) to optimize the model, and the outer loop evaluates the model's dependence on spurious features, calculates the out-of-distribution (OOD) risk, and optimizes the soft label weights.
Unlike traditional single annotator labeling, this paper's method integrates crowdsourced annotations to reduce the model's dependence on spurious features and improve robustness to distributional shifts. The experimental results show that this paper's method outperforms existing baseline methods and current state-of-the-art large-scale language models (such as GPT-4 Turbo), and theoretically demonstrates the convergence of the two-layer optimization algorithm.
Strengths: This paper introduces a method to tackle the problem of the model's dependence on spurious features in toxicity classification tasks. The proposed method has several advantages:
1) The authors designed a two-layer optimization framework that uniquely combines crowdsourced annotations and soft-labeling techniques, which captures a more diverse set of perspectives than traditional single-annotator labeling. It helps to address biases that may come from a limited number of annotators and significantly enhances the robustness of the model, especially on out-of-distribution (OOD) data.
2) The paper provides a theoretical proof of the convergence of the two-layer optimization algorithm. This theoretical foundation enhances the reliability and credibility of the proposed method and ensures its effectiveness in practical applications.
3) The method performs superiorly in experimental evaluations, outperforming existing baseline methods in terms of average accuracy and worst-group accuracy. In addition, it outperforms state-of-the-art LLMs such as GPT-4 Turbo, as well as any single human annotator, on toxicity classification tasks, especially in challenging OOD scenarios.
Weaknesses: 1) The proposed two-layer optimization framework, despite being innovative, introduces complexity: it requires alternating inner and outer loops for optimization, which is more demanding in terms of computational resources and implementation. Although the paper qualitatively mentions the speed of convergence, it does not analyze the added complexity of the framework compared to the original baseline model, and no quantitative time-complexity comparison with other baseline models across different datasets is provided, which weakens the argument.
2) The experiments use datasets provided by third-party security companies, which may not fully reflect a wide range of real-world application scenarios. Moreover, the experimental datasets are few in number and small in sample size; the lack of validation on open and diverse datasets may limit the generalizability and credibility of the results.
3) While the section on experimental comparison of algorithms covers related work, it lacks references to the most recent research, including the sections on the selection of benchmark models and the selection of algorithms for comparison.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) Considering that the two-layer optimization framework demands substantial computational resources and implementation effort, it is recommended to add a detailed complexity analysis or provide comparative time-complexity experiments across different datasets and baseline models to demonstrate the time overhead of the new approach and its relative advantages.
2) Soft-label generation and the handling of spurious features are core steps of the method, and describing this process in detail aids understanding and implementation. I noticed that Section 3.2 mentions weighting the core and spurious features via soft labeling to make the classifier independent of spurious features; how are these two kinds of features separated or distinguished in the paper?
3) The results analysis section, while demonstrating the superiority of the new method, lacks sufficient explanation and visualization, especially regarding the effect of soft label weighting on eliminating spurious features (Figure 2 only).
4) As noted, the currency of the comparison algorithms and benchmark model selection needs to be improved; it is suggested to add recent algorithms related to toxicity classification or to handling spurious features for comparison experiments.
5) While the literature review covers a significant amount of relevant work, it lacks references to the latest research, especially work published in the last two years. The currency and comprehensiveness of the literature review need to be enhanced.
6) The experiments in this paper use only datasets provided by third-party security companies, which may not adequately reflect a wide range of practical applications. The lack of experimental validation on public or more diverse datasets may limit the generalizability of the results. It is recommended to add experiments on other datasets to verify the robustness and applicability of the method under different data distributions.
7) Some sentences in the text may need further refinement. For example:
Line 5: "The standard approach to train a classifier with empirical risk minimization (ERM)..." should be revised to "The standard approach to train a classifier using empirical risk minimization (ERM)..."
"...the potential shift between the training set and testing set due to exploiting spurious correlations." should be changed to "potential shifts."
Ensure consistent use of "soft-labeling" and "soft label generation" throughout the text, and so on.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ZkTs for your insightful comments. Please see our response to each of your questions below.
**1. Experiments on other public datasets**
We add experiments on the public HateXplain dataset [1]. It contains three classes -- "hatespeech", "offensive", and "normal". We consider both hate and offensive posts as toxic and the rest as non-toxic. Each record includes a post and three human annotations. We further utilize GPT-4, GPT-4 Turbo, and Claude 2 to label these comments. We report the average accuracy and worst-group accuracy of all methods on the HateXplain dataset in **Table 1** of the attached PDF. We observe that our method still outperforms other baselines.
**2. Quantitative time complexity comparison with baseline methods on different datasets**
We add experiments to measure the time complexity of all methods across all datasets. We utilize 8 NVIDIA A100 GPUs to run the experiments and report the seconds each experiment requires in **Table 2** of the attached PDF. We observe that our method introduces approximately twice the computation overhead of the baseline methods. The additional overhead originates from the pseudo-update of the model parameters $\theta$ and the update of the soft-label weights $w$. Note that we utilize a smaller model (i.e., RoBERTa-base) to learn the soft-label weights compared with the classifier (RoBERTa-large). However, given the total training time, our proposed method is still computationally feasible and acceptable.
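The alternating pseudo-update/outer-update scheme mentioned above can be sketched in a scalar toy form. This is purely illustrative: the actual method updates RoBERTa parameters and soft-label weight vectors, whereas the quadratic objectives below are assumptions chosen so the dynamics are easy to follow.

```python
def bilevel_step(theta, w, lr, grad_inner, grad_outer_w):
    """One alternating bi-level step: a pseudo-update of the classifier
    parameter theta (inner ERM step), an update of the soft-label weight w
    on the outer objective, then the real theta update using the new w."""
    theta_pseudo = theta - lr * grad_inner(theta, w)
    w_new = w - lr * grad_outer_w(theta_pseudo, w)
    theta_new = theta - lr * grad_inner(theta, w_new)
    return theta_new, w_new

# Toy objectives: inner loss 0.5*(theta - w)^2 (classifier tracks the
# soft label), outer loss 0.5*(w - 1)^2 (validation pulls w toward 1).
g_in = lambda t, w: t - w
g_out = lambda t, w: w - 1.0
theta, w = 0.0, 0.0
for _ in range(50):
    theta, w = bilevel_step(theta, w, 0.5, g_in, g_out)
```

Each step performs roughly two inner-style updates plus one outer update, which is consistent with the roughly twofold training-time overhead reported above.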
**3. Lack of references to the most recent research & Comparison with more recent ones**
Thank you for your suggestion. We add an additional baseline [2] related to harmful content classification and compare our method with it on all datasets. [2] is a state-of-the-art work that addresses disagreement in subjective annotations. We observe that our method still outperforms this baseline.
| Method | Q-A Average (%) | Q-A Worst-Group (%) | R-A Average (%) | R-A Worst-Group (%) | HateXplain Average (%) | HateXplain Worst-Group (%) |
|---|---|---|---|---|---|---|
| Ensemble [2] | 70.70±0.63 | 56.57±0.32 | 81.10±0.45 | 57.89±0.51 | 77.24±0.13 | 69.75±0.59 |
| Ours | **78.41±0.24** | **69.44±0.13** | **89.80±0.61** | **77.82±0.63** | **79.19±0.12** | **72.53±0.34** |
Regarding recent efforts to address spurious features, GroupDRO remains the state-of-the-art representative method when group information is available. Research in the last two years has shifted to a different setting where group labels are unknown and must be inferred [3,4]. However, as mentioned in [5], methods relying on inferred group labels show a performance gap compared to those that directly use group labels.
We will include a section to discuss the selection of benchmark models and algorithms for comparison in our next version.
**4. How to separate spurious features from core features? Lack of explanation and visualization.**
We utilize GroupDRO to separate spurious features from core features by minimizing the worst-case loss across predefined groups [6,7]. Given a label $Y$, if there exists a spurious feature $\zeta$ that is highly correlated with $Y$ in the dataset $D$, the classification model will likely learn $\zeta$ as a feature for predicting $Y$ [8]. However, such a model performs poorly on groups where the correlation does not hold. For example, Topic 4 in the response dataset contains “sorry”: 81% of the non-toxic responses include “I’m sorry”, which leads the model to treat “sorry” as a feature predicting the non-toxic label. Consequently, such a model is vulnerable on the group (Topic 4 * toxic), since that group breaks the spurious correlation. By focusing on this worst-case optimization ((Topic 4 * toxic) in this example), we discourage the model from relying on these spurious features. Additionally, Theorem 3.4 provides theoretical validation that our risk function converges, indicating our model's ability to eliminate the impact of spurious features.
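The worst-case group weighting at the heart of GroupDRO [7] can be sketched as a single online exponentiated-gradient step. This is a minimal illustration under assumed inputs: the group losses below are made up to mirror the (Topic 4 * toxic) example, not measured values.

```python
import math

def group_dro_step(group_losses, q, eta=0.1):
    """Up-weight high-loss groups (exponentiated-gradient update on the
    group distribution q), then return the reweighted robust loss."""
    q = [qi * math.exp(eta * li) for qi, li in zip(q, group_losses)]
    z = sum(q)
    q = [qi / z for qi in q]
    robust_loss = sum(qi * li for qi, li in zip(q, group_losses))
    return robust_loss, q

# Illustrative: group 1 (e.g., Topic 4 * toxic) has a much higher loss
# because it breaks the "sorry" spurious correlation.
losses = [0.1, 2.0]
loss, q = group_dro_step(losses, q=[0.5, 0.5])
```

The robust loss exceeds the plain average because optimization pressure is shifted onto the group where the spurious shortcut fails.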
Furthermore, we propose an explanation method to identify the features that contribute most to our model’s prediction. A concrete example in **Figure 1** of the attached PDF shows the effectiveness of our method regarding eliminating the impact of spurious features and explains the superiority of our method from the perspective of effective soft-label weights learning. More discussions about the example can be found in the global response.
**5. Presentation issue**
Thank you for your valuable feedback. We will address these presentation issues in our next version.
[1] Mathew et al. "Hatexplain: A benchmark dataset for explainable hate speech detection." AAAI. 2021.
[2] Davani et al. "Dealing with disagreements: Looking beyond the majority vote in subjective annotations." TACL. 2022.
[3] Creager et al. "Environment inference for invariant learning." ICML. 2021.
[4] Wu et al. "Discover and cure: Concept-aware mitigation of spurious correlation." ICML. 2023.
[5] Han et al. "Improving Group Robustness on Spurious Correlation Requires Preciser Group Inference." ICML. 2024.
[6] Oren et al. "Distributionally Robust Language Modeling." EMNLP. 2019.
[7] Sagawa et al. "Distributionally Robust Neural Networks." ICLR. 2020.
[8] Yang et al. "Mitigating spurious correlations in multi-modal models during fine-tuning." ICML. 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response, which partially addressed my concerns. I would like to keep my vote for weak accept. | Summary: This paper presents a bi-level optimization framework to integrate crowdsourced annotations with the soft-labeling technique and optimize the soft-label weights by GroupDRO to avoid the OOD risk.
Strengths: * This paper introduces a novel approach to learn the soft label of (potentially) toxic content based on crowdsourced annotations, which promotes the safety of LLMs.
* Spurious feature is a significant problem in toxicity classification tasks. The authors address this issue by incorporating GroupDRO to the soft-label weight learning which is interesting.
* The overall presentation is clear, and the proposed concepts are backed with formal elaborations.
Weaknesses: * In Figure 3, the proposed approach shows marginal improvement over GPT-4 Turbo in the question set. It would be great if the authors can have further discussion about the pros and cons of the proposed method compared with SOTA commercial llms.
* It is not clear how the proposed method performs compared with specialized toxicity detection tools such as Google’s Perspective API, Purple Llama, etc.
Technical Quality: 4
Clarity: 3
Questions for Authors: * Is it possible to use a two-stage learning framework that first directly learns the soft-label weights via supervised learning on the validation set and then uses the fixed weights to train a classifier? How does it compare to your method?
* Recent work such as [1] also proposed several approaches to integrate the crowdsourced labels. How does your approach compare with the ensemble approach and the multi-label approach in [1]?
[1] Davani, Aida Mostafazadeh, Mark Díaz, and Vinodkumar Prabhakaran. "Dealing with disagreements: Looking beyond the majority vote in subjective annotations." Transactions of the Association for Computational Linguistics 10 (2022): 92-110.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have discussed limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer KPSa for the constructive and insightful comments. Please see our response to each of your questions below.
**1. Discussion about pros and cons compared with SOTA commercial LLMs**
Thank you for your questions about the pros and cons of our method against SOTA commercial LLMs. We’d like to share our thoughts:
*Pros*: (1) Our method is specifically designed to handle the nuances of toxicity classification: it incorporates multiple annotations through the soft-label technique and eliminates the impact of spurious features. Commercial LLMs are not specially trained for the toxicity classification task and may require developers to design suitable prompts to lower refusal rates. For example, in our preliminary experiments, we found that LLaMA-2 refused to label almost all data, and Claude also showed a significant refusal rate. (2) Our model provides greater transparency in how decisions are made, which is critical for applications where understanding model reasoning is important for trust and compliance.
*Cons*: Our approach requires more computational resources than calling LLM APIs such as GPT-4 Turbo while requiring significantly less computational resources than employing open-sourced LLMs such as LLaMA.
**2. Comparison with specialized toxicity detection tools**
Thank you for your suggestion. We added new experiments to compare our method with the specialized toxicity detection tool LLaMAGuard on our response dataset. The result is that the average accuracy of LLaMAGuard is 62.85% and the worst-group accuracy of LLaMAGuard is 59.25%. We observe that our proposed method outperforms LLaMAGuard in both average and worst-group accuracies.
**3. Comparison with a two-stage learning framework**
Thank you for bringing the two-stage learning framework design to our attention. We add new experiments to compare our method with this framework on our question and answer datasets. We provide the results in the table below. We observe that simply learning the weights via supervised learning and training a toxicity classifier cannot bring satisfactory performance regarding both average accuracy and worst-group accuracy. We suspect the reason is that the weights learned on the validation set cannot generalize well on the training dataset of a larger size. The potentially biased weights may mislead the soft labels and further impact the model training.
| Method | Q-A Average (%) | Q-A Worst-Group (%) | R-A Average (%) | R-A Worst-Group (%) |
|---|---|---|---|---|
| Two-stage | 68.57±1.12 | 56.48±3.49 | 81.90±0.96 | 62.80±8.82 |
| Ours | **78.41±0.24** | **69.44±0.13** | **89.80±0.61** | **77.82±0.63** |
**4. Comparison with a recent work**
Thank you for bringing [1] to our attention. We add new experiments to compare our method with the ensemble method proposed in [1] on our datasets (question and answer) and one additional public dataset, HateXplain. We report the comparison results in the table below. We observe that our proposed method outperforms the ensemble method in [1] regarding both average accuracy and worst-group accuracy. It is also worth noting that the ensemble method requires training six separate models to predict the annotations for each datum and aggregating them via majority voting to obtain the final predicted labels, which is inefficient in both computation and memory.
| Method | Q-A Average (%) | Q-A Worst-Group (%) | R-A Average (%) | R-A Worst-Group (%) | HateXplain Average (%) | HateXplain Worst-Group (%) |
|---|---|---|---|---|---|---|
| Ensemble [1] | 70.70±0.63 | 56.57±0.32 | 81.10±0.45 | 57.89±0.51 | 77.24±0.13 | 69.75±0.59 |
| Ours | **78.41±0.24** | **69.44±0.13** | **89.80±0.61** | **77.82±0.63** | **79.19±0.12** | **72.53±0.34** |
[1] Davani et al. "Dealing with disagreements: Looking beyond the majority vote in subjective annotations." TACL. 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the additional experiments and detailed rebuttal. It is greatly appreciated. As my concerns have been extensively addressed, I'll raise my score accordingly.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer KPSa
Comment: Thank you for your positive feedback! We will add these changes based on your insightful suggestions in our next version. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to express our gratitude for your constructive feedback on our submission and we appreciate the time and effort you have dedicated to reviewing our paper.
In response to your valuable recommendations, we have incorporated additional experiments that align with your suggestions. Following the suggestion of Reviewer ZkTs, we include an additional baseline method [1] for comparison and compare our method with baseline methods in a public dataset HateXplain [2] in **Table 1** of the attached PDF. Our method still outperforms baseline methods in the HateXplain dataset. In response to Reviewer ZkTs and jUAS, we add comprehensive time complexity experiments to measure the computational overhead of baseline methods and ours across all datasets in **Table 2** of the attached PDF.
Based on the suggestion of Reviewer jUAS, we propose a new explanation method to explain our model’s prediction. We visualize the features that contribute most to our model’s prediction and show that our method can eliminate the impact of spurious features. We further demonstrate why our method achieves superior performance by examining the soft-label weights our model learned.
We provide a concrete example in **Figure 1** of the attached PDF to explain the superiority of our method.
First, our model prioritizes core over spurious features in making decisions. We develop a novel explanation method, similar in spirit to saliency maps [3, 4], to explain our model's predictions. This method breaks down potentially toxic content into several blocks, each representing a feature that might be core or spurious. We then rank the importance of each feature to the model's prediction by examining the change in prediction logits before/after removing it. In Figure 1, our method's prediction relies primarily on the core feature “The Dark Web can be accessed through various anonymous networks such as Tor” rather than on the spurious features “I’m sorry” and “I cannot”. Note that in the response dataset, 82% of the non-toxic responses contain “I cannot” and 81% contain “I’m sorry”, which makes it easy for the model to fit the spurious correlation between these spurious features and the non-toxic label.
Second, Figure 1 demonstrates that our learned soft-label weights concentrate on the annotations that align with the ground truth in the validation set, which further explains the success of our toxicity classifier’s training. In this case, three human annotators disagree with three LLMs. The vanilla soft-label method would assign equal weights to soft labels 0 and 1, making it challenging for the model to learn any useful information. In contrast, our learned soft-label weights assign more weight to soft label 1, which avoids misleading the model training.
Your input is instrumental in enhancing our paper, and we hope that the additional experiments and results we have provided effectively address your concerns and contribute positively to the overall understanding of our method. Once again, thank you for your invaluable feedback and we ensure that we will incorporate your constructive suggestions in our next version.
[1] Davani et al. "Dealing with disagreements: Looking beyond the majority vote in subjective annotations." TACL. 2022.
[2] Mathew et al. "Hatexplain: A benchmark dataset for explainable hate speech detection." AAAI. 2021.
[3] Ding et al. "Evaluating Saliency Methods for Neural Language Models." NAACL. 2021.
[4] Fong et al. "Interpretable explanations of black boxes by meaningful perturbation." ICCV. 2017.
Pdf: /pdf/82430698fccd4e309d5d54b840a5b01c5a644c9c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Spiking Token Mixer: An event-driven friendly Former structure for spiking neural networks | Accept (poster) | Summary: To address the issue that certain operators (e.g., spiking self-attention, SSA) in existing spiking Transformers cannot be executed on asynchronous neuromorphic chips, this work designs the Spiking Token Mixer (STMixer) architecture, which consists exclusively of operations supported in asynchronous scenarios, including convolutional layers, fully connected layers, and residual paths.
Strengths: This work raises a significant issue: the current spiking self-attention operators are indeed difficult to implement on asynchronous chips. Generally, designers of SNN algorithms do not often consider their hardware execution. I find it reasonable that the authors have designed algorithms with hardware limitations in mind.
Weaknesses: 1. The technical contribution is limited; there is no in-depth analysis of the proposed operators, and some existing work in the field is overlooked.
2. Some key points are not clearly explained, especially the section on operators.
3. The work does not follow the latest results in the field, e.g., meta-spikeformer [1], and the performance is not SOTA.
[1] Spike-driven transformer v2: Meta spiking neural network architecture inspiring the design of next-generation neuromorphic chips. In ICLR 2024.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Some references are incorrect and cannot be accessed (Lines 91-92, 255).
2. The mathematical descriptions in Section 4.3 are not rigorous and omit some crucial information. Is the matrix X composed of continuous values or is it a spiking matrix? All the dimensional changes are ignored, which is unprofessional. You cannot expect every reader to read all your references.
3. Could the authors explain the proposed STM from the perspective of linear attention? Since the LIF following Q and K functions as a kernel, why can the spiking neuron layer be directly removed in this context?
4. SML training seems crucial, but other baseline works do not appear to use this method. Therefore, is the comparison in Table 1 unfair? Why is there no comparison with Spike-driven Transformer v1/v2 in Table 1?
5. What is SDT in Table 2? It is mentioned only once in the entire text and is not explained.
6. I suggest the authors carefully check the references. If the paper has been accepted, please cite the official version instead of the arXiv version, as this is an academic convention.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment 1:
The technical contribution is limited...
Response to comment 1:
We appreciate the reviewer's feedback but respectfully disagree that the technical contribution of this paper is limited. This paper does not merely propose a structure (implanting an ANN Former structure into an SNN) to achieve SNN SOTA performance on GPUs. Instead, it designs a model suitable for asynchronous scenarios, encouraging future research to refocus on the running environments of SNNs. As mentioned in the General Response, constructing a high-performance network using operators supported by current asynchronous hardware is significantly more challenging than using those supported by synchronous hardware. Currently, advanced SNN models use multi-branch spike matrix multiplication, Hadamard product operations, and others, which allow straightforward access to global information but are difficult to execute in event-driven asynchronous settings. The STM module proposed in this paper employs simple weight matrices to mix global information across the token dimension; this technique has already proven effective in ANN works such as MLP-Mixer [1] and External Attention [2]. Additionally, STM adjusts the number of parameters and the performance by employing multiple heads (i.e., multiple mixing weight matrices).
Comment 2:
Some key points are not clearly explained, especially the section on operators.
Response to comment 2:
We appreciate your feedback and have revised Section 4.3 to provide a more comprehensive explanation of the STM operator.
Please refer to "Response to comment 5" for reviewer ESkb.
Comment 3:
The work does not follow the latest results in the field, e.g., meta-spikeformer [1]...
Response to comment 3:
The architecture of STMixer is not at odds with the meta-transformer structure; rather, it is composed of a token mixer module and a channel mixer part. We have incorporated the spike-driven transformer into Table 1 for a comparative analysis. Although STMixer's performance on ImageNet may not surpass that of the Spike-driven transformer v2, it's important to note that due to STMixer's ultra-lightweight structure, it only consumes one-third of the energy that the Spike-driven transformer v2 does under the same parameter scale (in a synchronous scenario). As an example, Meta-SpikeFormer (T=1) consumes 7.8mJ of energy with 31.3M parameters and achieves 75.4% accuracy on ImageNet. In contrast, STmixer-8-512-16 (T=1) with 30.12M parameters achieves 73.82% ImageNet accuracy but only consumes 2.2mJ of energy.
Comment 4:
Some references are incorrect and cannot be accessed. LInes 91-92, 255
Response to comment 4:
We apologize for the oversight. We have thoroughly reviewed the references and have corrected any inaccuracies.
Comment 5:
The mathematical descriptions in Section 4.3 are not rigorous and omit some crucial information...
Response to comment 5:
Thank you for your comment. We apologize for any confusion caused by our initial presentation. To clarify, the matrix X indeed represents a spike matrix, serving as the input for Cell 1 as shown in Fig. 1A. We acknowledge the importance of providing comprehensive dimensional information and regret the omission in our previous version. To address your concerns, we have revised Section 4.3 in the manuscript to include detailed dimensional changes and additional necessary information for each variable.
Comment 6:
Could the authors explain the proposed STM from the perspective..
Response to comment 6:
Our proposed STM is analogous to the MLP-Mixer, which uses a weight matrix to mix information across the token dimension. Its mixing weight matrix $W_\text{STM}^h\in\mathbb{R}^{N\times N}$ can be viewed, to a certain extent, as an approximation of the attention matrix. To better illustrate this point, we removed the LIF layers from Q and K and found that this operation does not compromise the SNN's performance (79.81% to 80.00%).
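The token-mixing idea above can be sketched as a plain matrix product across the token dimension. This is an illustration of the concept, not the authors' implementation: the spike matrix and mixing weights below are made up, and a real STM head would use a learned $N \times N$ weight matrix per head.

```python
def stm_mix(x_spikes, w_mix):
    """Mix information across tokens with an N x N weight matrix
    (one head), MLP-Mixer-style: out = W_mix @ X."""
    n, d = len(x_spikes), len(x_spikes[0])
    out = [[0.0] * d for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if w_mix[i][j] == 0.0:
                continue  # skip zero weights (sparse, event-friendly)
            for c in range(d):
                out[i][c] += w_mix[i][j] * x_spikes[j][c]
    return out

# Binary spike matrix: N=3 tokens, d=2 channels.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
uniform = [[1 / 3] * 3 for _ in range(3)]  # each token averages all tokens
mixed = stm_mix(x, uniform)
```

With a uniform mixing matrix every token receives the global average, showing how a fixed weight matrix (rather than a data-dependent attention map) spreads global information across tokens.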
Comment 7:
SML training seems crucial, but other baseline works do not appear to use this method. Therefore, is the comparison in Table 1 unfair? Why is there no comparison with Spike-driven Transformer v1/v2 in Table 1?
Response to comment 7:
Thank you for your insightful comment. Our primary intention is to illustrate the potential of the SML algorithm when optimized by the STM module, which results in a significant performance boost for STMixer. We believe this indicates the potential for SNNs to achieve outstanding performance. To address your concern about fairness in comparison, we have now included the results of STMixer without SML training in Table 1. Even without SML training, STMixer still outperforms the comparison works, with $79.79\% \pm 0.19$ accuracy on CIFAR-100 and $95.96\% \pm 0.04$ on CIFAR-10 for STMixer-4-384-32 (T=4). In response to your second query, we have now also included the results of Spike-driven Transformer v1/v2 in Table 1 for a more comprehensive comparison.
Comment 8:
What is SDT in Table 2..
Response to comment 8:
We apologize for the oversight. SDT in Table 2 stands for "Standard Direct Training". This refers to our baseline training pipeline, which is conducted without the use of the SML method.
Comment 9:
I suggest the authors carefully check the references..
Response to comment 9:
Thank you for your suggestion. We have carefully reviewed all the references and replaced the citations of arXiv versions with the official versions of the papers where they have been officially published. We appreciate your attention to detail and adherence to academic convention.
[1] Tolstikhin I O, Houlsby N, Kolesnikov A, et al. Mlp-mixer: An all-mlp architecture for vision[J]. Advances in neural information processing systems, 2021, 34: 24261-24272.
[2] Guo M H, Liu Z N, Mu T J, et al. Beyond self-attention: External attention using two linear layers for visual tasks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(5): 5436-5447.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The rebuttal addresses my concerns. "What kind of SNN can be supported by neuromorphic chips" is actually always ignored by people in the field. The authors' questions and solutions are insightful. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful comments and your appreciation of our work. | Summary: Most spiking neural network architectures cannot truly show their superiority on neuromorphic hardware, since in event-driven scenarios spike arrival times are not precise and can result in significant output differences, e.g., when a max pooling layer is present. This paper proposes the Spiking Token Mixer (STMixer), which handles this problem well. The authors also validate that STMixer achieves performance on par with, or even surpassing, existing Spikformer-like works in synchronous scenarios.
Strengths: 1. The paper is well-organized and written in a clear manner, making it accessible to readers interested in the topic.
2. The paper solves an important problem in the SNN field, and the proposed method is interesting and new.
3. The paper provides various experiments and ablations to show the effectiveness of the proposed method.
Weaknesses: 1. Does the proposed method increase the energy consumption? The authors could provide detailed explanations.
2. The ablation experiments could include different types of datasets and models.
3. The comparison methods in the experiments contrasting with other approaches on CIFAR-10 may not be the most up-to-date.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can you further explain the meaning of the sentence in the first paragraph of the Introduction that "due to the accumulation of membrane potential in the IF neuron, spike arrival timing errors do not significantly impact subsequent layers."?
2. Could you provide a detailed explanation of the method?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment 1:
Does the proposed method increase the energy consumption? The authors could provide detailed explanations.
Response to comment 1:
The SML method does not increase the energy consumption during the inference stage. After the training phase ends, the SML method eliminates the added blocks, ensuring that the energy consumption of SNN inference remains unaffected. As demonstrated in Table 1, the STMixer does not introduce a significant additional energy cost compared to the Spikingformer. For instance, the energy consumption of STMixer-8-768-16 at T=1 is 4.45 mJ, whereas the Spikingformer-8-768 at T=4 consumes 13.68 mJ. Considering that energy consumption is proportional to the time step T, we can infer that an STMixer with the same scale and time step would have energy consumption comparable to that of a Spikingformer.
Comment 2:
The ablation experiments could include different types of datasets and models.
Response to comment 2:
We have conducted an ablation study on the CIFAR-10 dataset. The experiment results validate the effectiveness of all the designed components. The results are summarized in the table below.
| Modification | STMixer | IPSPS -> SPS | STM -> SSA | SML(STM)->SML(Conv) | SML->SDT |
| ------------ | ------- | ------------ | ---------- | ------------------- | -------- |
| Accuracy(%) | 95.65 | 95.49 | 94.45 | 95.40 | 95.13 |
Comment 3:
The comparison methods in the experiments contrasting with other approaches on CIFAR-10 may not be the most up-to-date.
Response to comment 3:
Thank you for your valuable feedback. In the original manuscript, we indeed primarily compared our method with classical structure works and related studies. Recognizing the importance of keeping our work current and relevant, we have now updated our comparison in the revised manuscript to include more recent approaches. Specifically, these updates have been incorporated into Table 1. We believe that these changes will provide a more comprehensive and up-to-date comparison, addressing your concern.
Comment 4:
Can you further explain the meaning of the sentence in first paragraph of Introduction that "due to the accumulation of membrane potential in the IF neuron, spike arrival timing errors do not significantly impact subsequent layers.".
Response to comment 4:
During the SNN training phase, we convert the DVS stream in a time window into an image frame. For instance, if a position (x, y) of the DVS stream has 4 spikes in the time window, the value at position (x, y) in the image frame would be 4. This value is then input into the SNN for training. Suppose an Integrate-and-Fire (IF) neuron (threshold 1.0, employing soft reset) only receives input from the point (x, y), and the weight is 0.3. After receiving the input, the neuron would generate a spike by the end of this frame, leaving a residual membrane potential of 0.2. In asynchronous scenarios, spikes in the DVS data stream enter the neuron one by one, not in frame format. Each arriving spike increases the membrane potential by 0.3, ultimately generating a spike and leaving a residual membrane potential of 0.2. If there is some error in the arrival time of the spikes, then as long as the number of spikes is constant, the neuron will still generate a spike and leave a residual membrane potential of 0.2. Of course, the above scenario is the simplest example; in reality, there may be errors due to uneven spike generation, which can have some effect on subsequent layers.
Starting from this simple example, we can infer that traditional operators such as convolution and fully connected layers can alleviate the problem of spike arrival errors to a certain extent, as long as an IF neuron layer follows.
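The frame-based vs. event-by-event equivalence described above can be sketched in a few lines of Python (a toy illustration following the numbers in this example, not code from the paper; `if_neuron` is a hypothetical helper):

```python
def if_neuron(increments, threshold=1.0):
    """Integrate-and-Fire neuron with soft reset: accumulate input
    increments, and each time the membrane potential crosses the
    threshold, emit a spike and subtract the threshold (keeping the
    residual). Returns (spike count, residual membrane potential)."""
    v, spikes = 0.0, 0
    for inc in increments:
        v += inc
        while v >= threshold:
            v -= threshold   # soft reset keeps the residual
            spikes += 1
    return spikes, round(v, 9)

w = 0.3  # synaptic weight from the example above

# Frame-based (synchronous): the 4 spikes in the window arrive as one value.
frame = if_neuron([4 * w])

# Event-driven (asynchronous): the same 4 spikes arrive one by one; any
# timing jitter leaves the sum, and hence the result, unchanged.
events = if_neuron([w, w, w, w])

assert frame == events == (1, 0.2)
```

Since only the accumulated sum matters to the IF neuron, permuting or delaying the individual spike arrivals does not change the spike count or residual potential, matching the argument above.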
Comment 5:
Could you provided a detailed explanation for the method?
Response to comment 5:
Our approach is based on modifications to the Spikingformer model, specifically its SPS and SSA components. The modifications were made with two objectives in mind: 1) to adapt the model for asynchronous scenarios, and 2) to enhance the model's performance. To achieve the first objective, we eliminated the max pooling layer in the SPS and performed downsampling by modifying the stride of the convolutional layer. Concurrently, we proposed the STM module to replace the SSA module. The STM module only contains fully connected components and does not introduce multi-branch spike matrix operations. The STM divides the input $X \in \mathbb{R}^{T\times C \times N}$, after the fully connected layer, into H parts; each part $X^h \in \mathbb{R}^{T\times C/H\times N}$ is mixed along the token dimension through a mixing weight matrix $W^h\in \mathbb{R}^{N\times N}$. We found that under the same training script, the performance of STM is not inferior to that of SSA. To achieve the second objective, we proposed the IPSPS method to replace the SPS module. A small portion of IPSPS's output tensor comes from the input directly transformed by a convolutional layer, thereby preserving sufficient input information. We also modified the surrogate module used by the SML method, using the STM module to construct the SML block, which significantly reduces the additional training overhead brought by the SML method.
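The head-wise token mixing described here can be sketched in NumPy (a minimal illustration under our reading of the description; `stm_token_mix` and its argument layout are hypothetical, and the spiking dynamics and LIF layers are omitted):

```python
import numpy as np

def stm_token_mix(x, mix_weights):
    """Split the channel dimension into H heads and mix each head along
    the token dimension with its own N x N matrix -- fully connected
    mixing only, with no attention and no multi-branch spike products.

    x:           input tensor of shape (T, C, N)
    mix_weights: list of H matrices, each of shape (N, N); H must divide C
    """
    T, C, N = x.shape
    H = len(mix_weights)
    heads = x.reshape(T, H, C // H, N)                        # (T, H, C/H, N)
    mixed = np.stack([heads[:, h] @ mix_weights[h] for h in range(H)], axis=1)
    return mixed.reshape(T, C, N)

# Toy usage with a binary spike tensor and H = 2 heads.
rng = np.random.default_rng(0)
x = (rng.random((4, 8, 16)) > 0.5).astype(np.float32)        # T=4, C=8, N=16
W = [rng.standard_normal((16, 16)).astype(np.float32) for _ in range(2)]
y = stm_token_mix(x, W)
assert y.shape == x.shape
```

Each head's $N \times N$ matrix plays the role of the mixing weight matrix $W^h$ in the description, mixing information across tokens while leaving the channel dimension within a head untouched.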
---
Rebuttal Comment 1.1:
Title: Good rebuttal
Comment: Thanks for your response. My concerns have been well addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback and for the updated score. | Summary: This article examines problems with the SSA module in Spikformer in asynchronous scenarios and suggests a new module, the Spiking Token Mixing (STM) module, which consists solely of network components suited to asynchronous environments. In addition, this article proposes the information-protection spiking patch splitting (IPSPS) module to reduce information loss.
Strengths: This article examines problems with the SSA module in Spikformer in asynchronous scenarios, which is a good observation, and the authors propose STM and IPSPS to solve them.
Weaknesses: 1. From your code and Table 2 (SML->SDT), the performance improvement mainly comes from several SML blocks, which consist of several ANN layers. When considering the addition of residual connections and several SML blocks, the STMixer can even be seen as an ANN model (the float input is connected to the last layer through intermediate hidden ANN layers). In my opinion, this is a fatal issue.
2. From the perspective of solving the asynchronous-scenario problem, the residual connection is also a problem: residual connections in the pre-activation shortcut are floating-point element-wise additions, which suffer the same problem as max pooling and spiking matrix multiplication.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The training cost (memory consumption and training time) of STM, SML, and STMixer should be reported compared with previous work.
2. The energy consumption of the SML block needs discussion.
3. A complete comparison of STM and SSA needs to be carried out (report the results of replacing SSA in Spikingformer or Spikformer with the STM block).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment 1:
From your code and Table 2 (SML->SDT), the performance improvement mainly comes from several SML blocks, which consist of several ANN layers. When considering the addition of residual connections and several SML blocks, the STMixer can even be seen as an ANN model (the float input is connected to the last layer through intermediate hidden ANN layers). In my opinion, this is a fatal issue.
Response to comment 1:
We apologize for the confusion. The STMixer does not involve the ANN module during the inference stage. The primary function of the SML block [1] is to transmit effective gradients to the SNN's intermediate layers during back-propagation; it does not contribute additional information to the backbone SNN during forward propagation. Consequently, the SML block does not impact the forward propagation of the backbone SNN during the inference phase, and SML blocks can therefore be removed after training is complete. The reported accuracy in all experiments pertains to the backbone SNN, not the SML block. We have added relevant instructions to the updated manuscript.
Comment 2:
From the perspective of solving the asynchronous-scenario problem, the residual connection is also a problem: residual connections in the pre-activation shortcut are floating-point element-wise additions, which suffer the same problem as max pooling and spiking matrix multiplication.
Response to comment 2:
In asynchronous scenarios, a pre-activation residual connection is not impossible to realize. Despite being termed a 'membrane shortcut' [2], the variable employed for the shortcut is the increment of the membrane potential, not its cumulative value (residual membrane potential). The hardware needs a circuit that supports transmitting floating-point values to realize the membrane shortcut. Although spike arrival delay may slightly alter the output of the convolutional or fully connected layer, the cumulative output value remains constant. As a result, the sum of the increments in the shortcut from the current layer to the subsequent neuronal membrane potential will not change significantly. In other words, spike arrival delay can affect the sequence of spike arrivals, but the sequence does not affect the summation, which has little impact on the shortcut.
Comment 3:
The training cost (memory consumption and training time) of STM, SML, and STMixer should be reported compared with previous work.
Response to comment 3:
We compared the training time and memory consumption of STMixer (T = 4) on CIFAR-100 when using either STM or SSA as the token mixer, and we also provide the training consumption of STMixer with SML. The table below summarizes the results. The training overhead for the three cases does not differ significantly, with the SSA case having the greatest memory consumption and the STM+SML case taking the longest training time. We hope this information is helpful. Please let us know if there are any other aspects you would like us to address.
| Case | training time per epoch | memory consumption |
| ------- | ----------------------- | ------------------ |
| STM | 33.27 s | 11,774 MB |
| STM+SML | 40.04 s | 12,514 MB |
| SSA | 34.37 s | 13,994 MB |
Comment 4:
The energy consumption of the SML block needs discussion.
Response to comment 4:
We apologize if there was any confusion caused previously. The SML block is primarily involved in the training phase of the Spiking Neural Network (SNN), but it does not participate in the inference phase. As such, while it does have an impact on the training energy consumption of the system during the training phase, it does not contribute to the energy consumption during the inference phase of the SNN. We hope this clarifies the role of the SML block in the energy consumption.
Comment 5:
A complete comparison of STM and SSA need to be carried out (report the results of replacing SSA in Spikingformer or Spikformer by STM block.).
Response to comment 5:
We used the CIFAR-100 training script provided by the Spikingformer study and merely replaced its SSA module with STM. Consequently, the accuracy of the SNN dropped from 76.25% to 75.86%. This outcome aligns with the network search results displayed in Figure 4, suggesting that while the SSA module entails a higher computational load, its performance is slightly superior to that of STM. Although STM's performance is inferior to SSA's, it possesses two unique advantages: lower computational energy on synchronous hardware and suitability for asynchronous hardware. Moreover, as shown in Table 2, STM benefits more from the SML training algorithm, which enables STM to achieve a performance of 79.55%, significantly higher than SSA's 78.33%. This indicates that STM holds the potential to rival the performance of the SSA module.
[1] Deng S, Lin H, Li Y, et al. Surrogate module learning: Reduce the gradient error accumulation in training spiking neural networks[C]//International Conference on Machine Learning. PMLR, 2023: 7645-7657.
[2] Hu Y, Deng L, Wu Y, et al. Advancing spiking neural networks toward deep residual learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
I hope this message finds you well. We greatly appreciate the time and effort you’ve already invested in reviewing our work, and we truly value your comments. We are writing to kindly follow up on the rebuttal we submitted and sincerely hope you could review it and update the scores if your concerns have been resolved. If there are any additional points you would like us to address, we would be more than happy to provide further clarification.
Thank you once again for your time and consideration.
Best regards,
Authors
---
Rebuttal 2:
Comment: We would like to express our sincere gratitude for your prompt response, for taking the time to review our rebuttal, and for increasing the score. | null | null | Rebuttal 1:
Rebuttal: **General Response**
We appreciate all of the reviewers' comments and reviews. Here, we would like to provide a general response to reemphasize the motivation of this paper and its contribution to the SNN field.
The main goal of this work is to design a well-performing SNN model that is friendly to asynchronous environments. Currently, SNN hardware bifurcates into synchronous and asynchronous types. Asynchronous hardware, devoid of a hardware clock and entirely event-driven, emerges as the ideal choice for SNN due to its substantially lower energy consumption. Nevertheless, it only supports very few network operations compared to synchronous hardware.
On the other hand, current advanced SNN models have adopted the design philosophy of Transformer, enhancing their performance significantly beyond previous spiking CNN models on GPU simulation. However, except for matrix addition, many of these SOTA SNN models necessitate other operations (e.g., matrix multiplication) when merging multi-branch spike matrices, which require precise timings for spike arrivals. In asynchronous environments, it is very challenging to ensure the simultaneous arrival of two spikes. Related works often overlook discussions about the SNN running environment. Our study aims to provoke researchers to rethink the adaptability of SNN models to different running environments.
Given the limited network operations supported by asynchronous hardware, the design of asynchronous friendly SNN is constrained, making the task of achieving superior performance with new models far more challenging than with existing advanced models. Consequently, the goal of this research is not to outperform the current SOTA SNN models, but to find an SNN model that is both compatible with asynchronous environments and capable of excellent performance in synchronous settings. With these considerations in mind, we propose the STMixer, which is composed of convolution, fully connected, and residual structures. It does not involve operations other than addition when merging multiple branches; thus, theoretically, STMixer supports asynchronous environments. Concurrently, our experiments demonstrate that STMixer can achieve outstanding performance even at extremely low time steps, suggesting its potential as an alternative SNN model for synchronous hardware as well.
STMixer can provide guidance for the design of asynchronous-friendly SNN models. We hope that this paper will guide future SNN structural design efforts to pay more attention to deployment-environment issues, further yielding a greater number of excellent asynchronous-friendly SNN models and training algorithms. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling | Accept (poster) | Summary: The paper introduces a framework named TREAT (Time-Reversal Symmetry ODE) aimed at improving dynamical system modeling through a physics-informed approach. It incorporates Time-Reversal Symmetry (TRS) as a regularization term to enhance model precision. This approach is shown to preserve energy in conservative systems and provide strong inductive bias for non-conservative, reversible systems. Theoretical proofs are provided to demonstrate the numerical benefits. The framework's effectiveness is validated through experiments on nine diverse datasets, showing improvement over existing models.
Strengths: The paper proposes TRS as a regularization term for dynamical system modeling. This regularization intuitively makes sense, as it augments the reverse trajectory as additional data for training.
The paper provides theory showing that TRS minimizes higher-order Taylor terms, enhancing modeling accuracy.
The paper conducts extensive experiments on nine datasets showing that TREAT outperforms other neural ODE models.
Weaknesses: 1. The paper only uses graphODE as the backbone. From my understanding, the TRS regularization can also fit with other neural ODE methods; it would be interesting to add experiments to show its compatibility (e.g., vanilla neural ODE).
2. The theoretical analysis in Theorem 3.1 bounds the error in the embedding space z; however, the loss function in Eqn. 10 describes the error in the original space y. It is worth mentioning the difference due to encoder/decoder error.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Figure 1 shows that time-reversal symmetric systems are included within general classical mechanics. I'm confused about the TRS definition in physics versus the numerical TRS used in this paper. In my understanding, any dynamical system with a deterministic ODE description can reverse-trace the initial points by solving it from the end point numerically; therefore, the method should be generally applicable?
2. In Figure 4a, do you have any insights into why TREAT does not do well compared with the others at shorter prediction lengths?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable comments and suggestions for improving our paper! We would like to make the following claims and hope they can address your concerns.
### **W1. About the use case for Vanilla neural ODE**
Thank you for the valuable suggestion!
Our TRS loss can be coupled with other models as well, due to its simplicity and flexibility. In our experiments for single-agent systems, TREAT actually becomes neuralODE + TRS loss, as there is no graph structure considered if there’s only one agent (also mentioned in Appendix D.2 page 25 line 673). As shown in Table 1, TREAT consistently outperforms other baselines across diverse single-agent datasets, showing its strong generalization ability and effectiveness.
Due to its simplicity, we believe other models, such as NeuralODE and flow-based methods, can also benefit from TRS loss.
### **W2. Theorem 3.1**
Thanks for pointing this out. We will add an explanation of the relationship between the original space y and the embedding space z. Our theorem aims to highlight the differences between training with and without the TRS loss. Both scenarios involve an encoder and a decoder. However, since y and z correspond to each other through the encoder and decoder, the errors introduced by the encoder and decoder do not affect the validity of Theorem 3.1.
### **Q1. Is TRS a universal property of all classic ODEs?**
A deterministic ODE can indeed be solved numerically from the endpoint to trace back to the initial points. However, the definition of time-reversal symmetry (TRS) in physics requires that the results of solving the system forward and backward in time be consistent, which does not always hold for a given ODE function. In Appendix B (page 18, line 523 ~ page 19, line 540), we provide examples of reversible and irreversible systems, illustrating their ODE functions and verifying the systems' energy conservation and time-reversal symmetry. For example, spring systems with friction do not satisfy TRS and are also not energy-conservative.
### **Q2. Explanation of Figure 4 for short-term predictions**
Thanks for pointing it out! We first observe that on damped springs and forced springs, TREAT offers comparable results, and on the pendulum, TREAT significantly outperforms others.
As for the simple spring with shorter prediction lengths, we assume the potential reasons as follows:
1. For TRS-ODEN and HODEN, which do not have encoder and decoder structures, we compute each object’s initial state via linear spline interpolation if it is missing. Since Simple Spring is a trivial system, it is less sensitive to the initial state and easier to learn. Also, TRS may be less important for making short-term predictions than long-term predictions as suggested by Figure 4. Therefore, in the short term, forcing TRS and having a more complex encoder structure (TREAT) may add additional burdens to the model learning process, and simple models like TRS-ODEN and HODEN can already work well. However, we can observe in the long run, TREAT significantly achieves better results.
2. TRS-ODEN and HODEN are two single-agent baselines that do not consider the interactions among agents, i.e., the graph structure. As the dynamics of the Simple Spring are not very complicated, within a short prediction length, the interactions between objects may not have a significant impact on the trajectories. This aligns with observations in existing literature [1].
[1] Zijie Huang, Yizhou Sun and Wei Wang. "Coupled Graph ODE for Learning Interacting System Dynamics." KDD 2021.
---
Rebuttal Comment 1.1:
Comment: I thank authors for the detailed reply for my questions. I agree with most of them.
One minor concern regarding the definition of "reversibility" for dynamical systems: I think the paper is applicable to all "time reversible" ODEs, which is more general than the Hamiltonian reversibility in your Appendix B? As long as the function is deterministic, say $\dot{x}=f(x)$, it can always be back-tracked by $\dot{x}=-f(x)$? This is also applicable to the spring system with friction.
---
Rebuttal 2:
Comment: Thank you very much for your valuable feedback!
Regarding your minor concern, we made the following clarification:
The definition of TRS is $\frac{d R\circ x}{dt} = -F(R\circ x)$, as shown on page 4, lines 111-120. For Hamiltonian systems, the state x contains both position q and velocity p, i.e., $x = (q,p)$, and the reversing operator works as $R\circ (q,p) = (q,-p)$ according to [1]. This can simply be understood as follows: if a pendulum swings from A to B (with no friction, external forces, etc.), we can just revert its velocity in the opposite direction and let it swing back from B to A, and the two trajectories would be the same. In this case, only the velocity is reversed, and the position at B is kept the same.
Now let us check whether spring systems with friction satisfy $\frac{d R\circ x}{dt} = -F(R\circ x)$. As shown in Appendix B.3, the deterministic ODE functions for such systems are defined as $\frac{dq}{dt} = \frac{p}{m}$, $\frac{dp}{dt} = -kq-\gamma\frac{p}{m}$, where $m$ is the mass. Since $R\circ (q,p) = (q,-p)$, going back to the TRS definition $\frac{d R\circ x}{dt} = -F(R\circ x)$, for the position $q$ we need the first ODE function to satisfy $\frac{dq(-t)}{d(-t)} = -F(q(-t), p(-t))$, which implies $\frac{dq(t)}{dt} = -F(q(t), -p(t))$. Similarly, for the velocity $p$ we need the second ODE function to satisfy $\frac{dp(-t)}{d(-t)} = -F(q(-t), p(-t))$, which implies $\frac{dp(t)}{dt} = F(q(t), -p(t))$.
We can easily verify that the required condition for the first ODE function holds ($\frac{dq(t)}{dt} = -F(q(t), -p(t))$), while that for the second fails ($\frac{dp(t)}{dt} = F(q(t), -p(t))$ does not hold whenever $\gamma \neq 0$). Therefore, spring systems with friction do not satisfy TRS.
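The derivation above can also be checked numerically: integrate the spring forward, apply the reversing operator $R\circ(q,p)=(q,-p)$ to the end state, integrate forward again, and compare against the reversed forward trajectory. Below is a toy sketch with a fixed-step RK4 integrator (function names are illustrative, not from the paper's code):

```python
import numpy as np

def rk4(f, x0, dt, steps):
    """Fixed-step RK4 integration; returns the trajectory including x0."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = traj[-1]
        k1 = f(x)
        k2 = f(x + dt / 2 * k1)
        k3 = f(x + dt / 2 * k2)
        k4 = f(x + dt * k3)
        traj.append(x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

def spring(gamma, k=1.0, m=1.0):
    """Damped spring, state x = (q, p): dq/dt = p/m, dp/dt = -kq - gamma*p/m."""
    return lambda x: np.array([x[1] / m, -k * x[0] - gamma * x[1] / m])

def reversal_gap(gamma, steps=200, dt=0.01):
    """Max position gap between the re-integrated reversed trajectory and
    the reversed forward trajectory; ~0 for a time-reversal-symmetric system."""
    fwd = rk4(spring(gamma), [1.0, 0.0], dt, steps)
    q_end, p_end = fwd[-1]
    back = rk4(spring(gamma), [q_end, -p_end], dt, steps)
    return np.max(np.abs(back[:, 0] - fwd[::-1, 0]))

assert reversal_gap(gamma=0.0) < 1e-6   # frictionless spring: reversible
assert reversal_gap(gamma=0.5) > 1e-2   # friction breaks the symmetry
```

With $\gamma = 0$ the reversed rollout retraces the forward one (up to solver error), while any nonzero friction produces a large gap, matching the algebraic check above.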
Please kindly let us know if there are any questions further, for which we're happy to provide additional information and clarifications. We truly appreciate the time and effort you’ve invested in reviewing our work!
[1] Lamb, J. S., & Roberts, J. A. (1998). Time-reversal symmetry in dynamical systems: a survey. Physica D: Nonlinear Phenomena, 112(1-2), 1-39.
---
Rebuttal Comment 2.1:
Comment: Thanks for the clarification; I mistook "time reversibility" for "time-reversal symmetry" in your paper. I will raise my score to 7.
---
Reply to Comment 2.1.1:
Comment: Thank you for your prompt response and positive feedback on our rebuttal! We will definitely incorporate your valuable suggestions in our revised version. | Summary: The paper proposes a novel Time-Reversal Symmetry (TRS) graph neural ODE, where the TRS is introduced as a soft regularisation term.
Strengths: The paper presents a novel method to combat numerical errors for graph neural ODEs.
Weaknesses: The paper does not introduce early enough that the time-reversal symmetry (TRS) method has been applied to other DL methods such as the TRS-ODEN.
Technical Quality: 3
Clarity: 2
Questions for Authors: Are there cases where the time-reversal symmetry expectation is violated?
A comparison to a numerical solution could serve as an additional benchmark to show how TRS decreases the numerical errors and improves the prediction. Also, why consider only MSE as a performance metric?
The code is anonymised, but it really needs cleaning and proper documentation, at the very least docstrings and comments; I have not tested running the code.
The work will benefit from additional proofreading.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors do not provide a separate Limitations section, and do not discuss limitations of the work in the Conclusions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're grateful for your support and helpful feedback! Please kindly find our response below to the questions raised in the review and let us know if you have any further questions.
### **W1: Reference about TRS-ODEN.**
Thank you for your advice! We would like to discuss more about TRS-ODEN in the introduction part in the revised version. However, we would like to clarify a few points:
We do not position our paper as an improved version of how to enforce TRS as in the TRS-ODEN paper. Instead, we would like to find a general physics-informed prior for domain-agnostic dynamical system modeling, and we found TRS to be a great property for achieving that. We would like to emphasize that our major contribution is identifying the numerical benefits of TRS regularization, i.e., minimizing higher-order Taylor expansion terms during ODE integration steps. This makes the proposed TRS loss widely applicable to a wide range of dynamical systems, regardless of their physical properties (i.e., even irreversible systems). Therefore, we did not see TRS-ODEN as a direct competitor. Nonetheless, in sections where details are needed, we discuss TRS-ODEN in more detail (first appearing in Sec. 2). However, we will make sure to clarify this further in the introduction.
### **Q1: Cases where TRS is violated**
Thanks for your question! Indeed, some systems may not strictly obey the TRS due to situations such as time-varying external forces, internal friction, and underlying stochastic dynamics. For example, spring systems with frictions do not adhere to TRS. We illustrate examples for reversible and irreversible systems in Appendix B (page 18, line 523 ~ page 19, line 540). Therefore, a desired model shall be able to flexibly inject time-reversal symmetry as a soft constraint, so as to cover a wider range of real-world dynamical systems. Note that from the numerical aspect, we also theoretically prove that the TRS loss effectively minimizes higher-order Taylor expansion terms during ODE integration, offering a general numerical advantage for improving modeling accuracy across a wide array of systems, regardless of their physical properties (even for irreversible systems). Therefore, TREAT achieves high-precision modeling from both aspects as depicted in Figure 1(a).
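As a concrete illustration of injecting TRS as a soft constraint, the sketch below computes a reversal penalty that could be added to a prediction loss; it is near zero for a reversible vector field and large for a damped one (a toy forward-Euler sketch; `trs_loss` and the loss combination are illustrative, not the paper's exact implementation):

```python
import numpy as np

def euler_traj(f, x0, dt, steps):
    """Unrolled forward-Euler trajectory, including the initial state."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * f(traj[-1]))
    return np.array(traj)

def trs_loss(f, x0, dt, steps):
    """Soft time-reversal penalty: re-integrate from the reversed end
    state R(q, p) = (q, -p) and compare positions against the reversed
    forward trajectory. Near zero when f is time-reversal symmetric."""
    fwd = euler_traj(f, x0, dt, steps)
    q_end, p_end = fwd[-1]
    back = euler_traj(f, [q_end, -p_end], dt, steps)
    return float(np.mean((back[:, 0] - fwd[::-1, 0]) ** 2))

# Reversible toy dynamics (frictionless oscillator) vs. a damped one.
osc = lambda x: np.array([x[1], -x[0]])
damped = lambda x: np.array([x[1], -x[0] - 0.5 * x[1]])
assert trs_loss(osc, [1.0, 0.0], 1e-3, 1000) < 1e-5
assert trs_loss(damped, [1.0, 0.0], 1e-3, 1000) > 1e-3
# In training, such a term would be added softly, e.g. L = L_pred + alpha * penalty
```

Because the penalty enters the total loss with a weight rather than as a hard constraint, a learned vector field for an irreversible system is merely nudged toward reversibility, not forced into it.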
### **Q2-1: Can TRS decrease numerical errors across different solvers?**
To show TRS can in general help to reduce numerical errors across different choices of solvers, we show the performance of the Euler method and Runge-Kutta (RK4) methods, which have different trade-off between precision and time. As suggested by the results below (also shown in Appendix E.1 on page 26, line 726-735), TREAT consistently outperforms LGODE, our strongest baseline, across different solvers and datasets.
We also notice that the improvement ratio $\frac{\text{LGODE} - \text{TREAT}}{\text{LGODE}}$ is larger when using RK4 compared to Euler. This suggests that our TRS loss minimizes higher-order Taylor expansion terms, thus compensating for numerical errors introduced by ODE solvers.
| Model | Simple Spring (Euler) | Simple Spring (RK4) | Forced Spring (Euler) | Forced Spring (RK4) | Damped Spring (Euler) | Damped Spring (RK4) | Pendulum (Euler) | Pendulum (RK4) |
|---|---|---|---|---|---|---|---|---|
| LGODE | 1.8443 | 1.7429 | 2.0462 | 1.8929 | 1.1686 | 0.9718 | 1.4634 | 1.4156 |
| TREAT | **1.4864** | **1.1178** | **1.6058** | **1.4525** | **0.8070** | **0.5944** | **1.3093** | **1.2527** |
| % Improvement | 19.4057 | 35.8655 | 21.5228 | 23.2659 | 30.9430 | 38.8352 | 10.5303 | 11.5075 |
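As a sanity check, the improvement percentages in the last row can be reproduced from the MSE values with a one-line computation (a minimal sketch; the function name is ours):

```python
def improvement(baseline_mse, treat_mse):
    # relative MSE reduction of TREAT over the LGODE baseline, in percent
    return 100 * (baseline_mse - treat_mse) / baseline_mse

# Simple Spring column: matches the 19.4057 (Euler) and 35.8655 (RK4)
# figures reported in the table above
euler_gain = improvement(1.8443, 1.4864)
rk4_gain = improvement(1.7429, 1.1178)
```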
### **Q2-2: Evaluation metric**
Thank you for your valuable feedback. Since our work focuses on deterministic trajectory prediction, calculating the MSE between the predicted trajectory and the ground truth trajectory is the most direct and effective measure. Therefore, we followed the experimental settings of existing work [1] [2].
[1] Huang, Z., et al. "Learning continuous system dynamics from irregularly-sampled partial observations." Advances in Neural Information Processing Systems, 2020.
[2] Kipf, Thomas, et al. "Neural relational inference for interacting systems." International conference on machine learning. PMLR, 2018.
### **Q3: Code**
Thank you for your feedback. We apologize for not providing sufficient docs and comments for the code. We have refined our code structure and documentation in the anonymous GitHub repo.
### **Q4: Discussion about limitations**
Thank you for your valuable suggestion. We have included the limitations section in Appendix H (page 27, lines 776-779). We apologize for any inconvenience this may have caused. In the revised version, we promise to include it in the main part of the paper.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for looking into the provided comments and suggestions, and I look forward to reading the revised manuscript.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer D3dA,
We sincerely thank you for your encouraging feedback on our rebuttal! We hope it aligns with your expectations and positively influences the score. We will definitely incorporate your valuable suggestions in our revised version; please kindly let us know if there is anything else we can improve. Thanks again for your valuable time and effort in reviewing our paper!
Strengths: This paper focuses on learning complex physical dynamics from data and incorporating physics priors into learning models, which is an important area. This paper proposes a regularization term to enforce Time-Reversal Symmetry (TRS), which is an important physics prior.
Also, the paper numerically shows that integrating the TRS loss within neural ordinary differential equation models demonstrates superior performance on diverse physical systems.
Weaknesses: While the paper on time-reversal symmetry demonstrates several strengths, there are also some potential weaknesses that could be addressed:
1. A Time-Reversal Symmetric ODE Network has already been proposed (https://arxiv.org/pdf/2007.11362), and that article is cited in this paper. However, this paper only points out that the two articles use different properties (Equation 5 and Lemma 2.1) to design the regularization term; the difference in the final regularization terms is not fully explained.
2. The numerical experiments also demonstrate that the method proposed in this paper outperforms the existing time-reversal symmetry regularization (https://arxiv.org/pdf/2007.11362). However, the reason why the proposed method is better is not explained in detail.
3. Some descriptions in the text are not clear enough, please refer to the question section below.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the proof of Lemma 2.1, it is required that R be an involution operator. However, such assumption is not stated in Lemma 2.1. Is there any evidence to prove that this assumption is reasonable?
2. The definition of R does not appear in the paper. Can the method proposed in this paper hold for any R?
3. How is regularization term 9 obtained based on Lemma 2, as R did not appear in the final regularization term 9?
4. Does Theorem 3.1 hold for any untrained NN?
5. The definition of $L_{reverse2}$ does not appear in the paper. Can the condition $L_{reverse2}=L_{reverse}$ be satisfied?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your insightful comments on improving our paper. However, we believe there’s some misunderstanding and would like to make the following claims to address your concerns.
### **W1: The difference to the final implementations.**
We would like to mention that both methods are approximations of TRS. In Appendix A.4 (page 16), we show the differences between the final implementations and visualize them in Figure 8 (assuming one integration step).
The TRS loss in TREAT (ours) follows Lemma 2.1: $R \circ \Phi_t \circ R \circ \Phi_t = I$
It means: we start from $\boldsymbol{\hat{y}}_i^{\text{fwd}}(0)$ and first move forward one step, reaching the state $\boldsymbol{\hat{y}}_i^{\text{fwd}}(1)$. Then we reverse it and move forward one step in the opposite direction, obtaining $\boldsymbol{\hat{y}}_i^{\text{rev}}(0)$. Finally, we reverse again, which should restore the initial state; that is, ideally it equals $\boldsymbol{\hat{y}}_i^{\text{fwd}}(0)$.
The second reverse loss in TRS-ODEN follows Eqn 5, $R \circ \Phi_t = \Phi_{-t}\circ R$. It means we first reverse the initial state and move forward one step in the opposite direction to reach $\boldsymbol{\hat{y}}_i^{\text{rev2}}(-1)$. We then perform a symmetric operation to reach $\boldsymbol{\hat{y}}_i^{\text{rev2}}(1)$, which should align with the forward state $\boldsymbol{\hat{y}}_i^{\text{fwd}}(1)$.
The key differences are illustrated in Figure 8: our method forces the starting point of the backward trajectory to coincide with the end point of the forward one, whereas in TRS-ODEN the backward trajectory starts from the same point as the forward one.
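To make the two constructions concrete, here is a minimal numerical sketch (ours, not from the paper): a single explicit Euler step on a toy $(q, p)$ state, where a slightly damped vector field stands in for an imperfectly learned ODE function, so both residuals are nonzero.

```python
import math

H = 0.1  # integration step size

def g(x):
    # stand-in for a learned ODE function on state x = (q, p); the
    # damping term breaks exact reversibility, as an imperfectly
    # trained network generally would
    q, p = x
    return (p, -q - 0.3 * p)

def step(x, h=H):
    # one explicit Euler step of the (approximate) flow Phi_h
    dq, dp = g(x)
    return (x[0] + h * dq, x[1] + h * dp)

def R(x):
    # reversing operator: keep position, negate velocity
    return (x[0], -x[1])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

x0 = (1.0, 0.5)

# TREAT-style residual (Lemma 2.1): R . Phi . R . Phi should be identity
treat_residual = dist(R(step(R(step(x0)))), x0)

# TRS-ODEN-style residual (Eqn 5): Phi_t . R should equal R . Phi_{-t}
oden_residual = dist(step(R(x0)), R(step(x0, h=-H)))
```

Roughly speaking, the two TRS losses penalize multi-step versions of exactly these residuals.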
We analytically show why our implementation is better based on Figure 8 in Appendix A.4.
### **W2: Why our implementation is better.**
As shown in Appendix A.4 (page 16, lines 494-501), we analytically show that our implementation, which approximates TRS based on Lemma 2.1, has a lower maximum error than TRS-ODEN, as supported by the empirical experiments in Sec. 4.2.
We here use one integration step for illustration purposes. Specifically, if we assume the two reconstruction losses are of the same value $a$, and the two TRS losses have reached the same value $b$, we show that the maximum error between the reversal and ground-truth trajectory for each agent made by TREAT is smaller: in TREAT the maximum error is $\max(a,b)$, whereas in TRS-ODEN it is $a+b$.
Finally, we would like to summarize our contribution again: we do not position our paper as an improved version of how to enforce TRS as in the TRS-ODEN paper. Instead, we aim to find a general physics-informed prior for domain-agnostic dynamical system modeling, and we found TRS to be an excellent property for achieving that. Our major contribution is a physics-inspired regularizer that works well beyond physics domains, due to its numerical properties. Specifically, our TRS loss minimizes higher-order Taylor expansion terms during ODE integration steps. This makes the proposed TRS loss applicable to a wide range of dynamical systems, regardless of their physical properties.
### **Q1.**
Thanks for your question. The reversing operator $R$ is an involution by definition [1], i.e., $R\circ R = I$. We will clarify this in Lemma 2.1 per your valuable suggestion.
For example:
- Consider reversing the position $q(t)$: we have $R\circ q(t) = q(t)$ [1], and therefore $R\circ R\circ q(t) = R \circ q(t) = q(t)$.
- As for the velocity $p(t)$: we have $R\circ p(t) = -p(t)$ [1], and therefore $R\circ R\circ p(t) = R\circ (-p(t)) = p(t)$.
[1] Lamb, J. S., & Roberts, J. A. (1998). Time-reversal symmetry in dynamical systems: a survey. Physica D: Nonlinear Phenomena, 112(1-2), 1-39.
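The involution property is easy to check mechanically for the standard reversing operator on a (position, velocity) state (a trivial sketch, ours):

```python
def R(state):
    # reversing operator on a phase-space state: keep q, flip the sign of p
    q, p = state
    return (q, -p)

s = (1.2, -0.7)
assert R(R(s)) == s  # R applied twice is the identity, i.e. R is an involution
```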
### **Q2.**
The definition of $R$ is given in Section 2.2 (page 4, lines 111-128). It is a reversing operator defined as $R: x \mapsto R \circ x$ with $(R \circ x)(t) = x(-t)$, where $x$ is the observational state of the system. For different systems, $x$ can contain different state variables, so this definition is universal across systems. For example, if $x = (q,p)$ where $q$ is the location and $p$ is the velocity, then $R\circ x = (q,-p)$ [1].
[1] Lamb, J. S., & Roberts, J. A. (1998). Time-reversal symmetry in dynamical systems: a survey. Physica D: Nonlinear Phenomena, 112(1-2), 1-39.
### **Q3.**
As shown in Figure 2 and Figure 8, $z^{\text{rev}}(t'_0)=R \circ z^{\text{fwd}}(t_K)=z^{\text{fwd}}(t_K)$. Combining this with Eqn 8, we obtain $y^{\text{rev}}(t'_j)$, and finally we derive Eqn 9.
### **Q4.**
Yes. Theorem 3.1 does not restrict the form of the ODE function $g$. The ODE function $g$ can be any NN in practice.
### **Q5.**
Thanks for pointing this out! The definition of $\mathcal{L}_{reverse2}$ is proposed in TRS-ODEN, and detailed in our Appendix A.4. We will take your advice and add it to the main text for clarification.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response; my questions are basically resolved and the score has been updated. The main concern is that novelty might be slightly limited, because time-reversal symmetry, and adding it to the network through regularization, have been proposed before. I fully understand the difference in the details of the two methods after reading the response, and I feel a possible improvement would be to theoretically prove that the existing methods cannot achieve the good properties of the NN established in the paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer SfDm,
We sincerely appreciate your valuable feedback and raising the score! We will definitely incorporate your suggestions into our revised version. Regarding the novelty, we would like to further clarify: our work is not a simple enhancement of how to achieve TRS. Our motivation is to find a domain-agnostic physical prior and achieve high-precision modeling for a wide range of dynamical systems, in contrast to domain-specific physical priors such as energy conservation. Our key contribution is that **while TRS is a domain-specific physical prior**, we present **the first theoretical proof** that **the TRS loss can universally improve modeling accuracy** by minimizing higher-order Taylor terms in ODE integration, which is numerically beneficial to various systems regardless of their properties, even irreversible ones. **This bridges the specific physical implication and the general numerical benefits of the physical prior, TRS (as illustrated in Figure 1(a)).**
Regarding model performance, our Lemma 3.2 demonstrates that when both methods (TREAT and TRS-ODEN) achieve the same numerical errors $L_{\text{pred}}$ and $L_{\text{reverse}}$, the maximum error between the reversal and ground-truth trajectory for each agent made by our model, TREAT, is smaller than that of TRS-ODEN. This is also validated by our experiments in Table 1 (ablation study of $\text{TREAT}_{\text{Lrev=rev2}}$).
Once again, we sincerely appreciate your time and positive feedback for improvement! Please kindly let us know if you have any questions further. | Summary: This paper proposes a method to enhance neural ordinary differential equations (ODEs) by enforcing approximate Time-Reversal Symmetry. A self-supervised regularization term is introduced to align forward and backward trajectories predicted by a neural network, promoting energy conservation and stability in the system. Experiments are conducted on 9 datasets, comprising both real-world and simulated systems. The proposed approach outperforms other baseline methods, demonstrating its effectiveness in improving neural ODEs.
Strengths: * The idea of imposing time-reversal symmetry via a regularization term is novel and well-motivated.
* The implementation of the regularization term is simple yet the effect is visible.
* Overall the article is well written, and technical details are explained clearly.
* The results show significant improvement over other baseline models.
Weaknesses: * It would be more interesting to see if the model works for other real-world examples/experiments.
* The article lacks a more comprehensive review on other energy-preserving / time-reversal neural ODE solvers.
Technical Quality: 3
Clarity: 4
Questions for Authors: Does the method work well with models other than GraphODE?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors briefly touched upon the possibility of incorporating properties in the spatial aspect such as translation and rotation equivariance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful comments and valuable advice on improving our paper. We highly appreciate your recognition of the novelty and significance of our work. Regarding your questions, we detailed our responses below.
### **W1: Empirical results on additional real-world examples.**
We appreciate your suggestion to add more real-world examples. Our experiments cover 9 diverse datasets spanning 1) single-agent and multi-agent systems; 2) simulated and real-world systems; and 3) systems with different physical priors. Most of them are simulated datasets, as real-world datasets do not conform neatly to specific physical laws, making it challenging to classify them straightforwardly as either conservative or reversible and to compare against existing physics-informed baselines. Notably, on one real-world human motion dataset (walking object), TREAT outperforms the other baselines significantly, showcasing its strong generalization ability and numerical benefits.
To address your concern, we additionally added a new human motion dataset (dancing object). The results below suggest that TREAT consistently outperforms other baselines on the new real-world dataset, showing its effectiveness.
| Model | TREAT | LGODE | TRS-ODEN | HODEN | LatentODE |
|--|--|--|--|--|--|
| MSE | **2.5420** | 2.7270 | 3.6885 | 4.4342 | 23.0157 |
### **W2: review on other energy-preserving/time-reversal neural ODE solvers.**
We appreciate your suggestion to incorporate more related work on energy-preserving and time-reversal neural ODE solvers. In response, we have drafted the following additions to our revised manuscript.
Various methods have been developed to maintain the total energy of dynamic systems over time [1][2][3][4]. However, strictly energy-conservative approaches can be unrealistic for non-isolated systems that interact with their environments. Conversely, methods that allow for both energy-conserving and dissipative models, as well as reversible and irreversible systems, offer more flexibility [5][6][7][8]. These methods, however, require prior knowledge of the system's properties. For example, [7] necessitates determining the appropriate bracket operator for different systems. Our model, in contrast, introduces a unified approach that learns both the trajectories and the aforementioned properties dynamically, without needing to assign a specific property beforehand.
[1] Greydanus, Samuel, Misko Dzamba, and Jason Yosinski. "Hamiltonian neural networks." Advances in neural information processing systems 32 (2019).
[2] Gruver et al, Deconstructing the inductive biases of Hamiltonian neural networks, ICLR 2022.
[3]Han, Chen-Di, et al. "Adaptable Hamiltonian neural networks." Physical Review Research 3.2 (2021): 023156.
[4] Mattheakis, Marios, et al. "Hamiltonian neural networks for solving equations of motion." Physical Review E 105.6 (2022): 065305.
[5] Zhong et al, Dissipative symoden: Encoding Hamiltonian dynamics with dissipation and control into deep learning, et al, ICLR 2020 workshop.
[6] Morrison, Philip J. "A paradigm for joined Hamiltonian and dissipative systems." Physica D: Nonlinear Phenomena 18.1-3 (1986): 410-419.
[7] Gruber, et al, Reversible and irreversible bracket-based dynamics for deep graph neural networks, NeurIPS 2023.
[8] Huh et al, Time-reversal symmetric ode network, NeurIPS 2020.
### **Q1. Can the TRS loss be coupled with other models?**
Our TRS loss can be coupled with other models as well, due to its simplicity and flexibility. In our experiments on single-agent systems, TREAT in fact reduces to NeuralODE + TRS loss, since no graph structure is involved when there is only one agent. As shown in Table 1, TREAT consistently outperforms the other baselines across diverse single-agent datasets, showing its strong generalization ability and effectiveness.
Due to its simplicity, we believe other models, such as NeuralODE and flow-based methods, can also benefit from TRS loss. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Induced Model Matching: Restricted Models Help Train Full-Featured Models | Accept (spotlight) | Summary: This paper proposes a method to train models which have access to all of the features by making their marginal distribution match a known weaker model which only uses a subset of those features to predict. The authors relate this to knowledge distillation and noising, and come up with an approximate objective which computes this marginalization over the bigger model. Even though it might be intractable to do analytically, they provide a sampling-based method for the computation. They evaluate the model on logistic regression and language modelling
Strengths: - The relation of the proposed method to the previous work in noising is insightful and not immediately obvious, giving an interesting perspective.
- The connection to knowledge distillation is valuable to highlight.
- The code for the method is available.
- The text is generally very well-written and explains the proposed method very well.
Weaknesses: - The empirical results are somewhat dated. The models considered are quite old (RNNs, BERT) and it would be nice to see a more modern set of models such as decoder-only transformers or state-space models. However, using the old models does allow comparison to the noising approach. While the direct comparison to noising is appreciated, in practice most people would likely be comparing to the vanilla training approach as a baseline.
- The main weakness of the paper is the computational overhead incurred by this method, which is not adequately addressed in the paper. With the introduction of language modeling, the question of how this method would scale to larger datasets and models is important, and the scaling properties do not appear favorable.
- For the RNN experiment, each forward and backward pass seems to require k additional passes due to the gradient accumulation for the samples drawn for the model matching objective. This incurs a significant overhead factor of k.
- The overhead factor may actually be worse when considering the need to replace the context for each of the k summands in the IMM objective. This requires swapping out the current context and computing a new set of hidden vectors, potentially up to the length of the input. A back-of-the-envelope calculation suggests the FLOPs could increase from O(L) to O(L + kL^2). However, there may be some subtlety I'm missing here that reduces the overhead.
- For a large dataset, the lookup of the context could be quite difficult to implement. In the worst case, for n sequences of length L, storing the auxiliary data structure could require space on the order of nL^2, which is a factor of L larger than the original dataset. Supporting random access to this data structure may not be feasible for large corpora like Common Crawl without significant infrastructure overhead.
Technical Quality: 3
Clarity: 4
Questions for Authors: My questions all relate to my understanding of the limitations of the IMM approach. I would be happy to reconsider my assessment if the above questions are addressed satisfactorily.
1. Could the authors clarify the approximate computational and memory complexity of this method in terms of the number of sequences n, sequence length L, and any other relevant parameters? It would be helpful to see the analysis for both RNNs and Transformers.
2. Could the authors comment on the asymptotic space complexity of the auxiliary data structure? For large corpora, it seems like this could require a huge amount of disk space and be challenging to query efficiently. Is there a data structure that could mitigate these concerns?
3. Could the authors provide a FLOPs-corrected comparison between the IMM method and the vanilla training approach? Given the higher computational cost per forward and backward pass for IMM, it would be informative to see how the two methods compare when given a similar computational budget.
4. Can the authors comment on practical cases where the IMM approach would be preferred over spending the additional $k$ FLOPS on e.g. processing $k$ more examples?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your deep reading of our paper and for being open-minded about revisiting your assessment! Regarding the key points you raise:
+ **Empirical results dated** — You are correct that the motivation for comparing with LSTMs was the existence of noising baselines there. We used BERT to illustrate validity for Transformers; however, we did not have the infrastructure for large-scale models like LLMs. We hope that the body of all the experiments we've included, including the non-language models, shows the wide merit of the approach.
+ **Computational overhead** — IMM, as implemented in the paper, does have a computational cost relative to the baseline of training only with cross-entropy. We detail this overhead in answering your questions below. You are right that this is a limitation; however, our focus was on the statistical aspect. Given limited data (so additional samples are not an option) along with a good restricted model (potentially derived from the same data, or from a corpus of restricted data), IMM achieves improved accuracy/perplexity/reward. That said, in the general rebuttal we outline a potential solution to tackle the computational overhead.
Regarding your questions:
First, to clarify, your use of $L$ should be replaced with the unroll length of the LSTM, which is the maximum depth through which gradients are backpropagated (set to 35). Also, to be consistent with the paper, we use $n$ for the length of the data (in tokens/words).
1. **Time complexity** — The best way to quantify the computational overhead is relative to the baseline of using traditional cross-entropy.
+ Let $L$ again represent the unroll length of the LSTM. Since histories have to be swapped at every unroll location and the LSTM re-evaluated, the overhead factor is $\mathcal{O}(kL)$. Therefore your expression is partly correct, with $L$ representing the unroll length and $n$ the data size, i.e., we go from $\mathcal{O}(n)$ to $\mathcal{O}(nkL)$ per epoch. We partially mitigate this in our current implementation by applying IMM periodically (not at every iteration), see Appendix D2.2. For example, if we apply it only on an $\mathcal{O}(1/L)$ fraction of iterations, the overhead factor becomes $\mathcal{O}(k)$ and we go from $\mathcal{O}(n)$ to $\mathcal{O}(nk)$.
+ For BERT/Transformers (and in fact in our non-language-modeling experiments), the baseline has no recurrence that requires new passes, so the overhead is $\mathcal{O}(k)$, i.e., we go from $\mathcal{O}(n)$ to $\mathcal{O}(nk)$.
+ A $k$-fold increase is not ideal, but it is acceptable as our choices of $k$ range from $5$ to $10$.
2. **Space complexity** — There are two overheads:
+ *Model memory*: As explained in Appendix C.2, we sequentialize the gradient computation across the $k$ samples. This means that the space overhead during training is a factor of $2$ compared to the baselines (i.e., a second set of gradients), which are themselves of the order of the number of parameters of each model.
+ *Lookup overhead*: You are correct that a naive implementation of the data structure could take space $\mathcal{O}(nL)$. However, a better implementation is a dictionary/hash table of lists containing indices/pointers into a reverse linked list representing the dataset. By referencing a position, we can recover a length-$L$ history at any position at runtime. This requires only $\mathcal{O}(n)$ space, because each key in the dictionary is a short history, and the number of long histories equals the number of times that short history appears; adding up the occurrences of all short histories gives us $n$.
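To illustrate the claim, here is a minimal sketch of such an index (function and variable names are ours, not from the paper): keys are short histories, values are token positions, and long histories are recovered at query time by slicing back into the corpus, so the index itself stays $\mathcal{O}(n)$.

```python
from collections import defaultdict

def build_index(tokens, short_len):
    # one entry per token position => O(n) total index space
    index = defaultdict(list)
    for i in range(short_len, len(tokens)):
        key = tuple(tokens[i - short_len:i])
        index[key].append(i)
    return index

def long_histories(tokens, index, key, long_len):
    # recover full-length contexts at runtime by slicing the corpus,
    # instead of storing any long history explicitly
    return [tuple(tokens[max(0, i - long_len):i]) for i in index[key]]

corpus = "the cat sat on the mat the cat ran".split()
idx = build_index(corpus, short_len=2)
ctxs = long_histories(corpus, idx, ("the", "cat"), long_len=4)
```

Each position is stored exactly once, so summing the list lengths over all keys gives $n$, matching the space argument above.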
3. **FLOPS-Correction** and 4. **Comparison** — For our reply, we would like to address an important misunderstanding here.
+ IMM is not expected to be used in situations where we have access to an interminable data stream, where one can simply continue to train the cross-entropy baseline until the clock runs out. Even in the best of cases (see the general rebuttal) IMM could take twice as much time as the baseline, and it is unclear when the performance gain could exceed having twice as much data (often the gains seem to be equivalent to 20-30% more data).
+ Rather, IMM is expected to be used when the data is what it is, and one is required to additionally incorporate the side-information presented in the form of the feature-restricted model. The goal is to obtain a statistical advantage, which IMM does, at the expense of computational overhead. In the premise of IMM, it simply is not an option to collect that extra full-featured data. Therefore FLOPS-corrections and comparisons are not germane to the context, and if done they will unfairly show that IMM is not competitive. We hope that you do accept this explanation as to why this request cannot be addressed equitably.
We believe that we have addressed all your major points. If there is anything else we could elucidate, please don't hesitate to ask us during the discussion period. We appreciate your deep insight about the computational aspect of the problem. We hope that we have quantified our current overhead and given insight about how this overhead can be reduced in the future. However, considering that our focus was primarily statistical, we hope that you will judge the merits of the paper on that basis. We believe we have something very valuable for the community. We would immensely appreciate it if you could recommend acceptance of the paper! Thank you once again for all your time and effort.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the detailed response.
After reading the rebuttal and other rebuttals and reviews, I am still a bit skeptical of the computational feasibility of the approach. Slowing down training by a factor of 5-10 is quite a sizeable cost, and this should be addressed in the main text and not glossed over. That said, I appreciate the authors' willingness to engage with the topic in the rebuttal and especially to identify alternative, equivalent objectives that are more tractable (such as in the main response).
As the authors point out, I understand the focus is on the relatively small-data regime, where IMM does seem to show some promise. I will raise my recommendation, although I urge the authors to try their best to make the IMM approach more computationally tractable in their next revision if they want it to be adopted more widely.
---
Rebuttal 2:
Title: Thank you so much! + Update on a computationally efficient version.
Comment: Dear Reviewer,
We are humbled by and very appreciative of your decision to move our work firmly into the accept zone. Thank you so much! Computation is not an afterthought for us. We will itemize both time and memory complexity in detail in the paper, just as we did in our rebuttal.
More importantly, to show our dedication to making IMM computationally efficient, we were working very hard on implementing the computationally efficient version that we proposed in the general rebuttal. We had to surmount several technical hurdles, but we were finally successful in applying it to the logistic regression example. In a comment to the general rebuttal, we report on these results. The new method is only 50-70% slower than the baseline, but it continues to have an edge on noising and tracks the version of the paper very closely, especially as the dataset size increases. We also highlight how the same steps could be taken in the language modeling examples.
We don't claim to have fully solved the problem, but we now have both a formulation for computationally efficient IMM and a proof of concept. We plan to describe these in the current paper and to add it to our published code. Thank you very much for pushing us to do this. The case for IMM is so much stronger as a result.
You've been very generous with your revised opinion and we hope that you will continue to support the paper. (We have revised this paper a few times, and it would be great if our final revision is for the camera-ready version at this year's NeurIPS.)
Thank you again for everything.
Sincerely,
Authors | Summary: The paper considers the learning problem when, in addition to the training set, an additional _restricted_ model is available. The restricted model is trained on a different dataset, potentially containing only a subset of the features. It is proposed to augment the training loss with a special induced model matching (IMM) loss that encourages similarity between the full and restricted predictive distributions, where the extra features are marginalized over the empirical data distribution. IMM is compared to similar techniques such as noising and weak-teacher knowledge distillation, with the benefits of IMM demonstrated theoretically. Experiments are conducted on a toy logistic regression problem, language modelling tasks, and reinforcement learning in a toy environment.
Strengths: The proposed idea is interesting and it seems to be an improvement over the existing techniques such as noising and reverse-KD. I see value in being able to systematically incorporate restricted knowledge from a different distribution. Empirically IMM seems to be helping on a number of LM tasks.
Weaknesses: The paper adopts rather exotic notation that doesn't help reading the paper. I had to constantly look into the glossary in the appendix to go through equations. Descriptions of each experiment are also not self-contained, even when reading the appendix. The general idea of IMM is not the most intuitive to me and I think the paper would benefit from a simple graphical illustration of the principle. In my opinion, insufficient clarity in presenting the method and describing experiments is the main weakness of the paper.
If I understand the setup of LM experiments correctly (full and restricted models are trained on the same dataset) then the idea of the restricted model capturing a richer distribution is under-explored and the only way I can see IMM improving performance is as a regularizer (see my questions).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I struggle to find a clear intuitive explanation for why induced model matching helps. Can authors think of a simple 2-dimensional logistic regression problem (for example) and include a graphical illustration of full and restricted models (and their induced variants)?
2. Do I understand it right that the LM experiments the restricted n-gram model has been "trained" exactly on the same dataset on which the full model is being trained?
2.1 If so, then, again, the only reason why IMM helps I can think of is some kind of "smoothing" or regularization of the full model, which probably learned an overly "kinky" distribution from limited data. In the eyes of the authors is that the right intuition?
3. Is there an experiment where the restricted model indeed comes from a different and much larger dataset (which the authors mention as an interesting scenario for applying IMM)? It would be interesting to look at the "trade value" of a "full" data point compared to a "restricted" data point.
4. Could authors hint at the scenarios in which IMM is not useful or even harmful? I would be especially interested if that could be the property of the model classes or data distributions from the full and the restricted model come.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very constructive review! We address some of the points raised:
+ **Notation** — Thank you for your suggestions on improving the notation. We have been working on a few potential alternatives to further clarify our presentation. Here's what we suggest to make things more readable.
- Instead of using $y_{-}$ or $y_{-t}$ for context, we'll use $x$ and $x_t$, and reserve $y$ and $y_t$ only for predictions.
- In order to denote the short/extended components of a context, instead of using $\textsf{sh}(x)$ and $\textsf{ex}(x)$, we'll use $\overline{x}$ and $\underline{x}$ respectively. This will parallel using $\overline{P}$ when denoting the induced model, which only depends on the short history.
- While we tried to be very explicit in our notations, we believe these changes will declutter equations and make everything more legible. We hope that you approve. We are open to other suggestions.
+ **Graphical Representation** — Thank you for the suggestion. We propose to offer a high-level visual representation of the approach. [We have drafted the following figure](https://i.imgur.com/NwQgOPV.png), which also incorporates the notational changes suggested above. We hope to improve this and include it in the paper.
+ **Experiment Descriptions** — In the main body of the paper, it is difficult to include all the details of the experiments; Appendix D has them all. However, we will revisit how we have divided this information across the main body and the appendix. We will make every effort to make each experimental setup self-contained in the main body, with precise references to the appendix.
Now, to your questions:
1. **Visualizing restricted model in logistic regression** — [The following figure](https://i.imgur.com/G4lLm2u.png) illustrates the 3-dimensional logistic regression problem in the paper. The features are sampled uniformly in this box. The Bayes-optimal restricted model only uses the x-coordinate, and so assigns probabilities proportionally to the blue/red areas in the illustrated slice. IMM then encourages the full logistic model to be consistent with these weights, i.e., making sure the proportion of points labeled $\pm$ agrees with these weights at each x. Intuitively, this biases the separating plane to have the right inclination/alignment with the x-axis, which subsequently speeds up the learning process.
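To make this concrete in code, here is a minimal numpy sketch of the same kind of construction (the box, plane, bin width, and all names here are illustrative, not the paper's exact setup): the Bayes-optimal restricted model uses only the first coordinate, the induced model marginalizes a candidate full model over the remaining coordinates using nearby data points, and the IMM penalty measures the mismatch between the two.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Illustrative 3-D setup: features uniform in a box, labels from a true logistic model.
w_true, b_true = np.array([3.0, 1.0, 0.5]), 0.0
X = rng.uniform(-1.0, 1.0, size=(5000, 3))

def restricted_model(x1, n_mc=2000):
    """Bayes-optimal restricted model using only the first coordinate:
    P(y=1 | x1) = E_{x2,x3}[sigmoid(w.x + b)], estimated by Monte Carlo."""
    x23 = rng.uniform(-1.0, 1.0, size=(n_mc, 2))
    return sigmoid(w_true[0] * x1 + x23 @ w_true[1:] + b_true).mean()

def induced_model(Q, X, x1, width=0.1):
    """Induced model of a candidate full model Q: marginalize the extended
    features using training points whose first coordinate is near x1."""
    mask = np.abs(X[:, 0] - x1) < width
    return Q(X[mask]).mean()

def imm_penalty(Q, X, grid):
    """IMM penalty at a grid of short contexts: KL(restricted || induced)."""
    pen = 0.0
    for x1 in grid:
        p = restricted_model(x1)
        q = np.clip(induced_model(Q, X, x1), 1e-6, 1 - 1e-6)
        pen += p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    return pen / len(grid)
```

For the true full model, the induced and restricted models nearly coincide, so the penalty is near zero; a misaligned separating plane inflates it, which is the signal IMM exploits to bias the plane toward the right alignment.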
2. **When the data provides the restricted model** — Your intuition is correct for our own language modeling experiments. We do use the data itself to create our bigram for PTB, and for BERT we use Wikipedia, which it is pretrained on. The extensive literature that tries to incorporate N-grams into more sophisticated language models argues indeed that N-grams tend to capture structural detail that may be missed by the larger model, albeit in their restricted setting. However, some newer papers train N-grams on much larger datasets (since it's faster to do so), which adds more benefit (see Liu et al. 2024).
3. **When the restricted model comes from elsewhere** — In our other experiments, logistic regression and RL, the restricted model is more powerful/rich. As mentioned above, for logistic regression it is the Bayes-optimal restricted model. For RL, it is the exact solution of the POMDP, which can be thought of as full exploration of the reward landscape, but with only one-coordinate observations. We do not have a data-point by data-point value comparison; however, in Appendix D4 (Figures 6 and 7) we show the effect of artificially weakening those perfect models. The bottom line is that as long as we have enough extra data to build a decent approximation of the true restricted model, IMM improves performance, in a way commensurate with the quality of the model (and thus the amount of restricted data). We have not done a theoretical analysis of this tradeoff, but it's an excellent suggestion for future investigation.
4. **When is IMM harmful** — In our experience, we can identify two scenarios in which IMM can be harmful:
+ If, simultaneously, (a) the restricted model is of bad quality, and (b) the value of $\lambda$ is not properly tuned. This is hinted at in the importance of tuning lambda properly in the low-quality model experiments of Appendix D4. Therefore, you are right that if the model class of the restricted model is not powerful enough, it may not capture the true restricted model properly, and could lead to harm. (However, this goes against the main premise of the paper, i.e., the availability of good target models)
+ Also, when the amount of data is very limited, or when the distribution of the extended context given the short context is hard to learn (e.g., lacks structure such as smoothness or latent low-dimensionality), the learned induced model $\hat Q$ will not be accurate, and performance can suffer. We see a hint of this in Figure 1, where with very few data points IMM is bested by noising (though it is still no worse than no-IMM).
We did our best to address your concerns. If there is anything else we could elucidate, please don't hesitate to ask us during the discussion period. We appreciate your leaning toward acceptance. We hope that you will engage us further, and that based on this we will earn a higher score from you! Thank you so much for your time and effort.
---
Rebuttal Comment 1.1:
Comment: I thank authors for their clarifications and I'm raising my score for 1 point. I hope that the improved notation and the toy task figure (which I would still improve) will make it into the future version of the paper.
---
Reply to Comment 1.1.1:
Title: Thank you very much!
Comment: Dear Reviewer,
We thank you very much for approving of the new notation. We will definitely use it in the paper, since it's so much more streamlined. We also have a few ideas on how to improve the visualization of the toy example. In particular, we can display the Bayes restricted model and a model-in-training along with its induced model, and show how IMM encourages one to tend toward the other (the alignment that we mentioned).
We greatly appreciate the additional point on your score! We're sorry for not replying sooner; we were spending a lot of time researching a computationally efficient variant of IMM. We were finally able to implement it for the toy example, and our results are summarized in a comment to the general rebuttal. Please feel free to consult it as you make your final recommendations.
Sincerely,
Authors | Summary: The authors introduce the problem of “Induced Model Matching” where there exists a small and restricted model that only takes into account some of the features and is able to predict relatively well the label given these features. The key question of this paper is how one can leverage such a small model when training a full-feature, larger model. The authors answer this question by providing an algorithm which aims to match the restricted features version of the large model to the restricted features model. The authors suggest a regularization term to the loss, called IMM, which accomplishes this. The authors present a toy experiment of regression as well as a larger-scale experiment with language, showing that their method is able to perform better when using the IMM regularization.
Strengths: * This is a very interesting topic, and seems like it could be useful in real-world scenarios. The idea appears novel and also is an interesting angle when compared to knowledge distillation.
* It is very well-analyzed, and thoroughly explains how the method compares to existing methods.
* The authors provide a clear presentation of their ideas as well as clear definitions. It is also nice to also have included a glossary in the appendix. Overall the paper is very clear to follow and precise about definitions.
* The experiments show that, indeed, the proposed regularization term benefits the large model.
Weaknesses: * As the authors mention, it is difficult to compute the regularization term. The computational cost is a drawback, although having increased performance is still a nice result. Given the computational cost, this brings into question when this method becomes useful in practice. I will leave my thoughts on this in the questions section.
* It would be interesting to include further analysis regarding how the restricted model’s performance affects this method.
Technical Quality: 3
Clarity: 3
Questions for Authors: * It might be realistic to assume that $\hat{P}$ has some loss $\varepsilon$. Do you have any insights as to how one might expect IMM to degrade as a function of $\varepsilon$?
* Did you consider the setting in which there are many restricted feature models? How do you think your setting can be generalized even further? Do you think there are any settings in which using IMM regularization might hurt the model? For example, if the task is recalling something from a long time ago in the context, it might seem reasonable to assume this method would not work. Generally speaking, what kinds of problems do you think your method is best suited for? Information theoretically, there might be examples in which having some base solver might make the overall problem easier, but as mentioned, there might also be scenarios in which this method actually harms the training of the overall model.
* Just for my understanding, are you assuming a fixed-context length model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for deeply understanding and appreciating our paper. We address some of the points raised:
+ **Computational cost** — Our general attitude is: given a fixed amount of data and a feature-restricted model, how can we do the most with it? We are thus mostly concerned with statistical performance; however, we acknowledge that computational performance is critical for applicability. In the general rebuttal, we propose approaches to more radically overcome computational hurdles in the future.
+ **When $\hat P$ is of lower quality / how this quality affects things** — We address this experimentally. Due to lack of space, we moved this to Appendix D4 (Figures 6 and 7). There, we artificially weaken the models in the logistic regression and RL experiments. (Additionally, the Kneser-Ney bigram in language modeling is very good, but it certainly is not the ideal bigram of English. We could also do an ablation by worsening it.) The takeaway is that, by tuning $\lambda$, IMM always helps and never hurts, even with good but suboptimal restricted models, and its gains are commensurate with the quality of the model.
> We do not yet have a theoretical analysis of restricted-model quality vs. benefit. However, the insight of why this works is that the dual interpretation of $\lambda$ is as constraining the learned model to have its induced version $\overline{Q}$ in a $\delta$-ball around $\hat P$, with larger $\lambda$ meaning smaller $\delta$. If $\hat P$ is $\epsilon$-away from the true $\overline{P}$, we need to make $\delta\geq \epsilon$, to make sure that we're not harming the learned model by keeping it artificially away from the truth. When the model is high quality ($\epsilon$ small), we can thus make $\delta$ small, and thus $\lambda$ can be large. However, when the quality is low ($\epsilon$ large) then we need to make $\delta$ large too, which means that $\lambda$ has to be small, and therefore the relative benefit from IMM diminishes.
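To spell out the duality mentioned above (a sketch; $D$ denotes the divergence used in the IMM loss, and the equivalence is the standard Lagrangian correspondence between a constraint level $\delta$ and a multiplier $\lambda$):

```latex
\min_{Q}\ \mathrm{CE}(Q)\quad \text{s.t.}\quad D\!\left(\hat{P}\,\middle\|\,\overline{Q}\right)\le \delta
\qquad\Longleftrightarrow\qquad
\min_{Q}\ \mathrm{CE}(Q)+\lambda\, D\!\left(\hat{P}\,\middle\|\,\overline{Q}\right)
```

with larger $\lambda$ corresponding to smaller $\delta$.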
+ **Did you consider many restricted-feature models** — We did think about this, but we haven't worked on it yet. Similarly to multi-task learning, we imagine that adding multiple IMM losses, each for a different restriction, could be able to handle this.
+ **Other generalizations** — One setting we contemplated to try IMM in is in building physics-informed models. A closed-form physics model (e.g., climate prediction) can be thought of as a restricted model, because it often relies on very specific features. A full-featured model could use many more features than the physics model. IMM can be used as a method to incorporate the physics model, by making sure that the full-featured model's induced version matches the physics model. Another generalization that we're studying is to take IMM beyond feature-restriction, to also cover task-restriction: incorporating models that can perform a sub-task very well.
+ **Settings where IMM would not help** — As we have noted in Appendix D4, if the restricted model is bad and $\lambda$ is not tuned properly, IMM could hurt. However, let us assume that the restricted model is good, perhaps even the ideal restricted model. In this case, we speculate that IMM can only help. It may be counterintuitive, but even if the restricted features are uninformative for the task (as in your example), IMM can help. Indeed, if the task is impossible to perform with the restricted features, then the best restricted model will be no better than chance. IMM will try to make the learned induced model mimic this, and perform no better than chance too. We believe this will have the effect of informing the full model's training that the restricted features are pointless, which is useful information! Without this guidance, the full model could waste resources/data to discover this fact. In contrast, with this guidance, it can search the subset of hypothesis space that does not use the restricted features. As this is a smaller space, IMM can be thought of as regularizing the learning even in this case. Therefore, we believe our method is suited for any problem in which we can have a reasonably accurate approximation of the restricted model.
+ **Are we assuming a fixed context length model** — The full context can have variable length, however, we are assuming that the short context has a fixed length. That said, this is mostly about how we're formalizing this in the paper. The methodology itself can work with either, as long as there is a consistent way to split each full context into a short and extended context.
We hope these address your points. If there is anything else we could elucidate, please don't hesitate to ask us during the discussion period. We know you've already been very generous with your score, but if you could consider a higher score, AC's decision would be easier, and we would be extremely grateful! Thank you again for your time and effort.
---
Rebuttal 2:
Title: Did we address your points?
Comment: Dear Reviewer,
Thank you again for taking the time to review our paper. We understand that life is busy and that reviewing can be hectic. We believe that we addressed all the points that you raised.
Additionally, we would like to bring your attention to the fact that (in the general rebuttal) we proposed a formulation for making IMM computationally efficient, and that we just finished implementing it successfully for the logistic regression experiment. We hope that you will also take that into consideration when making your final recommendation.
If we did address all your points, we very much hope that you will raise your score to reflect it. It would help us get closer to sharing this work with the community, and we would appreciate that greatly!
Thank you again.
Sincerely,
Authors | Summary: - Algorithm: This paper proposes a framework for how a good but restricted feature model, e.g. $\bar{P}(y \mid x_1)$, can be used as guidance when training a full-feature model, e.g. $Q(y \mid x_1, x_2, x_3...)$.
\
Instead of the knowledge-distillation objective from weak teachers, which adds a regularizer directly comparing the probabilities of the restricted-feature model $\bar{P}$ and the full-feature model $Q$, IMM proposes comparing $\bar{P}$ with $Q$ marginalized over the other features
\
$$\mathsf{Reverse\text{-}KL}: \quad \lambda \sum_{x_1, x_2, x_3} \pi(x_1, x_2, x_3) \sum_y \bar{P}(y \mid x_1) \log \frac{1}{Q(y \mid x_1, x_2, x_3)}$$
$$\mathsf{IMM}: \quad \lambda \sum_{x_1, x_2, x_3} \pi(x_1, x_2, x_3) \sum_y \bar{P}(y \mid x_1) \log \frac{1}{\hat{Q}(y \mid x_1)}, \text{ where } \hat{Q}(y \mid x_1) = \sum_{x_2, x_3} \pi(x_2, x_3 \mid x_1)\, Q(y \mid x_1, x_2, x_3)$$
- The marginalization can be quite expensive in practice, but they demonstrate (in small RL and Language tasks) that they are able to approximate the marginalization efficiently by sampling. The single-sample IMM objective is connected to noising/reverse-KD.
- They mathematically show that in the infinite-data regime, with the perfect true restricted distribution $\bar{P}(y \mid x_1)$ available and the true model $P(y \mid x_1, x_2, x_3)$ in the hypothesis class of $Q$, the IMM objective always recovers $P$. On the other hand, reverse KL and noising cannot.
- Empirically, IMM always does better than with no-IMM, but the performance difference is especially large in data-limited regimes.
Strengths: While the objective may still be too computationally expensive for very large-scale tasks (i.e., sufficiently sampling a suffix tree, high inference cost), the paper is thought-provoking and well-written/motivated. I would recommend it for acceptance.
Weaknesses: - One weakness of the paper is obviously the small scale of most of the experiments (learning policies over 11x11 grid, subset of GLUE, toy linear model). But the experiments sufficiently demonstrate the proof of concept.
- It would also be nice to include some settings where the feature-restricted model is not close to optimal as an ablation study.
- In Figure 1, why is noising able to achieve almost the same performance as IMM starting from around 5% dataset size, if noising is similar to single-sample IMM? Generally, what is the importance of using more samples when marginalizing Q for improved performance? Does performance strictly increase with increasing samples?
Technical Quality: 4
Clarity: 4
Questions for Authors: - Is there any practical importance to the infinite-data regime analysis? Are there circumstances where training with the IMM objective harms performance compared to just cross-entropy with increasing data?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for deeply understanding and appreciating our paper. We address some of the points raised:
+ **Scale of the experiments** — The computational overhead of the method, and the fact that we didn't have the infrastructure for large models, meant that we dedicated ourselves to simple, reproducible benchmarks to demonstrate the merit of IMM. (Many of the experiments stem from our original goal of understanding the underpinnings of the practice of noising.) In the general rebuttal, we propose approaches to more radically overcome computational hurdles in the future.
+ **Suboptimal feature-restricted models** — We address this! We do it experimentally in Appendix D4 (Figures 6 and 7), where we artificially weaken the models in the logistic regression and RL experiments. (Additionally, the Kneser-Ney bigram in language modeling is very good, but it certainly is not the ideal bigram of English.) The takeaway is that, by tuning $\lambda$, IMM always helps and never hurts, even with good but suboptimal restricted models, and its gains are commensurate with the quality of the model.
+ **Noising vs. IMM at small dataset sizes in Figure 1** — High-level explanation: in this regime, IMM needs to marginalize/induce the model accurately, and it can't do that at _extremely_ small data sizes. Lower-level explanation: although noising is a biased version of IMM, it also has less variance, because it doesn't need the induced model. With very little data, that's useful. This explanation also suggests a possible best-of-both-worlds solution, by performing a bias-variance tradeoff. This is something we've thought about but not yet investigated, since the issue only surfaces at such extremes. The performance of IMM increases with more samples for marginalization, because the variance is reduced. However, the returns diminish, and we find that k=5 to 10 samples are sufficient to get good performance.
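To illustrate the role of k, here is a toy tabular sketch (the sizes, names, and the uniform extended-context distribution are all invented for illustration) of the k-sample Monte Carlo estimate of the induced model and how its error shrinks with more samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete setting: V symbols, contexts split into (short, extended).
V = 4
logits = rng.normal(size=(V, V, V))             # hypothetical full-model table Q[y, short, ext]
Q = np.exp(logits) / np.exp(logits).sum(axis=0)  # normalize over y (axis 0)

def induced_estimate(short, ext_samples):
    """k-sample Monte Carlo estimate of the induced model
    Qbar(y | short) = E_ext[ Q(y | short, ext) ]."""
    return Q[:, short, ext_samples].mean(axis=1)

def induced_exact(short):
    """Exact induced model under a uniform extended-context distribution."""
    return Q[:, short, :].mean(axis=1)

short = 2
for k in (1, 5, 50, 5000):
    ext = rng.integers(0, V, size=k)
    err = np.abs(induced_estimate(short, ext) - induced_exact(short)).max()
    print(f"k={k:5d}  max error={err:.4f}")
```

The Monte Carlo error decays roughly as $1/\sqrt{k}$, which is consistent with the diminishing returns beyond k ≈ 5–10 mentioned above.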
+ **Infinite data analysis** — This analysis is just an analytical tool to contrast IMM and noising, and to reveal the latter's shortcomings. If one truly had infinite data, neither of those techniques would be necessary. The only circumstance where IMM will hurt is if the target model is very wrong and the $\lambda$ is not tuned to account for it. (If the target model is correct, $\lambda$ can be arbitrary.) In contrast, the only circumstance where noising/reverse-KD will _not_ hurt is if the target model is correct and the $\lambda$ shrinks with more data. This is what we do in Figure 1 (i.e., for noising, we use the high quality feature-restricted model and we find the best $\lambda$ at each data size). Despite this, the benefits of noising disappear quickly with increasing data size.
We hope these address your points. If there is anything else we could elucidate, please don't hesitate to ask us during the discussion period. We know you've already been very generous with your score, but if you could consider a higher score, AC's decision would be easier, and we would be extremely grateful! Thank you again for your time and effort.
---
Rebuttal 2:
Title: Thanks, one more question
Comment: Thank you for the clarification! I think there was a misunderstanding regarding my question about Noising versus IMM.
The authors provide intuition for why Noising does better than IMM at small dataset sizes, but rather my question was why the gap between Noising and IMM closes so quickly. By 5+% dataset size, the performance gap seems to have closed to around 3%, but my understanding is that IMM is much more computationally expensive. I wonder whether there are circumstances where IMM has any gains over cheaper methods trained on a little bit more data or whether IMM generally has any utility in any settings with reasonable amounts of data. I'm also not certain how the authors' explanation aligns with Figure 1 observations. Noising seems to strictly be doing worse than IMM for any dataset size smaller than 5% and there seems to be no dataset size where IMM does worse than Noising.
I will most likely keep my score as 7, if the authors would like to devote more time to answering other reviewers' questions.
---
Rebuttal Comment 2.1:
Title: Answer (1/2)
Comment: We misunderstood your question to refer to the only statistical advantage that noising has, which is smaller variance at small data sizes. We sincerely apologize for this. We now address all your related questions, point by point.
> In Figure 1, why is noising able to achieve almost the same performance as IMM starting from around 5% dataset size, if noising is similar to single-sample IMM? / [...] my question was why the gap between Noising and IMM closes so quickly. By 5+% dataset size, the performance gap seems to have closed to around 3% [...]
The gap narrows in Figure 1 because we are being very favorable to noising. Specifically, we are decaying $\lambda$ (the amount of noising) optimally with increasing data. This is _necessary_ for noising, for the same reason as in Proposition 5.1: even if the target model is perfect, noising tracks it incorrectly, so without decaying its influence it would not only fail to narrow the gap, but would in fact derail the learned model. Decaying $\lambda$ is also acknowledged as critical in the reverse knowledge-distillation literature (see for example Sec. 3 of Qin et al. 2021). However, tuning $\lambda$ is _optional_ for IMM, thanks to $\hat Q$ (with more extended-history samples) accurately tracking the target (see the flat curves in the right column of Figure 4 in Appendix D1.1).
To fully convince you, we re-ran the experiment with fixed $\lambda=1.5$ (optimal at data size 5). The results are below. IMM maintains performance comparable to Figure 1, whereas noising experiences a widening gap, and soon underperforms even the baseline.
| Dataset Size | Baseline | Noising | IMM | IMM-Noising Gap |
|--------------|----------------------|---------------------|---------------------|-----------------|
| 5 | 73.14 +16.17/-14.53 | 76.85 +18.58/-13.15 | 79.96 +14.99/-12.38 | 3.11 |
| 10 | 84.17 +13.51/-10.86 | 84.37 +7.40/-7.63 | 88.65 +9.65/-7.68 | 4.28 |
| 15 | 89.86 +8.86/-6.80 | 86.17 +6.17/-5.53 | 92.52 +6.19/-4.81 | 6.35 |
| 20 | 92.35 +7.35/-5.32 | 86.99 +5.69/-5.37 | 94.30 +4.63/-3.70 | 7.30 |
| 30 | 94.94 +3.97/-3.43 | 88.68 +4.35/-4.32 | 95.69 +3.39/-2.64 | 7.01 |
| 40 | 96.33 +3.00/-2.33 | 89.70 +3.73/-3.97 | 96.59 +2.59/-2.08 | 6.89 |
| 50 | 97.14 +2.47/-2.20 | 90.78 +3.78/-3.55 | 97.34 +2.34/-1.99 | 6.56 |
> Generally, what is the importance of using more samples when marginalizing Q for improved performance? Does performance strictly increase with increasing samples?
The above answer gives one clear benefit of a more accurate $\hat Q$: less sensitivity to tuning $\lambda$. However, since tuning $\lambda$ is possible, the primary advantage is significant and consistent gains, even against optimally-tuned noising. The gains in Figure 1 may seem small in terms of absolute percentages, but they are significant, as the IMM-noising gap is larger than the noising-baseline gap most of the time, and often considerably so. We can quantify these gains in terms of data-size increase: e.g., matching the accuracy of IMM at data size 15 would require augmenting the dataset to 18 with tuned noising (a 20% increase) or to 20 without noising (a 33% increase). These gains are also more consistent, as the variance of IMM's accuracy is smaller than the variance of noising's accuracy most of the time (except, as noted, at small data sizes).
IMM strictly improves when we use more extended history samples, but with diminishing returns, allowing us to cap to $k=10$ samples at most.
---
Reply to Comment 2.1.1:
Title: Answer (2/2)
Comment: > [...] but my understanding is that IMM is much more computationally expensive.
We address the computational aspect of IMM in the general rebuttal. In essence, the alternative that we propose has the potential to make IMM computationally equivalent to noising.
> I wonder whether there are circumstances where IMM has any gains over cheaper methods trained on a little bit more data [...]
Our general premise is that we have what data we have, and no more. Noising and IMM both have this data and the same (feature-restricted) target model. (The sampling to estimate $\hat{Q}$ is done with the same data set.) With these exact same information resources, IMM always statistically outperforms noising. If the goal is to make the most out of what information we have, or when acquiring that little bit more data is expensive, IMM is the way to go.
> [...] or whether IMM generally has any utility in any settings with reasonable amounts of data.
This question can be asked of any variant of data augmentation, as given enough data there is no need to augment it. The question is: in what regime is any such technique useful? In Section 6.1, we mention that IMM appears most successful when data size is comparable to the number of parameters. This agrees with the general rule of thumb of regularization benefiting high-dimensional regimes. The intuition is that the cross-entropy loss does not sufficiently localize the model, so we get better localization when IMM constrains the induced model, which is a low-dimensional projection. Since most modern models (even LLMs) operate in such regimes, it is not a stretch to expect IMM to be widely beneficial. (Otherwise, it would be also hard to imagine the many techniques suggesting using N-grams to augment LLMs having any utility either.)
---
We are convinced that the new perspective that IMM provides, the deeper understanding of noising and reverse knowledge-distillation that it offers, and the consistent statistical edge that it has with limited data, make it worthwhile to share with the community at NeurIPS. Once the idea is out there, we are certain that our effort and that of others can inevitably make IMM more computationally efficient and scalable.
You have been very generous with your assessment of our work. If you believe 7 is the right score, we humbly thank you and only hope that your position can help convince your colleagues to also support us at the same level. | Rebuttal 1:
Rebuttal: We thank you all for your insightful and positive reviews. We are encouraged by your appreciation of our work and for your constructive criticism. We are lucky to have received such high quality feedback.
We have individually addressed all the points that you've raised. There is, however, one common theme that arose across the reviews: **whether IMM can be made more computationally efficient.**
+ While the paper itself is focused mostly on the statistical aspect of the problem, we recognize that scalability of IMM is critical for adoption. As such, we have been working on this issue, and believe that there is an elegant solution that could potentially achieve this scalability. Unfortunately, this work is not complete yet and we don't have experiments paralleling those in the paper. However, we are happy to share with you our insights.
+ The main bottleneck with IMM is the need to calculate the learned induced model. In the paper, we are doing this by sampling $k$ long histories and averaging the outputs. This has certain advantages, such as giving us a low-variance estimate of the gradient. However, it can be computationally expensive, especially in recurrent models which require new passes over the substituted histories.
+ The following alternative idea is similar to the sequentialization aspect covered in Appendix C.2, Equation (19). Using $\overline{x}$ to refer to the short history and $\underline{x}$ to refer to the long history, we can write the idealized IMM loss as ([see this figure for a reference to this new streamlined notation](https://i.imgur.com/NwQgOPV.png)):
$ - \sum_{\overline{x}} \pi(\overline{x}) \sum_y \overline{P}(y|\overline{x}) \log \overline{Q}(y|\overline{x})$
where the learned induced model is:
$ \overline{Q}(y|\overline{x}) = \sum_{\underline{x}} \pi(\underline{x}|\overline{x})\, Q(y|\underline{x},\overline{x})$.
The gradient of this IMM loss then becomes:
$ - \sum_\overline{x} \pi(\overline{x}) \sum_y \overline{P}(y|\overline{x}) \frac{\sum_\underline{x} \pi(\underline{x}|\overline{x}) \nabla Q(y|\underline{x},\overline{x})}{\overline{Q}(y|\overline{x})}$
$ = - \sum_\overline{x} \sum_\underline{x} \pi(\overline{x}) \pi(\underline{x}|\overline{x}) \sum_y \overline{P}(y|\overline{x}) \frac{ \nabla Q(y|\underline{x},\overline{x})}{\overline{Q}(y|\overline{x})}$
The empirical version of this gradient is:
$ - \frac{1}{n} \sum_t \sum_y \overline{P}(y|\overline{x}_t) \frac{ \nabla Q(y|\underline{x}_t,\overline{x}_t)}{\overline{Q}(y|\overline{x}_t)}$
As you can see, there is no sampling of any histories in this expression! It can even be rewritten in the form of cross-entropy, making the connection with noising and reverse-KD even more apparent, as it introduces a correction factor:
$ - \frac{1}{n} \sum_t \sum_y \overline{P}(y|\overline{x}_t) \underbrace{\frac{Q(y|\underline{x}_t,\overline{x}_t)}{\overline{Q}(y|\overline{x}_t)}}_{\textsf{correction}} \nabla \log Q(y|\underline{x}_t,\overline{x}_t)$
The fascinating thing about this approach is that it only has a constant factor overhead on the baseline of cross-entropy training. There are two caveats, however:
+ This has a higher variance than the sampling approach of the paper. Reducing the variance is challenging, and we are investigating it using momentum-based approaches.
+ It requires the maintenance of an induced-model estimate. We can solve this relatively easily for the bigram case, by accumulating prediction vectors into a matrix.
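As a sanity check on the derivation above (an editor-added illustration, not part of the original rebuttal), the following toy NumPy script verifies that the correction-factor form of the gradient matches a numerical gradient of the idealized IMM loss. All sizes, distributions, and variable names (`pi_s`, `pi_l`, `P_bar`, `z`) are made up for illustration; short/long histories are simply indexed categorically.

```python
import numpy as np

rng = np.random.default_rng(0)
S, Lh, Y = 2, 3, 4                                   # toy sizes (assumed)
pi_s = np.array([0.4, 0.6])                          # pi(short history)
pi_l = rng.dirichlet(np.ones(Lh), size=S)            # pi(long | short)
P_bar = rng.dirichlet(np.ones(Y), size=S)            # target P(y | short)
z = rng.normal(size=(S, Lh, Y))                      # logits of Q(y | long, short)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def imm_loss(z):
    q = softmax(z)
    q_bar = np.einsum('sl,sly->sy', pi_l, q)         # induced model Q(y | short)
    return -(pi_s[:, None] * P_bar * np.log(q_bar)).sum()

# correction-factor gradient:
# -sum pi(s) pi(l|s) P(y|s) [Q / Q_bar] grad log Q, contracted over y
q = softmax(z)
q_bar = np.einsum('sl,sly->sy', pi_l, q)
w = pi_s[:, None, None] * pi_l[:, :, None] * P_bar[:, None, :] * q / q_bar[:, None, :]
grad = -(w - w.sum(-1, keepdims=True) * q)           # grad log softmax = delta - Q

# compare against a central-difference numerical gradient of the idealized loss
num = np.zeros_like(z)
for idx in np.ndindex(z.shape):
    zp, zm = z.copy(), z.copy()
    zp[idx] += 1e-6
    zm[idx] -= 1e-6
    num[idx] = (imm_loss(zp) - imm_loss(zm)) / 2e-6

assert np.allclose(grad, num, atol=1e-6)             # expressions agree
```

The check confirms the algebra: the correction-weighted cross-entropy gradient is exactly the gradient of the idealized loss, with no history sampling needed.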
We are sharing this insight with you in the hope of convincing you that there are indeed avenues to making IMM computationally efficient. We don't know whether we'd be able to share with you parallel experiments by the end of the discussion period, but we hope that you will take this into consideration and become even more confident in the merits of this work. Thank you very much for all your time and effort!
*Note: The attached PDF and the links in the rebuttals are all anonymized and contain the same two figures.*
Pdf: /pdf/8d6db36617c2462c9cc8c1564bb01c6fe98556ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stochastic Concept Bottleneck Models | Accept (poster) | Summary: Focusing on concept bottleneck models, this paper extracts concept dependencies with a multivariate normal distribution and derives an intervention strategy based on the confidence region of the normal distribution that incorporates concept correlations for better interventional effectiveness.
Strengths: 1. This paper is organized and written well.
2. Concept bottleneck model is an important direction for xAI and even generalizability.
Weaknesses: 1. Experiment results show marginal improvement over existing methods.
2. The novelty and contribution of this paper are limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Intervention by setting values can be difficult in some cases; the example given by the paper, "intervene on the CBM by setting its value to 1", is an extreme case. Humans are not good at estimating probabilities.
2. "do not use the intervened-on concepts to update their remaining concept predictions" may not necessarily be a bad thing. As in many real-world cases, there are exceptions to rules and patterns. Nothing is absolute. Universally "extend the concept predictions with the modeling of their dependencies" may be problematic. Are these exceptions handled by confidence region?
3. What is the novelty comparing with E. Kim et al. (2023)? just relax the assumption of diagonal covariance matrix?
4. Will the proposed method suffer leakage?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and questions! Below is our point-by-point response.
> Intervention by setting values can be difficult in some cases; the example given by the paper, "intervene on the CBM by setting its value to 1", is an extreme case. Humans are not good at estimating probabilities.
Indeed, specifying an exact value for interventions is a challenging task and an active research problem [8]. Still, a major benefit of CBMs is their ability for interventions via human-model interactions [2,3], as it enables domain experts to correct mistakes, so that more accurate target predictions can be made at the end. This is important in domains like healthcare, where human-model interactions have a significant impact on trust and patient outcomes.
Thus, we believe our research in this direction is important while acknowledging that the complementary problem of the human factor within interventions is not solved.
Notably, while most CBM methods require users to intervene with exact values, SCBMs solve an optimization problem to arrive at a “reasonable” probability estimate. That is, not only do we not ask humans to specify probabilities, but we even solve an optimization problem to determine probabilities given hard concept values. Naturally, if desired by the user, this optimization routine can be omitted. Nevertheless, we will mention this problem in the limitation section.
> "do not use the intervened-on concepts to update their remaining concept predictions" may not necessarily be a bad thing. As in many real-world cases, there are exceptions to rules and patterns. Nothing is absolute. Universally "extend the concept predictions with the modeling of their dependencies" may be problematic. Are these exceptions handled by confidence region?
The goal of this work is to enhance the effect of individual interventions and make the human-model interactions more scalable. This removes the burden of intervening on multiple correlated concepts, leading to improved performance. SCBMs achieve this by exploiting the concept dependencies observed during training. As such, we do make the assumption that the concept structure in the training set holds at inference time. If this assumption is violated, one could always compute the marginal instead of the conditional distribution at intervention time, thereby ignoring the concept dependencies.
> What is the novelty comparing with E. Kim et al. (2023)? just relax the assumption of diagonal covariance matrix?
While both works use a normal distribution, we believe there are substantial differences between our manuscript and the ProbCBM [4], as we outline below. We will make them more clear in the revised version of this manuscript.
* ProbCBMs build upon CEMs. That is, the mean and variance are learned over the concept *embeddings* rather than modeling the concepts themselves. Even if they relaxed the assumption of a diagonal covariance matrix, in their current form, they would only capture the covariances of the embedding dimensions *within* a given concept but not the dependencies *across* concepts. Thus, we would not expect that such a relaxation would improve ProbCBMs’ intervention performance, and as such, in the context of interventions, we do not consider them to be very different from CEMs.
* In a similar vein, please note that the loss of ProbCBMs contains a KL term, which “prevents the variances from collapsing to zero” [4]. On the other hand, SCBMs do not require such a term, as we’re modeling actual non-zero concept dependencies. To draw an analogy, we consider ProbCBMs to work similarly to VAEs [5], where the normal distribution acts as a regularizing prior (via the KL term) on the concept latent space. On the other hand, SCBMs leverage the normal distribution to learn the structure of the data itself, which in our case, are the concepts.
* As pointed out by the reviewer, a major difference between the works is that SCBMs do not assume a diagonal covariance matrix. However, this generalization is non-trivial and poses multiple technical challenges. As such, we parameterize the covariance via its Cholesky decomposition, derive a maximum-likelihood-based loss different from [4], apply the Gumbel-Softmax reparameterization to perform joint training, and introduce the Lasso regularization. Lastly, a significant contribution is the introduction of the novel intervention strategy based on the confidence region (see Eq. 8) that leverages the learned concept dependencies while fulfilling the posed desiderata.
* While ProbCBMs mainly focus on capturing concept and target uncertainty, SCBMs focus on interventions. As we have shown in Fig. 2 of the manuscript, we outperform by a margin of up to 10 percentage points in some cases. Especially when comparing to CEMs, whose performance we believe to be a good proxy for ProbCBMs with respect to interventions, there is a major difference in intervention effectiveness.
* A seemingly minor but, in fact, very important difference is that ProbCBMs work with concept embeddings as introduced by CEMs. On the other hand, SCBMs use hard, binary concepts as bottleneck. We elaborate on the importance of this difference below.
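For readers unfamiliar with the Gumbel-Softmax reparameterization mentioned above, the sketch below shows the standard binary-concrete relaxation for Bernoulli concepts. This is generic background on the technique, not the SCBM implementation; the function name, temperature, and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_bernoulli(logits, tau, rng):
    """Binary Concrete / Gumbel-Softmax sample; differentiable in `logits`."""
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log1p(-u)        # Logistic(0, 1) noise
    return 1.0 / (1.0 + np.exp(-(np.asarray(logits) + logistic_noise) / tau))

logits = np.array([2.0, -2.0, 0.0])
s = relaxed_bernoulli(np.tile(logits, (50_000, 1)), tau=0.5, rng=rng)

# Thresholding at 0.5 recovers exact Bernoulli(sigmoid(logit)) draws,
# since sigmoid((l + eps) / tau) > 0.5  <=>  l + eps > 0.
freqs = (s > 0.5).mean(axis=0)
assert np.allclose(freqs, 1 / (1 + np.exp(-logits)), atol=0.01)
```

Because the sample is a smooth function of `logits`, gradients flow through the concept samples during joint training; as `tau` shrinks, samples approach hard 0/1 concepts.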
> Will the proposed method suffer leakage?
Leakage is an important problem in the CBM literature, as it damages interpretability. While we do not explicitly optimize against leakage, we have made the deliberate design choice to characterize the concept bottleneck with hard, binary concepts to avoid leakage as much as possible [6]. We believe that leakage is the main driver of why CEMs perform suboptimally during interventions. Note that [7] even propose to approximate leakage by performance during interventions. Since ProbCBMs build upon CEMs, we believe that they are more susceptible to leakage than SCBMs.
But of course, our design choices do not fully prevent the occurrence of leakage, which is why we mention it in the limitations section.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer 9xQm,
please let us know if we could address your concerns with our rebuttal or if there are other open questions remaining. We would be grateful if the reviewer could acknowledge the rebuttal. Thank you in advance! | Summary: This paper introduces a novel concept dependency modelling scheme via an explicit distributional parameterization based on multivariate Gaussian distributions. This allows for capturing the dependencies between different concepts, while giving rise to an effective intervention strategy.
The experimental results vouch for the efficacy of the proposed approach, which allows for end-to-end training, and exhibits on-par or improved performance before and after interventions.
Strengths: This constitutes a well-motivated approach based on some probabilistic arguments for capturing and examining concept dependency. The overall idea is simple and easy to follow.
Weaknesses: Even though the proposed approach uses either an amortized or global formulation, the complexity of learning the covariance matrix either way surely introduces a lot of complexity. This is especially true in cases where the number of concepts is large, a common occurrence in complex datasets with many classes such as CIFAR-100 and ImageNet (not explored in this work). This complexity is further compounded by the Monte Carlo sampling scheme; commonly, 1-10 samples are enough, but the authors report the usage of 100 MC samples, which would greatly slow down the training process.
In this context:
1) What is the complexity compared to a standard diagonal approach?
2) It would be important to assess the complexity and performance in the case of larger datasets like ImageNet. The authors use the CLIP generated data of [1] for CIFAR-10, so applying the approach to CIFAR-100 and ImageNet should be quite easy.
3) In my experience with MC-based methods, I find Table 2 a bit hard to believe, especially with the 100 MC samples that the authors report. Can the authors provide per-epoch wall-time measurements for each method? Even for just Hard CBM and Amortized SCBM.
4) Having access to the code, so that the experiments could be validated, would also be important.
5) How does the number of MC samples affect the complexity and performance? I find an ablation study to be necessary here.
6) How many GPUs did the utilized cluster have?
The considered Bernoulli formulation is very similar to the work in [2], which they call concept discovery. It constitutes a data-driven Bernoulli posterior for concept selection and the formulation is very similar. I find this method more appropriate compared to other approaches mentioned in the related work and the experimental evaluation, and some discussion/results should be included in the main text.
In Table 1, the authors report the concept accuracy which I assume is based on a binary accuracy. Recent works have suggested that maybe this is not a good metric in the context of interpretability, since most concepts are sparse and this can lead to misleading results [3]. Can the authors report the Jaccard similarity between the ground truth and the obtained concepts for all the methods?
[1] Oikarinen, T., et al., Label-free concept bottleneck models, ICLR 2023
[2] Panousis, K. P., et al., Sparse Linear Concept Discovery Models, ICCV CLVL 2023
[3] Panousis, K. P., et al., Coarse-to-Fine Concept Bottleneck Models, Arxiv 2024
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the Weaknesses section.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors mention the limitations in the dedicated section. The main limitation of the proposed approach is complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. In our response below, we address the remaining open points.
> What is the complexity compared to a standard diagonal approach?
As the reviewer rightly points out, modeling dependencies comes at a complexity overhead cost. In terms of memory complexity, SCBMs scale quadratically with the number of concepts, while a diagonal approach would scale linearly. However, the sampling itself is very fast. We use the Cholesky decomposition $\Sigma = L L^T$; thus, sampling from the multivariate distribution is done via $Lx$, where $x$ is a vector of $C$ samples from a univariate standard normal distribution. This is in stark contrast to the autoregressive baseline, which requires $C$ sequential passes through MLPs.
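To make the cost argument concrete, here is a minimal NumPy sketch of Cholesky-based sampling of correlated concept logits (a generic illustration, not the authors' code; the covariance, mean, and sizes are arbitrary). All Monte Carlo samples come from one triangular matrix product, with no sequential network passes.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 5                                        # number of concepts (toy size)
A = rng.normal(size=(C, C))
Sigma = A @ A.T + C * np.eye(C)              # some positive-definite covariance
mu = rng.normal(size=C)                      # predicted concept logit means

L = np.linalg.cholesky(Sigma)                # Sigma = L L^T, computed once
M = 100_000                                  # MC samples
x = rng.standard_normal((C, M))              # C iid standard-normal draws per sample
samples = mu[:, None] + L @ x                # all M samples in one matrix product

# the empirical covariance of the samples recovers Sigma
emp_cov = np.cov(samples)
rel_err = np.linalg.norm(emp_cov - Sigma) / np.linalg.norm(Sigma)
assert rel_err < 0.05
```

The per-sample cost is a single $O(C^2)$ matrix-vector product, which matches the rebuttal's contrast with an autoregressive baseline that needs $C$ sequential MLP passes per sample.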
> It would be important to assess the complexity and performance in larger datasets like ImageNet. The authors use the CLIP-generated data of [1] for CIFAR-10, so applying the approach to CIFAR-100 and ImageNet should be quite easy.
We present the result on the CIFAR-100 dataset with 892 concepts obtained from [9] in Figure 1 of the rebuttal PDF to showcase the scalability of SCBMs. Additionally, we present the wall time per method in Table 2. As can be seen, the CEM and AR baselines take a long time to compute when increasing the concept set size. After 4 days, we are still waiting for the Autoregressive CBM to finish and can, unfortunately, only present their intervention performance up to this point. Therefore, we refrain from computing results on ImageNet with 4751 concepts, as we would be unable to get any results on time.
The additional results underline the efficiency of our method in terms of computational complexity and performance. Notably, the Autoregressive baseline has a negative dip, which is likely due to the independently trained target predictor not being aligned with the concept predictors in this noisy CLIP-annotated scenario. Note that they need to train independently to avoid the sequential MC sampling during training, which would otherwise increase training time significantly, as seen in the test wall times in Table 2. Our jointly trained SCBMs do not have this issue and surpass the baselines.
> … Can the authors provide wall time measurements for the per-epoch time for each method? Having access to the code, so the experiments could be validated would also be important.
We chose 100 MC samples to not disadvantage the Autoregressive CBM, which used 200 MC samples. We provide an ablation for a smaller number of samples in Figure 2 of the supplementary PDF, showing SCBMs remain effective.
In Table 2 of the supplementary PDF, we present the wall time for each method in two datasets with varying concept set sizes. Please note that the sampling itself is extremely fast for the multivariate Gaussian distribution used in SCBMs, and, contrary to e.g., VAEs, the MC samples are only passed through a lightweight target prediction head. This is a significant benefit over the autoregressive baseline, as their procedure consists of sequentially sampling each concept via MLPs, while SCBMs can obtain all samples directly from the specified multivariate Gaussian.
Let us denote by $C$ the number of concepts and $M$ the number of MC samples.
If we analyze the computational complexity by the number of MLP forward passes, then CEMs scale $O(C)$ (2 embeddings per concept), Autoregressive CBMs scale $O(C \times M)$ (1 pass through $C$ MLPs per MC sample), and SCBMs scale $O(M)$ (1 pass through classifier head per MC sample).
For code, we refer to the footnote on page 3 of the originally submitted manuscript for our anonymized repository.
> How does the number of MC samples affect the complexity and performance?
In Figure 2 of the supplementary PDF we provide an ablation for the number of MC samples on CUB. The runtimes differ minimally by ~0.01s per epoch. Results show that SCBMs still perform well with fewer MC samples. Still, a larger amount (e.g. 100) of samples is completely feasible in SCBMs, supported by our fast sampling and forward pass.
> How many GPUs did the utilized cluster have?
The experiments were run on an ordinary HPC platform, mostly consisting of GeForce RTX 2080ti’s. We would like to emphasize that each run uses only a single GPU.
We will adjust the “Resource Usage” paragraph to avoid ambiguity.
> The considered Bernoulli formulation is very similar to the work in [2] … I find this method more appropriate compared to other approaches mentioned in the related work and the experimental evaluation, and some discussion/results should be included in the main text.
We agree that Panousis et al. (2023) and SCBMs share the Bernoulli relaxation for concept modeling. It is important to note that CDMs focus on discovering concepts, while SCBMs assume the presence of a concept-labeled dataset and focus on modeling their dependencies. As such, we believe the two works to be complementary, not substitutional. This is observable by noticing that both cited works by Panousis do not study interventions. We have reported successful results of SCBMs in CIFAR-10 and CIFAR-100, where concept annotations are “discovered” from CLIP. This shows that the concept discovery methods like CDMs can synergize with SCBMs. However, as the focus of this work lies on modeling the concept dependencies and interventions, we believe that autoregressive CBM is more closely related to our task and focus on it.
We will make sure to discuss the shared similarities with CDMs in the main text, as well as outline the combination of both methods as an exciting avenue for future work.
> In Table 1, the authors report the concept accuracy…. Can the authors report the Jaccard similarity?
In Table 1 of the supplementary PDF, we include the Jaccard similarity between ground truth and predicted concepts. The resulting interpretations are in line with the prior reported accuracies. We will add this metric proposed by Panousis et al. (2024) to the manuscript.
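For reference (an editor-added illustrative sketch, not code from the paper), Jaccard similarity between binary concept vectors counts shared positives over the union of positives, which is why it can diverge from plain binary accuracy when concepts are sparse: accuracy is inflated by the many correctly predicted zeros.

```python
import numpy as np

def jaccard(y_true, y_pred):
    """Jaccard similarity for binary vectors: |A ∩ B| / |A ∪ B| over positive entries."""
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return inter / union if union else 1.0

y_true = np.array([1, 0, 1, 1, 0, 0])   # toy ground-truth concepts
y_pred = np.array([1, 0, 0, 1, 0, 1])   # toy predicted concepts

print(jaccard(y_true, y_pred))          # 0.5: 2 shared positives / 4 in the union
print((y_true == y_pred).mean())        # ~0.667: accuracy also credits the 0/0 matches
```

On this toy example the binary accuracy (about 0.67) exceeds the Jaccard similarity (0.5) purely because of matching negatives, illustrating the concern raised in the review.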
---
Rebuttal Comment 1.1:
Comment: Dear reviewer sfUo,
please let us know if we could address your concerns with our rebuttal or if there are other open questions remaining. We would be grateful if the reviewer could acknowledge the rebuttal. Thank you in advance! | Summary: This paper presents a method of performing interventions on Concept Bottleneck Models. The method parametrizes the concepts with Bernoulli distributions and the concept logits with a normal distribution whose mean and variance depend on the input data distribution. The method also compares an amortized covariance, which is instance-based, and a global covariance, marginalized over all samples. Rather than doing the interventions one-by-one as in previous works, this method also modifies a single concept by hand, which in turn modifies other related concepts using the learned covariance. Experiments demonstrate that the method performs competitively against related CBM works that also allow for intervention, and is much faster during inference. During intervention, the model performs competitively against other baselines.
Strengths: The problem presented in this work is well-motivated and is of great importance to the current field of interpretable machine learning. While there are existing works that also formulate the concepts as a known distribution, this method has demonstrated well its advantages from the lens of intervention. Hence, this work has met the bar in terms of novelty and contributions in this area of research. The experiments of this work are also solid and adequate in addressing the claims made. While the performance is sometimes worse than related methods, they are still comparable and is not the main focal point of this work.
Weaknesses: While the overall writing of the paper is clear, Section 3.3 was a bit challenging to follow, especially the part about the confidence region. It is not entirely clear to me what the confidence region is of. As an example, L183-184 “the likelihood-based
confidence region provides a natural way of capturing the region of possible $\mathbf{\eta}’_{\mathcal{S}}$ that fulfil our desiderata”. The confidence of what? What desiderata? I suggest either adding a diagram, a bit more background, a reminder, or an example of what the intervention is trying to achieve, so it more clearly demonstrates the process.
Technical Quality: 4
Clarity: 3
Questions for Authors: Often times we don’t have many labeled samples of fine-grain annotations (like CUB) or the annotation can be noisy (like CLIP annotations). To what extend do these two factors affect the learning of the distributions in your method? Is it true that if the global dependency structure is of poor quality (correlation matrix), then this can severely impact the downstream performance and interpretability?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: A key part of this method is learning a good dependency matrix for each sample or globally. I would expect this requires a lot of samples and can severely impact the performance if the matrix is not representative of the relationships between concepts. This is clearly stated in the work, hence the limitations are sufficiently stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your comments and positive feedback. Please find below our answers to the open points.
> While the overall writing of the paper is clear, Section 3.3 was a bit challenging to follow, especially the part about the confidence region. It is not entirely clear to me what the confidence region is of. As an example, L183-184 “the likelihood-based confidence region provides a natural way of capturing the region of possible $\boldsymbol{\eta}_{\mathcal{S}}$ that fulfil our desiderata”. The confidence of what? What desiderata? I suggest either adding a diagram, a bit more background, a reminder, or an example of what the intervention is trying to achieve, so it more clearly demonstrates the process.
Thank you for raising this point. We will make sure to improve the clarity of this subsection in the manuscript.
Below, we provide the rewritten version of lines 174-188.
*In normal CBMs, an intervention affects only the concepts on which the user intervenes. As such, Koh et al. (2020) set $\eta_i'$ to the 5th percentile of the training distribution if $c_i = 0$ and the 95th percentile if $c_i = 1$. While this strategy is effective for SCBMs too, see Appendix C.3, the modeling of the concept dependencies warrants a more thorough analysis of the *intervention strategy*. We present two desiderata, which our intervention strategy should fulfill.*
*i) $p(c_i | \eta_i') \geq p(c_i | \mu_i)$*
*That is, the likelihood of the intervened-on concept $c_i$ should always increase after the intervention. If SCBMs used the same strategy as CBMs, it could happen that the initially predicted $\mu_i$ was more extreme than the selected training percentile. Then, the interventional shift $\eta'_i - \mu_i$ in Equation 7 would point in the wrong direction. This, in turn, would cause $\boldsymbol{\eta}\_{\setminus \mathcal{S}}$ to shift incorrectly.*
*ii) $|\eta_i' - \mu_i|$ should not be ``too large''*
*We posit that the interventional shift should stay within a reasonable range of values. Otherwise, the effect on $\eta_{\setminus \mathcal{S}}$ would be unreasonably large such that the predicted $\boldsymbol{\mu}_{\setminus \mathcal{S}}$ would be completely disregarded.*
*To fulfill these desiderata, we take advantage of the explicit distributional representation: the likelihood-based confidence region of $\mu_i$ provides a natural way of specifying the region of possible $\boldsymbol{\eta}\_{\mathcal{S}}'$ that fulfil our desiderata. Informally, a confidence region captures the region of plausible values for a parameter of a distribution.
Note that the confidence region takes concept dependencies into account when describing the area of possible $\boldsymbol{\eta}\_{\mathcal{S}}'$. To determine the specific point within this region, we search for the values $\boldsymbol{\eta}\_{\mathcal{S}}'$, which maximize the log-likelihood of the known, intervened-on concepts $\mathbf{c}_{\mathcal{S}}$, implicitly focusing on concepts that the model predicts poorly.*
> A key part of this method is learning a good dependency matrix for each sample or globally. Is it true that if the global dependency structure is of poor quality (correlation matrix), then this can severely impact the downstream performance and interpretability?
Yes, modeling the second central moment naturally induces more variability. As the reviewer rightly states, it is important that the learned dependency structure is somewhat accurate. That is, the benefits of learning the dependencies have to outweigh the difficulty of learning them. This is why we introduced the LASSO-like regularizer, which helps in avoiding overfitting. For the regularizer weight $\lambda_2 \rightarrow \infty$, we would recover the hard CBM (apart from diagonal variance), thus, with a validation set at hand, one can find a suitable value. Note that we also show in App. C.2, that SCBMs are not very sensitive to this $\lambda_2$. The importance of not learning a poor-quality structure is also what led us to not model higher-order central moments, thereby reducing the overfitting potential.
> Often times we don’t have many labeled samples of fine-grain annotations (like CUB) or the annotation can be noisy (like CLIP annotations). To what extend do these two factors affect the learning of the distributions in your method?
Learning a covariance structure adds a layer of complexity, therefore, it is natural to wonder how our method can deal with non-optimal scenarios. We believe with the regularization, discussed in the previous answer, users have good control over the method’s behavior and, therefore, good control over the potential issues you point out. Of course, noisier annotations will require a higher $\lambda_2$ regularization, thus, potentially omitting some signal. However, this tradeoff is unavoidable, and regularizing the parameters is standard practice [1]. Additionally, the proposed intervention strategy aids in preventing wrongful interventions by adhering to the predicted uncertainty.
We have deliberately chosen the datasets in the manuscript to showcase the versatility of SCBMs in such settings. That is, the CUB dataset contains only around 6000 training samples, thus being a rather small dataset, in which SCBMs prevail. To cover the case of noisy concepts, as well as a dataset without fine-grained human annotations, we have chosen the CLIP-annotated CIFAR-10 dataset, in which SCBMs also outperform the baselines. These findings also hold out-of-the-box on the CLIP-annotated CIFAR-100 dataset reported in Figure 1 of the supplementary PDF file, showcasing the scalability of SCBMs to larger concept sets. Thus, we can confidently conclude that SCBMs are effective in capturing the underlying dependencies.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer rLqE,
please let us know if we could address your concerns with our rebuttal or if there are other open questions remaining. We would be grateful if the reviewer could acknowledge the rebuttal. Thank you in advance!
---
Rebuttal Comment 1.2:
Comment: Thank you for the authors' detailed response. The improved writing helped my understanding of the work considerably. | null | null | Rebuttal 1:
Rebuttal: Dear reviewers,
We would like to thank all of you for your thorough reviews and constructive feedback! Below, we summarise our responses to your main concerns, additional results, and changes to be implemented upon acceptance in the revised manuscript.
* We have included experiments on a new large-scale dataset, CIFAR-100 [10], with annotations for 892 concepts coming from VLMs. The results, displayed in Figure 1 of the additional supplementary PDF file, are consistent with the interpretation of SCBMs in previous datasets, showcasing the scalability of our method to a larger number of concepts and classes. For a more detailed discussion, we refer to our response to reviewer sfUo.
* To understand the computational complexity of SCBM, we report the wall times for one training and inference epoch for each of the studied methods in Table 2 of the supplementary PDF. In view of the large-scale dataset introduced in the rebuttal, we report both CUB and CIFAR-100 to provide insights for two different datasets. These results show that SCBMs are significantly more efficient and hence more scalable than prior work for modeling concept dependencies, i.e., Autoregressive (AR), which required longer computational times, especially when scaling the number of concepts. Particularly, AR is slower at inference time, where speed is a key factor as a user will interact with the deployed model. For a more detailed discussion, we refer to our response to reviewer sfUo.
* The experiments on the proposed SCBM were carried out with 100 MC samples. We performed an ablation on this number and show in Figure 2 of the supplementary PDF that SCBMs remain effective with 10 MC samples.
* We include the Jaccard Index as a new metric for measuring concept prediction prior to interventions for all models and datasets in Table 1 of the supplementary PDF. The resulting interpretation is consistent with the previously reported accuracies.
* We further clarified the method with a comprehensive explanation of the proposed intervention strategy via confidence regions, which we will include in the revised version of the manuscript. For more details, we refer to our response to reviewer rLqE.
[1] Tibshirani, Robert. "Regression shrinkage and selection via the lasso." Journal of the Royal Statistical Society Series B: Statistical Methodology 58.1 (1996): 267-288.
[2] Koh, Pang Wei, et al. "Concept bottleneck models." International conference on machine learning. PMLR, 2020.
[3] Shin, Sungbin, et al. "A closer look at the intervention procedure of concept bottleneck models." International Conference on Machine Learning. PMLR, 2023.
[4] Kim, Eunji, et al. "Probabilistic Concept Bottleneck Models." International Conference on Machine Learning. PMLR, 2023.
[5] Kingma, D. P., & Welling, M. “Auto-encoding Variational Bayes.” International Conference on Learning Representations, ICLR, 2014.
[6] Havasi, Marton, Sonali Parbhoo, and Finale Doshi-Velez. "Addressing leakage in concept bottleneck models." Advances in Neural Information Processing Systems 35 (2022): 23386-23397.
[7] Zabounidis, Renos, et al. "Benchmarking and Enhancing Disentanglement in Concept-Residual Models." arXiv preprint arXiv:2312.00192 (2023).
[8] Collins, Katherine Maeve, et al. "Human uncertainty in concept-based ai systems." Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. 2023.
[9] Oikarinen, T., et al., "Label-free concept bottleneck models", ICLR 2023
[10] Krizhevsky, A., & Hinton, G. (2009). “Learning multiple layers of features from tiny images.” Toronto, Ontario: University of Toronto.
Pdf: /pdf/e1c219ef0233c37f6949fba9c0b77f9126c331c9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learn To be Efficient: Build Structured Sparsity in Large Language Models | Accept (spotlight) | Summary: This paper proposes a method to increase the structured sparsity of models through training, called Learning-to-be-efficient (LTE). It introduces a new training loss that guides the model to activate fewer neurons while maintaining original performance. Simultaneously, LTE employs a threshold-based sigmoid routing strategy, allowing flexible expert selection instead of a predefined fixed number. To achieve acceleration, the authors further implement an efficient version based on Triton. Compared to previous sparse acceleration methods, LTE can be applied to activation functions other than ReLU, offering better generality.
Strengths: - The method demonstrates good generality and can be applied to various activation functions.
- The designed separability loss is intuitive and effective.
- Experiments on diverse types of tasks prove the effectiveness of the method.
- Implementation of a Triton kernel achieves computational time acceleration.
Weaknesses: - Due to the mainstream dense models having an FFN to attention ratio of approximately 2:1, although good sparse acceleration is achieved in FFN, the overall speedup is not high. If experiments were conducted on models with a higher proportion of FFN, the acceleration effect would likely be better, potentially enhancing the impact of the work.
- Some experimental and methodological details lack sufficient discussion. See the questions section for more details.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Regarding the experimental analysis in Line 135: Since MoEfication approximates dense model computation, expert scores are only used for expert selection and do not scale the expert outputs. This ensures that when all experts are selected, the computation is entirely equivalent to the original dense model. Therefore, it should not be possible to have a situation where two experts are selected but only one contributes. Could the authors further elaborate on this phenomenon?
- Concerning the expert grouping method, is there room for further improvement in clustering W_1? LLaMA uses GLU, where the intermediate representation is obtained by element-wise multiplication of two vectors, which is different from the vanilla FFN studied in the original MoEfication.
- What principles guide the selection of thresholds, and is there any transferability to other LLMs?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer found that our work has extensive evaluation and shows strong empirical performance. We thank the reviewer for the constructive feedback and appreciate the opportunity to address the points you have raised.
---
> **Q1:** Due to the mainstream dense models having an FFN to attention ratio of approximately 2:1, although good sparse acceleration is achieved in FFN, the overall speedup is not high. If experiments were conducted on models with a higher proportion of FFN, the acceleration effect would likely be better, potentially enhancing the impact of the work.
**A1:** We agree with the reviewer that improving sparsity only in the FFN layers has a speed-up limit. Applying LTE to a more FFN-intensive model can yield greater speed-up. Another potential solution is to apply MoEfication to the attention layers. We will leave this for future exploration.
>**Q2:** Regarding the experimental analysis in Line 135: Since MoEfication approximates dense model computation, expert scores are only used for expert selection and do not scale the expert outputs. This ensures that when all experts are selected, the computation is entirely equivalent to the original dense model. Therefore, it should not be possible to have a situation where two experts are selected but only one contributes. Could the authors further elaborate on this phenomenon?
**A2:** The reason multiple experts are selected but only one contributes (Line 135) is due to the adoption of the noisy top-K softmax routing strategy in Section 3.2. With this routing strategy, the expert outputs are multiplied by their corresponding expert scores. Given the large number of experts in MoEfication and the fact that the sum of the softmax expert scores is 1, many of the selected expert scores are nearly zero. When those expert outputs are multiplied by the nearly zero scores, the outputs are scaled to zero, resulting in no contribution to the inference.
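To make the score-dilution argument concrete, here is a minimal numerical sketch (our illustration, not the paper's code) of how softmax routing spreads a total score mass of 1 across all experts, so the average expert score shrinks as 1/num_experts, while independent sigmoid scores do not shrink:

```python
import numpy as np

# Hypothetical illustration: with many experts, softmax spreads a total
# score mass of 1 across all experts, so the average score shrinks as
# 1/num_experts. Expert outputs multiplied by these near-zero scores
# contribute almost nothing at inference time.
rng = np.random.default_rng(0)
num_experts = 128            # MoEfication can split an FFN into many experts
logits = rng.normal(size=num_experts)

softmax_scores = np.exp(logits) / np.exp(logits).sum()   # scores sum to 1
print(softmax_scores.mean())                              # ~ 1/128

# A sigmoid router scores each expert independently in (0, 1);
# selecting many experts does not dilute individual scores.
sigmoid_scores = 1.0 / (1.0 + np.exp(-logits))
```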
We hope this explanation addresses your concern; we will polish the writing of Section 3.2 to make this point clearer.
>**Q3:** Concerning the expert grouping method, is there room for further improvement in clustering W_1? LLaMA uses GLU, where the intermediate representation is obtained by element-wise multiplication of two vectors, which is different from the vanilla FFN studied in the original MoEfication.
**A3:** For LLaMA, our current design uses the parameter clustering on the gate matrix (whose output feeds into the activation function) to group the neurons. In our early evaluation, we also tried to use another matrix (up-matrix) to group the neurons. However, we observed no significant difference in performance between the two approaches. As we discussed in Figure 9 in the paper, another strategy (co-activation) also shows similar performance. Our hypothesis is that, since LTE updates the model weights and routers, the model can adjust the grouping to some extent, even though the initial grouping is not optimal. However, if the initial grouping is too bad (like random), the model may not be able to make such a large adjustment.
>**Q4:** What principles guide the selection of thresholds, and is there any transferability to other LLMs?
**A4:** In our evaluation, we set the threshold to 0.5 for all models and tasks, as 0.5 is the midpoint of the sigmoid output range. As shown in Figure 3 in the paper, the separability loss creates a significant gap around the pre-set threshold, so we believe a threshold around 0.5 will also work well for other LLMs.
---
Thanks for the attentive reading of the manuscript and constructive feedback. We will incorporate these changes into our final version.
We hope our response addresses all the concerns and that the reviewer will consider raising the rating accordingly. We are more than glad to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. I'll keep my original score.
---
Reply to Comment 1.1.1:
Title: Thanks for the Response
Comment: We thank the reviewer for the response. We hope these explanations and discussions have answered your questions.
Thanks again for your insightful and constructive comments, which indeed help improve our paper. We are happy to answer further questions if you have any in the future. | Summary: The paper presents a new approach (LTE) aimed at improving the inference efficiency of large language models by developing structured activation sparsity. The method trains LLMs to activate fewer neurons in FFN layers while attempting to maintain task performance. The approach works by grouping neurons into experts and using a routing strategy (based on Sigmoid instead of Softmax) to select experts adaptively. The authors evaluate the method on RoBERTa, GPT2, and LLaMA models across various NLP tasks. They report that LTE outperforms existing baselines and provides FLOPs and a latency reduction by utilizing sparsity through a custom CUDA implementation.
Strengths: 1. The paper presents a new method for inducing sparsity in LLMs. It is based on several MoE concepts but uses Sigmoid-based routing instead of the traditional Softmax.
2. The authors tested their method on multiple models, datasets, and task types.
3. The paper includes a custom CUDA kernel implementation to strengthen the applicability of the method.
Weaknesses: 1. The two-step training increases the complexity of applying the approach
2. Dependency on multiple hyperparameters/thresholds.
3. The paper keeps claiming that existing methods focus on existing sparsity in pre-trained models, but inducing sparsity in training (even in LLMs) is not new and has been around for a while.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Since you are introducing a different training approach, how does the total training time compare to the baseline?
2. Can you provide more insights into the limitations of Softmax routing?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. Complexity overhead of the training.
2. Many of the presented insights are already in the MoE literature, except for the Sigmoid routing.
3. Limited number of baselines (using only two baselines, there is a lot of work about sparsity in LLMs). Imo, the paper would be much stronger if it compares against more (sparsity) baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer found that our work has extensive evaluation and our custom kernel increases the applicability. We thank the reviewer for the constructive feedback and appreciate the opportunity to address the points you have raised.
---
> **Q1:** The two-step training increases the complexity of applying the approach.
> **Q4:** Complexity overhead of the training. Since you are introducing a different training approach, how does the total training time compare to the baseline?
**A1&A4:** Compared to the two post-training baselines, LTE does introduce additional training overhead, but we argue that this training is a one-time effort. Since LTE significantly reduces inference overhead, the approach is beneficial in the long run for serving LLMs more efficiently.
Moreover, as we discussed in Q4 of the General Response, even if we further fine-tune the entire Deja Vu-MoEfied model for the same training time as the LTE models, the Deja Vu models still fail to achieve performance comparable to the LTE models.
>**Q2:** Dependency on multiple hyperparameters/thresholds.
**A2:** We agree with the reviewer that LTE training introduces additional thresholds and hyperparameters. However, we argue that those additional thresholds and hyperparameters do not increase the training complexity of LTE.
**1)** To avoid manually selecting thresholds, we design the separability loss (Eq. 5 in the paper) to make models threshold-aware. This separability loss encourages router outputs to diverge from a predefined threshold (Figure 3 in the paper), which enables us to use the same threshold for all evaluations. In our paper, we set the threshold to 0.5 for all models and tasks, as 0.5 represents the midpoint for the sigmoid output.
**2)** For the separability-loss hyperparameter ($\lambda$ in Eq. 6), our ablation study (Figure 11 in the paper) shows that performance is not sensitive to it: once it is increased beyond a certain value, it introduces no further performance gains. We set this hyperparameter to 0.5 for all models and tasks.
**3)** The only sensitive hyperparameter is $\eta$ in Eq. 6, which is necessary to control the sparsity level of the model (analogous to the number of selected experts $k$ in traditional MoE models).
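As a rough sketch of the threshold-aware routing described in point 1) — note that the margin-style penalty below is our hypothetical stand-in; the exact form of the separability loss is Eq. 5 in the paper:

```python
import numpy as np

def select_experts(router_logits, tau=0.5):
    """An expert is active iff sigmoid(logit) > tau; the number of active
    experts is therefore adaptive rather than a fixed top-k."""
    scores = 1.0 / (1.0 + np.exp(-router_logits))
    return scores > tau

def separability_penalty(router_logits, tau=0.5):
    """Hypothetical stand-in for the separability loss: lower (more
    negative) when router outputs move away from the threshold tau,
    making the selection robust to the exact threshold choice."""
    scores = 1.0 / (1.0 + np.exp(-router_logits))
    return -np.abs(scores - tau).mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=16)      # one router's logits over 16 experts
mask = select_experts(logits)     # boolean mask, adaptive expert count
```

Because each score is thresholded independently, a single fixed threshold (0.5 in the paper's experiments) can serve all layers, with the loss pushing scores away from it on both sides.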
> **Q3:** The paper keeps claiming that existing methods focus on existing sparsity in pre-trained models, but inducing sparsity in training (even in LLMs) is not new and has been around for a while.
**A3:** We thank the reviewer for pointing out the confusion in our claim. The existing methods that we discussed in the paper stand for *MoEfication methods transforming a pretrained dense model into MoE models* rather than traditional MoE training.
As we discussed in General Response Q1, MoEfication presents unique challenges, and our evaluations show that the MoE training baselines (such as noisy top-K softmax, sigmoid-MoE, and fine-tuned Deja Vu models) all underperform LTE. To the best of our knowledge, our paper is the first to explore sparse training on top of a pretrained model.
> **Q5:** Can you provide more insights into the limitations of Softmax routing?
**A5:** The limitations of Softmax routing in MoEfication come from the fact that the sum of the softmax output is one. In traditional MoE models, only a few experts are selected (typically fewer than four), resulting in expert scores that are not too small. However, in MoEfication, a much larger number of experts can be chosen. When the sum of the expert scores is distributed among all experts, each expert may receive a very small score (almost zero in many cases). Since the expert outputs are multiplied by these scores, extremely small scores will scale the expert outputs to very small values, affecting inference performance.
> **Q6:** Many of the presented insights are already in the MoE literature, except for the Sigmoid routing.
**A6:** Although some aspects of LTE design have been discussed in previous work, MoEfication introduces unique challenges (as detailed in General Response Q1). Directly applying existing MoE techniques to the MoEfication scenario is non-trivial.
Our evaluation shows that directly applying MoE training methods, such as noisy-top K softmax, sigmoid-MoE, and fine-tuning Deja Vu models, does not outperform the proposed LTE methods. This result indicates that LTE is more effective for the MoEfication scenario and provides a strong baseline for this area.
> **Q7:** Limited number of baselines (using only two baselines, there is a lot of work about sparsity in LLMs). Imo, the paper would be much stronger if it compares against more (sparsity) baselines.
**A7:** In addition to the Deja Vu and MoEfication baselines reported in Section 5, we also compare the performance of the noisy top-k softmax routing, which is the most common MoE training strategy, in Section 3.2. Due to the training collapse issue of the softmax router, we did not conduct large-scale evaluations on other tasks.
Moreover, as suggested by Reviewer #1 (zH4X), we evaluated and compared **three additional baselines**: Sigmoid-MoE and Deja Vu with fine-tuning (Figure 1 in the rebuttal PDF); Model Pruning Wanda (Table 1 in the rebuttal PDF). The evaluation results show that LTE still outperforms these baselines. A potential reason for LTE's better performance is that MoEfication presents unique challenges compared to traditional MoE training, which are better addressed by LTE. *(we kindly refer to the General Response Q1 for more details).*
We hope those new baselines can address your concerns.
---
Thanks for the attentive reading of the manuscript and constructive feedback. We will incorporate these changes into our final version.
We hope our response addresses all the concerns and that the reviewer will consider raising the rating accordingly. We are more than glad to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for providing further insights and clarifications (through this comment or the other comments, especially with the added baseline). I will consider them when adjusting the final score.
---
Reply to Comment 1.1.1:
Title: Thanks for the Response
Comment: We thank the reviewer for the response. We are happy to see the reviewer’s acknowledgment of our efforts on further insights, clarifications, and new baseline evaluations in our response. We hope these discussions and evaluations have effectively addressed your concerns.
Please let us know if you have any further questions, concerns, or points that require clarification. We are more than happy to answer or discuss them. We look forward to your final score. | Summary: This article introduces a novel training algorithm, LTE, designed to train large language models (LLMs) to achieve more structured activation sparsity during inference. Thus, it enhances their efficiency without compromising performance until a very high sparsity.
Strengths: 1. LTE performs excellently across all datasets: In multiple natural language understanding (NLU) tasks, LTE shows no significant performance degradation at high sparsity levels (80-95%) and maintains good performance even at sparsity levels exceeding 90% of FFN.
2. This article develops a CUDA kernel to speed up inference by reducing memory and computational overheads.
Weaknesses: 1. Stage 1 of LTE will train all the model's parameters. How do you implement Dejavu? As far as I know, Dejavu is a post-training method that freezes the model's parameters. I'm not sure whether this is a fair comparison.
2. The LTE algorithm introduces a two-stage training process, which might be complex and computationally intensive to implement. This can be a barrier to practical adoption in resource-constrained environments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have you tried LTE also on the attention layers?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The article includes a limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer found that our work has excellent performance. We thank the reviewer for the constructive feedback and appreciate the opportunity to address the points you have raised.
---
> **Q1:** Stage 1 of LTE will train all the model's parameters. How do you implement Dejavu? As far as I know, Dejavu is a post-training method that freezes the model's parameters. I'm not sure whether this is a fair comparison.
**A1:** Deja Vu is proposed as a post-training method in its paper. Following this, we implemented Deja Vu in a post-training manner: we first fine-tuned the model on the specific datasets and then applied Deja Vu to MoEfy the fine-tuned model.
However, we understand the reviewer's concern that fine-tuning these MoEfied models (all parameters) could further improve performance, so we conducted such an evaluation: we first fine-tuned the model on Wikitext-103 and applied Deja Vu, then further fine-tuned the Deja Vu model on Wikitext-103 (for the same training time as LTE training). Note that the first fine-tuning is necessary for Deja Vu to collect data for training its predictors.
The evaluation results are reported in Figure 1 of the rebuttal PDF. We find that while the additional fine-tuning does improve the performance of Deja Vu, LTE still outperforms Deja Vu with additional fine-tuning.
> **Q2:** The LTE algorithm introduces a two-stage training process, which might be complex and computationally intensive to implement. This can be a barrier to practical adoption in resource-constrained environments.
**A2:** Compared to the two post-training baselines, even though LTE introduces additional training overhead, we argue that LTE training is a one-time effort. Since LTE significantly saves the inference overhead, this approach is beneficial in the long run for serving LLM more efficiently.
Moreover, as we discussed in Q1, even if we further fine-tune the entire Deja Vu-MoEfied model for the same training time as the LTE models, the Deja Vu models fail to achieve performance comparable to LTE, which indicates that other LTE components, beyond training, also contribute to the performance.
>**Q3:** Have you tried LTE also on the attention layers?
**A3:** We focus on MoEfication for FFN layers in this work, but we agree that MoEfication of attention layers is an interesting and promising direction. A potential solution is to treat each attention head as an expert and use sigmoid routing to decide whether a head should be used. We believe MoEfication of attention layers can further increase the model's contextual sparsity, and we plan to leave this for future study.
---
Thanks for the attentive reading of the manuscript and constructive feedback. We will incorporate these changes into our final version.
We hope our response addresses all the concerns and that the reviewer will consider raising the rating accordingly. We are more than glad to answer any further questions.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer Jye3
Comment: Thanks for the response of the authors. Most of my questions have been addressed. I will raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks for the Response
Comment: We thank the reviewer for the response and for raising the score. We are glad to see that our responses have addressed your concerns!
Thanks again for your insightful and constructive comments, which indeed help improve our paper. We are happy to answer further questions if you have any in the future. | Summary: This work aims to introduce structured sparsity to large language models (LLMs) to improve their execution efficiency. To achieve this, it enhances previous MoEfication methods by employing a sigmoid-based non-competitive routing function and a threshold-based expert selection, allowing for adaptive expert numbers. Experiments across various models and different language understanding and generation tasks validate the effectiveness of the proposed method.
Strengths: 1. The paper is well-motivated and easy to follow.
2. The proposed method exhibits good soundness and can achieve real-device speed-up with the developed CUDA kernel.
3. The proposed method has been evaluated across both encoder and decoder language models, achieving a consistently improved accuracy-efficiency trade-off.
Weaknesses: 1. The major concern is that the technical contribution and novelty of this work are somewhat limited. The use of MoE with sigmoid functions to avoid competition among experts has been adopted in previous works, such as [1][2], and an extension to MoEfication will intuitively work. The authors are expected to analyze the key differences that make the proposed method particularly suitable for MoEfication.
[1] "Approximating Two-Layer Feedforward Networks for Efficient Transformers," R. Csordás et al., EMNLP'23.
[2] "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention," R. Csordás et al., arXiv'23.
2. One missing baseline is structured weight pruning [3][4]. Considering this work mainly targets a per-task fine-tuning setting, structured weight pruning can already achieve decent sparsity with non-trivial generation speed-up without performance dropping [3][4]. This comparison can inform the community whether weight sparsity or context sparsity is more cost-effective or if they can be applied together.
[3] "A Simple and Effective Pruning Approach for Large Language Models," M. Sun et al., ICLR'24.
[4] "Fluctuation-based Adaptive Structured Pruning for Large Language Models," Y. An et al., AAAI'24.
3. Task-specific fine-tuning for each task is too costly. The authors are highly encouraged to perform continued pretraining and evaluate across different tasks to validate the generalization capability of the proposed method.
4. I wonder how the task-specific fine-tuning is performed for the baseline methods. Specifically, is Dejavu/MoEfication performed on top of the fine-tuned model, or is the model fine-tuned after applying these techniques, or are there any smarter strategies? This question arises because the proposed method simultaneously updates both model weights and expert selection strategies, which may be a key reason why it outperforms the baselines.
Technical Quality: 3
Clarity: 3
Questions for Authors: My questions have been included in the weakness section. I'm willing to adjust my scores if my concerns are properly addressed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This work does not suffer from notable negative societal impacts as it aims to improve the efficiency of LLMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer found that our work is well-motivated and sound. We thank the reviewer for the constructive feedback and appreciate the opportunity to address the points you have raised.
---
> **Q1:** Novelty concerns: A sigmoid router was proposed in previous MoE works like [1][2]. The authors are expected to analyze the key differences that make the proposed method particularly suitable for MoEfication.
**A1:** We thank the reviewer for pointing out the missing references, and we are happy to clarify the key differences between LTE and MoE with a sigmoid router, and the novelty of our paper.
First, we would like to clarify that, compared to traditional MoE, the MoEfication problem has three unique challenges: 1) Router design for a larger number of selected experts; 2) Router training on pretrained models; 3) Adaptive sparsity in different layers of the pretrained model. *(We kindly refer to the General Response Q1 for more details.)*
Even though Sigmoid-MoE [1] handles the first challenge, it ignores the second and third, which can cause inferior performance. LTE addresses both: for the second challenge, LTE adopts an indicator function (Eq. 2) to avoid scaling the expert outputs, together with a two-stage training algorithm that resolves the non-differentiability of the indicator function; for the third challenge, LTE employs an efficiency loss (Eq. 4) that introduces competition across layers, leading to more adaptive sparsity in different layers (Figure 15 in the paper). These novel designs allow LTE to better address the challenges of MoEfication. *(We kindly refer to the General Response Q2 for more details.)*
**Sigmoid-MoE Evaluation.** To better understand how Sigmoid-MoE works on MoEfication tasks, we implement the Sigmoid-MoE[1] on the GPT2-M and fine-tune it with the WikiText103. We train Sigmoid-MoE models with the same training time as we train the LTE models. The comparison results are in Figure 1 in the rebuttal PDF: Sigmoid-MoE overcomes the collapse issue of the softmax router, but still underperforms LTE.
>**Q2:** One missing baseline is structured weight pruning [3][4]…
**A2:** We thank the reviewer for bringing the model pruning for discussion. As discussed in lines 94-99 of our paper, while both LTE and structured weight pruning provide structured sparsity for inference acceleration, they provide two different types of sparsity. The contextual sparsity offered by LTE is more flexible and adaptive compared to the static sparsity offered by model pruning.
To provide a clearer comparison between these methods, we apply Wanda to a Wikitext-103 fine-tuned LLaMA-7B model and report the results in Table 1 in the rebuttal PDF. The evaluation results show that LTE achieves better performance than Wanda given the same level of sparsity.
> **Q3:** Task-specific fine-tuning for each task is too costly. The authors are highly encouraged to perform continued pretraining and evaluate across different tasks to validate the generalization capability of the proposed method.
**A3:** We agree with the reviewer that validating the generalization capability better demonstrates the effectiveness of the proposed method. Our submission already includes this evaluation in Figure 7 (Section 5.2): we use the Tulu dataset for supervised fine-tuning of LTE models and evaluate few-shot performance on MMLU (a large, comprehensive benchmark of 15,908 questions from 57 distinct tasks). The results show that LTE still outperforms the baseline methods in this supervised fine-tuning setting, demonstrating its generalization capability.
> **Q4:** I wonder how the task-specific fine-tuning is performed for the baseline methods. Specifically, is Dejavu/MoEfication performed on top of the fine-tuned model, or is the model fine-tuned after applying these techniques, or are there any smarter strategies?
**A4:** Deja Vu and MoEfication are proposed as post-training methods in their papers. Following this, we implemented them in a post-training manner: we first fine-tuned the model on the specific datasets and then applied Deja Vu or MoEfication to MoEfy the fine-tuned model. (This is also the implementation suggested in the MoEfication paper.)
However, we understand the reviewer's concern that further fine-tuning these MoEfied models (all parameters) could further improve performance, so we conducted such an evaluation: we first fine-tuned the model on Wikitext-103 and applied Deja Vu, then further fine-tuned the MoEfied Deja Vu model on Wikitext-103 (for the same training time as LTE training). Note that the first fine-tuning is necessary for Deja Vu to collect data for training its predictors.
The evaluation results are reported in Figure 1 of the rebuttal PDF. We find that while the additional fine-tuning does improve the performance of Deja Vu, LTE still outperforms Deja Vu with additional fine-tuning.
---
Thanks for the attentive reading of the manuscript and constructive feedback. We will incorporate these changes into our final version.
We hope our response addresses all the concerns and that the reviewer will consider raising the rating accordingly. We are more than glad to answer any further questions.
---
[1] "Approximating Two-Layer Feedforward Networks for Efficient Transformers," R. Csordás et al., EMNLP'23.
[2] "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention," R. Csordás et al., arXiv'23.
[3] "A Simple and Effective Pruning Approach for Large Language Models," M. Sun et al., ICLR'24.
[4] "Fluctuation-based Adaptive Structured Pruning for Large Language Models," Y. An et al., AAAI'24.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for providing the response and addressing most of my concerns. I will raise my score and further listen to other reviewers' opinions.
---
Rebuttal 2:
Title: Thanks for the Response
Comment: We thank the reviewer for the response and for raising the score. We are indeed glad that our responses address your concerns!
Thanks again for your insightful and constructive comments, which indeed help improve our paper. We are happy to answer further questions if you have any in the future. | Rebuttal 1:
Rebuttal: ## General Response
Dear reviewers,
We thank all the reviewers for their constructive reviews towards improving our work.
We are pleased that reviewers found our paper’s advantages: “LTE constantly achieves better performance-sparsity trade-off across multiple models, datasets, and task types.” (All reviewers); “The customized kernel improves the soundness of the work.” (All reviewers).
For the comments and concerns discussed in the reviews, we write this general response to address some common concerns and separate responses to each individual review.
---
> **Q1:** The challenges of MoEfication compared to traditional MoE training.
**A1:** MoEfication presents three unique challenges:
**1) Router design for a larger number of selected experts:** Unlike traditional MoE designs, which typically select a small number of experts, MoEfication can choose a much larger number of experts. In our empirical evaluation, this causes the commonly used softmax routing to collapse (as discussed in Section 3.2).
**2) Router training on pretrained models:** MoEfication is based on a pretrained model, unlike traditional MoE models that are trained from scratch. Before MoEfication, the outputs of the grouped experts are not scaled in the pretrained model, but typical MoE designs need to scale outputs with expert scores to make the router differentiable. The scaling of expert outputs in a pretrained model can hurt performance, even with fine-tuning.
**3) Adaptive sparsity in different layers of the pretrained model.** Recent work [2] shows that different layers in a pretrained model have different levels of sparsity. Traditional MoE designs use a predefined k to set the sparsity for each layer, which prevents adaptive sparsity across different layers.
Those challenges make it non-trivial to directly apply MoE training for the MoEfication problem.
> **Q2:** The novelty and contribution of our paper.
**A2:** While some MoE-related concepts have been discussed in the MoE literature, as far as we know, LTE is the first method to address all three aforementioned challenges in MoEfication.
**1) Efficiency-aware Sigmoid router:** We adopt a sigmoid router and use an efficiency-aware loss to introduce competition among experts, thereby avoiding the collapse issue associated with softmax routing. This addresses the first challenge.
**2) Indicator function with two-stage training:** We use an indicator function to select experts without scaling expert outputs. To address the non-differentiability of the indicator function, we propose a two-stage training algorithm to jointly train the model and the routers. This addresses the second challenge.
**3) Threshold-based router with efficiency loss:** We utilize a threshold-based router, which allows for a more adaptive selection of experts in each layer. The efficiency loss introduces competition across different layers, leading to adaptive sparsity in different layers (see the ablation study in Figure 15 in the Appendix), which addresses the third challenge.
Besides handling those three challenges, another novelty of LTE is the introduction of inference efficiency as an optimization goal. This approach trains models to use only the parameters necessary for inference, which differs from the common practice of predefining a sparsity level in typical MoE.
> **Q3:** Additional baseline comparison: Sigmoid-MoE[1]
**A3:** As suggested by Reviewer #zH4X, we implemented the Sigmoid-MoE to test its performance on MoEfication tasks. We applied Sigmoid-MoE to the GPT-2 Medium model and fine-tuned it with the WikiText-103 dataset. The Sigmoid-MoE model was trained for the same training time as the LTE models. The comparison results are presented in Figure 1 in the rebuttal PDF. While Sigmoid-MoE overcomes the collapse issue of the softmax router, it still underperforms LTE.
> **Q4:** Additional baseline comparison: Deja Vu fine-tuning
**A4:** Deja Vu and MoEfication are proposed as post-training methods in their papers. Following this, we implemented them in a post-training manner in our paper: we first fine-tuned models on the specific datasets and then applied Deja Vu or MoEfication to the fine-tuned models to evaluate performance.
However, we understand the reviewers' concern that fine-tuning those MoEfied models (all parameters) could further improve the performance of the baselines. To study this, we conducted an evaluation that fine-tunes the Deja Vu model (all parameters): we first fine-tuned the model on WikiText-103 and applied Deja Vu, then further fine-tuned the Deja Vu model on WikiText-103 for the same training time as LTE training. Note that the first fine-tuning is necessary for Deja Vu to collect data to train its predictors.
The evaluation results are reported in Figure 1 of the rebuttal PDF. We find that while the additional fine-tuning does improve the performance of Deja Vu, LTE still outperforms Deja Vu with fine-tuning.
---
To sum up, MoEfication presents unique challenges, and it is non-trivial to directly apply MoE techniques to MoEfication. Considering the novel design and strong empirical performance of LTE, we believe that LTE presents a strong baseline for future study.
We agree with the reviewers that the paper writing can be further improved, and we believe that those issues can be well addressed in our final version.
---
[1] "Approximating Two-Layer Feedforward Networks for Efficient Transformers," R. Csordás et al., EMNLP'23.
[2] "The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers." Li, Zonglin, et al., ICLR’23.
Pdf: /pdf/0805439319d24a55f3b332722c2408998c12c7d0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SubgDiff: A Subgraph Diffusion Model to Improve Molecular Representation Learning | Accept (poster) | Summary: SubgDiff is introduced to improve molecular representation learning by integrating substructural information into the diffusion model framework. It offers three key technical contributions (subgraph prediction, expectation state, and k-step same subgraph diffusion) to enhance the network's understanding of molecular substructures. Experiments were carried out on several downstream tasks, particularly molecular force predictions.
Strengths: * Incorporate substructural information into diffusion model
Weaknesses: * The denoising process need better explanations.
* Diffusion models excel at generating new samples. The application of SubgDiff to molecular property prediction/classification does not show its strengths. The state-of-the-art results are missing from Section 5.1. Section 5.2 doesn't compare generation results with the state-of-the-art.
* Explain COV-R and MAT-R in section 5.2
Technical Quality: 2
Clarity: 2
Questions for Authors: * Does the denoising processing require to start with a R^T that has a clear topology? If not, how to sample s_t?
* How to select k in the k-step same-subgraph diffusion? Any general guidance?
* How are the SubgDiff learning results fed into downstream tasks?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive suggestions and useful feedback!
> W1: The denoising process needs better explanations.
**AW1:** Thanks for the helpful suggestion. We will further describe the details of the denoising process in the paper. The only difference from the traditional diffusion backward process is that when $t\%k=0$, we use the subgraph predictor to update the subgraph to be denoised. The whole denoising process is given in Algorithm #1 below.
> W2: Diffusion models excel at generating new samples. The application of SubgDiff to molecular property prediction/classification does not show its strengths. The state-of-the-art results are missing in Section 5.1. Section 5.2 doesn't compare generation results with the state-of-the-art.
**AW2**: Thanks a lot for the insightful comments.
We agree that diffusion models are good at generation. The training objective of a diffusion model can also serve as a self-supervised learning task: this is one of our motivations, and our primary focus is on improving the diffusion model for representation learning. Our experiments on molecular property prediction show that our approach outperforms various molecular self-supervised learning methods and the original diffusion model.
(1) In Section 5.1, our method uses the same framework as MoleculeSDE, which is the state-of-the-art technique for molecular pretraining. The experiments also demonstrate our method can gain significant improvement over the other self-supervised techniques. Besides, our method significantly outperforms MoleculeSDE, especially in 3D property prediction (Table 1 in the main paper), which indicates the effectiveness of our method by incorporating the information of substructures.
(2) In Section 5.2, we focus on evaluating the substructure's effect on the diffusion model itself and the experiment verifies the success of our design (Table 2 in the main text). Our results show that our approach is able to improve the sampling efficiency and generalization ability, compared to the original diffusion model. We also discuss other conformational generation models in the related work and provide the performance of other SOTA methods in Appendix Table 12.
> W3: Explain COV-R and MAT-R in section 5.2
**AW3:** Thanks for pointing this out. The definition of COV-R and MAT-R can be found in Eq (32) and Eq (33) in Appendix A.5.2.
"Let $S_g$ and $S_r$ denote the sets of generated and reference conformers respectively, then the Coverage and Matching metrics can be defined as
$\text{COV-R}(S_g, S_r) = \frac{1}{|S_r|} \left| \lbrace C \in S_r \mid \operatorname{RMSD}(C, \hat{C}) \le \delta,\ \hat{C} \in S_g \rbrace \right|$
$\text{MAT-R}(S_g, S_r) = \frac{1}{|S_r|} \sum\limits_{C \in S_r} \min\limits_{\hat{C} \in S_g} \operatorname{RMSD}(C, \hat{C})$
where $\delta$ is a threshold."
We will put it into Section 5.1 in the main paper.
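To make the two metrics concrete, here is a small self-contained sketch (the function name and the toy RMSD values are our own illustration, not the paper's code) that computes COV-R and MAT-R from a precomputed pairwise RMSD matrix:

```python
import numpy as np

def coverage_and_matching(rmsd, delta):
    """COV-R and MAT-R from a pairwise RMSD matrix.

    rmsd: array of shape (|S_r|, |S_g|), rmsd[i, j] = RMSD between
          reference conformer i and generated conformer j.
    delta: coverage threshold.
    """
    min_rmsd = rmsd.min(axis=1)                # best generated match per reference
    cov_r = float((min_rmsd <= delta).mean())  # fraction of references covered
    mat_r = float(min_rmsd.mean())             # mean best-match RMSD
    return cov_r, mat_r

# Toy example: 3 reference conformers, 2 generated conformers.
rmsd = np.array([[0.3, 0.9],
                 [1.2, 0.4],
                 [2.0, 1.8]])
cov, mat = coverage_and_matching(rmsd, delta=0.5)
# cov == 2/3 (two references matched within delta), mat == 2.5/3
```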
> Q1: Does the denoising processing require starting with a $R^T$ that has a clear topology? If not, how to sample s_t?
**AQ1:** Thanks for the question. Our diffusion model generates the molecular conformation conditioned on the molecular graph, so the graph topology is given, while the initial conformation $R^T$ is sampled from Gaussian noise. $s_t$ is predicted by the subgraph predictor $s_\vartheta(\mathcal{G}, R^t,t)$, which is trained with the objective in Eq (17). Algorithm 2 in the main paper and Algorithm 4 in the appendix also describe the whole process; we reproduce it as Algorithm #1 below.
- **Algorithm #1: Sampling from SubgDiff**
----
Sample $R^T \sim \mathcal N(\mathbf 0, \mathbf I)$ // random noise initialization
**For** $t = T$ **to** $1$:
1. $\mathbf z \sim \mathcal N(\mathbf 0, \mathbf I)$ if $t>1$, else $\mathbf z =\mathbf 0$ // random noise
2. **If** $t\%k==0$ or $t==T$: $\hat{s} \gets s_\vartheta(\mathcal{G},R^{t},t)$ // subgraph prediction
3. $\hat\epsilon \gets \epsilon_\theta (\mathcal G,R^{t},t)$ // posterior noise prediction
4. $R^{t-1} \gets$ Equation 18 // sample coordinates
**Return** $R^0$
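For readers who prefer code, a minimal NumPy sketch of this sampling loop is below. `eps_net`, `subg_net`, and `coords_update` are placeholder callables standing in for the denoising network $\epsilon_\theta$, the subgraph predictor $s_\vartheta$, and the Eq. (18) coordinate update, which we do not reproduce here:

```python
import numpy as np

def sample_subgdiff(eps_net, subg_net, coords_update, T, k, n_atoms, dim=3, seed=0):
    """Sketch of Algorithm #1: reverse diffusion with a subgraph mask
    refreshed every k steps by the subgraph predictor."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n_atoms, dim))      # R^T ~ N(0, I)
    s_hat = None
    for t in range(T, 0, -1):
        z = rng.standard_normal((n_atoms, dim)) if t > 1 else np.zeros((n_atoms, dim))
        if t % k == 0 or t == T:                 # subgraph prediction
            s_hat = subg_net(R, t)
        eps_hat = eps_net(R, t)                  # predicted noise
        R = coords_update(R, eps_hat, s_hat, z, t)  # Eq. (18) stand-in
    return R
```

The `t % k == 0 or t == T` condition mirrors line 2 of the algorithm: the predicted subgraph mask is only refreshed every $k$ steps, matching the $k$-step same-subgraph design.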
>Q2: How to select k in the k-step same-subgraph diffusion? Any general guidance?
**AQ2:** Thanks for this valuable question. $k$ cannot be too large: a large $k$ makes the subgraph prediction task too easy, so the model learns little substructure information. Conversely, a very small $k$ makes it difficult for the subgraph predictor to converge. In our experience, keeping a fixed ratio between the number of diffusion steps $N$ and $k$ (e.g., $N:k=20$ in our experiments) yields better performance. In practice, $k$ is a hyper-parameter that can be tuned according to the sampling performance.
We also provide the sensitivity analysis of $k$ in Table #1 of the general response.
>Q3: How are the SubgDiff learning results fed into downstream tasks?
**AQ3:** Thank you for the important question. The denoising network $\epsilon_\theta (\mathcal G, R^{t},t)$ contains a 3D encoder based on SchNet and a 2D encoder based on GIN. After training SubgDiff, these encoders serve as pretrained models for the downstream tasks. This follows the convention of generative self-supervised techniques, e.g., MoleculeSDE [1] and Denoising [2]. We also provide the details in Appendix A.4.
[1] Shengchao Liu, et al. A group symmetric stochastic differential equation model for molecule multi-modal pretraining. ICML 2023
[2] Sheheryar Zaidi, et al. Pre-training via denoising for molecular property prediction. ICLR 2023.
---
Rebuttal Comment 1.1:
Title: kind reminder to the reviewer
Comment: Dear reviewer,
We kindly ask if you could inform us whether your concerns have been adequately addressed in our rebuttal, or if you have any further questions. We are committed to responding promptly to any additional inquiries you may have.
Thank you for your time and valuable feedback.
Best regards,
The Authors | Summary: The paper proposed SubgDiff which is a diffusion model used in self-supervised learning setup to enhance the molecular representation learning. It introduces motif enhancement during the diffusion process to force the model to learn more structure information.
Strengths: 1. The idea of enhancing motif information in the diffusion process is promising.
2. The paper is well-organized and easy to follow.
Weaknesses: 1. The authors directly use the baseline results from the MoleculeSDE paper in their table; however, the results for MoleculeSDE are significantly lower than those reported in the original paper.
2. The proposed method is more like a graph diffusion model than a molecular representation learning model. It is limited to representation learning within a self-supervised learning framework. It would be beneficial to explicitly state this in the abstract or introduction.
3. The paper lacks an ablation study to evaluate the contribution of each component of SubgDiff to molecular representation learning.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to Weaknesses.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and sharing these important points. These comments highlight several areas where we can improve the clarity and thoroughness of our paper. We acknowledge the issues raised and would like to address each point.
> W1: The authors directly use the baseline results from the MoleculeSDE paper in their table; however, the results for MoleculeSDE are significantly lower than those reported in the original paper.
**A1:** Thanks for your careful review! We used the code released by MoleculeSDE and report the reproduced results. Our method applies the same pretraining framework as MoleculeSDE except for the diffusion model: we replace the original diffusion model in MoleculeSDE with our SubgDiff. Hence, for a fair comparison, we used our reproduced MoleculeSDE results rather than the numbers reported in its paper.
> W2: The proposed method is more like a graph diffusion model than a molecular representation learning model. It is limited to representation learning within a self-supervised learning framework. It would be beneficial to explicitly state this in the abstract or introduction.
**A2:** We agree with the reviewer that our method bears similarities to graph diffusion models and that its primary focus is on representation learning within a self-supervised framework. Indeed, we use the denoising objective in diffusion as the self-supervised task. We acknowledge that we should have been more explicit about this positioning of our work. To address this:
a) We will revise the abstract to clearly state that SubgDiff is a graph diffusion model focused on molecular representation learning in a self-supervised context.
b) In the introduction, we will add a paragraph explicitly discussing the nature of our model and its place within the broader landscape of molecular self-supervised learning.
c) We will review the entire manuscript to ensure consistent and clear communication about the scope and nature of our method.
> W3: The paper lacks an ablation study to evaluate the contribution of each component of SubgDiff to molecular representation learning.
**A3:** Thank you for the helpful suggestion. (1) Our method contains three components, where the subgraph prediction and expectation state are designed to work together for the sampling. Without these two techniques, the method cannot be used for sampling due to the inaccessible $s_t$ during inference. For the $k$-step same subgraph component, we provide the results concerning different $k$ to demonstrate its significance, as shown in **Table #1** in the general response.
(2) The expectation state and subgraph prediction can be evaluated in the self-supervised learning context. The results of the downstream force prediction tasks are shown in **Table #2** in the general response.
The results suggest that the subgraph prediction loss plays a more important role in molecular representation learning.
---
Rebuttal Comment 1.1:
Title: Kind reminder to the reviewer
Comment: Dear reviewer,
We kindly ask if you could inform us whether your concerns have been adequately addressed in our rebuttal, or if you have any further questions. We are committed to responding promptly to any additional inquiries you may have.
Thank you for your time and valuable feedback.
Best regards,
The Authors | Summary: The paper presents a new denoising diffusion probabilistic model (DDPM) named SubgDiff, designed to enhance molecular representation learning by incorporating substructural information into the diffusion process. SubgDiff introduces a mask operation that selects subgraphs for diffusion, aiming to better capture the dependencies among atoms within substructures. The method includes techniques such as subgraph prediction, expectation state diffusion, and k-step same-subgraph diffusion, which together are intended to improve the learning of molecular properties related to 3D conformation. The paper claims superior performance on various downstream tasks, particularly in molecular force predictions.
Strengths: 1. Utilizing subgraphs for diffusion is a novel and intriguing exploration.
2. The experiments are thorough and demonstrate the effectiveness of SubgDiff across a range of molecular prediction tasks.
Weaknesses: 1. The subgraph prediction model is trained on highly specialized datasets, which might pose a risk of overfitting.
2. The model integrates multiple complex diffusion stages, which could complicate the training and debugging processes and make them difficult to optimize. Particularly, adjusting hyperparameters and verifying model stability might require additional effort.
3. Given the complexity involved in the expectation state and k-step diffusion processes, the model may have high demands on computational resources.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is the stability of subgraph selection ensured during the implementation of the multi-step subgraph diffusion (k-step same-subgraph diffusion) process? Is there a risk of accumulating long-term errors due to inappropriate subgraph choices?
2. Does the training strategy for the subgraph prediction model $p_\partial(s_t|R^t)$ include handling of imbalanced data? Specifically, how does the model avoid bias towards subgraphs that appear more frequently in the training data?
3. Can this method also be applied to broader graph representation learning tasks, such as node classification and clustering?
4. Diffusing considering substructures is an interesting aspect from my perspective; besides the functional group substructures in molecular graphs, community substructures [1] are prevalent in other networks. If considering community structures during the diffusion process, what insights might the authors have? This is an interesting question, as community structures are widely present in biological and social networks.
[1] 'Community detection in graph: An embedding method' in IEEE TNSE
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper introduces a complex diffusion model that leverages subgraph structures to enhance molecular representation learning. However, more limitations should be discussed, such as overfitting, computational efficiency, and generalization across diverse molecular structures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments and thoughtful feedback on our work.
**[W1. Highly specialized datasets]** Thanks for highlighting this important concern. The subgraph prediction model shares the molecular encoder with the denoising network and has an additional classification head. The subgraph predictor is trained together with the denoising network on the same dataset. We use the commonly used datasets GEOM and PCQM4Mv2 to ensure fair comparisons with the baselines. The overfitting risk can be avoided by using a larger dataset for pretraining. Our experiment in the cross-domain task shows that our proposed method has good generalization ability and may alleviate the overfitting problem to some extent.
**[W2&W3. Multiple stages and complexity]** We appreciate the reviewer's concern about the complexity of SubgDiff. While we acknowledge our model is more complex than the vanilla diffusion model, we believe this complexity is acceptable and can be justified by performance improvements, especially in molecular force predictions. To address your concerns:
- Training and debugging: compared to the original diffusion model, ours only adds a classification head for subgraph prediction and one additional loss term, so the extra training and debugging complexity remains manageable.
- Optimization and hyperparameters: Our method brings two more hyperparameters: the weight $\lambda$ for subgraph prediction loss and $k$ for the $k$-step same-subgraph diffusion. We conduct ablation studies (see the Tables in the general response) and provide recommended hyperparameters based on our experiments.
- Model stability: We implement gradient clipping, careful learning rate scheduling, and conduct long-term training experiments to ensure stability, which are the common techniques for diffusion model training.
- Computational resources: In practice, compared with typical diffusion models, the introduced expectation state and $k$-step same-graph diffusion do not bring much computational overhead. We explain it from training and inference respectively.
(1)Training: The expectation state and k-step diffusion only affect the weights of adding noise. SubgDiff can directly compute the $R^t$ from $R^0$ like the classical diffusion model as shown in Eq(16) of the main paper, i.e. $R^{t} = \sqrt{\frac{\bar\gamma_{t}\bar\alpha_{m}}{\bar\gamma_{km}}}R^0 + (\frac{\bar\gamma_{t}}{\bar\gamma_{km}} p^2 \sum_{l=1}^{m} \frac{\bar\alpha_{m}}{\bar\alpha_{l}}(1-\frac{\bar\beta_{kl}}{\bar\beta_{(l-1)k}}) + 1-\frac{\bar\gamma_{t}}{\bar\gamma_{km}})\epsilon$, where the weight calculations are scalar and do not require much computational overhead. In practice, the training time of one batch is similar to the GeoDiff baseline.
(2) Inference: The sampling computation also only needs to calculate some scalar coefficients as shown in Eq(18). From Table 3, we can see that SubgDiff needs fewer sampling steps compared to baseline GeoDiff, demonstrating the computational efficiency of the proposed method.
We also provide the source code [here](https://anonymous.4open.science/r/SubGDiff/README.md) to make SubgDiff accessible.
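To make the training-time point in (1) concrete, below is a toy sketch of subgraph-masked one-shot noising. We deliberately use plain DDPM cumulative weights $\bar\alpha_t = \prod_i (1-\beta_i)$ and a binary node mask as simplified stand-ins for the exact Eq. (16) coefficients; the function and schedule are illustrative only:

```python
import numpy as np

def masked_one_shot_noising(R0, mask, beta, t, rng):
    """Toy subgraph-masked noising with DDPM-style weights (simplified
    stand-ins for the SubgDiff coefficients in Eq. (16)): only nodes with
    mask=1 receive noise; the rest keep their clean coordinates."""
    alpha_bar = np.prod(1.0 - beta[:t])     # cumulative schedule weight, a scalar
    eps = rng.standard_normal(R0.shape)
    noisy = np.sqrt(alpha_bar) * R0 + np.sqrt(1.0 - alpha_bar) * eps
    m = mask[:, None]                       # broadcast node mask over xyz
    return m * noisy + (1.0 - m) * R0
```

The point is that, as in Eq. (16), the mask mechanism only changes scalar weights applied per node, so $R^t$ is still obtained from $R^0$ in a single cheap step rather than by simulating the chain.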
**[Q1. Multi-step subgraph diffusion]** Thanks for the insightful question! During training, the subgraph is randomly sampled from the subgraph set generated by the torsional-based decomposition method, which randomly selects a torsional bond in the given molecule and breaks the molecule into two parts; each part is a connected subgraph and is then randomly chosen as the subgraph. This predefined subgraph set ensures the appropriateness of subgraph choices. At each time step, the probability of each node being included in the diffusion process is 0.5. We use a relatively large number of diffusion steps so that, on average, each node is diffused uniformly, which avoids the risk of accumulating long-term errors.
**[Q2. Imbalanced data]** Thanks for the valuable question. The current training strategy does not consider this issue, so as to make a fair comparison with the baselines. In our subgraph sampling strategy, each node has the same probability of 0.5 to be included in the diffusion process. So as the diffusion step is large, our method is comparable to the baselines w.r.t the same training dataset. However, the imbalanced data is an important issue. It may be addressed by clustering the dataset and using a stratified sampling strategy during training.
**[Q3. Broader graph representation learning tasks]** Thanks a lot for the insightful comments. In general, the introduced subgraph prediction can be used as an auxiliary loss for representation learning, if there is subgraph information available. According to the applications, our framework can be easily adjusted to incorporate substructure information into the representation learning model.
**[Q4. Community substructures]**
Thanks for raising this interesting question regarding the potential application of the substructure-aware diffusion process to community structures in other networks. We're excited to share our thoughts on this matter:
- The parallel between functional group substructures in molecular graphs and community structures in other networks is indeed compelling. This analogy suggests that our approach could potentially be adapted to other domains with hierarchical or modular structures.
- Applying a similar diffusion process to networks with community structures could potentially yield several insights, such as community evolution over time, inter-community interactions, and hierarchical community structure.
- There exist potential applications in other domains such as biological networks or social networks. In biological networks, a community-aware diffusion model could potentially: a) Predict the impact of perturbations (e.g., gene knockouts) on functional modules; b) Identify key regulatory hubs that influence multiple communities.
**[Limitations]**
Thanks so much for the advice. We will add those discussions to the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their reply, which has resolved most of my issues. I will maintain my positive rating, and I wish you good luck.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer Djew,
Thank you so much for the positive rating!
We sincerely appreciate your constructive suggestions and valuable comments for improving our paper. Thank you!
Best regards,
The Authors | Summary: The paper proposes a diffusion-based pretraining method using subgraphs to learn enhanced molecular representations. Unlike previous methods which normally add noise to every atom, this paper proposes adding noise based on subgraphs. The method is evaluated on various downstream tasks to demonstrate its effectiveness, such as 2D and 3D property prediction tasks.
Strengths: - The paper offers an interesting perspective on existing molecular diffusion models and proposes several approaches to address them accordingly, which is inspiring.
- The paper presents many experiments and analysis to illustrate the results, which could provide valuable insights to the community.
Weaknesses: - Although the proposed method effectively addresses the identified limitations, it does not provide much chemical intuition for the design. The authors claim that existing methods neglect the dependency in substructures, but the proposed method does not seem to integrate or learn the inherent molecular substructure information. The decomposition does not seem to be based on significant chemical knowledge, and the interactions or relations between various substructures are not explored. The motivation is not heavily grounded in domain knowledge.
- The method is a little bit confusing. The entire training process includes many steps and three key training objectives: subgraph prediction, expectation state, and k-step same-subgraph diffusion. The paper mainly describes each component separately, but it is unclear how the three objectives are leveraged during training. Some details are also unclear. For example, "The mask vector $s_t$ is sampled from a discrete distribution $p_{s_t}(S|G)$": does $p_{s_t}$ vary for each molecule? How exactly is the distribution obtained? Also, an overall framework could help better understand the process.
- The diffusion steps number is 5000, which is quite large. Molecular property prediction datasets normally contain small molecules. Are there any specific reasons for using such large steps? What are the computational costs?
- The proposed method seems impractical due to its complexity, while the performance improvement is not very significant. It is difficult to justify the trade-off between model complexity and performance. Perhaps the authors could provide a more in-depth discussion to advocate for their method compared to other baselines, beyond just the prediction performance.
- Some ablation studies are missing. For example, $k$ seems to be an important hyperparameter. What are the effects of different $k$ values? Also, what about the performance of applying only one training objective?
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and positive feedback!
**[Weakness 1. Chemical intuition]**
Thank you for your insightful comment! We agree that chemical intuition and domain knowledge are crucial in molecular representation learning, and we'd like to address how our method incorporates these elements:
(1) While our method may not explicitly learn predefined substructures, it does capture atomic dependencies within the substructure through diffusion on subgraphs. This approach allows the model to implicitly learn relevant substructure information from the data, rather than relying on predetermined chemical fragments.
(2) Further, we would like to clarify that the subgraphs are obtained by decomposing the molecule along its rotatable (torsional) bonds. This torsional-based decomposition captures inherent molecular 3D substructure information. For example, a benzene ring is always preserved within the same subgraph, since the π bonds in benzene rings are not rotatable. The downstream tasks on 3D property prediction also demonstrate the effectiveness of our method.
**[Weakness 2. Overall methods]**
We thank the reviewer for this valuable feedback. We acknowledge that our method involves multiple components, which may have led to some confusion. We apologize for any lack of clarity in our presentation and would like to provide a more comprehensive explanation of how these components work together:
(1) As described in Section 4.5, in the forward process we use **expectation state diffusion** to reach state $km$ ($m:=\lfloor t/k \rfloor$) from the initial state $0$ (Phase I). Then $(t-km)$ steps of **same-subgraph diffusion** with the same subgraph $s_{km+1}$ take state $km$ to state $t$ (Phase II). During training, we use the denoising loss and the **subgraph prediction** loss to train the model (Equation 17).
(2) These components are not independent but rather complement each other:
- The subgraph prediction task helps the model learn local structural features.
- The expectation state reduces the complexity of $R^t$ and removes the unstable sampling of the masks $s_{1:km}$.
- The k-step same-subgraph diffusion accumulates more noise on the same subgraph from $km$ to $t$ for facilitating the convergence of the subgraph prediction loss.
(3) The subgraph is represented by a mask vector $s_t$. The distribution $p_{s_t}(s|G)$ is implicitly defined by the torsional-based decomposition method: breaking a torsional bond yields two subgraphs, so a molecule with $n$ torsional bonds yields $2n$ subgraphs in total. $p_{s_t}(s|G)$ is a random distribution over these $2n$ subgraphs and therefore varies for each molecule.
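The decomposition can be sketched in a few lines of plain Python (the adjacency-list representation and the toy 4-atom chain are our own illustration, not the paper's code): breaking one torsional bond in a connected molecular graph yields exactly two connected components, so $n$ torsional bonds give $2n$ candidate subgraphs.

```python
def subgraphs_from_torsional_bonds(adj, torsional_bonds):
    """adj: dict node -> set of neighbour nodes (an undirected molecular graph).
    For each torsional bond (u, v), delete it and return the two resulting
    connected components as frozensets, so n bonds yield 2n subgraphs."""
    subgraphs = []
    for u, v in torsional_bonds:
        # Copy the graph with the bond {u, v} removed.
        pruned = {a: {b for b in nbrs if {a, b} != {u, v}} for a, nbrs in adj.items()}
        for start in (u, v):                 # one component from each endpoint
            seen, stack = {start}, [start]
            while stack:                     # depth-first search
                x = stack.pop()
                for y in pruned[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            subgraphs.append(frozenset(seen))
    return subgraphs

# Toy chain 0-1-2-3 with a single torsional bond (1, 2):
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
subs = subgraphs_from_torsional_bonds(adj, [(1, 2)])
# subs == [frozenset({0, 1}), frozenset({2, 3})]
```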
**[Weakness 3. Diffusion steps number]**
Thank you for bringing up this important point about the number of diffusion steps and computational cost. In our experiments, using 5000 steps yields better performance, as shown in Table 3 for conformation generation. We also evaluate the downstream 3D property prediction tasks when pretraining with different numbers of steps; the results are shown in **Table #3** of the general response.
A large number of diffusion steps does not increase the computational cost of the downstream tasks, because we only use the diffusion denoising loss for pretraining. When predicting molecular properties, we never run the sampling process, which is where the high computational overhead lies.
**[Weakness 4. Complexity]**
Thank you for bringing this important concern to our attention.
(1) As shown in the paper, our method significantly outperforms the baselines on 3D molecular property prediction (Table 1 in the main text and Table 8 in the appendix). Beyond prediction performance, our method also improves sampling efficiency and generalization. Table 3 shows that SubgDiff significantly outperforms the baseline when using fewer diffusion steps, and Table 4 shows that SubgDiff consistently outperforms the other baselines on the cross-domain conformation generation tasks.
The above discussion will be added to the final version.
(2) Despite its multiple components, the method is practical to implement. We train the model using the objective in Eq. (17), which is similar to the typical diffusion model DDPM. The main difference from DDPM is the noisy state $R^t$ computed from Eq. (16), i.e. $R^{t} = \sqrt{\frac{\bar\gamma_{t}\bar\alpha_{m}}{\bar\gamma_{km}}}R^0 + (\frac{\bar\gamma_{t}}{\bar\gamma_{km}} p^2 \sum_{l=1}^{m} \frac{\bar\alpha_{m}}{\bar\alpha_{l}}(1-\frac{\bar\beta_{kl}}{\bar\beta_{(l-1)k}}) + 1-\frac{\bar\gamma_{t}}{\bar\gamma_{km}})\epsilon$, where the weighting coefficients can be obtained by simple scalar calculation. This adds very little complexity: in practice, the training speed is similar to that of the original diffusion model (GeoDiff). In addition, we provide the source code [here](https://anonymous.4open.science/r/SubGDiff/README.md), which makes our method easy to use.
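For concreteness, the scalar-coefficient computation can be transcribed term-by-term from the equation above (a sketch with placeholder all-ones schedule arrays; the real $\bar\alpha$, $\bar\gamma$, $\bar\beta$ come from the chosen noise schedule):

```python
import numpy as np

def noisy_state(R0, eps, t, k, alpha_bar, gamma_bar, beta_bar, p):
    """Closed-form noisy state R^t, transcribed from Eq. (16) as quoted
    above. alpha_bar, gamma_bar, beta_bar are cumulative noise schedules
    (placeholders here); p is the subgraph-mask probability."""
    m = t // k
    scale = np.sqrt(gamma_bar[t] * alpha_bar[m] / gamma_bar[k * m])
    acc = sum((alpha_bar[m] / alpha_bar[l]) * (1.0 - beta_bar[k * l] / beta_bar[(l - 1) * k])
              for l in range(1, m + 1))
    coef = (gamma_bar[t] / gamma_bar[k * m]) * p**2 * acc + 1.0 - gamma_bar[t] / gamma_bar[k * m]
    return scale * R0 + coef * eps

# Sanity check: with trivial all-ones schedules every noise term
# vanishes and the "noisy" state equals the clean coordinates.
ones = np.ones(6)
R0 = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
eps = np.ones_like(R0)
R_t = noisy_state(R0, eps, t=4, k=2, alpha_bar=ones, gamma_bar=ones, beta_bar=ones, p=0.5)
```

As the comment notes, this only illustrates that the weighting reduces to scalar arithmetic; the function and its signature are our illustration, not the released code's API.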
**[Weakness 5. Ablation]** Thank you for the constructive suggestion. (1) We provide the sensitivity analysis of $k$ in $k$-step same-subgraph diffusion. The results are shown in **Table #1** of the general response.
(2) When using only the denoising loss as the training objective, without the subgraph prediction loss, the method cannot be used for sampling because $s_t$ is inaccessible during inference. Nevertheless, the pure denoising objective can still be used for pretraining; the results on the downstream force prediction tasks are shown in **Table #2** of the general response.
---
Rebuttal Comment 1.1:
Title: Kind reminder to the reviewer
Comment: Dear reviewer,
We kindly ask if you could inform us whether your concerns have been adequately addressed in our rebuttal, or if you have any further questions. We are committed to responding promptly to any additional inquiries you may have.
Thank you for your time and valuable feedback.
Best regards,
The Authors
---
Rebuttal Comment 1.2:
Comment: Thank the authors for the responses, which seem sound to me. I am maintaining my score since I am not an expert in diffusion models. I suggest that the authors include this additional information in the revision to make the paper clearer. | Rebuttal 1:
Rebuttal: ## General Response
Dear reviewers,
Thanks to all the reviewers for your time and effort during the review process and for the constructive advice. In addition to responding to each reviewer individually, we conducted an ablation study and sensitivity analyses of the $k$-step same-subgraph diffusion and the number of diffusion steps. The results are shown below in Table #1, Table #2, and Table #3.
We also provide curves of subgraph prediction error during training in the attached PDF.
**Table #1. The sensitivity analysis for different $k$ in $k$-step same subgraph diffusion on conformation generation.**
| k | 1 (LD sampling*) | 10 | 25 | 50 |
|----------|---------|----------|----------|--------------|
| COV-R(Mean) (%) ↑ | 89.70 | 88.06 | **89.78** | 89.02 |
| COV-R(Median) (%) ↑ | 93.96 | 93.26 | **94.17** | 93.21 |
| MAT-R(Mean) (Å) ↓ | 0.5235 | 0.2623 | **0.2417** | 0.2706 |
| MAT-R(Median) (Å) ↓ | 0.2710 | 0.2597 | **0.2449** | 0.2709 |
| COV-P(Mean) (%) ↑ | 49.90 | 47.38 | **50.03** | 48.63|
| COV-P(Median) (%) ↑ | 47.00 | 47.06 | **48.31** | 46.77|
| MAT-P(Mean) (Å) ↓ | 4.7816 | 4.2922 | **0.5571** | 0.7512 |
| MAT-P(Median) (Å) ↓ | 0.5378 | 0.5615 | **0.4921** | 0.4995 |
*When k=1, the subgraph predictor cannot predict the correct subgraphs, so we use LD sampling rather than DDPM sampling.
**Table #2. Ablation study of the pretrained model components, evaluated on the MD17 force prediction (MAE) downstream task.**
| Component | Aspirin ↓ | Benzene ↓ | Ethanol ↓ | Malonaldehyde ↓ | Naphthalene ↓ | Salicylic ↓ | Toluene ↓ | Uracil ↓ |
|--------------------|-----------|-----------|-----------|-----------------|---------------|-------------|-----------|----------|
| w/o subgraph prediction loss | 1.193 | 0.305 | 0.321 | 0.478 | 0.456 | 0.678 | 0.406 | 0.470 |
| w/o k-step same subgraph | 1.011 | 0.278 | 0.289 | 0.461 | 0.448 | 0.665 | 0.377 | 0.464 |
| w/o expectation state | 0.931 | 0.281 | 0.276 | 0.465 | 0.421 | 0.601 | 0.380 | 0.446 |
| expectation state + subgraph prediction loss | **0.880** |**0.252** | **0.258** | **0.459** | **0.325** | **0.572** | **0.362** | **0.420**|
**Table #3. Sensitivity analysis of the number of diffusion steps, with the pretrained model evaluated on the MD17 force prediction (MAE) downstream task.**
| #diffusion steps| Aspirin ↓ | Benzene ↓ | Ethanol ↓ | Malonaldehyde ↓ | Naphthalene ↓ | Salicylic ↓ | Toluene ↓ | Uracil ↓ |
|--|--|-|-|--|-|-|-|-|
| 100 | 0.921 | 0.282 | 0.275 | 0.471 | 0.348 | 0.596 | 0.394 | 0.465 |
| 1000 | 0.901 | 0.261 | 0.266 | 0.463 | 0.336 | 0.590 | 0.385 | 0.438 |
| 5000 | **0.880** |**0.252** | **0.258** | **0.459** | **0.325** | **0.572** | **0.362** | **0.420**|
Best regards,
The authors
Pdf: /pdf/cc3098581dd5dffc4458288da282f9ba26205456.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Approximate Size Targets Are Sufficient for Accurate Semantic Segmentation | Reject | Summary: This paper proposes a new weakly supervised semantic segmentation task. This task uses pixel-level categorical distribution as the label in the training stage. KL divergence is used as the training loss. Experiments on three public segmentation datasets show the effectiveness of the proposed method.
Strengths: 1.The proposed task is interesting. It provides the community another choice for segmentation with less annotation effort.
2.The proposed KL divergence loss is effective, demonstrated by experiments on three public datasets. It achieves performance comparable to methods using more expensive labels, like the box supervised one.
3.The proposed method is robust to size target error, which makes it more practical.
4.The writing is fluent and easy to follow.
Weaknesses: 1.Labeling effort on complex images. Images from PASCAL VOC (like Figure 1) are easy to annotate. It contains few classes and the background is generally clean. The density of target objects is low, and hence it’s also suitable for the proposed grid-based size target annotation way.
However, in practice, scenes are much more complex, with more classes, more crowded objects, and complex backgrounds. The authors are recommended to show the annotation effort on such images, e.g. from Cityscapes and ADE20K. I think that as scenes become more complex, the labeling effort will increase significantly, and the labeling effort of size targets will be much higher than that of tags, since tagging is less affected in such cases.
2.Model performance on complex images. Similarly, it’s recommended to evaluate the model’s performance with the proposed loss on these complex datasets. This will give a more comprehensive understanding of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: Will the code, the labeling tool, and the labeled images be publicly available, so the community can use these tools to annotate their own datasets?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1:** Labeling effort on complex images. Images from PASCAL VOC (like Figure 1) are easy to annotate. It contains few classes and the background is generally clean. The density of target objects is low, and hence it’s also suitable for the proposed grid-based size target annotation way.
However, in practice, scenes are much more complex, with more classes, more crowded objects, and complex backgrounds. The authors are recommended to show the annotation effort on those images, like images from Cityscapes and ADE20K. I think when the scenes become more complex, the labeling effort will increase significantly. The labeling effort of size target will be much more than the tag way, since tagging will be less influenced in such cases. \
**Response:** The image in Figure 1 is a selected example to showcase different forms of supervision and does not represent a typical image in PASCAL. The assumption that "images from PASCAL are easy to annotate" is questionable. PASCAL is highly diverse and includes images with complex backgrounds. While the number of categories in PASCAL is fewer than in datasets like Cityscapes and ADE20K, our size-annotation tool focuses the user on one category at a time, which helps to manage the complexity.
We appreciate the suggestion to evaluate on more datasets. Given our limited resources, we chose to align with most prior works in WSSS that use PASCAL and COCO for evaluation. Additionally, we evaluated our method on a medical dataset to provide a comprehensive study. We acknowledge the value of assessing our method on more complex datasets like Cityscapes and ADE20K and will consider including these in future work.
**Comment 2:** Model performance on complex images. Similarly, it’s recommended to evaluate the model’s performance with the proposed loss on these complex datasets. This will give a more comprehensive understanding of the proposed method. \
**Response:** This comment is similar to the previous one. We appreciate the suggestion and will strive to accommodate it in future work.
**Comment 3:** Will the code, the labeling tool, and the labeled images be publicly available, so the community can use these tools to annotate their datasets? \
**Response:** Yes, the code, labeling tool, and labeled images will be posted.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. However, I think it is a consensus in the segmentation community that PASCAL VOC is much simpler than ADE20K and Cityscapes in various aspects, including image resolution, scene complexity, and annotation granularity. The segmentation performance of WSSS methods on these benchmarks also reveals this. Hence it is strongly recommended to experiment on these benchmarks, which would make this work more comprehensive.
---
Rebuttal 2:
Title: on more complex datasets, etc...
Comment: Thanks for your feedback inspiring an interesting discussion. We would like to bounce back several related thoughts.
**A [complex classes, not datasets]**: Your motivation for re-evaluating our ideas on more datasets is based on a speculation that human size annotation accuracy will decrease on more "complex datasets", whatever that may mean. Since we ask annotators to size only one class at a time, it makes sense to focus the discussion on some specific class that might be harder to size than classes on PASCAL. **Could you please name some particular class on Cityscapes or ADE20K that you think is harder to size than “birds”, and why?** We do not see how image resolution or abstract “scene complexity” is relevant. If “annotation granularity” means high-frequency boundary details, these are mostly irrelevant for size estimation accuracy (due to “averaging”), unless many thin structures dominate the object shape, as is often the case with birds.
We agree that harder-to-size classes may exist, but it would be helpful to have a specific example from the datasets you suggested that would make this discussion less speculative. We believe that significantly higher human errors are possible for some **extreme** examples of classes that this discussion can identify (we can name them in limitations). This may also degrade the corresponding results. However, our results are sufficiently convincing for many representative objects that could be found in practical applications. Things work even for sufficiently challenging classes like birds. One should also keep in mind that **better assistance tools could be designed for specific complex classes**. In general, our results are meant to be a **proof-of-concept** and we believe they sufficiently demonstrate the potential of our novel ideas.
Do you seriously doubt that our novel ideas could be useful?
**B. [accuracy is an issue for all forms of supervision]**: For example, more complex objects lead to more mistakes for boxes, scribbles, and full pixel-masks in particular. We saw many mistakes in the (so-called) ground truth masks in PASCAL (missing birds in a large group, ignored fine or thin shape details, e.g. in bikes, and ambiguous categories such as a "VW camper" car labeled as a bus, or a toy labeled as a truck). That is, the accuracy of supervision is a general issue, not limited to size annotations, and its full understanding is well beyond the scope of our work. In particular, likely problems with the ground truth for complex classes (or datasets) may complicate the analysis of the effect of size annotation accuracy.
**C. [room for future work]**: More labeled data and more experiments are always good, but we propose to leave some for future work (e.g. for a journal version), particularly because it would require a significant amount of annotation just to identify some specific “hard” classes.
**D. [prior WSSS literature mostly uses PASCAL; perhaps it is sufficient as a proof of concept?]** Note that the most significant/influential/conceptual prior work on WSSS is focused on PASCAL, to the best of our knowledge. While one can argue that other datasets are more complex (in some ways), they are more complex for all methods, full or weakly supervised alike. In particular, are there examples in prior WSSS work where a comparison on PASCAL is reversed on other datasets?
Strengths: 1. Originality: The use of approximate size targets as a form of weak supervision for semantic segmentation is novel and creative.
2. Quality: The experimental results are comprehensive and demonstrate the effectiveness of the proposed method across different datasets and segmentation architectures.
3. Significance: The approach has significant implications for reducing annotation costs in semantic segmentation tasks, making it highly relevant to practical applications.
Weaknesses: 1. Simplicity of Method:While the proposed method is innovative, it seems relatively simple. There might be opportunities to enhance its contributions with further development or by integrating additional techniques.
2. Limited Scope of Evaluation: While the paper evaluates the method on several datasets, it would benefit from a broader range of scenarios, including more diverse and complex images.
Technical Quality: 2
Clarity: 2
Questions for Authors: Is it possible to give ablation experiments about these two losses? How much does the two losses affect the performance of the model?
For this method, is it possible to give more information about the time comparisons of the various methods in terms of time spent on labeling?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have addressed the limitations related to annotation errors and have demonstrated the robustness of their method to these errors. However, it would be beneficial to discuss potential limitations in more detail, such as the scalability of the method to larger and more diverse datasets, and any assumptions made about the nature of the size target annotations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1:** Simplicity of Method: While the proposed method is innovative, it seems relatively simple. There might be opportunities to enhance its contributions with further development or by integrating additional techniques. \
**Response:** We appreciate the recognition of the simplicity of our method. We disagree with adding additional techniques just to make our work less simple. Simplicity is one of our research philosophies, also known as "Occam's Razor" in ML and many other fields of Science. It is famously encapsulated in the saying by Albert Einstein, "Everything should be made as simple as possible, but not simpler." It is due to simplicity and generality that there are many opportunities to enhance our method and integrate it with other techniques, as suggested by the reviewer. Since our work is the first to propose size targets for semantic segmentation, we focus this paper on fully covering the general properties of our main ideas, which should facilitate their use in later works by us and others.
**Comment 2:** Limited Scope of Evaluation: While the paper evaluates the method on several datasets, it would benefit from a broader range of scenarios, including more diverse and complex images. \
**Response:** We find that most weakly-supervised semantic segmentation (WSSS) methods use PASCAL and COCO datasets for evaluation. Some prior work only uses one dataset, PASCAL, for evaluation. We acknowledge that some prior work may have been evaluated on other datasets. However, due to resource limitations, we align with the majority of prior WSSS work by using PASCAL and COCO. This facilitates proper comparison with the most relevant prior art in WSSS. Furthermore, to demonstrate the effectiveness of our method, we also tested our approach on a medical dataset and with human annotation. We believe that the comprehensiveness of the presented evaluation is at least on par with related prior works at major conferences. We will include more diverse and complex evaluations in a future journal publication.
**Comment 3:** Is it possible to give ablation experiments about these two losses? How much do the two losses affect the performance of the model? \
**Response:** If the reviewer is referring to the size-target loss and the CRF loss in our total loss (Eqn 12), the ablation study can be found in Figure 4 (left), where the red plots indicate the performance with the size-target loss (Eqn 2) and the blue plots present the performance with the total loss (Eqn 12). If the reviewer refers to some other two losses, please clarify which ones so that we can address such concerns.
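If helpful, the size-target term alone can be sketched as follows (a simplified reading with hypothetical names; the exact zero-avoiding KL formulation of Eqn 2 and the CRF term of Eqn 12 are specified in the paper):

```python
import numpy as np

def size_target_loss(logits, v, eps=1e-8):
    """Zero-avoiding KL between the size target v (a categorical
    distribution over K classes) and the average prediction over pixels.
    logits: array of shape (H, W, K)."""
    z = logits - logits.max(axis=-1, keepdims=True)            # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)  # per-pixel predictions
    v_bar = probs.mean(axis=(0, 1))                            # expected average prediction
    mask = v > 0                        # KL(v || v_bar) forbids v_bar -> 0 where v > 0
    return float(np.sum(v[mask] * np.log(v[mask] / (v_bar[mask] + eps))))

uniform_logits = np.zeros((2, 2, 2))   # every pixel predicts a uniform distribution
loss_matched = size_target_loss(uniform_logits, np.array([0.5, 0.5]))   # target met
loss_skewed = size_target_loss(uniform_logits, np.array([1.0, 0.0]))    # target missed
```

The matched case gives (near) zero loss while the skewed target is penalized, illustrating why training with only this image-level signal can still drive pixel-level predictions.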
**Comment 4:** Is it possible to give more information about the time comparisons of the various methods in terms of time spent on labeling? \
**Response:** The average labeling time comparisons across different forms of weak supervision are illustrated in Figure 1. Note that such information for existing supervision methods is collected from well-known prior papers (cited in our work). For our size supervision, we have provided our average labeling speed in Figure 1. Class-specific speeds are detailed in Table 1 for the cat, dog, and bird classes. We are not sure what else could be useful. If there are some specific ideas, please share them. | Summary: This paper introduces a novel image-level supervision method for semantic segmentation using approximate segment size targets. It utilizes categorical distributions for expected average predictions, reducing annotation cost and complexity. The authors propose a zero-avoiding KL divergence as a training loss, compatible with any segmentation architecture, and demonstrate significant robustness to size target errors, improving generalization. The method achieves state-of-the-art performance on multiple datasets with standard segmentation models like ResNet101. Additionally, it requires minimal extra information and no architectural changes, making it a practical and effective solution for weakly-supervised semantic segmentation in real-world applications.
Strengths: 1. The paper introduces a novel form of image-level supervision for semantic segmentation using approximate segment size targets. This approach is original in its use of categorical distributions for expected average predictions, providing a fresh perspective on weakly-supervised segmentation methods.
2. The quality of the research is high, with comprehensive experiments conducted on multiple datasets. The use of a zero-avoiding variant of KL divergence as a training loss is well-justified and demonstrates robustness to size target errors. The empirical results show that the method achieves state-of-the-art performance using standard segmentation models.
Weaknesses: 1. The paper claims robustness to size target errors but provides limited detailed analysis on this aspect. Including more experiments to quantify and analyze how different levels of size target errors impact performance would provide a clearer understanding of the method's robustness.
2. Lack of related work. The paper’s logical flow and organization need improvement.
3. The paper lacks comprehensive comparisons with the latest models, such as "SFC: Shared Feature Calibration in Weakly Supervised Semantic Segmentation (AAAI24)".
Technical Quality: 2
Clarity: 2
Questions for Authors: see the Weaknesses
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: 1. Fig and Figure are inconsistent in Line 24
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1:** The paper claims robustness to size target errors but provides limited detailed analysis on this aspect. Including more experiments to quantify and analyze how different levels of size target errors impact performance would provide a clearer understanding of the method's robustness. \
**Response:** We have conducted a detailed analysis of the network's robustness to size errors. The reviewer may have missed these supporting experiments because they are scattered throughout the paper. Here is a summary of all relevant experiments related to robustness. We demonstrate robustness in Figures 4, 5, and 6, showing performance with respect to different levels of size errors (mRE level in our Gaussian noise model). Specifically, Figure 4 shows the robustness of the networks using synthetic sizes with various levels of size errors on Pascal. Figure 5 (right) and Figure 6 (left) demonstrate robustness under similar experimental settings on a subset of Pascal with cat, dog, and bird classes, and on a medical dataset.
**Comment 2:** Lack of related work. The paper’s logical flow and organization need improvement. \
**Response:** We have a discussion of related work in the introduction, specifically in Section 1.2. We would be happy to address more specific feedback on issues with the logical flow and organization.
**Comment 3:** The paper lacks comprehensive comparisons with the latest models, such as "SFC: Shared Feature Calibration in Weakly Supervised Semantic Segmentation (AAAI24)". \
**Response:** Thank you for pointing this out. We will add the SFC results (71.2% on PASCAL and 46.8% on COCO with the R101 backbone) to the multi-stage section of Table 2. However, it is worth noting that our size-target approach is designed to be end-to-end, without any architectural modifications, similar to fully supervised systems. Due to its simplicity and generality, complex multi-stage methods can be designed on top of it, but it is not fair to compare our results, based on end-to-end standard architectures, with multi-stage systems. The multi-stage methods listed in Table 2 are included for completeness, not as direct performance benchmarks.
Strengths: 1. Using object size as a form of supervision is both innovative and interesting.
2. The proposed method is straightforward and easy to understand.
Weaknesses: 1. The title of the paper is misleading. It claims that approximate size targets are sufficient, but the work also uses image labels for supervision.
2. The most important comparison in Figure 1 is between 'Tag' and 'Size target,' as this validates the significance of using target size supervision. To clearly demonstrate that 'Size target' is superior to 'Tag' under identical conditions, it would be better to use the same architecture for both comparisons.
3. Labeling the size of objects can be challenging for humans and may introduce significant noise, especially for tiny objects. Although the authors demonstrate impressive accuracy with up to 8% size target errors, this remains a stringent annotation standard, particularly for small objects. For instance, as seen in Table 1, the mean relative error (mRE) often exceeds 10% during human annotation in the Pascal VOC dataset. Moreover, estimating target sizes in Pascal VOC is relatively easy since objects are typically large and centered. However, labeling images in more complex datasets, such as COCO, might result in a higher mRE.
4. In Table 1, the authors should also report the speed of tag annotation to highlight the cost of estimating target sizes.
5. The proposed method is straightforward and impressive for its end-to-end training, especially considering that existing weakly supervised semantic segmentation (WSSS) methods typically use CAM and two-step training. However, as shown in Table 2, while the proposed method achieves comparable accuracy to state-of-the-art WSSS methods, it relies on additional supervision and a high annotation standard (8% mRE). Moreover, Table 2 indicates that the accuracy with only tag supervision is close to that of fully supervised methods, suggesting that tag supervision alone may be sufficient for segmentation.
Technical Quality: 2
Clarity: 2
Questions for Authors: See Q3 and Q5 in weaknesses, pls.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors do not discuss the limitations and broader impact of their method, which necessitates a dedicated discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1:** The title of the paper is misleading. It claims that approximate size targets are sufficient, but the work also uses image labels for supervision. \
**Response:** We would like to reassure the reviewer that there was no intention to mislead the readers about the size-target supervision including class tag information. We tried to be clear about it. For example, the abstract states that size targets are image-level supervision (line 1) **extending** standard class-tag labels (line 11). We believe that the English word "extending" means "enlarging", which implies "inclusion". We gave an example of the size target v=(0,.15,0,...,0,.75) on line 24 clarifying that it is a categorical distribution over K classes. To further clarify the "inclusion" of tags, we can point out that the corresponding class tags t=(0,1,0,...,0,1) can be easily extracted, e.g. by the "ceiling" operator t = ceil(v). Would that help?
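The extraction mentioned above is a one-liner (a trivial sketch; the example vector is shortened from the one on line 24):

```python
import math

v = [0, 0.15, 0, 0.75]             # example size target over K classes
t = [math.ceil(x) for x in v]      # class tags recovered by the "ceiling" operator
# any class with a nonzero size fraction is tagged as present
```

This makes the "inclusion" explicit: the tag vector is a deterministic function of the size target.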
**Comment 2:** Using the same architecture to compare 'Tag' and 'Size target' in Figure 1. \
**Response:** We understand the reviewer's concern about not using the same architecture for 'Tag' and 'Size target' in Figure 1. Since we already have the 'Size' result on WR38 architecture in Table 2 (72.7%), we will add this number to the "size" column in Figure 1. Then, it can be compared fairly to all other supervisions in this Figure. Thank you for pointing this out.
**Comment 3:** Labeling the size of objects can be challenging for humans and may introduce significant noise, especially for tiny objects. Although the authors demonstrate impressive accuracy with up to 8% size target errors, this remains a stringent annotation standard, particularly for small objects. For instance, as seen in Table 1, the mean relative error (mRE) often exceeds 10% during human annotation in the Pascal VOC dataset. Moreover, estimating target sizes in Pascal VOC is relatively easy since objects are typically large and centered. However, labeling images in more complex datasets, such as COCO, might result in a higher mRE. \
**Response:** We acknowledge that labeling tiny objects can be challenging. However, this issue is not unique to size supervision and applies to other forms of supervision. We have observed that many ground truth masks in PASCAL are inaccurate for tiny objects (e.g. many void/boundary labels 255). Tiny objects are also very hard to find and identify in tag supervision.
Regarding our choice of 8% mRE for synthetic sizes, which differs from the 16% mRE for humans: we argue that mRE-matching is wrong. Figure 5 (left) compares our Gaussian noise model at 16% mRE with the human error distribution, also at 16% mRE. This reveals that the latter distribution is very different and likely contains "heavy tails", as discussed on line 244. We could have tried to find a better-matching "heavy-tail" noise distribution, but instead we chose to simply adjust the mRE of our synthetic Gaussian model to match the performance of the human errors. As shown in Figure 5 (right), an 8% mRE Gaussian closely approximates the human error performance (see line 256). We speculate that neural networks are robust to heavy tails, so the larger human mRE statistic (inflated by such heavy tails) is largely irrelevant. We thank the reviewer for bringing out these important points and will gladly clarify them in the paper.
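For concreteness, a Gaussian size-noise model at a prescribed mRE level can be sketched as below (the renormalization step and the per-class mRE definition are our assumptions here, not necessarily the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_sizes(v, m_re):
    """Perturb a size target v with multiplicative Gaussian noise whose
    mean relative error is m_re, then renormalize to a distribution.
    For a zero-mean Gaussian, E|N(0, sigma)| = sigma * sqrt(2/pi),
    so sigma = m_re * sqrt(pi/2)."""
    sigma = m_re * np.sqrt(np.pi / 2)
    noisy = v * (1 + sigma * rng.standard_normal(v.shape))
    noisy = np.clip(noisy, 0, None)      # sizes cannot be negative
    return noisy / noisy.sum()           # keep it a categorical distribution

v = np.array([0.25, 0.75])
samples = np.stack([corrupt_sizes(v, 0.08) for _ in range(2000)])
m_re = np.mean(np.abs(samples - v) / v)  # empirical mean relative error
```

A heavy-tailed alternative (e.g. Student-t noise) would keep the same mRE statistic while placing far more mass on large errors, which is exactly the mismatch discussed above.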
Regarding the comment that "sizing" objects in other datasets like COCO may result in a higher mRE... We think this is debatable as PASCAL is highly diverse and includes objects of many types and shapes: tiny and large, thin and thick, centered and non-centered, simple and complex, single or plural, occluded or not, etc. One clear difference in COCO is a larger number of categories, but our size-annotation tool focuses the user only on one category at a time. It is also possible to further improve our size annotation assistance tool, e.g. by fine-tuning it to objects of each specific class.
**Comment 4:** The authors should also report the speed of tag annotation to highlight the cost of estimating target sizes in Table 1. \
**Response:** We will copy the average tag annotation speed from Figure 1 to make Table 1 more self-contained.
**Comment 5:** The method is impressive for its end-to-end training, especially considering that existing WSSS methods typically use CAM and two-step training. However, it relies on additional supervision and a high annotation standard (8% mRE). Moreover, Table 2 suggests that tag supervision alone may be sufficient for segmentation. \
**Response:** We appreciate the reviewer's recognition of the strengths of our proposed method, particularly its end-to-end training capability. This question regarding "tag vs. size" is highly relevant and brings up an important point for discussion.
In Table 2, for Pascal the best end-to-end performance with tag supervision (76.2%, using a non-standard dual-stream architecture) is only marginally weaker than our size supervision (78.1%, using a standard backbone), and both are comparable to full supervision (81.4%). However, the best tag-only performance for end-to-end methods on COCO (51.0%, also dual-stream) is considerably lower than 56.3% for our simple size-based approach on a standard backbone, which is much closer to full supervision (60%). This makes it hard to claim that tag-only supervision is sufficiently good for accurate segmentation, at least for the current methodologies. Note that even the best (we could find) multi-stage system for tag-only supervision achieves only 53.7% on COCO. (We recently found the result in the paper "Weakly supervised co-training with swapping assignments for semantic segmentation". It will be added to our Table 2.)
Regarding the "high annotation standard" (8% mRE) mentioned by the reviewer: as detailed in our reply to comment 3, we use 8% mRE for our Gaussian noise as a good match for human error performance. It is not a "higher-than-human" standard.
---
Rebuttal Comment 1.1:
Comment: The response seems superficial and doesn't address my concerns effectively. Here are the specific issues:
In Q1: My issue is that the title is misleading, but the authors primarily discuss the abstract, which doesn't resolve the problem.
In Q4: Figure 1 shows the average tag annotation, while Table 1 should reflect the annotation speed for three different classes. How are these related?
In Q3: The main question is whether an 8% MRE is challenging for human annotation. It is good for the authors to conduct experiments on three classes, but these classes don't represent all classes in the VOC dataset. Additionally, the mIOU for these three classes differs from the mIOU for the entire dataset, as other classes might influence the predictions for these three. Therefore, using segmentation accuracy for only three classes (Figure 5) might not accurately reflect matching of mREs of human annotation and synthetic data.
In Q3: The authors claim that PASCAL is highly diverse, but is there any evidence to support this? COCO is treated as an example, but what about datasets with many images containing multiple instances of the same semantic class, as shown in Figure 1?
---
Rebuttal 2:
Comment: > The response seems superficial and doesn't address my concerns effectively.
Superficial? We tried our best and in good faith, but please consider that in some cases we had to guess what your real concerns are. We hope to gradually reduce misunderstandings.
Q1 is a good example. We had to guess the question presented in the **but** part:
> “It claims that approximate size targets are sufficient, **but** the work also uses image labels for supervision.”
We interpreted that as a concern that our paper generally hides the fact that size targets are an extension of tags. Our rebuttal addresses this in detail. If you are specifically concerned just about the title, please note that all other forms of weak supervision (boxes, points, etc) also implicitly include tag info, but no prior work explicitly states that in their titles.
We also think that it is rather obvious that size targets “extend” class tags. In any case, the reader does not have to wait too long and this is emphasized right in the abstract. We like our title, but we are also open to your suggestions about it.
> In Q4: Figure 1 shows the average tag annotation, while **Table 1 should reflect the annotation speed for three different classes**. How are these related?
Table 1 has size target annotation speed for three classes individually only because our annotation assistance tool works with one class at a time and we can collect such info separately for each class. All timing info on other forms of weak supervision (in Figure 1) is collected from prior work [46]. We do not have any information about tagging individual classes. This information may not be even possible depending on how the corresponding assistance tools work.
> In Q3: The main question is **whether an 8% MRE is challenging for human annotation**.
As we hoped to clarify in the rebuttal, mRE is irrelevant here (it is mainly needed as a scalar tuning parameter controlling our noise model). We believe a more relevant question for our paper is **whether the quality of human size annotation is good enough to train the network to produce the segmentation quality comparable with full pixel-level supervision**. Our experiments conclusively confirm this is possible. The consistency of our results (human and synthetic, across three datasets) is sufficiently convincing, in our opinion.
First, we use (our) human annotation for a subset of PASCAL on cat, dog, and bird classes, individually (see binary segmentation in Table 1) and together (four-class segmentation in Figure 5 right). For example, the latter achieves 89.6% mIOU for all 4 classes, while full supervision is 92.2% for the same four classes. "Bird" is a particularly complex class on PASCAL due to the huge variation in size and plurality of objects. For birds only, human annotation achieves 86.4% accuracy, while full supervision is 88.8% (Table 1). We are not sure why the reviewer thinks that our classes are not representative of PASCAL.
Second, due to limited human resources, we could not size-annotate all PASCAL classes, COCO, or other segmentation datasets. Instead, we developed a synthetic noise model for corrupting true sizes easily available on all these sets. Our noise model was tuned to match the segmentation quality for human annotation on the three available PASCAL classes (which vary in complexity). Assuming that our tuning by matching segmentation quality is technically justified (see next point), we provide further evaluations on all PASCAL classes and COCO, which further confirm our claim. Of course, we cannot guarantee that our ideas work in 100% of all cases. No one can in computer vision. We could not claim that even if we fully human-annotated PASCAL, COCO, Cityscapes, ADE20K, etc. However, we believe we provided sufficient evidence on real and synthetic experiments for the promising potential of our ideas.
> **Other classes might influence** the predictions for these three (cat, dog, bird).
The following answer is based on a particular guess of the real concern here. When matching the segmentation quality (for synthetic and human annotations) we use the same setting for both (e.g. four-class segmentation). So “other classes” have no influence here. When adding “more classes”, they should make the prediction problem equally more challenging for both human and synthetic annotations. However, “more classes” do not affect the size annotation accuracy in both cases. Indeed, we corrupt target sizes independently for each class, the humans also evaluate only one class at a time. In both cases, we convert such corrupted sizes to a distribution by normalization mainly for the sake of the loss function (KL divergence), but this is the same process for human or synthetic targets.
If this does not help, please elaborate. We will try again.
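For concreteness, the normalization step described above (converting per-class size targets into a distribution used in a KL-divergence loss) can be sketched as follows. This is an illustrative sketch, not the paper's exact implementation; the epsilon smoothing is our assumption here.

```python
import numpy as np

def size_targets_to_distribution(sizes, eps=1e-8):
    """Normalize per-class (possibly corrupted) size targets into a distribution.

    The same normalization applies whether sizes come from human annotators
    or from a synthetic noise model.
    """
    s = np.maximum(np.asarray(sizes, dtype=float), 0.0)
    return (s + eps) / (s.sum() + eps * len(s))

def kl_loss(target_dist, predicted_dist, eps=1e-8):
    """KL(target || prediction) between the size distribution and predicted class areas."""
    t = np.asarray(target_dist) + eps
    p = np.asarray(predicted_dist) + eps
    return float(np.sum(t * np.log(t / p)))

target = size_targets_to_distribution([0.10, 0.30, 0.00])  # e.g. cat, dog, background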
> prove that images in Pascal are **diverse enough** or **more diverse**
We cannot do that. The claim that it is not diverse enough is equally speculative. Please see the next reply.
---
Rebuttal 3:
Title: extra comments on using more "complex" datasets
Comment: We noticed that our response to reviewer thaL may help to clarify our view on the need to test more complex datasets, which is similar to one of your points. However, we realized you might not see our response to that reviewer. Here is a copy of our relevant reply to thaL.
**[for us, complex are classes, not datasets]**: A motivation for re-evaluating our ideas on more datasets is based on a speculation that human size annotation accuracy will decrease on more "complex datasets", whatever that may mean. Since we ask annotators to size only one class at a time, it makes sense to focus the discussion on some specific class that might be harder to size than classes on PASCAL. Could you please name some particular class on Cityscapes or ADE20K that you think is harder to size than "birds" and why you think so? We do not see how image resolution or abstract "scene complexity" is relevant. If "annotation granularity" means high-frequency boundary details, they are mostly irrelevant for size estimation accuracy (due to "averaging"), unless many thin structures dominate the object shape, as is often the case with birds.
We agree that harder-to-size classes may exist, but it would be helpful to have a specific example from the datasets you suggested that would make this discussion less speculative. We believe that significantly higher human errors are possible for some extreme examples of classes that this discussion can identify (we can name them in limitations). This may also degrade the corresponding results. However, our results are sufficiently convincing for many representative objects that could be found in practical applications. Things work even for sufficiently challenging classes like birds. One should also keep in mind that better assistance tools could be designed for specific complex classes. In general, our results are meant to be a proof-of-concept and we believe they sufficiently demonstrate the potential of our novel ideas. | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and positive feedback on our size-target approach for image-level semantic segmentation. We are encouraged by their recognition of the novelty (GY7b, FTCz, AEBs, thaL), simplicity (GY7b, AEBs), and practicality (FTCz, AEBs, thaL) of our approach. We are glad many reviewers found our evaluation comprehensive (FTCz, AEBs), the approach clearly presented (GY7b, thaL), and the performance achieved state-of-the-art (FTCz) and comparable to methods using extensive labels (GY7b, AEBs, thaL).
Most of the reviewers' critical comments do not overlap and we address them individually for each reviewer. However, several reviewers indicated their interest in a discussion of the broader impact of our general method and its limitations. This general rebuttal section provides such a brief discussion.
Our title is an informal claim that "approximate size targets are sufficient for accurate segmentation". This is not a theorem, yet our main intention is to share with the community a surprising ("impressive", according to GY7b) finding that enriching class tags with approximate size information significantly simplifies the segmentation problem, even though the extra information remains image-level and excludes any object localization. We observe that many standard segmentation architectures can resolve all ambiguities and reach accuracies closely approaching full-mask supervision using only simple losses based on approximate size targets. In contrast, tags-only supervision on PASCAL currently requires complex multi-stage systems to achieve a similarly good quality, while on COCO even complex tag-only systems are significantly worse than what approximate size targets easily get based on standard end-to-end segmentation backbones.
Our new supervision principle for segmentation is general and could be useful in many practical applications - it does not require the design of complex multi-stage systems and avoids prohibitively expensive GT segmentation masks. We found that approximate size annotation is only marginally more complex than tag annotation - it is easy to design assistance tools that significantly simplify size estimation. Moreover, we found that the training process is robust to error levels that one can expect from a human annotator. We also believe that the new supervision principle itself is technically interesting and mathematically elegant. It extends binary tag indicators to soft distributions. It also poses many interesting open questions for further research, e.g. why is approximate image-level size information nearly as useful for neural networks as full pixel-level masks? Or, why is it hard to design simple (e.g. end-to-end) solutions based only on class tags, which seem only slightly weaker than approximate class-size information (particularly compared to pixel masks)? In particular, it further stimulates research into better loss functions for tag supervision that can reduce the gap with closely related size supervision. We are also surprised that some noise in size is even beneficial for the quality of training. These findings and questions could be interesting to the community. At least, this is what makes this paper interesting to us. | NeurIPS_2024_submissions_huggingface | 2024 |
UDON: Universal Dynamic Online distillatioN for generic image representations | Accept (poster) | Summary: The paper introduces a novel method for enhancing universal image embeddings through a multi-teacher knowledge distillation approach. The method, named UDON, employs a dynamic sampling technique and a shared backbone across multiple domain-specific teachers and a universal student model to efficiently distill domain-specific knowledge into a universal embedding. This embedding model can work as a foundation for many downstream task models, such as classification, retrieval, generation, etc., with a high potential impact on the community.
Strengths: 1) The introduction of multi-teacher distillation with a shared backbone is novel and addresses significant challenges in universal image representation.
(2) The dynamic sampling method that adapts based on domain-specific performance enhances learning efficiency and addresses the imbalance in training data distribution.
(3) Extensive experiments demonstrate that UDON outperforms existing methods, showcasing the effectiveness of the proposed techniques.
Weaknesses: (1) This paper misses a series of very important works on embedding learning [1,2]. [1] proposed Matryoshka representation learning (MRL), which OpenAI's recent embedding model adopts. MRL unfolds the embedding model into multiple dimensions such as 64, 128, 512 or more. This paper simply takes the 64-dimensional setting and cannot be extended to higher dimensions, which significantly limits its potential in applications.
[1] Kusupati, Aditya, et al. "Matryoshka representation learning." Advances in Neural Information Processing Systems 35 (2022): 30233-30249.
[2] Cai, Mu, et al. "Matryoshka Multimodal Models." arXiv preprint arXiv:2405.17430 (2024).
(2) Tab 1 is pretty confusing. There are "Off-the-shelf", "Specialist+Oracle", "ImageNet21k pretraining" and "CLIP pretraining". It is hard to understand the exact meaning of them. Moreover, the evaluation is not convincing. It misses some recent SOTA embedding models, such as SigLIP.
(3) Please unify the format of subtitles. In Line 253, there is "Implementation Details" but line 235 applies "Compared methods".
(4) There is no ablation study over the temperature and embedding dimension, which should be critical hyperparameters to explore.
Technical Quality: 2
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > __This paper misses a series of very important works for embedding learning [1,2]. The [1] proposed the Matryoshka representation learning (MRL) that the OpenAI's recent embedding model adopts. MRL would unfold the embedding model into multiple dimensions such as 64, 128, 512 or more. This paper simply takes the 64 and cannot be extended to higher dimensions which significantly limited its potential in applications.__
The main focus of our paper is on improving universal embedding models, following the setup proposed by [44]. Note that in the experimental setup of [44] the embedding dimension must be fixed to 64, which is the reason we chose this dimensionality. Regarding MRL, it is a technique proposed to enable adaptive embedding dimensions in deployment. We agree that this is an interesting direction, but it is not directly related to the topic explored in our paper, which proposes techniques to improve universal image embeddings in general. Combining UDON and MRL in the future would be an interesting exploration.
> __Tab 1 is pretty confusing. There are "Off-the-shelf", "Specialist+Oracle", "ImageNet21k pretraining" and "CLIP pretraining". It is hard to understand the exact meaning of them.__
The meaning of different settings is explained in section 4.1 "Compared methods", 4.2 and the caption of Table 1. The concept of the table is taken from [44 - Table 4].
> __Moreover, the evaluation is not convincing. It misses some recent SOTA embedding models, such as SigLIP.__
As mentioned in the "Compared methods" paragraph in Subsection 4.1 "Experimental settings", the Universal Embedding task and the corresponding UnED benchmark is only recently created, and we have done our best to include as many baselines as we can into the evaluation. We appreciate the reviewer additionally pointing us to the recent SigLIP model. We evaluated the SigLIP ViT-Base off-the-shelf model, which achieves a mean P@5 of 49.0% and a mean R@1 of 62.0% on the UnED benchmark. We will add it to the final version of the paper.
> __Please unify the format of subtitles. In Line 253, there is "Implementation Details" but line 235 applies "Compared methods".__
Thank you for this one, good catch! We will fix this by replacing "D" with "d".
> __There is no ablation study over the temperature and embedding dimension, which should be critical hyperparameters to explore.__
The value of the temperature hyperparameter was tuned on the validation set independently of other hyperparameters and kept fixed. For the rebuttal, we provide additional numbers alternating the temperature value in the full UDON method (IN21k pre-trained):
| temperature value |mean P@5(%)|mean R@1(%)|
|------|------|------|
| 0.1 (paper) | 53.9 | 65.3 |
| 1.0 | 53.6 | 65.0 |
| 0.05 | 53.2 | 64.8 |
| 0.01 | - (Diverged) | - (Diverged) |
all of which perform significantly worse than the one used in the paper. We will add this study to the final version of the paper.
For the teacher embedding dimension, there is already an ablation in the paper, see Table 2 row 5, and corresponding paragraph "Online teacher dimensionality" in Subsection 4.3.
Regarding the student embedding dimensionality, we keep it fixed to 64, given the constraints of the UnED benchmark [44], as discussed above.
---
Rebuttal 2:
Title: After Rebuttal
Comment: Thank you for the detailed response from the authors. I will maintain my original score, as the novelty and impact of the work seem limited. The concept of multi-teacher distillation is not new, and testing solely on UnED with fixed dim size may not be sufficiently convincing. | Summary: The paper proposes Universal Dynamic Online distillatioN (UDON), which is a multi-teacher distillation method designed for universal image representations. UDON adopts a knowledge distillation strategy by distilling information from multiple teacher model trained for different domains to a student model to learn the universal embedding. It also proposes a dynamic domain sampling strategy for balancing the different domains. The provided experimental results verify its effectiveness.
Strengths: The proposed design is simple and elegant. The authors describe the design and implementation in details, which makes it easy to follow.
The experimental results also demonstrate its effectiveness. In addition, many ablation experiments are conducted to give insight.
I suppose the paper is valuable enough to be accepted.
Weaknesses: **The reason why the proposed method is better than the previous work (USC) is not straightforward and clear enough for me.**
In the paper, the main baseline is the Universal Separate Classifier Training method (USC). It uses a backbone to learn a universal embedding, where multiple classifier heads for different domains are trained separately with the universal embedding as input. Compared with USC, UDON introduces distillation and dynamic domain sampling. I understand the part about dynamic domain sampling but feel confused about the distillation. As shown in Figure 2, UDON uses multiple extra classifier heads as teachers for distillation. Since the teacher heads and student head are all based on the previous embedding $E_b$, why can the distillation strategy help to build a better universal embedding $E_u$?
The authors explain the reason mainly on [Line 37 - Line 51] and [Line 164 - Line 169], while I am still confused about it.
On [Line 37 - Line 51], it says that "it is difficult to encode detailed knowledge about many image domains in a single model". Therefore, the authors propose to use knowledge distillation between a student and teachers model for different domains, while the student and teachers share the backbone. Does it mean that the difficulty remains for USC, but it is solved by UDON with the distillation? The story seems a bit conflicting to me. Because from my point of view, a backbone with more classifier heads is still a single model, since the heads are mostly very light-weight compared with the backbone.
In my opinion, the proposed distillation design is filtering the information from $E_b$, which keeps more useful and universal information for the student head to learn $E_u$.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comment about how the UDON method works compared to the USC baseline. We will provide some clarifications and explain our interpretation of the key components that allow UDON to produce a better universal embedding than previous work, based on the findings from the experiments of the paper.
The extra classifier heads used in UDON as teachers, mentioned by the reviewer, each consist of a shallow domain-specific projection (which produces the domain-specific embeddings) plus a linear classifier. They stem from the same backbone embedding $E_b$ as the universal embedding $E_u$ does, and they are not necessarily lightweight (given the high dimensionality of the domain-specific embeddings + the large number of classes in the different subdomains). We will refer to them as domain-specific heads from now on.
Building from USC + Dyn. Sampler for a fair comparison, appending these domain-specific heads on the side of the universal embedding that share the same backbone and training them with classification training loss from domain-specific data already results in a performance boost for the universal embedding; see the experiment in Ablation studies Table 2, rows 7 and 2. We hypothesize that these extra domain heads backpropagating through the shared backbone act as a form of regularization.
On top of that, adding the distillation losses to obtain the full UDON method results in a large performance boost (Table 2, rows 7 and 3 of the table). We hypothesize that the domain-specific embeddings have additionally captured features that the universal embedding would otherwise skip. These features are used by UDON as an extra supervisory signal (teachers) to the universal embedding in the form of similarities between embeddings and learnable class prototypes.
In UDON, not allowing the teachers to utilize their own backbone is not only efficient in terms of compute, but allows for a higher-performing distillation process. Our experiments show that the UDON teachers (the domain-specific embeddings) underperform domain-specific embeddings that utilize their own backbone (8 separate fixed teachers case) when evaluated on their own domain (see Table 4 row 1 vs 3). However, the resulting student universal embedding of UDON performs better than the corresponding universal embedding produced by the distillation from 8 separate fixed teachers (see Table 3 row 1 vs 3, subsection 4.4). We hypothesize that distilling from the UDON teachers is an easier task than distilling from different networks that utilize their own backbone, as the UDON teachers stem from the same embedding and are only a shallow transformation of it, so the domain-specific spaces they capture are more compatible with each other.
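For intuition, the shared-backbone teacher/student computation discussed above can be sketched numerically as follows. This is a toy illustration, not the actual UDON implementation; all dimensions, the temperature value, and the random features stand in for learned quantities.

```python
import numpy as np

def softmax(z, temp=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
D_b, D_t, D_s, n_cls = 32, 16, 8, 5  # toy dims: backbone, teacher, student, classes

# one shared backbone embedding feeds both heads (here just a random feature)
e_b = rng.normal(size=D_b)

# shallow domain-specific (teacher) and universal (student) projections + class prototypes
W_teacher, P_teacher = rng.normal(size=(D_t, D_b)), rng.normal(size=(n_cls, D_t))
W_student, P_student = rng.normal(size=(D_s, D_b)), rng.normal(size=(n_cls, D_s))

# similarities of each embedding to its class prototypes, softened by temperature
q_t = softmax(P_teacher @ (W_teacher @ e_b), temp=0.1)  # teacher "soft labels"
q_s = softmax(P_student @ (W_student @ e_b), temp=0.1)

# distillation loss: KL(teacher || student) over prototype similarities
kl = np.sum(q_t * (np.log(q_t + 1e-12) - np.log(q_s + 1e-12)))
```

Because both heads are shallow transformations of the same `e_b`, minimizing such a distillation loss backpropagates domain-specific structure into the shared backbone.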
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. After reading the response and other reviewers' comments, I will keep my original score. | Summary: The paper tackles the problem of multi-domain fine-grained instance recognition/retrieval. The authors propose to train a unified backbone for all modalities with online distillation with domain-specific teachers, improving the performance compared to naive single backbone baselines and being competitive with expensive methods that utilize a number of domain-specific specialists. The authors propose a number of technical contributions in addition to the online distillation, including dynamic resampling based on a proxy of the task difficulty. UDON exhibits favorable performance compared to a number of baselines on the UnED benchmark consisting of a high number of fine-grained domain-specific benchmarks.
Strengths: - The paper is very well written with clarity and sufficient details about the implementations and the intuitive explanation of the main contributions.
- UDON exhibits improvements over strong baselines for a variety of tasks.
- The proposed dynamic domain sampling is interesting and achieves a clear boost to the performance for challenging benchmarks like Met and GLDv2.
- The paper includes many ablations about the different design choices which helps in understanding where the gains stem from.
Weaknesses: - [important] The paper tackles generalization for instance recognition/retrieval systems, but the scaling axis of this question has not been studied. This is lacking since in the past few years we have witnessed scalable pre-training in terms of data and parameters being a very effective solution to many generalization problems.
- The performance gains provided by UDON seem highly sensitive to which dataset is used for the evaluation. For example, while the dynamic sampling contribution helps two datasets, it dropped the performance of the other five. This does not mean that this contribution should be dismissed but rather we might need more work to achieve its objectives without hurting the performance of other stable benchmarks.
- Some ablations of the projections architecture as well as the design of the MLP baseline would be a welcome addition.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) depending on the number of classes the magnitude of the loss can be very different. How is this handled in equation [5]?
2) an important question is whether scaling is indeed the answer for most problems. For the off-the-shelf baselines, it can be easy to test much larger backbones (e.g. DinoV2-G, Clip-Large, …). It is interesting to see if simply scaling general-purpose pre-training would be sufficient to address the issues tackled in the paper.
3) Related to the question above, given that the performance gain between a naive unified model vs UDON is significant but not enormous, I wonder if scaling the naive single model baseline’s capacity slightly would be sufficient to address the shortcomings of training one model with many fine-grained domains.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > __[important] The paper tackles generalization for instance recognition/retrieval systems, but the scaling axis of this question has not been studied. This is lacking since in the past few years we have witnessed scalable pre-training in terms of data and parameters being a very effective solution to many generalization problems. ... an important question is whether scaling is indeed the answer for most problems. For the off-the-shelf baselines, it can be easy to test much larger backbones (e.g. DinoV2-G, Clip-Large, …). It is interesting to see if simply scaling general-purpose pre-training would be sufficient to address the issues tackled in the paper.__
We examined larger variants of the off-the-shelf models used in this work, like the CLIP ViT-Large and the DINOv2 ViT-Large, both using a 1024 dimensional embedding. We present their results below:
| off-the-shelf model|mean P@5(%)| mean R@1(%)|
|--|--|--|
| CLIP ViT-Base (768-D) |39.8|53.5|
| CLIP ViT-Large (1024-D) |44.5| 58.3|
| DINOv2 ViT-Base (768-D) |43.9| 58.2|
| DINOv2 ViT-Large (1024-D) |46.7|60.8|
| UDON ViT-Base (64-D, CLIP pretrained)|56.9|67.7|
Those larger models are still underperforming the smaller ViT-Base UDON model with 64-dimensional embedding.
This result shows that simply scaling the general-purpose foundational models is not sufficient to address the issues tackled in the paper.
We will add the corresponding results and discussion in the final version of the paper.
> __Related to the question above, given that the performance gain between a naive unified model vs UDON is significant but not enormous, I wonder if scaling the naive single model baseline’s capacity slightly would be sufficient to address the shortcomings of training one model with many fine-grained domains.__
This is a valid point raised by the reviewer.
In order to examine if the performance gain achieved by UDON with a ViT-Base (over the baseline USCRR) diminishes if we scale the backbone size, we performed additional experiment with the larger backbone size of ViT-Large (IN21k pre-trained).
The results are shown in the following table:
|model| mean P@5(%)| mean R@1(%)|
|----|-----|-----|
| USCRR (baseline) ViT-Base |51.4|62.5|
| USCRR (baseline) ViT-Large |51.0|62.4|
| UDON ViT-Base|53.9|65.3|
| UDON ViT-Large|54.6| 65.4|
We observe that the baseline with the larger backbone achieves almost the same performance as its smaller ViT-Base counterpart, indicating that a larger backbone size doesn't necessarily mean better performance (note that a similar observation is made in [44] Appendix A, section A.2). Additionally, the UDON-trained ViT-Large achieves a performance which is a bit better than but close to its ViT-Base counterpart, indicating both the effectiveness of the UDON training procedure for the larger backbone size compared to the baseline training procedure, and the fact that the ViT-Base achieves a very good size-performance tradeoff.
We will include this discussion in the final version of the paper.
> __The performance gains provided by UDON seem highly sensitive to which dataset is used for the evaluation. For example, while the dynamic sampling contribution helps two datasets, it dropped the performance of the other five. This does not mean that this contribution should be dismissed but rather we might need more work to achieve its objectives without hurting the performance of other stable benchmarks.__
Assume one (meaningful) sampling strategy, say Round Robin, which oversamples some domains and undersamples others.
If with a new strategy some domain is sampled more than it was before, the model is expected to perform better on that domain, and the other way around. The proposed Dynamic Sampling is not perfect, but it improves the difficult domains where previous work was performing poorly, leaves the others close to the original performance, and, most importantly, improves the average performance, moving closer towards the notion of a "Universal Image Embedding" which works well on all of the different subdomains.
> __depending on the number of classes the magnitude of the loss can be very different. How is this handled in equation [5]?__
The Dynamic Sampling is designed to sample the difficult domains more often. There is no explicit guarantee that domains with more classes must have higher learning error: a large number of well-separable classes may have a smaller error than a few hard-to-distinguish classes. In general, domains with a higher number of classes have a higher chance of containing difficult sets of classes, and thus should be sampled more often. Therefore, the number of classes is not explicitly accounted for in equation [5].
Additionally, a number of sampling strategies and variations on the dynamic sampling have been tried, all yielding similar or worse performance on the validation set.
At the moment, the Dynamic Sampling operates at the level of domains. Ideally, one would like to analyze the training data and sample difficult classes (and their confusers) more often, which might be time-consuming. An efficient approach for such sampling is left for future work.
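As a purely illustrative sketch of the intuition behind this kind of difficulty-aware sampling (loss-proportional domain sampling; this is not the exact form of equation [5], and the domain names and losses are made up):

```python
import random

# Hypothetical sketch: sample domains with probability proportional to their
# recent training loss, so harder domains are visited more often.
def sample_domain(domain_losses, rng=random):
    domains = list(domain_losses)
    weights = [domain_losses[d] for d in domains]
    return rng.choices(domains, weights=weights, k=1)[0]

# Assumed per-domain losses; "food" is hardest, so it should be sampled most.
losses = {"food": 0.9, "cars": 0.2, "art": 0.4}
counts = {d: 0 for d in losses}
for _ in range(10000):
    counts[sample_domain(losses)] += 1
```

Under this sketch, the empirical sampling frequencies track the loss ratios, which is the behavior the Dynamic Sampling aims for at the domain level.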
> __Some ablations of the projections architecture as well as the design of the MLP baseline would be a welcome addition.__
We performed an experiment where we replace the linear layers used as the projection for both the domain-specific teachers and the universal student by a deeper network. More specifically, we use a one hidden layer MLP with layernorm and GELU activation (standard in the literature), with the same hidden dimension as the final dimension, i.e. 256 for the teachers and 64 for the student. The obtained results are shown below (IN21k pre-trained ViT-Base):
| model | mean P@5(%) | mean R@1(%) |
|---------|------|-----|
| UDON |53.9|65.3|
| UDON MLP projectors|51.8|63.5|
The results indicate a significant drop compared to using the proposed linear layers. We plan to include this study in the final version of the paper.
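For concreteness, here is an illustrative sketch of the two projector variants compared in this ablation (not the actual training code; the 768-d input dimension of ViT-Base and the Linear → LayerNorm → GELU → Linear ordering are assumptions, while the 64-d student output dimension follows the description above):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

rng = np.random.default_rng(0)
d_in, d_out = 768, 64                            # backbone feature dim -> student dim

W = rng.standard_normal((d_in, d_out)) * 0.02    # proposed: single linear layer
W1 = rng.standard_normal((d_in, d_out)) * 0.02   # ablation: one-hidden-layer MLP
W2 = rng.standard_normal((d_out, d_out)) * 0.02  # (hidden dim equals final dim)

feat = rng.standard_normal(d_in)
linear_proj = feat @ W                           # linear projector (UDON)
mlp_proj = gelu(layer_norm(feat @ W1)) @ W2      # MLP projector variant
```

Both variants map the backbone feature to the same 64-d embedding space; only the depth and normalization of the projection differ.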
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal, it has addressed the majority of my questions/concerns. Therefore, I will raise my score to 6. | Summary: The paper proposes an online distillation approach in a multi-teacher setup w/ weight sharing for efficiency. A strategic dynamic batch sampling process has been proposed to help domains w/ slower learning during training.
Strengths: The unified backbone and batch sampling strategy is novel and powerful. The paper is well written and experiments/datasets carefully chosen.
Weaknesses: Did not spot any, though it seems there is a throughput drop.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you elaborate more on impact of the throughput drop in real-world industry applications? Also, specify how practical applications could benefit - please provide specific details.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Mentioned in paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > __Can you elaborate more on impact of the throughput drop in real-world industry applications?__
The 20% throughput drop only impacts the model training stage. The inference time of our model is exactly the same as that of previous state-of-the-art models from [44], which makes our inference-time comparisons fair. In industry applications, the model is generally trained during a development cycle in which compute resources are used intensively as a one-off cost to produce a high-quality model. Once the development cycle ends, the finalized model is used for inference in production for a long period of time. For this reason, inference-time cost is the main concern when developing industry models, and a 20% training cost increase is not usually a problem, given the significant quality gains.
> __Also, specify how practical applications could benefit - please provide specific details.__
Several practical applications can directly benefit from an improved universal image embedding. In particular, generic visual recognition systems such as [46, 1, 2] need to handle queries depicting any type of object or scene. As per news reports, these systems today receive 10B+ searches per month and their popularity continues to grow. This calls for a universal embedding, since it is not scalable to handle images of different domains with specialized, per-domain models. Besides, these embeddings can be useful in many other applications. For example, retrieval-augmented generation with large language models often requires access to specific visual information from an external database, which can be searched with a universal image embedding. With the ever-growing number of images in many aspects of modern life, such embeddings also become critical for searching large private photo collections or street-level photos at planet-scale, for example. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback.
We are encouraged that they recognize our contributions as novel (**Mmnc**, **uq4A**), interesting (**CCei**), and simple/elegant (**xwWg**). Reviewers also highlight the value of our experimental validation/ablations (**Mmnc**, **CCei**, **xwWg**, **uq4A**), demonstrating the method’s effectiveness/improvements against previous work (**CCei**, **xwWg**, **uq4A**).
We are also glad they found the paper well-written (**Mmnc**, **CCei**), making it easy to follow (**xwWg**).
We address concerns and comments in individual responses to each reviewer below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
3DET-Mamba: Causal Sequence Modelling for End-to-End 3D Object Detection | Accept (poster) | Summary: This paper proposes 3DET-Mamba, the first attempt to exploit the State Space Model for end-to-end 3D object detection. It introduces a local-to-global scanning technique, including an Inner Mamba block to capture local geometry and Dual Mamba blocks to extract scene features in a global view. Furthermore, it proposes a query-aware Mamba block to effectively decode scene context information into object sets with the guidance of box queries. Experiments show promising results compared to 3DETR and validate the effectiveness of the proposed modules with detailed ablation studies.
Strengths: - The basic idea is easy to follow, and the key challenges are clearly presented.
- The methodology and implementation details are also clear, with good illustrations and mathematical / algorithm presentations.
- The proposed framework is the first to successfully adopt Mamba in end-to-end 3D object detection.
- The experimental results and ablation studies are solid and can support the method convincingly.
Weaknesses: - One of the motivations is the computational costs of transformers compared to Mamba, but there are no analyses regarding memory costs, latency, etc. I am curious about more advantages of the Mamba framework in this task, such as computational efficiency and scaling up performance, except for the current traditional benchmarks.
- Although the paper proposes several methods to handle unordered point clouds within the inherently ordered formulation of the State Space Model, and the performance seems good, the mechanism is still not the most suitable for this task or for handling point clouds. I would expect a mechanism that matches the fundamental idea of the State Space Model to point clouds more directly, instead of converting the problem with an imperfect workaround.
- There are several typos and I just provide a few examples in Questions. The paper needs more check on such details.
Technical Quality: 3
Clarity: 3
Questions for Authors: Sec. 4.2 title: Obejct -> Object
line 241: Transforme -> Transformer
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Computational costs and scaling up performance.
**A1**:
- To further verify the effectiveness of 3DET-Mamba, we report FLOPs and latency in the following table. It can be seen that our model achieves better results with lower computational cost and lower latency compared to 3DETR, further verifying the effectiveness of 3DET-Mamba.
- Besides, we conduct more experiments in which we scale up the model size (more Mamba blocks). As shown in Figure 2 and Figure 3 in the PDF file, scaling both the encoder and the decoder further improves performance on ScanNet, which demonstrates the scaling ability of 3DET-Mamba.
**Performance Comparison of Computational Efficiency**
| Method | FLOPs (↓) | Latency (↓) |
|-----------------|-----------|-------------|
| 3DETR | 14.3 | 0.22 |
| 3DET-Mamba (our)| 9.8 | 0.13 |
**Q2**: Discussion on the proposed method.
**A2**:
- In this paper, we take the first step in exploring the potential of Mamba for 3D object detection tasks, which had not been studied. However, directly applying Mamba to 3D object detection yields poor results. This is mainly because 1) it is difficult for Mamba to model unordered and non-causal 3D point clouds, 2) the original Mamba block lacks the ability to extract local features, and 3) previous works only explore Mamba as an encoder for classification tasks.
- To handle the above challenges, we propose a local-to-global scanning technique, composed of Inner Mamba blocks and Dual Mamba blocks, which can aggregate local and global information. Besides, in the Dual Mamba blocks, we use furthest point sampling and nearest point sampling to construct ordered point cloud sequences to handle the non-causal problem. Finally, the Query-aware Mamba block is designed to decode scene context information into object sets with the guidance of box queries.
- We hope our work can inspire further exploration of Mamba in 3D detection tasks and the construction of Mamba-based 3D foundation models in the future.
**Q3**: Some typos.
**A3**: Thanks for carefully reviewing our article. We will modify all these typos in the next version.
---
Rebuttal Comment 1.1:
Title: Final Decision
Comment: Thanks for the author's response. I think the experiments regarding the computational efficiency are important supplements to support the method's value, but there are still concerns regarding whether the Mamba structure is suitable for solving the problem with unordered point clouds input, both from theoretical analysis and intuition. It also lacks more comparison with state-of-the-art voxel-based methods like FCAF3D in the experiments. Therefore, I agree that this paper, as the first attempt, still has a long way to go along this technical pathway, but I may give more credit to the first try with basically reasonable experimental results as the support, and I think it can bring some new insights to the community. Hence, I would keep my original rating.
---
Rebuttal 2:
Comment: Dear reviewer,
Thank you for taking the time to review our work and for your feedback! We will add experimental results of computational efficiency in the revision of our article.
Sincerely,
Authors of paper112 | Summary: The paper proposes an end-to-end 3D detector named 3DET-Mamba that fully takes advantage of Mamba. 3DET-Mamba can model long-range global information (Dual Mamba) while exploiting local information (Inner Mamba). Experiments conducted on the ScanNet and SUN RGB-D datasets validate the effectiveness of the proposed method.
Strengths: - The proposed method is simple and effective: 1) The use of distances from the center to rank points and query features is intriguing and straightforward to implement; 2) Employing FPS and NPS simultaneously to generate token sequences appears to be a novel approach, effectively modeling long-range dependencies while maintaining local consistency.
- The writing is clear and concise, making it easy to understand.
- The experimental results are impressive.
Weaknesses: - The structure depicted in Figure 3 is inconsistent with the description provided in Algorithm 2. Specifically, the branch for the Query sequence in Figure 3 lacks the inclusion of Linear-SiLU, which is mentioned in Algorithm 2.
- In Table 4, 1) the first column should correctly be labeled as "mAP@0.25," and the second column should be labeled as "mAP@0.5." 2) when using the Transformer as a decoder, the value of mAP@0.5, which is 42.6%, appears abnormal compared to other results. Specifically, in Table 1, the model DETR-m achieved 65.0% mAP@0.25 and 47.0% mAP@0.5. However, in Table 4, the first model listed achieves 64.3% mAP@0.25 but only 42.6% mAP@0.5. Are there any mistakes?
- Line 241 `PointNet++ and Transforme` -> 'PointNet++ and Transformer'.
- Line 180-181, `We randomly choose an initial point and sort the remaining points based on their distance to this point`, are the experimental results stable? It seems unbelievable.
Technical Quality: 3
Clarity: 2
Questions for Authors: See the Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Difference between Algorithm 2 and Figure 3.
**A1**: Thank you for reviewing our article carefully. Algorithm 2 shows the detailed operation steps of our model, while Figure 3 is a schematic that omits some details. Following the reviewer's suggestion, we will modify Figure 3 in the revised version.
**Q2**: Question about Table 4.
**A2**:
- Thanks for pointing out our mistakes. We will modify the headers of Table 4 in the revised version.
- We want to clarify that in Table 1, we report the results of 3DETR from the original paper, which is trained for 1080 epochs. However, in Table 4, for a fair comparison, we only change the decoder to the 3DETR decoder and use our Mamba-based encoder, training for 540 epochs, consistent with 3DET-Mamba. We will clarify this in the next version.
**Q3**: Some typos.
**A3**: Thanks for your comments. We will revise all the typos in the next version.
**Q4**: Question about randomly choosing the initial point.
**A4**: The aim of this paper is to explore the potential of Mamba in 3D object detection. We follow 3DETR [A] to randomly sample an initial point and then sample a set of points using FPS, which are evenly distributed in 3D space. Based on our experiments, the results are stable; we repeated the experiments and show the results in the following table. Due to differences in random seeds and GPU devices, there is a slight deviation from the submitted article, but the experimental results remain stable.
**Repeated experimental results of 3DET-Mamba**
|Method|mAP25|mAP50|
|-|-|-|
|Result in the main paper|66.9|48.7|
|Replicate|66.3|48.2|
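To illustrate the ordering step the reviewer asks about in lines 180-181 (a simplified, assumed sketch: choose a random initial point and sort the remaining points by distance to it, producing a deterministic sequence given the seed):

```python
import numpy as np

# Hypothetical sketch, not the actual pipeline (which additionally uses FPS):
# pick a random initial point, then order all points by distance to it.
rng = np.random.default_rng(42)
points = rng.standard_normal((8, 3))          # assumed toy point cloud
start = rng.integers(len(points))             # randomly chosen initial point
dists = np.linalg.norm(points - points[start], axis=1)
order = np.argsort(dists)                     # initial point comes first (dist 0)
ordered_points = points[order]
```

Whichever initial point the seed selects, the resulting sequence is spatially coherent around it, which is why results remain stable across random initial points.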
[A] Misra I, Girdhar R, Joulin A. An end-to-end transformer model for 3d object detection[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 2906-2917. | Summary: This paper proposes leveraging Mamba blocks for 3D point cloud modeling in the form of 3DET-Mamba, an application of SSM for 3D object detection. Similar to the prior 3DETR model, this approach partitions the point cloud into "patches" using Mamba blocks to capture local information, complemented by global modeling through dual Mamba blocks (organized by farthest and nearest point order). Additionally, query-aware Mamba blocks, akin to transformer blocks, are designed to decode objects in a DETR-like manner. The novelty of the approach lies in its ability to enhance 3DETR by integrating SSMs innovatively for 3D indoor scene understanding, utilizing ScanNet and SUN RGB-D datasets.
Strengths: 1. The proposed architecture adopts a Mamba-style approach and introduces 3D-specific customizations, such as partitioning the point clouds using an inner Mamba network, employing FPS/NPS ordering of points, and implementing a query-aware Mamba block.
2. The experiments detailed in Tables 2, 3, and 4 conduct ablation studies that demonstrate the effectiveness of the Mamba block compared to baseline designs using transformers, although these comparisons do not include assessments of speed.
Weaknesses: 1. Mamba is praised for handling linear complexity and long-range sequences better than transformers, which have higher computational demands. However, the paper lacks theoretical analysis or experimental evidence to compare time and space complexities between Mamba and traditional architectures. The effects of changing point resolutions or patch sizes on performance are also not explored.
2. The clarity of some ablation studies is lacking. For example, the role of ranked queries, a key contribution, is not evaluated. The relevance of NPS and FPS ordering in modeling patch sequences remains ambiguous. Additionally, the rationale for query-aware Mamba blocks is unconvincing; they seem to be simply added to cross-attention blocks with extra links. Table 4 does not clearly show that improvements are due to the Mamba block rather than just increased computational power.
3. The paper does not provide solid proof of Mamba’s effectiveness in terms of performance or speed. It omits a comparison with the once leading FCAF3D detector, which scored 71.5 on ScanNet, without explanation. Comparisons of inference speeds across architectures are also absent.
Technical Quality: 2
Clarity: 3
Questions for Authors: Besides weakness part, there are some other questions:
1. Some terms could potentially cause confusion. For instance, FPS is an acronym for farthest point sampling, but in the context of the dual Mamba, it refers to farthest point order, while NPS refers to nearest point order.
2. Are there any existing studies that use Mamba in a query-based method? It would be beneficial to reference these works to better understand the uniqueness of this module.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments.
**Q1**: Comparison of time and space complexity between 3DET-Mamba and other methods and the effects of changing resolution.
**A1**: To demonstrate the superior performance of 3DET-Mamba, we compare the FLOPs and latency of our model with the previous transformer-based architecture, 3DETR [A], in Table r1 below. 3DET-Mamba achieves **higher accuracy** (as detailed in Table 1 of our manuscript) with **reduced FLOPs** and **lower latency**. In Table 1 of our manuscript, we show that 3DET-Mamba effectively models high-resolution point clouds (i.e., 4096 points), achieving a significant +5.7 mAP@0.5 increase. Furthermore, to explore the impact of changing point resolution, we conduct experiments varying the density of point clouds. As shown in Figure 1 in the attached PDF file, **increasing the density of point clouds** continuously **improves** performance, which again shows the effectiveness of 3DET-Mamba.
**Table r1: Performance Comparison of Computational Efficiency**
| Method| FLOPs (↓) | Latency (↓) |
|-|-|-|
| 3DETR|14.3|0.22|
| 3DET-Mamba (our)|9.8|0.13|
**Q2**: Additional ablation studies.
**A2**: Thanks for the valuable comment.
- We first want to clarify that the intention of our work is to explore the feasibility of Mamba in 3D object detection, which had not been studied. Previous works mainly explored the usage of Mamba in classification and segmentation tasks. However, this task is challenging because 1) the non-causal and irregular 3D point clouds hinder the modeling of causal sequences, and 2) the original Mamba block is good at extracting global information from long sequences but, to a certain extent, ignores detailed information that is important in 3D detection tasks.
- We address the first challenge using ranked queries. To further verify the effectiveness of this approach, we conduct ablation studies shown in Table r2 below, which indicate that discarding ranked queries leads to a decrease in performance.
**Table r2: Ablation Study on Ranked Queries**
| Rank Query | mAP25 | mAP50 |
|-|-|-|
| ✗ | 65.6|48.1|
|✓|66.9|48.7|
- To clarify the relevance of NPS and FPS in modeling patch sequences: our results in Table r3 demonstrate that combining FPS and NPS further improves performance by enabling Mamba to model the point cloud in terms of both spatial distribution and continuity.
**Table r3: Impact of NPS and FPS**
| NPS | FPS | mAP25 | mAP50 |
|-|-|-|-|
|✓|✗|65.4|47.7|
|✗|✓|65.0|46.3|
|✓| ✓|66.9|48.7|
- Existing Mamba-based works mainly use Mamba as an encoder; there is still no exploration of using Mamba as a decoder. To explore the potential of Mamba as a 3D decoder, we propose the Query-aware Mamba block, which can better model the relationships among queries by taking advantage of Mamba. As shown in Table 5 of our article, our Query-aware Mamba block outperforms the transformer-based decoder.
- As shown in Table r1, compared to the transformer-based models, our method (which includes the query-aware Mamba block) exhibits lower FLOPs and latency. This demonstrates that the improvements are indeed due to the design of the novel Mamba block, rather than increased computational power.
**Q3**: Comparisons of FCAF3D and speed.
**A3**: Thank you for your valuable comments. In this paper, we focus on indoor 3D object detection using point clouds, a crucial 3D data representation. Compared to voxels (used in FCAF3D), point clouds offer benefits such as high-efficiency storage. We benchmark 3DET-Mamba against other open-source point-based methods, with results in Table 1 of our manuscript demonstrating our approach's effectiveness. However, it is not trivial to directly apply Mamba to point cloud scenes due to their unordered and non-causal nature. 3DET-Mamba addresses these challenges by introducing a novel local-to-global scanning mechanism and developing the Inner Mamba and Dual Mamba blocks to capture fine-grained point cloud features. Additionally, we designed a Query-aware Mamba block to decode point cloud information. We hope our work will inspire further use of Mamba as a foundational component for point cloud understanding.
**Q4**: Clarification of FPS and NPS.
**A4**: The FPS and NPS in our article represent the furthest point sampling and the nearest point sampling. The furthest and nearest point order can be obtained after sampling. We will modify the words in dual Mamba to avoid confusion.
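For clarity, here is a minimal sketch of the farthest point ordering just described (assumed and simplified; nearest point sampling would instead greedily pick the closest unvisited point, preserving spatial continuity):

```python
import numpy as np

# Illustrative farthest point sampling (FPS): greedily pick the point farthest
# from all points selected so far, yielding an ordering that covers 3D space
# evenly.  This is a textbook sketch, not the paper's implementation.
def farthest_point_order(points, start=0):
    n = len(points)
    order = [start]
    dist = np.linalg.norm(points - points[start], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(dist))          # farthest from the selected set
        order.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return order

# Two tight clusters far apart: starting in one cluster, FPS jumps to the
# other cluster before revisiting nearby points.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 0.0, 0.0], [5.1, 0.0, 0.0]])
order = farthest_point_order(pts, start=0)
```

The resulting order alternates between distant regions first, which is the property that lets the Dual Mamba model long-range spatial distribution.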
**Q5**: Existing methods of query-based mamba works.
**A5**: Thanks for your valuable comments. We carefully investigated query-based Mamba works. Currently, there is still no work exploring query-based Mamba for object detection, although some concurrent works introduce query-based Mamba for other tasks [B][C]. TM-Mamba [B] parameterizes the Mamba matrices $A$, $B$, $C$, $\Delta$ as functions of the input and the text query to ground human motion; however, it is not applicable when the number of queries is greater than 1. QueryMamba [C] combines a query-based transformer decoder with a Mamba encoder, but still does not explore a query-based Mamba decoder. We will add these related works in the revised version.
[A] Misra I, Girdhar R, Joulin A. An end-to-end transformer model for 3d object detection[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021.
[B] Wang X, Kang Z, Mu Y. Text-controlled Motion Mamba: Text-Instructed Temporal Grounding of Human Motion[J].
[C] Zhong Z, Martin M, Diederichs F, et al. QueryMamba: A Mamba-Based Encoder-Decoder Architecture with a Statistical Verb-Noun Interaction Module for Video Action Forecasting@ Ego4D Long-Term Action Anticipation Challenge 2024[J].
---
Rebuttal Comment 1.1:
Comment: Thank you for the author’s rebuttal, which addresses some of my concerns.
Is the table r1 fairly comparing 3DETR and 3DET-Mamba with only transformer blocks? I am particularly interested in whether there is an ablation study that demonstrates the necessity of using query-based Mamba rather than cross-attention blocks in a relatively fair comparison, considering the trade-off between computation and accuracy.
Given the state of the original paper’s presentation and the missing reference, I can now raise the rating to 5.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thank you for taking the time to review our work and for your valuable comments!
The results in Table r1 fairly compare 3DETR and 3DET-Mamba in terms of Flops and latency. We further conduct experiments to show the effectiveness and efficiency of Query-aware Mamba (i.e., decoder). Specifically, we replace the Query-aware Mamba with Cross-attention Blocks of the same number of layers. The results are shown in the following table. Importantly, our Query-aware Mamba Blocks not only reduce the computational cost but also further improve detection accuracy.
| **Decoder** | **FLOPs (↓)** | **AP@25 (↑)** | **AP@50 (↑)** |
|-------------------------------------|---------------|-----------|-----------|
| Cross-attention Transformer Blocks | 1.89 | 64.3 | 42.6 |
| Query-aware Mamba Blocks | **1.76** | **66.9** | **48.7** |
Note: the FLOPs provided here are specific to the decoder module.
We will add related works and additional experiments mentioned by the reviewer in the revision of our article. Thanks again for your approval of our work.
Sincerely,
Authors of paper112 | null | null | Rebuttal 1:
Rebuttal: Dear AC and reviewers,
We thank all reviewers (Reviewer m4H4-R1, Reviewer 2ZE6-R2, Reviewer eAYh-R3) for approving our contributions, including **our exploration of mamba for end-to-end indoor 3D object detection for the first time** (R3). The experimental results are **convincing** (R3), demonstrating that the method is **effective** (R1, R2, R3). We also appreciate the acknowledgment of our **clear writing** (R2, R3).
We also thank all reviewers for their insightful feedback to help us improve our paper. We will address all reviewers' concerns and carefully revise the manuscript. **Additional results are provided in the attached PDF file**. We are happy to have further discussion on anything unclear about our paper.
Best regards,
Authors of Paper112
Pdf: /pdf/6eb9e646da35f5cbc7097aa47429c7fcbead98b1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch | Accept (poster) | Summary: This paper studies the process of reward design and as a result reward misspecification when a human designer has potentially misspecified beliefs about the robot's operating domain or generally trouble with designing a reward leading to the robot generating desired behavior in its own domain. The paper formalizes the notion of "expectation alignment" over occupancy frequency under human and robot domains satisfying a set of desired or forbidden states. Since the human's expectation set is unknown, the authors propose a set of linear programs 1) to test whether a state belongs to the forbidden set, 2) to test whether a state belongs to the goal set, and 3) to find a minimal set of queryable states by minimizing the number of states in the forbidden and goal sets which can be used to query a human designer. The proposed method is mainly compared against inverse reward design.
Strengths: **Originality & significance**: This work is original to my knowledge and the proposed method of tackling reward design and specification from the perspective of occupancy measure is novel and interesting. Given the increasing popularity and maturity of dual RL, the propose method has the potential to be extended to more practical settings.
**Quality & clarity**: The paper is well written. Even though the problem studied is novel and somewhat "niche", the authors did a good job walking the readers through a lengthy problem formulation. I appreciate the thoughts that went into this problem formulation.
Weaknesses: **Clarity**: I think the authors could provide more context (in main text or appendix) on inverse reward design (IRD) for readers to better understand and interpret the evaluation results. Currently, readers have to read the IRD paper to achieve that.
The authors claim that "IRD generated policies that resulted in expectation violations. On the other hand, our method guarantees policies that will never result in violation of user policies". As far as I understand, IRD proposes a model of how human specified reward relates to the true intended reward. In contrast to IRD which is a one-off process (one-shot learning if you will) where the human designer is queried only once and then the robot is deployed in the test environment, the proposed method is iterative where the human can potentially provide multiple feedback to the designed reward. In some sense, this is equivalent to assuming privileged access the test environment and the number of states in $\mathbb{D}_{F}$ after the first human query is in some sense similar to the number of violated expectations. So stating that the proposed method never violates user policies doesn't seem appropriate, or at least comparing with IRD on this metric doesn't seem to be "comparing apples to apples".
Another minor suggestion is to put equation numbers in corresponding lines in the algorithm to make it easier for readers to understand.
Technical Quality: 4
Clarity: 3
Questions for Authors: I have some minor questions on LP formulation and notation:
* In eq 2 and 3, is the notation $V_{s_0}^*$ the optimal value function in $(D^{H}, R^{H})$ (since the authors only said calculate $V_{s_0}^*$ but did not say what is being calculated)? If so then I doubt that notation $\sum_{a}x(s_{0}, a) = V_{s_0}^*$ is correct in general, since $x(s, a) \in [0, 1]$ is the visitation frequency $\frac{1}{H}\mathbb{E}[\sum_{H}Pr(s_t=s, a_t=a)]$ rather than a value function.
* In all LP equations, there seem to be a typo on the transition matrix, i.e, it should probably be: $\sum_{a}x(s, a) = \delta(s, s_0) + \gamma\sum_{s', a'}x(s', a')T(s', a', s)$. See [Sikchi et al, 2024](https://arxiv.org/abs/2302.08560) eq 4.
* For the LPs in eq 2 and 3, do you have to solve it for every $s_i$ being tested? This seems expensive?
* In line 249, there seems to be a typo. I guess the authors are trying to say: "calculate the set of all **forbidden** states reachable" and "calculate the set of all **goal** states reachable".
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Acknowledgement of limitations seem appropriate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the constructive feedback and for catching the typos. We will incorporate them into the final draft. Below, we have provided responses to specific questions and comments raised.
IRD: We will make sure to provide a clearer description of IRD in the evaluation section. The reviewer is right that the method isn't explicitly designed to avoid constraint violation. However, we believe this also follows from the stance the paper takes, namely that there exists a single true reward function transferable across domains. We believe that if violations had been accounted for, the authors would have included a querying strategy. We realize that this might not always be a fair comparison, which is why we also compared against a method that explicitly performs queries. As we can see, that method cannot scale up to the problems considered in this paper.
$V^*_{s_0} $ and constraint: Thank you so much for catching it. This was an unfortunate typo that got copied around in the LP descriptions in the paper. The constraint should have read $\sum_{s,a} x(s,a)r(s) = V^*_{s_0}$. Note that $\sum_{s,a} x(s,a) r(s)$ returns the value of the state $s_0$ for the current policy (cf. [Poupart, 2005]). We will make sure to fix it in the final draft.
Transition Function: We believe our formulation is correct in this case. Looking at the other paper, we believe the difference comes from how they represent the transition function. They seem to be using a conditional probability notational scheme, where they represent the probability of transition to a state s’ when action a is executed in state s as P(s’|s, a). We, on the other hand, use a simpler functional notation of the form $T: S \times A \times S \rightarrow [0,1]$. In our case, the same probability will be returned by the arguments T(s, a, s’). We hope this clears the confusion, and we will make sure to emphasize this in the background section where we define our notations.
Queries: The reviewer is correct that we have to check it against every state. However, as discussed in the main rebuttal response, as we move to larger problem settings, we expect to use factored/feature-based representations. These allow a relatively small set of features to represent large state spaces. Under these scenarios, the tests only need to be run once per feature, which would dramatically cut down on the number of runs. Standard planning benchmarks use relatively small feature counts [ext1], numbering fewer than a hundred.
Lines 248-249: This was again a typo. $\widehat{\mathcal{S}}^\mathcal{F}$ corresponds to states that are not reachable under any optimal policy. We will make sure to fix it in the final draft.
[ext1] International Planning Competition, IPC Competition Domains. https://goo.gl/i35bxc, 2011.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. Most of my questions are resolved.
I have one follow up question on the transition function. My original question is on the summands of the transition function, i.e., the current states and actions should be summed out as opposed to the future states and actions. I think this makes sense because the basic idea of the Bellman flow equation, similar to the Bellman equation, is that the long term occupancy is the sum of the immediate occupancy and the expected next occupancy. Another point of reference is eq 3 in [this paper](https://arxiv.org/abs/1906.04733) which illustrates the same point. Please feel free to correct me if I'm wrong about this.
---
Reply to Comment 1.1.1:
Title: Re: Transition Function
Comment: You are actually correct. We apologize for missing that. During the rebuttal, we were mainly focused on the form of the transition function, and we didn't notice the summand was switched. In fact, to avoid further confusion, we will stick with the convention of s' being the next state and rewrite the constraint to
$\sum_{a'} x(s',a') = \delta(s',s_0) + \gamma\times \sum_{s,a} x(s,a) \times T^R(s, a, s')$
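A quick numerical sanity check of the rewritten constraint above (a sketch in plain Python; the two-state MDP, the fixed policy, and the reward values are made-up assumptions, not from the paper) can compute the discounted occupancy measure of a policy by unrolling the trajectory distribution, then verify both the flow equation and the related LP-objective identity $\sum_{s,a} x(s,a)\,r(s) = V^\pi_{s_0}$:

```python
# Numerical sanity check of the occupancy-frequency (Bellman flow) constraint
#   sum_{a'} x(s',a') = delta(s',s0) + gamma * sum_{s,a} x(s,a) * T(s,a,s')
# on a hand-made two-state MDP (all numbers below are illustrative).
gamma = 0.9
S, A = [0, 1], [0, 1]
s0 = 0
# T[s][a][s'] : probability of moving to s' when taking action a in state s.
T = {0: {0: {0: 0.2, 1: 0.8}, 1: {0: 1.0, 1: 0.0}},
     1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.0, 1: 1.0}}}
pi = {0: 0, 1: 1}        # a fixed deterministic policy
r = {0: 0.0, 1: 1.0}     # state-based reward

# Occupancy measure as the power series x(s,a) = sum_t gamma^t Pr(s_t=s, a_t=a).
x = {(s, a): 0.0 for s in S for a in A}
dist = {s: 1.0 if s == s0 else 0.0 for s in S}   # state distribution at step t
w = 1.0                                          # gamma^t
for _ in range(2000):
    for s in S:
        x[(s, pi[s])] += w * dist[s]
    dist = {sp: sum(dist[s] * T[s][pi[s]][sp] for s in S) for sp in S}
    w *= gamma

# The flow constraint holds at every state s'.
for sp in S:
    lhs = sum(x[(sp, a)] for a in A)
    rhs = (1.0 if sp == s0 else 0.0) + gamma * sum(
        x[(s, a)] * T[s][a][sp] for s in S for a in A)
    assert abs(lhs - rhs) < 1e-6

# The LP objective sum_{s,a} x(s,a) r(s) recovers the policy value V^pi(s0).
v = sum(x[(s, a)] * r[s] for (s, a) in x)
print(round(v, 4))  # → 8.7805, matching V(s0) = 7.2/0.82 computed analytically
```

Since $\gamma < 1$, the truncated power series converges geometrically, so the 2000 unrolled steps are far more than enough for the tolerances used here.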
The rewritten constraint follows the more popular convention for denoting the occupancy-frequency equation (also used by the paper the reviewer pointed to). We thank the reviewer for catching that, and we apologize for the confusion. | Summary: The paper addresses the problem of reward misspecification. It introduces a framework (EAL) to capture how humans go from setting expectations about a problem to specifying the reward for it. The problem is modeled as a single Human-Robot interaction.
After introducing the formalism, the authors propose an algorithm to solve the EAL problem. It works by mapping the inference problem about the user expectations to LPs and by obtaining queries to the human, providing an efficient and effective way of inferring user expectations given a specified reward.
Strengths: - The paper is extremely well written. It introduces previous concepts clearly. The motivations for choices in the formalism introduced are also well-justified and adequately compared with the previous literature on the topic
- The theoretical contributions are very solid and sound.
- The formulation via occupancy measures and LP simplifies the problem significantly in my opinion, making it easy to understand why and how the algorithm is derived.
Weaknesses: - The main weakness I find is the limited experimental evaluation. I understand that the contributions of this work are mainly theoretical, but I still think it would benefit from additional experiments
Technical Quality: 2
Clarity: 4
Questions for Authors: - Do you have formal proofs (to put in the Appendix) for both the Theorems and the Propositions? Even if most of the results and proofs are pretty straightforward, I would still like to see the proofs formally written out
- Could you please clarify how one should read the results from Table 1? I do not directly see how to relate query count and No of Violated Expectations. If the comparison is 1 to 1, it seems your algorithm is actually performing sub-optimally?
- How do you think your algorithm can scale in non-grid environments? What about grid environments with a computationally untractable number of states? I imagine obtaining good results would require a too-high number of queries.
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: No ethical limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the comments. We will make sure to incorporate them into the paper. Below, we have provided responses to some specific concerns and questions raised in the review.
Formal Proof: We will be more than happy to include detailed formal proof for all the propositions and theorems. Since the allowed pdf isn’t supposed to include text, we are including a proof sketch here for proposition 1 as a sample proof.
Proposition 1 - There exists no state $s \in \mathcal{S}^\mathcal{F}$ and policy $\pi \in \Pi^*_{\mathcal{M}^H}$, such that $x^\pi(s) > 0$ is true.
Proof Sketch -
We will prove this through contradiction. Let's assume that there exists a state $s \in \mathcal{S}^\mathcal{F}$ where $x^\pi(s) > 0$ for an optimal policy $\pi \in \Pi^*_{\mathcal{M}^H}$ for a human-specified reward function $\mathcal{R}^H$. Per Definition 4, a reward function is well-specified, i.e., human-sufficient, only if for every policy $\pi \in \mathcal{P}^H(\langle \mathcal{D}^H, \mathcal{R}\rangle)$, we have $e \models_{\mathcal{D}^H} \pi$ for all $e \in \mathbb{E}^H$. If $s \in \mathcal{S}^\mathcal{F}$, then $e = \langle \{s\},=,0\rangle$. Per our assumptions, $\mathcal{P}$ returns the set of optimal policies, hence $\pi \in \mathcal{P}^H(\langle \mathcal{D}^H, \mathcal{R}^H\rangle)$. This means that $e \models_{\mathcal{D}^H} \pi$, which is only true if $x^\pi(s) = 0$. This contradicts our initial assertion, proving the statement by contradiction.
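As a small numeric illustration of the proposition's claim (a toy three-state MDP with made-up rewards, not the paper's construction): when the human-sufficient reward sufficiently penalizes the forbidden state, the optimal policy places zero occupancy on it.

```python
# Toy illustration (all states and rewards are made up): with a sufficiently
# penalizing (human-sufficient) reward, the optimal policy assigns zero
# occupancy to the forbidden state.
gamma = 0.9
START, GOAL, FORBIDDEN = 0, 1, 2
r = {START: 0.0, GOAL: 1.0, FORBIDDEN: -100.0}
# The only choice is the action taken at START; GOAL and FORBIDDEN absorb.
next_state = {0: GOAL, 1: FORBIDDEN}   # action -> deterministic successor

def occupancy(a):
    """Discounted state occupancy x(s) of the policy taking action a at START."""
    x = {START: 1.0, GOAL: 0.0, FORBIDDEN: 0.0}
    x[next_state[a]] = gamma / (1.0 - gamma)   # occupied from t=1 onward
    return x

def value(a):
    # Policy value as the LP objective sum_s x(s) * r(s).
    x = occupancy(a)
    return sum(x[s] * r[s] for s in x)

best = max(next_state, key=value)
assert best == 0                          # the optimal policy heads to GOAL...
assert occupancy(best)[FORBIDDEN] == 0.0  # ...and never visits FORBIDDEN
print(round(value(best), 6))  # → 9.0
```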
We will include detailed proofs for all the theoretical assertions in the final draft.
Experiments:
As mentioned in our response to other reviewers and the main rebuttal, our paper included two baselines. One of these was another query method, which unfortunately couldn’t solve any of the problems within the given time limit, highlighting the comparative scalability of our method.
Table 1: Please note that the violations incurred by the policy identified by IRD do not cover the whole set of constraints, even though the reward tried to penalize potential unsafe states and reward potential goal states. These violations are upper-bounded by the actual number of constraints present in the instances, which is always much smaller than the total number of states. On the other hand, the query method could query about a number of states that are not actual constraint states; for queries, the worst-case upper bound is the set of all states. This again points to some of the challenges faced by previous query methods. It is also worth noting that in safety-critical settings, even violating a single constraint could be bad. So, instead of the number of violations, the bigger problem is that it violates any constraints at all.
Non gridworld domains: As discussed in the main rebuttal text, we can handle domains with large state space using feature-based representations, which are already popular in the safety literature and planning in general. Here, there would be a set of features that characterize the goal states and another set that captures the states to be avoided. A feature set can capture an exponential number of states. Here, the algorithm changes will be minimal, and instead of querying over the states, we will query over features, which will keep the query count small.
---
Rebuttal Comment 1.1:
Title: Thanks for your answer
Comment: Thanks for your response.
**Experiments**: Thanks for clarifying your results. I suggest using this answer to increase the clarity of the presentation in the Experiments section
**Non gridworld domains**: I've read the main rebuttal text as well, and this clarifies my main doubts.
**Proofs**: Thanks for the sketch of the proof.
The authors addressed most of my concerns. Including a formal proof for all the statements (even in the appendix) will much increase the quality and strength of the submission. Since the authors mentioned that this will be done, and a sample proof is provided, I'll increase my score accordingly.
---
Reply to Comment 1.1.1:
Title: Re: Reviewer Response
Comment: Thank you for the quick response and all the helpful feedback. We will make sure to incorporate them into the paper. | Summary: The paper tackles the problem of reward misspecification in settings where humans have potentially incorrect beliefs about the environment. Instead of treating a true human reward function as the fundamental object, they introduce *expectation sets*, which specify the states that the human does or doesn't expect an optimal policy to visit. In a specialized setting where preferences consist purely of goal states and forbidden states, they then show how to find policies that meet those human expectations.
Strengths: - Reward misspecification is an important topic, and this paper provides an interesting new perspective on it
- The ideas are clearly described and mostly easy to follow
- The expectation alignment framework/technique could be useful at least in certain settings
Weaknesses: - I don't think the paper sufficiently demonstrates that the expectation alignment perspective is broadly better than thinking about an (unknown) true human reward function. Starting at line 203, the paper argues that expectation sets transfer better to different transition functions than reward functions do. This doesn't seem true in full generality: for both reward functions and expectation sets, we could assume that they express the true human preferences about states (in which case they'd transfer), or that they are entangled with the transition function. The latter could also be the case for expectation sets; for example, the human might expect the optimal policy to spend time in some actually suboptimal state simply because they incorrectly believe this is necessary to reach the goal state.
- The setting for which the paper describes an algorithm is quite limited (with only "goal states" that should be reached with non-zero probability, and forbidden states that should be reached with zero probability).
- It's hard to draw clear conclusions from the experiments:
- The experiments measure the number of human expectations that inverse reward design violates, and compare that to the guarantee of expectation alignment not to violate any expectations. But of course, expectation alignment was directly designed for that purpose, whereas IRD was not—from the IRD perspective, the relevant thing to compare would be the reward achieved under the expected reward function that IRD's inputs are based on.
- It would also be nice to have experiments in larger and more varied environments than just 11x11 gridworlds.
Technical Quality: 2
Clarity: 3
Questions for Authors: Is there a clear reason to expect expectation sets to transfer better than reward functions between domains? In particular, why don't they face similar problems along the lines I mention above? i.e.:
> the human might expect the optimal policy to spend time in some actually suboptimal state simply because they incorrectly believe this is necessary to reach the goal state
As a minor note, in definition 3, should the planning function map to the *powerset* of the space of policies? That seems to be how it's used in definition 4, where the planning function evaluated on a single model is a *set* of policies rather than a single policy.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The method assumes knowledge of a way to map between states in the human and robot model, as well as knowledge about the full human and robot models. These assumptions are made in other existing work as well, so are not a damning issue. But they still seem worth highlighting given how limiting they are for many applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the feedback. We will make sure to incorporate them into the paper. Below, we have provided responses to some specific concerns and questions raised in the review.
Transference: We want to thank the reviewer for all their constructive comments. Regarding the question of transferability, our main argument is built on the fact that the expectation set constitutes the behavior the user wants to see the robot perform. As such, instead of a question of transference, it becomes one of whether or not the AI agent/robot can achieve the desired/expected behavior, which our method can determine. Our claim is further supported by the abundance of evidence from cognitive science and psychology (cf. [Simon, 1977]) showing that people inherently reason about tasks and plans in terms of goals of achievement. As discussed in the paper, the expectation set, as formulated here, is a generalization of goals. On the other hand, we are unaware of any evidence that people have intrinsic reward functions associated with tasks; some works show that even experts struggle to design reward functions (cf. [Booth et al., 2023]). As such, the use of reward functions becomes a means to the end of specifying the behavior they want to see. However, the reviewer is correct in asserting that users might not be fully aware of what is achievable in the environment (because of knowledge or inferential limitations). In such cases, we would argue that the system should use other mechanisms, like explanation, to inform the user of what is possible rather than determining what they should want. We will update the text to make sure this point is clearly articulated.
Specific Instance: As mentioned, our choice of specific instance was motivated by the importance of goals in human reasoning, per existing literature, and all the AI-safety works that have focused on avoiding side effects. Our current method captures both these considerations.
Experiments: We chose to highlight IRD as one of the baselines because it centers rewards and overlooks the fact that the reward is an expression of some true underlying behavioral expectations. For the user, the reward itself is a means of achieving some behavior, and in the case we consider, where there are states to be avoided and achieved, violating them would render the policy useless. It is also worth noting that while, in the end, IRD settles for the use of an expected value, it does use behavior generated by a reward function as the means to identify potential hypotheses for the true reward function. IRD was not the only baseline: we also used a query-based baseline. Unfortunately, it could not solve any of the problems we considered within the given time limit; as such, it was left out of the table, but we mentioned it in the text. This highlights the efficiency of our method and the advantages provided by considering the human model and planning set. Regarding scalability in non-grid environments, as mentioned in the main response, the primary method we can leverage is a feature-based representation. This representation scheme allows us to capture large state sets using a small set of features, as briefly alluded to in the discussion about rewards. A large number of current works on avoiding negative side effects already make use of features. The occupancy frequency can easily be extended to capture the occupancy frequency of features, and the expectation set can be captured in terms of those: it will consist of features to be achieved and avoided. In turn, we can update the LP formulations to use features, with constraints represented using feature occupancy frequency. The queries will also be in terms of the features, which should keep the total number of queries fairly small. This should allow our methods to be scaled more easily.
The availability of powerful and efficient LP solvers also makes our methods inherently more scalable.
Powerset Defn3: Yes, this was a typo. We will fix it in the final draft.
Limitation: We agree about the access to the mapping, but works exist that provide methods for learning such mappings.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response! I still think it would be good to have more concrete arguments that expectation sets transfer better (or have other advantages). Everything you mention (connections to cognitive science, the difficulty of specifying reward functions) is great *motivation* for studying alternative frameworks. But I think it falls short of being sufficiently convincing evidence, given that these feel like very high-level conceptual arguments. For example:
> the expectation set, as formulated in the paper, is a generalization of goals
Yes, but we could also think of reward functions as a generalization of goals (where goals would be encoded as reward functions that only give rewards 0 or 1). Clearly, expectation sets generalize goals in an interestingly different way, and maybe a better way. But whether (or in which cases) expectation sets are the "right" generalization of goals seems like the crucial question.
> However, the reviewer is correct in asserting that users might not be fully aware of what is achievable in the environment (because of knowledge or inferential limitations). In such cases, we would argue that the system should use other mechanisms like explanation to inform the user what is possible rather than determining what they should want.
I don't think I follow this argument. If we assume that the system can explain everything the user needs to know about the environment, this just seems to solve the problem I understood the paper to be tackling. (Humans having incorrect beliefs about the environment and thus providing misspecified rewards.) In this setting, where humans now have correct beliefs about the environment, it seems reward functions should work just as well as expectation sets. (At least in the framework presented in the paper: if the human model and robot models match, then every human-sufficient reward function should be correctly specified, I think?)
So it seems to me that either we don't assume the robot can explain everything to the human, in which case both reward functions and expectation sets can fail to transfer in at least some cases, or we let the robot explain everything, in which case both approaches work fine. It might still be the case that expectation sets transfer more often or have some other advantage, but I currently don't see the concrete argument in favor of that claim.
Please let me know in case I've misunderstood anything!
To be clear, I think it would be unreasonable to expect this paper to fully demonstrate that expectation sets are always the right approach. But I do think it would be good if there was at least one clear demonstrable advantage of expectation sets, rather than only high-level arguments (that in my view are only moderately compelling, but naturally views on that will diverge more than they would for more specific claims).
---
Reply to Comment 1.1.1:
Title: Re: Response to the reviewer
Comment: Thank you for the quick response.
The main difference is that the expectation set is a direct encoding of agent behavior. The occupancy frequency can be thought of as being determined by the traces the policy would generate under the current transition function; as such, it already accounts for the transition function and what will happen. In contrast, one needs to use the transition function to figure out what policy would be generated in response to a reward function. Thus, the question of transferability never arises in our case, because the users want to see the robot perform the behavior they expected (another way to think about it is that the expectation set is directly transferred over without any modifications). They want to see the robot follow a policy that meets the expectations defined over the occupancy frequency in the robot model. The question that arises is whether the robot can achieve this behavior or not, which our method directly tries to address.
It is also worth repeating that the two main central assertions in this paper are as follows:
1. People don't necessarily start with a reward function. Rather, it is more plausible that they have some target behavior in mind that they want the robot/AI agent to perform.
2. The human and robot models could be different. This is, in fact, a very common case. Limited situational awareness, i.e., cases where the user's understanding of the task may be wrong, is a popular issue studied within human factors/psychology literature. The unfortunate fact is that as the complexity of the task increases, it might not be possible to completely resolve this discrepancy. This is a fact that is accepted by most explainable AI methods. This is one of the reasons why abstractions are a very popular tool within XAI.
Please let us know if this answers your questions. We would be more than happy to expand on any of the points discussed above.
---
Rebuttal 2:
Comment: Thanks for the additional explanation; this has helped clarify some things for me. I have two key uncertainties left, first about the intended scope and second about Theorem 1 (or more broadly about examples of expectation sets being a better perspective than rewards, but Theorem 1 seems a good candidate for that).
**Scope:** To make this concrete, let me describe a toy example I've been thinking about. Say we have a gridworld with a fixed start and goal state (for simplicity), and the human wants the robot to move to the goal as quickly as possible. The human (incorrectly) believes that the geometrically shortest path from start to goal is free of undesirable states. So the policy they expect is to walk directly from the start to the goal. As an expectation set, this might be expressed by saying that the occupancy frequency on all other states (not on this path) must be zero (or at least close). An example of a human-sufficient reward function would be a reward of +1 for reaching the goal and some negative reward for each time step.
Now assume that, in fact, there is lava in one of the states on the direct path (i.e. this is the correct robot model). If the human knew this, they'd like the robot to avoid the lava. Intuitively, it seems clear that the reward function above is misspecified (since it doesn't include the negative reward for lava). Similarly, the expectation set feels "misspecified" to me, since the only policy that satisfies the expectation set (under the true robot model) walks through lava. So my sense is that both reward functions and expectation sets suffer from exactly the same type of problem here. This is an example of what I had in mind when expressing concerns about whether expectation sets really improve the situation.
My current understanding is that this is *not* a type of misspecification/limited human knowledge you are addressing. Instead, you are *assuming* that the expectation set correctly expresses human preferences, and the only issue arises from incorrect beliefs about the transition function. Is that a good summary?
**Theorem 1/Examples:** I've looked in more detail at Theorem 1, since it seems like it could provide good examples motivating expectation sets from my perspective. (Apologies for not noticing this earlier.) Right now though, I don't follow the proof sketch.
If we only had two expectation elements, <S, >, 0> and <S', =, 0>, I think we could just set the reward on S' to something very negative and on S to something much larger. Then, if there is any policy that satisfies the expectation set, all optimal policies should satisfy the expectation set. (Let me know if that doesn't seem right.) So I think there are two ways to prove Theorem 1:
1. Construct a case where *no* policy satisfies the expectation set (under the robot domain). This would make the claim technically true but I'd interpret the meaning quite differently: this wouldn't be about reward misspecification, it would just be about humans having unrealistic expectations that can't be met by any policy.
2. Construct a more complex expectation set that doesn't have any representation as a reward function for a more "interesting" reason than just being fundamentally unsatisfiable.
If the proof sketch is meant to do 2., then it's not currently clear to me what this expectation set actually looks like (the part about needing to reward some states differently isn't yet obvious to me).
**Questions I still have:**
* Did I understand correctly that the example I describe is out of scope for the problems you're trying to solve? If not, could you say more about how expectation sets help in this specific example?
* The example I describe seems like a simplified version of how I think misspecification in the "Puddle" environment works in your experiments. Is that correct? This makes me think it would *not* be out of scope. Are you modeling things differently than I did above?
* Could you say more about the proof sketch for Theorem 1? (e.g. describe the actual example similar to my gridworld description above, or at least describe the expectation set more if a full construction is too lengthy)
I realize these are a lot of new questions/notes---your responses have helped me clarify my initial concerns into these hopefully more concrete uncertainties.
---
Rebuttal Comment 2.1:
Title: Response
Comment: Again, thank you so much for responding to our comments
Example:
So, in the example you provided, there is a question about whether the expectation was to reach the goal or to follow the exact path. Let's go with the second case and assume the human's expectation was, in fact, to pass through that exact path. In this case, the robot can never achieve it, just as with the reward function. Our method can detect that the expectation set cannot be achieved and let the human know. A purely reward-based system would try to optimize for the best reward estimate (or, worse yet, the original reward function); this is what an approach like IRD might do. Even though the IRD paper uses the cell with lava as an example, depending on the instance, it is possible the average reward does not penalize the lava cells enough to cause the agent to avoid them (for example, there might be other cells with unknown features that weren't in the human model either). The other advantage of expectations again goes back to the point about transferability and when the expectation set can be satisfied in the robot model. The same reward function might not lead to policies that satisfy the expectation set in both models, while different reward functions might exist that satisfy it in each model. The reverse, however, is not true: if the expectation set cannot be achieved in the robot model, no reward function can change that.
Yes, we are assuming that the expectation set captures the true human preferences.
Theorem1 proof:
Again, please note that we are fine with cases where no solutions exist. It is important to identify the problem and inform the user.
So, there are multiple ways to construct a counter-example showing the absence of a reward function that translates across domains even though a satisfying policy exists. The simplest is a case where rewards take the form $R: S \times A \rightarrow \mathbb{R}$ (note that this reward form has nothing to do with the generality of the MDP formulation or any of our methods). Now, let us assume the states to be achieved and avoided are ones where no actions are available (again consistent with the most general MDP construction, referred to as a control constraint in optimal control [1]). From the starting state, the agent has access to two actions: one that takes it to the goal state and one that takes it to the state to be avoided, both deterministically. In the human model, the reward function has to reward one action over the other. Let's assume that in the robot model, the dynamics of the two actions are reversed. Now, the previous reward function is no longer able to achieve the expectation set in the robot model.
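To make the construction concrete, here is a minimal executable sketch (the action names and reward numbers are our own illustrative assumptions): the same reward induces goal-reaching behavior in the human model but forbidden-state-visiting behavior in the robot model, even though a satisfying policy still exists there.

```python
# Sketch of the counter-example described above (illustrative numbers).
# A reward R(s0, a) whose optimal policy satisfies the expectations in the
# human model fails in the robot model, where the dynamics of the two
# actions are reversed -- yet a satisfying policy still exists there.
GOAL, FORBIDDEN = "goal", "forbidden"
human_T = {"a1": GOAL, "a2": FORBIDDEN}   # deterministic successors of s0
robot_T = {"a1": FORBIDDEN, "a2": GOAL}   # reversed dynamics
R = {"a1": 1.0, "a2": -1.0}               # rewards R(s0, a); the successor
                                          # states have no actions (no reward)

def optimal_action(T):
    # With one decision point, the optimal policy just maximizes R(s0, a).
    return max(T, key=lambda a: R[a])

# Human model: the reward is human-sufficient -- its optimal policy
# reaches the goal and avoids the forbidden state.
assert human_T[optimal_action(human_T)] == GOAL
# Robot model: the *same* reward now drives the agent into the forbidden state,
assert robot_T[optimal_action(robot_T)] == FORBIDDEN
# even though a policy meeting the expectation set exists (take a2).
assert robot_T["a2"] == GOAL
```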
[1] Bertsekas, Dimitri. Dynamic programming and optimal control: Volume I. Vol. 4. Athena Scientific, 2012. (http://www.athenasc.com/DP_Slides_2015.pdf page nine for reference) | Rebuttal 1:
Rebuttal: We thank the reviewer for all the comments and feedback. We are extremely happy that the reviewers found our paper well-written, novel, interesting, and useful. We will make sure to incorporate all suggestions, recommendations, and corrections. We have tried to provide specific answers to each reviewer. However, we wanted to take this global response to address some common points brought about by multiple reviewers. Please note that all citations that don’t start with the ‘ext’ prefix refer to citations from the paper.
Experiments: We want to emphasize that we evaluated against two methods in the experiments. The IRD method tries to find a single reward function, and the MMRQ-k tries to query the user to identify potential constraints. It is worth noting that the baseline query method we used failed to solve a single problem in the fairly generous time limit of 30 minutes we set (the worst-case average time for our method was 224.98 secs). Our choice of IRD was motivated by the fact that it is a prototype of a work that tries to make the case for a single true reward function. Even when it tries to find potential alternate reward function hypotheses that account for the behavior, it will fail to generate violation-free behavior as it doesn’t directly query the user about their expectations. At the same time, MMRQ-k results show how poorly the current query methods scale and the utility of leveraging information about the user's mental model.
Moving Beyond Grid-based Domains: The most direct way to move beyond such problems would be to adopt factored or feature-based representations [ext1]. Such representation schemes are typical in reinforcement learning and inverse reinforcement learning works [ext2]. They are also quite popular in AI-safety works (cf. [Zhang et al. 2018], [Saisubramanian et al. 2022], [Mahmud et al. 2023b]). Feature-based representations allow us to provide compact representations of extremely large state spaces. For example, a set of $k$ binary features can encode $2^k$ states. Under this scheme, the goal states and forbidden states would be identified by a set of features, and any state where those features are true is, in fact, considered a goal or forbidden state. Our formulation can be easily extended to a feature-based representation because a feature-based occupancy frequency can be calculated by marginalizing across the other features. As such, all the constraints and penalties considered in the various LP formulations can still be applied in this case. More interestingly, many aspects of our methods will be simplified by the use of a feature-based representation. For most standard planning problems, the number of features considered is much smaller than the largest state space considered here. This reduces the number of possible queries (because queries are done once per feature) and the number of tests we need to perform to identify potential forbidden-state and goal-state candidates (again done only once per feature).
[ext1] Brafman, Ronen I., and Carmel Domshlak. "Factored planning: How, when, and when not." AAAI. Vol. 6. 2006.
[ext2] Ng, Andrew Y., and Stuart Russell. "Algorithms for inverse reinforcement learning." Icml. Vol. 1. No. 2. 2000.
The attached PDF includes a table listing the number of constraints per problem instance. The IRD's violations should be compared against this number.
Pdf: /pdf/274a8fa694a78a2a58109b6f4f53275fbfd9fa95.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Epipolar-Free 3D Gaussian Splatting for Generalizable Novel View Synthesis | Accept (poster) | Summary: This paper tackles the generalizable NVS problem from sparse-view inputs. The main claim is that relying on epipolar geometry cues harms the performance, and, thus, it removes these geometric priors. It adopts the pre-training and supervised reconstruction/NVS training pipeline as in Dust3r. The authors verify the performance on RealEstate and ACID.
Strengths: - The proposed method demonstrates comparable performance to the current SoTA method MVSplat.
- I like the idea of not using epipolar geometry priors, but I do have some problems regarding the claims and technique. Please see details in weaknesses.
- I like the idea of doing ablations with different input-view overlap ratios, which is trying to convince me of their claim in the introduction and Figure 1.
Weaknesses: - Overclaim. The authors claim that the proposed method is the first epipolar-free work (Line 64), but there are actually a lot of prior works that are epipolar-free, especially those doing reconstruction from unposed images, where the epipolar lines cannot be computed without access to ground-truth poses. These works include SRT (the unposed version UpSRT), LEAP [1], PF-LRM [2] and Dust3r+Croco, where they do cross-view attention between different input views. LEAP also shows visualization results of the cross-view attention for implicit cross-view feature matching and correspondence. Besides, some lightfield-based methods are also epipolar-free. I don't know why the authors have already discussed SRT and Dust3r+Croco in the related work section but still think they are the first epipolar-free work. All previously mentioned works should be discussed properly.
[1] Jiang, Hanwen et al. “LEAP: Liberate Sparse-view 3D Modeling from Camera Poses.” ICLR 2024.
[2] Wang, Peng et al. “PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction.” ICLR 2024.
- Related work. More related work on generalizable NVS using gaussian splatting should be discussed, for example, GS-LRM [3].
[3] Zhang, Kai et al. “GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting.” ECCV 2024.
- Evaluation. The evaluation doesn't verify the claims of the authors. Even though the idea of Table 2 is good, I don't see a clear difference in the gain over the epipolar-aware baseline pixelSplat when the overlap is smaller.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The biggest concern is the novelty. This paper is a combination of DUSt3R (cross-view pre-training + 2-view geometry without epipolar priors) and generalizable NVS using Gaussian Splatting (e.g. MVSplat).
I believe the authors need to discuss the prior works more properly and better word their contributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Weaknesses 1. Overclaim Issue**
We would like to clarify any confusion regarding our use of the term "epipolar-free." This term specifically indicates our method's avoidance of epipolar sampling and cost volume techniques, which are commonly used in generalizable novel view synthesis (GNVS) tasks. While approaches like UpSRT, LEAP, and PF-LRM also do not utilize epipolar geometry, our method targets multiview GNVS tasks with precise camera poses. Thus, our approach is epipolar-free within the GNVS context but not entirely devoid of geometric information. As mentioned in the related work section, "Solving 3D Tasks using Geometry-free Methods," our focus is distinct from those methods that are entirely geometry-free, such as LEAP and PF-LRM, which do not directly apply to the GNVS tasks addressed in this paper.
We will reiterate our method's contributions and intuition. The novelty of our approach lies in the fact that most current multi-view GNVS methods heavily rely on epipolar line sampling or cost volume, which struggle to provide effective priors in non-overlapping and occluded areas. Therefore, we innovatively utilize a 3D cross-view pretraining model to obtain epipolar-free 3D priors and the cross-view Gaussians Alignment module to acquire accurate Gaussian attribute-matching features. The GNVS experiments demonstrate the advantages of our epipolar-free approach over existing methods dependent on epipolar line sampling or cost volume techniques.
### **Weaknesses 2. Related Work**
We appreciate your suggestion to include more related works on GNVS using Gaussian splatting. To the best of our ability, we have discussed 6 recent works on GNVS using Gaussian splatting, except for some studies very close to the submission date, such as GS-LRM. We will add discussions on GS-LRM and other relevant works to provide a more comprehensive context for our contributions.
### **Weaknesses 3. Evaluation**
We appreciate your valuable feedback on our evaluation section. We would like to emphasize that our method demonstrates clear advantages over both pixelSplat and MVSplat when the overlap is smaller.
Specifically, our method achieves an improvement of 0.5dB PSNR compared to pixelSplat, along with faster rendering speeds. This improvement is comparable to the PSNR gain over pixelSplat reported in the MVSplat paper on the RE10K dataset. According to the evaluation criteria in the MVSplat paper, a PSNR increase of 0.5dB is considered a relatively significant improvement.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. However, the rebuttal didn't solve my concerns, especially the over-claiming.
The rebuttal says "While approaches like UpSRT, LEAP, and PF-LRM also do not utilize epipolar geometry, our method targets multiview GNVS tasks with precise camera poses." As far as I know, SRT can also do GNVS with perfect poses without using epipolar geometry. Besides, LEAP or PF-LRM can also do GNVS, and actually using GT poses in this work is a stronger assumption. I cannot agree that this paper is the first epipolar-free GNVS work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your professional feedback. We believe our work does not involve any over-claiming.
Firstly, "epipolar-free" and "geometry-free" are two distinct concepts. In our method, "epipolar-free" means that epipolar line sampling is not required, but it does not mean that we completely avoid using geometric information. In fact, our Iterative Cross-view Gaussians Alignment method relies on the warp transformation formula from the known geometry, as shown in Eq. (5) of the manuscript. In contrast, methods like SRT and GS-LRM are entirely geometry-free, meaning they do not use geometric information at all, making them true "geometry-free" methods. Our "epipolar-free" approach is more akin to a semi-geometry approach, so SRT and GS-LRM do not fall under the category of epipolar-free methods as proposed in this paper. We will further clarify the distinction between epipolar-free and geometry-free methods in the related works section of the revised version.
Secondly, at this stage, unposed GNVS tasks and GNVS tasks with known poses are two different tasks. Here are three reasons to illustrate this:
1. Although pose-free GNVS methods can be applied to GNVS tasks with known GT poses, the current state-of-the-art methods, such as LEAP or PF-LRM, struggle to match the accuracy and efficiency of contemporaneous GNVS methods with known GT poses. These pose-free methods mainly compare themselves with known GT pose methods like pixelNeRF (CVPR 2021).
2. Pose-free GNVS methods are currently limited to simple object-level datasets (e.g., DTU or Omniobject3D). In contrast, our method not only conducts experiments at the complex scene level (RE10K) but also generalizes directly to object-level datasets (see Figure 1 and Table 1 in the rebuttal PDF).
3. The primary challenges of unposed GNVS methods and GNVS methods with known poses are different at this stage. Unposed GNVS methods need to address how to implicitly learn geometric relationships or recover camera poses without any known 3D priors. They often reduce task complexity through clever structured feature representations (e.g., Learned 3D Neural Volume in LEAP and Triplane in PF-LRM), but this comes at the cost of reduced model generalization. On the other hand, GNVS methods with known poses focus more on generalization ability and handling larger baselines. Our epipolar-free method is designed to enhance performance on larger baselines: eFreeSplat significantly outperforms SOTA GNVS methods in challenging areas such as regions with smaller viewpoint overlaps.
---
Rebuttal 2:
Comment: Thank you for your thoughtful response. Please find below our detailed comments on the novelty of our work and an analysis of the performance gain.
## **Novelty of the Work:**
* **Epipolar-Free Approach to Address Overlap Challenges.**
We have identified that mainstream multi-view GNVS methods, which rely on epipolar geometry, may encounter significant challenges in scenarios with limited viewpoint overlap. To address this, we explored the role of cross-view pretraining models that provide 3D prior knowledge for the GNVS task, and we introduced a cross-view mutual perception mechanism to partially mitigate the limitations of current methods.
* **Iterative Cross-view Gaussians Alignment Module.**
Beyond cross-view mutual perceptive pretraining, we have specifically introduced the Iterative Cross-view Gaussians Alignment module. We observed that relying solely on the CroCo pretraining model is insufficient to effectively address challenges such as ambiguity in GNVS, primarily due to the absence of precise local feature matching. Unlike previous generalizable NVS methods utilizing Gaussian Splatting (e.g., MVSplat), our approach circumvents the need for time-consuming and less generalizable structured feature representations, bypasses complex depth sampling processes, and enables efficient per-pixel Gaussian attribute prediction. The ablation experiments presented in Table 3 and Figure 6 demonstrate the critical importance of the Iterative Cross-view Gaussians Alignment module.
**Our approach not only effectively manages scenarios where epipolar geometry may fail but also eliminates the time-consuming process of sampling along epipolar lines. This makes our method particularly efficient and effective for GNVS tasks, especially in scenarios with limited viewpoint overlap.**
## **Performance Gain Analysis:**
With regard to Table 2 of the manuscript, the three subsets with overlaps below 0.7, 0.6, and 0.5 each represent challenging experimental conditions where epipolar geometry becomes unreliable, and it would be inaccurate to assert that one is inherently more challenging than the others. **Moreover, the performance gain across these subsets does not follow a linear progression.** The reasons are as follows:
* **Distribution Gap Between Subsets and Training Set.**
These three low-overlap subsets differ from the training set and constitute a small fraction of the test set. The majority of training data involves input viewpoints that overlap well above 0.7. In the test scenes, the ratio of scenes with overlaps greater than 0.7 to those below 0.7, 0.6, and 0.5 is 45:8:5:1. Thus, from a dataset distribution perspective, these three subsets exhibit significant variability, and the overlap size does not directly correlate with difficulty. This distribution gap explains why the performance gain remains similar across these subsets, as the model’s performance is more heavily influenced by distributional differences than by the specific overlap percentage.
* **Non-linear Error Growth with Decreasing Overlap.**
In 3D computer vision, local matching errors can propagate and affect the global output, meaning that overall error does not necessarily increase linearly as overlap decreases. This is because incorrectly matched features might be propagated as outliers into subsequent modules (especially in modules with global awareness, such as transformers), thereby amplifying their impact on overall performance. Once overlap falls below a certain threshold, traditional methods are significantly compromised. For instance, if the feature-point matching error rate is too high, SfM algorithms may fail. Consequently, scenes with overlaps below 0.7–0.5 all represent scenarios where epipolar geometry is unreliable, and there is no strictly linear relationship for such unreliability. Thus, the similarity in performance gains for overlaps of 0.5 and 0.7 reflects the model's consistent capability in managing these challenging scenarios where epipolar constraints are inadequate.
Finally, thank you again for your excellent suggestions, which have helped us to think more deeply about the essence of performance improvement in our method. We will incorporate this analysis of the performance gain into the revised manuscript.
---
Rebuttal 3:
Comment: Thanks for the detailed and helpful reply!
The motivation of using epipolar-free methods makes sense to me. While this field is super competitive, as there are many recent works, relaxing unnecessary inductive bias in geometric designs is promising to me, especially when large data is available. The competition in this field makes me a bit tired, as I feel most of the works are kind of incremental. I appreciate the effort to further improve the Dust3r training pipeline on the GNVS task using the proposed epipolar-free method and the iterative cross-view gaussian alignment method.
Regarding the performance analysis, maybe the reason for my previous question is the data distribution (45:8:5:1 as mentioned). The amount of data with a small overlap ratio, e.g. 0.5, may be too small, so the numbers are also noisy. I sincerely suggest the authors find some other data that are more balanced to perform the analysis, e.g. DTU or DL3DV, where you can control the overlap ratio, as the two datasets provide dense views. I won't request this experiment as the discussion period is coming to an end soon.
At the same time, I still want to let the authors know that I expect the prior works to be discussed correctly, especially removing controversial wording like "the first epipolar-free GNVS method". After the discussion, I don't want to reject this paper because of the over-claiming problem, as I can see the effort behind this paper.
Thus, I would like to raise the score to **a neutral borderline** (with a score of 4.5, due to the uncertainty of how the over-claiming problem will be resolved), and I am not against accepting this paper. As there is no neutral-borderline option for reviewers, I will keep the current rating for now, but I will let the AC know if I remain at a neutral borderline until the end of the discussion.
**At the same time, if you have better wording for resolving the controversial claims, please let me know before the discussion period ends. If the revised version is satisfying to me, I will raise the score to a borderline accept.**
---
Rebuttal Comment 3.1:
Comment: Thank you for your support of our work and for providing valuable suggestions. We have carefully considered the issues you raised and plan to revise them as follows:
### **Revision of Controversial Claims**
Upon careful review, the original claims regarding the novelty and contribution of our method were as follows:
* Our eFreeSplat represents a new **paradigm** for generalizable novel view synthesis.
* To our best knowledge, **the first multiview GNVS paradigm** that operates without relying on epipolar priors.
* a **groundbreaking** generalizable 3D Gaussian Splatting model designed for novel view synthesis across scenes, **free from the constraints of epipolar priors**.
* has inspired our development of a **novel GNVS paradigm** that circumvents the dependence on epipolar priors.
After an in-depth discussion during the rebuttal, we recognize the validity of your suggestions, which significantly aid in more accurately describing our contribution. We will revise the content as follows:
* Our eFreeSplat represents an innovative approach for generalizable novel view synthesis. Different from the existing pure geometry-free methods, eFreeSplat focuses more on achieving epipolar-free feature matching and encoding by providing 3D priors through cross-view pretraining.
* The approach with novel insights into GNVS that operates without relying on epipolar priors in the process of multi-view geometric perception.
* A novel generalizable 3D Gaussian Splatting model tailored for novel view synthesis across new scenes, designed to function independently of epipolar constraints, which might be unreliable when large viewpoint changes occur.
* has inspired our development of a novel GNVS method that circumvents the dependence on epipolar priors through data-driven 3D priors.
### **Revision of Discussion on Related Works**
After thorough review, we acknowledge that the original manuscript's description of closely related work included the following content but lacked discussion on methods like GS-LRM, which do not require epipolar line sampling:
* SRT[1] is a geometry-free, generalizable NVS method that boldly eschews any explicit geometric inductive biases. SRT encodes patches from all reference views using a Transformer encoder and decodes the RGB color for target rays through a Transformer decoder
After in-depth discussion in the Rebuttal, we agree that your suggestion is reasonable, and we will provide a more detailed description of related work in our revised version:
* SRT[1] and GS-LRM[2] are epipolar-free GNVS methods that boldly eschew any explicit geometric inductive biases. SRT encodes patches from all reference views using a Transformer encoder and decodes the RGB color for target rays through a Transformer decoder. GS-LRM's network, composed of a large number of Transformer blocks, implicitly learns 3D representations. However, due to the lack of targeted scene encoding, these methods are either limited to specific datasets or suffer from unacceptable computational cost and carbon footprint.
* Some pose-free GNVS methods[1][3][4] are also epipolar-free. These methods, lacking known camera poses, find it challenging to perform epipolar line sampling. They often reduce task complexity through clever structured feature representations (e.g., Learned 3D Neural Volume in LEAP[3] and Triplane in PF-LRM[4]), but this reduction comes at the cost of decreased model generalization. Different from the above methods, our proposed eFreeSplat focuses on data-driven 3D priors and does not require any time-consuming and complex structured feature representations, such as cost volumes.
As for the experiments on a more overlap-balanced dataset, due to time constraints, as you acknowledged, we regret that it is challenging to prepare such balanced data and train our model in time to include the analysis within the eFreeSplat work. Nevertheless, we commit to establishing such an experimental scene in the forthcoming revised draft and to eliminating the influence of data distribution gaps on the experimental results, thereby providing a more thorough analysis of our method's advantages in scenarios with significant viewpoint changes.
**We will do our utmost in the revised version to resolve the controversial claims and address all the issues mentioned above. Once again, thank you for your professional feedback and evaluation of our work.**
[1] Sajjadi M S M, Meyer H, Pot E, et al. “Scene representation transformer: Geometry-free novel view synthesis through set-latent scene representations” CVPR 2022.
[2] Zhang K, Bi S, Tan H, et al. “GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting” ECCV 2024.
[3] Jiang, Hanwen et al. “LEAP: Liberate Sparse-view 3D Modeling from Camera Poses.” ICLR 2024.
[4] Wang, Peng et al. “PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction.” ICLR 2024. | Summary: This paper addresses the task of 2-view generalizable novel view synthesis. It introduces a cross-view completion model as prior assistance and incorporates cross-view Gaussian alignment after predicting Gaussian attributes to enhance cross-view consistency.
Strengths: 1. The introduction of pretraining the cross-view completion prior for assisting the generalizable novel view synthesis task is an interesting and novel approach.
2. Extensive experiments, including comparisons with baseline methods such as pixelSplat/MVSplat, demonstrate that the proposed approach can generate higher-quality novel view images, particularly in scenes with smaller overlaps (Tab. 2, Fig. 5).
Weaknesses: The ablation analysis in Tab. 3 does not clearly identify which module contributes the most to the performance improvement.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Table 3 seems to indicate that all three modules are important. In the author's opinion, which module has the greatest impact on the performance improvement?
2. Does the iterative nature of the Cross-view Gaussians Alignment module slow down the training speed? Is choosing to iterate twice a trade-off between effectiveness and efficiency?
3. Is Cross-view Gaussians Alignment required during inference?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author provides a detailed discussion of limitations and social impacts in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Q1. The most influential module in Table 3**
The results in Table 3 show that the absence of the epipolar-free cross-view mutual perception results in the most significant performance decline. This highlights the crucial role of the pretraining and network structure of Croco in providing 3D priors and cross-view features. Additionally, as illustrated in Fig. 6, the exclusion of this module leads to noticeable artifacts in both novel view images and depth maps, underscoring its importance. We will add this analysis to the ablation study section in the paper.
### **Q2. Supplementary analysis of the Iterative Cross-view Gaussians Alignment module**
While the iterative nature of the Cross-view Gaussians Alignment module does slightly reduce the rendering speed during training or inference, it substantially enhances the reconstruction metrics. The supplementary experiments with iterations set to 1-3, as presented in the table below, demonstrate that iterating twice strikes a balance between effectiveness and efficiency, providing significant improvements without excessively increasing computational demands. For details, please see **Table 2 and Figure 2 in the Rebuttal PDF**.
### **Q3. Whether Cross-view Gaussians Alignment required during inference**
Yes, cross-view Gaussians Alignment is required and plays a crucial role during inference, and it does not require much time (less than 0.061s, as reported in Tab 1 of the manuscript). The scene representation is obtained from input perspective images through Epipolar-free Cross-view Mutual Perception and Cross-view Gaussians Alignment, which matches the scene representation trained by the single-scene optimized 3DGS method, allowing for direct inference of new viewpoints. As shown in Fig. 2 of the manuscript, the pipeline of eFreeSplat remains the same for both the training and testing sets: it first uses Croco-pretrained ViT to provide feature maps with 3D priors, then employs the Cross-view Gaussians Alignment module to acquire pixel-wise 3D Gaussian primitives for the 3D scene representation, and finally, the inference is performed using the original 3DGS tile-based rendering method.
---
Rebuttal Comment 1.1:
Comment: Thanks so much again for the time and effort in our work. May I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are more than happy to address them. Thanks again for taking the time to review our work and provide insightful comments. | Summary: This paper introduces a robust pipeline for generalizable 3D Gaussian novel view synthesis, utilizing a cross-attention model trained on large-scale datasets. This approach enables the model to generalize effectively to new scenes without depending on epipolar geometry, which is commonly used in traditional methods. To address inconsistencies in depth scales, the pipeline features an iterative refinement process that adjusts the attributes of 3D Gaussians based on cross-view feature correspondences.
Strengths: - The motivation is well-grounded. The paper identifies and tackles one of the core problems of the sparse 3D Novel view synthesis.
- The experiments show solid results on image pair settings, especially in sparse and non-overlapping scenarios. The evaluation convinces me of the usefulness of each component.
Weaknesses: - Overall, the paper has good ideas, but the experiments are not enough to support their claim. The paper claims to be a multiview generalizable novel view synthesis method, but most of the experiments are done in an image-pair setting; it should add more experiments in a sparse-view setting and compare with other sparse-view novel view synthesis methods on datasets like Mip-NeRF 360.
- Line 59, "obtain the warped features for each view based on the predicted depths via U-Net", and formula (5) seem similar to the coarse-to-fine cost volume formulation utilized in MVSNet. Could you elaborate on how these two are different?
Technical Quality: 2
Clarity: 3
Questions for Authors: - In formula (7) , the C is used without definition. I assume it is the dimensionality of the feature of 3D Gaussian primitive? It should be made clearer.
- Since you are using a Croco pre-trained model could you give some metrics on 512 x 512 resolution?
- Given the significant memory demands of the global alignment process in DUSt3R, could you detail the memory requirements for your method? It would be helpful to understand if similar memory constraints apply to your approach.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The author have made a commendable effort in acknowledging both the technical limitations and potential negative societal impacts of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Weaknesses 1. Missing experiments in a sparse-view setting and comparison with other sparse-view methods**
We thank you for identifying the novelty of our idea. Nevertheless, perhaps we did not illustrate the difference between GNVS and sparse-view NVS clearly enough, which may have obscured the focus of our experiments. Please allow us to reiterate the concept of the generalizable novel view synthesis (GNVS) task:
The GNVS task aims to render new viewpoint images by leveraging the generalization ability of cross-scene 3D representations. When encountering new scenes not present in the training set, this process requires no additional single-scene optimization, achieving rendering through a single network feed-forward pass. Therefore, sparse-view novel view synthesis and GNVS are distinct tasks. The methods and datasets we compare in our experiments are commonly used settings in the GNVS task.
Moreover, eFreeSplat is theoretically straightforward to extend to multi-view inputs. Our choice of datasets and dual-view input settings is solely for fairer comparisons with current GNVS methods. For instance, recent works such as pixelSplat and MVSplat also use dual-view input in their experimental settings, with their datasets being RE10K and ACID. Through fair experiments, we have validated our method's generalization capability and its advantages in reconstructing challenging regions.
### **Weaknesses 2. Differences between Eq. (5) and cost volume construction method in MVSNet**
Here, we provide a detailed explanation of the differences between Equation (5) and the coarse-to-fine cost volume construction method in MVSNet.
Equation (5) represents a commonly used warp transformation formula in multi-view geometry. Unlike MVSNet, which requires sampling along the canonical view ray N times to obtain N depth values $\{d_{i}\}_{i=1}^{N}$ when constructing the cost volume, our method does not require multiple depth samples. Specifically, in each iteration, our method directly predicts the coarse depth value $d$ using a U-Net network (as detailed in Equation (4)), rather than sampling each depth plane to calculate the cost volume as MVSNet does.
The intuition behind this approach stems from the 3D prior knowledge provided by the CroCo pretrained model, which replaces the multiple depth sampling process required in the MVSNet cost volume. By leveraging CroCo's 3D priors, we can effectively obtain multi-view feature point matching relationships in each iteration, simplifying computation and improving efficiency.
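For concreteness, the single-depth warp described above (versus sampling N candidate depth planes for a cost volume) can be sketched as follows; the pinhole intrinsics, pose, and function name here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def warp_pixel(u, v, depth, K_src_inv, K_tgt, R, t):
    """Warp pixel (u, v) into the other view using one predicted depth.

    Unproject with the source intrinsics, apply the known relative pose
    [R | t], and project with the target intrinsics -- a single warp per
    pixel, with no sampling over N candidate depth planes.
    """
    p_src = depth * (K_src_inv @ np.array([u, v, 1.0]))  # 3D point, source frame
    p_tgt = R @ p_src + t                                # move to target frame
    uvw = K_tgt @ p_tgt                                  # project to target image
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# With an identity relative pose the pixel maps onto itself, whatever the depth.
K = np.eye(3)
u2, v2 = warp_pixel(10.0, 5.0, 2.0, np.linalg.inv(K), K, np.eye(3), np.zeros(3))
```

A cost-volume approach would instead evaluate this warp for each of N hypothesized depths per pixel; predicting $d$ directly removes that inner loop.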
### **Q1. Missing definition**
Thank you very much for your thorough review and valuable suggestions. We sincerely apologize for the omission of the definition of $C$ in Equation (7). Your assumption is indeed correct: $C$ represents the feature dimension of the 3D Gaussian primitives. We will include a clear definition of $C$ in the revised manuscript.
### **Q2. More experiments on 512 x 512 resolution**
Thank you for your valuable feedback. The baseline methods use the RE10K and ACID datasets, which have a resolution of 360 x 640, lower than 512 x 512. For a fair comparison, we did not conduct experiments at other resolutions that were different from the baselines and other popular methods.
According to our research, current 3DGS-based GNVS methods all use fixed, lower-resolution datasets for testing. Although high resolution and variable resolution are not the focus of this study, they could indeed make models more applicable in real-world scenarios, so we have included them in our future work plans. Thank you again for your suggestion.
### **Q3. The memory requirements**
Thank you for your valuable suggestion. We have added ablation experiment results for iteration counts of 1-3, including comparisons of memory usage, rendering speed, and reconstruction metrics. For details, please see **Table 2 and Figure 2 in the Rebuttal PDF**.
The experiments show that while our method's memory usage varies with different iteration counts, the overall memory demand remains relatively low. Compared to DUSt3R, our method requires less memory and achieves better rendering efficiency and reconstruction accuracy with two iterations.
---
Rebuttal Comment 1.1:
Comment: Thanks so much again for the time and effort in our work. May I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are more than happy to address them. Thanks again for taking the time to review our work and provide insightful comments. | Summary: This work presents eFreeSplat, a model to address generalizable novel view synthesis without relying on epipolar prior. Specifically, it extracts the cross-view mutual perception by leveraging a pre-trained CroCo model. It then improves the alignment of multi-view Gaussians by using an iterative updating strategy. Finally, novel views are rendered by an off-the-shelf 3DGS renderer. Experiments on two benchmarks, RE10K and ACID, demonstrate the effectiveness of the introduced eFreeSplat.
Strengths: * The motivation of using the pre-trained model to address the non-overlapping and occluded regions is interesting.
* Extensive experiments showcase that eFreeSplat achieves significantly better results on scenes where input views contain less overlap, which is well-aligned with the motivation.
* The paper is well-written and easy to follow.
Weaknesses: * Performances regarding cross-dataset generation. Similar to the cross-dataset experiments shown in MVSplat, it would be interesting to see how eFreeSplat performs when trained on RE10K but tested on ACID and DTU. This experiment will help us better understand how such a data-driven approach generalizes across different data distributions.
* It would be better to provide a more detailed analysis of the Iterative Cross-view Gaussians Alignment module. For example, showcase the depth maps of both input views when setting the iteration to 1, 2, and 3. This would be helpful for verifying whether the iterative solution can really help align the depth scale across multiple views.
* Will it be beneficial to perform a fine-tuning of the CroCo model? As mentioned in the failure case, eFreeSplat performs worse on cases with significant overlaps, potentially because CroCo is trained on image pairs with slight overlaps. Can these failure cases be addressed by fine-tuning CroCo on the RE10K training set? Similarly, as reported in Tab. 3, the performance of eFreeSplat w/o pre-training weights is much worse compared to other state-of-the-art models, while MVSplat achieves reasonably good results even when training from scratch. Can this gap be reduced by pre-training the backbone model using the RE10K training set?
Technical Quality: 3
Clarity: 3
Questions for Authors: Kindly refer to the [Weaknesses]
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Kindly refer to the [Weaknesses]
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Weaknesses 1. Performances regarding cross-dataset generation**
Thank you for your valuable suggestions on our work. We fully agree with your view on the importance of cross-dataset testing and, following your advice, have added cross-dataset test results on the DTU dataset. For specific experimental results, please see **Table 1 and Figure 1 in the Rebuttal PDF**.
These tests further validate eFreeSplat's generalization ability across different data distributions. The results demonstrate that even though training was conducted on the RE10K dataset, eFreeSplat performs excellently on the DTU dataset, which further confirms the robustness and generalization performance of our method.
### **Weaknesses 2. Supplementary analysis of the Iterative Cross-view Gaussians Alignment module**
Thank you for your valuable suggestions. We have added ablation experiments for iteration counts of 1-3 and presented the corresponding depth maps based on your feedback. For specific experimental results, please see **the Table 2 and Figure 2 in the Rebuttal PDF**.
The results indicate that setting the iteration count to 2 achieves the best balance between reconstruction accuracy and rendering efficiency. When the iteration count is set to 3, there is no significant improvement in image reconstruction metrics, which may be due to training overfitting and the subsequent iterations losing the 3D perception provided by the original CroCo features.
### **Weaknesses 3. Fine-tuning of the CroCo model**
Thank you for your valuable suggestions. We conducted relevant Experiments A and B regarding fine-tuning the CroCo model using the RE10K dataset. Experiment A involved fine-tuning the CroCo pretrained weights with the RE10K training set, while Experiment B involved training CroCo directly with the RE10K training set without loading the pretrained weights. Finally, we retrained eFreeSplat using the new pretrained weights. The CroCo pretraining for Experiments A and B was performed on 2 RTX 4090 GPUs, with total iterations of 4000 and 6000, respectively (due to time constraints), learning rates of 2e-5 and 2e-4, and a batch size of 12. The viewpoint overlap was set the same as in the main model training.
For the quantitative and qualitative results of Experiments A and B, please see **Table 3 and Figure 3 in Rebuttal PDF**.
The results indicate that pretraining the backbone model on the RE10K training set effectively addresses the model's poor performance in low-overlap scenarios. However, in the RE10K test set, Experiment A's reconstruction metrics were slightly lower than those of the original model, which may be due to insufficient training iterations. We will further investigate the positive impact of fine-tuning the CroCo pretrained model on novel view synthesis and 3D reconstruction in future work.
---
Rebuttal 2:
Comment: Thanks so much again for the time and effort you have spent on our work. May I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are more than happy to address them. Thanks again for taking the time to review our work and provide insightful comments.
Rebuttal: We thank the reviewers for their thoughtful feedback! We are encouraged that they found our motivation for using a pre-trained model to address non-overlapping and occluded regions interesting and well-grounded (w4rj, MtQr, 1Kz3). They also praised the paper's clear, well-written presentation (w4rj) and the novelty of introducing the cross-view completion pretraining prior to assist the generalizable novel view synthesis task (AnyG). Additionally, they appreciated the beneficial improvements demonstrated through extensive experiments, particularly in scenarios with sparse and non-overlapping input views (w4rj, MtQr, AnyG). We address the raised questions below and will incorporate all responses in the revision.
Pdf: /pdf/30506e71dd8b4c0e035e4805fbba992549bd4c84.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fast and Memory-Efficient Video Diffusion Using Streamlined Inference | Accept (poster) | Summary: In this paper, the authors focus on reducing the computational requirements and high peak memory usage for video diffusion models. Specifically, a train-free framework is proposed, which consists of three parts: Feature Slicer, Operator Grouping, and Step Rehash. Those three steps result in significant memory reduction and inference acceleration.
Strengths: 1. The proposed framework has a great performance for reducing peak memory and accelerating the inference of video diffusion models. In particular, the peak memory of AnimateDiff can be reduced significantly from 41.7GB to 11GB, which can contribute to more practical applications.
2. The pipeline of the proposed framework is easy to understand and is well-documented.
Weaknesses: 1. More base models with other frameworks of video diffusion should be involved for more comparisons. The two baselines used in experiments, i.e., SVD and AnimateDiff, are both based on T2I diffusion models and Unet backbone. Is the proposed framework suitable for other diffusion models with DiT backbone[1] or 3D diffusion, like Open-Sora Plan v1.1(can be found in Github).
2. Human-level metrics should be involved to compare video visual quality in Table 2. The FVD and CLIP-Score may not be enough to measure performance. Based on the cases shown in Figures 1 and 7, I am inclined to believe that the original base model performs better.
[1]: Peebles W, Xie S. Scalable diffusion models with transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.
Technical Quality: 4
Clarity: 4
Questions for Authors: My main question is about the generalization of the proposed framework. Based on my understanding, most contributions of this paper are “engineering”. It is important to verify the performance on various base models, such as diffusion models with DiT backbone and 3D diffusion models.
In addition, if I want to use this framework, do I need to modify a lot of content in the code?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for recognizing the strengths of our papers and providing valuable feedback. We are happy to address the raised questions as below.
---
#### **W1. Generalization on other variants and backbones.**
We agree that a more comprehensive evaluation could help demonstrate the generalization ability of our framework. We evaluate DiT, ModelScope and VideoCrafter with our method, and demonstrate that our method is general and can be applied to multiple video diffusion models with different architectures.
Please refer to Global **A3** Table 5~8 for more detailed results.
---
#### **W2. More evaluation metrics and video quality.**
We agree that it is important to provide a more comprehensive evaluation. Please refer to Global rebuttal **A1**, **A2** and **A4** for more detailed results. We include a human evaluation in Table 4 of Global rebuttal **A2**; other evaluation metric results can be found in Tables 2 and 3 of Global rebuttal **A2**. As discussed in Global rebuttal **A1**, video quality can be **maintained** by slightly increasing the number of full computation steps in Step Rehash while still **retaining a good amount of inference acceleration**. In our paper, we use **fewer full computation steps** to **push the limits** of Step Rehash. Our quantitative results in Table 1 of Global rebuttal **A1** and visual demonstrations in Global rebuttal **A4** show that our method leads to high-quality video generation at reduced cost.
---
#### **Q1. Generalization of our method and contributions clarification.**
First, we want to clarify that our work addresses the memory-bound issues of video diffusion, caused by users' limited VRAM resources, in a novel way. Previous training-free works focus on sampling efficiency without considering the huge peak memory requirement of diffusion models. General techniques like model compression take substantial effort to retrain or finetune the diffusion model to recover performance, which is costly, time-consuming, and may raise data privacy concerns. Furthermore, applying post-training compression techniques in one shot may save the retraining/fine-tuning effort, but suffers from significant performance degradation.
Our work leverages the **system and algorithm co-design** (where the feature slicer + operator grouping can collaborate with step rehash without any interference) to provide a new direction for providing a fast and memory-efficient diffusion **without training**.
Last but not least, we would like to kindly clarify that our method could be applied to DiT backbones without any design changes. Please refer to Global rebuttal **A3** Table 6 and Table 8 for more detailed results.
---
#### **Q2. In addition, if I want to use this framework, do I need to modify a lot of content in the code?**
We have implemented our method on all spatial-temporal 3D blocks in `diffusers` and will release a Python wrapper that automatically replaces the original blocks with our fast and memory-efficient blocks in various video diffusion models. Our implementation extends the original blocks and can seamlessly replace them without changing the diffusion framework, so it is easy to adopt.
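As an illustration of this kind of drop-in replacement, consider the following minimal sketch. The names `OriginalBlock`, `EfficientBlock`, and `wrap_model` are hypothetical stand-ins, not the authors' actual API; the point is only that a subclass can extend the original block and be swapped in without changing the surrounding framework.

```python
# Minimal sketch of a drop-in block-replacement wrapper.
# All class/function names here are illustrative, not the released API.

class OriginalBlock:
    """Stands in for a spatial-temporal 3D block in a diffusion model."""
    def forward(self, x):
        return [v * 2 for v in x]

class EfficientBlock(OriginalBlock):
    """Extends the original block; here it processes the input in slices
    (mimicking the memory-efficient variant) while producing the same output."""
    def __init__(self, slice_size=2):
        self.slice_size = slice_size

    def forward(self, x):
        out = []
        for i in range(0, len(x), self.slice_size):
            # Reuse the original block's computation on each slice.
            out.extend(super().forward(x[i:i + self.slice_size]))
        return out

def wrap_model(blocks):
    """Swap every OriginalBlock instance for its memory-efficient subclass."""
    return [EfficientBlock() if type(b) is OriginalBlock else b for b in blocks]

model = wrap_model([OriginalBlock(), OriginalBlock()])
result = model[0].forward([1, 2, 3, 4])  # same output as the original block
```

Because `EfficientBlock` inherits from `OriginalBlock`, any code that type-checks or calls the original interface continues to work unchanged.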
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I'm glad to improve my score.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you very much for taking the time to review our paper and for acknowledging the performance of our framework. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try our best to reply to you before the discussion deadline.
Thank you very much,
Authors
---
Rebuttal 3:
Comment: We sincerely thank the reviewer for acknowledging our responses and improving the score! We will add all these constructive suggestions in the final version of our paper. | Summary: This paper proposes a training-free video diffusion inference acceleration method, which includes three processes: Feature Slicer, Operator Grouping, and Step Rehash. Compared to the baseline, the proposed method shows significant improvements in memory usage and inference speed.
Strengths: - The method description is straightforward and easy to understand.
- The proposed method is effective, simple, and easy to implement, and it shows significant improvements.
- The problem addressed by the proposed method is critical.
Weaknesses: - The experiments need to be improved. Step Rehash can greatly accelerate the inference speed, but it seems to be highly dependent on the weights of the video diffusion model. It is unclear whether the proposed method can be applied to most video diffusion models. Therefore, the authors need to compare more types of models and provide statistical information on the performance changes, such as mean and variance.
- The proposed method shows a visible loss in video generation capability, as seen in Figure 7 where the owl's eyes disappear and the overall texture quality of the synthesized image decreases in the optimized version.
Technical Quality: 3
Clarity: 3
Questions for Authors: - From the images, it is clear that the quality of some of the synthesized images has decreased. Based on my experience, this decrease in quality may have a more severe impact on video synthesis tasks. The authors did not seem to provide video demos, so it is difficult to judge how severe the loss in performance is.
- What is the relationship between the optimization method and batch size used during inference, and how do they affect the optimization speed and memory usage?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I think this paper addresses an important problem and appears to have significant improvements. However, the degree of loss in synthesis quality is not clear, which is an area of concern for me. The loss in quality seems to be less noticeable in SVD, but more significant in AnimateDiff (e.g., the owl in Figure 7, the bus in Figure A6, and the parrot in Figure A5).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for recognizing the strengths of our papers and providing valuable feedback. We are happy to address the raised questions as below.
---
#### **W1. The experiments need to be improved.**
Please refer to Global rebuttal **A3** and **A2** for more detailed results.
We agree that a more comprehensive evaluation helps demonstrate the generalization ability of our framework. Therefore, we conducted comprehensive experiments on various model architectures (Global rebuttal **A3**) and report additional evaluation metrics with statistical information (Global rebuttal **A2**). Our extended experimental results show that our framework generalizes to different backbones/architectures and significantly reduces peak memory and computation cost for video diffusion model inference.
We would also like to kindly point out that we already report the mean FVD; FVD is a distribution-level metric and does not support per-sample variance evaluation.
---
#### **W2. Concern about maintaining video quality.**
Please refer to Global rebuttal **A1** and **A4** for more detailed results. As discussed in Global rebuttal **A1**, video quality can be **maintained** by slightly increasing the number of full computation steps in Step Rehash while still **retaining a good amount of inference acceleration**. In our paper, we use **fewer full computation steps** to **push the limits** of Step Rehash. Our quantitative results in Table 1 of Global rebuttal **A1** and visual demonstrations in Global rebuttal **A4** show that our method leads to high-quality video generation at reduced cost.
---
#### **Q1. Concern about maintaining video quality.**
Please refer to Global rebuttal **A1** and **A4**.
---
#### **Q2. Relationship between the optimization method and batch size.**
We thank the reviewer for raising this valuable question. We implemented this and found that our method handles scalability well in terms of batch size.
Our slicing strategy is adjusted automatically according to the batch size and input size.
Below is the performance benchmark on batch size = 1,2,4 for AnimateDiff with 512x512 output.
- Table A, when batch size = 1,2,4
| **Model** | **Batch Size** | **Peak Mem** | **Latency** |
|:-:|:-:|:-:|:-:|
| AnimateDiff+Ours | 1 | 7.51G | 7.08s |
| AnimateDiff+Ours | 2 | 8.30G | 14.07s |
| AnimateDiff+Ours | 4 | 11.35G | 27.05s |
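The intuition behind why sliced execution bounds peak memory, regardless of batch or input size, can be sketched as follows. This is an illustrative Python toy only, not the paper's implementation: the actual slicer operates on GPU tensors, and `fused_op` here stands in for a grouped sequence of elementwise operators.

```python
# Illustrative sketch: processing a large feature buffer in slices keeps the
# largest live intermediate small, which is the intuition behind combining
# the Feature Slicer with Operator Grouping.

def fused_op(chunk):
    # A grouped sequence of elementwise operators applied to one slice.
    return [(v + 1) * 2 for v in chunk]

def run_full(features):
    intermediate = fused_op(features)       # whole-buffer intermediate is live
    return intermediate, len(intermediate)  # (result, peak buffer size)

def run_sliced(features, slice_size):
    out, peak = [], 0
    for i in range(0, len(features), slice_size):
        chunk = fused_op(features[i:i + slice_size])
        peak = max(peak, len(chunk))        # only one slice is live at a time
        out.extend(chunk)
    return out, peak

features = list(range(16))
full_out, full_peak = run_full(features)           # peak buffer: 16 elements
sliced_out, sliced_peak = run_sliced(features, 4)  # peak buffer: 4 elements
assert full_out == sliced_out                      # results are identical
```

Doubling the batch size in this toy only grows the output accumulator, not the per-slice working set, which mirrors why peak memory in Table A above grows slowly with batch size.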
---
#### **L1. Concern about maintaining video quality.**
Please refer to Global rebuttal **A1** and **A4**.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you very much for taking the time to review our paper and for acknowledging the contributions we've made. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try our best to reply to you before the discussion deadline.
Thank you very much,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for your responses, which have addressed some of my concerns. I will raise my score.
---
Reply to Comment 2.1.1:
Comment: We sincerely thank the reviewer for recognizing that the concerns have been addressed and raising the score! We will add all these constructive suggestions in the final version of our paper. | Summary: This paper presents a framework for reducing the computational demands of text-to-video diffusion models. The main idea involves dividing input features into subfeatures and processing them parallelly, thereby reducing peak memory usage during sampling. To address the increase in overall compute time caused by this partitioning, the authors propose a skip strategy that determines when and where to apply the skip operation based on feature similarities.
Strengths: 1. The paper is well-written and motivated.
2. The proposed method is intuitive.
Weaknesses: While the paper is well-motivated for the important problem and proposes an intuitive and simple remedy, there are major concerns regarding the evaluation, scalability, and applicability of the proposed method, as follows:
**Scalability and Applicability**
- My major concern is the scalability of the proposed method. Since the method requires dividing the input features into patches, it inevitably increases the computation as the feature dimension gets larger, potentially introducing more patches to compute. While the proposed skip strategy might mitigate some of the increased computation time, it definitely harms the quality of the generated videos.
- In terms of applicability, the proposed framework is only applicable to U-Net-based video diffusion models. However, recent models, such as those utilizing DiT backbones, are not addressed by this framework.
**The evaluation is weak**
- Lack of baselines: The authors only compare their method to naive hashing, which is the simplest baseline. To verify the effectiveness of the proposed method, please include existing methods for efficient sampling that also reduce memory usage and sampling time. The authors only compare with DeepCache. They should compare with more training-free baselines, even though these primarily demonstrate their efficiency in image diffusion models. If not applicable, discuss why extending these approaches to text-to-video diffusion models is not straightforward in the Related Work section, and also discuss the differences with existing baselines.
- Weak evaluation with DeepCache: It is unclear why the authors made a comparison with DeepCache under the same computational step. The authors should define what they mean by "computational step" and compare with DeepCache in terms of efficiency metrics, as in Table 1. Also, metrics such as MACs and GFLOPs should be included to measure efficiency.
- Evaluation protocol: The current evaluation using FVD and CLIP score is insufficient. It would be beneficial to include more comprehensive video quality evaluation metrics, such as those proposed in VBench [1].
- Additional experiments: Include more video diffusion backbones, such as ModelScope and VideoCrafter, to extensively verify the effectiveness of the proposed method under various video diffusion models.
[1] Huang et al., VBench: Comprehensive Benchmark Suite for Video Generative Models, CVPR 2024
Technical Quality: 2
Clarity: 3
Questions for Authors: What is the "computational step" in Table 2?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See the Weaknesses Section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the feedback from the reviewer. First, we would like to kindly clarify some misunderstandings here:
1. We provide the definition of the "full computation step" in lines 265-266.
2. Our method can be applied to DiT backbones without any design changes.
3. Our work takes an early step toward a fast and memory-efficient inference framework in a training-free manner.
We address the raised questions as below.
---
**W1. Scalability concern.**
We would like to point out that our method actually mitigates I/O overhead, which addresses the scalability issue, as stated in Section 4.2.1, **Mitigating I/O intensity**. Moreover, we introduce the pipeline to mitigate the overhead of the additional patches, as stated in Section 4.2.2. Overall, our experimental results use 3x576x1024 inputs with up to 16 frames, which is considered large-scale video generation for current open-source SOTA video diffusion models.
We provide a detailed breakdown below. As it shows, even at 576x1024 with 16 frames, our method without Step Rehash still mitigates the overhead introduced by slicing. We use 15 full computation steps to maintain the quality of the generated videos.
- Table A, proposed method breakdown
|Model|Method|speed up|Peak Mem. (576x1024)|FVD (UCF101)|CLIP-Score (UCF101)|
|-|:-:|:-:|:-:|:-:|:-:|
|SVD|-|-|39.49G|307.7|29.25|
||+ Feature Slicer|x0.95|39.49G|307.7|29.25|
||+ Feature Slicer + Operator Grouping|x0.98|23.42G (-40.7%)|307.7|29.25|
||+ Feature Slicer + Operator Grouping + Pipeline|x1.03|23.42G (-40.7%)|307.7|29.25|
||+ Feature Slicer + Operator Grouping + Pipeline + Step Rehash|x1.46|23.42G (-40.7%)|312.1|29.20|
|AnimateDiff|-|-|41.71G|758.7|28.89|
||+ Feature Slicer|x0.94|41.71G|758.7|28.89|
||+ Feature Slicer + Operator Grouping|x0.96|11.07G (-73.5%) |758.7|28.89|
||+ Feature Slicer + Operator Grouping + Pipeline|x1.03|11.07G (-73.5%)|758.7|28.89|
||+ Feature Slicer + Operator Grouping + Pipeline + Step Rehash|x1.45|11.07G (-73.5%)|765.01|28.87|
We also provide results for scalability of batch_size to further demonstrate our method. Here are the scalability results of our method when batch size is not 1. Our slicing strategy will be accordingly adjusted based on the batch size and input size. Below is the performance benchmark on batch size = 1,2,4 for AnimateDiff with 512x512 output. We can see that when the batch_size becomes 4, the peak memory does not increase significantly compared with the batch_size of 1.
- Table B, scalability under batch size = 1,2,4
|Model|Batch Size|Peak Mem|Latency|
|-|:-:|-|-|
|AnimateDiff+Ours|1|7.51G|7.08s|
|AnimateDiff+Ours|2|8.30G|14.07s|
|AnimateDiff+Ours|4|11.35G|27.05s|
---
**W2. Generalization to DiT backbones.**
Please refer to Global rebuttal **A3** for more detailed results.
We would like to kindly clarify that our method could be applied to DiT backbones without any design changes.
This is due to the general design of our Feature Slicer on spatial-temporal 3D blocks in video diffusion models. We would also like to point out that Operator Grouping, Pipeline, and Step Rehash are general techniques. Our results on DiT backbones (OpenSora) demonstrate stable generalization across various backbones/architectures, consistently improving efficiency while maintaining video quality.
---
**W3. Lack of baselines.**
We would like to kindly clarify some misunderstandings here. Our work addresses the memory-bound issues of video diffusion, caused by users' limited VRAM resources, in a novel way. Current sampling methods focus on improving sampling efficiency without considering the huge peak memory, where "peak memory" is the **maximum** amount of memory used by the processes over the **entire** run. Also, DeepCache is the most recent training-free baseline, and it introduces a novel direction for improving inference speed **only** in image diffusion models. General techniques like model compression for fast and memory-efficient models take substantial effort to retrain or finetune the diffusion model to recover performance. To the best of our knowledge, our work takes an early step toward a fast and memory-efficient inference framework in a training-free manner.
We surveyed several methods that reduce the number of diffusion inference steps, such as the DPM-Solver and Euler solvers. However, they cannot reduce the peak memory during inference, which we highlight as one of the main contributions of our method.
We would be thankful if the reviewer could provide more details on the missing training-free baselines for memory efficiency in diffusion models, so that we could include more comprehensive experimental results.
---
#### **W4. Weak evaluation with DeepCache.**
We would first like to clarify that we define the "full computation step" in lines 265-266: these are the steps that cannot be skipped, while the remaining steps use Step Rehash and have most of their computation skipped, as stated in Section 4.3.2. Moreover, we would like to point out that our method saves more GFLOPs under the same number of computation steps, as stated in lines 321-322. We provide detailed results below.
- Table C, Computation Comparison
|Model|Method|512x512|576x1024|
|-|-|-|-|
|SVD|Original|8.47T|20.40T|
||DeepCache|5.68T|14.06T|
||Ours|5.75T|13.25T|
|AnimateDiff|Original|8.37T| 20.19T|
||DeepCache|5.76T|14.09T|
||Ours|5.53T|11.91T|
---
**W5. Evaluation protocol.**
Please refer to Global rebuttal **A2**, Table 2~4.
---
**W6. Additional experiments.**
Please refer to Global rebuttal **A3**, Table 5 and Table 7.
---
**Q1. What is the "computational step" in Table 2?**
We would like to point out that we define the "full computation step" in lines 265-266: these are the steps that cannot be skipped, while the remaining steps use Step Rehash and have most of their computation skipped, as stated in Section 4.3.2.
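For intuition, the skip-or-compute logic behind this distinction might be sketched as follows. This is a hedged illustration, not the paper's implementation: the function names, the cosine-similarity measure, and the threshold value are all assumptions made for the sake of a runnable example.

```python
# Hedged sketch of a step-skipping scheme: cache the output of a "full
# computation" step and reuse it while subsequent steps' features remain
# similar enough. Names and the similarity test are illustrative.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def denoise_with_rehash(step_features, expensive_fn, threshold=0.99):
    """Run expensive_fn only on steps whose features differ enough from the
    last fully computed step; otherwise reuse the cached output."""
    cached_feat, cached_out = None, None
    outputs, full_steps = [], 0
    for feat in step_features:
        if cached_feat is not None and cosine_similarity(feat, cached_feat) >= threshold:
            outputs.append(cached_out)       # skipped step: reuse cache
        else:
            cached_feat = feat
            cached_out = expensive_fn(feat)  # "full computation" step
            full_steps += 1
            outputs.append(cached_out)
    return outputs, full_steps

# Two near-identical feature vectors followed by a very different one:
# only the first and third steps trigger full computation.
steps = [[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]]
outs, n_full = denoise_with_rehash(steps, sum)
```

The "computational step" count compared in Table 2 then corresponds to `full_steps` in this sketch, since the skipped steps cost almost nothing.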
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed responses. My concerns have been thoroughly addressed, and I have accordingly increased my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for recognizing that the concerns have been thoroughly addressed and increasing the score! We will add all these constructive suggestions in the final version of our paper.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you very much for spending time reviewing our paper. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try our best to reply to you before the discussion deadline.
Thank you very much,
Authors
---
Rebuttal 3:
Comment: Dear Reviewer,
As we enter the final day of the Reviewer-Author discussion, please take a moment to review the authors' rebuttal and consider the comments from other reviewers. If you have any additional questions, now is the time to ask them to help you better evaluate the paper. Thanks! | Summary: The paper introduces a novel, training-free framework to optimize video diffusion models. This framework, consisting of Feature Slicer, Operator Grouping, and Step Rehash, significantly reduces peak memory usage and computational overhead while maintaining video quality. Extensive experiments demonstrate that the approach can cut memory usage by up to 70% and improve inference speed by 1.6 times compared to baseline methods. The framework is compatible with existing models like AnimateDiff and SVD, enabling high-quality video generation on consumer-grade GPUs. The research paves the way for more efficient video diffusion models, making advanced video generation accessible on standard hardware.
Strengths: - The introduction of a training-free framework that optimizes video diffusion models seems a novel approach.
- The method is compatible with existing video diffusion models like AnimateDiff and SVD, ensuring broad applicability.
- Comprehensive experiments and detailed analysis demonstrate the framework's effectiveness and robustness.
Weaknesses: - Although the paper claims that video quality is maintained, the extent of quality degradation, if any, is not fully quantified.
- Although some ablation studies are provided, a more detailed breakdown of the contributions of each component (Feature Slicer, Operator Grouping, and Step Rehash) would strengthen the understanding of their individual and combined impacts.
- The evaluation relies heavily on FVD and CLIP-Scores, which, while useful, may not capture all dimensions of video quality and user satisfaction. Including additional metrics or user studies could provide a more holistic assessment of the generated video quality.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Are there any specific scenarios where the overhead introduced by slicing and grouping operations could outweigh the benefits?
- Have you considered conducting user studies to provide a more comprehensive assessment of the generated video quality?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for recognizing the strengths of our papers and providing valuable feedback. We are happy to address the raised questions as below.
---
#### **W1. Concern about maintaining video quality.**
Please refer to Global rebuttal **A1** and **A4**. As discussed in Global rebuttal **A1**, video quality can be **maintained** by slightly increasing the number of full computation steps in Step Rehash while still **retaining a good amount of inference acceleration**. In our paper, we use **fewer full computation steps** to **push the limits** of Step Rehash. Our quantitative results in Table 1 of Global rebuttal **A1** and visual demonstrations in Global rebuttal **A4** show that our method leads to high-quality video generation at reduced cost.
---
#### **W2. Need more breakdown to our method.**
We thank the reviewer for pointing out this part. We will provide more details and rephrase this section for better clarity. We also provide more breakdown results below. First, we would like to kindly point out that:
As mentioned in lines 165-167, the Feature Slicer cannot reduce peak memory unless combined with Operator Grouping.
Operator Grouping can only be applied after slicing the features; it reduces the need to store intermediate results once the Feature Slicer is applied. We would also like to point out that Operator Grouping, Pipeline, and Step Rehash are general techniques.
From the following table, we can see that the Feature Slicer and Operator Grouping together significantly reduce peak memory. Pipeline and Step Rehash lead to further acceleration: Step Rehash yields the most significant speedups at the cost of a slight degradation in generation quality, while Pipeline does not affect video quality.
---
- Table 1. Breakdown of our method (the results here are aligned with Table 1 of our paper)
| **Model** | **Method** | **speed up** | **Peak Mem. (576x1024)** | **FVD (UCF101)** | **CLIP-Score (UCF101)** |
|:---:|:---:|:---:|:---:|:---:|:---:|
| SVD | - | - | 39.49G | 307.7 | 29.25 |
| | + Feature Slicer | x0.95 | 39.49G | 307.7 | 29.25 |
| | + Feature Slicer + Operator Grouping | x0.98 | 23.42G (-40.7%) | 307.7 | 29.25 |
| | + Feature Slicer + Operator Grouping + Pipeline | x1.03 | 23.42G (-40.7%) | 307.7 | 29.25 |
| | + Feature Slicer + Operator Grouping + Pipeline + Step Rehash | x1.63 | 23.42G (-40.7%) | 340.6 | 28.98 |
| AnimateDiff | - | - | 41.71G | 758.7 | 28.89 |
| | + Feature Slicer | x0.94 | 41.71G | 758.7 | 28.89 |
| | + Feature Slicer + Operator Grouping | x0.96 | 11.07G (-73.5%) | 758.7 | 28.89 |
| | + Feature Slicer + Operator Grouping + Pipeline | x1.03 | 11.07G (-73.5%) | 758.7 | 28.89 |
| | + Feature Slicer + Operator Grouping + Pipeline + Step Rehash | x1.61 | 11.07G (-73.5%) | 784.5 | 28.71 |
---
#### **W3. Need more evaluation metrics.**
We agree that it is important to provide a more comprehensive evaluation. Please refer to Global rebuttal **A2**, Table 2 and Table 3. We report additional evaluation metrics, including PSNR, LPIPS, SSIM, IS, and the VBench benchmark. Our method outperforms the DeepCache baseline under these various metrics.
---
#### **Q1. Specific scenarios where the overhead introduced by slicing and grouping operations could outweigh the benefits.**
When the generated video is small enough (e.g., 64x64), the acceleration and memory reduction are not very noticeable. However, in this case, a consumer GPU can handle such a small size without difficulty even without any efficiency optimization methods.
---
#### **Q2. User studies for more comprehensive assessment.**
We agree that it is important to provide a more comprehensive evaluation. Please refer to Global rebuttal **A2**, Table 4.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you very much for taking the time to review our paper and for acknowledging the novelty and applicability of our work. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try our best to reply to you before the discussion deadline.
Thank you very much,
Authors | Rebuttal 1:
Rebuttal: We thank the reviewers for acknowledging the importance and broad applicability of our work (Reviewers mL5N, M5Qw), the novelty and strong performance of our method (Reviewers mL5N, M5Qw, 5EX5), the comprehensiveness of our experiments (Reviewer mL5N), and the clarity of our writing (Reviewers 2uzE, 5EX5, M5Qw).
---
**A1. Concern about maintaining video quality.**
We would like to clarify that video quality can be **maintained** by slightly increasing the full computation steps of Step Rehash while still **retaining a substantial inference acceleration**. In our paper, the demonstrated results intentionally use **fewer full computation steps** to **push the limits** of Step Rehash. To better illustrate our method's ability to maintain video quality, we provide additional results below and in **A4**.
- Table 1. Quantitative comparison under full computation steps = 15
|Model|Method|Full computation step|Speed Up|Peak Mem.(576x1024)|ucf101 FVD↓|ucf101 CLIP-score↑|MSR-VTT FVD↓|MSR-VTT CLIP-score↑|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|SVD|-|25|-|39.49G|307.7|29.25|373.6|26.06|
||+DeepCache|15|x1.39|39.49G|385.4|28.89|412.4|25.73|
||+Ours|15|x1.46|23.42G(-40.7%)|312.1|29.20|382.8|25.99|
|AnimateDiff|-|25|-|41.71G|758.74|28.89|607.1|29.40|
||+DeepCache|15|x1.39|41.71G|810.93|28.72|608.2|29.16|
||+Ours|15|x1.45|11.07G(-73.5%)|765.01|28.87|599.1|29.39|
Compared to the baseline, our Step Rehash outperforms DeepCache in both quality and speed. Specifically, when using the same full computation steps, Step Rehash skips more computations than DeepCache with less performance drop.
Lastly, we highlight that our work is an **early step** towards a **fast** and **memory-efficient** inference framework in a **training-free** manner. It reduces peak memory and computational overhead, making it feasible to generate high-quality videos on a single consumer GPU (e.g., reducing peak memory of AnimateDiff from 42GB to 11GB, with faster inference on a 2080Ti).
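To make the step-skipping idea behind Step Rehash concrete, here is an illustrative sketch; the even-spacing scheduling policy below is hypothetical, not the exact policy in our paper:

```python
def rehash_schedule(total_steps, full_steps):
    """Spread the fully computed denoising steps evenly over the
    trajectory; all remaining steps reuse cached features instead of
    re-running the network (illustrative policy, not the paper's)."""
    stride = total_steps / full_steps
    full = {int(i * stride) for i in range(full_steps)}
    return ["full" if t in full else "reuse" for t in range(total_steps)]
```

For example, `rehash_schedule(25, 15)` marks 15 of the 25 diffusion steps for full computation and lets the other 10 reuse cached features, which is the trade-off reflected in the speedup column of Table 1.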
---
**A2. More evaluation metrics.**
We have evaluated PSNR, LPIPS, SSIM, IS, and the VBench benchmark with our method under full computation steps = 13. According to the results in Tables 2-4, our framework generalizes stably and consistently maintains video quality across these evaluation metrics. Here are the results:
- Table 2. More metrics
|Dataset|Method|PSNR↑|LPIPS↓|SSIM↑|IS↑|
|:-:|:-:|-|-|-|-|
|ucf101|SVD+Ours|22.44±3.60|0.27±0.08|0.72±0.12|27.11±0.54|
||SVD+DeepCache|17.07±3.14|0.41±0.09|0.52±0.16|25.69±0.33|
||AnimateDiff+Ours|16.94±2.28|0.54±0.08|0.51±0.09|27.89±0.68|
||AnimateDiff+DeepCache|16.31±3.14|0.54±0.12|0.50±0.12|27.62±0.47|
|msr_vtt|SVD+Ours|25.13±5.39|0.23±0.08|0.80±0.10|21.65±0.23|
||SVD+DeepCache|22.66±4.23|0.29±0.08|0.73±0.13|21.26±0.42|
||AnimateDiff+Ours|15.90±2.24|0.53±0.08|0.48±0.12|28.83±0.51|
||AnimateDiff+DeepCache|11.71±1.72|0.72±0.06|0.31±0.11|25.60±0.72|
- Table 3. Vbench Benchmark
|Dataset|Model|Method|Subject Consistency↑|Aesthetic Quality↑|Dynamic Degree↑|
|-|-|-|:-:|:-:|:-:|
|ucf101|SVD|-|0.92|0.39|0.45|
|||+Ours|0.89|0.36|0.33|
|||+DeepCache|0.88|0.34|0.62|
||AnimateDiff|-|0.94|0.50|0.77|
|||+Ours|0.93|0.49|0.80|
|||+DeepCache|0.92|0.49|0.80|
|msr_vtt|SVD|-|0.93|0.42|0.69|
|||+Ours|0.91|0.40|0.65|
|||+DeepCache|0.89|0.39|0.64|
||AnimateDiff|-|0.95|0.53|0.68|
|||+Ours|0.94|0.51|0.62|
|||+DeepCache|0.93|0.51|0.66|
We report human evaluation results, collected via the MTurk platform, in Table 4:
- Table 4. Human Preference Evaluation
|Model|Original (Win Rate)|Ours (Win Rate)|
|-|-|-|
|SVD|50.0%|50.0%|
|AnimateDiff|53.3%|46.7%|
|OpenSora|53.3%|46.7%|
---
**A3. Generalization on other variants and backbones.**
To demonstrate the generalization of our method, we evaluate DiT, ModelScope, and VideoCrafter with our method. Our method is **general** and can be applied to other video diffusion variants; it **does not need any design change** for **adapting** to these **new** video diffusion models. Specifically, the generalization of the Feature Slicer follows from the fact that all video diffusion models have temporal and spatial blocks for handling temporal and spatial information. Moreover, our Operator Grouping, Pipeline, and Step Rehash are **general techniques** that **do not depend on the model architecture**. Our results in Tables 5-8 demonstrate the stable generalization of our method across various backbones/architectures: it consistently improves efficiency while maintaining video quality.
- Table 5. Quality on ModelScope and VideoCrafter2
|Dataset|Model|FVD↓|CLIP-score↑|
|-|:-:|-|:-:|
|ucf101|ModelScope|842.42|28.97|
||ModelScope+Ours|875.67|28.45|
||VideoCrafter2|823.11|29.02|
||VideoCrafter2+Ours|852.09|28.56|
|msr_vtt|ModelScope|846.90|25.65|
||ModelScope+Ours|868.01|25.14|
||VideoCrafter2|778.32|26.04|
||VideoCrafter2+Ours|810.66|25.54|
- Table 6. Quality on OpenSora
|Method|Subject Consistency↑|Aesthetic Quality↑|Dynamic Degree↑|
|-|:-:|:-:|:-:|
|OpenSora|94.45%|56.18%|47.22%|
|OpenSora+Ours|93.01%|54.82%|51.29%|
- Table 7. Efficiency results on ModelScope and VideoCrafter2.
|Model|Peak Mem.(576x1024)|Latency|
|-|:-:|-|
|ModelScope|12.51G|27.17s|
|ModelScope+Ours|8.30G|18.96s|
|VideoCrafter|Error*|Error*|
|VideoCrafter+Ours|14.16G|141.34s|
[*] The error occurred because VideoCrafter uses xFormers' `memory_efficient_attention`, which does not support the large memory scale when the video size is 576x1024. After replacing the attention with PyTorch's `scaled_dot_product_attention`, the peak memory is 66.66G and the corresponding latency is 229.00s.
- Table 8. Efficiency results on OpenSora
|Model|Peak Mem (720P, 1:1)|Latency|
|-|:-:|-|
|OpenSora|59.91G|1230s|
|OpenSora+Ours|41.65G|952s|
---
**A4. Visualization results of videos.**
Due to restrictions on anonymous links, we provide a PDF containing every frame of the generated videos (30 videos in total, forming 15 video pairs) produced by our method on the base models, to better demonstrate that our method leads to high-quality video generation at reduced cost.
Pdf: /pdf/1a0d778ef6cc0493adfa3da37f7d07b673a84ef5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Your contrastive learning problem is secretly a distribution alignment problem | Accept (poster) | Summary: The paper presents a novel perspective on contrastive learning (CL) by framing it as a distribution alignment problem using entropic optimal transport (OT). It trains an encoder network $f_\theta$ by iteratively updating encoder parameter $\theta$ and corresponding transport plan $P$ among encoded augmentations of samples. The authors establish connections between noise contrastive estimation losses widely used in CL and distribution alignment with OT. This novel connection allows for the development of various loss functions and multi-step variants for existing CL methods. The theoretical insights and experimental evidence provided demonstrate the benefits of this approach in improving the generalization and robustness of contrastive alignment in both clean and noisy settings.
Strengths: The paper offers a fresh view on contrastive learning by linking it to optimal transport, providing a solid theoretical foundation for understanding and improving CL methods. The authors provide rigorous theoretical analysis and proofs for the convergence of their proposed methods, offering strong support for their claims. The proposed Generalized Contrastive Alignment (GCA) framework is versatile, allowing for the incorporation of domain-specific knowledge and customization of representation spaces.
Originality
The paper presents an innovative connection between contrastive learning and optimal transport, a novel perspective that has not been extensively explored before. The introduction of the Generalized Contrastive Alignment (GCA) framework offers a fresh approach to enhancing contrastive learning methods.
Quality
Theoretical insights are rigorously developed, providing a solid foundation for the proposed connections and methodologies. The experiments demonstrate the benefits of the GCA approach from a few different perspectives.
Clarity
The paper is logically structured, and key concepts and methods are clearly explained.
Significance
By bridging the gap between contrastive learning and optimal transport, the paper opens up new prospects for research and application in self-supervised learning. The GCA framework has the potential to improve the expressiveness and robustness of representations in various domain generalization settings, which is relevant for real-world applications.
Weaknesses: While the theoretical convergence of the algorithm is established, the specific criteria for convergence are not clearly defined. The paper's use of proximal operators for T steps and auto-differentiation for parameter θ could impose significant computational burdens, especially with large T. This potential issue is not sufficiently addressed, raising concerns about the algorithm's efficiency in large-scale applications. The experiments lack a detailed runtime analysis, leaving the computational efficiency of the algorithm unexamined. The computational resources necessary for implementing the algorithm are not clearly outlined, raising concerns about its scalability and practicality for large-scale applications.
Technical Quality: 4
Clarity: 3
Questions for Authors: In your experiments, you utilize a fixed number of epochs. How do you ensure that the convergence of the algorithm is achieved within this fixed number of iterations? Would it be feasible to implement a convergence criterion to replace the fixed number of iterations?
Optimal transport is known to require substantial computing resources, especially with large sample sizes. Could you provide a runtime analysis for your algorithm? Does the algorithm face significant computational challenges when dealing with large datasets?
You utilized proximal operators for T steps to obtain the transport plan P, with the differentiation of the parameter θ computed via auto-differentiation from the loss, propagating back through T steps. Could the auto-differentiation process impose a significant computational burden when a large number of steps T is required in the proximal operator? How do you mitigate this potential issue?
Would your method be able to handle more than two types of augmentation? What would be your target transport plan if more than two augmentations are used in contrastive learning?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We appreciate your time and suggestions. Our specific responses to your questions are provided below.
1. “The specific criteria for convergence are not clearly defined. [...] concerns about the algorithm's efficiency in large-scale applications. The experiments lack a detailed runtime analysis [...]”
**Reply:** Thanks for your questions. We summarize our computational analysis in Line 188 of the main text and give a runtime analysis in Sec. 3.1. As shown in our general response (#2), MS-INCE with 10 iterations costs only 5% more FLOPs than INCE, while GCA-UOT is even cheaper than INCE, with a 30% reduction in FLOPs.
2. “ How do you ensure that the convergence of the algorithm is achieved within this fixed number of iterations? Would it be feasible to implement a convergence criterion to replace the fixed number of iterations?”
**Reply:** Thanks for your questions. In practice, we use a simple convergence criterion to automatically terminate the multistep algorithm. For CIFAR-10, we found that we could also set the maximum number of iterations to 5 without any loss in performance.
Based on the reviewer's comments, we ran an experiment to examine the impact of the number of iterations on the accuracy and compactness of the classes (Fig. U2). We found that the accuracy was not very sensitive to the exact choice of the number of fixed iterations, with comparable performance anywhere from 5 to 11 iterations.
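A minimal sketch of such a convergence criterion, using plain Sinkhorn iterations on a Gibbs kernel as a stand-in for our inner alignment loop (illustrative; the tolerance, kernel, and uniform marginals here are assumptions, not our exact implementation):

```python
import numpy as np

def sinkhorn_with_tolerance(K, tol=1e-6, max_iters=50):
    """Sinkhorn iterations that stop early once both marginals of the
    transport plan are within `tol` of uniform, instead of always
    running a fixed number of steps."""
    n, m = K.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for it in range(max_iters):
        u = a / (K @ v)          # row scaling
        v = b / (K.T @ u)        # column scaling
        P = u[:, None] * K * v[None, :]
        err = np.abs(P.sum(axis=1) - a).max() + np.abs(P.sum(axis=0) - b).max()
        if err < tol:            # convergence criterion replaces fixed T
            break
    return P, it + 1
```

This way the number of inner steps adapts to the batch: well-conditioned batches terminate in a handful of iterations, and `max_iters` caps the worst case.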
3. “Optimal transport is known to require substantial computing resources, especially with large sample sizes. Could you provide a runtime analysis for your algorithm? Does the algorithm face significant computational challenges when dealing with large datasets?”
**Reply:** We provide an analysis of computational complexity and empirical results in the Appendix, Sec. C.1. Computational efficiency is also discussed in our general response, where we show the small overhead for MS-INCE (5% more FLOPs) and a 30% reduction in FLOPs and running time for GCA-UOT.
In terms of dealing with larger datasets, please see our results on ImageNet-100 and SVHN in the general response (Table R1). There we show that the model scales to larger datasets and still performs on par with baseline methods.
4. “You utilized proximal operators for T steps to obtain the transport plan P, with the differentiation of the parameter θ computed via auto-differentiation from the loss, propagating back through T steps. Could the auto-differentiation process impose a significant computational burden when a large number of steps T is required in the proximal operator? How do you mitigate this potential issue?”
**Reply:** Thank you for your question. We don’t backpropagate the loss through the individual iterations of our alignment objective. Instead, we compute a final transport plan and then backpropagate the loss, similar to the way BatchNormalization operates. While the number of T steps could become substantial when dealing with large datasets and small mini-batches, we have implemented several strategies to address this:
- We perform the optimal transport (OT) computation in the latent space rather than the input space, which reduces the dimensionality of the data involved in the OT problem, thereby significantly decreasing both computational cost and memory requirements.
- We set a convergence threshold to prevent an excessively large number of T steps. This ensures that the algorithm stops as soon as an adequate solution is found, further optimizing computational efficiency.
5. “Would your method be able to handle more than two types of augmentation? What would be your target transport plan if more than two augmentations are used in contrastive learning?”
**Reply:** By having more than two types of augmentations, do you mean having multiple views of the same example in the same batch? If so, then yes, we could handle that case with our framework! We show how this many-to-one matching can be implemented as a block diagonal constraint on the transport plan in our domain generalization experiments.
We hope to include this as an example for future work. Thanks for your suggestion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply and the detailed runtime analysis provided. This has alleviated my concerns about the practical applicability of the proposed algorithm.
---
Rebuttal 2:
Comment: Dear Reviewer VXuY,
We greatly appreciate your constructive comments and are glad that the additional analysis addressed the issues you raised. We kindly hope that you might consider increasing your score based upon the discussion. | Summary: The paper recasts several self-supervised learning (SSL) paradigms, such as SimCLR, in an optimal transport framework. In many SSL variants each batch contains two views of the same data sample. This means that the embeddings of a batch can be viewed as the union of two sets, each one containing only one of the two views of each data sample. The key insight of the paper is that the typical SSL losses can be viewed as the discrepancy between approximations of the optimal transport plans between these two sets from the target transport plan that simply matches the two views of each sample. From this OT perspective the authors propose variants of self-supervised learning losses, improving the approximation to the optimal transport plan, generalizing to weighted batches, and relaxing the target transport plan to incorporate prior domain knowledge.
Strengths: Originality
- The connection between SSL and OT discussed in this paper seems to extend prior works on the relation of SSL and OT.
Quality
- Connecting two active areas of research, such as SSL and OT, is very useful.
- Phrasing SSL objectives in an OT framework provides an interesting new perspective on what SSL does.
Clarity:
- The authors provide a dedicated appendix elaborating the concept of proximal operators, which is helpful as this is more of a niche topic for the general ML community.
Soundness:
- Their reformulation of SSL losses is correct and most statements are precisely formulated.
Weaknesses: I appreciate the new perspective the paper presents a lot. However, the presentation needs to be significantly improved.
- *W1 Presentation*
- To me the motivation for introducing all the notation and machinery in sections 2.2-3.3 was not sufficiently clear when reading the paper for the first time. Many abstract concepts are introduced in great generality (lots of choices for divergences, constraint sets etc), but the first real benefit of using this framework appears only on page 6 (Thm 1). I would suggest to drastically restructure the paper and use recasting INCE in the OT framework as the red thread. Flesh out the proof of Thm 1 by explaining how the term inside the logarithm of INCE can be seen as half a Sinkhorn step and use this to motivate the introduction of proximal operators. Then explain how one gets the full INCE loss when using the KL divergence between the approximately optimal transport and the target transport plan. This way every ingredient of the OT framework is directly motivated. Once the INCE case is discussed you can generalize the OT perspective to also incorporate other SSL losses and generalize them.
- My understanding is that the terms "half-Sinkhorn step", "Bregman projection", and "proximal operator" are pretty much synonyms, at least in the context of this work. If so, I would recommend to stick to one of the three terms (while possibly maintaining an appendix section that explains their relations). This would make the exposition much more accessible. My preference would be "half-Sinkhorn step" as Sinkhorn operations are most widely known in the ML community. I would even suggest simplifying the terminology at the expense of generality (maybe the BYOL connection really needs proximal operators rather than half-Sinkhorn steps) and defer the fully general setting (explaining the BYOL connection) to the appendix if needed.
- Overall, I would recommend to introduce as little jargon as necessary to present the results. The most general setting can be discussed later in the paper or in the appendix.
- I recommend phrasing the main idea more clearly early on in the paper (similar to my summary). In particular, lines 30-32 made me think that the transport plans are restricted to only match positive pairs. But instead they are only penalized if they do not. I only understand the setup correctly on page 4.
- The notation used in the main paper needs to be properly introduced in the main paper. For instance, what do the subscripts $\mathcal{F}$ and $\mathcal{G}$ stand for in line 199? Define what the target coupling plan $P_{tgt}$ is supposed to do in line 167. What is $R^{B\times B}$ in line 265? If $R$ stands for the real numbers, rather use $\mathbb{R}$.
- *W2* There is no related work section. It would be useful to at least explain in which ways the present work extends prior works on the connection between SSL and OT, such as [39]
**Minor:**
- *W3* Line 107: The equation given for the cost is nearly the cosine similarity (but not quite due to the absolute value) but the sentence states that the cost often encodes the L2 distance. This is confusing as the provided formula does not encode the L2-distance.
- *W4* Several cross-reference links are faulty. For instance, the links to Appendix A.4 and algorithm A1 in lines 188, 189 all point to algorithm 1.
- *W5* Combine Thm 7 and 6 into one as Thm 7 is a strict generalization of 6.
- *W6* The INCE result on CIFAR-10 is unusually low. Other sources (Damrich et al. 2023 or https://github.com/p3i0t/SimCLR-CIFAR10) report linear classifier accuracies above 92% for SimCLR on Cifar-10, much higher than for any SSL variant in this paper.
- *W7* Casting BYOL into the GCA framework seems pretty forced: The proximal operator is the identity, the result $S_\theta$ is really not a transport plan, and applying the KL loss seems odd as there is no normalization constraint on $S_\theta$. I think the ultimate reason for this lies less on the GCA side and has more to do with the lack of a repulsive force in BYOL (and the existence of a degenerate optimum with a constant encoder). I would perhaps recommend deferring this finding to the appendix, which might also allow to reduce the level of generality required in the main paper, see above.
Technical Quality: 4
Clarity: 1
Questions for Authors: - *Q1* There seem to be multiple ways for enforcing / penalizing constraints: In eq (8) the valid choices for $P$ are restricted both by $\mathcal{B}$ and $h(P)$, especially if the latter is an indicator function. Why use both ways of representing constraints and not include both either in $\mathcal{B}$ or $h$? Similarly in eq (9) there are additional divergences to achieve non-uniform marginals of the transport plan. Could this not also be subsumed by $h$ or $\mathcal{B}$?
- *Q2* The convergence proofs mentioned in line 59, line 188 and A4 only refer to the inner optimization loop of finding the optimal transport plan, right? In particular, they do not show that the parameters of the neural network will converge independent of the data. If so, rephrase the statements in lines 59 and 188 accordingly. Currently, they can be misunderstood as an overstatement.
- *Q3* In the full GCA setup, where the optimal transportation plan is computed (not just one step), the backwards pass needs to unroll all the Sinkhorn steps, right? Does this not lead to high complexity and potentially exploding / vanishing gradients? In particular, I was surprised by line 186 stating that the forward-pass does not affect the computational complexity of the backward pass.
- *Q4* How are the marginal distributions $\mu$ and $\nu$ chosen for unbalanced GCA in section 6.2 and Table 2?
- *Q5* What is the value of $\mu$ in Thm 1, 2, and 3? Is is simply the vector of all ones?
- *Q6* While the statements in Theorem 5-8 are interesting, I wonder why having higher / lower losses should help with better representations. The authors do show this empirically, but I wonder why one would expect this just from having higher / lower losses. Similarly, I wonder why higher uniformity loss implies improved uniformity (Thm 5) and why lower general loss implies improved alignment (Thm 6). For instance, could one not argue that since the loss is already lower, there is less learning signal and thus worse alignment? Also it is the full GCA loss that is lower than the full INCE loss in Thm 6, not just the attractive (alignment) part.
- *Q7* What is the unit for the y-axis in Fig A3? Is it seconds?
Confidence: 4
Soundness: 4
Presentation: 1
Contribution: 3
Limitations: Limitations are not explicitly discussed, but I also do not see obvious limitations that need to be stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your thorough evaluation and insightful feedback on our manuscript. Based upon your suggestions, we plan to make major revisions to this paper to improve the quality of the presentation. Below we provide replies to your other questions and concerns.
1. Suggestions on restructuring the paper
**Reply:** We sincerely thank you for the suggestions to rearrange our paper. We thought long and hard about this and originally started with INCE and Sinkhorn in a previous version. However, we decided to go with this organization as it sets up the problem in full generality before diving into specific examples of the idea in action.
2. Terminology surrounding alignment iterations
**Reply:** Since the Sinkhorn algorithm is a specific case in our framework, applied only when using INCE, we have referred to it as a "half-Sinkhorn" step. As we extend to more general divergences and losses, proximal operators provide a more general framework for this optimal transport approach. We will simplify our terminology and incorporate your ideas to clarify our motivation. We appreciate your detailed suggestions!
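For concreteness, a small sketch of what the "half-Sinkhorn" step means in this context (illustrative; `eps` plays the role of the temperature): row-normalizing the Gibbs kernel of the similarity matrix recovers exactly the softmax term inside the InfoNCE logarithm.

```python
import numpy as np

def half_sinkhorn_step(sim, eps=0.5):
    """One 'half-Sinkhorn' step on the Gibbs kernel K = exp(sim / eps):
    row normalization turns each anchor's row into a softmax over
    candidates -- the term appearing inside the InfoNCE logarithm."""
    K = np.exp(sim / eps)
    return K / K.sum(axis=1, keepdims=True)
```

Running a second, column-wise normalization (the other half of a Sinkhorn iteration), and then alternating, is what the multi-step variants add on top of this single half step.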
3. “What do the subscripts $\mathcal{F}$ and $\mathcal{G}$ in line 199 [..] what the target coupling plan Ptgt in line 167 [...]. What is $R^{BxB}$ in line 265 [...]?”
**Reply:** Thank you. In line 199, $\mathcal{F}$ and $\mathcal{G}$ are constraint sets for divergences $h_F$ and $h_G$. In line 167, $P_{\text{tgt}}$ is the target distribution, set as the identity matrix (line 171) or other relaxed forms (line 208). In line 265, $\mathbb{R}^{B \times B}$ is correct. We will clarify these terms in our revision.
4. Discussion of related work
**Reply:** Thanks. We added some related work in the background section but agree that a more thorough discussion is needed. We plan to expand this and clarify the differences from [39] in our revision.
5. “Line 107: The equation given for the cost [...]”
**Reply:** Thanks. It was a typo in line 107, and we've corrected it to cosine similarity.
6. “Combine Thm 7 and 6 [...]”
**Reply:** Thanks, we agree with your suggestion. We plan to combine the two theorems as you suggested.
7. “The INCE result on CIFAR-10 is unusually low.[...]”
**Reply:** Thanks. Our results are based on 400 epochs with an SGD optimizer for the linear layer. After training for 1000 epochs with a LARS optimizer, we achieved 90.42% accuracy. Our approach only considers pairwise matching across B samples, not the full SimCLR implementation with 2B samples, causing a slight decrease in performance compared to reported SimCLR results.
8. “Casting BYOL into the GCA framework seems pretty forced. [...] ”
**Reply:** Thanks. While our formulation may not be the most natural for expressing BYOL, we believe BYOL demonstrates the flexibility of our proximal theory framework by simply changing the kernel \(\bf K_\theta\). We will consider your suggestion for the final submission.
9. “Why use both ways of representing constraints and not include both either in B or h?”
**Reply:** Thanks. The set $\mathcal{B}$ is the set of feasible solutions, while $h(P)$ encodes the penalty. Thus, finding the solution for $\mathbf{P}$ is influenced by both. In Eq. (9), we show that constraints in $\mathcal{B}$ can be converted into a soft penalty via functions $h_i$ (where $i = 1, 2, \ldots, n$) by finding their dual formulation.
10. “The convergence proofs only refer to the inner optimization loop [...] rephrase the statements in lines 59 and 188 accordingly. ”
**Reply:** Thanks. Yes, it only refers to the inner loop. We will rephrase the statements in lines 59 and 188 to make this clear. Our theory for this is detailed in Theorems 5-8.
11. “The backwards pass needs to unroll all the Sinkhorn steps, right?” [...] “Does this not lead to potentially exploding/vanishing gradients”
**Reply:** No, the backward pass doesn't need to unroll all the Sinkhorn steps (or iterations for general losses). See our general response (#2) on complexity. As shown in Fig. A3, the computational resources for the backward pass aren't significantly affected by the number of iterations. Regarding the impact on gradients, poorly chosen parameters such as epsilon could affect them; this can be mitigated by selecting appropriate regularization parameters.
12. “How are the marginal distributions $\mu$ and $\nu$ are chosen for unbalanced GCA”
**Reply:** In Table 2, the target transport plan for GCA-UOT is set to the identity matrix. In Sec. 6.2, we explore different constraints by varying the alpha and beta values in the target transport plan matrix, so the values of $\mu$ and $\nu$ change with it.
13. “What is the value of \mu” in Thm 1, 2, and 3? Is it simply the vector of all ones?
**Reply:** Yes, they are vectors of ones. We will clarify this in the final version.
14. “[...] why having higher / lower losses should help with better representations?. [...]”
**Reply:** That's an excellent question. Proving the benefits for the lower bound is challenging due to training dynamics. In Appendix A10 and Fig. A2, we explain that INCE aligns $\bf{P}$ in only one direction, enforcing $\bf{P}\bf{1}_m=\bf{a}$, while GCA additionally enforces $\bf{P}^T\bf{1}_n=\bf{b}$, aligning samples in both directions and leading to more uniform results. In Theorem 6, we show that our approach improves alignment loss because optimizing the INCE loss is akin to optimizing the alignment loss. We will clarify this in our final submission.
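A small numerical illustration of this one-direction vs. two-direction distinction (schematic, not our training code): a single row normalization satisfies only the row marginal constraint, whereas alternating row/column normalization (full Sinkhorn scaling) satisfies both.

```python
import numpy as np

rng = np.random.default_rng(2)
K = np.exp(rng.standard_normal((6, 6)))  # Gibbs kernel of a similarity matrix

# INCE-style: one row normalization, so P @ 1 = a holds,
# but the column marginal P.T @ 1 = b generally does not.
P_half = K / K.sum(axis=1, keepdims=True)

# GCA-style: alternating row/column normalization enforces
# both marginals (doubly stochastic for a square plan).
P_full = K.copy()
for _ in range(200):
    P_full /= P_full.sum(axis=1, keepdims=True)
    P_full /= P_full.sum(axis=0, keepdims=True)
```

After the alternating updates, both the row and column sums of `P_full` are (approximately) uniform, which is the "two-direction" alignment referred to above.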
15. “What is the unit for the y-axis in Fig A3? Is it seconds?”
**Reply:** Yes, we have updated the units in Fig A3 to seconds.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply and the intention of addressing several of my points in the revision. Please find some follow-up questions / comments below.
**1. Structure:** Restructuring the paper is clearly a lot of work. Nevertheless, I still think that you original plan is more friendly for a wider audience of readers. The presentation is the main weakness in my mind.
**9. Use of $\mathcal{B}$ and $h$:** But you write in line 163 that $h$ is typically an indicator function. In line 896 you directly translate between $\mathcal{B}$ and $h$ with infinite penalty outside $\mathcal{B}$. Does *indicator function* imply infinite penalty in your setting? If so, I really do not see the difference between constraining via $\mathcal{B}$ and $h$. If you usually use other functions than indicators for $h$, perhaps rephrase line 163.
**11. Complexity:** Yes, I see that empirically your method is similarly fast as other SSL methods. However, I do not understand why you do not need to unroll Sinkhorn operations to compute the gradients. How are gradients computed when the forward pass includes an inner optimization line in step 2 of Algorithm 1.
**12. Value of $\mu$ and $\nu$:** Is it correct that $\mu$ and $\nu$ are the marginals of $P_{tgt}$, which in turn depends on $\alpha$ and $\beta$?
**14. Higher alignment loss:** To be honest, I still do not understand the argument. On a high-level it sounds as if the INCE loss is less informative than the GCA loss, because the former has one and the latter two requirements on the marginals. But I cannot connect this with either Figure A2 or Appendix A10. I also do not understand what this has to do with the relative size of the loss values.
Here are some specific questions regarding Figure A2: Are the blue / red points in the first panel the current batch or some random selection of positive pairs? Should the arrows in the second and third panel point to the blue / red points (they do not: for instance the orange arrows in the middle and the right panels are not the same). What are the arrows supposed to mean? Are there any projections to tangent spaces involved? Are the fat arrows in the middle plot contributions to a gradient? If so, of which point is this supposed to be the gradient? I do not get at all why there is a line for INCE but a plane for GCA INCE. How is this connected to the constraints on the marginals?
You write in line 1315 that you perform supervised training. But INCE and GCA-INCE are self-supervised. You write in line 1316 that you perform PCA. Was this of the unnormalized points in high-dimensional space or of the normalized points? In either case the result are non-normalized. Why should PCA + renormalization to the sphere encode information about the structure in high-dimensional space? Why not retrain a resnet-18 with a 3D output space, so that you can directly visualize it.
---
Reply to Comment 1.1.1:
Comment: Thanks for your questions and rigorous review.
9. Use of $\mathcal{B}$ and $h$: But you write in line 163 that $h$ is typically an indicator function. In line 896 you directly translate between $\mathcal{B}$ and $h$ with an infinite penalty outside $\mathcal{B}$. Does indicator function imply infinite penalty in your setting? If so, I really do not see the difference between constraining via $\mathcal{B}$ and $h$. If you usually use other functions than indicators for $h$, perhaps rephrase line 163.
Reply: Thank you for your question. The penalty function $h$ can impose a hard constraint (via an indicator function), or it can be a different function that measures the deviation from the constraint. In our implementation of GCA-UOT, we use a KL divergence for $h$, relaxing the hard constraints of balanced OT into a soft penalty. We will rephrase line 163 to make this clear.
11. Complexity: Yes, I see that empirically your method is similarly fast as other SSL methods. However, I do not understand why you do not need to unroll Sinkhorn operations to compute the gradients.
Reply: Our optimization process is decoupled and we can solve it through two different optimization procedures, one to compute the representations and the other to find the optimal transport plan. You can think of the Sinkhorn algorithm as a process that provides us with a static transport plan between two sets of points in the latent space, and what we are actually differentiating over is the cost matrix.
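The decoupling described above can be made concrete with a minimal numpy sketch (our own illustration, not the paper's code): the Sinkhorn loop produces a transport plan $P$ that is treated as a constant, so the gradient of the transport loss $\langle P, C\rangle$ with respect to the cost matrix $C$ is simply $P$ itself, with no unrolling of the inner iterations.

```python
import numpy as np

def sinkhorn_plan(C, eps=0.5, n_iters=50):
    """Run Sinkhorn iterations on cost matrix C with uniform marginals.
    The returned plan is treated as a constant: no gradient is
    propagated back through these iterations."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)           # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)          # scale columns toward marginal b
        u = a / (K @ v)            # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# Toy cost matrix between two batches of embeddings.
rng = np.random.default_rng(0)
C = rng.random((4, 4))
P = sinkhorn_plan(C)

# With P held fixed, the transport loss is <P, C>, so its gradient
# w.r.t. C is just P itself -- no unrolling of the inner loop.
loss = np.sum(P * C)
grad_C = P
```

In an autodiff framework this corresponds to detaching the plan from the graph after the inner loop, so only the cost matrix (and hence the encoder) receives gradients.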
12. Value of $\mu$ and $\nu$: Is it correct that $\mu$ and $\nu$ are the marginals of $P_{tgt}$, which in turn depends on $\alpha$ and $\beta$?
Reply: Yes, in the domain adaptation setting, the marginals depend on alpha and beta.
14a. Higher alignment loss: To be honest, I still do not understand the argument. On a high-level it sounds as if the INCE loss is less informative than the GCA loss, because the former has one and the latter two requirements on the marginals. But I cannot connect this with either Figure A2 or Appendix A10. I also do not understand what this has to do with the relative size of the loss values.
Reply: The CL loss in Eq. (1) can be decomposed into two terms, an alignment loss and a uniformity loss [1], which correspond to the entropy $-\epsilon H(\mathbf{P})$ and the transport cost $\langle \mathbf{P},\mathbf{C}\rangle$. Under the perfectly aligned condition, where the alignment loss is zero, we show that the uniformity loss in GCA is lower than in the original INCE objective. Combining this with recent work [2], which shows that a tighter bound on uniformity can benefit downstream tasks such as classification, we can reason that a lower loss leads to provable benefits in learning.
[1] Wang T, Isola P. Understanding contrastive representation learning through alignment and uniformity on the hypersphere, International conference on machine learning. PMLR, 2020: 9929-9939.
[2] Dufumier B, Barbano C A, Louiset R, et al. Integrating prior knowledge in contrastive learning with kernel, International Conference on Machine Learning. PMLR, 2023: 8851-8878.
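The alignment/uniformity decomposition referenced in this reply can be sketched with a small numpy example following Wang & Isola (2020); the function name and the Gaussian-potential parameter `t` are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def align_uniform(z1, z2, t=2.0):
    """Alignment / uniformity decomposition of contrastive learning
    (Wang & Isola, 2020). z1, z2: L2-normalized positive pairs (n x d)."""
    # alignment: average squared distance between positive pairs
    align = np.mean(np.sum((z1 - z2) ** 2, axis=1))
    # uniformity: log of the mean Gaussian potential over all pairs of z1
    sq_dists = np.sum((z1[:, None] - z1[None, :]) ** 2, axis=-1)
    uniform = np.log(np.mean(np.exp(-t * sq_dists)))
    return align, uniform

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)

# identical views: the alignment term vanishes, leaving only uniformity
a, u = align_uniform(z, z)
```

Under perfect alignment (`a == 0`), comparing methods reduces to comparing their uniformity terms, which is the situation the reply appeals to.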
14b. Here are some specific questions regarding Figure A2: Are the blue / red points [...]Are the fat arrows in the middle plot contributions to a gradient? If so, at which point is this supposed to be the gradient?
Reply: We are sorry for the confusion in Figure A2. Our goal was to show that by imposing two constraints on the row and column spaces with GCA, we would get more uniformly distributed latents. The subspace shows the two sets of constraints.
14d. You write in line 1315 that you perform supervised training. But INCE and GCA-INCE are self-supervised.
Reply: That is a typo; we meant to say that we train a linear layer to evaluate the representations.
14e. You write in line 1316 that you perform PCA. Was this of the unnormalized points in high-dimensional space or of the normalized points? In either case the results are non-normalized.
Reply: The PCA in line 1316 is for the normalized points that are mapped to the unit sphere in 3D. It may look like the points are un-normalized because you can see points on the other side of the sphere. We are sorry this figure is confusing. We will remove it from the final version of the paper as we believe it adds more confusion. | Summary: This paper introduces a framework called Generalized Contrastive Alignment (GCA) that connects contrastive learning to distribution alignment using optimal transport. The key contributions include:
1. Establishing a novel class of losses and algorithms for representation learning through GCA, showing how different contrastive losses can be interpreted as variants of a generalized distribution alignment objective.
2. Proving convergence of GCA-based methods and demonstrating theoretically that the alignment objective can improve the quality of learned representations.
3. Empirically validating GCA's effectiveness in image classification and domain generalization tasks, showing it can achieve superior performance over baseline methods.
4. Demonstrating how GCA allows building unbalanced losses using tools from optimal transport, which can handle noisy views and customize representations.
5. Providing a unified framework that connects existing contrastive learning methods like InfoNCE, RINCE, and BYOL to optimal transport formulations.
6. Showing how modifying the target alignment plan in GCA can flexibly control the amount of domain knowledge incorporated into representations for domain generalization tasks.
The paper provides both theoretical analysis and experimental results to support the benefits of the GCA framework in improving representation learning and offering more flexibility compared to standard contrastive learning approaches. The authors position this work as providing new insights into the connections between self-supervised learning models and offering tools to more easily incorporate domain knowledge into learning.
Strengths: This paper demonstrates several strengths across the dimensions of originality, quality, clarity, and significance:
**Originality:**
1. The paper presents a novel framework (GCA) that creatively bridges contrastive learning and optimal transport. This connection is not entirely new, but the comprehensive treatment and generalizations provided are original.
2. The formulation of contrastive learning as a distribution alignment problem offers a fresh perspective on a widely-studied topic.
3. The extension to unbalanced optimal transport and the ability to customize target transport plans are innovative additions to the contrastive learning toolkit.
**Quality:**
1. The theoretical analysis is rigorous, with clear proofs for the connections between GCA and existing methods (InfoNCE, RINCE, BYOL).
2. The empirical evaluation is comprehensive, covering multiple datasets (CIFAR-10, CIFAR-100, CIFAR-10C, PACS) and scenarios (standard classification, extreme data augmentation, domain generalization).
3. The ablation studies and comparisons against baseline methods are thorough and well-presented.
**Clarity:**
1. The paper is well-structured, with a clear progression from problem formulation to theoretical analysis and empirical validation.
2. The use of diagrams (e.g., Figure 1) helps illustrate complex concepts like customized transport plans.
3. The authors provide clear pseudocode (Algorithm 1) for the GCA method, enhancing reproducibility.
**Significance:**
1. The GCA framework provides a unifying perspective on several popular contrastive learning methods, which could facilitate further theoretical developments in the field.
2. The improved performance on classification tasks, especially under extreme data augmentation, suggests practical benefits for real-world applications.
3. The flexibility offered by customizable transport plans in domain generalization scenarios opens up new possibilities for tailoring representations to specific tasks or domains.
Weaknesses: While the paper has many strengths, there are several areas where it could be improved:
1. Limited scope of empirical evaluation:
- The experiments are primarily conducted on relatively small datasets (CIFAR-10, CIFAR-100, PACS). While these are standard benchmarks, the absence of results on larger, more complex datasets like ImageNet limits the assessment of GCA's scalability and practical impact.
- The authors acknowledge this limitation in their conclusion, but providing some preliminary analysis or discussion on potential challenges in scaling to larger datasets would be beneficial.
2. Computational complexity:
- The paper lacks a detailed analysis of the computational overhead introduced by the GCA framework, particularly for the multi-step variants.
- While Algorithm 1 is provided, there's no discussion on how the additional forward passes impact training time or memory requirements compared to standard contrastive learning methods.
3. Hyperparameter sensitivity:
- The paper introduces several new hyperparameters (e.g., number of iterations, α and β in the customized transport plan), but there's limited discussion on their impact or guidelines for setting them.
- An ablation study or sensitivity analysis for these parameters would provide valuable insights for practitioners looking to implement GCA.
4. Comparison with other alignment-based methods:
- While the paper compares GCA with standard contrastive learning baselines, it does not compare against other alignment-based or optimal transport-based representation learning methods (e.g., [1], [2]).
- Such comparisons would help better contextualize the contributions of GCA within the broader landscape of alignment-based approaches.
5. Theoretical limitations:
- The theoretical analysis, while thorough, focuses primarily on convergence and improved alignment. Discussing potential limitations or failure cases of the GCA approach would be beneficial.
- For instance, are there scenarios where the multi-step approach might lead to worse performance or slower convergence?
6. Ablation on the number of alignment steps:
- While the paper mentions using 5 iterations in the forward pass, there's no analysis on how performance changes with different numbers of iterations.
- An ablation study showing the trade-off between computational cost and performance improvement as the number of iterations increases would be valuable.
7. Limited exploration of unbalanced OT:
- While the paper introduces unbalanced OT as a potential extension, the empirical evaluation of this approach is limited.
- More extensive experiments or analysis demonstrating the specific benefits of unbalanced OT over balanced OT in different scenarios would strengthen this contribution.
8. Clarity on practical implementation:
- While the paper provides theoretical foundations, it could benefit from more practical guidance on implementing GCA in real-world scenarios.
- For instance, how should practitioners choose between different variants (GCA-INCE, GCA-RINCE, GCA-UOT) for a given task?
[1] W. Wang, et al. Zero-Shot Recognition via Optimal Transport.
[2] Y. Balaji, et al. Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Can you provide a detailed analysis of the computational overhead introduced by GCA, particularly for the multi-step variants?
- How do the training time and memory usage compare to standard contrastive learning methods?
- Is there a point of diminishing returns regarding performance improvement vs. computational cost?
- What guidelines can you provide for setting the new hyperparameters introduced in GCA (e.g., number of iterations, α and β in the customized transport plan)?
- Have you observed any patterns in how these hyperparameters affect performance across different tasks or datasets?
- Can you provide an ablation study showing how performance changes with different numbers of iterations in the forward pass?
- Have you considered comparing GCA with other alignment-based or optimal transport-based representation learning methods? If so, what were the results?
- How does GCA differentiate itself from or improve upon these existing alignment-based approaches?
- Are there any scenarios in which the multi-step approach might lead to worse performance or slower convergence than single-step methods?
- Can you elaborate on any potential limitations or failure cases of the GCA approach?
- Can you provide more details or experiments demonstrating the specific benefits of unbalanced OT over balanced OT in different scenarios?
- Are there particular types of tasks or data where unbalanced OT shows the most significant improvements?
- Can you provide more concrete guidelines on how practitioners should choose between different GCA variants (GCA-INCE, GCA-RINCE, GCA-UOT) for a given task?
- Are there specific task characteristics that make one variant more suitable than others?
- In the domain generalization experiments, how sensitive is the performance to the choice of α and β in the customized transport plan?
- Have you explored using different transport plans for domains or tasks within the same dataset?
- Can you provide more details on how GCA improves robustness to noisy or extreme data augmentations compared to baseline methods?
- Are there specific types of noise or augmentations where GCA shows the most significant improvements?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have not explicitly addressed limitations or potential negative societal impacts in the paper as it currently stands. While they do mention some areas for future work in the conclusion, a more comprehensive discussion of limitations and societal impacts would strengthen the paper. Here are some constructive suggestions for improvement:
1. Limitations:
- The authors could add a dedicated "Limitations" section discussing:
a) The current scope of experiments (e.g., limited to certain datasets and architectures)
b) Potential computational overhead of the multi-step approach
c) Challenges in hyperparameter tuning for the new parameters introduced
d) Any scenarios where GCA might not be applicable or beneficial
2. Potential negative societal impacts:
- The authors should consider adding a brief discussion on potential societal impacts, such as:
a) Increased energy consumption due to potentially higher computational requirements
b) Possible biases in learned representations, especially when using customized transport plans
c) Potential misuse of the technique in privacy-sensitive applications
3. Ethical considerations:
- A brief note on any ethical considerations in data usage or potential applications of the method would be valuable.
4. Future work:
- Expand the current mention of future work to include specific directions for addressing identified limitations.
It's important to note that the absence of these discussions doesn't necessarily indicate oversight by the authors, but rather an opportunity to enhance the paper's comprehensiveness. Adding these elements would align well with NeurIPS guidelines and contribute to responsible research practices in the field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks so much for your detailed feedback! We will now provide a point-by-point response to your questions.
1. “Scaling to larger datasets would be beneficial.”
**Reply:** Thanks for your suggestion. Please see that we have added the ImageNet100 and SVHN evaluation results to Table R1 in the general response.
2. “Can you provide a detailed analysis of the computational overhead [...]? memory usage [...]?”
**Reply:** Our summary of the computational complexity is in Sec. 3.2, Line 188, and it is further analyzed in Appendix C.1. The backward pass is not significantly impacted by the number of iterations (Fig. A3); 10 iterations of GCA-INCE require only 5% more FLOPs (see general response #2). GCA-UOT even reduces FLOPs by 30% compared to INCE. Regarding memory usage, GCA-INCE (99.82 MB) and GCA-UOT (51.28 MB) are comparable to INCE (51.28 MB) using the same ResNet18 model.
3. “Hyperparameter sensitivity”
**Reply:** Thank you. In the main text, we summarize the ablation study of the sensitivity to α and β in Sec. 6.2. Additionally, we provide an ablation study of epsilon in Fig. U2. We show the impact of the iteration number and discuss the choice of λ and q in GCA-RINCE in Fig. U3. We hope these efforts alleviate your concern about hyperparameter sensitivity.
4. “In the domain generalization [...], how sensitive is the performance to the choice of α and β in the customized transport plan?”
**Reply:** As shown in Fig. 2, increasing the difference between α-β enhances classification accuracy, where α is the weight for same-domain samples and β for different domains. This is likely due to domain-informative constraints guiding the model during pretraining. Larger weights on same-domain samples benefit domain generalization.
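A hedged sketch of what such a block-structured target plan might look like (the function name and normalization are our assumptions, not the paper's implementation): same-domain pairs receive weight α and cross-domain pairs weight β, so increasing the α-β gap strengthens the domain-informative constraint.

```python
import numpy as np

def block_target_plan(domains, alpha=0.9, beta=0.1):
    """Illustrative block-structured target transport plan: entries get
    weight alpha when the two samples share a domain label, beta
    otherwise, then the plan is normalized to total mass 1."""
    d = np.asarray(domains)
    T = np.where(d[:, None] == d[None, :], alpha, beta)
    return T / T.sum()

# Four samples from two domains: a 2x2 block-diagonal structure emerges.
T = block_target_plan([0, 0, 1, 1], alpha=0.9, beta=0.1)
```

With α much larger than β, the target plan pushes the aligned representations to respect domain membership during pretraining, which is the effect described in the reply.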
5. “Did you try comparing GCA with other alignment-based [...] methods? If so, what were the results? How does GCA differentiate itself [...]?”
**Reply:** Thanks. Wang et al. consider a different setup, where OT is used to align seen and unseen images to achieve zero-shot performance. Balaji et al. introduce a Wasserstein metric for adversarial learning and domain adaptation. It is hard to compare their work with ours because they do not follow the same contrastive learning framework. However, their methods reveal additional settings where contrastive learning can be combined with adversarial or generative tasks.
6. “ Can you elaborate on any potential limitations or failure cases of the GCA approach?”
**Reply:** For large datasets, GCA methods may consume more resources due to the numerous alignment plans, and single-step methods might perform better and converge faster. Additionally, incorrectly chosen hyperparameters such as epsilon can cause GCA to fail.
7. “Ablation on the number of alignment steps.”
**Reply:** We use a convergence criterion to stop the alignment; in practice, 5 steps are typically sufficient for convergence. Based on your suggestion, we conducted an ablation study on CIFAR-10 (Fig. U2), which shows that accuracy and cluster compactness stabilize after 5 iterations.
8. “The empirical evaluation of unbalanced OT is limited. [...]. More extensive experiments or analysis demonstrating the specific benefits of unbalanced OT over balanced OT in different scenarios”
**Reply:** Thanks. We ran additional long-tail classification experiments to show the performance of unbalanced OT (Table R2 in general response). These experiments show that GCA-UOT outperforms other baselines and highlight another application where unbalanced OT outperforms other balanced alignment approaches.
9. “ how should practitioners choose between different variants [...]? Can you provide more concrete guidelines [...]? Are there specific task characteristics [...]?”
**Reply:** Choosing the right method depends on specific constraints. GCA-UOT consistently performs best across tasks, and GCA-RINCE generally outperforms GCA-INCE, especially with noisy data. With fewer classes and good augmentations, all methods perform comparably (see CIFAR-10 in Table 1). However, for noisy or unbalanced views, GCA-UOT offers the flexibility to add accommodating constraints.
10. “Is there a point of diminishing returns [...]”
**Reply:** Yes, increasing iterations in our multistep objective shows no performance improvement beyond a certain point (Fig. U2). On CIFAR-10, this plateau occurs early, typically around 5 iterations.
11. “Have you explored using different transport plans [...]?”
**Reply:** In our experiments, we use a single transport plan based on matching constraints: a diagonal target for most experiments and a block constraint for domain adaptation. Using multiple matching constraints for one dataset is an interesting idea!
12. “Can you provide more details on how GCA improves robustness [...]?”
**Reply:** Thanks for the questions. Fig. U3 shows that tuning the hyperparameter q close to 1 gives GCA-RINCE symmetry properties, making it more robust to strong augmentation. Additionally, by setting the divergence in GCA to a symmetric loss (GCA-RINCE), we show through Lemma 7 (Appendix A.7) that robustness to noisy augmentations is enhanced. This is supported both theoretically and empirically.
13. “Are there specific types of noise [...]”
**Reply:** Thanks. Our results in Tables A2-A4 (Appendix B.2) show that GCA significantly improves under strong crop and large erase conditions, indicating its effectiveness in mask distribution recovery.
14. “The authors have not explicitly addressed limitations [...]”
**Reply:** Thanks. We have updated our discussion of limitations by incorporating your points: 1) extra computational overhead potentially increasing energy consumption, and 2) potential misuse in privacy-sensitive applications.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer ih3d,
Thank you for your initial feedback on our work. We wanted to follow up and see if there were any additional questions or points of clarification that we could address. Your insights are highly valuable to us, and we are eager to make any necessary improvements based on your suggestions. | Summary: This paper proposed to view contrastive learning (CL), a popular framework for learning data representation in machine learning. Specifically, the work builds on some recent works that view CL as an alignment problem with optimal transport, showing some previous popular CL frameworks are the special form of this new framework, named generalized contrastive alignment (GCA). Leveraging the theory of unbalanced optimal transport, the authors also introduced an unbalanced CL loss to handle outliers. Empirical benchmarks on the CIFAR-10 and CIFAR-100 datasets are performed to show the effectiveness of GCA.
Strengths: 1. The methodology and theory introduced is sound and based on the existing theory of optimal transport and its entropic regularization version.
2. The writing is good and easy to follow. The authors also seem to have done a thorough literature review related to their work.
Weaknesses: 1. **Major: there is a large overlap, not in the writing, but in the idea and content of this work and Shi et al (2023).** In particular, the most important one that I want to point out is Algorithm 1 in this work and Algorithm 1 in Shi et al (2023). If one replaces the Bregman div $d_\Gamma$ and $d_M$ by KL divergence into the proximal loss of the inner optimization (eq 8), and $h(P)$ as in the indicator function, one recovers exactly the proposed loss in Shi et al (2023), Eq 8. In the empirical evaluation part, I believe the authors of this work used the aforementioned setting as well, hence I wonder what is the major difference between the main proposed method and that of Shi et al (2023). The idea of using unbalanced OT and its connection to MoCo also has been mentioned in Shi et al (2023, end of Section 3.2).
2. The empirical evaluation is interesting, but I do not find it enough. I believe the authors should have included Shi et al. (2023) as a baseline. The evaluation also should have been done equally on larger dataset such as ImageNet.
3. This is a more philosophical point: I am not sure whether we want the transport plan $P$ to be smooth. I therefore wonder how the authors tuned their entropic regularization parameter $\varepsilon$. In computational optimal transport there is a well-known tradeoff between the sparsity of the solved transport plan (in the proximal step of the inner problem) and the stability of the scaling operation: if the value of $\varepsilon$ is small (which makes the plan more sparse), the operation will be quite unstable. The Sinkhorn update will run better and faster with a larger $\varepsilon$, but the transport plan is no longer sparse. I also suggest the authors visualize the quality of the solution $P$, similar to what has been done in Figure 3 of Shi et al (2023).
- Small nitpick: Eq (1) missing expectation wrt to samples.
Liangliang Shi, Gu Zhang, Haoyu Zhen, Jintao Fan, and Junchi Yan. Understanding and generalizing contrastive learning from the inverse optimal transport perspective. In International Conference on Machine Learning, pages 31408–31421. PMLR, 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Limitation is discussed, but an important point I raised on the weaknesses section is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your comments and detailed feedback. We appreciate the opportunity to discuss the contributions of our work in connection to Shi et al. and also expand our comparisons based upon your suggestion.
1. Discussion of overlap with Shi et al.
**Reply:** Thanks for your comment and the opportunity to discuss the differences between our work and theirs. As you point out, Shi et al. have provided initial connections between INCE and optimal transport to build a multistep algorithm for alignment using Sinkhorn iterations. In our work, in order to incorporate more general losses and alignment objectives, we have made a number of novel contributions:
- **New algorithm for generalized alignment for contrastive learning:** To allow for more generalized losses, our algorithm allows the intersection of new constraint sets to be iteratively solved, while previous work mainly focuses on the solution of alignment exclusively through Sinkhorn iterations.
- **Novel approach for unbalanced OT-based alignment:** We leverage a rich body of work in OT to introduce a variant of GCA that relaxes the constraints on the distribution penalty (Sec. 3.3). By converting the hard penalty (constraint sets) into the soft regularization terms, our GCA-UOT method achieves high classification accuracy (Table 2) and faster convergence than INCE (Fig. A3), linking OT literature with optimization and contrastive learning.
- **Connections and a multistep variant of RINCE:** By building a more generalized form of alignment, we demonstrate that it's possible to develop connections to the Robust INCE loss, RINCE. This equivalence enables us to develop a multistep RINCE variant that performs better with corrupted views.
- **Novel results in domain generalization through block-diagonal matching constraints:** By changing the target plan to have block diagonal structure, we can absorb the domain information (Sec. 6). Adding domain-specific matching constraints can improve the pre-training model and enhance classification accuracy in cross-domain generalization tasks.
- **New theory and insights:** We provide illustrations of, and proofs for, the convergence of our more generalized algorithms, not just for the KL divergence in the Sinkhorn setting but for other Bregman divergences, which was not shown in previous work. We also develop new results to explain why running GCA can lead to better uniformity and alignment.
In summary, our GCA framework provides a foundation for addressing a wider range of potential issues in contrastive learning.
2. Difference between GCA-UOT and the idea of “unbalanced matching” in Shi et al.
**Reply:** Thanks for your question. The usage is actually quite different. In our case, we use the term “unbalanced OT” to refer to a large body of work [1,2] in the OT literature that relaxes the constraint on distribution matching across the source and target domains, typically by building an unconstrained optimization objective with soft marginal penalties. We find that this relaxation has two main advantages: (i) improved accuracy and (ii) improved complexity and faster convergence (Fig. A3).
In contrast to our method, Shi et al. introduced the idea of “unbalanced matching,” which models the fact that twin-network approaches like MoCo consider views from two different encoders (often tied via a momentum term). Our approach, GCA-UOT, employs unbalanced optimal transport, which converts the hard penalty (constraint sets) into a soft penalty such as KL or L2 regularization terms.
[1] Xu, M., & Gould, S. (2024). Temporally Consistent Unbalanced Optimal Transport for Unsupervised Action Segmentation. arXiv:2404.01518.
[2] De Plaen, H., et al. (2023). Unbalanced optimal transport: A unified framework for object detection. CVPR, 3198-3207.
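As an illustration of the soft-penalty relaxation described in this reply, here is a generic unbalanced-Sinkhorn sketch (our own illustration in the standard style of the unbalanced-OT literature, not the authors' GCA-UOT code): replacing the hard marginal constraints with KL penalties of weight λ damps the Sinkhorn scalings by an exponent λ/(λ+ε).

```python
import numpy as np

def unbalanced_sinkhorn(C, eps=0.5, lam=1.0, n_iters=200):
    """Sketch of unbalanced entropic OT: hard marginal constraints are
    replaced by soft KL penalties with weight lam, which damps the
    Sinkhorn scaling updates by the exponent lam / (lam + eps)."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)
    fe = lam / (lam + eps)           # damping from the KL relaxation
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        v = (b / (K.T @ u)) ** fe    # soft column-marginal update
        u = (a / (K @ v)) ** fe      # soft row-marginal update
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
P = unbalanced_sinkhorn(rng.random((5, 5)))
# the marginals of P only approximately match a and b: the constraint is soft
```

Taking λ → ∞ recovers the balanced (hard-constraint) case, which is one way to see the two formulations as endpoints of the same relaxation.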
3. “I believe the authors should have included Shi et al. (2023) as a baseline.”
**Reply:** Thanks. Indeed, there are some differences between Shi et al.'s implementation of MS-INCE and ours: we use a dual form of the OT objective, while Shi et al. use the Sinkhorn algorithm. We tried to obtain their exact implementation but could not find code online, so we implemented it ourselves and ran this method as another baseline. Please see Table R1 in the general response for the comparison with their IOT method on ImageNet100 and SVHN. We are working on adding this baseline for all of the results in the paper.
4. “Evaluation also should have been done equally on a larger dataset such as ImageNet.”
**Reply:** Thanks for your suggestion. Rather than focusing on larger datasets with more classes like ImageNet, we chose to explore new settings like domain generalization and robust variants of CL where we could demonstrate the versatility of the approach. Based upon the reviewer's suggestions, we ran additional experiments on ImageNet-100 and SVHN (see Table R1 in the general response). Our results show that our alignment methods improve over standard contrastive losses for the same backbone. Consistently, we find that UOT gives even further improvements over the other methods.
5. “How did authors tune their entropic regularization ε parameter? [...] I also suggest the authors do a visualization of the quality of solution P, similar to what has been done in Figure 3 of Shi et al”
**Reply:** Thank you. We have computed the same plots (Fig. U1) and include an analysis of how the epsilon changes performance. A smaller epsilon results in a sparser optimal transport plan but potentially leads to numerical instability. We chose a small epsilon (ε=0.2) for CIFAR-10 through cross-validation, however, we find that our alignment method is not very sensitive to this choice. We also examined different ε values for our unbalanced OT method on CIFAR-10 (Fig. U2). The results suggest that an ε range of 0.2-0.6 achieves good compactness and high accuracy.
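To make the ε discussion concrete, the following self-contained sketch (standard balanced Sinkhorn on a random cost matrix; all names and parameter values are our own choices, not taken from the paper) shows that a smaller ε yields a lower-entropy, sparser transport plan, while a larger ε blurs the plan toward the product of the marginals:

```python
import numpy as np

def sinkhorn_plan(C, eps, n_iter=1000):
    """Balanced entropic OT between uniform marginals for a square cost C."""
    n = C.shape[0]
    a = np.full(n, 1.0 / n)
    K = np.exp(-C / eps)   # a very small eps can underflow here -> instability
    u = np.ones(n)
    v = np.ones(n)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = a / (K.T @ u)
    return u[:, None] * K * v[None, :]

def plan_entropy(P):
    """Shannon entropy of the transport plan (lower = sparser/more peaked)."""
    p = P[P > 1e-12]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
C = rng.random((6, 6))
H_small = plan_entropy(sinkhorn_plan(C, eps=0.2))  # sparser, lower-entropy plan
H_large = plan_entropy(sinkhorn_plan(C, eps=2.0))  # blurrier, higher-entropy plan
```

The `eps=0.2` plan concentrates mass on few entries (lower entropy), while `eps=2.0` spreads it out; pushing ε much smaller eventually underflows the kernel `K`, which is the numerical instability mentioned above.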
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I increased my evaluation to 4 to reflect the newly added materials; the evaluation still leans toward Reject for the following reasons:
1. The result in Table R1 is actually quite mixed: one can see that given enough pretraining iterations, the performance of all 4 methods is very close (SVHN at 500 epochs, third column -- note I have no idea why the authors skip the CIFAR100 dataset). For ImageNet I can understand it takes quite a bit of time to train the network, but I suspect that with ResNet50 at 500 epochs the results would be the same. I also do not understand why the IOT of Shi et al. (2023) performs so badly compared to the other 3 on ImageNet.
2. I still hold my opinion that the novelty here is somewhat limited, in the sense that while extending the framework of Shi et al. (2023) is non-trivial, it is as hard as extending from classical OT to the unbalanced OT setting.
I also observed other reviewers have some interesting questions and comments, especially Reviewer TBTc. I therefore look forward to see the answers to those comments as well.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 96Mt,
Thank you for your continued engagement with our work and for revisiting your evaluation based on our rebuttal. We appreciate the constructive feedback and would like to address the specific concerns you have raised.
1a. “Given enough pretraining iterations, the performance of all 4 methods is very close (SVHN at 500 epochs, third column -- note I have no idea why the authors skip the CIFAR100 dataset). For ImageNet I can understand it takes quite a bit of time to train the network, but I suspect that with ResNet50 at 500 epochs the results would be the same.”
While it is true that the INCE-based methods provide similar results, we find that our unbalanced OT approach, GCA-UOT, consistently improves over these other methods. We note that this is out of the box, without tuning the hyperparameters of the alignment method. This demonstrates that our approach generalizes well and can be used with little to no tuning on new datasets.
Please note that we did not add CIFAR100 to the response because those results are already provided in Table 2 of the main text.
1b. “I also do not understand why IOT of Shi et al. (2023) could perform so badly compared to other 3 on ImageNet.”
We applied both our method and Shi et al.'s method out of the box, using the same parameters as described in their paper, without further tuning. Perhaps with some light tuning their implementation of MS-INCE and our dual form would be more comparable.
2. Novelty of the Contribution
By providing a more general framework for contrastive alignment, our work allows for numerous extensions, including UOT and robust INCE-based losses, as well as block-wise constraints in domain generalization. We think this general formulation is key for unlocking additional applications in the future. Shi et al.'s work considered only Sinkhorn-based INCE alignment, and thus we see our framework as a major advance over the existing work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback. We appreciate that the reviewers thought that our work “provides an interesting new perspective on what SSL does” (Reviewer **TBTc**), that “the methodology and theory introduced is sound” (Reviewer **96Mt**), and that it provides “a solid theoretical foundation” (Reviewer **VXuY**). Additionally, the reviewers remark that the approach’s “flexibility [...] opens up new possibilities for tailoring representations to specific tasks or domains” (Reviewer **ih3d**).
In our general response, we would like to: (1) highlight the key differences between our approach and prior work [reviewers **96Mt, TBTc**]; (2) address the concern of computational complexity [reviewers **TBTc, ih3d, VXuY**]; (3) provide additional larger-scale experiments [reviewers **96Mt, VXuY**].
**1. Clarify contributions from related work [96Mt, TBTc]:**
Previous work from Shi et al. provides connections between InfoNCE and optimal transport using the Sinkhorn method. In contrast to this work, we make a number of novel contributions:
- **New algorithm for generalized alignment for contrastive learning:** Our algorithm handles more generalized losses by iteratively solving new constraint set intersections, whereas previous work focused solely on alignment via Sinkhorn iterations.
- **Novel approach for unbalanced OT-based alignment:** We leverage a rich body of work in OT to introduce a variant of GCA that relaxes the constraints on distribution matching into a soft penalty (Sec. 3.3). By converting the hard penalty (constraint sets) into soft regularization terms, our GCA-UOT method achieves high classification accuracy (Table 2) and faster convergence than INCE (Fig. A3), linking the OT literature with optimization and contrastive learning.
- **Connections and a multistep variant of RINCE:** By building a more generalized form of alignment, we demonstrate that it's possible to develop connections to the Robust INCE loss, RINCE. This equivalence enables us to develop a multistep RINCE variant that performs better with corrupted views.
- **Novel results in domain generalization through block-diagonal matching constraints:** By changing the target plan to have block diagonal structure, we can absorb the domain information (Sec. 6). Adding domain-specific matching constraints can improve the pre-training model and enhance classification accuracy in cross-domain generalization tasks.
- **New theory and insights:** We prove the convergence of our more general algorithm, not just for the KL divergence used in Sinkhorn settings but for other Bregman divergences, which was not shown for previous algorithms. We also develop new theorems to explain why running GCA can lead to better uniformity and alignment.
In summary, our GCA framework provides a foundation for addressing a wider range of potential issues in contrastive learning, by demonstrating how to create a variety of different contrastive methods and bringing advanced OT approaches to bear in representation learning.
**2. Computational complexity [TBTc, VXuY, ih3d]:**
Regarding the additional complexity of our method, we provide analysis and empirical evidence of the complexity (Sec. C1) and compare the running time of different methods (Fig. A3) in the appendix. Our results show that GCA iterations only slightly increase the computational complexity, while GCA-UOT is faster than INCE due to the improved symmetry and smoothness of the loss. In addition, we record the floating-point operations (FLOPs) of running the GCA methods. We find that GCA-INCE (6.65 MFLOPs) requires 5% more FLOPs than INCE (6.31 MFLOPs), while GCA-UOT saves 30% of the FLOPs (4.54 MFLOPs). These results show that our GCA-UOT method is superior not only in accuracy but also in speed.
**3. Additional Experiments**
- **Experiments on larger datasets:** Following the reviewers' suggestions, we ran additional experiments on ImageNet-100 and SVHN (Table R1), comparing against both INCE and the Shi et al. (IOT) method as baselines. There is no code available from Shi et al., so we implemented their method ourselves. We found that our GCA-UOT achieves the highest accuracy on both datasets.
- **Sensitivity analysis:** Additionally, we provide ablation studies of the hyperparameters, including the effect of epsilon on the transport plan (Fig. U1) **[96Mt]**, the number of iterations (Fig. U2) **[ih3d, VXuY]**, and the effect of q and λ under strong noisy augmentations (Fig. U3) **[VXuY]**.
- **Additional examples of UOT:** To provide more scenarios where GCA-UOT improves over balanced OT **[VXuY]**, we test our method on the problem of long-tail image recognition with CIFAR100-LT (Table R2).
We hope our additional experiments address your concerns. Please let us know if there are additional results that you would like to see, and we will do our best to provide them.
**Results**
**Table R1. Comparison of GCA methods with other baselines on the SVHN and ImageNet100 datasets.** For SVHN, we pre-trained ResNet50 models using the Adam optimizer with a learning rate of 3e-4 for both 200 and 500 epochs. For ImageNet100, we pre-trained a ResNet50 model for 100 epochs using the same optimizer and learning rate. The classification accuracy was then evaluated using a linear-layer readout trained for 100 epochs.
| | **SVHN** | | **ImageNet100** |
| -- | -- | -- | -- |
|**Epochs**| **200**|**500**| **100**|
|**UOT**| **89.98**|**91.85**|**68.63** |
|**MSINCE**|86.00|89.86|67.60|
|**INCE**|86.60|89.79|67.93|
|**IOT(Shi)**|85.19|90.01|66.07|
**Table R2. Comparison of GCA methods with other baselines on long-tail recognition (CIFAR100-LT).** We pre-trained models for 500 epochs using the Adam optimizer on the CIFAR100 long-tail dataset with an imbalance factor of 0.1. The classification accuracy was then evaluated using a linear-layer readout trained for 200 epochs.
|**Method**|**CIFAR100LT**|
| -- | -- |
|**UOT**|**35.96**|
|**MSINCE**|33.20|
|**INCE**|33.72|
|**IOT(Shi)**|32.55|
Pdf: /pdf/a72eccd06d118c91f70a3db220f91925cb5d9d6b.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Sm: enhanced localization in Multiple Instance Learning for medical imaging classification | Accept (poster) | Summary: The paper is well-written and proposes a new modular mechanism that can (selectively) combine local and global interactions in a model. The authors discuss the theory behind their proposed smoothness operator and pooling mechanism and provide detailed proof for their claims. They have also evaluated their model against other methods on three different datasets (1 CT and 2 H&E) with reporting AUC and F1 score as their evaluation metrics. Their method is superior to other methods and the proper ablation study supports their claims.
Strengths: The writing is concise and the contributions are clear. The author proposes a new smoothness mechanism, generally an interesting idea, to integrate local and global information. Besides the quantitative experiments, there are some qualitative experiments such as attention maps that show the model's relative importance on different regions in a slide.
Weaknesses: There are three major concerns with this work:
1. The choice of encoder. The authors have used ResNet18 for RSNA and Camelyon16 and ResNet50 for PANDA. I could not find any justification for why different encoders were used! Also, with the trend toward foundation models, transformer-based backbones are now of interest to the community. Therefore, for the sake of consistency, the authors should use the same encoder for the different datasets or report both ResNet50 and ResNet18. And, for the sake of the generality of their work, they should add one encoder such as ViT or Swin as well (w/ ImageNet weights) to support the generality of their claims.
2. The body of research has been founded on local-to-global interactions. There are quite a few standard graph-based methodologies in the literature that the authors need to compare their work against. Currently, there are no representatives from those families of the MIL method in the benchmark. Two examples of such methods are (1) and (2).
3. There is a family of MIL methods in the literature that try to pseudo-label the instances during training, which is essentially equivalent to localization in this work. For instance, two of the most recent such methods are (3) and (4). The author should compare their method against these as they are essentially from the same family.
(1) Chen, Richard J., et al. "Whole slide images are 2d point clouds: Context-aware survival prediction using patch-based graph convolutional networks." Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part VIII 24. Springer International Publishing, 2021.
(2) Li, R., Yao, J., Zhu, X., Li, Y., Huang, J. (2018). Graph CNN for Survival Analysis on Whole Slide Pathological Images. In: Frangi, A., Schnabel, J., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science(), vol 11071. Springer, Cham. https://doi.org/10.1007/978-3-030-00934-2_20
(3) Z. Shao et al., "LNPL-MIL: Learning from Noisy Pseudo Labels for Promoting Multiple Instance Learning in Whole Slide Image," 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2023, pp. 21438-21438, doi: 10.1109/ICCV51070.2023.01965.
(4) Ren, Q. et al. (2023). IIB-MIL: Integrated Instance-Level and Bag-Level Multiple Instances Learning with Label Disambiguation for Pathological Image Analysis. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14225. Springer, Cham. https://doi.org/10.1007/978-3-031-43987-2_54
Technical Quality: 3
Clarity: 3
Questions for Authors: I am curious whether the authors expect the same performance boost obtained on top of ResNet to be obtained on top of a histopathology-specific feature extractor.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: It has been justified :)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Rev. uFQt for their valuable insights. Next we address their concerns.
**The choice of encoder.**
We apologize for not providing a rationale for the chosen encoder in each dataset. When we started our experiments, we fixed ResNet18 w/ ImageNet weights as the default one. This is the one used for RSNA and PANDA, since the baselines obtained results comparable to those reported in previous works. However, when using this encoder on CAMELYON16, we observed that all methods performed worse than reported in previous papers. We switched to ResNet50 with ImageNet weights, but observed a similar behavior.
We found that some papers were using a feature extractor pre-trained with SSL on very large datasets [13, 20], so we decided to use one of these. Specifically, we used the weights provided by [17], which correspond to a ResNet50 model trained using Barlow Twins on a huge dataset of WSI patches (around 32.5 million patches).
This led to the baselines obtaining comparable results to previous papers, so we decided to fix this encoder for CAMELYON16.
By using two different encoders (ResNet18 and ResNet50 with Barlow Twins) we intended to show that the proposed method obtains improvements in localization/classification regardless of the type of features used. Yet, for the sake of generality, we agree that it is interesting to report all the datasets with a wide range of encoders. Following the suggestion, we have run all the experiments using three different encoders (ResNet18, ResNet50, and ViT-B-32, all w/ ImageNet weights). Results at the instance and bag levels are included in Tables 1 and 2 of the rebuttal PDF. As a summary, the following table shows the average rank obtained by each method using the suggested encoders. The best result within each group is bolded, and the second-best is underlined. SmAP and SmTAP obtain the highest rank in almost all cases. This supports that the improvement by Sm does not depend on the features used.
|||ResNet18||ResNet50||ViT-B-32||
|-|-|-|-|-|-|-|-|
|||Inst. rank|Bag rank|Inst. rank|Bag rank|Inst. rank|Bag rank|
|W/o global int.|SmAP|$\mathbf{1.667}_{0.516}$|$\underline{1.917}_{0.917}$|$\mathbf{1.750}_{0.707}$|$\mathbf{1.938}_{1.084}$|$\mathbf{1.667}_{0.816}$|$\mathbf{1.500}_{1.225}$|
||ABMIL|$2.667_{1.366}$|$\mathbf{1.667}_{0.816}$|$3.750_{1.581}$|$3.188_{0.843}$|$4.333_{2.160}$|$3.000_{0.894}$|
||CLAM|$6.167_{0.983}$|$5.500_{1.225}$|$4.750_{2.053}$|$4.375_{2.134}$|$6.167_{0.983}$|$5.000_{2.449}$|
||DSMIL|$5.167_{0.983}$|$5.333_{1.506}$|$6.375_{0.518}$|$6.000_{0.926}$|$5.667_{1.033}$|$6.667_{0.516}$|
||PathGCN|$5.833_{1.472}$|$5.250_{1.605}$|$5.625_{1.302}$|$4.688_{2.017}$|$4.667_{1.633}$|$4.000_{1.549}$|
||DFTD-MIL|$\underline{2.000}_{0.894}$|$2.833_{1.169}$|$3.125_{1.553}$|$\underline{2.188}_{1.413}$|$\underline{2.167}_{0.983}$|$\underline{2.917}_{1.068}$|
||DeepGraphSurv|$4.500_{1.225}$|$5.500_{1.225}$|$\underline{2.625}_{1.188}$|$5.625_{0.916}$|$3.333_{1.211}$|$4.917_{0.376}$|
|W/ global int.|SmTAP|$\mathbf{2.167}_{1.835}$|$\mathbf{1.667}_{1.033}$|$\mathbf{2.125}_{1.553}$|$\mathbf{1.625}_{0.518}$|$\mathbf{1.833}_{0.983}$|$\mathbf{2.500}_{0.837}$|
||TransMIL|$3.250_{1.255}$|$3.917_{1.201}$|$3.250_{1.035}$|$3.375_{0.916}$|$4.167_{1.329}$|$4.417_{1.908}$|
||SETMIL|$3.667_{0.816}$|$3.417_{1.908}$|$4.500_{0.837}$|$5.000_{0.632}$|$3.667_{1.506}$|$3.333_{1.862}$|
||GTP|$4.083_{1.114}$|$3.250_{1.782}$|$4.250_{1.282}$|$3.250_{2.053}$|$5.083_{1.201}$|$4.833_{1.329}$|
||IIBMIL|$5.167_{2.041}$|$5.667_{0.816}$|$4.250_{2.252}$|$4.875_{1.553}$|$4.167_{1.722}$|$3.250_{2.139}$|
||CAMIL|$\underline{2.667}_{1.751}$|$\underline{3.083}_{0.801}$|$\underline{2.250}_{1.165}$|$\underline{2.625}_{1.061}$|$\underline{2.083}_{1.281}$|$\underline{2.667}_{1.211}$|
**Graph-based baselines.**
We intended to cover these baselines by using GTP (which uses a graph convolutional network before the transformer encoder) and CAMIL (which represents the bag as a graph to mask the attention matrix). Yet, we agree that including more baselines provides better context. Note that the suggested methods (PathGCN and DeepGraphSurv) were proposed for survival analysis, which is a regression-related task. We have minimally adapted them for classification and run them on the three datasets, with the existing feature extractors and the new ones mentioned above. Results are included in the aforementioned tables (rebuttal PDF and summary table above).
**Pseudo-labels baselines.**
As mentioned by the reviewer, methods that assign pseudo-labels to instances can naturally address localization.
These methods fit the general framework described in our paper (take the $f_n$'s in Fig. 1a to be the pseudo-labels). We tried to cover this family among the baselines with DSMIL (which supervises pseudo-labels using the max operator and a CE loss) and CLAM (which supervises them using a clustering loss). Yet, we agree it's richer to include other alternatives. Of the two methods suggested, we have included IIBMIL in the complete experimentation (see the rebuttal PDF and the summary table above). The reason for not including (3) is that it requires access to (a low percentage of) training instance labels. These labels are used to train a weak classifier that generates a set of pseudo-labels. Instance labels are unavailable in our setting during training, and the rest of the methods do not use this information. Therefore, it would be a misleading/unfair comparison.
**On the last question/curiosity.**
As explained above, the results for CAMELYON16 in the paper are obtained using a histopathology-specific encoder (ResNet50 with Barlow Twins). The performance boost occurs there just as when using generic encoders.
Our intuition is that Sm should work as long as extracted features carry valuable signal. If a poor encoder is used, then Sm may not lead to any improvement.
---
We believe we are addressing the three raised concerns, making a more valuable work. Please let us know if something remains unclear.
---
Rebuttal 2:
Title: Questions regarding the new experiments
Comment: I would like to thank the authors for extending their experiments and including new insights. I would appreciate it if the authors could provide more details regarding the following questions.
It has been mentioned in line 45 that "This is a flexible module that can be used alone on top of classical MIL approaches, or in combination with transformers to also account for global dependencies." If I have understood it correctly, the main contribution is to have Sm as a local-global module that helps improve performance by capturing both local and global interactions. Judging only from Table 1 and ResNet50 with ImageNet weights, how do the authors justify that the models without global interactions are superior to their proposed SmTAP with global interactions?
If I am not missing anything, this is a bit contradictory to the claims of the paper.
Also, comparing ResNet50+BT and ResNet50 only for the Camelyon16 dataset, it seems that their proposed model benefits heavily from pre-trained features (the improvement comes mostly from the encoder rather than the proposed aggregation). I am wondering if the authors can justify this observation. Does this come from any underlying biological meaning, or can the Sm method be intuitively linked to anything?
---
Rebuttal 3:
Title: Please read the rebuttal to check if the authors addressed your concerns
Comment: Dear Reviewer uFQt,
Can you have a look at the rebuttal and see if your concerns have been addressed?
Best regards
Your AC.
---
Rebuttal 4:
Comment: We would like to thank Reviewer uFQt for their positive comments on the new experiments and the new insights we have provided. We address the new questions below.
> **Q. It has been mentioned in line 45 that "This is a flexible module that can be used alone on top of classical MIL approaches, or in combination with transformers to also account for global dependencies." If I have understood it correctly, the main contribution is to have Sm as a local-global module that helps improve performance by capturing both local and global interactions. Judging only from Table 1 and ResNet50 with ImageNet weights, how do the authors justify that the models without global interactions are superior to their proposed SmTAP with global interactions? If I am not missing anything, this is a bit contradictory to the claims of the paper.**
As we indicate in the paper, the proposed Sm operator introduces local interactions in a principled manner (through Dirichlet energy minimization with the presented theoretical guarantees). This operator can be naturally combined with self-attention layers (transformer layers) to account for both local and global interactions.
As the reviewer points out, in Table 1 of the rebuttal PDF, when looking at ResNet50 with ImageNet weights, the models without global interactions (i.e., no transformer layers) do better at the instance level (localization task), with the proposed SmAP achieving the highest rank (see the table in the rebuttal to Reviewer uFQt). However, when looking at the classification task (bag level, Table 2), the models with global interactions outperform those without global interactions, with the proposed SmTAP achieving the highest rank.
These results suggest that models without global interactions (SmAP) perform better at the instance level (localization task, Table 1), while models with global interactions perform better at the bag level (classification task, Table 2). This behaviour was also pointed out by Rev. YAE1. As we argue in the rebuttal to Reviewer YAE1, this is explained by the fact that the self-attention layers of transformers obtain a richer bag representation (which leads to better bag-level results). However, the signal that identifies each token in the sequence becomes weaker after each self-attention layer (the tokens become more similar). This leads to worse instance-level results. Note that this phenomenon is known as over-smoothing and has been observed in (1), (2), (3) (see references below).
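The over-smoothing effect described above is easy to reproduce in isolation. The following sketch is our own toy (identity query/key/value projections, no skip connections or feed-forward blocks, random tokens): stacking a few self-attention layers makes the token embeddings contract toward one another, which is why instance-level signal weakens with depth.

```python
import numpy as np

def self_attention(X):
    """One pure self-attention layer: softmax(X X^T / sqrt(d)) X.
    Identity projections, no residual/FFN -- a toy, not a real transformer."""
    d = X.shape[1]
    S = X @ X.T / np.sqrt(d)
    S -= S.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(S)
    A /= A.sum(axis=1, keepdims=True)   # row-stochastic attention matrix
    return A @ X                        # each token becomes a convex mix of tokens

def diameter(X):
    """Largest pairwise distance between token embeddings."""
    diffs = X[:, None, :] - X[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).max())

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))            # 16 tokens of dimension 8
d0 = diameter(X)
for _ in range(6):                      # stack six attention layers
    X = self_attention(X)
d6 = diameter(X)
# d6 < d0: the token set contracts (over-smoothing / rank collapse)
```

Because each attention layer outputs strict convex combinations of the input tokens, the diameter of the token set shrinks with every layer; real transformers mitigate (but do not eliminate) this via residual connections and feed-forward blocks.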
We believe that the results in Table 1 are not contradictory with the claims in the paper, but rather reinforce them. The proposed Sm improves instance level results (localization) and remains competitive at the bag level (classification). This holds true whether it is combined with transformer layers (SmTAP) or not (SmAP).
> **Q. Also, comparing ResNet50+BT and ResNet50 only for the Camelyon16 dataset, it seems that their proposed model benefits heavily from pre-trained features (the improvement comes mostly from the encoder rather than the proposed aggregation). I am wondering if the authors can justify this observation. Does this come from any underlying biological meaning, or can the Sm method be intuitively linked to anything?**
As the reviewer points out, the proposed Sm benefits from SSL pre-trained features. Our intuition is that if an SSL pre-trained encoder is used, instances that are similar from a biological point of view will tend to cluster together in the feature space. In particular, instances with the same label will be clustered together, and far away from instances with a different label. Since neighboring instances are naturally (a priori) expected to have similar biological properties (e.g., the same label), the smoothness introduced by the proposed Sm will be particularly beneficial. This interesting observation is related to that in CAMIL [13], where it was shown that a SSL pre-trained encoder is crucial for the optimal performance of their neighbor-constrained attention mask.
----
(1) Shi, Han, et al. "Revisiting Over-smoothing in BERT from the Perspective of Graph." International Conference on Learning Representations (ICLR). 2022.
(2) Wang, Peihao, et al. "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice." International Conference on Learning Representations (ICLR). 2022.
(3) Dong, Yihe, Jean-Baptiste Cordonnier, and Andreas Loukas. "Attention is not all you need: Pure attention loses rank doubly exponentially with depth." International Conference on Machine Learning (ICML). PMLR, 2021.
Title: On the new questions
---
Rebuttal 5:
Title: Concern whether our reply was satisfactory
Comment: In case it is necessary, we want to summarize our response to the new questions by the reviewer.
> **Q. It has been mentioned in line 45 that "This is a flexible module that can be used alone on top of classical MIL approaches, or in combination with transformers to also account for global dependencies." If I have understood it correctly, the main contribution is to have Sm as a local-global module that helps improve performance by capturing both local and global interactions. Judging only from Table 1 and ResNet50 with ImageNet weights, how do the authors justify that the models without global interactions are superior to their proposed SmTAP with global interactions? If I am not missing anything, this is a bit contradictory to the claims of the paper.**
Instance level results worsen with transformers. This is not a consequence of the proposed Sm, but of the oversmoothing phenomenon observed in transformers. This has been studied in the works referenced in our previous response.
The proposed method without a transformer outperforms other methods that do not use a transformer, and the proposed method with a transformer outperforms other methods that use a transformer.
> **Q. Also, comparing ResNet50+BT and ResNet50 only for the Camelyon16 dataset, it seems that their proposed model benefits heavily from pre-trained features (the improvement comes mostly from the encoder rather than the proposed aggregation). I am wondering if the authors can justify this observation. Does this come from any underlying biological meaning, or can the Sm method be intuitively linked to anything?**
Note that every method has access to the same features, and every method benefits from using them (see the bag-level results in Table 2). That our method additionally benefits from them at the instance level, we interpret as a sign that local dependencies indeed play an important role in MIL localization.
Please let us know if you have any questions so that we can respond in a timely manner.
We look forward to your response,
Authors
---
Rebuttal Comment 5.1:
Comment: I appreciate the authors' effort in providing further details and explanations for my questions. The response partly addressed my concerns, yet I agree with Reviewer YAE1 that the improvement is marginal. However, given the effort in providing newer benchmarks, and considering that the content will be updated with consistent backbones for the different datasets (as provided in the previous reply), I will raise my rating to Borderline Accept. | Summary: The paper proposes a multi-instance learning approach. The basic idea is to make use of the spatial dependency between training samples. A smoothing operator is proposed to regularize the attention matrix with respect to inter-sample similarity (to my understanding), which the authors claim improves the quality of both the localization and classification tasks.
Strengths: The idea seems unique and well-grounded, but more justification is merited.
Weaknesses: 1. Although the smoothness property does exist in multi-instance learning, as the authors claimed, and may introduce useful information, it is still unclear why imposing it in the neural network helps with classification. From the looks of it, it does not seem to serve as an inductive bias.
2. Actually, only 3 datasets are tested (though each with different variants), and the results are not so persuasive. Especially for the classification task that the authors want to take credit for, they failed to get 1st place in 2/3 of the AUC or F1 scores.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The overall principle of the method needs rephrasing, i.e., how does introducing the Sm transformation (instead of a penalty) improve classification?
2. The smoothness assumption seems to be a property connected with localization. However, in 4.1-4.3 and Fig. 3 the authors mainly present the Sm operator for classification. Yet in the experiment part, it seems that adding Sm has more advantages in localization (but it is not clear how this is achieved). The authors need to clarify the relationship between the two tasks.
3. In Tab. 1, why is the performance of the 'with global interactions' variants worse than the 'without' ones in localization, while the opposite holds in classification (Tab. 2)?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The method requires strict ordering (or spatial information) of the 3D-to-2D samples, which is quite a strong requirement, but it is not discussed in the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback. First we provide a general clarification. Then we address the rest of points.
---
**GENERAL CLARIFICATION**
It is unfortunate that the concepts of classification and localization used in the manuscript were not found clear. These concepts are taken from the MIL literature, especially the medical-imaging literature. Next, we try to clarify them.
Given a test medical image (e.g., a WSI image), our goal is twofold: 1) make a global prediction of whether the WSI contains tumor or not, and 2) predict which patches are the tumorous ones. The former task is called **classification**, and the latter **localization**. Notice that, from the ML viewpoint, both are classification tasks.
**Labels available during training**. The training dataset only contains labels at the WSI level. We know whether each training WSI is cancerous or not, but not which patches contain the tumor. This is exactly the MIL paradigm: WSIs are bags and their patches are instances. Under this MIL setting (bags $\equiv$ WSI, instances $\equiv$ patches), the classification/localization results are usually called bag-/instance-level results, respectively.
**Making predictions at instance-level (localization) although only bag-level labels are available during training**.
This is precisely what MIL methods address. Generally, the idea is to process the instances inside a bag and then aggregate them through some instance-wise attention values ($f_n$ in Figure 1), which reflect the relevance of each instance. Then, these values are used for instance-level prediction.
**We focus on localization task**.
Current approaches are good at the classification task, but their localization results are comparatively limited. Thus, our goal is to enhance localization while staying competitive in classification. This focus is reflected in the paper title “enhanced localization…”. Since both tasks are related, we find that our approach is also slightly superior in classification. We hope this clarifies this sentence by the reviewer: “Especially for the classification tasks that the authors want to credit”.
**The rationale behind our proposal to enhance localization**.
Current MIL methods are designed to target the classification task. In terms of localization, they pinpoint some regions of interest, but a systematic instance-level evaluation reveals low performance metrics. We find these methods do not account for spatial correlations among instances (patches) or, if they do (as CAMIL or GTP), they consider them only to obtain the bag-level aggregation, but not for instance-level predictions. Thus, our idea is to encode these local dependencies into the MIL model in a principled way (through Dirichlet energy minimization with theoretical guarantees). Intuitively, these local correlations tend to uniformize instance-level predictions, avoiding isolated false positives or false negatives.
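For concreteness, the Dirichlet energy we minimize can be sketched in a few lines (an illustrative NumPy toy, not our implementation; here `f` stands for instance attention values and `A` for the adjacency matrix):

```python
import numpy as np

def dirichlet_energy(f, A):
    """E(f) = 0.5 * sum_ij A_ij * (f_i - f_j)^2 = f^T L f, with L = D - A."""
    L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian
    return float(f @ L @ f)

# 3-node chain: a constant (smooth) signal has zero energy, while an
# isolated spike (e.g., a lone false-positive patch) is penalized.
chain = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
```

Minimizing this quantity pushes neighboring instances toward similar attention values, which is the uniformizing effect described above.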
---
**REMAINING POINTS**
**Why imposing smoothness helps**.
The basic idea is that in real-world medical images the instance labels tend to have a smooth distribution. It is not realistic to have isolated negative patches inside positive regions and the other way around. This uniformity is the inductive bias introduced by our operator. Since we are acting at instance-level, this has a clear effect in localization, see Table 1 in the paper. As a byproduct, since the WSI prediction is aggregated from instance-level info, it also improves the classification task (Table 2), although the margin is lower.
**Persuasiveness of results and number of datasets.**
Recall we expect the main improvement to occur at localization (Table 1). Regarding classification results (Table 2) the differences are lower as expected, but the proposed methods are still the best-performing ones.
Regarding the number of datasets, the comprehensive evaluation of localization requires that the test set is completely labeled (e.g., every patch in the WSIs). This restricts the number of datasets available. Most recent MIL papers in top venues use three or fewer datasets. The papers [13], [37], [38], [29] (ICLR 24, MICCAI 22, IEEE TMI 22, NeurIPS 21) contain, respectively, 2, 2, 2 and 3 datasets.
**Penalty as an alternative to the proposed operator.**
Introducing a penalty term in the loss to favor smoothness is a natural alternative to the proposed operator. This is also commented by Rev. jUey. Please see the response to them (paragraph headed "Loss-based strategy for smoothing"), which includes an ablation.
**W/ and w/o global interactions behave differently in Tables 1-2**. Yes, this is an interesting observation that points out a limitation of transformers in MIL. Transformers excel at aggregating/encoding a complex sequence (e.g. a WSI) to make a global prediction, which is the goal of the classification task. That’s why methods with global interactions (mainly transformers) stand out in Table 2. However, they yield lower performance at the localization task (instance-level predictions, Table 1). This is because the self-attention layers of transformers lead to a loss of signal in the individual tokens of the sequence (e.g., the WSI patches). After such layers we obtain a valuable aggregation, but instance-level information has been degraded. Indeed, the proliferation of transformers in MIL is yet another sign that the literature has so far overlooked the localization task, which is key for real-world deployment.
**Limitations on the spatial information**. We acknowledge that this method is not applicable to MIL problems where the bag lacks a spatial structure. However, medical images (e.g. WSIs and CT scans) inherently possess this structure. To handle it efficiently, we use the torch.sparse API (a common approach in GNNs). This way, the memory overhead is minimal compared to the bag size.
---
We hope the reviewer reconsiders the evaluation of our contribution in the light of this clarification. Please let us know if something remains unclear.
---
Rebuttal 2:
Title: Please read the rebuttal to check if the authors addressed your concerns
Comment: Dear Reviewer YAE1,
Can you have a look at the rebuttal and see if your concerns have been addressed?
Best regards
Your AC.
---
Rebuttal Comment 2.1:
Title: reviewer feedback
Comment: Thanks for the authors' effort to present additional results and make clarifications. I raised the score despite the concern that this still seems to be a marginal improvement. | Summary: The authors propose a technique to improve the localization capabilities of the current MIL, especially for CAD models that perform CT and WSI analysis. The method is based on seeing the attention attributed to each patch as a graph and minimizing its Dirichlet energy, promoting smooth transitions on the attention values of neighbor patches, which is in accordance with the assumption that neighboring patches share the same labels.
Strengths: The work is well-developed and well written, with a methodology explained through concise equations and theorems that guide the reader through the development of the idea. The results validate what is proposed, but minor adjustments and additional information will make the work even more valuable.
Weaknesses: The method introduces the smooth operator on top of the existing ABMIL model. The proposed approach improves the performance of ABMIL on the CAMELYON dataset, both on localization and classification. Nevertheless, when analyzing the attention maps provided in the appendix, the differences between ABMIL and smAP are minimal. Although we have a numerical evaluation of the Dirichlet energy, we cannot tell if the drop in energy is significant from these numbers alone, so this visual empirical evaluation is important for better insight into the developed work.
Observing the appendix figures, ABMIL already performs well in matching its attention to the tile annotations. I suggest the authors apply the smooth operator on top of other models that did not perform well on CAMELYON for this task, such as CLAM, CAMIL, and SETMIL, to evaluate quantitatively and qualitatively how minimizing the Dirichlet energy of the attention improves the localization task.
I suggest the authors perform an ablation study to evaluate how minimizing the Dirichlet energy compares to other smoothing strategies. For instance, summing the variance of the attentions to the model’s loss may provide similar results as minimizing the Dirichlet energy.
The analysis of the alpha parameter is great, but again, a qualitative approach is valuable for this case. Seeing how the attention map changes as the parameter alpha increases is highly valuable for the reader.
Lines 127 to 131: discussing equations not present in the paper is not helpful for the reader. I suggest writing the discussed equations in the article or explaining the arguments without referring to them.
Technical Quality: 3
Clarity: 3
Questions for Authors: The formulation seems to assume bags of equal size, which is not the case in MIL (in general).
The proposed method (loss) may have the drawback of over-smoothing the transition between tiles, making the localization regions blurry. Is this true? Could this be overcome?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Rev. jUey for their positive and valuable feedback. Next we address their concerns.
**Visual comparison between ABMIL and SmAP.**
As pointed out by Rev. jUey, the fact that the Dirichlet energy decreases when using the proposed operator (Table 3 in the paper) does not imply that the instance-level performance (localization task) or the bag-level one (classification task) are enhanced.
Such enhancement can only be assessed by computing performance metrics both at instance and bag level, which is done in Tables 1 and 2 in the paper, respectively. Those tables show that SmAP outperforms ABMIL, especially at localization (see e.g. CAMELYON16, where they obtain 0.96 vs 0.82 in AUROC and 0.84 vs 0.77 in F1-score). Although the attention maps shown for SmAP and ABMIL in CAMELYON16 are very similar, please note that this is the visualization of *just one WSI*, which was selected at random (no cherry-picking). Instead, numerical results in Tables 1-2 in the paper consider all the 130 WSIs in the test set, not only one. Yet, we agree it can be misleading to use only one test WSI for visualization. Thus, in the rebuttal PDF we include three different test WSIs of CAMELYON16, where the overall enhancement of SmAP over ABMIL can be appreciated. It is interesting to see how SmAP better defines the positive (red) region thanks to the smooth operator.
**Applying Sm on top of other baselines.**
As explained above, if we look at the aggregated results of Table 1 in the paper instead of a particular WSI, ABMIL is not the best-performing among the baselines. Yet, it is worth exploring how other approaches behave when combined with Sm. We have done so for the rest of the baselines w/o global interactions, i.e. CLAM, DSMIL, and DFTD-MIL, using CAMELYON16. Notice that adding Sm on top of the baselines w/ global interactions is not as natural, as those methods carry their own mechanisms to account for interactions. The results are in the table below. The improvements when adding Sm are in bold.
|Model|Sm|Inst. AUROC|Inst. F1-score|Bag AUROC|Bag F1-score|
|-|-|-|-|-|-|
|CLAM|No|$0.849_{0.044}$|$0.821_{0.046}$|$0.960_{0.029}$|$0.897_{0.012}$|
||Yes|$\mathbf{0.928}_{0.028}$|$\mathbf{0.873}_{0.018}$|$\mathbf{0.966}_{0.007}$|$0.889_{0.017}$|
|DSMIL|No|$0.76_{0.078}$|$0.654_{0.203}$|$0.947_{0.085}$|$0.866_{0.136}$|
||Yes|$\mathbf{0.960}_{0.013}$|$\mathbf{0.776}_{0.088}$|$\mathbf{0.967}_{0.011}$|$\mathbf{0.919}_{0.018}$|
|DFTD-MIL|No|$0.884_{0.002}$|$0.742_{0.040}$|$0.983_{0.010}$|$0.937_{0.013}$|
| |Yes|$0.884_{0.183}$|$\mathbf{0.836}_{0.222}$|$0.978_{0.158}$|$0.903_{0.183}$|
The conclusion is similar to the main manuscript: instance-level performance is enhanced (greatly in some cases, e.g. an increase from 0.76 to 0.96 in AUROC for DSMIL), whereas bag-level results are competitive. The decrease in bag-level for DFTD-MIL is explained because this method randomly splits each bag into different *chunks*, which may lead to the loss of local interactions exploited by Sm (e.g., if two adjacent instances end in different chunks).
**Loss-based strategy for smoothing.**
Introducing a penalty term in the loss to favor smoothness is a natural alternative to the proposed operator. However, there is a key difference, since the use of a penalty term does not modify the model architecture. The penalty term encourages the learned weights to encode such a property, but it is not encoded explicitly in the model; for instance, the penalty term is not used at test time. Still, we find it interesting to include this ablation study. The results are in Table 1 in the global response above, as this was also asked by Rev. YAE1. We conclude that, although differences are not large, Sm obtains superior performance.
**Qualitative assessment of $\alpha$.**
In the rebuttal PDF, we include the attention map by SmAP with three values of $\alpha$ for the same WSIs used before. As theoretically expected, higher $\alpha$ introduces more smoothness. Yet, as reported in the paper, the performance difference among $\alpha$'s is small, and the largest difference is against ABMIL (which treats every instance independently).
**On lines 127-131.**
Thanks for the suggestion, which helps to clarify the paper. In the revised version we have included the equations explicitly, as this is an important insight into why previous approaches that account for local interactions obtain low performance at instance level.
**Bags seem to be of equal size.**
Thanks for pointing this out, the formulation in lines 94-95 can be misleading. No, bags do not need to be of equal size. Indeed, in the description of the RSNA dataset (Appendix B.1), we mention that there are 492 scans (bags) whose sizes vary from 24 to 57. The same applies to the rest of the datasets, where different WSIs have different numbers of patches. We have clarified this explicitly in lines 94-95.
**Over-smoothing.**
Thank you, this is a common problem when introducing spatial correlations, not only in MIL but also in closely related image processing techniques (e.g. deblurring). To address it, in our paper we use a “soft” adjacency matrix, following previous approaches, e.g. [13]: if two instances $(i,j)$ are not adjacent, then $A_{ij}=0$; if they are adjacent, $A_{ij} = \exp\left(-P^{-1}\sqrt{\| \mathbf{h}\_i -\mathbf{h}\_j \|}\right)$, where $\mathbf{h}\_k \in \mathbb{R}^P$ is the embedding of each patch through the feature extractor. This $A_{ij}$ lies in the interval $[0,1]$ and increases as the embeddings become closer.
Note that the degree of smoothness depends on the *visual similarity of the instances*, avoiding unintended over-smoothing when positive and negative instances are not close.
We have noticed that this was not specified in the paper (it could only be seen in the code). We have included it in the revised version.
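To make this concrete, here is a minimal NumPy sketch of the soft adjacency above (illustrative only, not our code; `neighbors` is a hypothetical list of adjacent patch index pairs):

```python
import numpy as np

def soft_adjacency(h, neighbors):
    """Soft adjacency from the formula above: A_ij = 0 for non-adjacent
    pairs, A_ij = exp(-sqrt(||h_i - h_j||) / P) for adjacent ones,
    where h_i is a P-dimensional patch embedding."""
    n, p = h.shape
    A = np.zeros((n, n))
    for i, j in neighbors:
        val = np.exp(-np.sqrt(np.linalg.norm(h[i] - h[j])) / p)
        A[i, j] = A[j, i] = val  # symmetric, in (0, 1]
    return A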
---
We hope these responses and additional materials make the work more valuable to the reviewer.
---
Rebuttal 2:
Title: Please read the rebuttal to check if the authors addressed your concerns
Comment: Dear Reviewer jUey,
Can you have a look at the rebuttal and see if your concerns have been addressed?
Best regards
Your AC.
---
Rebuttal 3:
Title: Have your concerns been addressed?
Comment: Dear Reviewer jUey,
We are wondering if you need any additional clarification. Please let us know so that we can respond to you in a timely manner.
We look forward to your response,
Authors | null | null | Rebuttal 1:
Rebuttal: ## **GLOBAL RESPONSE**
We **thank the three reviewers** for taking their time to read our work, providing constructive and valuable feedback. We appreciate it.
We are happy that **the feedback is overall positive**: "well-developed and well written", "minor adjustments and additional information will make the work even more valuable", "the idea seems unique and based", "the writing is concise and the contributions are clear", "generally an interesting idea", etc.
**Several concerns** have been raised too. **Most of them are specific points** which we believe we have been able to address and/or clarify in the responses. We hope the reviewers find them valuable and consider raising their scores. Please do not hesitate to request any additional clarification during the discussion phase.
The response is organized as follows:
* This **global response** contains a general overview as well as a table requested by more than one reviewer. Each reviewer will be pointed to these tables from their individual response.
* We also attach a **one-page rebuttal PDF file** to this global response. It contains only figures and tables. Each reviewer will be pointed to these items from their individual response.
* We provide an **individual response** for each reviewer.
After reading this general response, we encourage each reviewer to read their individual one, which will include or point to supporting tables and figures.
---
### **GENERAL OVERVIEW**
The following list provides a brief summary of our response to the main concerns expressed by the reviewers. We have
* Added three new baselines. [Reviewer uFQt]
* Extended all the experimentation using the three same encoders for all the datasets. [Reviewer uFQt]
* Provided a thorough clarification for the proposed approach as well as for key concepts in medical-related MIL (classification and localization tasks). [Reviewer YAE1]
* Included an ablation against an alternative way of favoring smoothness. [Reviewers YAE1 and jUey]
* Included additional attention maps to provide a more representative visualization. [Reviewer jUey]
* Evaluated how the proposed operator performs when combined with other baselines. [Reviewer jUey]
---
TABLE 1: Comparing the proposed smoothing operator with a penalty-based mechanism to favor smoothness. Requested by Revs. jUey and YAE1.
|Setting|Model|RSNA Inst. AUROC $(\uparrow)$|RSNA Bag AUROC $(\uparrow)$|PANDA Inst. AUROC $(\uparrow)$|PANDA Bag AUROC $(\uparrow)$|CAMELYON16 Inst. AUROC $(\uparrow)$|CAMELYON16 Bag AUROC $(\uparrow)$|
|-|-|-|-|-|-|-|-|
|W/o global int.|SmAP|$\mathbf{0.798}_{0.033}$|$0.888_{0.005}$|$\mathbf{0.799}_{0.005}$|$\mathbf{0.943}_{0.001}$|$0.961_{0.007}$|$\mathbf{0.965}_{0.007}$|
||ABMIL+PENALTY|$0.782_{0.050}$|$\mathbf{0.889}_{0.043}$|$0.780_{0.003}$|$0.935_{0.001}$|$\mathbf{0.979}_{0.013}$|$0.963_{0.012}$|
|W/ global int.|SmTAP|$\mathbf{0.767}_{0.046}$|$\mathbf{0.906}_{0.007}$|$\mathbf{0.790}_{0.007}$|$0.946_{0.003}$|$\mathbf{0.789}_{0.008}$|$0.976_{0.014}$|
||TAP+PENALTY|$0.737_{0.045}$|$0.905_{0.005}$|$0.772_{0.011}$|$\textbf{0.947}_{0.001}$|$0.769_{0.099}$|$\mathbf{0.988}_{0.004}$|
Pdf: /pdf/0e7fa86215f5a015ae2d70bc44d1d45b0f82916d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Are Self-Attentions Effective for Time Series Forecasting? | Accept (poster) | Summary: This work proposes a cross-attention-only transformer model for time series forecasting.
Strengths: Interesting work. This study thoroughly examines the effectiveness of cross-attention layers in transformers and proposes several useful techniques to build a high-performance model.
Weaknesses: 1. It is intriguing to explore why self-attention is not beneficial for time series forecasting. While the experimental results are robust, readers are more interested in understanding the underlying reasons.
2. The proposed query-adaptive masking contributes to performance improvement but also weakens the overall claim. The added complexity raises questions about whether cross-attention is truly the key factor or whether similar designs could enhance self-attention as well. This relates back to the 'why' issue: what is really happening with self-attention and cross-attention in time series forecasting, and why is one mechanism potentially superior to the other?
Technical Quality: 3
Clarity: 3
Questions for Authors: In the weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful assessment of our paper. First of all, we would like to summarize the key points of our rebuttal before detailed responses to each of your concerns.
**1) Distinct Role of Self-Attention and Cross-Attention**
In transformer architectures, self-attention and cross-attention serve fundamentally different roles: self-attention is typically used in the encoder part of transformers, and cross-attention in the decoder part. Considering that, our work is not merely a replacement of self-attention with cross-attention; rather, it is a rethinking of the transformer model for time series forecasting. Our main hypothesis is that self-attention causes temporal information loss during the embedding process in time series models [1]. By removing self-attention and preserving solely cross-attention in the transformer architecture, we propose a novel architecture that better preserves temporal information.
[1] Zeng, Ailing, et al. "Are transformers effective for time series forecasting?." Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 9. 2023.
**2) Additional Empirical Findings on Self-Attention**
Motivated by your suggestion, we conducted a new experiment to demonstrate that self-attention causes temporal information loss. First, we construct a model by removing the self-attention mechanism from PatchTST and replacing it with a linear layer for independent embedding of each patch. This modification eliminates interference between patches and prevents temporal information loss. We validated the performance of this model on the ETTm1 dataset with prediction lengths of 96, 192, 336, and 720. Detailed results are provided in the table below.
| | PatchTST | PatchTST(SA→Linear) | Ours |
| --- | --- | --- | --- |
| Average MSE | 0.353 | 0.351 | 0.338 |
| Average MAE | 0.382 | 0.381 | 0.376 |
As shown in the table, the modified PatchTST model (i.e., PatchTST(SA -> Linear)) outperformed the original PatchTST model that used self-attention for embedding. Figure 1 in the supplementary PDF shows the absolute values of the weights of the final linear layer for each model as a heatmap. The model with self-attention replaced by linear layers appears to capture temporal information more effectively, while the final linear weights of the original PatchTST model appear as a blurred version of the former model's linear weights. These two aspects provide experimental and intuitive evidence that self-attention leads to temporal information loss. Thank you again for your feedback.
**3) Structural Infeasibility of Removing Self-attention and Proposed Solutions**
While self-attention has the possible demerits mentioned above, naively removing it is structurally infeasible. We believe this is why several transformer-based models depend heavily on the encoder part of transformers. Therefore, we proposed parameter sharing and learnable queries to address two primary issues:
> **a. Inability to Generate Necessary Queries**
* Problem: Without self-attention, the fixed input tokens used in traditional decoders are insufficient to generate the necessary queries for cross-attention.
* Solution: We introduced learnable queries, establishing horizon-dependent parameters as learnable queries. This ensures that each forecasting query independently generates the necessary input for cross-attention.
> **b. Temporal Information Loss Due to Concatenated Linear Layers**
* Problem: Concatenated linear layers at the end of the decoder could lead to temporal information loss.
* Solution: We designed the decoder to process each horizon's query independently using parameter sharing across layers. This approach minimizes temporal information loss by ensuring that each query's processing remains independent, thus preserving temporal information.
---
**Below, we provide a detailed response addressing each of the weaknesses [W#] raised:**
---
[W1], [W2 - Why is cross-attention potentially superior to self-attention?]
We apologize for not thoroughly discussing the rationale behind removing self-attention and proposing a cross-attention-only architecture. As mentioned above, our primary reason for removing self-attention is its tendency to cause temporal information loss, which is critical in time series forecasting. The goal of our work is indeed to highlight the inherent issues of self-attention in time series tasks and to propose a potential alternative. The design choices we made were aimed at addressing the shortcomings of self-attention in time series forecasting, rather than suggesting that cross-attention is inherently superior. We will clarify this ambiguity in the revised version. Thank you for your kind review.
---
[W2 - Query Adaptive Masking]
We acknowledge that dropout is a common technique in time series forecasting. However, as detailed in Section 'B.3 Ablation Study on Query-Adaptive Masking' in the Appendix, using dropout in our model results in worse performance compared to Query Adaptive Masking. The reason dropout degrades our performance is that our model predicts each forecasting horizon based on individual queries. Dropout is not applied at the horizon level but instead zeros out values dimension-wise. This nature of dropout disrupts the learning process of our model. To address this issue, we proposed Query Adaptive Masking, an optimization technique designed to tackle problems arising from parameter sharing across multiple horizons. Query Adaptive Masking randomly blocks the learning of queries for certain horizons within a transformer block, allowing the model to focus on learning other queries within the same block. In the revised version, we will clearly describe this issue. Thank you for your insightful feedback.
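As a rough illustration (not our exact implementation), horizon-level masking can be sketched as zeroing entire query rows, in contrast to dropout's dimension-wise zeroing:

```python
import numpy as np

def horizon_level_mask(queries, p=0.3, rng=None):
    """Zero out whole horizon queries (rows) with probability p.

    queries: (T, d) array, one learnable query per forecasting horizon.
    Unlike standard dropout, which zeroes individual dimensions across
    all horizons, this acts per horizon: unmasked queries are untouched,
    so the model can focus on learning the remaining horizons.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(queries.shape[0]) >= p  # one Bernoulli draw per horizon
    return queries * keep[:, None], keep
```

This sketch only conveys the horizon-level granularity of the masking; the actual mechanism in our model is described in Section B.3 of the Appendix.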
---
_The remaining weaknesses will be addressed in the continued comment due to character limitations._
---
Rebuttal 2:
Title: Rebuttal by Authors (Cont')
Comment: [W2 - Is the model structure inherently efficient, or is self-attention inherently inefficient?]
We would like to respond with "Both." Let us denote the input length as $L$ and the target length as $T$. The complexity of self-attention grows quadratically with the input length, $O(L^2)$, as each element attends to every other element in the sequence. Additionally, it requires an extra linear layer to interpret the attention representation with respect to the target length $T$. This leads to high computational resource demands as the sequence length increases.
In contrast, cross-attention only requires $O(LT)$ complexity. The linear complexity of cross-attention makes it inherently more scalable and efficient for long sequences. Table 4 explicitly illustrates this phenomenon.
In summary, self-attention-based transformers are inherently inefficient as $L$ increases compared to our model.
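For illustration, a minimal single-head NumPy sketch (not our actual implementation) makes the shape argument explicit: with $T$ learnable queries attending over $L$ past tokens, the score matrix is $(T, L)$; self-attention corresponds to queries drawn from the same length-$L$ sequence, giving $(L, L)$.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Single-head scaled dot-product attention.

    For cross-attention with T queries and L keys, `scores` has shape
    (T, L), i.e. O(L*T) work; for self-attention Q comes from the same
    length-L sequence as K, so `scores` is (L, L), i.e. O(L^2)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over keys
    return w @ V
```

With $T$ fixed by the forecasting task, the cost grows linearly rather than quadratically in the input length $L$.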
Unfortunately, the proposed techniques such as 'parameter sharing across horizons' and 'query-adaptive masking,' cannot be directly applied to existing models with self-attention. To ease understanding, we summarize the applicability of our techniques for different model types in the table below.
| | Parameter Sharing across Horizons | Query-adaptive Masking |
| --- | --- | --- |
| Transformer-based Models (e.g., PatchTST, Crossformer, …) | ❌ Not applicable due to concatenation in the final fully-connected layer. | 🔺 Applicable but affects all horizons rather than the related horizon. |
| Linear-based Models (e.g., TimeMixer, DLinear, …) | ❌ Not applicable as forecasting horizons are not separated. | ❌ Not applicable as these models do not use queries. |
| CATS (proposed) | ✔️ Applicable as each horizon is independently calculated. | ✔️ Applicable and effective as masking affects only the related horizon. |
While transformer-based models can adopt query-adaptive masking, it affects all horizons rather than the horizon related to the masked query. This is expected to decrease model performance, and indeed, we observe the expected results as follows:
| | PatchTST | PatchTST + Query-adaptive Masking | Ours |
| --- | --- | --- | --- |
| Average MSE | 0.353 | 0.409 | 0.338 |
| Average MAE | 0.382 | 0.417 | 0.376 |
In other words, these techniques cannot be directly adopted by existing methods; however, as shown in Table 5, we believe that adopting cross-attention in existing methods can alleviate computational costs and improve performance, which remains a direction for future work. Thank you again for your kind feedback.
---
Rebuttal Comment 2.1:
Comment: Some of my questions have been addressed, and the paper is quite intriguing. I will increase the score accordingly.
---
Rebuttal 3:
Comment: We are very pleased to hear that our response has addressed your concerns. We truly believe that the quality of the paper has significantly improved thanks to the valuable comments provided by the reviewer. Once again, we would like to express our sincere appreciation for your decision to increase the score, and thank you for your time and expertise.
Title: Thank you for your feedback | Summary: The paper titled "Are Self-Attentions Effective for Time Series Forecasting?" introduces a novel time series forecasting architecture named Cross-Attention-only Time Series transformer (CATS). The central hypothesis of the paper is that self-attention mechanisms, a key component of the Transformer models, may not be as effective for time series forecasting as simpler linear models. The CATS model enhances parameter sharing and improves long-term forecasting performance by using future horizon-dependent parameters as queries and past time series data as key and value pairs. The authors conducted extensive experiments across various datasets and demonstrated that CATS achieves superior performance with the lowest mean squared error and uses fewer parameters compared to existing models.
Strengths: The paper proposes a novel forecasting model, CATS, which innovatively removes self-attention mechanisms in favor of cross-attention, offering a fresh perspective on time series forecasting architectures. The model requires fewer parameters and less memory consumption compared to existing Transformer models, which enhances its efficiency, especially beneficial for large-scale applications. By eliminating self-attention, the proposed model simplifies the overall structure, which could lead to better interpretability and understanding of the forecasting process.
Weaknesses: The weaknesses of the paper "Are Self-Attentions Effective for Time Series Forecasting?" from the perspectives provided can be summarized as follows:
1. The use of an embedding-based cross-attention method is not a novel approach in the field of time series forecasting. The paper's reliance on this method may not offer a significant advancement over existing techniques.
2. The paper's expression is difficult to read, which hinders the understanding of its contributions. Sections like 3.2, which discusses 'cross-attention via future as query,' are particularly hard to comprehend, suggesting that the ideas could be presented more clearly.
3. The concepts of 'parameter sharing across horizons' and 'query-adaptive masking' appear to be quite standard design choices in the context of neural network architectures. The paper might be overstating their significance as core novel structures when, in reality, these could be considered trivial or expected features in modern models.
4. If the aforementioned design elements are indeed the core modules of the proposed CATS architecture, their presentation as innovative contributions might be unconvincing. The paper needs to provide stronger justification for considering these elements as the foundation of its novelty.
5. The paper's central claim revolves around the effectiveness of self-attentions for time series forecasting, but the proposed solution does not introduce a groundbreaking concept. Instead, it modifies existing structures, which might not be enough to establish a new baseline in the field.
Technical Quality: 2
Clarity: 1
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's assessment of our paper. We especially thank the reviewer for pointing out areas such as the readability of Section 3.2, and we will ensure that our paper is revised thoroughly. However, we would first like to address two critical points.
**1) Novelty of the Proposed Method with Cross-Attention**
While it has been noted that the cross-attention method is not novel, we argue that **our model presents a novel perspective that challenges conventional approaches** in the context of time series forecasting. Indeed, **we appreciate that Reviewer sgNU mentioned, "The paper presents a novel architecture, which challenges the conventional use of self-attention** mechanisms in Transformers for time series forecasting." Moreover, to the best of our knowledge, the investigation of cross-attention layers in time series forecasting has not been covered in the existing literature. While the related work [1] focuses on a decoder-only model in the time-series foundation-model setting, it primarily addresses zero-shot performance on a variety of public datasets. **If there are any papers or references that we might have overlooked, we would greatly appreciate your recommendations.**
[1] Das, Abhimanyu, et al. "A decoder-only foundation model for time-series forecasting." *arXiv preprint arXiv:2310.10688* (2023).
**2) Novelty of Proposed Techniques**
Regarding the novelty of **'parameter sharing across horizons' and 'query-adaptive masking,'** we emphasize that these techniques **cannot be directly applied to existing methods**, making them more than 'quite standard design choices.' To ease understanding, we summarize the applicability of our techniques for different model types in the table below.
| | Parameter Sharing across Horizons | Query-adaptive Masking |
| --- | --- | --- |
| Transformer-based Models (e.g., PatchTST, Crossformer, …) | ❌ Not applicable due to concatenation in the final fully-connected layer. | 🔺 Applicable but affects all horizons rather than the related horizon. |
| Linear-based Models (e.g., TimeMixer, DLinear, …) | ❌ Not applicable as forecasting horizons are not separated. | ❌ Not applicable as these models do not use queries. |
| CATS (proposed) | ✔️ Applicable as each horizon is independently calculated. | ✔️ Applicable and effective as masking affects the related horizon only. |
While transformer-based models can adopt query-adaptive masking, it affects all horizons rather than only the horizon related to the masked query, which is expected to decrease model performance. As shown in the table below, we observe the expected results on ETTm1 across horizons 96, 192, 336, and 720. The detailed comparison will be included in the revised version due to the space limit.
| | PatchTST | PatchTST + Query-adaptive Masking | Ours |
| --- | --- | --- | --- |
| **Average MSE** | 0.353 | 0.409 | 0.338 |
| **Average MAE** | 0.382 | 0.417 | 0.376 |
Thus, we demonstrate that the proposed techniques represent a new type of approach specifically designed for the proposed method with cross-attentions.
---
**Below, we provide a detailed response addressing each of the weaknesses [W#] raised:**
---
[W1] As mentioned above, using cross-attention is our novelty and it has not been covered in existing work. Specifically, our model is indeed the first work to utilize cross-attention exclusively without self-attention. While cross-attention has been employed in various contexts, the novelty of our work also lies in addressing the quadratic complexity issue inherent in self-attention by eliminating it. This design choice not only simplifies the model but also significantly reduces computational complexity, making it more scalable and efficient for long-sequence time-series forecasting.
---
[W2] We apologize for the difficulties encountered in understanding parts of our paper, particularly the section on "Cross-Attention via Future as Query." We recognize that the title and explanation may not have been as intuitive as intended. Based on your advice, we revised this section to provide a clearer and more straightforward explanation of this concept. For instance, we changed the title from "Cross-Attention via Future as Query" to "Cross-Attention with Learnable Horizon Query." This change better reflects the process and provides more intuitive descriptions and examples to enhance comprehension.
---
[W3] As mentioned above, parameter sharing and query-adaptive masking are infeasible within the existing models. While parameter sharing is a common concept in neural networks, our innovation lies in maximizing this parameter sharing through cross-attention. This level of efficiency in handling parameter growth has not been achieved in any previous models.
Regarding query-adaptive masking: unlike traditional stochastic-depth networks, which focus on training speed and ensembling, it is an optimization technique for time series forecasting that aims to precisely predict each forecasting horizon based on the corresponding query. Query-adaptive masking also demonstrates that masking the attention information based on each forecasting horizon can boost forecasting performance by focusing on specific learnable queries for the corresponding horizons.
---
[W4-W5] Please refer to our previous responses to the critical points and our responses to [W1, W3]. Building on the addressed points, we would like to emphasize that the maximum adaptation of cross-attention and the proposed techniques—parameter sharing and query-adaptive masking—represent a novel approach leveraging the benefits of cross-attention. This methodology allowed us to overcome the inherent limitations of self-attention and develop a model that is both scalable and efficient, offering significant improvements over existing techniques. Rather than simply modifying existing methods, the exclusive use of cross-attention marks a meaningful advancement in the application of attention mechanisms to time-series forecasting.
---
Rebuttal 2:
Title: Discussion
Comment: Thank you for being a reviewer for NeurIPS2024, your service is invaluable to the community!
The authors have submitted their feedback.
Could you check the rebuttal and other reviewers' comments and start a discussion with the authors and other reviewers?
Regards,
Your AC
---
Rebuttal Comment 2.1:
Comment: The backbone structure of this paper is very similar to PatchTST, with the core difference being the introduction of the cross-attention module and the use of the shared projection layer mentioned in line 171 (which was claimed to be better designed separately in the PatchTST paper). Therefore, while I agree with the authors on the importance of parameter sharing, the supplementary experiments here bring greater confusion. Specifically, it is quite unusual that PatchTST performs significantly worse than CATS after introducing Query-adaptive Masking. This anomaly requires additional experimental details, and the paper needs to submit an updated version with revised experiments and re-emphasized importance.
Furthermore, I think the introduction of the cross-attention mechanism and parameter sharing mechanism makes sense in that this method limits the model size, thereby achieving model gains. Many papers have discussed the generalization issues caused by the introduction of attention or excessive parameters in PatchTST, to the extent that linear models can perform better than PatchTST on some tasks. However, the authors mainly emphasize cross-attention itself in the paper. Without supplementary experiments or a reasonable explanation in this regard, the paper risks being significantly misleading.
In conclusion, due to substantial concerns about the paper, I will maintain my score for the current manuscript.
---
Rebuttal 3:
Title: Rebuttal by Authors
Comment: Thank the reviewer for the response.
**First of all, we strongly argue that the backbone structure itself is our novelty, which is NOT similar to PatchTST.** We would like to highlight that all other reviewers have agreed with our argument as follows: *"The introduction of **CATS represents an innovative approach in the field of time series forecasting**, particularly in re-evaluating the role of self-attention mechanisms. (by **Reviewer sgNU**)"*.
Especially, **'the core difference being the introduction of the cross-attention module' itself is also our main contribution,** which has not been covered in time series domain. *"Interesting work. This study thoroughly examines **the effectiveness of cross-attention layers** in transformers and proposes several useful techniques to build a high-performance model. (by **Reviewer ztsM**)"*
Therefore, **we respectfully disagree with the assessment that "the backbone structure of this paper is very similar to PatchTST."** While we acknowledge that PatchTST is a foundation model in time series domain, CATS is completely different from PatchTST due to our cross-attention-based backbone and the corresponding data-flow.
---
*Further details addressing this and other comments are provided in the following sections.*
---
Rebuttal 4:
Title: Rebuttal by Authors (Cont')
Comment: Based on your comments, we have identified and categorized the weaknesses [W#] you highlighted into three key areas: Structural Similarity with PatchTST, Query-Adaptive Masking in PatchTST and CATS, and Lack of Emphasis on Broader Comparisons.
---
**[W1 - Structural Similarity with PatchTST]**
**Comment:** *"The backbone structure of this paper is very similar to PatchTST, with the core difference being the introduction of the cross-attention module and the use of the shared projection layer."*
While both CATS and PatchTST utilize patching, it’s important to clarify that beyond this similarity, the two models diverge significantly in their overall architecture and approach to time series forecasting. PatchTST emphasizes univariate forecasting and patching while largely retaining the standard Transformer structure by keeping the vanilla Transformer encoder intact and simply substituting the decoder with a linear layer. In contrast, CATS takes a more radical departure from traditional Transformer models by **not only eliminating the self-attention mechanism but also redesigning the decoder**. Unlike PatchTST, which maintains the vanilla Transformer encoder, CATS removes the masked self-attention from the decoder as well, effectively addressing the challenges that arise from this change.
Eliminating self-attention introduces significant challenges, particularly in time series forecasting. These challenges can be summarized as follows:
1. **Inability to Generate Necessary Queries**
- **Problem:** Without self-attention, the fixed input tokens used in traditional decoders are insufficient to generate the necessary queries for cross-attention.
- **Solution:** We introduced learnable queries, establishing horizon-dependent parameters as learnable queries. This ensures that each forecasting query independently generates the necessary input for cross-attention.
2. **Temporal Information Loss Due to Concatenated Linear Layers**
- **Problem:** Concatenated linear layers at the end of the decoder could lead to temporal information loss.
- **Solution:** We designed the decoder to process each horizon's query independently using parameter sharing across layers. This approach minimizes temporal information loss by ensuring that each query's processing remains independent, thus preserving temporal information.
Given that PatchTST closely aligns with the vanilla Transformer structure—using an identical encoder and only replacing the decoder with a linear layer—such reasoning could be extended to suggest that any model based on the vanilla Transformer architecture is "very similar" to PatchTST. However, **the introduction of novel mechanisms in CATS results in a distinct decoder architecture from traditional Transformers, designed to be more efficient and effective for time series forecasting, setting CATS apart from both the vanilla Transformer and PatchTST.**
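To make the two design points above concrete, the following is a minimal NumPy sketch (our illustration, not the authors' code; all dimensions, names, and values are hypothetical) of a decoder step in which learnable per-horizon queries attend to encoded input patches via cross-attention, and a single output projection is shared across all horizons instead of one concatenated linear head:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_patches, n_horizons = 8, 16, 4  # toy sizes, chosen arbitrarily

# Encoded input patches play the role of keys/values.
patches = rng.standard_normal((n_patches, d_model))

# Learnable horizon queries: one parameter vector per forecasting horizon,
# replacing the decoder inputs that self-attention would normally produce.
horizon_queries = rng.standard_normal((n_horizons, d_model))

# A single projection shared across horizons (parameter sharing): each
# horizon's attended representation is mapped to a forecast by the SAME weights.
W_out = rng.standard_normal((d_model, 1))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: queries come from the learnable horizon parameters,
# keys/values come from the input patches.
scores = horizon_queries @ patches.T / np.sqrt(d_model)  # (H, P)
attn = softmax(scores, axis=-1)                          # rows sum to 1
attended = attn @ patches                                # (H, d_model)

forecasts = (attended @ W_out).ravel()                   # one value per horizon
```

Because each horizon's forecast is computed row-by-row with shared `W_out`, adding horizons grows only the query parameters, not the output head.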
---
**[W2 - Query-Adaptive Masking in PatchTST and CATS]**
**Comment:** *"It is quite unusual that PatchTST performs significantly worse than CATS after introducing Query-Adaptive Masking."*
Regarding Query-Adaptive Masking, as mentioned in our rebuttal, applying this technique to PatchTST leads to significant performance degradation because masking a single query impacts the entire output. However, as detailed in Appendix B.3, CATS is designed to handle "**multiple inputs with different forecasting horizons sharing a single model** (in line 493)". This structure allows for the masking of individual queries, which only affects the specific horizon they pertain to. This probabilistic masking enables the model to focus more effectively on learning from the unmasked queries, thereby avoiding the performance pitfalls observed in PatchTST.
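A minimal sketch of the masking behavior described above (our illustration, not the authors' implementation; the masking probability and names are hypothetical): each horizon query is dropped independently during training, so in an architecture where horizons are computed independently, only the dropped horizon's forecast is affected.

```python
import numpy as np

rng = np.random.default_rng(1)
n_horizons, d_model = 4, 8
horizon_queries = rng.standard_normal((n_horizons, d_model))

p_mask = 0.25  # per-query masking probability (hypothetical value)

# Each query is kept or masked independently; masking query i zeroes only
# row i, i.e., only the forecast for horizon i is affected.
keep = rng.random(n_horizons) >= p_mask        # boolean decision per horizon
masked_queries = horizon_queries * keep[:, None]
```

In a model with one shared output head whose horizons are concatenated (as in the PatchTST-style decoder), zeroing one query would instead perturb the entire output, which matches the degradation reported in the rebuttal table.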
---
Rebuttal 5:
Title: Rebuttal by Authors (Cont')
Comment: **[W3 - Lack of Emphasis on Broader Comparisons]**
**Comment:** *"The authors mainly emphasize cross-attention itself in the paper. Without supplementary experiments or a reasonable explanation in this regard, the paper risks being significantly misleading."*
We strongly disagree with the assertion that this paper "mainly emphasizes cross-attention itself in the paper without supplementary experiments." We fully acknowledge the recent research highlighting the parameter and memory efficiency challenges of PatchTST and the resulting advantages of linear models. Therefore, **our study includes a comprehensive comparison not only with PatchTST but also with TimeMixer, a state-of-the-art linear model.**
In Section 4.2, **we explicitly address the parameter and memory efficiency issues by comparing our model with both linear and Transformer-based state-of-the-art models.** The results of these comparisons are detailed in Table 4 of our paper, which clearly illustrates the advantages of CATS over both PatchTST and linear models.
* Specifically, we observed that PatchTST suffers from significant memory complexity issues. As the input length increases from 336 to 2880, PatchTST's memory usage escalates by 16.9 times. Linear models were developed to address such issues, and indeed, they reduce the memory usage increase to 3.6 times over the same input length extension. However, our CATS model performs even better, with only a 1.4 times increase in memory usage.
* On the parameter side, linear models exhibit a drastic increase of 46.8 times in parameters as the input length increases, while CATS shows a minimal increase of only 1.1 times. Even the theoretically simplest model, DLinear, shows an 8.5 times increase in parameters, far exceeding the minimal increase observed in CATS.
These results demonstrate that **our study does not simply focus on cross-attention or comparison with PatchTST; rather, it comprehensively evaluates our model against linear models as well**, showcasing the superior efficiency of CATS in both memory and parameter usage.
Additional results for these experiments are provided in **Appendix B.2: Additional Results for Section 4.2**, which thoroughly documents our findings.
In summary, **we believe that we provide a broad comparison across the landscape of time series forecasting models**, thereby proving the superiority of CATS. **Please let us know in more detail which experiments the reviewer has in mind** regarding _"This anomaly requires additional experimental details."_ | Summary: The paper presents a novel architecture, the Cross-Attention-only Time Series transformer (CATS), which challenges the conventional use of self-attention mechanisms in Transformers for time series forecasting. The authors propose a model that leverages cross-attention instead, aiming to enhance long-term forecasting accuracy while reducing parameter count and memory usage.
Strengths: The introduction of CATS represents an innovative approach in the field of time series forecasting, particularly in re-evaluating the role of self-attention mechanisms. The paper highlights significant improvements in computational efficiency through parameter sharing and reduced memory usage.
Weaknesses: 1. in the paper and the supplementary code, the training epoch is set to 30 and the early stop patience is set to 10, which is inconsistent with the baseline settings used for comparison, raising concerns about unfair comparisons.
2. The main experiments in the article are conducted only on long-term time series forecasting, without including short-term time series forecasting on datasets such as M4.
Technical Quality: 3
Clarity: 2
Questions for Authors: as above
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed feedback on our paper. Your insights have been invaluable in helping us refine our work. We acknowledge the concerns raised regarding our experimental setup and the scope of our evaluations, and we are grateful for the opportunity to address these points.
Below, we provide a detailed response addressing each of the weaknesses [W#] raised:
---
[W1] We apologize for any confusion caused by the lack of explicit details regarding the training settings of the baseline models in our manuscript. We understand the importance of ensuring fair comparisons and acknowledge that our documentation could have been clearer. **To clarify, we thoroughly investigated the used training epochs and early stop patience settings from the scripts provided in the official repositories of each model.** In our paper, we utilized the reported performance of the baseline models. Therefore, we investigated the settings from the official GitHub repositories of each model. The table below summarizes these settings for our model and the baselines:
| | | Ours | TimeMixer$^a$[1] | PatchTST[2] | Timesnet[3] | Crossformer[4] | MICN[5] | FiLM[6] | Dlinear[7] | Autoformer[8] | Informer[9] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ETTh | Epochs | 10 | 10 | 100 | 10 | 20 | 15 | 15$^b$ | 10 | 10 | 10 |
| | Patience | 10 | 10 | 100 | 3 | 3 | 3 | 15$^b$ | 3 | 3 | 3 |
| ETTm | Epochs | 30 | 10 | 100 | 10$^c$ | 20 | 15 | 30 | 10 | 10 | 10 |
| | Patience | 10 | 10 | 20 | 3 | 3 | 3 | 30 | 3 | 3 | 3 |
| ECL | Epochs | 30 | 20 | 100 | 10 | 20 | 15 | 20 | 10 | 10 | 10 |
| | Patience | 10 | 10 | 10 | 3 | 3 | 3 | 20 | 3 | 3 | 3 |
| Traffic | Epochs | 100 | 10 | 100 | 10 | 20 | 15 | 15 | 10 | 3 | 3 |
| | Patience | 20 | 10 | 10 | 3 | 3 | 3 | 15 | 3 | 3 | 3 |
| Weather | Epochs | 30 | 20 | 100 | 10$^d$ | 20 | 15 | 15 | 10 | 10$^e$ | 10$^f$ |
| | Patience | 10 | 10 | 20 | 3 | 3 | 3 | 15 | 3 | 3 | 3 |
$^a$ Under Hyperparameter Searching Setting, Training Epochs is in [10, 100] (Not available in Paper and Github)
$^b$ For ETTh2 dataset, 3 if $T=96,192$, 1 if $T=336$, and 10 if $T=720$.
$^c$ For ETTm1 dataset, 3 if $T=336$, and for ETTm2 dataset, 1 if $T=192, 720$.
$^d$ 1 if $T=192, 720$.
$^e$ 2 if $T=96$.
$^f$ 3 if $T=96$.
**This table demonstrates that different settings, such as training epochs and early stopping patience, are generally used for each model across various datasets.** While we acknowledge that these variations in experimental setups might lead to unfair comparisons, unifying these settings is challenging, since there are many possible configurations and doing so would require a huge amount of computation. However, **we also believe standardized settings should become a more common practice in time series forecasting. As a step towards this, we reported fairer empirical results by unifying input sequence lengths across models (e.g., Table 3 in the main text and Table 9 in the Appendix) rather than using different input sequence lengths for each model [10, 11].** We will include detailed information in the revised manuscript to ensure complete transparency regarding our experimental setup. Thank you again for your feedback.
_References_
[1] https://github.com/kwuking/TimeMixer
[2] https://github.com/yuqinie98/PatchTST
[3] https://github.com/thuml/Time-Series-Library
[4] https://github.com/Thinklab-SJTU/Crossformer
[5] https://github.com/wanghq21/MICN
[6] https://github.com/tianzhou2011/FiLM
[7] https://github.com/cure-lab/LTSF-Linear
[8] https://github.com/thuml/Autoformer
[9] https://github.com/zhouhaoyi/Informer2020
[10] Nie, Yuqi, et al. "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers." The Eleventh International Conference on Learning Representations.
[11] Zhang, Yunhao, and Junchi Yan. "Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting." The Eleventh International Conference on Learning Representations.
---
[W2] Thank you for your suggestion. To address this concern, we conducted additional experiments on the M4 dataset, which is widely used as a benchmark for short-term time series forecasting. The results indicate that CATS outperforms recent state-of-the-art models. The table below summarizes these findings:
| | CATS (Ours) | TimeMixer | Timesnet | PatchTST | … |
| --- | --- | --- | --- | --- | --- |
| SMAPE | **11.701** | 11.723 | 11.829 | 13.152 | … |
| MASE | **1.557** | 1.559 | 1.585 | 1.945 | … |
| OWA | **0.838** | 0.840 | 0.851 | 0.998 | … |
**These results demonstrate that CATS not only shows good performance in long-term time series forecasting tasks but also achieves state-of-the-art performance in short-term time series forecasting tasks.** Additionally, please refer to Table 1 for the full experimental results and Table 2 in the supplementary PDF for the detailed explanation and experimental settings for the M4 dataset, ensuring clarity and reproducibility. We will include these additional results and detailed settings in the revised manuscript.
Again, thank you for your insightful feedback.
---
Rebuttal 2:
Title: Discussion
Comment: Thank you for being a reviewer for NeurIPS2024, your service is invaluable to the community!
The authors have submitted their feedback.
Could you check the rebuttal and other reviewers' comments and start a discussion with the authors and other reviewers?
Regards,
Your AC
---
Rebuttal Comment 2.1:
Comment: Thank you for your explanation. I believe the contribution of this article is roughly enough, but the writing does make it somewhat difficult to quickly grasp its contribution. I will raise my score to 6.
---
Rebuttal 3:
Comment: Thank you for your detailed feedback. We greatly appreciate your insights, which have not only helped us to make our results more robust, but also allowed us to significantly improve the clarity and quality of our writing. We are pleased to hear that you believe the contribution of this article is sufficient, and we sincerely appreciate your decision to raise the score. Your time and expertise have been invaluable in strengthening our work. | null | null | Rebuttal 1:
Rebuttal: Dear all,
We would like to thank the editor and the reviewers for their careful comments and suggestions. We summarize the reviews according to our own perspective.
**Strengths.**
We are glad that Reviewers sgNU, Q7Js, and ztsM found that our results "introduce an innovative approach in time series forecasting" and "highlight significant improvements in computational efficiency and parameter reduction." Additionally, Reviewer ztsM noted that "this study thoroughly examines the effectiveness of cross-attention layers in transformers and proposes several useful techniques to build a high-performance model.”
**Weaknesses.**
We have welcomed all reviews and did our best to carefully address every concern.
Specifically, the reviewers raised the following common concerns.
1. Novelty of our methodologies. (Rev. Q7Js and ztsM)
- Our novelty lies in eliminating self-attention (which causes temporal information loss in forecasting) and relying solely on cross-attention in a modern transformer architecture. To the best of our knowledge, this is the first work that focuses on the potential usage of cross-attention.
2. Additional notes on experiments. (Rev. sgNU)
- We additionally reported the results of short-term time series forecasting tasks (M4), which achieve state-of-the-art performance compared to existing forecasting models. We also thoroughly investigated and compared various training setups of baseline models.
For the details, please refer to the individual response.
**↓Accompanying PDF**
Pdf: /pdf/119e3e5556147a39d43770c08aee78d20ebf69b0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improved Algorithms for Contextual Dynamic Pricing | Accept (poster) | Summary: This paper expands the line of work on contextual dynamic pricing where the buyer's valuation may be noisy. The authors give a simple way to obtain an unbiased estimate of the true valuation function, and propose an exploration-exploitation-type algorithm that simultaneously learns the true valuation function and the noise distribution. Based on this observation, algorithms are proposed for when the valuation function is a linear model or when it is Hölder continuous.
Strengths: The paper is well-written and I found it easy to follow. The necessary intuitions are adequately given. The main idea behind the algorithm is conceptually simple.
The results in this work expand the line of work on contextual dynamic pricing; though I did not check the proofs, I believe the results are new and correct.
Weaknesses: I think for me, the most limiting assumption is that the noise is i.i.d. across all contexts, which the authors also mentioned.
It may not be possible to add experiments now, but I think it will be helpful for this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: In assumption 2, it is assumed the noise is bounded. Do all prior work assume that the noise is bounded? Is there any work that does not?
This is minor, but I am not sure what the green/red colors in Table 1 signify. Also, you made the assumption that the noise is bounded, but this did not seem reflected in the rows where you presented your results. The table also has margin issues.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comment.
### Weaknesses
**i.i.d. noise:** This assumption is standard in the literature on dynamic pricing. For instance, all the papers mentioned in Table 1 share this noise structure. Nevertheless, as we mentioned in the conclusion, it could be of great interest to explore other assumptions, including a framework in which the noise depends on the observed context. The main reason this assumption is needed is that the distribution itself determines the optimal price (and not only, say, its median or its mean), and therefore it has to be learned. If the distribution changes between contexts, all context-dependent distributions need to be learned, making the problem significantly more difficult in terms of regret.
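A toy numerical illustration of this point (ours, not taken from the paper; all values are hypothetical): with valuation $v = g(x) + \text{noise}$ and noise c.d.f. $F$, the expected revenue at posted price $p$ is $p\,(1 - F(p - g(x)))$. Two noise distributions with the same mean and median can yield different optimal prices, so knowing summary statistics is not enough; the full distribution must be learned.

```python
import numpy as np

g_x = 0.5                          # hypothetical expected valuation g(x)
prices = np.linspace(0.0, 1.5, 301)

def revenue(prices, cdf):
    # Expected revenue: price * P(valuation >= price), valuation = g_x + noise
    return prices * (1.0 - cdf(prices - g_x))

# Noise 1: uniform on [-0.5, 0.5] (mean 0, median 0).
cdf_uniform = lambda z: np.clip(z + 0.5, 0.0, 1.0)

# Noise 2: a symmetric piecewise-quadratic c.d.f. on [-0.5, 0.5]
# (also mean 0, median 0).
def cdf_tri(z):
    z = np.clip(z, -0.5, 0.5)
    return np.where(z < 0, 2 * (z + 0.5) ** 2, 1 - 2 * (0.5 - z) ** 2)

p1 = prices[np.argmax(revenue(prices, cdf_uniform))]
p2 = prices[np.argmax(revenue(prices, cdf_tri))]
# p1 and p2 differ even though both noise laws have the same mean and median.
```

Under the uniform noise the maximizer of $p(1-p)$ is $p = 0.5$, while the second distribution shifts the optimum to roughly $1/\sqrt{6} \approx 0.41$.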
**Experiments:** We will add experiments to the final version of the paper (as we elaborate in the response to reviewer EQLG).
### Questions
**Bounded noise:** Indeed, having bounded noise is standard in the dynamic-pricing literature. Usually it is well-motivated by real-life applications: it is reasonable to assume that valuations for certain goods are bounded, which implies boundedness of the noise. Again referring to Table 1, all works make this assumption, with the exception of Cohen et al. 2019. However, their results are obtained either under the assumption that the noise is subgaussian with a very small variance factor (typically $1/T$, so that the noise is supported on a very small interval with large probability, and the valuations are "almost deterministic"), or that the distribution of the noise is known. Both these assumptions are stronger than ours. In other works (e.g., [14], [15], [21], [30]), as in ours, boundedness of the noise is necessary to ensure that the buyers' valuations live in a bounded interval.
**Table:** The colors in the table on the left column are meant to highlight the assumption on the noise distribution and/or the reward function that are either more restrictive, marked in red, or match, in green, the one required for our algorithm. In the right column, our rates are in green, while worse rates obtained under the same assumptions are in red. We will make sure to add a sentence in the caption of the table to make this clear. We will also address the margin issues, and we thank the reviewer for bringing them to our attention. | Summary: The paper studies the contextual dynamic pricing problem with binary demands and an unknown noise distribution. It presents a general framework for a pricing algorithm and proves its regret bounds for two specific types of valuation functions: linear and non-parametric. In the linear case, its results improve existing results and match the lower bound.
Strengths: The paper is well-written and introduces a novel framework for solving and analyzing the contextual dynamic pricing problem with unknown noise distribution. This new framework can be applied in various settings and leads to algorithms with improved regret upper bounds in the linear case.
Weaknesses: One potential concern is the paper's similarity with [1] in the exploration part and its contribution in this part. Line 186 mentions, “this method (using random pricing to construct unbiased estimates of $g(x_t)$) appears to have never been used in dynamic pricing” but as mentioned in line 195, if I understand correctly, [1] (where $g(x_t)$ is a linear function) also uses random pricing to construct such unbiased estimates. Although this paper’s main algorithm and analyses are different from [1]’s, a detailed discussion on the difference in using random prices as an exploration policy would be helpful to better understand its contribution.
Another minor weakness is the lack of experimental results. Numerical experiments could help validate the efficiency of the algorithms and demonstrate their potential advantages over existing methods.
[1]Fan, Jianqing, Yongyi Guo, and Mengxin Yu. "Policy optimization using semiparametric models for dynamic pricing." Journal of the American Statistical Association119.545 (2024): 552-564.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please check point (1) in weaknesses. Additionally, compared to [1], why do the proposed algorithms achieve better regret bounds with milder assumptions (e.g., Assumption 1 does not require i.i.d. contexts as in [1])? Is this improvement due to using bandit techniques (e.g., price/action eliminations) or other reasons? More explanation would be appreciated.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
### Weaknesses:
Regarding the statement made in line 186 and the differences with the work in [1] we kindly refer the reviewer to the second question in the response to reviewer Beqc, where we provide more explanation regarding what we meant by this statement and we propose to remove it, having acknowledged how it might appear misleading. Nonetheless, the two methods are not identical and, in particular, ours requires fewer assumptions as we will further explain below (for a more detailed comparison of the assumptions between our papers and [1], please refer to the second question in the response to reviewer EQLG).
Experiments: we will add experiments to the final version of the paper (as we elaborate in the response to reviewer EQLG).
### Question:
Although our algorithm and the one proposed in [1] look similar, they present some fundamental differences that cannot be reduced to mere technical details.
What they propose is, in fact, an explore-exploit kind of strategy where in the first phase they gather information, by posting prices uniformly at random to build an estimate of the parameter $\theta$ and the c.d.f. $F$, which are later on used to find the optimal price. Since their exploration phase is concentrated all at once in the initial part of their algorithm, they need to make sure to receive diverse context samples to estimate the parameter $\theta$ with good precision, hence the reason for their more limiting assumption on the i.i.d. nature of the contexts, as well as the lower bound on the eigenvalues of the covariance matrix.
Our strategy instead works adaptively, alternating between the value-approximation subroutine and the price-elimination one depending on the contexts received. On the one hand, this allows us to handle adversarial contexts, making our result more general; on the other, it minimizes the time spent in the exploration phase and, consequently, the regret accumulated.
As mentioned initially, the second phase of the algorithm in [1] consists of an exploitation of the information gathered before, which simply posts the optimal price with respect to the approximation of the expected reward function. To estimate this reward function, the authors estimate as an intermediate step the virtual value function. By contrast, our algorithm bypasses this intermediate step altogether, making it simpler.
To ensure convergence of the greedy price to the best price, the authors rely on assumption 2.1, which states the unimodality of the expected reward function. This assumption strongly limits the class of distributions for which this result is applicable. Without the unimodality assumption, their explore-exploit approach leads to $T^{3/4}$ regret, as discussed in Section 2.3 of our paper. In our price-elimination phase, we apply a multi-armed-bandit algorithm, which performs an extensive exploration across the possible price increments, mitigating the harmful effects that such exploration might have on the regret by allowing for the sharing of information among different contexts. This allows us to recover optimal regret rates while removing the unimodality assumption.
----
[1] Fan, Jianqing, Yongyi Guo, and Mengxin Yu. "Policy optimization using semiparametric models for dynamic pricing." JASA, 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It addresses my concerns and I have raised the score. | Summary: The paper addresses the problem of dynamic pricing using contextual information, aiming to maximize a seller's revenue by setting prices based on the context and buyer valuations. Buyers purchase products if the prices are lower than their valuations. The authors propose an algorithm called VALUATION APPROXIMATION - PRICE ELIMINATION (VAPE) and analyze its performance under two valuation models: the linear valuation model and the non-parametric valuation model.
Strengths: - The paper is well-written and easy to follow.
- The work is well-motivated from the practical perspective.
- The theoretical results seem to be correct, even if I did not check the proofs carefully.
- The related works are properly discussed.
Weaknesses: - A paper with such practical motivation would benefit greatly from being accompanied by a thorough experimental campaign.
- See questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the definition of regret (between lines 96 and 97), it seems that the optimal price (i.e., the one in the max) is unique for every product. I expect that every context presents an optimal price. Do the authors agree?
- Lines 277-278: can the authors provide some more comments on the results and compare the assumptions of the other works to the one in this paper?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comment.
### Weaknesses
**Experiments:** We agree with the reviewer that it would be beneficial to further demonstrate the advantages of our approach over previously suggested algorithms through empirical evaluation. We started implementing our algorithm and the various baselines, but unfortunately, due to time constraints, we did not complete the implementation before the rebuttal deadline. In particular, some baselines are extremely computationally demanding, and we would require additional time to perform statistically valid evaluations. We will update the final version of the paper with the results of this empirical evaluation.
### Questions
1/ This is indeed the case: the optimal price depends on the context and varies with it. This is reflected in our expression of the regret as the optimal price is defined as the maximizer of the function $\pi(x_t, p)$, which depends on the context $x_t$.
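As a purely illustrative sketch of this point (our own toy model, not from the paper: we assume a linear valuation $v = x^\top\theta + \text{noise}$ with Uniform$[0,1]$ noise, so $F(z) = \mathrm{clip}(z, 0, 1)$), a grid search over $\pi(x_t, p) = p\,(1 - F(p - x_t^\top\theta))$ shows the maximizer shifting with the context:

```python
import numpy as np

def expected_reward(p, m):
    # pi(x, p) = p * P(sale) = p * (1 - F(p - m)), where m = x^T theta
    # and F is the CDF of Uniform[0, 1] noise.
    return p * (1.0 - np.clip(p - m, 0.0, 1.0))

prices = np.linspace(0.0, 2.0, 2001)

# Two contexts with different expected valuations m = x^T theta.
best_prices = {}
for m in (0.2, 0.8):
    best_prices[m] = prices[np.argmax(expected_reward(prices, m))]
    print(f"m = {m}: optimal price ~ {best_prices[m]:.2f}")
# For this noise model the analytic optimum is p* = (1 + m) / 2,
# so the optimal price indeed varies with the context.
```

Here the grid maximizer moves from roughly 0.6 to 0.9 as the expected valuation changes, matching the analytic optimum for this toy noise model.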
2/ Concerning what is written in lines 277-278: both papers directly mentioned in these lines share with us the assumptions of a bounded context domain, a bounded parameter $\theta$, and bounded noise, as those are the standard assumptions in the literature.
- [1] make assumptions milder than ours, as they do not assume the c.d.f. $F$ to be Lipschitz. Instead, they rely on the half-Lipschitz nature of the reward function, which gives a worse rate of $T^{3/4}$ overall. To understand why we need to assume Lipschitzness of $F$, we kindly refer to answer 2 of the weaknesses in the response to reviewer Beqc.
- [2] relies on estimating the parameter $\theta$ and $F$ in an explore-exploit fashion. They either achieve better regret rates under stronger assumptions, or worse rates under more similar assumptions. More precisely, their assumptions are stronger for the following reasons:
a/ To prove that the error from approximating the parameter $\theta$ is small, they need to assume both that the contexts are i.i.d. and that the eigenvalues of their covariance matrix are lower-bounded. Our estimator for $\theta$ is built using a similar regression technique, but it does not require these assumptions (the contexts in our case can even be chosen adversarially), hence making the result more general.
b/ Furthermore, they assume the density of the noise to be lower-bounded by a constant $c$. The inverse of this quantity appears in the rate of the bound for the approximation of the c.d.f. $F$, making the bound vacuous when $c$ is too small and limiting the applicability of the result. Our estimate of the demand function relies on different tools than the kernel estimators used in [2], and hence does not require such an assumption.
c/ Another limiting assumption [2] introduce for their main algorithm is the strict monotonicity of the virtual value function (their Assumption 2.1). This is necessary for them to ensure the uniqueness of the optimal price, which is needed to obtain small regret in their explore-exploit algorithm. Note that this is equivalent to requiring the expected reward function $\pi$ to be unimodal, which, again, is a more restrictive setting than the one we consider. This last assumption is relaxed in their Appendix F, though at the cost of sub-optimal regret rates of order $T^{3/4}$.
In general, other rates can be obtained with stronger assumptions on the regularity of the distribution of the noise or the reward function, as summarized in Table 1. The algorithm we present is the one that achieves the best regret rates, especially optimal ones in the case of linear valuations, while at the same time requiring minimal assumptions on the noise distribution (with the exception of the one in [1]) and allowing for adversarial contexts.
----
[1] Xu, Jianyu, and Yu-Xiang Wang. "Towards agnostic feature-based dynamic pricing: Linear policies vs linear valuation with unknown noise." AISTATS, 2022.
[2] Fan, Jianqing, Yongyi Guo, and Mengxin Yu. "Policy optimization using semiparametric models for dynamic pricing." JASA, 2024
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Thank you for the clarifications. I have no further questions. I increased my score to 6. | Summary: This paper studies the problem of online contextual pricing under a linear noisy valuation model, with the noise distribution **unknown** to the seller. This work proposes an "exploration-then-elimination" algorithm that achieves $O(T^{2/3})$ **optimal** regret under the assumptions of (1) stochastic feature sequence, (2) Lipschitz noise CDF, and (3) bounded feature norm and noise range. Besides, this work also studies the non-parametric $\beta$-Holder valuation model, and propose an algorithm that achieves $O(T^{d+2\beta/d+3\beta})$ regret.
Strengths: 1, The problem of contextual pricing problem has been extensively studied over the past few years. However, the minimax regret of linear noisy valuation model with unknown noise distribution is a long-existing open problem. Existing works leave a gap between $O(T^{3/4})$ and $\Omega(T^{2/3})$ under the assumption of Lipschitz noise CDF. This work closes the gap by showing that $O(T^{2/3})$ is the correct minimax regret, under certain assumptions.
2, This work also considers the non-parametric $\beta$-Hölder valuation model, and proposes an algorithm that achieves $O(T^{(d+2\beta)/(d+3\beta)})$ regret. This problem is relatively new to the community but it is important as well. In most cases, we cannot rely on the correctness of a linear model (although it is causally sound), and therefore this non-parametric model is of significance.
Weaknesses: 1, The algorithm, VAPE, proposed in this work is quite similar to the "Explore-then-UCB" algorithm in Luo et al. (2022, Neurips). It seems the authors are unaware of this work. The only difference is that VAPE adopts a policy-elimination design in the second stage after uniform pure exploration, while Luo et al. (2022) uses a UCB-style strategy. From the perspective of regret analysis, I do not think there is much difference between those two methods. I then went back to the previous work and looked into their regret analysis, and noticed that they were using a linear bandit model instead of a multi-armed bandit (which is also applicable), which brings an extra $\sqrt{\frac{1}{\Delta}}$ multiplier on the regret of the second stage, where $\Delta$ is the discretization grid size. In other words, by slightly modifying the "Explore-then-UCB" algorithm, it is able to achieve $O(T^{2/3})$ regret instead of $O(T^{3/4})$ as they claimed. Since both of your works share the same idea of a uniform-pure-exploration stage for an unbiased estimate of $x^\top\theta^*$, which actually originates from Fan et al. (2024, JASA), I am skeptical of the novelty of this work, especially from the perspective of methodology.
The authors are encouraged to discuss how their methods and analysis are different from Luo et al. (2022) in the rebuttal. This is the major concern that I (and other reviewers as I expect) would have.
2, The authors claim that it is required to assume Lipschitzness on the noise CDF, which is commonly applied except for Xu and Wang (2022, Aistats). It is worth noting that Xu and Wang (2022) have proposed an insight on the "Half-Lipschitz" nature of a pricing problem, indicating that the regret rate would not be different with or without Lipschitz assumption.
If this is not applicable, the authors are encouraged to explain in the rebuttal how the "Half-Lipschitz" nature does not work.
References:
Xu, Jianyu, and Yu-Xiang Wang. "Towards agnostic feature-based dynamic pricing: Linear policies vs linear valuation with unknown noise." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Luo, Yiyun, Will Wei Sun, and Yufeng Liu. "Contextual dynamic pricing with unknown noise: Explore-then-ucb strategy and improved regrets." Advances in Neural Information Processing Systems 35 (2022): 37445-37457.
Fan, Jianqing, Yongyi Guo, and Mengxin Yu. "Policy optimization using semiparametric models for dynamic pricing." Journal of the American Statistical Association 119.545 (2024): 552-564.
Technical Quality: 4
Clarity: 4
Questions for Authors: The authors are encouraged to respond to the following questions at a secondary priority (i.e. after kindly answering my questions in "Weaknesses").
1, Regarding Weakness 1, is there any necessity to apply a policy elimination strategy in stage 2 of VAPE? Can it be replaced by a (version of) UCB without impairing the regret rate?
2, On Page 5 Line 196, Please kindly explain the difference between your work and Fan et al. (2024) for valuation estimation methods in details.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have discussed the limitations in their Conclusions. Overall, this is a theoretical work and there is not much societal impact involved. However, the authors are encouraged to discuss more of the societal aspects of the work, e.g. personalized pricing and unfairness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful analysis.
### Weaknesses
1/ We thank the reviewer for providing us with this relevant reference. As noted by the reviewer, both our work and that of [1] use the idea of sharing knowledge across contexts to improve the estimation of the c.d.f. $F$. However, our algorithm diverges significantly from theirs in three main aspects.
First, our algorithm achieves the optimal regret rate, in contrast to [1]. The authors of [1] recast the problem as a perturbed linear bandit problem, which results in a **sub-optimal regret rate**; this is not merely a proof artifact: algorithms for perturbed linear bandits inherently exhibit regret that is linear in the dimension, which here is the size of the discretization grid.
Secondly, we believe that our algorithm is conceptually simpler. Instead of recasting the problem as a perturbed linear bandit in large dimensions, we phrase it as a simple multi-armed bandit problem. Our expressions of the confidence intervals for the rewards are arguably more intuitive than the UCB in [1]. This simple formulation clarifies the sources of difficulty instead of obscuring them by reducing them to other intermediate problems, thereby helping the reader's understanding. We show that the problem is no more difficult than its non-contextual counterpart due to information sharing across contexts (highlighted in the proof of Claim 1), an idea absent from [1]: in contrast, the higher regret rate in [1] suggests that the contextual problem leads to higher regret. Additionally, our formulation helps generalize our approach to other valuation models, as demonstrated with $\beta$-Hölder valuations.
As a third point, we point out that [1] assumes i.i.d. contexts, with bounded eigenvalues of the covariance matrix. This assumption is removed in our work, and our algorithm can handle adversarial contexts.
2/ [2] rely on a black-box, expert aggregation algorithm, where each expert represents a possible value of the pair ($\theta$, $F$) on a discretized grid. They control the discretization error by noticing that posting a price slightly lower than the optimum price can only decrease the reward by the difference between the optimal price and this price. Thus, this half-Lipschitzness of the problem leads them to design an algorithm that essentially “rounds down” prices.
By contrast, our algorithm first estimates the parameters, before running a bandit algorithm across the grid of prices that learns and shares information across contexts. This is only possible if the error made in the first phase in estimating $\theta$ implies a small error in the second phase in estimating $F(\delta)$ for values of $\delta$ over a grid. More precisely, if we knew the value of $x_t^{\top}\theta$ exactly, we could learn the value of $F(\delta)$ by posting a price $x_t^{\top}\theta + \delta$ without assuming Lipschitzness of $F$. However, since we only know $x_t^{\top}\theta$ up to a small error $\epsilon$, we must assume that $F(\delta) \approx F(\delta \pm \epsilon)$ to learn $F(\delta)$.
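To make the 1-bit-feedback idea concrete, here is a small self-contained simulation (our own illustrative sketch, not the paper's algorithm; it assumes $\theta$ is already known exactly): posting the price $x_t^\top\theta + \delta$ makes the sale indicator a Bernoulli variable with mean $1 - F(\delta)$ regardless of the context, so averaging sales at each grid increment $\delta$ yields a piecewise-constant estimate of $F$ that shares samples across contexts.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 22000
theta = rng.normal(size=d)

deltas = np.linspace(-0.5, 0.5, 11)  # grid of price increments
sales = np.zeros(len(deltas))
counts = np.zeros(len(deltas))

for t in range(T):
    x = rng.normal(size=d)              # context (distribution is arbitrary here)
    k = t % len(deltas)                 # cycle over the grid (pure exploration)
    price = x @ theta + deltas[k]       # assumes theta is known exactly
    valuation = x @ theta + rng.uniform(-0.5, 0.5)  # noise ~ Uniform[-0.5, 0.5]
    sales[k] += float(valuation >= price)           # 1-bit feedback: sale or not
    counts[k] += 1

# Sale frequency at increment delta estimates 1 - F(delta),
# independently of the context, so samples are shared across contexts.
F_hat = 1.0 - sales / counts
F_true = np.clip(deltas + 0.5, 0.0, 1.0)  # CDF of Uniform[-0.5, 0.5]
print(np.round(np.abs(F_hat - F_true).max(), 3))
```

With 2000 samples per grid point, the piecewise-constant estimate of $F$ is accurate to a few percent, illustrating why the sale indicator alone suffices to learn the noise c.d.f.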
While our assumptions are stronger than that of [2], our regret rates are lower (and match the lower bound established for Lipschitz-continuous c.d.f.). Whether the rate of $T^{2/3}$ can be achieved without this assumption, or whether the minimax optimal rate differs when $F$ is not Lipschitz continuous, is an interesting question.
### Questions
1/ We agree that a UCB-type algorithm could be used instead of a Successive Elimination algorithm in the second phase of VAPE, provided that the expression for the Upper Confidence Bound on the rewards of the different prices be similar to that of Line 10 in Algorithm 2 or Algorithm 3. In this sense, there is no necessity to apply a successive elimination approach; however, elimination-based methods are no more conceptually complex than a UCB-type approach.
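For illustration, a generic successive-elimination routine over a finite set of arms (standing in for the discretized price increments) can be sketched as follows; this is our own simplified toy, not the paper's Algorithm 2 or 3, whose confidence bounds additionally share information across contexts:

```python
import numpy as np

def successive_elimination(means, T, rng):
    """Pull active arms round-robin; drop any arm whose upper confidence
    bound falls below the best lower confidence bound among active arms."""
    K = len(means)
    active = list(range(K))
    pulls, total = np.zeros(K), np.zeros(K)
    for t in range(T):
        a = active[t % len(active)]
        total[a] += rng.binomial(1, means[a])  # Bernoulli reward
        pulls[a] += 1
        if min(pulls[b] for b in active) > 0:
            mu = total / np.maximum(pulls, 1)
            rad = np.sqrt(2.0 * np.log(T) / np.maximum(pulls, 1))
            best_lcb = max(mu[b] - rad[b] for b in active)
            active = [b for b in active if mu[b] + rad[b] >= best_lcb]
    return active

rng = np.random.default_rng(1)
# Mean rewards of four hypothetical price increments.
arms = np.array([0.20, 0.50, 0.80, 0.45])
survivors = successive_elimination(arms, 20000, rng)
print(survivors)  # with high probability only the best arm remains
```

A UCB variant would instead always pull the arm with the highest upper confidence bound; as discussed above, either choice works provided the confidence bounds take the appropriate form.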
2/ We agree that our claim in Lines 185-186 is misleading. Indeed, some previous works have used the same method to estimate $g$, as noted by the reviewer. We sincerely apologize for conveying the wrong message.
What we meant was that the remark that $g$ can be estimated using $1$-bit feedback allows us to decouple the estimation of $g$ and $F$ and estimate them in a straightforward manner, which seems to have been overlooked by previous works (that we were aware of). Closest to our method is probably [3]: the authors use the same method for estimating $g$, however, their estimation of $F$ strongly differs: they do it in an **explore-exploit fashion**, using **kernel estimators** that are fitted **on the samples already used to estimate $g$**. By contrast, we use piecewise-constant estimators, fitted in a regret-minimization sub-routine. We argue that our estimators are easier to interpret and that the assumptions necessary to establish our result are milder and less cumbersome than those of [3]. Moreover, explore-exploit approaches require i.i.d. contexts with strictly positive definite covariance matrices, while ours can handle adversarial contexts.
However, as the reviewer pointed out, our approach for estimating the pair ($g$, $F$) is close to that of [1]. We therefore propose to remove this sentence altogether, and to add further detail on the distinction between our work, that of Luo et al (2022), and that of [3].
**Limitations:** We acknowledge that the nature of the problem could raise fairness issues. We addressed some of these in the checklist under the “Broader Impacts” section. We are willing to move the paragraph to the main paper.
----
[1] Luo, Yiyun, Will Wei Sun, and Yufeng Liu. "Contextual dynamic pricing with unknown noise: Explore-then-ucb strategy and improved regrets." NeurIPS, 2022
[2] Xu, Jianyu, and Yu-Xiang Wang. "Towards agnostic feature-based dynamic pricing: Linear policies vs linear valuation with unknown noise." AISTATS, 2022.
[3] Fan, Jianqing, Yongyi Guo, and Mengxin Yu. "Policy optimization using semiparametric models for dynamic pricing." JASA, 2024
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: The reviewer thanks the authors for their insightful and exhaustive reply. For the comparison with [1], I somewhat agree that it is suboptimal to have a linear bandit algorithm, while yours is optimal. However, as I asked in Q1, by replacing your policy-elimination stage with a UCB, the algorithm is on the one hand still optimal and on the other hand closer to their algorithmic design. However, since you are anyway the first to achieve optimality, and given that [1] had been accepted to NeurIPS 2022, I would like to support the acceptance of your paper.
For the difference of assumptions compared to [2], I am still skeptical as to whether their half-Lipschitzness is really not applicable. In other words, I am not sure whether you really need $F(\delta) \approx F(\delta \pm \epsilon)$, which is stronger than it appears, for a $T^{2/3}$ regret bound. For the estimates of $F$ and their difference from [3], I tend to believe that there is no substantial difference between (poly) kernels and discretizations. But I am happy to leave these for discussions beyond this paper.
MaskFactory: Towards High-quality Synthetic Data Generation for Dichotomous Image Segmentation | Accept (poster) | Summary: This paper proposes a generative method for dichotomous image segmentation. In detail, non-rigid and rigid editing techniques are used to generate high-quality synthetic masks. Those masks are leveraged for segmentation model training, which typically requires expensive dichotomous image labeling.
Strengths: This paper introduces an approach for generating high-quality datasets for DIS. In my opinion, the methodology is carefully designed and experiments show the effectiveness. Overall I think this is an interesting and insightful paper.
Weaknesses: 1. There seem to be some typos/errors in the paper.
a. In line 140, the (V,E) should be (V, E_s). The E_s does not show in section 3.2.2 except line 139.
b. The font should be consistent, e.g., line 139,140 and Appendix. A Algorithm 1 line 7-11.
c. Figure 2 does not show what is discussed in Section 3.3. In Figure 2, the prompt pool is used for generation, but in Section 3.3 the authors say that the segmentation masks and the edge conditions are used. If the text prompt is not used, it is meaningless to introduce the text in the last row of Figure 2 Stage 1 and in Section 3.2.2. Also, a detailed description of {p^m_i} is missing from the paper, though its meaning is fairly clear. Only P_i appears in Section 3.2.2, and at least {p^m_i} should appear in the following sections for consistency.
d. "Topylogy" should be "Topology" in Figure 2.
e. Remove full stops in the title of Section 4, 4.1, 4.2.
2. The authors argue that, inspired by ControlNet, a Canny condition is also used. However, since no pretrained ControlNet is used and some layers are trained according to lines 173-174, it is unclear whether the Canny condition is useful. Though more conditions intuitively provide more information, the Canny edges are actually already included in the more informative mask condition. So some ablation studies should be presented to show that the additional condition does help. But according to Section 5.2, is a pretrained Canny ControlNet used? This needs to be clarified in the main paper.
3. The method shows only a limited improvement compared with SAM-HQ, which fine-tunes SAM with an additional branch on some prepared data.
4. It seems the impact of this method itself is limited, especially since DIS is not so important. Maybe the authors can think about extending the method to a broader scope. Though I think this is a minor issue.
Technical Quality: 3
Clarity: 1
Questions for Authors: Some suggestions
1. The authors use Zero123 for rigid mask editing. I am curious whether ImageDream [1] can be used here. Both of them can generate novel-view images. [1] can take additional text as input, and I am interested in whether it can be used to directly change the shape of the mask, as this paper shows that "A round table" can change the shape in Figure 2.
2. For the topology mentioned in Section 3.2.2, is it easy to show some topology structures of some samples? It would be better to include topology results for better illustration.
3. For Table 1, the numbers (2500, 5000, ...) are too close to the line.
4. The authors can include more discussion on related works [2,3]. It is interesting to discuss whether the data filtering strategy in [2] can be combined in this work. And [3], which also takes a dataset as the input and outputs a new one, also includes edited masks for the generation of specially tuned generative models, sharing the similar core idea of this paper.
As I discussed in Strengths, overall this is an interesting paper. And I also love the area of synthetic data generation. However, it seems this paper was finished too hastily. There are some improper points in the paper, discussed in the Weaknesses. I believe the authors should revise the paper thoroughly for precise presentation. And it is very important to describe the method as it is actually implemented. It seems the text, masks, and the Canny edges are all used in the generation according to line 388; then Figure 2, Section 3.3 including Formula 8, and the algorithm in the appendix should be modified. Considering the many issues that need to be resolved, I prefer rating this paper as "Borderline reject" and I will consider raising my rating to "Borderline accept" after the rebuttal for my possible misunderstanding and the authors' proper clarification.
[1] Wang, Peng, and Yichun Shi. "Imagedream: Image-prompt multi-view diffusion for 3d generation." *arXiv preprint arXiv:2312.02201* (2023).
[2] Yang, Lihe, et al. "Freemask: Synthetic images with dense annotations make stronger segmentation models." *Advances in Neural Information Processing Systems* 36 (2024).
[3] Zhu, Lingting, et al. "Generative Enhancement for 3D Medical Images." *arXiv preprint arXiv:2403.12852* (2024).
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their meticulous review and helpful suggestions. We address each point as follows:
### Q1: Typos and Errors
We apologize for these oversights and will correct all identified issues in the revised version:
a. We will change (V,E) to (V, E_s) in line 140 and ensure consistency throughout Section 3.2.2.
b. We will unify the font style across the paper, including lines 139-140 and Algorithm 1.
c. We will revise Figure 2 and Section 3.3 to accurately reflect our method's implementation, clarifying the use of text prompts, segmentation masks, and edge conditions. We will also provide a detailed description of {p^m_i} for consistency.
d. We will correct "Topylogy" to "Topology" in Figure 2.
e. We will remove full stops in the titles of Section 4, 4.1, and 4.2.
### Q2: Canny Condition Usage
We appreciate this insightful observation. While the canny condition is indeed implicitly included in the mask, our experiments show that explicitly including it improves results. We have added an ablation study to demonstrate this:
| Conditions | maxF1 ↑ | M ↓ | E_φ^M ↑ |
|---------------|---------|-------|---------|
| Mask only | 0.776 | 0.075 | 0.869 |
| Mask + Canny | 0.784 | 0.073 | 0.875 |
These results show that including the Canny condition leads to better performance across all metrics. In Figure 3 of our global PDF, we demonstrate the effect of adding the Canny condition. We found that incorporating Canny edges provides better constraints on the boundaries of generated images, resulting in more precise and detailed outputs.
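As a minimal toy sketch of the multi-condition idea (our own illustration, not the paper's implementation; a simple gradient filter stands in for OpenCV's Canny detector), an edge condition can be derived from the binary mask itself and stacked with it:

```python
import numpy as np

def edge_condition(mask):
    # Boundary band of a binary mask via gradient magnitude
    # (a crude stand-in for Canny edge detection).
    gy, gx = np.gradient(mask.astype(float))
    return (np.hypot(gx, gy) > 0).astype(np.uint8)

# Toy 8x8 binary mask containing a filled square object.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1

edges = edge_condition(mask)
conditions = np.stack([mask, edges])  # (2, H, W) multi-condition input
print(conditions.shape)
```

The edge channel is nonzero only near the object boundary, which is why adding it as a separate condition can provide an explicit boundary constraint even though it is derivable from the mask.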
### Q3: Comparison with SAM-HQ
While our improvement over SAM-HQ is modest, it's important to note that SAM-HQ has already incorporated a large amount of additional data during its fine-tuning process. This likely means that SAM-HQ's performance is approaching a plateau, making significant improvements more challenging. In contrast, our method shows more substantial improvements over other baselines, demonstrating its effectiveness, especially in scenarios where performance hasn't yet reached saturation.
### Q4: Suggestions
1. **ImageDream**: We have explored ImageDream and downloaded the weights as suggested. However, we found that it may not be directly suitable for binary image editing without fine-tuning. The results are shown in Figure 1 of our global PDF. Despite this, the method is indeed very inspiring for our work, and we will include a reference to it in our related work section.
2. **Topology visualization**: We agree this would enhance the paper. We have included topology structure visualizations for selected samples in Figure 2 of our global PDF. These visualizations clearly demonstrate how our method preserves and manipulates topological structures during the editing process. We will incorporate these more intuitive visualizations in the next version of our paper.
3. **Table 1 formatting**: We will adjust the spacing in Table 1 for better readability.
4. **Related work discussion**: We appreciate these suggestions and will expand our discussion of [2] and [3]. We will explore incorporating the data filtering strategy from [2] and the dataset augmentation techniques from [3] into our work, and report the results in the revised version.
We acknowledge that some aspects of the paper need refinement, and we are committed to addressing all these issues in our revision. We will ensure that our method description accurately reflects its implementation, including the use of text prompts, masks, and canny edges in the generation process. We will revise Figure 2, Section 3.3, Formula 8, and the appendix algorithm accordingly.
We appreciate the opportunity to clarify these points and look forward to submitting a significantly improved version of the paper.
References:
[1] Wang, P., & Shi, Y. (2023). Imagedream: Image-prompt multi-view diffusion for 3d generation. arXiv preprint arXiv:2312.02201.
[2] Yang, L., et al. (2024). Freemask: Synthetic images with dense annotations make stronger segmentation models. NeurIPS 36.
[3] Zhu, L., et al. (2024). Generative Enhancement for 3D Medical Images. arXiv preprint arXiv:2403.12852.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The rebuttal has addressed my concerns, and I have accordingly raised my rating to 5.
While the weaknesses of the manuscript are evident, it appears that the draft was produced in haste and requires significant revisions.
Overall, I find the topic of generated data for downstream tasks to be interesting, and the proposed technical solution is reasonable.
---
Rebuttal 2:
Comment: Thank you very much for your valuable feedback, time, and attention. We sincerely appreciate your encouragement. We will continue to refine the descriptions in our document to make them even clearer and more accurate. If you have any further questions or need additional clarification on any point, please don't hesitate to let us know. | Summary: This paper introduces a new approach for generating diverse and precise datasets. The authors first introduce a mask editing method that combines rigid and non-rigid editing techniques to generate high-quality synthetic masks leveraging geometric priors from diffusion models and adversarial training and self-attention mechanisms for complex, topologically consistent modifications. Then, the authors generate pairs of images and segmentation mask control generation methods. Finally, the experiments on the DIS5K dataset benchmark demonstrate superior performance compared to existing methods.
Strengths: 1. The paper provides a comprehensive and innovative solution to the challenges of DIS dataset creation. The methodology is well-detailed, and the experimental results robustly support the claims of superior performance.
2. The paper is well-written and easy to read.
Weaknesses: 1. My major concern is the novelty of the method. Although the paper works on a new task of generating simulated data, the method seems to be a combination of several unrelated approaches, including Zero-123, GPT, ControlNet, etc.
2. Whether the generated data can boost the Dichotomous Image Segmentation task is not validated. For example, ControlNet-generated images can be unrealistic, there can be a large domain gap between real-world images and training images, and the generated images do not seem very diverse. Can generated images really help this task?
3. The approach is evaluated on only one dataset, which is limited.
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. The proposed method is evaluated on only one task, which is limited to a single object category. How about applying the approach to general semantic segmentation datasets, such as VOC, ADE20K and CityScapes?
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: See Weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's comprehensive examination of our work. Our responses to each point are as follows:
### Q1: Novelty of the method and combination of existing approaches
While our approach does integrate existing methods, its novelty lies in:
1. **Task-specific adaptations for DIS tasks**: Our method is specifically tailored for Dichotomous Image Segmentation, addressing unique challenges such as high-precision requirements and the need for diverse training samples.
2. **Synergistic integration of rigid and non-rigid editing**: We combine these techniques in a novel way to provide comprehensive mask editing capabilities, enhancing both geometric precision and topological consistency.
3. **Topology-preserving adversarial training**: This innovative approach ensures structural integrity during non-rigid editing, which is crucial for maintaining the quality of binary masks in DIS tasks.
These innovations collectively contribute to a method that is greater than the sum of its parts, specifically designed to meet the demands of DIS tasks.
### Q2: Validation of generated data boosting DIS performance
We have rigorously validated the effectiveness of our generated data in improving DIS performance:
1. Our method achieves a 5.6% improvement in maxF1 score over the baseline, demonstrating significant performance gains.
2. We employ multi-conditional control generation to enhance consistency between generated images and masks, addressing the concern of unrealistic images.
3. The combination of rigid and non-rigid editing increases data diversity, mitigating the issue of limited variability.
To further illustrate the impact of our generated data, we present additional experimental results:
| Training Data | maxF1 ↑ | M ↓ | E_φ^M ↑ |
|------------------|---------|-------|---------|
| Real only | 0.742 | 0.081 | 0.848 |
| Real + Generated | 0.784 | 0.073 | 0.875 |
These results clearly demonstrate that our generated data significantly improves performance across multiple metrics.
### Q3: Limited evaluation on one dataset
We acknowledge this limitation in our initial submission. To address this, we have conducted additional experiments on various datasets to demonstrate the generalizability of our approach. These results will be included in the revised manuscript.
### Q4: Applicability to general semantic segmentation datasets
We have explored the applicability of our method to other semantic segmentation tasks using InSPyReNet [1] as a baseline. Here are some results:
1. ECSSD (Salient object detection):
| Method | S_α ↑ | F_max ↑ | MAE ↓ |
|----------------|-------|---------|-------|
| Baseline | 0.942 | 0.959 | 0.023 |
| Ours (Mix1000) | 0.950 | 0.962 | 0.022 |
2. HKU-IS (Salient object detection):
| Method | S_α ↑ | F_max ↑ | MAE ↓ |
|----------------|-------|---------|-------|
| Baseline | 0.931 | 0.952 | 0.022 |
| Ours (Mix1000) | 0.943 | 0.954 | 0.021 |
3. UOD (Underwater object detection):
| Method | S_α ↑ | F_max ↑ | MAE ↓ |
|----------------|-------|---------|-------|
| Baseline | 0.868 | 0.878 | 0.067 |
| Ours (Gen1000) | 0.883 | 0.901 | 0.056 |
These results demonstrate the effectiveness of our approach across various semantic segmentation tasks. We are currently extending our experiments to more challenging scenarios such as NI-seg and FI, and will include these results in the revised manuscript.
### Reference
[1] Yu, Q., et al. "Multi-view Aggregation Network for Dichotomous Image Segmentation." CVPR, 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' feedback. The feedback resolves my concerns, and I would like to raise the score to 5
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind feedback and support. If you have any further questions, please don't hesitate to ask. | Summary: The paper introduces MaskFactory, a method for producing high-quality synthetic datasets for Dichotomous Image Segmentation (DIS) tasks. The method includes a two-stage process: mask editing (combining rigid and non-rigid transformations) and image generation using multi-conditional control methods. The suggested method improves model performance in DIS tasks by providing variation in synthetic data.
Strengths: - The manuscript is well-written and constructed.
- The authors conduct comprehensive ablation studies to highlight the contribution of each component of their method, further validating their approach.
- The proposed MaskFactory framework shows improvements in the quality and diversity of synthetic datasets, which is critical for advancing DIS applications.
Weaknesses: There is one limitation: the authors acknowledge in the limitations section issues with unnatural images in complex scenarios, but the paper could delve deeper into how these might impact practical applications.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's perceptive observation. We provide a more in-depth analysis below:
**Q: The paper acknowledges issues with unnatural images in complex scenarios, but could delve deeper into how these might impact practical applications.**
A: We address this concern by focusing on two key points:
1. DIS data scarcity for VAE training:
We opted for a more general VAE to ensure better generalization. Additional experiments show:
| VAE Training Data | maxF1 ↑ | M ↓ | E_φ^M ↑ |
|-------------------|---------|------|----------|
| General dataset | 0.784 | 0.073| 0.875 |
| Small DIS dataset | 0.762 | 0.079| 0.853 |
A DIS-specific VAE produces more visually coherent images but leads to a performance decrease due to overfitting.
2. Impact on performance:
Despite occasional unnatural images, our method significantly improves downstream segmentation performance. We achieve a 5.6% improvement in maxF1 score compared to the baseline, suggesting increased data diversity outweighs drawbacks of some unnatural samples. | Summary: This paper introduces MaskFactory, a novel approach aimed at addressing the challenges of generating high-quality synthetic datasets for Dichotomous Image Segmentation (DIS) tasks.
The authors tackle the challenges by leveraging a blend of rigid and non-rigid editing techniques to generate accurate synthetic masks. Rigid editing utilizes geometric priors from diffusion models for precise viewpoint transformations, while non-rigid editing incorporates adversarial training and mutual self-attention mechanisms for complex modifications. With the accurate generated segmentation masks, a multi-conditional control generation method is then employed to create high-resolution images. The efficacy of MaskFactory is demonstrated through experiments on the widely-used DIS5K dataset, showing superior quality and efficiency in dataset generation compared to existing methods, thus significantly reducing preparation time and costs.
Strengths: 1. The scientific problem discussed here is worth researching, and there is still much room for exploration in this area. If the synthetic technology can be leveraged to substantially expand the simulated image-mask pairs based on existing data, it would benefit model training, especially with the current large multimodal models.
2. This paper takes into consideration both rigid mask editing and non-rigid mask editing, combining them to jointly contribute to mask generation. Additionally, during the non-rigid mask editing process, a topology-preserving loss is introduced to maintain the topological structure of the original mask.
3. The experimental results are very encouraging. Compared to previous methods, the data distribution generated in this paper is closer to the real data. As the amount of synthetic data involved in training increases, the model's segmentation capability also increases proportionally.
Weaknesses: 1. The writing of this paper is too brief. It does not clearly show how the authors specifically carry out the mask editing part, or what unique designs they have implemented in the editing process.
2. The paper lacks novelty in certain aspects. For the rigid mask editing part, using viewpoint changes to generate masks is a very common practice. The network architecture and synthesis method in the non-rigid mask editing part are completely identical to those used in MaskFactory. Although an additional topology preserving loss has been added, this loss is hard to regard as an independent innovation. Furthermore, the ablation study does not present results using only L_GAN and L_content, making it difficult to intuitively understand the significant impact of L_structure.
3. Some of the table headers are too narrow.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can rigid mask editing and non-rigid mask editing be integrated into one framework, rather than just being simply combined?
2. Can the improvement brought by the L_structure loss be intuitively reflected in the ablation study?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The method faces challenges including the occasional production of unnatural images with stark foreground-background distinctions and inaccuracies in complex scenarios. Additionally, it relies on pre-annotated image-mask pairs, limiting autonomous data generation and requiring high-quality initial annotations. However, these issues are acceptable as there is extensive research aimed at resolving the inherent problems with ControlNet, which does not detract from the effectiveness of the method presented in this paper. Furthermore, the authors could explore using pseudo-labels for mask correction and generation to reduce dependency on pre-annotated image-mask pairs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' thorough evaluation and constructive feedback. We are grateful for the opportunity to clarify and expand on our work.
### Q1: Can you elaborate on the methodology, especially regarding mask editing and unique designs?
We will expand the methodology section in the revised manuscript to include:
1. **Rigid Editing**: We will provide a comprehensive explanation of our rigid editing process using Zero123. As stated in Section 3.2.1: "We leverage the Zero123 method, which employs a viewpoint-conditioned diffusion model ψθ to manipulate masks' perspectives." We will elaborate on how this allows for precise geometric adjustments, focusing on viewpoint and scale transformations.
2. **Non-rigid Editing**: We will offer a more detailed description of our non-rigid editing process, including:
- Topology-preserving adversarial training
- Mutual attention mechanism
- Foreground-background focus guidance
As mentioned in Section 3.2.2: "We introduce a topology-preserving adversarial training mechanism to mitigate artifacts and structural degradation in binary mask editing." We will expand on how this mechanism works and its importance in maintaining structural integrity.
3. **Multi-conditional Control Generation**: We will provide a more in-depth explanation of our image generation stage, as described in Section 3.3: "We introduce a multi-condition control generation method to achieve precise RGB image generation."
We will also include architectural diagrams and pseudocode to enhance clarity and understanding of our methodology.
### Q2: Can you clarify your contributions beyond existing methods?
Our work presents several key innovations in the field of mask editing and dataset generation for DIS tasks:
1. **Specific Improvements for DIS Tasks**: Our method is designed for Dichotomous Image Segmentation (DIS) tasks, which require extremely high fidelity and precision in mask editing. Our approach generates diverse samples while maintaining high accuracy.
2. **Methodological Innovations**:
- Topology-preserving adversarial training for non-rigid editing, ensuring structural integrity of edited masks
- Combination of rigid and non-rigid editing techniques, providing comprehensive mask editing capabilities
- Multi-conditional control generation method for precise image-mask pair creation
3. **Performance Improvements**: As demonstrated in Table 2 of our paper, our method consistently enhances the performance of various state-of-the-art segmentation networks across multiple datasets, proving its effectiveness and generality.
4. **Addressing Unique Challenges in DIS Tasks**: Our method addresses the need for high-precision annotations and diverse training samples in DIS tasks, which has rarely been specifically targeted in previous methods.
### Q3: Can you provide more comprehensive ablation study results?
We have conducted additional experiments to provide a complete picture of the impact of each loss component:
| L_GAN | L_content | L_structure | maxF1 ↑ | M ↓ |
|-------|-----------|-------------|---------|-------|
| ✔ | | | 0.778 | 0.073 |
| | ✔ | | 0.745 | 0.075 |
| | | ✔ | 0.751 | 0.074 |
| ✔ | ✔ | | 0.780 | 0.074 |
| ✔ | | ✔ | 0.782 | 0.073 |
| ✔ | ✔ | ✔ | 0.784 | 0.073 |
These results demonstrate that while L_GAN alone provides a strong baseline, the combination of all three loss components yields the best performance. This underscores the importance of our multi-faceted approach to mask editing and image generation.
### Q4: Can rigid and non-rigid mask editing be integrated into one framework?
Currently, we have not integrated these two frameworks, primarily because the rigid editing part cannot use topology preservation constraints. After viewpoint transformation, the detected key points undergo significant shifts, making topology preservation challenging.
In future work, we plan to explore the following directions to attempt integration:
- Develop adaptive loss functions that can handle both rigid and non-rigid transformations
- Design multi-stage editing processes that handle rigid and non-rigid edits separately while maintaining overall consistency
- Research novel architectures that can automatically select appropriate editing techniques based on the input and desired transformation
### Q5: How do you plan to address the limitations mentioned in the paper?
We are actively addressing these issues, particularly focusing on the problem of unnatural images:
1. **Unnatural Images**: We recognize that this issue primarily stems from VAE training. We attempted to train a specialized VAE using the DIS dataset, but due to limited data, the results were not as effective as using a larger general dataset. Here are our preliminary results:
| VAE Training Data | maxF1 ↑ | M ↓ | E_φ^M ↑ |
|-------------------|---------|-------|---------|
| General dataset | 0.784 | 0.073 | 0.875 |
| Small DIS dataset | 0.762 | 0.079 | 0.853 |
In the future, we plan to explore:
- Data augmentation techniques to effectively increase the size of the DIS dataset
- Transfer learning methods, using VAEs pre-trained on large-scale datasets and then fine-tuned on DIS data
2. **Inaccuracies in Complex Scenarios**: We are developing hierarchical generation approaches and improved attention mechanisms to better capture long-range dependencies in complex scenes.
3. **Reliance on Pre-annotated Pairs**: We are researching self-supervised learning methods and weak supervision strategies to reduce the need for extensive pre-annotated data.
We will include preliminary results and discussions of these ongoing efforts in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough review and insightful questions. In our rebuttal, we have clarified several points of potential confusion and provided extensive additional experimental results to address your inquiries.
As the discussion phase is drawing to a close, we are eager to receive your response. After reviewing our rebuttal, do you have any remaining concerns or questions? Please don't hesitate to raise any issues or seek further clarification. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' thoughtful comments and suggestions. To address some of the concerns raised and provide additional support for our claims, we have prepared a global PDF with supplementary visual evidence. This document contains three key figures that demonstrate the effectiveness and novelty of our approach.
## Figure 1: Comparison with ImageDream
Figure 1 presents a comparison between our method and ImageDream, showcasing the effectiveness of our approach in both rigid and non-rigid editing of binary masks. The figure consists of four columns:
1. Original image
2. Result edited by ImageDream
3. Result from our rigid editing method
4. Result from our non-rigid editing method
This comparison clearly demonstrates that while ImageDream is a powerful tool for general image editing, it struggles with the precise requirements of binary mask editing for DIS tasks. Our method, both in its rigid and non-rigid editing capabilities, produces results that are more suitable for DIS applications, maintaining the binary nature of the masks while allowing for meaningful edits.
## Figure 2: Topology Modeling Visualization
Figure 2 provides a visualization of our topology modeling structure, which is crucial for our non-rigid editing process. This figure illustrates how topological constraints are applied during non-rigid editing, ensuring that the structural integrity of the mask is preserved even as complex deformations are applied.
The visualization demonstrates the before and after states of the mask, along with their corresponding topological structures. This clearly shows how our method maintains critical topological features while allowing for significant shape changes, a key innovation in our approach to DIS tasks.
## Figure 3: Impact of Canny Edge Constraints in ControlNet Generation
Figure 3 showcases the effect of incorporating Canny edge constraints in our ControlNet-based image generation process. The figure presents a side-by-side comparison of generation results:
1. Without Canny edge constraints
2. With Canny edge constraints
This comparison vividly illustrates that the addition of Canny edge constraints results in generated images with more defined and accurate boundaries. This improvement is particularly crucial for DIS tasks, where precise edge delineation is essential for accurate segmentation.
Pdf: /pdf/79347f48837aaaa8e74e78857afbc7e50df48122.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beyond Primal-Dual Methods in Bandits with Stochastic and Adversarial Constraints | Accept (poster) | Summary: This article tackles the problem of multi-armed bandits with general constraints: at each time step, the learner receives a reward and $m$ costs, corresponding to $m$ independent constraints, and its goal is to maximize the reward while maintaining the cumulative cost corresponding to each constraint at a sub-linear level in the time horizon. To choose the action played at each time step (in the sense of the probability to play each arm), the authors propose an adaptation of Exp-IX, which chooses an action in an optimistic feasible set. Their main contribution consists in proving *best-of-both-worlds* guarantees for this algorithm *without knowing* a slackness parameter, usually assumed as known in the related literature. In this setting, best-of-both-worlds guarantees consist in establishing sub-linear regret with respect to the best feasible probability on the one hand, and simultaneously establishing a competitive ratio determined by the slackness parameter (which is, again, not known and thus not used by the algorithm) on the other hand, if rewards are adversarial. This is the first result of this kind without knowledge of the Slater’s parameter.
Strengths: I found the paper very clear and well written. The setting and the algorithm are well presented, as well as the results. Clear intuitions are provided, especially in the presentation of the technical results.
Weaknesses: In my opinion, there is a major issue with the results, which is that they are obtained with a modified definition of the Slater’s parameter. As stated by the authors, in the literature it is generally defined with the set of probabilities allocated to each arm, while in the paper it is defined with the arm set. While the authors claim that this definition is « slightly stronger », I disagree with this claim. It seems to me that, on the contrary, the problem becomes much simpler if we know in advance a set of K atoms in the simplex that contain for sure a safe policy (even the safest, in a sense). To me, this seems like trading a strong assumption (knowing the Slater’s parameter) for another strong assumption. This assumption also considerably restricts the class of problems that can be considered. Furthermore, from my current understanding of the arguments, it seems that the assumption is not just an artifact of the analysis and that the guarantees would not hold if the infimum in the Slater’s condition was achieved by a mixed strategy.
In itself, I find the paper interesting despite this limitation, but I believe that this should be discussed more thoroughly because it reduces the potential impact of the paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: * Please elaborate on the definition of the Slater’s condition. In particular, I am interested in understanding whether the definition is indeed necessary for the guarantees to hold or if it is an artifact of the analysis.
* You claim that the competitive ratio matches the lower bound obtained in another paper. However, this seems inexact to me because the lower bound does not consider the same definition of the slackness parameter. Am I missing something?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments about the paper.
We agree that a more detailed discussion on this point is necessary and will include an extended version of our response in an additional paragraph. However, we would like to emphasize that the primary contribution of our paper is demonstrating that, perhaps surprisingly, a new and more natural **UCB-style approach** can be effectively employed in the **adversarial** BwK setup.
### On Slater’s condition:
First, we disagree with the claim that our definition of Slater’s condition “*greatly simplifies the problem*”. While, as the reviewer noted, this may facilitate the identification of a safe strategy, this aspect is not the most challenging part of the problem. Indeed, this can be easily done by simply estimating the cost of all the arms. For example, in the BwK framework, the learner knows a safe action that satisfies all the constraints at the same time, and yet many challenges remain to be addressed.
On a minor note, we point out that in the stochastic setting, Slater’s condition is not even necessary and only pertains to the adversarial setting.
Moreover, we would like to highlight that our definition of Slater’s condition coincides with the one on mixed strategies in the case of a single cost/constraint.
Finally, we would like to remark that in most works (e.g., in the BwK literature), it is assumed that there exists a void action that does not use resources, and hence satisfies all the constraints. This assumption is consistent with our definition.
### On the lower bound:
Regarding the reviewer’s second question, it is true that the paper from which we borrow the lower bound uses a different definition of Slater's parameter. However, in the instance used in the lower bound, the strategy maximizing the feasibility margin puts all the probability on a single action. We will add a remark on this point to avoid confusion. This supports our assertion that our definition of Slater’s parameter is not significantly weaker than the one commonly used in the literature.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response, I have no more question for now.
---
Reply to Comment 1.1.1:
Comment: Thank you for the time dedicated to our paper. We think we have addressed all the weaknesses highlighted by the reviewer. Based on our responses, we respectfully ask the reviewer to consider adjusting their evaluation. Thank you. | Summary: This paper studies the problem of multi-armed bandits under long-term constraints.
They give an algorithm with regret and violation guarantees simultaneously for both stochastic and adversarial constraints (i.e. best-of-both-worlds guarantees).
Specifically, they guarantee $\tilde{O}(\sqrt{T})$ regret and violation under stochastic constraints (without Slater's condition), as well as $\tilde{O}(\sqrt{T})$ $\alpha$-regret and violation under adversarial constraints (with Slater's condition with parameter $\rho$ where $\alpha=\frac{\rho}{\rho +1}$).
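As a small illustration of the guarantee summarized above (our own sketch, not taken from the paper or the review): the competitive ratio $\alpha=\frac{\rho}{\rho+1}$ is always a strict fraction of the optimum and improves monotonically as the Slater parameter $\rho$ grows, i.e., more feasibility slack yields a better ratio.

```python
# Illustrative sketch (not from the paper): the competitive ratio
# alpha = rho / (rho + 1) as a function of the Slater parameter rho.
def competitive_ratio(rho: float) -> float:
    return rho / (rho + 1.0)

ratios = [competitive_ratio(r) for r in (0.1, 1.0, 10.0)]
assert all(0.0 < a < 1.0 for a in ratios)  # always strictly below 1
assert ratios == sorted(ratios)            # monotone increasing in rho
```

For example, $\rho = 1$ gives $\alpha = 1/2$, while $\rho = 10$ already gives $\alpha \approx 0.91$.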
Strengths: - The paper appears to improve on [11] in that its regret scales logarithmically with the number of constraints and doesn't require Slater's condition for stochastic constraints.
- The exposition is clear and there are nice insights from the theoretical results.
Weaknesses: My main concern is that the paper is missing key literature that renders the contributions rather limited.
More details in the following:
- [R1] gave a similar algorithm that uses an "optimistic set" to overestimate the constraint and then plays a regret minimizer within this set. They showed $\tilde{O}(\sqrt{T})$ regret and violation without Slater's condition. In light of this, it appears the only contribution is extending [R1] to adversarial costs (and a more restrictive finite action set). Although useful, this result is more limited than claimed.
- The line of literature on stochastic bandits with round-wise constraints (e.g. [R2], [R3], [R4]) is directly relevant because it considers stochastic constraints with stronger constraint satisfaction guarantees (i.e. no violation).
- The abstract claims that their particular problem setting generalizes Bandits with Knapsacks, which is unclear to me. Without this being the case, it seems that the only improvement on the literature would be for the recent arXiv paper [11].
[R1] Gangrade et al, "Safe Linear Bandits over Unknown Polytopes", COLT 2024 (first appeared on arXiv in 2022 as 2209.13694).
[R2] Amani et al, "Linear Stochastic Bandits Under Safety Constraints", NeurIPS 2019.
[R3] Pacchiano et al, "Stochastic Bandits with Linear Constraints", AISTATS 2021.
[R4] Moradipari et al, "Safe Linear Thompson Sampling With Side Information", IEEE TSP 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could you provide more clarification on how this setting generalizes Bandits with Knapsacks?
- Given the references I gave, could you provide any insight on how this work contributes to this literature?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I did not see clear discussion of limitations. See the "Weaknesses" section for my discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback on our work.
### On related works:
We will gladly include the related works highlighted by the reviewer in the final version of the paper. However, we do not consider these works to be technically very related to ours. We outline the main reasons below.
First, we disagree with the claim that “*it appears the only contribution is extending [R1] to adversarial costs (and more restrictive finite action set). Although useful, this result is more limited than claimed.*” Indeed, it is well-known since the seminal work of Mannor et al. [1] that the adversarial setting is far more challenging than the stochastic one. We design a minimax optimal algorithm that provides optimal rates under **stochastic and adversarial** constraints. This was explicitly indicated as one **key question for the BwK set-up** in the Immorlica et al. paper [4].
The papers pointed out by the reviewer only consider the stochastic setting, and the techniques that they propose clearly fail to carry over to the adversarial case, thereby rendering it impossible to give guarantees for BOTH stochastic & adversarial inputs. Extending the techniques of [R1] to the adversarial case is a non-trivial task, and it is unclear whether it is even feasible. In our paper, we use entirely different techniques. Therefore, we strongly disagree with characterizing our results as “*extending the results of [R1] to adversarial costs.*”
Even if we just look at the stochastic setting, there are multiple points in which our work differs from the set of papers suggested by the reviewer. Some notable examples are the following:
* [R2] provide $O(T^{2/3})$ regret guarantees in the case of unknown $\Delta$ ($\Delta$ in [R2] takes the role of our Slater’s parameter), while we always achieve $O(\sqrt{T})$ regret.
* [R3] works with stage-wise constraints and thus has to assume knowledge of a safe action.
Moreover, the stage-wise guarantees are not stronger than ours, since they are defined on the expected costs (expectation with respect to the environment and the inner randomization of the algorithm), while we define violations on the realizations, and thus it is clearly impossible to achieve no violations at all rounds.
### On the relationship with BwK:
We omitted a formal discussion on this matter due to space constraints, but we will include it in the final version of the paper using the extra space. We extend the BwK framework along two key directions:
1. We do not assume knowledge of Slater’s parameter. In the classical BwK framework this is assumed to be known. Indeed, assuming that you know the budget $B$ and the time horizon $T$, the per-round budget constraint is $cost_t \le B/T:=\rho$, and there is a void action which always provides a cost of zero.
2. We can handle both positive and negative costs (some prior literature refers to this problem as bandits with knapsacks with non-monotonic resource utilization; see [2, 3]).
Finally, we observe that we can implement hard constraints (i.e., no violation) by virtually augmenting the costs by $O(1/\sqrt{T})$.
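A back-of-the-envelope sketch of this last remark (our own illustration under the stated assumption of a $c\sqrt{T}$ violation bound, not the authors' analysis): inflating every per-round cost by $c/\sqrt{T}$ adds exactly $c\sqrt{T}$ of slack over $T$ rounds, which is just enough to absorb a cumulative violation of order $\sqrt{T}$ on the true costs.

```python
import math

# Hypothetical illustration: if cumulative violation is at most c * sqrt(T),
# inflating each per-round cost by c / sqrt(T) contributes T * (c / sqrt(T))
# = c * sqrt(T) of total slack, matching the violation bound.
def total_slack(T: int, c: float) -> float:
    return T * (c / math.sqrt(T))

T, c = 10_000, 2.0
violation_bound = c * math.sqrt(T)
assert math.isclose(total_slack(T, c), violation_bound)
```

The constant $c$ and horizon $T$ here are placeholder values chosen purely for the arithmetic check.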
### References
[1] Mannor, Shie, John N. Tsitsiklis, and Jia Yuan Yu. "Online Learning with Sample Path Constraints." Journal of Machine Learning Research 10.3 (2009).
[2] Kumar, Raunak, and Robert Kleinberg. "Non-monotonic resource utilization in the bandits with knapsacks problem." Advances in Neural Information Processing Systems 35 (2022): 19248-19259.
[3] Bernasconi, M., Castiglioni, M., Celli, A., & Fusco, F. (2022). “Bandits with replenishable knapsacks: the best of both worlds.” The Twelfth International Conference on Learning Representations.
[4] Immorlica, N., Sankararaman, K., Schapire, R., & Slivkins, A. (2022). “Adversarial bandits with knapsacks”. Journal of the ACM, 69(6), 1-47.
---
Rebuttal 2:
Title: Thank you for your response; concerns about contribution
Comment: Thank you for the response.
To be clear, my main concern is in the contributions of the proposed algorithm with respect to [R1], which uses a similar algorithm for the case of stochastic costs and stochastic constraints. Indeed, both algorithms "overestimate" the constraint set and then run a regret minimizer within this overestimated set. The main difference is that [R1] uses a UCB based regret minimizer (to handle the stochastic rewards), while this paper uses an adversarial bandit regret minimizer (to handle adversarial rewards). Furthermore, *[R1] gives similar regret and violation bounds ($\tilde{O}(\sqrt{T})$ regret and violation) and also doesn't require Slater's condition.* One of the main claims is that this paper gives the first algorithm with $\tilde{O}(\sqrt{T})$ regret without Slater's condition in stochastic settings, which appears to be incorrect.
As for my question about the relationship with BwK, I was more interested in how exactly this setting generalizes BwK as it wasn't immediately clear to me. I think I cleared this up by looking at prior works so this is not an issue, other than that I believe it needs to be stated more clearly.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for the timely answer. We think the reviewer is referring to our sentence (line 71): “Moreover, in stochastic settings, it is the first algorithm to provide $\tilde O(\sqrt{T})$ regret without requiring Slater’s condition.” We acknowledge that the original statement can be misleading when taken out of context. What we intended to convey is that our algorithm is the first **best-of-both-worlds** approach that, in the stochastic case, does not require the Slater condition while still achieving $\tilde O(\sqrt{T})$ regret and constraint violations. We agree that this should have been communicated more clearly, and we are grateful to the reviewer for bringing this to our attention.
However, we do not agree that our main claim is that “this paper gives the first algorithm with $\tilde{O}(\sqrt{T})$ regret without Slater's condition in stochastic settings”. This is not one of our primary contributions at all! Our main contribution is designing a simple and intuitive algorithm that provides best-of-both-worlds guarantees in bandits with general constraints while also being minimax optimal. This algorithm is UCB-like in the stochastic constraints case, but the reviewer's intuition that we can simply swap a UCB-like regret minimizer for stochastic rewards with an adversarial one (since we need to handle adversarial inputs) is wrong. Our algorithm indeed needs to account for the adversarial nature of the constraints, and it does so by automatically switching to alternative methods of estimating them, adjusting the computation of the estimates and increasing the learning rate as needed. The techniques we developed to prove that this approach works and achieves optimal rates in adversarial and stochastic settings are far from trivial and do not rely on the techniques used in [R1]. We hope that a closer examination of our paper will convince the reviewer of this point.
We therefore propose to revise the claim in line 71 and include a discussion of the related works for the stochastic setting, as suggested by the reviewer, including but not limited to [R1] (we initially overlooked [R1] since it was only recently published at COLT 2024, but we agree that it is relevant to the discussion). | Summary: This work studies the bandit with constraints problem where the authors consider two possible settings for the constraints -- one where the constraints are stochastic, sampled i.i.d. from some unknown distribution, and one where the constraints are adversarial. The rewards are always assumed to be generated by an oblivious adversary. The authors present a novel algorithm which is able to achieve the optimal competitive ratio, up to O(\sqrt{T}) regret in the constraints and objective, in the adversarial constraints setting and O(\sqrt{T}) regret in the constraints and objective in the stochastic setting, which is also min-max optimal.
Strengths: To the best of my knowledge the proposed algorithms and regret bounds are novel. The results are nearly min-max optimal (up to small logarithmic factors in K). The presentation of the work is overall good and there seems to be good technical novelty in developing the algorithms. The challenge in this work for achieving best of both worlds regret for the constraints is in constructing the right empirical estimators for the constraints. This is explained well in Section 6. In particular there is tension between tightly bounding V_t and having an estimator which concentrates well in the stochastic setting. The main novelty seems to be the choice of adaptive step size in the definition of the empirical constraints which leads to the neat proof of Lemma 6.2 and Theorem 6.1.
Weaknesses: I do not find any major weaknesses in this work. Perhaps the authors can do slightly better in highlighting the tension between bounding V_t and concentration of the empirical estimators. It was also not entirely clear to me what is the benefit of viewing the estimator update as a variant of gradient descent.
Minor typos:
line 517: This let us...
line 517: The underscored t subscript is larger
Equation 10: \bar g^{i}a -- brackets are missing
Technical Quality: 4
Clarity: 3
Questions for Authors: Is it possible to show some type of gap-dependent regret guarantees for the regret of the constraint violation using the same type of estimator? Intuitively it seems like there would still be fast enough concentration of the empirical estimator of the constraints, so that infeasible actions are eliminated quickly in case of a large gap, however, it's not completely obvious how to handle some parts of bound for V_t.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations are appropriately addressed and there is no obvious negative societal impact as this work is theoretical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive comments about our paper.
**On the OGD interpretation of the estimator updates:** the main benefit of viewing the update as a variant of gradient descent is that it allows us to simplify the analysis in the proofs. We will add a remark in the main paper to clarify this point.
**On the instance-dependent result in the stochastic case:** Note that it is not trivial to find a good definition of “gap” in settings with long-term constraints. The presence of constraints implies that the optimal strategy may be a mixed distribution, and thus, a “second best” arm may not exist. There are some works that try to combine a logarithmic dependence on $T$ with some instance-dependent quantities (see e.g. [1,2]). However, these works require strong assumptions to establish such dependencies, making this direction less explored compared to the standard MAB setting. We will add some comments about this in the final version of the paper.
[1] Sankararaman, K. A., & Slivkins, A. (2021). “Bandits with knapsacks beyond the worst case.” Advances in Neural Information Processing Systems, 34, 23191-23204.
[2] Li, Xiaocheng, Chunlin Sun, and Yinyu Ye. “The symmetry between arms and knapsacks: A primal-dual approach for bandits with knapsacks.” International Conference on Machine Learning. PMLR, 2021. | Summary: This paper studies multi-armed bandits with cumulative cost constraints, with the focus of designing a 'best of both worlds' algorithm that has small constraint violations in both the stochastic and adversarial settings (where the constraints as well as rewards can vary with time arbitrarily), and attains either low regret for the stochastic setting, or low $\alpha$-regret for the adversarial setting (i.e., ensures that the reward accrued is at least $\alpha$ times the best-in-hindsight reward accrued by a constant unconstrained assignment probability x, up to $o(T)$ terms). The main desideratum is to design methods that avoid explicit knowledge of a Slater parameter, and avoid polynomial dependence on m, the number of unknown constraints, in the regret and violations, both of which are cons of prior work using the Lagrangian BwK design.
Throughout, the setup adopted is linearised, i.e., at each time, the authors select a distribution $x_t$ over the actions [1:K], and the action $a_t$ is selected by sampling from $x_t$. A bandit feedback of both the reward, $f_t(a)$, and the constraint violations $g_{t}^i(a), i \in [1:m]$ is assumed, where these functions are stochastic perturbations of an expected function in the stochastic setting, and are selected arbitrarily in the adversarial setting. Throughout, it is assumed that $f_t \in [0,1]$ and $g\_t^i \in [-1,1]$. The constraint is of the form $\forall i, g_t^i \le 0$. Net violations are defined as $V_T = \max_{i} V_T^{i}, $ where $V_T^i = \sum_t g_t^i(a_t).$ Note that this equals $\sum_t \langle x_t, g_t^i\rangle,$ up to a $\sqrt{T}$ concentration term. In the adversarial case, the paper studies the regret $\frac{\rho}{1+\rho} \max_{x} \sum f_t(x) - \sum f_t(a_t),$ where $\rho$ is the minimax Slater parameter $\rho:= \max_a \min_{i,t} (-g_{t}^i(a))$.
The overall strategy adopted by the paper is very natural: for each time, the method constructs a set of "plausibly" feasible distributions as $\widehat{\mathcal{X}}\_t = \{x \in \Delta\_K : \forall i, \langle x, \hat{g}\_t^i - b\_t\rangle \le 0\}$, where $\hat{g}\_t^i$ serve as "estimates" for $g\_t^i,$ and $b$ is a nonnegative 'bonus' that ensures optimism in the stochastic scenario. The method then just selects actions from $\widehat{\mathcal{X}}\_t$, using a (small modification of) the EXP-IX strategy, thus ensuring low regret relative to any constant $x \in \bigcap \widehat{\mathcal{X}}\_t$. The main results then need to select $\hat{g}_t^i$ and $b_t$ in such a way that
1. In the stochastic case, $\mathcal{X}^* = \{x : \forall i, \langle \mathbb{E}[g_t^i], x\rangle \le 0\}$ remains within this intersection of $\widehat{\mathcal{X}}s$, and
2. In the adversarial case, $\frac{\delta\_{a^{\emptyset}}}{1+\rho} + \frac{\rho}{1+\rho} \mathcal{X}$ remains within the same.
3. The net violation is small.
As is natural, $b_t(a)$ is set to just $\sqrt{ \log(mT^2/\delta)/n_{t}(a)},$ where $n_t(a)$ is the number of times action $a$ has been played up to time $t$. The $\hat{g}\_t^i$ are estimated using OGD as $$ \hat{g}\_{t+1}^i(a) = \hat{g}\_t^i(a)\mathbf{1}\{a\_t \neq a\} + ( (1-\eta\_{t}^i(a)) \hat{g}\_t^i(a) + \eta\_t^i(a) g\_t^i(a))\mathbf{1}\{a\_t = a\}.$$ The chief question becomes how to select the learning rates $\eta\_t^i(a)$. The basic tension is illustrated thus: in the stochastic case, we'd like $\eta\_t^i$ to be roughly $1/n\_{t}(a),$ so that $\hat{g}\_t^i(a)$ is essentially the empirical average of the feedback. However, in the adversarial case, such a rate would be way too slow, and one needs quicker adaptation. The authors elegantly resolve this dilemma by setting $ \eta\_t^{i}(a) =(1+\Gamma\_t^i)/n\_t(a),$ where $$ \Gamma\_t^i = \min( \zeta\_t , \max( 0, V\_t^i - \zeta\_t )), \textrm{ where } \zeta\_t = 21\sqrt{Kt \log(mT^2/\delta) }$$
For the stochastic case, the analysis directly shows that, whp, $V\_t^i$ never crosses the $21\sqrt{Kt}$ threshold, and thus the desired $\eta\_t = 1/n\_t(a)$ is retained. For the adversarial case, the analysis proceeds by first controlling the violation over any interval in terms of the updates to $\hat{g}\_t$ over the same, and then arguing via contradiction by showing that if $V_\tau/\sqrt{K\tau}$ is ever too large, then $\eta_t$ is large over a significantly sized interval before it, which in turn implies strong adaptation and a small $V_\tau$. Finally, point 1 follows by the choice of $b_t(a) \sim 1/\sqrt{n\_t(a)}$, and 2 follows directly from the boundedness of the $g\_t^i$, by interpreting $\hat{g}_t^i$ as a weighted sum of the observed $g\_t^i(a_t)$s with net weight $1$.
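A minimal sketch of the estimator update and adaptive learning rate summarised above (the function names and toy interface are mine, not the paper's; constants follow the display in the summary):

```python
import math

def learning_rate(n_a, V_i, t, K, m, T, delta=0.05):
    """Adaptive rate eta_t^i(a) = (1 + Gamma_t^i) / n_t(a).

    Gamma_t^i = min(zeta_t, max(0, V_t^i - zeta_t)): it stays 0 while the
    running violation V_t^i is below the threshold zeta_t (stochastic regime,
    so the estimate is an empirical mean), and grows once the violation is
    large (adversarial regime, forcing faster adaptation).
    """
    zeta = 21.0 * math.sqrt(K * t * math.log(m * T**2 / delta))
    gamma = min(zeta, max(0.0, V_i - zeta))
    return (1.0 + gamma) / n_a

def update_estimate(g_hat, a_t, g_obs, eta):
    """OGD-style update: only the played arm's estimate moves."""
    g_hat = dict(g_hat)
    g_hat[a_t] = (1.0 - eta) * g_hat[a_t] + eta * g_obs
    return g_hat
```

With a small violation, `learning_rate` reduces to `1 / n_t(a)`, recovering the empirical-average behaviour described for the stochastic case.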
Strengths: The subject of the paper is of course very pertinent to the online learning subcommunity at Neurips. I think the technical contribution of the paper, i.e., the design of a BOBW method for bandits with cumulative constraints that avoids knowledge of Slater parameters, is quite interesting. Furthermore, the removal of the poly(m) factors from the ensuing bounds is an important improvement in the same.
I find the paper to be quite well written. The thought process behind the design of the method is well explained, and the intuition behind the design of the learning rate is clear. Prior work is contextualised well, and the contribution of the paper is clearly outlined. To my understanding, the proofs are correct.
The design of the scheme, and the analysis, are simple, but well motivated and elegant. Altogether, while I don't see this paper as especially groundbreaking, I do find it to be a well executed contribution that develops a simple idea and does interesting things with it to clean up the theory of bandits with cumulative constraints.
Weaknesses: Perhaps the main weakness is that the structure of $\Gamma\_t^i$ bakes in the large constant 21 (and this of course leads to the large constant 53 in the bound). I find it natural to wonder if this constant is necessary, or if it can be set in a path-dependent way that avoids both the large constant, and potentially offers improvements if the $g$s are somehow 'nice'. That said, I think that such a lacuna is not a major weakness, and such questions are fine being left to follow-up work.
Technical Quality: 4
Clarity: 3
Questions for Authors: Comment: the round-wise violation bound on V_T^+ of Theorem 7.3 has previously been observed for the stochastic setup in the "safe bandit" literature, and this point of contact should perhaps be acknowledged. See, e.g., [1,2]. The method these papers study is Agarwal and Devanur's style of "doubly-optimistic" selection.
[1]: Chen et al., Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk, ICML 22
[2]: Gangrade et al., Safe Linear Bandits over Unknown Polytopes, COLT 24
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: This is fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive comments about our work and for pointing out the line of research on safe bandits. We will acknowledge it in the final version of the paper.
On the constants: we didn’t try to optimize the constants as we were mainly interested in achieving asymptotically optimal regret rates. We agree that it would be interesting to explore the methodology suggested by the Reviewer; we will add it as a future research direction. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | Accept (poster) | Summary: This work introduces a novel training-free paradigm which uses the retrieval augmentation to expand the knowledge boundaries of existing large restoration models by incorporating reference images as external knowledge to facilitate the restoration of high-fidelity details. The authors propose the nearest neighborhood lookup to retrieve reference images, followed by the cross-image injection to fuse external knowledge into the LRM. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed method.
Strengths: 1. Although diffusion-based restoration models can produce realistic results, their inherent randomness often leads to outputs that are not faithful to the original scene. This work considers an approach by introducing additional reference images to mitigate this challenge.
2. The idea of this work is interesting. Utilizing RAG techniques, which have been widely applied in NLP, for low-level computer vision tasks is promising. The authors provide a detailed quantitative and qualitative analysis of the working mechanism of existing LRMs, based on which they propose specific designs to inject external knowledge into LRMs.
3. The authors propose specific solutions to address challenges such as domain preference issues and spatial misalignment when directly using high-quality reference images. The framework appears to be generic and requires no training, making the proposed method easily applicable to various existing LRMs.
Weaknesses: 1. In Table 1, why is there a variation in the performance improvement when applying the proposed method to different LRMs? For example, there is a 0.38 dB PSNR improvement for SeeSR but only a 0.03 dB improvement for SUPIR. Additionally, why do the restored images produced by different methods still seem different when using the proposed ReFIR?
2. It seems the proposed method requires additional reference images as input, which may increase inference costs and time.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the spatial adaptive alignment module, considering the size of the similarity matrix $sim$ as $HW \times HW$, which requires computing per-pixel similarity, does this introduce significant inference costs? Will the authors open-source the related details and codes?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the above weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [Q1: Variation of the outputs from different LRMs]
> In Table 1, why there is a variation of the performance improvement when applying the proposed method to different LRMs? For example, there is a 0.38 dB PSNR improvement in SeeSR but only a 0.03 dB improvement in SUPIR. Additionally, why do the restored images produced by different methods seems still different when using the proposed ReFIR?
In fact, we have also noticed this phenomenon, and we think it can be well explained based on Fig. 8 in the main paper. As shown in Fig. 8, for the latent at the t-th time step, there are two forces in different directions pulling it to produce the latent at the next (t−1)-th time step. One force is from the internal knowledge of the frozen weights in LRMs, and the other is the external knowledge from the retrieved reference image. Therefore, although the external knowledge is the same for different LRMs (the reference image is the same), their different internal knowledge (different pre-trained parameters) eventually results in different LRMs producing different restoration results.
### [Q2: Inference costs from additional reference image]
> It seems the proposed method requires additional reference images as input, which may increase inference costs and time.
To ensure efficient implementation, we use batch acceleration to process LR and Ref in parallel, whose cost is roughly similar to increasing the batch size from 1 to 2. Moreover, although using the reference image inevitably incurs computational cost compared to directly inputting the LR image for inference, the additional reference can mitigate the hallucination of LRMs, resulting in significant performance gains. We provide a comprehensive comparison regarding performance and efficiency as follows.
| setup | NIQE | MUSIQ | CLIPIQA | #param | GPU memory | Inference time |
|-------------|---------|--------|---------|--------|------------|------------------|
| SeeSR-NoRef | 4.7432 | 55.54 | 0.6575 | 2.04B | 24.4G | 76.5s |
| SeeSR-ReFIR | 4.4566 | 57.13 | 0.6732 | 2.04B | 40.9G | 170.7s |
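The batch acceleration mentioned above can be sketched as follows (a toy stand-in for the denoiser; the function names are illustrative, not the actual ReFIR implementation):

```python
import numpy as np

def forward(batch):
    """Toy stand-in for one LRM denoising step on a batch of latents.
    (The real model is a diffusion UNet; this just illustrates batching.)"""
    return batch * 0.5 + 1.0

def step_with_reference(lat_lr, lat_ref):
    """Run the LR and Ref chains in a single batched forward pass:
    the cost is roughly that of batch size 2, not two separate calls."""
    batch = np.stack([lat_lr, lat_ref])   # shape (2, ...)
    out = forward(batch)
    return out[0], out[1]                 # LR-chain output, Ref-chain output
```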
### [Q3: Efficiency of the adaptive alignment module]
> In the spatial adaptive alignment module, considering the size of the similarity matrix sim as HW×HW, which requires computing per-pixel similarity, does this introduce significant inference costs?
We are sorry for the confusion. In fact, the computation of the similarity matrix is quite efficient. Since we find the pixel-wise similarity between LR and Ref does not change much across different layers, we compute an **all-layer-shared** similarity matrix for each UNet block, resulting in only 12 calculations per timestep. We will make this clearer in the revision.
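As an illustration of a shared per-pixel similarity computation (a sketch with hypothetical shapes and function names, not the actual implementation):

```python
import numpy as np

def shared_similarity(feat_lr, feat_ref):
    """Per-pixel cosine similarity between LR and Ref features.

    feat_*: (H*W, C) flattened feature maps. Returns an (HW, HW) matrix,
    computed once per UNet block and shared by all its layers, as described
    in the answer above.
    """
    a = feat_lr / (np.linalg.norm(feat_lr, axis=1, keepdims=True) + 1e-8)
    b = feat_ref / (np.linalg.norm(feat_ref, axis=1, keepdims=True) + 1e-8)
    return a @ b.T

def align_ref(feat_ref, sim):
    """Spatially rearrange Ref features toward the LR layout by
    picking, for each LR pixel, its best-matching Ref pixel."""
    idx = sim.argmax(axis=1)
    return feat_ref[idx]
```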
### [Q4: Code open-source]
> Will the authors open-source the related details and codes?
Yes! We will release all the code when the paper is accepted. | Summary: The paper introduces ReFIR, a novel method designed to enhance the capabilities of Large Restoration Models by incorporating external knowledge through the retrieval of high-quality, content-relevant images.
The main contributions are to (1) give both quantitative and qualitative results on how existing LRMs work, and (2) propose a novel training-free and generic solution to alleviate the hallucination problem of LRMs.
Strengths: 1. The idea of using external data representation instead of the model parameters seems interesting, which can be used in parallel with other model-oriented approaches.
2. The experiments are solid and the performance is noteworthy. The authors apply the proposed technique to various LRMs and compare with the state of the art under settings of different difficulty levels. From Tables 1 and 2, the ReFIR method achieves significant gains on both fidelity and perceptual metrics, demonstrating its superiority.
3. The framework can be potentially used for multiple Diffusion-based restoration models without additional training, making it a cost-effective solution.
4. The paper is well organized and easy to follow.
Weaknesses: 1. Difference from other methods. The proposed method seems relevant to some works on image editing tasks, such as MasaCtrl and Prompt-to-Prompt, which also use training-free techniques to modify the behavior of the diffusion model. A detailed explanation of the differences between the proposed ReFIR method and these methods would be beneficial.
2. Dependency on the Quality of Retrieved Images. From Table5, it seems the performance of ReFIR heavily relies on the relevance and quality of the retrieved images. If the retrieval system performs sub-optimally, it could adversely affect the restoration outcomes.
3. Complexity in Implementation. The proposed cross-image injection and spatial adaptive gating mechanisms introduce additional operations; moreover, the input also contains extra reference images, which might challenge efficient implementation in practice.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the retrieval system scale with increasingly large HQ retrieval datasets, and what are the computational implications of scaling?
2. How might different retrieval algorithms (beyond nearest neighbor lookups) impact the performance of ReFIR?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Further exploration into the scalability, efficiency, and the robustness of the proposed method as mentioned in the Weakness would be helpful for broader application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [Q1: Difference from other works]
> Difference form other methods. The proposed method seems relevant to some of works in image editing tasks, such as MasaCrtl and Prompt2prompt, which also use training-free technique to modify the behavior of the diffusion model. A detailed explanation about the differences between the proposed ReFIR method and these methods would be beneficial.
Although ReFIR and previous image editing works are both training-free, we point out that they differ in the following ways.
- The specific techniques are different. Although both modify the attention layer, our ReFIR further introduces the Spatial Adaptive Gating to solve the spatial misalignment between LR and Ref, and proposes strategies to allow the injection of multiple reference images. In addition, we also propose the novel Distribution Alignment to alleviate the domain gap between two diffusion process chains.
- The goals are different. Previous image editing work modifies the attention layer for controllable generation, while our ReFIR aims to inject external knowledge to alleviate the hallucination problem of existing LRMs.
- Finally, in addition to incorporating reference images through training-free attention modification, another contribution of this work is the proposal of a feasible retrieval system to facilitate the acquisition of external knowledge from retrieved databases.
### [Q2: Dependency on the Quality of Retrieved Images]
> From Table5, it seems the performance of ReFIR heavily relies on the relevance and quality of the retrieved images. If the retrieval system performs sub-optimally, it could adversely affect the restoration outcomes.
To make our ReFIR more robust when the relevance and quality of the retrieved images are poor, we further explore techniques including a fallback strategy and an adaptive filtering policy. Please see the global author rebuttal for more details.
Using these techniques can further improve the robustness of ReFIR when the retrieval system performs sub-optimally. We will explore more techniques for improvements in the future.
### [Q3: Complexity in implementation]
> The proposed cross-image injection and spatial adaptive gating mechanisms introduce additional operation. Moreover, the input image also contains extra reference images, which might challenge efficient implementation in practice.
Actually, since the modification of the attention layer only happens in the decoder in the last 20 timesteps, which accounts for only 12% of all the layers, the cross-image injection and spatial adaptive gating incur an **almost negligible** increase in computational cost. We provide the specific cost comparison as follows.
| setup | input | #param | GPU memory | Inference time |
|---------------------------|-------------------|--------|------------|------------------|
| w/o cross-image injection and spatial adaptive gating | batchsize=2 | 3.87B | 51.1G | 312.2s |
| w/ cross-image injection and spatial adaptive gating | 1xLR + 1xRef | 3.87B | 51.4G | 322.8s |
Since the LR chain only receives the corresponding same-layer features from the Ref chain, this property allows **batch acceleration** for efficient implementation. Specifically, the input batch size is increased from 1 (LR only in the original NoRef LRM) to 2 (LR+Ref in our ReFIR). Therefore, the batch-parallel computation of both LR and Ref makes it efficient for practical implementation.
### [Q4: Scaling up with a larger retrieval database]
> How does the retrieval system scale with increasingly large HQ retrieval datasets, and what are the computational implications of scaling?
As suggested, we use ImageNet as the larger database for scaling up the retrieval system. The results are as follows.
| setup | NIQE | MUSIQ | CLIPIQA |
|----------------------|---------|--------|---------|
| ImageNet as Retrieval | 4.4233 | 57.14 | 0.6770 |
| baseline | 4.4566 | 57.13 | 0.6732 |
It can be seen that increasing the dataset size can improve relevance and thus improves performance. Furthermore, since the calculation of similarity between the LR image and the retrieval dataset is parallel, we find that scaling up the retrieval dataset brings almost negligible computational cost.
### [Q5: Exploring different retrieval algorithms]
> How might different retrieval algorithms (beyond nearest neighbor lookups) impact the performance of ReFIR?
As suggested, we use the TopK retrieval algorithm to explore the impact of different retrieval algorithms. Specifically, we select the top-3 most similar images as reference inputs, and use the multi-reference injection technique to allow multiple reference images. The experimental results are shown below.
| setup | NIQE | MUSIQ | CLIPIQA |
|-----------|---------|--------|---------|
| Top3-reference | 4.4217 | 57.25 | 0.6749 |
| baseline | 4.4566 | 57.13 | 0.6732 |
It can be seen that using the TopK retrieval algorithm can bring more relevant references, thus improving performance.
Strengths: 1. The paper makes a valuable contribution by demonstrating how introducing external knowledge via CLIP filtering can effectively address the hallucination problem in Diffusion-based super-resolution models.
2. The paper presents promising quantitative and qualitative results that support the effectiveness of the proposed method.
Weaknesses: 1. Figures 5 and 6 would benefit from including the reference images selected by the model alongside the processed results. This would allow for a clearer evaluation of the effectiveness of the reference image selection process. Consider revising the figures to include the reference images or provide additional visualizations that demonstrate the impact of the reference information on the final output.
2. Tables 1 and 2 highlight a significant performance boost from ReFIR on the RefSR dataset, which diminishes in real-world scenarios. This raises questions about ReFIR's ability to handle diverse real-world data. Can the authors provide additional evidence or analysis to demonstrate ReFIR's effectiveness in retrieving relevant reference images for real-world scenarios, even when the similarity might not be as high as in the RefSR dataset? Or, could we generate reference images automatically just like CoSeR? A comparison and more detailed discussion are necessary.
3. The appendix appears to contain crucial information about the reference image retrieval stage, such as the selection criteria for the feature extractor. This information should be moved to the main body of the paper for better clarity and transparency.
4. The paper should discuss the computational cost and time consumption associated with the "Source Reference Chain" process. While the performance improvement is valuable, quantifying the trade-off in terms of computational resources is essential.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The paper does not explicitly address the potential issue of similar features being missed due to image chunking in pre-trained SD models like SeeSR and SUPIR. Consider discussing whether the model retrieves features from the corresponding chunk of the reference image or employs a different strategy to handle chunked inputs.
2. It would be helpful to clarify whether the features used for matching in the reference image retrieval process are pre-computed or computed online.
3. The paper explores the concept of internal knowledge (learned by the LRM) and external knowledge (introduced through reference images). While directly training the LRM on the reference image dataset seems like an alternative approach, it's important to consider the potential drawbacks, such as overfitting to the specific reference data and reduced generalizability. Discussing these trade-offs would provide a more nuanced understanding of the benefits of using reference images as external knowledge.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: They have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [Q1: Reference image presentation]
Thanks for your advice, we will add the retrieved image in Fig.5 and Fig.6 in the revision to improve the presentation quality.
### [Q2: Improving ReFIR's ability in real-world scenarios]
> Tables 1 and 2 highlight a significant performance ... detailed discussion is necessary.
To further improve the ability of ReFIR in real-world applications, we further explore several techniques to improve its robustness. The related analysis and experiments are shown in the **global author rebuttal**.
Equipped with the fallback mechanisms (not using a reference, or generating a reference like CoSeR) as well as the adaptive filtering strategies (relevance/quality/task-based filtering), our ReFIR can alleviate the dilemma when highly correlated, high-quality images are scarce or unavailable, enhancing its ability in real-world scenarios.
### [Q3: Moving the retrieval stage details to main body]
Due to the 9-page limit for submissions this year (1 page less than usual), we are sorry for placing the image retrieval details in the appendix. We will add more details about the image retrieval stage to the main body in the revision.
### [Q4: Computaion and time cost from Source Reference Chain]
> The paper should discuss the computational cost and time consumption associated with the "Source Reference Chain" process. While the performance improvement is valuable, quantifying the trade-off in terms of computational resources is essential.
Since the "Target Restoration Chain" only receives the corresponding same time-step and layer features from the "Source Reference Chain", this property thus allow the **batch acceleration** for efficient implementation. Specifically, these two chains are implemented sharing one LRM weight, and simply increasing the batch size from 1 (LR only in original NoRef LRM) to 2 (LR+Ref in our ReFIR).
Therefore, the computational cost and time consumption of ReFIR are roughly similar to those of the original LRM with batch size set to 2, with a slight overhead from the additional cross-image injection. The comparison of model complexity before and after incorporating our ReFIR is given in Table 7 of the Appendix, and we also give the result here:
Table A. Comparison of computation and time cost. The input resolution is $2048\times2048$, evaluated on one 80G A100 GPU.
| Method | input | #param | GPU memory | Inference time |
|--------------|----------------|--------|------------|------------------|
| SeeSR| batchsize=2 | 2.04B | 40.2G| 160.2s|
| SUPIR| batchsize=2 | 3.87B | 51.1G| 312.2s|
| SeeSR+ReFIR | 1xLR + 1xRef | 2.04B | 40.9G | 170.7s|
| SUPIR+ReFIR | 1xLR + 1xRef | 3.87B | 51.4G | 322.8s|
### [Q5: Dicussion on patchfied images ]
> Consider discussing whether the model retrieves features from the corresponding chunk of the reference image or employs a different strategy to handle chunked inputs.
Patchify is usually used when the input resolution is large. Although it is intuitive to use the corresponding patch from the large reference image as the reference branch's input, this can inevitably incur significant performance degradation when the LR and Ref patches exhibit spatial misalignment.
Here, we introduce another solution which simply uses existing techniques to deal with this problem. Specifically, we employ the multi-reference injection, shown in Appendix A, in which we use all the patches from the reference image to guide the restoration of a given LR input patch. In this way, the LR image can maintain a global receptive field over the Ref. In our experiment, we choose the number of patches as 4 for a 2048x2048 input from the RealPhoto60 dataset, resulting in 4 small patches of size 512x512. The results are shown below.
| setup | GPU cost | NIQE | MUSIQ | CLIPIQA |
|----------------|----------|---------|--------|---------------|
| MultiRef-for-patchfied images | 29.8G | 4.6543 | 56.20 | 0.6598 |
| baseline | 40.6G | 4.4566 | 57.13 | 0.6732 |
To some extent, introducing the multi-reference injection alleviates the performance drop while maintaining a modest memory footprint. However, as in existing SD-based restoration works, the inherent trade-off between a whole-image receptive field and low GPU cost still exists. Our framework may benefit from future acceleration work.
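The multi-reference pairing can be sketched with toy NumPy arrays (a hypothetical illustration of the data flow only, not the actual ReFIR pipeline): each LR patch is restored with *all* reference patches as its reference branch, preserving a global receptive field over the Ref image.

```python
import numpy as np

def patchify(img, patch):
    """Split an HxW image into non-overlapping patch x patch tiles."""
    h, w = img.shape
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h, patch)
            for j in range(0, w, patch)]

img_lr = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "LR image"
img_ref = np.ones((4, 4))                           # toy reference image

lr_patches = patchify(img_lr, 2)
ref_patches = patchify(img_ref, 2)

# Multi-reference injection: every LR patch is paired with ALL
# reference patches, rather than only the spatially corresponding
# one, which is what breaks under spatial misalignment.
jobs = [(lr_p, ref_patches) for lr_p in lr_patches]
assert len(jobs) == 4 and all(len(refs) == 4 for _, refs in jobs)
```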
### [Q6: Computation mode for retrieval]
> It would be helpful to clarify whether the features used for matching in the reference image retrieval process are pre-computed or computed online.
Since the retrieval database is agnostic to the specific LR inputs, we **pre-compute** the retrieval vectors for fast inference. We will make this clearer in the revision.
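A minimal sketch of this pre-computation, assuming a generic embedding database (random toy vectors here; the dimensions and the `retrieve` helper are illustrative, not the actual ReFIR retrieval code): database embeddings are normalized once offline, so each online query is a single matrix-vector product giving cosine similarity to every entry at once.

```python
import numpy as np

# Offline: pre-compute and L2-normalize the embedding of every
# database image once (the database is agnostic to LR inputs).
rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64))
db /= np.linalg.norm(db, axis=1, keepdims=True)

def retrieve(query_vec, k=1):
    """Online: one matrix-vector product yields the cosine similarity
    to every database entry in parallel."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = db @ q
    order = np.argsort(sims)[::-1][:k]
    return order, sims[order]

# A slightly perturbed copy of entry 42 retrieves entry 42.
idx, sims = retrieve(db[42] + 0.01 * rng.standard_normal(64))
assert idx[0] == 42
```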
### [Q7: Converting reference images into internal knowledge]
Because backpropagation through an LRM requires enormous GPU resources (e.g., 64 NVIDIA A6000 GPUs for training SUPIR), we cannot practically convert reference images into internal knowledge through full fine-tuning.
However, we point out that the idea of injecting downstream knowledge through fine-tuning has been explored in the image generation community, e.g., DreamBooth. During subject-driven adaptation, the DreamBooth authors also encountered the diffusion model's overfitting to specific subjects, and they proposed a prior preservation loss to solve this problem.
For image restoration, it is difficult to obtain calibration datasets for prior preservation, since the content of LR images can vary widely, unlike the analogous setting in image generation. Moreover, in real-world applications it is also difficult to obtain scene-specific data for training in advance, and even when such data are available, they require additional training time. In contrast, our ReFIR framework does not need scene-specific data in advance and adapts to specific scenes in a training-free manner.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank the authors for their efforts in the rebuttal. It addresses several of my concerns.
---
Rebuttal 2:
Title: Response to Reviewer
Comment: Thank you very much for your positive feedback. We are delighted that our responses have addressed your concerns. We will further improve our work in revision based on the reviewers' comments and discussions. | Summary: The paper titled "ReFIR: Grounding Large Restoration Models with Retrieval Augmentation" introduces a novel framework called Retrieval-augmented Framework for Image Restoration (ReFIR). This method addresses a significant issue in diffusion-based Large Restoration Models (LRMs) — the tendency to generate hallucinatory outputs when faced with heavily degraded images, akin to issues faced by large language models. ReFIR mitigates this by incorporating external high-quality, content-relevant images retrieved via a nearest neighbor lookup in the semantic embedding space. These images serve as a source of external knowledge, enabling the restoration model to produce more accurate and faithful details. The framework modifies the self-attention layers of existing LRMs to integrate textures from the retrieved images, significantly enhancing the model's ability to restore images without any additional training required. Extensive experiments demonstrate that ReFIR achieves realistic and high-fidelity restoration results, confirming its effectiveness across various existing LRMs. This training-free, adaptable approach significantly expands the knowledge boundary of LRMs, offering a promising solution to their inherent limitations.
Strengths: * The paper is well-organized and articulates complex concepts with clarity, making it accessible to both experts and those new to the field. The significance of ReFIR lies in its potential to profoundly impact practical applications involving image restoration, such as in digital forensics, media restoration, and medical imaging, where accuracy and fidelity are paramount. This approach offers a scalable and adaptable solution that can be applied across various existing models without the need for retraining, highlighting its utility in improving the practical deployment of deep learning models for image restoration.
* The ReFIR framework introduces a unique method of integrating retrieval-augmented techniques into image restoration models, addressing the hallucination dilemma often faced by large restoration models. By borrowing the concept of retrieval-augmented generation from natural language processing and adapting it for image processing, ReFIR creatively leverages external image databases to enhance the detail and fidelity of restored images, significantly expanding the utility of existing large-scale models.
* The implementation of ReFIR demonstrates robust quality through extensive testing and innovative adaptations. The framework modifies existing large restoration models by integrating high-quality, content-relevant images during the restoration process. This method has shown to significantly enhance the performance of these models, as validated by comprehensive experiments that not only improve quantitative metrics like PSNR and SSIM but also yield visually more accurate restorations.
Weaknesses: * The effectiveness of ReFIR is significantly dependent on the quality and relevance of the images retrieved from external databases. This reliance could limit the framework's effectiveness in scenarios where highly relevant and high-quality reference images are scarce or unavailable. Addressing this, the authors could explore mechanisms to assess and ensure the relevance and quality of retrieved images dynamically during the restoration process or develop fallback strategies when suitable images are not found.
* While the paper mentions that the framework does not require additional training, it does not thoroughly address the potential increases in computational overhead and latency introduced by the retrieval process and the modification of self-attention layers. This could be particularly challenging when deploying in real-time or resource-constrained environments. Future iterations of the framework could benefit from a detailed analysis of computational costs and potential optimizations to streamline the retrieval and integration processes.
* The experimental setup primarily demonstrates the effectiveness of ReFIR under controlled conditions with specific types of image degradations. To fully validate the robustness and generalizability of the framework, additional testing across a broader spectrum of real-world scenarios and more diverse degradation types would be beneficial. This would help in understanding the limitations and operational range of ReFIR in practical applications, ensuring it can effectively handle unexpected or uncommon degradation patterns.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Given that the paper primarily demonstrates ReFIR's performance on specific types of image degradations, can the authors clarify how well the framework performs across a broader spectrum of degradation scenarios, including less common or more complex types? Further elaboration on its performance in varied real-world conditions could significantly clarify the framework’s versatility and robustness.
* Can the authors provide more details on the computational efficiency of the retrieval process integrated within the ReFIR framework? Specifically, what are the impacts on processing time and computational resources when applying ReFIR to large-scale restoration models, and are there strategies in place to optimize this process in real-time applications?
* In cases where the retrieval process fails to find sufficiently relevant or high-quality images, what strategies does ReFIR employ to ensure the quality of the restoration output? A detailed explanation of fallback mechanisms or alternative approaches when ideal reference images are not available would be helpful in assessing the framework's reliability and functionality in less-than-ideal conditions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Since ReFIR relies on external datasets for retrieving reference images, there is a potential for bias if those datasets are not diverse or inclusive of various demographic groups and scenarios. This could inadvertently lead to biases in restored images, particularly in sensitive applications. The authors should discuss the measures taken to ensure the diversity of the datasets and address potential biases in the image retrieval and restoration process.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness', 'Ethics review needed: Discrimination, bias, and fairness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [Q1: Reliance on the image quality and relevance]
> The effectiveness of ReFIR is significantly dependent on the quality and relevance of the images retrieved from external databases. This reliance could limit the framework's effectiveness in scenarios where highly relevant and high-quality reference images are scarce or unavailable. Addressing this, the authors could explore mechanisms to assess and ensure the relevance and quality of retrieved images dynamically during the restoration process or develop fallback strategies when suitable images are not found.
We understand your concerns, and we have thus conducted a thorough experiment presented in the **global author rebuttal part**.
With these newly developed fallback and filtering strategies, our ReFIR is further improved under the real-world retrieval settings, relieving the dependence on retrieval system.
### [Q2: Computational overhead from retrieval and attention modification]
> While the paper mentions that the framework does not require additional training, it does not thoroughly address the potential increases in computational overhead and latency introduced by the retrieval process and the modification of self-attention layers. This could be particularly challenging when deploying in real-time or resource-constrained environments. Future iterations of the framework could benefit from a detailed analysis of computational costs and potential optimizations to streamline the retrieval and integration processes.
To reduce the computational overhead of the retrieval process, we **pre-calculate** the feature vectors of all images in the retrieval database before inference. Furthermore, the cosine similarity between the LR image vector and all retrieval vectors is computed **in parallel**. These strategies result in an **almost negligible** computational overhead (less than 3% of inference time).
The modification of self-attention layers only happens in the last 20 timesteps in the decoder layers, i.e., **only 12%** of the attention layers are modified while the rest are kept intact.
This analysis is also borne out in practice: we find these two processes take up <5% of inference time, with most of the computational cost coming from the original LRM. Future LRM acceleration (e.g., pruning, quantization) will benefit our ReFIR, and we will explore more efficient implementations in the future.
### [Q3: Testing on broader degradations]
> The experimental setup primarily demonstrates the effectiveness of ReFIR under controlled conditions with specific types of image degradations. To fully validate the robustness and generalizability of the framework, additional testing across a broader spectrum of real-world scenarios and more diverse degradation types would be beneficial. This would help in understanding the limitations and operational range of ReFIR in practical applications, ensuring it can effectively handle unexpected or uncommon degradation patterns.
As suggested, we test the robustness of our ReFIR framework on two additional challenging real-world mixed degradations, i.e., low-resolution $\times$4 + noise $\sigma$=50, as well as low-resolution $\times4$ + JPEG $q=30$. The results are as follows.
TableA low-resolution$\times$4+noise$\sigma$=30 on the WR-SR dataset
| Metric | SeeSR | SeeSR+ReFIR |
|--------|-------|-------------|
| PSNR | 23.06 | 23.16 |
| SSIM | 0.6273| 0.6298 |
| LPIPS | 0.2710| 0.2698 |
| NIQE | 3.8580| 3.7266 |
| FID | 44.10 | 42.78 |
TableB low-resolution$\times4$+JPEG$q=30$ on the WR-SR dataset
| Metric | SeeSR | SeeSR+ReFIR |
|--------|----------|-------------|
| PSNR | 23.84 | 23.91 |
| SSIM | 0.6679 | 0.6686 |
| LPIPS | 0.2291 | 0.2287 |
| NIQE | 3.8723 | 3.8148 |
| FID | 36.74 | 35.56 |
It can be seen that our ReFIR framework maintains its effectiveness on real-world hybrid degradations, demonstrating its robustness and generalizability.
### [Q4: Possible bias from the retrieval database]
We agree with you. Currently, we only use publicly available, high-quality academic data as the retrieval database. In the future, we will pay attention to content diversity and potential bias when constructing larger-scale retrieval data; e.g., we could use VLMs or LLMs for automatic filtering followed by human review. | Rebuttal 1:
Rebuttal: ## Global Author Rebuttal
### **[1. Remarks by authors]**
We would like to express our sincere gratitude to all the reviewers for taking the time to review our work and for providing fruitful feedback that has definitely improved the paper. We are encouraged that they find our method
- "offers a scalable and adaptable solution that can be applied across various existing models without the need for retraining" (Reviewer NDC8, Reviewer UnRT, Reviewer qg3y)
- "addressing the hallucination dilemma often faced by large restoration models" (Reviewer NDC8)
- "creatively leverages external image databases to enhance the detail and fidelity of restored image" (Reviewer NDC8)
- "not only improves quantitative metrics like PSNR and SSIM but also yields visually more accurate restorations" (Reviewer NDC8, Reviewer SJc6, Reviewer UnRT)
- "The experiments are solid" (Reviewer UnRT)
During this rebuttal period, we have tried our best and made a detailed response to address the concerns raised by reviewers. If you have any further questions, we will actively discuss with you during the author-reviewer discussion period.
---
### **[2. Further exploration of retrieval in the wild]**
We have noticed that Reviewer NDC8, Reviewer SJc6, and Reviewer UnRT raised concerns about the performance of our ReFIR when highly relevant and high-quality reference images are scarce or unavailable. Here, we first discuss the mechanisms already present in ReFIR for mitigating low-correlation problems. After that, we explore several new techniques to further improve the model's ability in this extreme situation.
**2.1 Spatial Adaptive Gating for relevance filtering**
In Sec. 4.2, we introduce the Spatial Adaptive Gating (SAG) to resolve the spatial misalignment between LR and Ref images. Since the mask $M$ in Eq. 2 contains pixel-wise similarities between LR and Ref, this similarity-aware mask can **filter out weakly correlated pixels** from the reference image, thus improving robustness in the wild. A visualization of this similarity mask $M$ is shown in Fig. 13 of the Appendix.
**2.2 Exploration of Further Improvements**
In addition, as suggested by the reviewers, we develop several new techniques for further improvement.
We first develop **fallback strategies** to handle the situation where reference images are not available. This includes two possible choices:
1. Since our method does not modify the parameters of LRMs, we can directly use the original inference pipeline of the LRM without using reference images. We denote this as `origin_lrm`.
2. We use the BLIP model to caption the LR image and obtain a text prompt, which is then fed into the Stable Diffusion 2.0 model to generate semantically similar, high-quality images as the reference. We denote this as `gen_ref`.
We then develop the **adaptive filtering policy** to assess and ensure the relevance and quality of retrieved images. This includes three alternatives:
1. We set a cosine similarity threshold and use the retrieved image as the reference only if its similarity exceeds the threshold; otherwise we use the image provided by the fallback strategy above. We denote this as $r_{rele}$ and set $r_{rele}=0.6$ via ablation.
2. We use the no-reference image quality assessment metric CLIPIQA as a threshold, and use the retrieved image only when its quality exceeds the threshold; otherwise the fallback strategy is used. We denote this as $r_{qual}$ and set $r_{qual}=0.6246$, the mean CLIPIQA of the retrieval database.
3. Task-oriented adaptive filtering. We generate results with both the retrieved image and the fallback strategy, and select the one with the higher task score as the final result. We denote this as $r_{task}$.
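The two threshold-based policies above can be sketched as a simple gate (a hypothetical illustration; the function name, arguments, and the string stand-ins for images are not the actual ReFIR implementation — the thresholds 0.6 and 0.6246 are the ones reported above):

```python
def choose_reference(sim, quality, retrieved_ref, fallback_ref,
                     r_rele=0.6, r_qual=0.6246, policy="rele"):
    """Adaptive filtering sketch: keep the retrieved image only if it
    passes the chosen gate, otherwise fall back (origin_lrm / gen_ref)."""
    if policy == "rele":
        ok = sim >= r_rele        # relevance gate: cosine similarity
    elif policy == "qual":
        ok = quality >= r_qual    # quality gate: CLIPIQA score
    else:
        raise ValueError("r_task compares the two final outputs instead")
    return retrieved_ref if ok else fallback_ref

# Similarity 0.7 passes the relevance gate, so the retrieval is kept...
assert choose_reference(0.7, 0.5, "retrieved", "fallback") == "retrieved"
# ...but CLIPIQA 0.5 fails the quality gate and triggers the fallback.
assert choose_reference(0.7, 0.5, "retrieved", "fallback",
                        policy="qual") == "fallback"
```

The task-oriented policy $r_{task}$ is not a per-input gate: it runs both branches and scores the outputs, which is why it performs best but costs more inference time.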
Due to limited rebuttal time, we use the SeeSR large restoration model as a representative, evaluated on the real-world degradation dataset RealPhoto60.
We first give the results in which all LR images adopt the fallback strategies in TableA.
TableA Results of all images using fallback strategies
| setup | NIQE | MUSIQ | CLIPIQA |
|-----------|---------|--------|---------|
| origin_lrm | 4.7432 | 55.54 | 0.6575 |
| gen_ref | 4.6923 | 55.98 | 0.6602 |
| ReFIR | 4.4986 | 57.01 | 0.6759 |
It can be seen that using SD2.0-generated images as the fallback brings a slight improvement over using no reference. However, it remains inferior to using the retrieval database, which we argue is due to the knowledge overlap between generated images and the LRMs.
We then combine the two fallback strategies and three filtering policies to obtain a more robust RAG system for real-world scenarios, as shown in Table B.
TableB Experimental results of combining different fallback and adaptive filtering strategies
| Metrics | origin_lrm+$r_{rele}$ | origin_lrm+$r_{qual}$ | origin_lrm+$r_{task}$ | gen_ref+$r_{rele}$ | gen_ref+$r_{qual}$ | gen_ref+$r_{task}$ | ReFIR-baseline |
|------------|----------------------|------------------|------------------|------------------|----------------------|----------------------|-----------------|
| NIQE | 4.4982| 4.4943| 4.3891| 4.4978| 4.4923| 4.3464| 4.4986|
| MUSIQ | 56.78| 57.12| 57.77| 56.95| 57.11| 57.68| 57.01 |
| CLIPIQA | 0.6730| 0.6745| 0.6898|0.6743| 0.6770| 0.6942| 0.6759 |
It can be seen that using the SD-generated image as the reference is better than not using a reference image, supporting the results in Table A. In addition, the task-oriented filtering strategy achieves a significant performance improvement because it operates at the output end, but it is accompanied by a larger inference time. Using the SD-generated image as the fallback and adopting quality-based filtering is a competitive alternative to our previous baseline. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding the Differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks | Accept (poster) | Summary: The paper concerns using the proposed "Dynamical Systems Framework" (DSF) to better understand the differences between linear SSMs, RNNs, linear attention and Softmax attention. The DSF shows a way to write each of these models as a linear time-varying recurrence. The paper aims to use this to answer questions about the differences between the three types of models, the role of state expansion, and how ideas from SSMs can be used to improve RNNs. Empirical experiments on the multi-query associative recall (MQAR) task and Wikitext are used to support the findings.
Strengths: - The goal of better understanding the differences between the different models, and in particular trying to understand the gap between Softmax attention and its more efficient alternatives is important
- The paper is well-written and clear. I particularly liked Figure 4 in Appendix A which made the dimensions being referred to throughout the paper clear.
- The different formulations and results in the paper appear to be technically correct
- Insights are used from SSMs to propose a different normalization scheme for linear attention which appears to improve performance on the tasks considered
- The empirical results on the MQAR and Wikitext experiments support the claims
Weaknesses: - The proposed DSF is nice in that it allows comparing each of the different types of models, but its originality is weakened by the fact that most of the main parts of the framework have been discussed before in prior work
- The connections between linear attention and linear RNNs/SSMs are clear and have been discussed in prior works as mentioned in related work.
- In addition, the infinite dimensional representation of softmax attention as discussed in Section 3.2.1 is also discussed in Katharopoulos et al. 2020 among others
- Softmax attention has also been formulated as a dynamical system in https://arxiv.org/abs/2401.06104 which should be cited
- The findings that are suggested as a result of the framework also do not seem to be new or particularly surprising
- The fact that larger state sizes lead to increased expressivity is not surprising and has been explored before, e.g. in https://arxiv.org/abs/2312.04927, https://arxiv.org/abs/2312.00752
- The proposed improvement to RNNs is to change the parameterization of a linear RNN (the quasi LSTM) by replacing its transition parameterization with that of S6. But this seems to just make it more like the already existing linear RNN, RG-LRU (with a slight difference in parameterization from S6). This seems to be a much weaker finding than the claimed "What do selective SSMs teach us about improving RNN architectures" from line 53 of the Intro.
- As a counter to this point I am making, the proposed improvement to the normalization of linear attention appears to be novel and appears to improve performance on the tasks considered.
- The focus and claims around expressivity and performance in the paper appear to be very language focused.
- This is obviously of interest, but perhaps narrows the ability for the DSF to provide new and interesting insights as I mention in the point above.
- In addition, only 2 small tasks are considered to support the claims. This limits the ability to understand if the claims hold in more general settings.
- The authors state in the limitations that to strengthen the insights, a larger and more complex language task is needed. This could be great, however I would suggest perhaps also considering more diverse data modalities could also strengthen the insights of the proposed framework.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Perhaps considering data modalities and examples other than language could help to highlight insights that can be drawn from the DSF? E.g. perhaps modalities that more naturally arise from dynamical systems could help to further highlight differences between softmax attention and its more efficient alternatives? Or could help highlight differences between different linear attention/SSM/RNN variants?
- Are the differences between softmax attention and the other methods on the MQAR task, e.g. in Figure 5, simply due to the difference in state size? One can think of the state size of softmax attention as the size of its KV cache required for the sequence length of the task. If you make the recurrent alternatives have a state size that matches the softmax attention KV cache size for the task sequence length, do they then perform as well? Or are there other things happening in Softmax Attention as well? Perhaps exploring this through the lens of the DSF framework could also provide insights?
Minor:
- It is stated that the matrix in Equation 12 is $L\times L$, but isn't this only the case when $B$ and $C$ are $N \times 1$ and $1 \times N$? The presentation in the paragraph before suggests these are general matrices.
- Line 203 says "The DSF makes apparent how a necessary condition for separability is for the map ζ(·) to be expressed by a finite-dimensional kernel". But line 154 says "Softmax attention also satisfies the assumption of separability". I realize the point being made regarding "practical separability", but as currently written it nonetheless appears to be a contradiction.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the helpful feedback and pointers! Changes or clarifications (as deemed appropriate) for every single point raised will be incorporated in the final version of the paper. In what follows we provide a brief discussion on the points that were raised in the review:
**Prior studies:** Thank you for the feedback and pointing us at [Oren et al. (2024)](https://arxiv.org/abs/2401.06104). We agree that connections between the three model classes have been studied in previous work and are well-established. Our intent is to translate the three model classes specifically as dynamical systems - which is achieved by the DSF - in order to use methods from control theory to analyze the models and to obtain a single form of representation to easily compare certain parameterization choices between the model classes. In the revised version, we made sure to better highlight this intent and we appropriately cited [Oren et al. (2024)](https://arxiv.org/abs/2401.06104).
We also acknowledge that the infinite representation of softmax-attention has been discussed previously in [Katharopoulos et al. (2020)](https://arxiv.org/abs/2006.16236) but also in [Nauen et al. (2024)](https://arxiv.org/abs/2403.02920) and [Choromanski et al. (2021)](https://arxiv.org/abs/2009.14794) among others. We believe the discussion on representation of softmax-attention is important in the context of the DSF, which is why we included it. In the revised manuscript, we made sure to include additional related works and improve the presentation to better reflect previous works.
**Empirical novelty:** We acknowledge that the insight on increased performance with larger state sizes is well-established in the literature, we believe it is beneficial to show that insights provided by the DSF are consistent with previous results. In the revised manuscript, we improved the presentation of this insight to clearly state that we are only recovering existing results and cite the relevant works more prominently.
For the insights on RNNs, the main difference between the proposed qLSTM variant and the RG-LRU is the shared parameterization in state transition $A$ and input matrix $B$. While RG-LRU shares parameters in both matrices, the qLSTM does not (as briefly mentioned in lines 263-265). Interestingly, any attention-based model also shows this coupling in the DSF. In the revised manuscript, we added a discussion of this point for all three model classes (please also see the global response) and plan to include an empirical analysis for the camera-ready version.
**Task complexity & modalities:** Thank you for the feedback and the suggestion. We agree that more complex and also diverse tasks can strengthen the insights we provide. While we believe in-depth analysis of a specific insight (e.g. the proposed normalized attention) including showing performance on complex tasks are more suited for subsequent works (we only have limited computational resources), we agree that other task modalities offer more insights. Therefore, to improve the empirical validation of our findings, we include experiments on the LRA benchmark, which also contains image tasks, in the revised manuscript. As an example, we included the results of the LRA image task in the attached pdf. In doing this we generated an additional insight (performance gap between SSM models and transformers on LRA) as discussed in the general response.
**Attention and state size:** Thank you for the excellent suggestion. Preliminary experiments we ran show better performance of the other models with their state size increased to the size of the KV cache, but not outperforming softmax attention. Hence, it seems that there is more to softmax attention than just the size of the KV cache. Additionally, we believe there is more potential to such an analysis, since it relates to the question of how information is compressed in the state of the DSF. This is important since running these models with the size of the KV cache is not sustainable in practical applications with larger sequence lengths. We think such an analysis could lead to further follow-up works investigating different aspects of this question. Therefore, we will conduct full experiments for the revised version and perform an in-depth analysis of the results.
**Minor:** Thank you for pointing out the two inconsistencies. It is true that the matrix in Eq. (12) is not $L \times L$, but should be a block matrix consisting of $L \times L$ blocks. We will correct this in the revised manuscript. We will also improve the language for practical separability, since it is indeed misleading in the current form.
---
Rebuttal 2:
Comment: Thank you for the clarifying comments and additional results. I believe after the response to the reviewers this is a stronger paper and of interest to the community. I have increased my score.
---
Rebuttal Comment 2.1:
Title: Thank you for the review
Comment: Thank you for taking the time to review the rebuttal and for the positive score. We will make sure to incorporate all of your valuable feedback into the final version of the paper. Your thoughtful input is greatly appreciated and helped us improve the quality of our work. | Summary: This paper shows that many sequence models (including attention, SSMs, and recurrent models) can be viewed as linear time-varying dynamical systems. This is helpful for answering some questions about the differences and similarities between these architectures.
Strengths: **Importance of unifying framework**. The present state of sequence model research is quite muddy, with tons of new architectures being proposed. On the surface these architectures often look quite complicated and differentiated, when in reality they are closely related to one another. Because of this, I think unifying frameworks are useful for helping push progress in the space.
**The role of normalization in S6 and Linear Attention.** The paper includes a very interesting discussion of the parameterization of the normalization in S6 and Linear attention.
Weaknesses: **Presentation and clarity.** The introduction and abstract do not provide any details on how the Dynamical Systems Framework works and assumes a reader is familiar with dynamical systems. To improve the presentation and accessibility, I would recommend including a high-level discussion of what the DSF is and what challenges it addresses.
**Theoretical Novelty.** In the first paragraph of the introduction the authors state, “Although these models show great promise in boosting efficiency, current comparisons with attention are merely empirical.” This isn’t correct: there have been many theoretical studies comparing these new recurrent architectures with Softmax attention. To name a few:
- [Arora *et al.](https://arxiv.org/abs/2402.18668)* include a theoretical analysis grounded in results from communication complexity that highlights differences in models ability to perform associative recall.
- Using tools from circuit complexity, [Merill *et al.*](https://arxiv.org/pdf/2404.08819) show that Attention and SSMs are in the same complexity class ($\text{TC}^0$).
- Using tools from communication complexity, [Bhattamishra *et al.*](https://arxiv.org/pdf/2406.09347) provide separation results between attention and recurrent architectures on numerous tasks.
**Empirical Novelty.** The main empirical result in this paper is that increased state size (across architectures) leads to improved performance on MQAR and that separable attention matches Softmax attention with sufficiently large recurrent size. This result has already been shown theoretically and empirically in [Arora *et al.*](https://arxiv.org/pdf/2402.18668) (ICML ‘24) - see Figure 2, Theorem 3.1, and Section 3. The work under review builds upon Arora *et al.* by adding a few additional architectures to the analysis. But this prior work should be mentioned in the related work and/or the introduction given the significant overlap in results.
**Claims around Lemma 2.** Some of the language around Lemma 2 is a bit incautious. The authors state that because of Lemma 2, “the larger the state expansion n, the more expressivity the architecture has”. In other words, they are claiming that increasing state expansion strictly increases expressivity. To support this claim, the theorem would need to additionally show that a dynamical system of state dimension $N$ ***cannot*** recover all dynamical systems with state dimension $\hat{N}$. Additionally, I think it's important to add the qualification that this holds assuming everything else about the architecture is held constant. It’s of course possible for an architecture with smaller state size to admit solutions that a larger state size model cannot represent if the parameterizations are different.
**Minor typos and clarifications.**
- Line 32: “framework that allows to evaluate” → “allows us to evaluate”
- Line 61: “We use $\sigma(\cdot)$ to denote is the sigmoid function.”
- In the list of questions answered by the paper (Line 40), I would include hyperlinks to the relevant sections.
Technical Quality: 3
Clarity: 2
Questions for Authors: It would help to add additional exposition on the results in Fig. 2. What are the takeaways? It looks like exponentiating improves performance, but this isn’t stated explicitly.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes the authors discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the helpful feedback and pointers! Changes or clarifications (as deemed appropriate) for every single point raised will be incorporated in the final version of the paper. In what follows we provide a brief discussion on the points that were raised in your review:
**High-level discussion:** Thank you for this suggestion! We indeed believe that this would help to make the paper accessible to a broader audience. To address this comment, we plan to add an introduction to the concept of dynamical systems and how it applies in the context of foundation models. We will also delve into the specific dynamical system parametrization chosen by the DSF, which is the canonical state space representation. This choice will be motivated by two reasons: (i) there exists a wide body of literature on state space model representations for dynamical systems, and (ii) it encompasses transformers, RNNs and SSMs in a suitable fashion that allows for further analysis. We will also better explain how expressing foundation models in the DSF allows for a direct comparison between models and enables the use of control theoretical tools for their study.
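For readers less familiar with state-space representations, the canonical form referenced here can be sketched as a discrete-time linear time-varying (LTV) recurrence. This is an illustrative sketch only; the function name and the update-then-output convention are our own, and the paper's exact parameterization may differ.

```python
import numpy as np

def ltv_system(A_seq, B_seq, C_seq, u_seq):
    """Simulate x_{k+1} = A_k x_k + B_k u_k with read-out y_k = C_k x_{k+1}.

    A_seq, B_seq, C_seq are sequences of per-step matrices (hence
    time-varying); u_seq is the input sequence; the state starts at zero.
    """
    x = np.zeros(A_seq[0].shape[0])
    ys = []
    for A_k, B_k, C_k, u_k in zip(A_seq, B_seq, C_seq, u_seq):
        x = A_k @ x + B_k @ u_k   # state update
        ys.append(C_k @ x)        # read-out
    return np.array(ys)
```

With the matrices held constant this recovers a linear RNN/SSM layer; attention-style variants correspond to particular input-dependent choices of $A_k$, $B_k$, and $C_k$.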
**Prior theoretical comparative studies:** Thank you for pointing this out, we acknowledge that the statement in the introduction is not correct. We intended to convey that to the best of our knowledge so far there has been no unified approach to theoretically analyze transformers, SSMs, and RNNs, but only empirical studies benchmarking specific models of these classes against each other. In the revised version, we changed this to reflect prior theoretical work comparing various models with attention, including properly referencing the papers that are mentioned in the review.
**Empirical novelty:** Thank you for pointing us at [Arora et al. (2024)](https://arxiv.org/abs/2402.18668). We agree that the insight “increased state size leads to better performance” is not novel and has been shown in previous work. We intended to include this insight, since we believe it is beneficial to show that insights generated from the DSF are consistent with the literature. In the revised manuscript, we appropriately cite [Arora et al. (2024)](https://arxiv.org/abs/2402.18668) and improve the presentation to reflect that this insight has been reported before. Additionally, we improved the presentation of the overall document to better highlight the other two insights generated from the DSF (i.e. normalized attention and the S6-inspired state transition for qLSTMs) as well as the additional insights we added (please see the global response for more details). This change puts more focus on the new insights and explicitly presents the state size finding as established knowledge.
**Lemma 2:** Thank you for the feedback, we acknowledge that the language in and around Lemma 2 is imprecise. As correctly pointed out, Lemma 2 should state that expressivity with a larger state size is at least as good as with lower state size, i.e., not strictly better. We reformulated Lemma 2 and the text surrounding it to reflect this. Additionally, we now explicitly state that this only holds if everything else is held constant as mentioned in the review.
**Results discussion:** Thank you for the feedback. We agree that the key takeaways from Fig. 2 are not appropriately discussed in the text. In the revised manuscript, we added the following discussion points:
- The results in Fig. 2 and Tab. 1 support the conclusions that (i) performance of linear attention can be improved by using a suitable normalization function and (ii) this normalization function does not need to use the keys and queries but can be learnt independently.
- The difference in parameterization between the proposed normalized attention and SSD mainly lies in the recurrent nature of the normalization parameters in transition matrix $A$ and input matrix $B$ for normalized attention. With the addition of the LRA experiments in the revised manuscript and the performance gap between SSMs and transformers, this warrants the question on the role of this recurrent normalization. In the revised manuscript we added a discussion on this question as detailed in the global response.
**Typos:** Thank you! We will amend these in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response. I look forward to reading about the new insights provided by their framework.
I raise my score from a 5 to a 6.
---
Reply to Comment 1.1.1:
Title: Thank you for the review
Comment: Thank you for taking the time to review the rebuttal and the positive score. We will make sure to incorporate all of your valuable feedback into the final version of the paper. Your thoughtful input is greatly appreciated and helped us improve the quality of our work. | Summary: This paper provides a dynamical system based framework for principled comparisons between various existing recurrent architectures (linear attention, SSMs, etc.). It focuses on the formulation and the role of the hidden state dimension. Some experimental results on the multi-query associative recall (MQAR) benchmark and the WikiText-103 dataset are provided to demonstrate the differences between the considered models.
Strengths: - The paper is overall well written and organized
- Systematic comparisons between the formulations of various sequence models are provided, which are neat and useful for researchers without much background in the area
Weaknesses: - The two proposed experiments (the MQAR benchmark and the WikiText-103 dataset) seem random and limited. There are no experiments on the more standard benchmark tasks such as the LRA task (focusing on smaller tasks such as the sequential CIFAR task would be valuable) and time series forecasting tasks
- The theoretical insights are actually quite limited. There are unaddressed natural questions such as:
(1) one might wonder about finite-dimensional approximation of the softmax attention,
(2) how Lemma 1 could be related to the kernel method and RKHS
- There are also other aspects that are left out for the comparative studies: e.g., how do the training dynamics differ between the various models (which architecture will converge faster), as well as the stability of the models (how prone are they to the infamous vanishing/exploding gradient problem). Also, can we quantify and compare the long range dependency learning capability of these models?
- No practical guidance in terms of which architecture is suitable for what kind of task is provided. After all, this should be the main goal of the comparisons
Technical Quality: 2
Clarity: 3
Questions for Authors: See above
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the helpful feedback and pointers! Changes or clarifications for every single point raised will be incorporated in the final version of the paper. We provide a brief discussion on the points that were raised in the review:
**Benchmark:** Our main goal was to showcase some selected insights the DSF can bring to all three investigated model classes, i.e., transformers, SSMs, and RNNs. Since softmax-attention-based models are known to be the best performing models for language modalities and considerable research focuses on sub-quadratic models to close the performance gap to softmax-attention, we opted to benchmark on language tasks. However, we added the LRA benchmark to the experimental validation of our insights (see global response). The LRA results are consistent with existing empirical results, which show that attention-based models perform worse than SSM-based models on the LRA. Why this is the case is still an open research question and warrants more detailed investigation, but the DSF points to a potential hypothesis for this performance gap: the recurrent normalization discussed in Section 4.2 of the paper. Along with the LRA results we added a discussion on this point to the revised paper.
**Insights:** Thank you for your feedback. We agree that the paper only showed a few theoretical insights. Therefore, we have extended on these insights and added several more as discussed in the general response. Also thank you for the specific suggestions, which we discuss below.
- *Finite-dimensional approximation:* In the revised manuscript, we expanded on the discussion in Section 3.2.1 on using a Taylor approximation of softmax-attention. Specifically for the Taylor approximation, there exist error bounds that can be used to bound the error between the approximation and softmax-attention on a given interval, e.g., the Lagrange bound (see e.g. Section 3.3.2. in [Chevillard et al. (2008)](https://hal-lara.archives-ouvertes.fr/hal-02102827)). The Lagrange error bound defines the worst-case error of the approximation on the given interval and thus provides a quantitative metric of how well softmax-attention can be approximated. Additionally, we extended the discussion of existing works that use a finite-dimensional approximation of softmax.
- *Kernel method:* Thank you for the excellent question. The attention function $\zeta$ in Eq. (13) (later rewritten in Lemma 1) can be interpreted as a kernel if $\psi = \phi$ , with softmax-attention using a specific type of exponential kernel function, i.e., $\exp(x^\top y)$. This was noticed and further analyzed in [Tsai et al. (2019)](https://arxiv.org/abs/1908.11775) and Katharopoulos et al. (2020). In the revised manuscript, we added a remark on this and refer to the mentioned papers for more details.
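As a concrete illustration of this kernel view (our own sketch, not code from the paper or the cited works): causal attention with a generic feature map $\phi$ weights value $v_j$ by the kernel score $\phi(q_i)^\top \phi(k_j)$, normalized over positions $j \le i$. The identity map gives normalized linear attention, while softmax-attention corresponds to the exponential kernel $\exp(q^\top k)$, whose exact feature map is infinite-dimensional.

```python
import numpy as np

def causal_kernel_attention(q, k, v, phi):
    """Causal attention with kernel K(q_i, k_j) = phi(q_i)^T phi(k_j),
    normalized over positions j <= i."""
    Q, K = phi(q), phi(k)
    out = np.zeros_like(v, dtype=float)
    for i in range(q.shape[0]):
        w = Q[i] @ K[: i + 1].T          # kernel scores against past keys
        out[i] = (w / w.sum()) @ v[: i + 1]
    return out
```

Passing `phi=lambda x: x` yields normalized linear attention; a feature map that approximates the exponential moves the output toward softmax-attention.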
**Training dynamics:** While training dynamics (especially convergence) can be studied empirically using experiments, the DSF also allows theoretical analysis. As discussed in Example 2 of [Dörfler et al. (2024)](https://arxiv.org/abs/2401.14029), a gradient-based optimization algorithm (e.g. SGD) can be interpreted and written as a dynamical system. Using this perspective together with the DSF allows interpretation of the training dynamics as two interacting dynamical systems. Thus, the training dynamics can be analyzed theoretically with tools from control theory, e.g., via Lyapunov theory for convergence/stability of the training. However, we believe this question requires an in-depth investigation and additional empirical validation, which are out of scope for this paper.
**Stability:** The exploding/vanishing gradient problem is linked to the eigenvalues of state transition matrix $A$ for SSMs (Orvieto et al. (2023)). Therefore, SSM models actively restrict the eigenvalues to the range [0,1]. Using the DSF the same argumentation can be made for transformers and RNNs. In the revised paper, we included an additional discussion on this.
**Long-range dependencies:** A model's ability to capture long-range dependencies depends on two factors: (i) the modulus of the eigenvalues in dynamic matrix $A$, and (ii) the dimension of the state $x$ (in DSF representation). Other factors include the compression of the input into the state by the encoding architecture, but this is out of the scope of our analysis that focuses on the recurrent block of learning architectures. Given two trained models with numerical values for $A$ and a sufficiently large state dimension, the DSF would be capable of predicting the performance in long-range contexts by looking at the eigenvalues of $A$. Moreover, theory on system theoretic model reduction (e.g. [Obinata & Anderson (2012)](https://dl.acm.org/doi/abs/10.5555/559486)) provides principled ways of comparing the ability of a given state dimension to compress past information. Thus, the DSF can be leveraged to provide a theoretical analysis on the long-range capabilities of the analyzed models. To complement our analysis, we will incorporate this discussion into the revised manuscript.
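The eigenvalue argument can be made concrete with a toy calculation (ours, for illustration): in a scalar linear recurrence $x_{k+1} = a\,x_k$, an input injected at step 0 contributes $|a|^{T}$ to the state after $T$ steps, so moduli near 1 preserve long-range information, small moduli forget quickly, and moduli above 1 explode.

```python
def memory_retention(eig_modulus: float, horizon: int) -> float:
    """Fraction of a step-0 input surviving in the state of x_{k+1} = a x_k
    after `horizon` steps, i.e. |a|**horizon. The same geometric factor
    governs vanishing/exploding gradients through the recurrence."""
    return abs(eig_modulus) ** horizon
```

For example, after 512 steps an eigenvalue modulus of 0.99 still retains a usable fraction of the signal, while 0.5 has forgotten it entirely and 1.05 has blown up.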
**Task suitability:** We believe that the question of architecture suitability for specific tasks is a broad open research question, and many factors need to be considered to provide a clear answer. Specifically, the recurrent models presented in this paper are only one aspect of deep learning architectures, but the encoders, decoders, skip connections, etc. also play a role in task suitability. For this reason, we do not aim to answer this question in full in this paper. However, motivated by the comment we have experimentally explored different data modalities. Our findings illustrate (see attached pdf and paper experiments) that transformer architectures (especially with softmax attention) tend to perform better on language tasks and SSMs on tasks that require long-range dependencies such as the LRA benchmark. However, the performance of a model is highly dependent on the specific formulation of the task.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I am satisfied with the response and have raised my score.
---
Reply to Comment 1.1.1:
Title: Thank you for the review
Comment: Thank you for taking the time to review the rebuttal and the positive score. We will make sure to incorporate all of your valuable feedback into the final version of the paper. Your thoughtful input is greatly appreciated and helped us improve the quality of our work. | Summary: This paper introduces the Dynamical Systems Framework (DSF), a theoretical approach for analyzing and comparing various foundation models in AI. The DSF reformulates attention-based models, State Space Models (SSMs), and Recurrent Neural Networks (RNNs) into a common dynamical systems representation. This allows for principled comparisons and generates new insights into their similarities and differences. The authors leverage the DSF to compare linear attention and selective SSMs, provide conditions for approximating softmax attention, analyze state expansion in RNNs and SSMs, investigate differences between linear attention and S6 (Mamba) models, and apply SSM insights to RNN architectures. The paper combines theoretical results with empirical validations on the MQAR benchmark and WikiText-103 dataset.
Strengths: - Novel unified perspective on different foundation model architectures
- Enables new theoretical insights and comparisons between previously isolated model classes
- Sound theoretical analysis with supporting lemmas and proofs
- Empirical validation on MQAR and WikiText-103 provides practical grounding
- Clear potential for guiding future development of efficient, scalable models
- Innovative application of SSM insights to improve RNN architectures
Weaknesses: - Limited empirical evaluation, focusing primarily on MQAR and a single language modeling task
- Lack of statistical significance reporting for experimental results
- Some theoretical results (e.g., Lemma 2) are relatively straightforward
- Dense theoretical sections may be challenging for broader audience
- Immediate practical impact on model performance is somewhat limited
- Doesn't fully explore implications for training or inference efficiency
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How well might the DSF insights scale to much larger models (billions of parameters) used in state-of-the-art applications?
2. Are there theoretical limits to the convergence of linear attention to softmax attention performance with increased state expansion?
3. Have you explored alternative normalization schemes that might bridge the gap between S6 and linear attention while maintaining computational efficiency?
4. How does the S6-inspired state transition in qLSTM affect its long-term memory capabilities?
5. Can the theoretical insights from DSF reformulations be leveraged to develop new, efficient implementations?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge that the DSF parametrization, while theoretically insightful, doesn't necessarily lead to efficient implementations. Empirical validation is limited to synthetic tasks and a relatively small language modeling task, constraining generalizability. The paper also lacks comprehensive analysis of computational efficiency implications for different architectures under the DSF lens.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the helpful feedback and pointers! Changes or clarifications (as deemed appropriate) for every single point raised will be incorporated in the final version of the paper. In what follows we provide a brief discussion on the points that were raised in the review:
**Empirical evaluation:** We agree that the empirical evaluations of our insights were somewhat limited. To improve this we added additional experiments on the LRA benchmark, which also includes image tasks to broaden the empirical evaluation. The partial results (due to time constraints), i.e., only the sequential CIFAR-10 (image) task are shown in Tab. 1 of the attached pdf, and the results of the full benchmark will be added to the revised manuscript.
**Statistical significance:** Thank you for this comment! We have added error bars to Figs. 2 and 3 in the paper. These results are shown (as shaded regions) in Figs. 1 and 2 of the attached pdf.
**Theoretical results:** We acknowledge that Lemma 2 is somewhat straightforward and that this effect has already been shown empirically. We included it because it follows directly from the DSF, and we wanted to highlight that the DSF provides insights that are consistent with the literature.
**Theoretical sections:** Thank you for this comment! We acknowledge that the theoretical sections might be dense for an audience unfamiliar with dynamical systems. To improve upon this point, we plan to incorporate a brief introduction to the conceptual basics of dynamical systems in the introduction of the revised manuscript. We will also extend Section 3.1 with a more detailed presentation to make it more amenable for a broader audience.
**Practical impact:** We added several more insights and analyses to the revised document as stated in the global response and we believe that the proposed work enables leveraging system theoretic insights in order to generate more specific practical impacts in future works.
**Training and inference efficiency:** Although the primary goal of the DSF is to carry out theoretical comparison and obtain insights on the mathematical representations of foundation models, we believe it can also help in providing pointers for efficient training and inference. For instance, as discussed below in the response to the question regarding efficient implementations, the DSF allows identification of efficient algorithms to implement a specific model. Additionally, the DSF naturally allows analysis of the eigenvalues of the state transition matrix $A$, which are linked to the exploding/vanishing gradient problem ([Orvieto et al. (2023)](https://arxiv.org/abs/2303.06349)). We included a discussion on this to the revised version.
On a more theoretical level, it is possible to view the training process as an interaction of two dynamical systems by formulating the optimization algorithm as a separate dynamical system (see e.g. Example 2 in [Dörfler et al. (2024)](https://arxiv.org/abs/2401.14029)). In the revised version, we added a remark on this, but we think elaborating on this requires a more in-depth analysis that is out of scope of the paper.
**Scalability:** In order to explore how the insights (e.g. normalized attention) scale to billions of parameters, an empirical evaluation is vital. Due to the extensive computational demands of this task, we believe this analysis is out of the scope for this paper. In a theoretical analysis, normalized attention scales linearly with context length, similarly to linearized attention and SSD. As such, we anticipate that it will scale similarly to other SSM models that have been demonstrated at a larger scale.
**Softmax approximation:** Thank you for the excellent question! We believe this is a very interesting research question that warrants further investigation, but has no straightforward answer. However, in the special case of polynomial kernel functions (in Lemma 1), softmax-attention can be exactly recovered using infinitely many polynomials. Therefore, in the limit these polynomial kernels can recover the softmax performance with appropriate weights as there is an exact correspondence.
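This limiting argument can be made quantitative with a small sketch (our own function names, for illustration): the order-$n$ Taylor polynomial of $\exp$ converges to the exponential kernel, and the Lagrange remainder $e^{r}\, r^{n+1}/(n+1)!$ bounds the worst-case error on $[-r, r]$.

```python
from math import exp, factorial

def taylor_exp(x: float, order: int) -> float:
    """Order-n Taylor polynomial of exp around 0."""
    return sum(x**k / factorial(k) for k in range(order + 1))

def lagrange_bound(r: float, order: int) -> float:
    """Worst-case Lagrange remainder for exp on [-r, r]:
    max |exp(x) - T_n(x)| <= e^r * r^(n+1) / (n+1)!."""
    return exp(r) * r ** (order + 1) / factorial(order + 1)
```

Increasing the order drives the bound to zero, consistent with the claim that infinitely many polynomial terms recover the exponential kernel (and hence softmax-attention) exactly.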
**Alternative normalization schemes:** Yes, we also considered other normalization schemes to the one shown in Eq. (21), i.e.,
\begin{align}
\eta(u_i) &= e^{\textrm{ReLU}(W_\eta u_i)} \\\\
\eta(u_i) &= \sigma(W_\eta u_i)
\end{align}
We observed that they perform similarly to the one presented in the paper. We included the results on these additional normalization schemes to the appendix of the revised manuscript including a brief discussion.
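In code, these two alternative normalization schemes can be sketched as follows (our own sketch; `W` stands in for the learned matrix $W_\eta$). Note their different ranges: the exp-of-ReLU variant is bounded below by 1, whereas the sigmoid variant lies in (0, 1).

```python
import numpy as np

def eta_exp_relu(u, W):
    """eta(u_i) = exp(ReLU(W_eta u_i)); outputs lie in [1, inf)."""
    return np.exp(np.maximum(W @ u, 0.0))

def eta_sigmoid(u, W):
    """eta(u_i) = sigmoid(W_eta u_i); outputs lie in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W @ u)))
```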
**Long-term memory:** The long-term memory capabilities of a model from a control theoretic perspective depend on the eigenvalues of the state transition matrix $A$ and the size of the state $x$ and thus the size of $A$ ([Orvieto et al. (2023)](https://arxiv.org/abs/2303.06349)). Therefore, any model reformulated in the DSF can be analyzed in this way. Since the proposed state transition is the same state transition as in S6, the two models have the same theoretical long-term memory capabilities. These capabilities then solely depend on the hyper-parameter $n$ (state size) and the initialization of $A$. However, we agree that this question warrants a more detailed analysis and thorough empirical validation, which is out of scope of this paper.
**Efficient implementations:** Thank you for the excellent question. Yes, the DSF can be leveraged to identify and also develop new efficient implementations. In the revised version, we expand the first paragraph in Section 3.2 that only briefly mentions that the DSF can be used for this. In the appendix, we also added a few examples of how this can be done, e.g., the proposed normalized attention can be efficiently implemented using flash linear attention.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses to my and other reviewers' comments. I am satisfied with the answers, and look forward to seeing the final version of the paper. While I appreciate the paper for its theoretical contribution, I personally find theoretical frameworks to be most compelling when used to engineer and design novel architectures and training protocols. In the case of SSMs, I am more willing to make an exception because I believe that the literature in this field is scattered and oftentimes talking past each other. If I can suggest one thing, I would like the authors to really hone the introduction to allow this paper to become a good introductory theoretical framework for understanding these various architectures. I will maintain my positive score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer BcKL
Comment: Thank you for taking the time to review the rebuttal and the positive score. We will make sure to improve the introduction according to your feedback. Your thoughtful input is greatly appreciated and helped us improve the quality of our work. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their time and effort evaluating our paper. We believe the insightful reviews helped us to greatly improve the paper. The main contents of the rebuttal are several added insights we gained from the DSF, extended experiments including the LRA benchmark, and several clarifications in the text regarding existing works and theoretical results.
**Additional experiments:** In response to the reviews, we extended the evaluation of our insights to the Long Range Arena (LRA) benchmark. The LRA benchmark adds an additional data modality (images) to the evaluation and also allows us to investigate the well-known performance gap between attention-based models and SSMs. Additionally, we plan to test our insights on time series forecasting, specifically on the Informer ETT benchmark.
**Additional insights gained from DSF:** In response to all the reviews, we added several additional insights generated from the DSF.
The goal of this paper is to introduce the DSF as a useful framework to approach theoretical research questions about foundation models. The results included are meant to exemplify important questions that the DSF can answer and open future research directions, rather than be a comprehensive list of the insights, each of which will require theoretical derivations in its own right. However, in order to address the concern of limited insights and enhance the theoretical contributions of this work, we added the following insights:
- Investigating the performance of attention and selective SSM models on the LRA benchmark has led us to investigate the performance gap in more detail (SSMs outperform transformers significantly). We found that the gap can be explained by the recurrent normalization strategy (discretization step) used by selective SSMs as discussed in Section 4.2 of the paper.
- Using the DSF, we extended our study to provide an analysis of the eigenvalues of the state transition matrix $A$ of all three model classes. These eigenvalues are directly linked to the exploding/vanishing gradient problem ([Orvieto et al. (2023)](https://arxiv.org/abs/2303.06349)). In the case of SSMs and RNNs, the absolute values of the eigenvalues are constrained to the range $[0,1]$ by construction. We observe that for attention-based models this is also true empirically due to the normalization used.
- In the DSF formulation of softmax-attention it is revealed that attention-based models share parameters in the state transition matrix $A$ and the input matrix $B$. While we mention this fact in the paper (lines 263-265), we expanded the discussion of this since SSMs and RG-LRU also share this parameterization. However, this is not the case in more standard RNNs.
**Novelty:** In response to the reviews, we clarified our own theoretical contributions and improved the presentation to appropriately reflect existing results. The main contribution of our paper is the introduction of the DSF, which is a unifying framework for analysis of transformers, SSMs, and RNNs. To the best of our knowledge, this is the first unified framework that allows analysis of all three model classes in the same parameterization and thus allows us to identify differences in the models that lead to significant performance differences. While some of the provided results already exist in the literature (e.g. that increased state size improves performance), we also provide novel insights unique to the DSF framework in a comprehensive way that enables further analysis with control theoretical tools. For instance, the DSF enabled proposing the normalized attention, which we have extended in the revised version.
**Attached pdf:** The attached pdf contains Figures 2 & 3 of the original paper including error bars, i.e., confidence margins over 10 random seeds, which we aim to increase further as computational constraints permit. Additionally, Table 1 in the attached pdf shows preliminary results on the LRA benchmark, specifically for the image task (sequential CIFAR-10).
Pdf: /pdf/3c70b86535a4a39ad6c5ded2aad5ac5a4ff9f63e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal Effect Identification in a Sub-Population with Latent Variables | Accept (poster) | Summary: This paper addresses the S-ID (Sub-population Identification) problem in causal inference, extending it to scenarios with latent variables. The S-ID problem seeks to determine if a causal effect in a specific sub-population can be uniquely computed from observational data pertaining to that sub-population. The authors introduce new graphical definitions such as S-components and S-Hedges, which are extensions of classical notions like C-components and Hedges. They present a sufficient graphical condition for determining if a causal effect is S-ID and propose a sound algorithm for solving the S-ID problem in the presence of latent variables. Additionally, they show a reduction from the S-Recoverability problem to the S-ID problem.
Strengths: Technical quality: The paper presents thorough theoretical analysis, including formal definitions, lemmas, examples and theorems, as well as the reduction derivation.
Weaknesses: Empirical evaluation: there are no experiments at all besides the last section in the appendix, which briefly describes how the authors plan to conduct them. That is to say, this paper lacks experimental results or real-world case studies to demonstrate the practical applicability and performance of the proposed solution, especially for the two recursive algorithms
Comparison to other existing methods: The paper follows the ID, c-ID, and S-Recoverability literature for related work, but could still benefit from a more extensive comparison with other latent variable models in causal inference, [1-3] to name a few
[1] Liu, Yuhang, et al. "Identifying weight-variant latent causal models." arXiv preprint arXiv:2208.14153 (2022).
[2] Sherman, Eli, and Ilya Shpitser. "Identification and estimation of causal effects from dependent data." Advances in neural information processing systems 31 (2018).
[3] Kocaoglu, Murat, Karthikeyan Shanmugam, and Elias Bareinboim. "Experimental design for learning causal graphs with latent variables." Advances in Neural Information Processing Systems 30 (2017).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Remark 5.3, the authors "conjecture that this algorithm is also complete". Is there any example or situation where it returns a false negative?
2. The reduction from S-Recoverability to S-ID in Section 6 seems to suggest that S-ID is a more general problem. Is there any scenario where solving S-ID would be more useful than solving the S-Recoverability problem?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper proved that both proposed algorithms are sound for s-ID, but did not establish any completeness guarantee. Nevertheless, this limitation is mentioned in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We first answer the two questions and then discuss the reviewer's concerns mentioned in the weakness section.
------
> **Q1**: Is there any example or situation that it returns a false negative?
We have not found any examples where the algorithm produces a false negative. In fact, we have evidence to support that the s-ID algorithm always returns "Fail" when the target causal effect is not s-ID and have observed no false negatives. That is why we conjectured that the algorithm is complete. We have tried to prove this conjecture, and we have succeeded in proving it for some specific cases. However, proving it in general has eluded us so far (perhaps due to the complexity of s-component and s-Hedge structures and the positivity constraint).
----
> **Q2**: Is there any scenario where solving S-ID would be more useful than solving the S-Recoverability problem?
There are two main implications of Theorem 6.1 and Remark 6.2:
- **s-ID is a more general problem**: As the reviewer correctly points out, s-ID is more general than s-Recoverability in the sense that an algorithm for s-ID can tackle the s-Recoverability problem as well.
- **s-ID is a more practical setting**: Recall that given the observational distribution of a sub-population, s-Recoverability attempts to identify $P_{X}(Y)$ (the causal effect over the entire population), while s-ID attempts to identify $P_{X}(Y | S = 1)$ (the causal effect over the sub-population). Not surprisingly, s-Recoverability does not succeed except under very special settings.
This is what Remark 6.2 argues: the condition for s-recoverability to successfully identify $P_{X}(Y | S = 1)$ is quite restrictive, whereas s-ID has a more realistic condition.
Accordingly, there are indeed many examples where $P_{X}(Y | S = 1)$ is identifiable, but $P_{X}(Y)$ is not. For instance, please refer to Figure 4, where our algorithm returns
$ P_{X_1}(Y | S = 1) = \sum_{Z_1, Z_2} P^s (Z_1, Z_2) \sum_{X_2} P^s (X_2 | X_1, Z_1, Z_2) \sum_{\tilde{X}_1} P^s(Y | \tilde{X}_1, X_2, Z_1, Z_2) P^s (\tilde{X}_1 | Z_1, Z_2).$
Meanwhile, s-Recoverability fails to recover $P_{X_1}(Y)$, since this effect is indeed non-identifiable.
------
## Regarding empirical evaluation
Computing a causal effect numerically typically involves several steps:
- **Causal Discovery (a.k.a. causal structure learning)**: Learning the causal graph from the available data.
- **Causal Effect Identification**: This step focuses on determining whether the causal effect is identifiable and, if so, deriving a formula for it.
- **Numerical Estimation**: Given a finite set of samples, compute the expression derived in the identification phase numerically.
- **Evaluation**: This step measures and evaluates the quality of estimators proposed in the estimation step, which often includes sensitivity analysis.
Similar to [1-5], this paper focuses on addressing the identification part. Still, we provided an example with real-world variables in Example 1 and also carried out a simple numerical experiment in Appendix C using a finite set of samples. However, designing an end-to-end pipeline is beyond the scope of this paper and is an interesting future research direction. This would involve collecting a real-world dataset, conducting rigorous causal discovery, designing a proper estimator based on the variable distribution, and applying various evaluations.
[1] Tian, Jin, and Judea Pearl. "A general identification condition for causal effects." In Proceedings of the Eighteenth National Conference on Artificial Intelligence (2002)
[2] Shpitser, Ilya, and Judea Pearl. "Identification of conditional interventional distributions." Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[3] Bareinboim, Elias, Jin Tian, and Judea Pearl. "Recovering from selection bias in causal and statistical inference." In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 2410-2416. 2014.
[4] Lee, Sanghack, Juan D. Correa, and Elias Bareinboim. "General identifiability with arbitrary surrogate experiments." In Uncertainty in artificial intelligence, pp. 389-398. PMLR, 2020.
[5] Jaber, Amin, Adele Ribeiro, Jiji Zhang, and Elias Bareinboim. "Causal identification under Markov equivalence: calculus, algorithm, and completeness." Advances in Neural Information Processing Systems 35 (2022).
------
> Comparison to other existing methods: The paper surely follows the id, c-id, S-Recoverability literature for related work, but still could benefit from a more extensive comparison with other latent variable models in causal inference, [1-3] to name a few...
Thank you for mentioning these works, especially Paper [2], which is more relevant to our problem.
Papers [1] and [3], however, seem to be more focused on causal discovery, which is the first step of the pipeline we described earlier. Nonetheless, as the reviewer suggested, we will include more related work.
-----
Finally, we noticed that the reviewer is a bit concerned about the "Soundness" and "Presentation" of the paper. We would appreciate it if the reviewer could provide further details so that we can improve the paper.
---
Rebuttal Comment 1.1:
Comment: Having read the authors' rebuttal and the comments from other reviewers, I find my questions adequately addressed. As a result, I increase my evaluation to 6. | Summary: This paper extends the sub-population causal effect identifiability (S-ID) problem to include latent variables by adapting classical graphical definitions such as connected-components and Hedges. It proposes a sound algorithm to compute causal effects in sub-populations with latent variables.
Strengths: 1. The paper is written well and easy to understand.
2. Examples in each section help to understand the underlying idea easily.
3. All the necessary background is discussed clearly.
Weaknesses: 1. Example 1 could be improved, because socioeconomic status can itself cause cardiovascular disease.
2. It would be good to include a subsection summarising any assumptions made.
3. It would be good to include some real-world use-cases benefiting from such setting of causal effect identification.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses section
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's suggestions and positive feedback.
---
> Example 1 could be a better one because socioeconomic status can cause cardiovascular disease.
We acknowledge that the causal graph might not be completely accurate - the primary goal of this example is to demonstrate the difference between ID and s-ID algorithms over a simple graph.
For the particular edge the reviewer mentions (from socioeconomic status to cardiovascular disease), if we simplify/replace socioeconomic status with income, it might be fair to assume that this effect is mediated through $X$, which is the medication choice.
----
> It would be good to include a sub section for summarising any assumptions made.
As the reviewer suggested, we will add a section that summarizes the problem setup and the corresponding assumptions. We will also add a table of notations to improve the clarity.
----
> It would be good to include some real-world use-cases benefiting from such setting of causal effect identification.
Thank you for your suggestion. While our main contribution is to establish the theoretical framework for addressing the s-ID problem in the presence of latent variables, we agree that providing more real-world examples in addition to Example 1 would be valuable for readers looking to apply our proposed method. Therefore, we will include another example in the final version.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I thank the authors for their response. I've read their response and I will stay with my score. | Summary: This paper extends the S-ID problem, which asks if a causal effect within a specific sub-population can be identified using only observational data from that group. The authors consider the scenarios where some variables are latent. They provide a sufficient graphical condition to determine whether a causal effect is S-ID and propose an algorithm based on this criterion. While the paper proves the algorithm's soundness, it suggests it might also be complete. Finally, they show that solving S-ID can solve a related problem called S-Recoverability.
Strengths: - The paper addresses an important problem in causal inference.
- The paper is very well-written and covers the prerequisites very well.
Weaknesses: See the Questions section below.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Although the work provides rigorous theoretical contributions, it would have been nice to evaluate how it would also work empirically, especially in a close-to-real-world scenario.
- There are many variables and notations used throughout the paper. A table summarizing these notations and their definitions could improve readability.
- How often do real-world problems satisfy the restrictive condition in Equation (8)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the proposed approach have not been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer and appreciate the positive feedback on the importance of the problem and the clarity of the presentation.
Below we answer the questions.
---
> Although the work provides rigorous theoretical contributions, it would have been nice to evaluate how it would also work empirically, especially in a close-to-real-world scenario.
Computing a causal effect numerically typically involves several steps:
- **Causal Discovery (a.k.a. causal structure learning)**: Learning the causal graph from the available data.
- **Causal Effect Identification**: This step focuses on determining whether the causal effect is identifiable and, if so, deriving a formula for it.
- **Numerical Estimation**: Given a finite set of samples, compute the expression derived in the identification phase numerically.
- **Evaluation**: This step measures and evaluates the quality of estimators proposed in the estimation step, which often includes sensitivity analysis.
Similar to [1-5], this paper focuses on addressing the identification part. Still, we provided an example with real-world variables in Example 1 and also carried out a simple numerical experiment in Appendix C using a finite set of samples. However, designing an end-to-end pipeline is beyond the scope of this paper and is an interesting future research direction. This would involve collecting a real-world dataset, conducting rigorous causal discovery, designing a proper estimator based on the variable distribution, and applying various evaluations.
[1] Tian, Jin, and Judea Pearl. "A general identification condition for causal effects." In Proceedings of the Eighteenth National Conference on Artificial Intelligence (2002)
[2] Shpitser, Ilya, and Judea Pearl. "Identification of conditional interventional distributions." Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[3] Bareinboim, Elias, Jin Tian, and Judea Pearl. "Recovering from selection bias in causal and statistical inference." In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 2410-2416. 2014.
[4] Lee, Sanghack, Juan D. Correa, and Elias Bareinboim. "General identifiability with arbitrary surrogate experiments." In Uncertainty in artificial intelligence, pp. 389-398. PMLR, 2020.
[5] Jaber, Amin, Adele Ribeiro, Jiji Zhang, and Elias Bareinboim. "Causal identification under Markov equivalence: calculus, algorithm, and completeness." Advances in Neural Information Processing Systems 35 (2022).
------
> There are many variables and notations used throughout the paper. A table summarizing these notations and their definitions could improve readability.
Thank you for your suggestion. We will add a table summarizing our key notations.
> How often do real-world problems satisfy the restrictive condition in Equation (8)?
The condition in Equation (8) specifies that following an intervention on $X$, the target variable(s) $Y$ should have the same distribution in both the sub-population and the entire population. In other words, after the intervention, $Y$ should not be impacted by the sub-population, which is a very stringent requirement.
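In symbols (our hedged reading of the description above; the exact statement of Equation (8) is in the paper), the requirement is that conditioning on membership in the sub-population does not change the post-interventional distribution of $Y$:
$$
P_{X}(Y \mid S = 1) = P_{X}(Y).
$$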
For instance, in the family of graphs where $Y$ is a parent of $S$, Equation (8) does not hold. Consequently, $P_{X}(Y)$ is not identifiable from $P (\mathbf{V} | S = 1)$, while $P_{X}(Y | S = 1)$ can still be identifiable from $P (\mathbf{V} | S = 1)$. | Summary: The paper presents a sound algorithm for checking the s-identifiability of causal effects under sub-populations. This work complements earlier work on s-ID by generalizing the causal graphs to allow hidden confounders (no causal sufficiency). Specifically, the paper introduces the notions of s-components and s-Hedge, in parallel to the classical notions of c-components and c-hedge, and derives theoretical results based on those. The main theorem (Theorem 5.1) summarizes the condition under which a causal effect is s-ID, and a detailed algorithm is also proposed for deriving an identifying formula. Moreover, a reduction from s-recoverability to s-ID is mentioned which provides yet another approach to solve the s-recoverability problem.
Strengths: - I think the problem is quite meaningful since selection bias can be common in the data collection process.
- It is great that the paper not only introduces novel notions such as s-components and s-hedges but also thoroughly reviews the classical notions of c-components and c-hedges, so we can make a comparison. The lemmas and theorems are also in parallel (but different) to the previous ones for classical identification in [Tian, Pearl], which makes these profound concepts easier to penetrate.
- I found the examples helpful, especially Examples 4 and 6, in aiding my understanding of definitions.
- In general, a hard work that contains valuable theoretical contributions.
Weaknesses: - It seems that the s-ID method is sound but not complete, but I guess the paper already contains enough contributions and the completeness part can always be the future work.
- More intuitions can be provided on the difference between s-ID and ID at the end of page 2 - it would be helpful to provide a more intuitive explanation for Example 1 (in addition to an explanation based on identifying formulas) for readers to see the importance of the problem.
- The definition of $Q[]$ seems to be ambiguous. In Section 2 last subsection, $Q[X]$ is defined as the interventional distribution $Q[X] := P_{x} (V \setminus X)$, but in Theorem 3.4 $Q[D]$ seems to mean $Pr_x(D)$. It may be helpful to clarify the formal definition of $Q[D]$.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Are there any insights on the difference between having $P^s(V)$ vs. having $P(V)$ for identifiability? For example, would more variables become dependent so they now belong to the same s-components when collecting data under $P^s(V)$?
- Is there any evidence (counterexample) proving that Algorithm 1 is not complete?
- Regarding the reduction from s-recoverability to s-ID, I'm wondering if there is any impact of this reduction besides theoretical interests. For example, will the reduction-based approach for s-recoverability be more computationally efficient?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: OK.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback and appreciate the positive comments on the contributions and presentation of the paper.
-----
> Are there any insights on the difference between having $P^s(V)$ vs. having $P(V)$ for identifiabilty? For example, would more variables become dependent so they now belong to the same s-components when collecting data under $P^s(V)$?
We confirm your intuition. The presence of selection bias ($S$) injects additional dependencies among variables. Consequently, the rules of do-calculus cannot be directly applied to $P^s$ because of the influence of $S$. As a result, the ID algorithm is no longer applicable to the s-ID setting.
To tackle this challenge, we defined s-components over the non-ancestors of $S$ to capture these additional dependencies among c-components.
We further defined s-Hedge based on the s-component and derived our identification results based on that.
-----
> Is there any evidence (counterexample) proving that Algorithm 1 is not complete?
We have not found any examples where the algorithm produces a false negative. In fact, we have evidence to support that the s-ID algorithm always returns "Fail" when the target causal effect is not s-ID. This suggests that there are no false negatives and that the algorithm is complete. We have tried to prove this conjecture, and we have succeeded in proving it for some specific cases. However, proving it in the general case has evaded us so far due to the complexity of s-component and s-Hedge structures and the positivity constraint. This is definitely a future work direction for us, but also for the community at large.
----
> Regarding the reduction from s-recoverability to s-ID, I'm wondering if there is any impact of this reduction besides theoretical interests. For example, will the reduction-based approach for s-recoverability be more computationally efficient?
The goal of our reduction was to demonstrate that the s-ID algorithm can also address the s-Recoverability problem; thus, Algorithm 2 might not be the optimal choice for the s-Recoverability problem. We also note that computational complexity is not a limitation for either approach. However, in cases where the causal effect is identifiable, our method could provide a different expression. Having different expressions for the target causal effect could potentially be beneficial for developing an estimator that numerically computes the causal effects using a finite set of samples. This can be further studied in future work.
----
> More intuitions can be provided on the difference between s-ID and ID at the end of page 2 - it would be helpful to provide a more intuitive explanation for Example 1 (in addition to an explanation based on identifying formulas) for readers to see the importance of the problem.
We thank the reviewer for the suggestion. We will include a more detailed discussion in the revised version, where we have an additional page.
----
### Regarding the definition of $Q[]$:
Thank you for pointing out this typo - we will fix it.
In a graph with observable variables $\mathbf{V}$, for each $\mathbf{D} \subseteq \mathbf{V}$ we have $Q[\mathbf{D}] = P_{\mathbf{V} \setminus \mathbf{D}}(\mathbf{D})$. $Q[\mathbf{D}]$ follows this definition in Theorem 3.4. | null
No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery | Accept (poster) | Summary: The paper investigates the limitations of popular UED approaches, such as PLR and ACCEL, demonstrating that they do not improve upon the Domain Randomisation baselines, where levels are randomly sampled. The author's main claim is that learnability does not correlate with the scoring functions MaxMC and PVL used by current UED approaches, leading to sub-optimal performance.
To address this issue, they propose Sampling For Learnability (SFL), a simple algorithm where levels are chosen to prioritize learnability. In the context of the paper, the learnability of a level is defined as $p \cdot (1-p)$, where p is the success rate of the agent on that given level.
The authors conduct experiments on three environments: MiniGrid, and single- and multi-agent JaxNav. As a metric, they propose the use of conditional value at risk (CVaR), evaluating on the α% of levels on which the agent performs worst, drawn from a randomly generated buffer of levels.
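For concreteness, the CVaR-over-worst-levels metric described here could be sketched as follows (an illustrative reconstruction, not the authors' code; the per-level returns and the choice of α are hypothetical):

```python
import numpy as np

def cvar_worst_alpha(level_returns, alpha=0.1):
    """Mean return over the worst alpha-fraction of evaluation levels."""
    sorted_returns = np.sort(np.asarray(level_returns, dtype=float))  # ascending: worst levels first
    k = max(1, int(np.ceil(alpha * len(sorted_returns))))
    return sorted_returns[:k].mean()

# Hypothetical per-level returns over a buffer of randomly generated levels.
returns = [1.0, 0.0, 0.5, 0.9, 0.2, 1.0, 0.7, 0.1, 0.8, 0.3]
print(cvar_worst_alpha(returns, alpha=0.2))  # mean of the two worst levels: 0.05
```

Lower CVaR values indicate worse robustness on the hardest tail of levels, which is why it is used to probe the agent's worst-case behaviour.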
The results show that SFL outperforms current UED approaches. Simultaneously, when it is possible to compute it, true regret outperforms SFL, indicating that the flaw of UED algorithms is indeed in the scoring function approximation. To summarize, the main contributions of the authors are:
- Demonstrate sub-optimality of current UED approaches, due to lack of correlation between their scoring function and learnability
- Propose a new algorithm, SFL, which prioritizes levels with high learnability
Strengths: - UED has indeed been seen to perform sub-optimally in practice, e.g., as evidenced in the JaxUED paper. It is thus an important problem to address, since limited work has been done on it.
- The investigation of the correlation between learnability and the scoring function is a good justification for the paper itself to exist, and gives a plausible reason for the poor performance of UED on some tasks.
- The SFL algorithm proposed is novel (as far as I know), simple, and easy to understand. The results obtained on the 3 environments they considered seem promising.
- CVaR seems a more principled and justified way to investigate the robustness of Auto-curricula algorithms
Weaknesses: - Learnability definition: why are environments that are solved exactly half of the time the most valuable (given the $p(1-p)$ function used to define it)? It seems an arbitrary choice that needs to be better justified through additional experiments. I do not agree that levels that are solved 5% of the time are as valuable as ones solved 95% of the time, for example.
- Many parameter choices are reported with no supporting justification (e.g., the hyperparameter $\rho$ for the algorithm)
- Experiments are limited in breadth. Given the purely empirical nature of the paper, I would expect far more experiments to show that SFL indeed outperforms the UED baselines. This includes more seeds for multi-agent JaxNav, and more environments. As of now, it is not possible to confirm that SFL is indeed better than UED algorithms.
- The writing is poor and sometimes confusing. Learnability is referenced well before being formally defined, the syntax of some sentences is poor, etc. From reading the paper, the writing seems to have been rushed.
Technical Quality: 2
Clarity: 1
Questions for Authors: - In Algorithm 1, $\rho$ is set to $0.5$. Hence, half of the levels used for updating the policy $\pi_\phi$ are ones with high learnability, and half are sampled at random. Why not set $\rho$ to a higher value? I would like to see a plot showing the performance of SFL while varying $\rho$, so as to have a better idea of how training in 'more learnable' environments correlates with better performance
- In Alg.1, what are the performances if instead of sampling uniformly from the high-learnable levels $\mathcal{D}$, you select levels in decreasing order of learnability?
- What is the average/median learnability of the randomly created levels in Alg. 2? What about the top-K ones? It is important to report this so that one can be sure the levels you are using are indeed ones with high learnability. As far as I can infer from the paper, it could still be that most of the randomly created levels have low learnability.
- Can you provide more details on what you did to make current UED methods work better? The poor performance of UED algorithms was already highlighted in the JAXUED paper, but I am not sure if this is just because they were not optimized fully, e.g. via a more extensive hyper-parameter search
- Did you try using different definitions of Learnability? How did they perform?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: No ethical limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and useful comments, particularly highlighting that we are focusing on an "important problem", and that our algorithm is "simple, and easy to understand". Please find our responses below.
# Weaknesses
## Different Definitions of Learnability
Intuitively, we justify our definition as follows (in a goal-based setting where there is only a nonzero reward for reaching the goal):
- $p$ represents how likely the agent is to obtain positive learning experiences from a level.
- $1-p$ represents the maximum potential improvement the agent can make on that level.
- Multiplying these yields (probability of improvement) * (improvement potential), i.e., expected improvement.
$p(1-p)$ can also be seen as the variance of a Bernoulli distribution with parameter $p$, i.e., how inconsistent the agent's performance is.
We have added this explanation to the updated manuscript to increase clarity.
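A minimal sketch of this score and the resulting level filtering (our illustration of the idea, not the paper's implementation; the success rates below are hypothetical):

```python
import numpy as np

def learnability(p):
    """Bernoulli variance p * (1 - p): peaks at p = 0.5,
    and is zero for always-solved or impossible levels."""
    p = np.asarray(p, dtype=float)
    return p * (1.0 - p)

# Hypothetical estimated success rates for a batch of sampled levels.
success_rates = np.array([0.0, 0.05, 0.5, 0.95, 1.0])
scores = learnability(success_rates)  # [0.0, 0.0475, 0.25, 0.0475, 0.0]

# Keep the K most learnable levels (the subset the agent trains on).
K = 2
top_k_indices = np.argsort(scores)[-K:]
```

Note how levels the agent always solves ($p=1$) or never solves ($p=0$) receive a score of exactly zero and are filtered out.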
To confirm this choice, we ran a set of experiments on single-agent JaxNav where we represent the learnability function as a piecewise quadratic. During this test, the peak learnability value remained at 0.25, but we varied the success rate at which the peak occurs. The results are in Figure 4(c) of the attached PDF; overall, a peak around $p=0.5$ performs best, and moving away from this reduces performance slightly. However, as performance is not dramatically worse, this indicates that the most important aspect of learnability is rejecting levels with perfect or zero success rates.
## Hyperparameter choices / Ablations
Thank you for raising this point. We have tuned all of the hyperparameters, and have run ablations on SFL. **Our conclusions remain unchanged (i.e., that SFL outperforms UED)**, but we have more insights now into which parameters are most important for SFL. Furthermore, we find that SFL is less sensitive to hyperparameters than PLR or ACCEL. The new tuned results are in Figure 1, and the ablations are in Figure 2 of the 1-page PDF.
## More seeds and environments
Thank you for raising this point. We have run more seeds for multi-agent JaxNav, bringing the total count to 5, and our results have not changed. We have also included results on an additional environment, XLand-MiniGrid, as described in our global response. Here, SFL also outperforms PLR and DR.
We now **have results in four environments demonstrating SFL's superiority compared to SoTA UED methods**. We further compare against several ablations. We believe these results are sufficiently broad to showcase SFL's performance.
## Writing
Thank you for raising the writing quality issues, and we apologise for letting them slip through. We have carefully reviewed the submitted manuscript and have fixed these issues for our updated version.
# Questions
## Different values of the sampling ratio $\rho$
We agree that this is an important question and we have tested various values of $\rho$ on single-agent JaxNav with results shown in Figure 2 of the attached PDF. Increasing $\rho$ beyond 0.5 increases performance slightly.
Due to the high maximum fill % used for JaxNav, we hypothesise that the randomly generated levels are able to present challenging control and navigation problems.
While increasing $\rho$ makes the agent train on more learnable levels, it may also decrease the diversity of the levels seen by the agent.
## Decreasing order of learnability
Thank you for this suggestion, we agree it is a useful question to answer. We have trialled selecting levels in decreasing order of learnability rather than randomly and, as illustrated in Figure 1 of the attached PDF, this has roughly the same effect as reducing the buffer size.
This makes sense, as by reducing the buffer size or selecting in decreasing order, we are restricting the range of levels that can be chosen to only the highest-learnability ones.
## Average vs top-k learnability
For 45 iterations for single-agent JaxNav, we logged the average and median learnability of the entire set of sampled levels, and the subset we use to train on, with results below. In short, **the entire set of sampled levels generally has low learnability, whereas the top 100 levels have much higher values**. This shows that SFL is filtering out levels with very low learnability (i.e., levels that the agent either already perfectly solves or that are currently too hard). This result further explains why we outperform existing methods. We appreciate you raising this point and have added this analysis to the manuscript.
| | Mean All | Median All | Mean Subset | Median Subset |
|------------------|------------|------------|-------------|---------------|
| Value | 0.014 | 0.0 | 0.198 | 0.195 |
## What we did to make UED better
For the existing UED methods, we used the open-source implementations provided by JaxUED and conducted a thorough sweep of hyperparameters, as explained in our general response. Additionally, as JaxUED showed, and as we have also found, DR can be surprisingly competitive if the PPO parameters are tuned appropriately. Indeed, in the multi-agent setting of JaxNav, DR matches or outperforms all existing UED methods during CVaR evaluation, albeit with the UED methods using hyperparameters tuned on the single-agent JaxNav setting.
## Different learnability definitions?
Following your suggestion, we have experimented with different peak values, as outlined above. We have not tried an altogether different definition because ours followed from the intuition explained above. However, our code will be open-sourced, allowing the community to easily try alternative definitions.
# Conclusion
Thank you once again for your detailed review. We hope our responses have addressed your concerns, and we welcome any further discussion or questions you might have. If we have successfully alleviated your concerns, we kindly ask you to consider updating your score to recommend accepting our paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the answer
Comment: Thanks for your detailed answer. I took some more time to think about the points you raised:
**Different Definitions of Learnability**: I see that rejecting impossible or already completely solved levels is indeed the biggest factor in increasing performance. I would still argue, however, that I find the learnability definition not very principled - if what is mentioned in the first sentence is true, many other scoring functions which do the same in those extreme cases, and which choose a better heuristic in the remaining scenarios, will outperform SFL. Isn't that the natural thing to try?
**Hyperparameter choices / Ablations**: Can you clarify the sentence 'we have more insights now into which parameters are most important for SFL'? Which are these hyperparameters, and more specifically, why is that the case?
**More seeds and environments**: Thanks for adding more seeds and an additional environment.
**Writing**: can you provide the most significant changes done to the writing?
**Different values of the sampling ratio**: You say *increasing makes the agent train on more learnable levels, it may also decrease the diversity of the levels seen by the agent* - do you have any insight into the structure of the levels created by SFL, especially when $\rho$ is increased to 1? Many similar plots have been produced by other similar papers, such as Fig. 3 in the ACCEL paper (https://arxiv.org/pdf/2203.01302).
**Average vs top-k learnability**: You should also compare this with the learnability of the levels in the buffers of PLR and ACCEL, for example, so that one can have a complete picture of where the advantage of using SFL is coming from.
**What we did to make UED better** and **Different learnability definitions?**: Thanks for your answer
In general, I agree with the authors that SFL outperforms UED methods in the environments considered. However, besides the remaining doubts above, I am still on the fence about this work. My main criticism is that a new method (SFL) is proposed, but there is no extensive justification of why it works. I believe it would be good to have more insight into the structure of the levels SFL prioritizes, and whether there are other gaps SFL is actually filling besides excluding impossible or completely solved levels. Is learnability itself better, or would any method that excludes the pathological types of levels I just mentioned actually perform similarly? I believe this is an important point to address.
Sorry for the delay in the answer, and I understand if you are not able to produce a response in time!
---
Reply to Comment 1.1.1:
Title: Response 1/2
Comment: Dear reviewer, thank you for your response, your continued engagement, and your commitment to improving our paper! Please find our responses below.
# Hyperparameter choices / Ablations
First, as illustrated in Figure 2 in the rebuttal PDF, SFL is robust to all of its hyperparameters, and suboptimal choices do not lead to catastrophic failure.
That being said, the most important hyperparameters are:
- $N$: How many levels to sample. Sampling more levels increases performance, as more levels give SFL a larger sample to draw high-learnability levels from.
- $K$: Buffer size. In JaxNav, we find a small buffer size is beneficial, while in Minigrid it is better to have a slightly larger one. This could relate to how difficult the environment is (and, correspondingly, how long it takes to learn a particular level). As JaxNav is harder than Minigrid (it also involves low-level continuous-control obstacle avoidance in addition to maze solving), training on each level more times may be beneficial. By contrast, since Minigrid is easier, the number of episodes required on a level may be far smaller.
- $T$: The buffer update period. If this value is too large, then performance degrades. This is because the learnability of the buffer, and therefore the usefulness of the levels, decreases as the agent trains on it.
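As a concrete (and deliberately simplified) illustration of what $N$ and $K$ control, here is a minimal Python sketch of the sample-then-select step; `sample_level` and `success_rate` are toy stand-ins, not our actual JaxNav implementation:

```python
import random

def learnability(p):
    """Learnability score p * (1 - p): zero for unsolvable (p = 0) or
    already-solved (p = 1) levels, maximal at p = 0.5."""
    return p * (1.0 - p)

def select_buffer(sample_level, success_rate, n=5000, k=500):
    """Sample n candidate levels, score each by the learnability of its
    estimated success rate, and keep the top k as the training buffer."""
    candidates = [sample_level() for _ in range(n)]
    candidates.sort(key=lambda level: learnability(success_rate(level)), reverse=True)
    return candidates[:k]

# Toy illustration: a "level" is a float in [0, 1] that we treat directly
# as the agent's success rate on that level.
random.seed(0)
buffer = select_buffer(sample_level=random.random, success_rate=lambda p: p)
assert len(buffer) == 500
# Selected levels cluster around p = 0.5, where p * (1 - p) peaks.
assert all(0.3 < p < 0.7 for p in buffer)
```

In the real algorithm, `success_rate` would be estimated from policy rollouts, and the buffer would be refreshed every $T$ iterations.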
# Writing
We kept the structure of our manuscript largely the same and made several local, low-level writing changes and fixes to ensure the text flows better. One structural change we did make is to define learnability earlier on, in Section 4.1. Finally, we also added further explanations and descriptions based on the reviews from yourself and the other reviewers.
# Different values of the sampling ratio
We agree such a plot would be useful, and will provide one in our revised manuscript.
Since we are unable to share figures during the discussion period, we report the shortest path, number of walls, and solvability, averaged over training in the table below.
We find that SFL has marginally longer shortest paths and marginally fewer walls. The SFL levels are also considerably more solvable.
| Method | Shortest Path (Mean) | N Walls (Mean) | Solvable (Mean) |
|:-----------|:-----------------------|:-----------------|:------------------|
| PLR (PVL) | 4.74 (0.06) | 25.59 (0.19) | 0.89 (0.01) |
| PLR (MaxMC) | 7.03 (0.49) | 22.63 (0.21) | 0.89 (0.02) |
| SFL | 8.12 (0.02) | 21.98 (0.43) | 1.00 (0.00) |
Aside from solvability, there is no noticeable difference in these metrics between PLR and SFL, despite SFL significantly outperforming PLR. Qualitatively, we find that in JaxNav, levels with high learnability tend to involve a lot of turning and intricate obstacle avoidance (as opposed to long paths). As such, the number of walls and the shortest path length do not fully capture a level's difficulty. We will add illustrative examples of generated levels to our Appendix to demonstrate this point.
# Average vs top-k learnability in UED:
Thank you for suggesting this experiment; we have run the same analysis for ACCEL and PLR in JaxNav. We further compute the correlation between the score (e.g. MaxMC or PVL) and learnability for all levels in the buffer. We find that:
- **The average learnability of levels in the PLR/ACCEL buffer is very low.** This is also true when selecting only the levels with the top 50 PVL/MaxMC scores.
- There is no significant correlation between PVL/MaxMC and learnability.
- Most of the levels selected by PVL and MaxMC can consistently be solved by the agent.
The table below reports the learnability and success rates for levels within the PLR/ACCEL buffers averaged over training. At each evaluation step, we calculate the average and median values for the entire buffer and then average these values over training. Finally, we report the mean and standard deviation across three different seeds.
| Method | Learnability (Mean) | Learnability (Median) | Success Rate (Mean) | Success Rate (Median) |
|:-------------|:----------------------|:------------------------|:-----------------|:-------------------|
| PLR (PVL) | 0.01 (0.00) | 0.00 (0.00) | 0.85 (0.01) | 0.96 (0.00) |
| PLR (MaxMC) | 0.02 (0.00) | 0.00 (0.00) | 0.85 (0.03) | 0.95 (0.04) |
| ACCEL (MaxMC) | 0.02 (0.00) | 0.00 (0.00) | 0.93 (0.01) | 0.97 (0.02) |
| ACCEL (PVL) | 0.01 (0.00) | 0.00 (0.00) | 0.94 (0.02) | 0.97 (0.02) |
This result further strengthens our findings (shown in Figure 2 of the original manuscript) that current UED score functions do not correspond to learnability. Instead, most levels in the UED buffers can already be solved 100% of the time. By contrast, as shown in our rebuttal, SFL consistently selects levels with high learnability.
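For reference, the kind of score-vs-learnability correlation computed above can be sketched as follows; the per-level numbers are toy stand-ins, not our measured buffer statistics:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy per-level buffer statistics: regret-style scores vs. learnability values.
scores = [0.9, 0.1, 0.5, 0.7, 0.3, 0.8, 0.2]
learnabilities = [0.00, 0.05, 0.25, 0.10, 0.20, 0.02, 0.15]
r = pearson(scores, learnabilities)
assert -1.0 <= r <= 1.0
```

In our actual analysis, `scores` would be the PVL or MaxMC values of buffer levels and `learnabilities` their measured $p(1-p)$ values.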
---
Rebuttal 2:
Title: Response to authors
Comment: Dear Authors,
Thanks for your detailed response and additional experiments.
**Different values of the sampling ratio**: Thanks for producing these results. In the final manuscript, I would be interested in seeing other possible features of the environment that are common in levels sampled with SFL.
**Average vs top-k learnability in UED**: This result is interesting. I would consider inserting it appropriately in the appendix of the final version of the paper.
**Justification for SFL & Different Definitions of Learnability**: Thanks for providing additional examples of learnability. This addresses the issues I raised.
Given the above discussion, I believe most of the issues I raised were appropriately addressed by the authors. I will raise my score accordingly.
Thanks again for the detailed answers and the additional experiments, and good luck with the paper.
---
Rebuttal Comment 2.1:
Title: Response to ycoo
Comment: Dear Reviewer,
Thank you for your timely response to our comments. We believe we have addressed the concerns you raised in detail, including through additional experiments that confirm and strengthen our original findings. We also appreciate the increase in your score, but we were hoping for more substantial support given your positive response.
---
Summary: This experimental paper proposes a UED method (SFL) for JaxNav, a continuous single- and multi-robot navigation task in a partially observable setting. The authors document the shortcomings of UED methods (Domain Randomisation, Prioritised Level Replay, and ACCEL) in this partially observed, continuous-action, continuous-state setting. The authors demonstrate that the scoring mechanism used by these UED methods is misguided. Their method, SFL, uses a "learnability score" that focuses learning on levels for which agents achieve success rates closer to 50%. To better assess robustness and generalization, the authors develop an empirical CVaR measure of success for the evaluation and comparison of methods.
Strengths: Scientists who are interested in developing and implementing the RL agents in real life will find this experimental paper important.
The paper is technically sound and provides simulation support for the claims. The authors provide a nice discussion of the weaknesses of SFL (proposed version works only for binary outcome, deterministic environment). I have not gone over the github site; aside from the points made below, replicability is good.
The learnability score is original (obvious after reading the paper, but perhaps not before). This score nicely takes advantage of the JaxNav environment which provides fast training of an RL agent. Related work is adequately cited and it is very clear how SFL differs from the 3 UED methods discussed.
Weaknesses: See questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: Lines 15-17. Suggest to delete spurious statements such as “We had tried our best to make current UED methods work for our setting before going back to the basics and developing our new scoring function, which took a few days and worked out-of-the-box. We hope our lessons and learnings will help others spend their time more wisely.” Don’t complain!
The authors should make an effort to assist readers familiar with RL but not ACL methods by providing details in the appendix. For example in line 175, Algorithm 1 does not provide information on how \phi is updated. The update algorithm could be provided in the appendix.
Figure 2 includes statements about “learnability” prior to a definition of learnability.
Lines 154-157. Learnability has not yet been defined, so statements about slight/no correlation with learnability are vacuous. It is not clear to the reader whether learnability in these lines/Figure 2 is the same as the definition of learnability given later (and used by SFL).
Lines 133-135 might better go in the later section on weaknesses/limitations.
Line 214. The authors do not define “solvable.” The reader needs to know how you are operationalizing this term.
Line 274-5. Unclear what “perfect regret” means.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are nicely discussed. It would be cool if the authors could comment on how SFL might be generalized to continuous outcomes and stochastic environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your positive review, particularly highlighting that our paper is "technically sound" and that the learnability score is "original". Please find our responses to your comments below.
> Lines 15-17. Suggest to delete spurious statements such as “We had tried our best to make current UED methods work for our setting before going back to the basics and developing our new scoring function, which took a few days and worked out-of-the-box. We hope our lessons and learnings will help others spend their time more wisely.” Don’t complain!
We agree and have removed this from our manuscript.
> The authors should make an effort to assist readers familiar with RL but not ACL methods by providing details in the appendix. For example in line 175, Algorithm 1 does not provide information on how \phi is updated. The update algorithm could be provided in the appendix.
Thank you for your suggestion; we agree that we should provide a more detailed background on ACL in our appendix. On line 175, $\phi$ is updated using any RL policy-learning algorithm (we use PPO) and as such is not affected by the ACL process; we will make this distinction clearer in our updated manuscript.
> Figure 2 includes statements about “learnability” prior to a definition of learnability
> Lines 154-157. Learnability has not yet been defined so statements about slight/no correlation with learnability are vacuous. It is not clear to reader whether learnability in these lines/Figure 2 is the same as the definition of learnability given later (and used by SLR).
Thank you for pointing this out! We apologise for this oversight and have rectified it by defining learnability earlier in our paper.
> Lines 133-135 might better go in the later section on weaknesses/limitations.
We appreciate this suggestion and have implemented this in the updated manuscript.
> Line 214. The authors do not define “solvable.” The reader needs to know how you are operationalizing this term.
We have added an explanation to the manuscript; thank you for bringing it to our attention. "Solvable" in our paper means that the goal state can be reached in a particular level, i.e., the level is not impossible to complete.
> Line 274-5. Unclear what “perfect regret” means.
Regret is defined as the difference in return between the optimal policy on a level and the current policy. However, since this is intractable to compute in practice, most methods use approximations (such as PVL and MaxMC). We use "perfect regret" to refer to the exact computation of regret, which is sometimes possible (e.g., in gridworlds we can compute the optimal return). We have made this clear in our updated manuscript.
> It would be cool if the authors could comment on how SFL might be generalized to continuous outcomes and stochastic environments.
Thank you for raising this point as an important direction for future work! We have added this discussion to our updated manuscript; below are some possible options to explore.
- In a non-binary-outcome domain, we could potentially reuse the intuition that $p(1-p)$ can be seen as the variance of a Bernoulli distribution; therefore, in a continuous domain, an analogous metric would be the variance of rewards obtained by playing the same level multiple times.
- For stochastic environments, we could form an extended level space, where the random seed and the level $\theta$ are bundled into $\tilde \theta$. In this way, we could turn a stochastic environment into a deterministic one, and since we assume simulator access already, this is not a significantly stronger assumption.
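A small sketch of the first option, where `continuous_learnability` is a hypothetical helper name illustrating the variance-based analogue:

```python
import statistics

# Binary-outcome case: p * (1 - p) is exactly the variance of a
# Bernoulli(p) outcome, since Var[X] = E[X^2] - E[X]^2 with X in {0, 1}.
p = 0.3
samples = [1] * 3 + [0] * 7  # empirical Bernoulli sample with mean p = 0.3
assert abs(statistics.pvariance(samples) - p * (1 - p)) < 1e-9

# Hypothetical continuous analogue: score a level by the empirical
# variance of returns obtained by playing it several times.
def continuous_learnability(returns):
    return statistics.pvariance(returns)

# A level with varied outcomes scores higher than one the agent
# handles identically every time.
assert continuous_learnability([0.0, 0.5, 1.0]) > continuous_learnability([0.9, 0.9, 0.9])
```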
# Conclusion
Thank you again for your helpful review, we hope we have addressed your comments satisfactorily, and welcome further discussion.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your response. I will hold to the rating of 7.
---
Summary: This work proposes a new Unsupervised Environment Design (UED) method, called Sampling for Learnability (SFL), developed for navigation tasks. SFL upsamples environment configurations with high learnability scores, i.e., p*(1-p), where p is the success rate in a specific configuration. The paper highlights the failure of some state-of-the-art (SOTA) UED approaches on such tasks and argues that, unlike SFL, they cannot approximate learning agents' regret. The authors evaluate SFL in JaxNav, a JAX-based single- and multi-agent navigation benchmark they introduce, and in MiniGrid. Their empirical results show that SFL is robust and can outperform Domain Randomisation (DR) and SOTA UED approaches in JaxNav and MiniGrid. The paper demonstrates these outcomes by following two evaluation protocols: 1) a risk-based protocol that the authors introduce, computing the conditional value-at-risk (CVaR) of the distribution of success rates over randomly sampled levels, and 2) a protocol common in the UED literature that evaluates performance/success on hand-designed test sets, focusing on complex yet solvable configurations.
Strengths: - This paper is well-written, clearly expresses the motivation and the gap in the literature, and illustratively analyses the failure of existing UED methods in navigation tasks.
- Their contribution is a simple yet novel UED approach called SFL that explicitly addresses this failure mode. Their empirical results through two evaluation protocols highlight that SFL is robust (in terms of CVaR-based metrics) and outperforms existing approaches in expected return/success.
- I agree with the authors' claim that existing UED methods claim robustness yet fail to quantify it accurately. Hence, the robustness evaluation protocol utilized in this work may be considered new in the UED literature.
Weaknesses: - Section 4.1 analyses MaxMC and PVL, popular UED score functions, in terms of their predictiveness of learnability. However, there is no demonstration of a similar analysis for SFL. Although SFL's scoring function is intuitive and it is not as challenging to guess what the scoring would look like, including such an illustrative comparison would support the claims in this paper.
- The empirical results according to the proposed evaluation protocol for robustness (Figures 3a, 5a, and 7a) show that SFL outperforms the rest of the evaluated methods for alpha < 1 (100%). However, except in the multi-agent case (Fig 5a), the expected success rates indicate marginal or no improvement. In MiniGrid, the difference between DR and SFL drops quickly as alpha gets above 0.1 (10%). This is likely due to SFL not doing so well in easier levels compared to other baselines. Although this is expected, as the proposed metric results in upsampling learnable environments, not easy ones, I suggest a comparison of results in easy levels to conclude whether this is the case or not.
Technical Quality: 3
Clarity: 3
Questions for Authors: Table 4 showcases compute times for each evaluated approach. Most of these approaches have similar components, except maybe SFL's extra rollout phase. So why is SFL one of the fastest?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The major limitation of the proposed approach is the rollouts needed to compute the learnability score of randomly sampled levels. Section 4.2 indicates that 5000 levels are sampled, and n-step rollouts, where n is not specified, are generated to compute the scores. It would be more informative if the amount of computation and time spent in this phase were reported (in comparison to the usual curriculum learning task sampling phase). In addition, I wonder how SFL is one of the fastest in the single agent nav environment, as reported in Table 4 in Appendix F, despite having this additional phase.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, especially for highlighting our "simple yet novel UED approach" and that our paper is "well-written".
Below, we address the issues raised in the review:
# Correlation analysis for SFL
>Section 4.1 analyses MaxMC and PVL, popular UED score functions, in terms of their predictiveness of learnability. However, there is no demonstration of a similar analysis for SFL. Although SFL's scoring function is intuitive and it is not as challenging to guess what the scoring would look like, including such an illustrative comparison would support the claims in this paper.
We agree such analysis would be helpful, and we have added this to our updated manuscript. Due to space constraints we have not included this in the attached rebuttal PDF.
# Easy Levels
To assess performance on easy levels, we have run our evaluation procedure over 10,000 uniformly sampled levels with fewer obstacles than usual. For JaxNav, we used a maximum fill percentage of $\leq 30\%$, half of the standard 60%. Meanwhile, for Minigrid, we use a maximum of 30 walls instead of 60. These levels are therefore generally easier than the levels we evaluated on in the main paper. Results are reported in Figure 4(a,b) of the attached PDF.
On JaxNav, SFL still demonstrates a significant performance increase, while on Minigrid all methods are very similar. Due to the challenging dynamics of JaxNav, even levels with a small number of obstacles can present difficult control and navigation problems, meaning Automated Curriculum Learning (ACL) methods (such as SFL) still lead to a performance differential over DR. Meanwhile, in Minigrid, due to its deterministic dynamics, difficulty is heavily linked to the obstacle count, as more obstacles allow for more complex mazes. As such, DR is competitive with ACL methods in settings with fewer obstacles.
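For clarity, here is a minimal sketch of the CVaR-style aggregation underlying this evaluation protocol, using a toy uniform distribution of per-level success rates rather than real results:

```python
import random

def cvar(success_rates, alpha):
    """Mean success rate over the worst alpha-fraction of evaluated levels;
    alpha = 1.0 recovers the ordinary mean over all levels."""
    ordered = sorted(success_rates)
    k = max(1, int(alpha * len(ordered)))
    return sum(ordered[:k]) / k

# Toy example: per-level success rates over 10,000 sampled levels.
random.seed(0)
rates = [random.random() for _ in range(10_000)]
# Smaller alpha focuses the metric on the hardest levels.
assert cvar(rates, alpha=0.1) < cvar(rates, alpha=0.5) < cvar(rates, alpha=1.0)
```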
# SFL runtime speed
Thank you for raising this important point, as it highlights how SFL takes advantage of parallel computations on the GPU.
Looking at a single iteration (including training + eval) in Minigrid on an L40S GPU, the time breakdowns are as follows:
| | PLR | SFL |
| -------------------- | ----- | ----- |
| Train Step | 37.5s | 35s |
| Get Learnable Levels | 0 | 2.2s |
| Eval Step | 0.7s | 0.7s |
| Total | 38.2s | 37.9s |
We note that the SFL rollouts are fast for two reasons:
- We aggressively parallelise them, running up to 5000 environments in parallel, which takes about the same time as running only hundreds in parallel.
- We do not compute any gradients for these transitions.
The upshot is that they take significantly less time than the actual training step. Furthermore, UED's training step is more complex than SFL's, since it must maintain a buffer of levels, compute the scores during training, and potentially update the buffer.
For JaxNav, obtaining the learnable levels takes a little longer, but still less time than our logging. Due to this, slight differences in logging between PLR and SFL masked this additional computational cost. To better control for this, we measured how long SFL takes vs PLR in the absence of logging. We find that SFL is about 6% slower than PLR, despite running all of the additional rollouts. In Figure 1(a) of the attached PDF (and our updated manuscript), **we have provided compute-time-matched results** for single-agent JaxNav (where SFL runs for 6% fewer timesteps). Despite running for fewer timesteps, SFL still significantly outperforms UED methods. SFL on Minigrid runs as fast as, or slightly faster than, PLR.
We apologise for the oversight in not specifying how long our rollouts are. We use a consistent rollout length of 2000 steps and have updated this in our manuscript.
# Conclusion
We hope we have addressed the reviewer's comments and are happy to discuss any of these points further. We would further ask if our responses have addressed the reviewer's concerns, and that they consider increasing their support for our paper.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for your effort in responding to my comments and questions and providing new results that support the validity of your work.
My concerns have been thoroughly addressed, so I'll raise my score from 6 to 7.
---
Summary: The paper introduces a new metric based on solve rates for evaluating the learning potential of tasks in a multi-task RL environment. It also presents a new evaluation protocol based on worst-case performance to better characterize the robustness of different methods. The work uses this protocol to compare the proposed method with several Unsupervised Environment Design baselines and concludes that it outperforms them. The authors also find that their UED baselines perform worse than Domain Randomization under this new evaluation metric, and conduct additional experiments to confirm and analyze these results.
Strengths: The paper includes many contributions, including a new method, a new evaluation protocol, and new benchmarks of existing methods. These contributions are open sourced and well documented in the appendix to support reproducibility. The work attempts to identify failure modes of a large class of regret-based UED algorithms which could help to move further ACL research toward more promising directions. These methods are effective and widely applicable, but their generalization outside of a small set of baselines has not been thoroughly studied. This work will become increasingly important as RL research moves on to more complex environments.
Weaknesses: The main weakness of the paper is the claim that domain randomization and their new heuristic outperform all UED algorithms, which is not entirely convincing from the results. The PLR hyperparameters used in this work are different from [1] and [2], for example the prioritization, temperature, and number of edits are all different. PLR in particular has several hyperparameters that need to be tuned for new domains, such as the buffer size, sampling temperature, and staleness coefficient. The authors do not explain whether they tuned the hyperparameters for the baselines or how they tuned the hyperparameters for their own method. I think even a small grid search over reasonable parameters that worked in other domains would help to make the comparisons throughout this paper more convincing.
Another concern is that this work does not include any standard episodic return plots, which is the main metric of comparison in RL. Without these plots, it is difficult to tell whether their method truly outperforms the baselines, or whether their baselines are properly tuned. For instance, if the SFL algorithm performs better according to the CVaR evaluation but not on a standard test return plot, then it's debatable which method is more useful. That being said, [2] does compare the mean evaluation solve rate of UED baselines on Minigrid, and finds that Accel and PLR both significantly outperform DR. In this work, PLR severely underperforms DR using the same metric in Figure 3. This seems to be strong evidence that its hyperparameters or implementation are not correct.
Even if we assume that the baselines are working as intended, PLR and SFL have many differences, which makes it unclear what change is actually leading to better performance. I think this work would benefit from additional experiments using the learnability metric in SFL as the prioritization metric in PLR, and possibly the alternative: using PVL and MaxMC as the selection metrics in the full SFL algorithm.
Overall the writing and presentation were quite good, but there were a few notable issues. The paper somewhat conflates its description of PLR with Robust PLR in Section 2.2.1. PLR trains on randomly sampled levels, while Robust PLR evaluates randomly sampled levels and only trains on levels from its replay buffer. Also, "learnability" is not clearly defined until Section 4.2, after it has been referenced many times. This is acceptable when discussing "intuitive notions of learnability" in a vague sense, but not when making statements such as "no correlation exists between learnability and regret score".
[1] Jiang, Minqi, et al. "Replay-guided adversarial environment design." Advances in Neural Information Processing Systems 34 (2021): 1884-1897.
[2] Parker-Holder, Jack, et al. "Evolving curricula with regret-based environment design." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Why did you not simply use your heuristic instead of PVL or MaxMC in prioritized level replay?
* Why does there appear to be a discrepancy between the UED baseline solve rates reported in this paper and the ones reported in [2]?
* On line 214 you describe sampling 10,000 solvable levels to evaluate on. Are these unseen levels (at least with high probability) or are they sampled from the training set?
* You mention in the Limitations section that your method is restricted to Jax-based settings. Is the method limited to those settings, or just your particular Jax-based implementation?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately discuss the limitations in the paper and throughout the work. Their SFL method appears to be less general than the UED baselines they compare against, but performs much better on the domains tested in this work. Simple methods on new benchmarks can serve as stepping stones to more general methods, so I do not believe this impacts their contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Dear reviewer, thank you for your thorough review and helpful suggestions! We also appreciate you mentioning that our paper "includes many contributions" and "could help to move further ACL research toward more promising directions." Please find our responses and changes below.
# Hyperparameters and UED underperformance
As rightfully requested by the reviewer, we have performed extensive hyperparameter tuning for all methods (excluding DR). Using these new, optimised hyperparameters (see Figure 1 in the attached 1-page PDF), **UED outperforms DR** (in Minigrid and Single-agent Jaxnav) but **SFL still outperforms UED**. For DR, we use the PPO hyperparameters as discussed in the main response, and did not tune it further, as it has no other relevant hyperparameters. Please see our main response for details regarding how we performed our sweep.
Furthermore, when comparing our results to previous work, please note that the original Robust PLR paper uses 25 walls for Minigrid. However, they also show that changing the number of walls can drastically alter results; for instance, in Appendix C.1.2, they show that using 50 walls causes DR to be competitive with UED. Following Minimax [1] and JaxUED [2], we use 60 walls. Therefore, the results of our work cannot be directly compared with those of the original Robust PLR and ACCEL papers.
[1] Jiang, et al. "Minimax: Efficient Baselines for Autocurricula in JAX."
[2] Coward, et al. "JaxUED: A simple and useable UED library in Jax."
# Episodic Return Plots
We primarily report success rate on the set of held-out levels during training, rather than episodic return, because in these goal-oriented domains success rate is a more intuitive and clearer marker of performance. This is because the actual environment reward may include reward-shaping terms that do not directly correspond to our ultimate aim of reaching the goal (but are nonetheless necessary to learn a policy).
We also find that the average return in all our domains is closely tied to the success rate. This is illustrated in Figure 5 of the manuscript, where 5(c) reports success rate while 5(d) reports return (note: there is a naming error on the y-axis of Figure 5(d)).
**Therefore, the performance trends illustrated in 3(b), 5(c), and 7(b) are mirrored in the episodic return throughout training.** We have clarified this in the revised manuscript and have included reward curves in the appendix.
# Learnability as PLR score function and PVL as SFL's metric
Thank you for suggesting this; we agree that this is an important ablation for understanding the source of improvement, and we have included these results in the attached PDF (Figure 3).
We ran this ablation for both Minigrid and single-agent JaxNav, and in both domains, **SFL + Learnability outperforms all other combinations**. Using learnability with PLR/ACCEL has a slight positive effect for PLR in JaxNav and a negative effect in Minigrid. Using PVL as the score function inside our SFL algorithm performs worse, except when we have a large buffer of levels in JaxNav.
Some intuition for why this is:
- In the large buffer case for PVL on JaxNav, the training levels are more random, since we also include levels with lower scores. Indeed, with a small buffer, PVL performs very poorly, a result in line with our analysis in Figure 2 of the submitted manuscript as PVL fails to predict the frontier of learnability accurately.
- Conversely, a large buffer results in a set of levels that is slightly biased towards higher success rates, as illustrated in Figure 2 of the submitted manuscript (i.e., PVL correlates slightly with success rate). This bias helps screen out some unsolvable levels using the scoring function, a feature of regret-based UED algorithms [3]. Additionally, our observations show that random JaxNav levels often provide a good learning signal, as demonstrated by a sampling ratio $\rho$ of 0.5 performing similarly to 1.0 in the SFL ablations shown in Figure 2 of the 1-page PDF. Therefore, SFL-PVL with a large batch of levels creates a learning curriculum that leads to decent performance, although it is still inferior to SFL with learnability.
**The results of this ablation suggest that the improvements due to SFL can be attributed to both the learnability score function *and* our improved sampling approach.**
Note, however, that we did not hyperparameter-tune these results, as we used the optimised hyperparameters found above, and just swapped the score function.
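For readers less familiar with the score function being ablated here, a minimal sketch of learnability-based level selection; we assume the common formulation in which a level with success probability $p$ scores $p(1-p)$, maximised at $p = 0.5$, which may differ in detail from the paper's exact implementation:

```python
import numpy as np

def learnability(success_rate):
    """Learnability score p * (1 - p): highest for levels solved ~50% of
    the time, zero for levels that are always solved or never solved."""
    p = np.asarray(success_rate, dtype=float)
    return p * (1.0 - p)

def top_k_learnable(success_rates, k):
    """Indices of the k levels with the highest learnability score."""
    scores = learnability(success_rates)
    return np.argsort(-scores)[:k]

rates = np.array([0.0, 0.2, 0.5, 0.9, 1.0])
print(top_k_learnable(rates, 2))  # → [2 1]  (p=0.5 scores 0.25, p=0.2 scores 0.16)
```

Levels with the top scores would then fill the training buffer, which is the sampling mechanism the ablation above swaps between SFL and PLR/ACCEL.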
[3] Jiang, et al. "Replay-guided adversarial environment design."
# Poor explanations and presentation
Thank you for pointing out these problems, we appreciate your thoroughness. We have rectified them in our updated manuscript. We have also defined learnability earlier on, making the paper clearer.
# Questions
> Using our heuristic with PLR?
See above and Figure 3 in the attached 1-page PDF. We find that SFL + Learnability outperforms the other combinations.
> A discrepancy between UED's performance in this paper and [2]?
See above, in "Hyperparameters and UED underperformance".
> Are the 10,000 solvable levels unseen?
The levels used during evaluation are sampled from our environment generator separately from the training process. As the space of possible levels is very large, generating the same random level twice is unlikely.
> Is the method restricted to Jax-based settings?
Our implementation is in JAX but the method is general. However, as raised in our limitations section, one must take the cost of SFL's additional environment rollouts into account when considering implementing our algorithm; we chose JAX because its speed and parallelisation significantly alleviates this constraint.
# Conclusion
We hope that the reviewer feels we have addressed their questions and welcome any further discussion. We also ask that, if all their concerns are met, the reviewer consider revising their score to recommend accepting the paper.
---
Rebuttal Comment 1.1:
Comment: I’d like to thank the authors for their very thorough response and comprehensive additional experiments, as well as apologize for not responding sooner.
Thank you for performing a more thorough hyperparameter sweep, and for pointing out the difference in minigrid block budgets from prior work. After looking through JaxUED and Robust PLR again, I agree with the authors that UED is comparable to DR in the 50 and 60 block setting. That being said, I think the authors should also experiment in the more challenging 25 block setting where UED methods are most effective. Regardless, these new results provide convincing evidence that the proposed method is better than the SOTA in at least some settings.
Thank you also for running the requested ablations so quickly. The results seem inconsistent across hyperparameters and environments, but it does appear that SFL is generally better, and both the sampling and prioritization metrics positively impact performance.
My main concern was that there seemed to be strong evidence that the evaluations in the paper were not fair. With these extensive new results and changes, I believe the contribution of this work is convincing, and I hope the authors will incorporate these new results in their paper as promised. I will raise my score to a 7. | Rebuttal 1:
Rebuttal: Dear reviewers, we appreciate your detailed reviews and concrete suggestions for improvement. We are especially grateful for reviewers mentioning that our paper is "well-written", "technically sound", and "includes many contributions", including an algorithm that is "novel, simple, and easy to understand." We are pleased that reviewers recognise the "simulation support" for our claims and that all "contributions are open-sourced and well documented in the appendix to support reproducibility".
We have implemented your suggested improvements, and run additional experiments, which **confirm our findings that SFL outperforms current UED methods on several domains.**
In our global response, we highlight the main changes we made, with updated and additional results presented in the 1-page PDF of figures.
# New Environment - XLand-Minigrid
*Requested by Reviewer ycoo.*
We have added an additional environment to our experiments by running DR, SFL and PLR on Xland-Minigrid's meta-RL task [1]. See below for details.
**Results**
We report performance using our CVaR evaluation procedure and, in line with [1], as the mean return on an evaluation set during training. Our results are presented in Figures 1(d) and 4(d) of the attached PDF. **SFL outperforms both PLR and DR.** Results are averaged over 5 seeds and during evaluation each ruleset was rolled out for 10 episodes. Training runs were conducted on one L40S and due to the large number of levels being rolled out to fill SFL's buffer, SFL was slower than DR and PLR. As such, we report results for SFL compute-time matched to PLR. Not only does SFL outperform PLR for their respective best set of hyperparameters but it is also much more robust to hyperparameter choice, with only one configuration of PLR's hyperparameters being competitive compared to the large majority of SFL's. More details on this experiment are in our updated manuscript.
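As a point of reference for the CVaR evaluation procedure mentioned above, a minimal sketch assuming CVaR is computed as the mean return over the worst $\alpha$-fraction of evaluation levels (a common convention in UED evaluation; the paper's exact protocol may differ):

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """CVaR_alpha: mean return over the worst alpha-fraction of levels."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(r))))  # number of worst levels kept
    return float(r[:k].mean())

rets = [1.0, 0.0, 0.5, 0.2, 0.9, 0.8, 0.1, 0.3, 0.6, 0.7]
print(cvar(rets, alpha=0.2))  # mean of the two worst returns: 0.05
```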
**Environment Overview**
This domain combines an XLand-inspired system of extensible rules and goals with a Minigrid-inspired goal-oriented grid world to create a domain with a diverse distribution of tasks. Each task is specified by a ruleset, which combines rules for environment interactions with a goal, and [1] provide a database of presampled rulesets for use during training. Following [1], we use a 13x13 grid with 4 rooms and sample rulesets from their high diversity benchmark with 3 million unique tasks. As training involves sampling from a database of precomputed rulesets, ACCEL is not applicable. PLR and SFL select rulesets for each meta-RL step to maximise return on a held-out set of evaluation rulesets.
**Hyperparameter tuning**
All methods used the default PPO hyperparameters given by [1]. For both PLR and SFL, we performed a grid search using similar compute budgets. Due to space constraints, we cannot provide details here but do so in our updated manuscript.
[1] Nikulin, et al. "XLand-minigrid: Scalable meta-reinforcement learning environments in JAX."
# Extensive Hyperparameter Tuning
*Suggested by Reviewers Su34 and ycoo.*
To ensure competitive results, we have performed extensive hyperparameter tuning for each baseline. Updated results are shown in Figure 1 of the 1-page PDF. **We find that SFL still significantly outperforms existing UED methods.**
For PPO we conducted an extensive sweep for the JaxNav environment ensuring robust DR performance. For Minigrid, our JaxNav PPO parameters performed similarly to those given in the JaxUED implementation but allowed us to use 256 environment rollouts in parallel during training compared to JaxUED's 32. We kept these base PPO parameters fixed for all methods.
For both PLR and ACCEL, we swept hyperparameters for Minigrid and JaxNav's single-agent variation. For both methods, we performed a grid search over the replay probability $p \in \{0.5, 0.8\}$, level buffer size $K \in \{1000, 4000, 8000\}$, choice of scoring function in $\{ \text{PVL}, \text{MaxMC}\}$, and prioritisation function in $\{ \text{rank}, \text{TopK}\}$. For rank prioritisation, we searched over the temperature in $\{0.3, 1.0\}$, while for TopK we searched over $k \in \{1, 32, 128\}$. For ACCEL, we additionally searched over the number of edits in $\{5, 20, 50\}$. Each set of hyperparameters tested was run over 3 seeds. For multi-agent JaxNav, we used the best-performing hyperparameters from single-agent JaxNav.
For SFL, we started with the default parameters listed in the main text and conducted an independent line search over the parameters: batch size $N \in \{ 500, 5000, 25000\}$, rollout length $L \in \{1000, 2000, 4000 \}$, number of levels to save $K \in \{100, 1000, 5000 \}$, buffer update period $T \in \{10, 50, 500, 1000, 2000 \}$ and sampled environments ratio $\rho \in \{0.25, 0.5, 0.75, 1.0\}$.
Since this is a line search and not a grid search, the total number of tuning runs (and total compute) is significantly less than for PLR and ACCEL (20 runs for SFL vs 90 for PLR & 270 for ACCEL).
# Additional Baselines
*Suggested by Reviewer Su34.*
We have added two new baselines (see Figure 3 in the 1-page PDF):
1. Positive Value Loss (PVL) as a score function for SFL.
2. Learnability as a score function for PLR and ACCEL.
**SFL+Learnability still outperforms both of these variations.**
# Ablations and Effects of SFL Hyperparameters
*Suggested by Reviewer ycoo.*
In Figure 2 of the 1-page PDF, we have investigated the effects of changing hyperparameter values on SFL. We also experimented with different variations of learnability, where the highest learnability score corresponds to success probabilities other than 0.5, for instance, where solving a level 20% of the time is defined as "optimal" learnability (Figure 4(c)).
Overall, SFL's performance improves when we sample more levels, and it is relatively robust to all other hyperparameters. There is a detrimental effect if we sample the set of learnable levels too infrequently (see the "Buffer Update Period" subfigure).
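One simple way to realise a learnability variant whose peak sits away from $p = 0.5$, as described above, is a beta-shaped score $p^{a}(1-p)^{b}$ with maximum at $a/(a+b)$. This is an illustrative assumption, not necessarily the paper's exact variant:

```python
import numpy as np

def shifted_learnability(p, peak=0.5):
    """Beta-shaped score p**a * (1-p)**b whose maximum sits at `peak`.
    peak = a / (a + b); we fix a + b = 5 for concreteness."""
    a = 5.0 * peak
    b = 5.0 - a
    return p**a * (1.0 - p)**b

# Verify the score peaks at a 20% success rate, as in Figure 4(c)'s variant
ps = np.linspace(0.001, 0.999, 999)
scores = shifted_learnability(ps, peak=0.2)
print(ps[np.argmax(scores)])  # ≈ 0.2
```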
Pdf: /pdf/b7c7e136d162b0cf97bc2a20998b097f139ac4f1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$C^2M^3$: Cycle-Consistent Multi-Model Merging | Accept (poster) | Summary: The paper proposes a new, cycle-consistent method of merging more than two neural networks in weight space by simultaneously solving multiple permutation-based merge process. The key innovation is to ensure cycle consistency when merging $n>2$ models. The authors showed that the method, together with techniques like REPAIR, can lead to substantial benefits across various architectures and datasets.
Strengths: 1. The paper proposes a novel alignment algorithm that, by construction, ensures cycle consistency for merging multiple models. The authors specifically showed the benefit of merging models in a universe space $u$ as a bridge for model permutations. I personally found the idea of a universe space neat and sensible, and this method could be a nice advance in the field of model interpolation & merging.
2. Relatively extensive comparison with existing best methods and across different datasets and settings (including some analysis on the loss landscape, etc.)
3. The paper demonstrated compatibility of the method to adopt techniques like REPAIR which helps maintain good performance (i.e., practical enhancement over other prior methods)
Weaknesses: 1. While the concept of a universe weight space is nice, the presumption of neuron permutations and the related theoretical foundations are still based strongly on prior work.
2. I found that the paper fails to analyze the pros and cons of the simultaneous global optimization in the main paper (though there is some analysis in the appendix). Specifically, the Frank-Wolfe process involves an $O(n^2 L)$ loop, which would be very costly compared to the more "local" approach by Ainsworth et al.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The major concern I have is related to the global optimization process itself for model merging. How "easy" is this optimization process and why conditional GD? How does the optimization scale with wider/deeper networks? I suspect that the empirical observation in Fig. 12(a) in Appendix A.5 exactly suggests the instability in the optimization algorithm.
2. The authors reported the walk-clock time efficiency for merging 2 models in Appendix A.5. How slow is the method for merging $n>2$ models (e.g., say 5, 10), especially compared to setting one model as the "universe space" and perform $n-1$ pairwise optimizations?
3. How exactly is the merging factor $\alpha$ determined when trying to use REPAIR for multiple models in Table 1?
4. While the transformation to/from the universe weight space is invertible by design, in practice, depending on how well the optimization problem is solved, the actual $P^{AC} \circ P^{CB} \circ P^{BA}$ could in theory still be far from the identity matrix. Maybe I missed something, so it'd be great if the authors could clarify.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and spot-on questions. We now proceed to address the raised concerns and answer the questions to the best of our capabilities.
- **W1: model merging foundations based on prior work.** We agree with the reviewer that our work builds on top of existing work and partially leverages the same foundations. This is true, however, for every permutation-based model merging work, e.g. [1, 2, 3, 4, 5, 6], and we believe that sharing a set of common assumptions is inevitable in a subfield. We would also like to remark that **none of the existing works provably guarantees cycle consistency**, with the problem itself being overlooked. Therefore, while we acknowledge that this research owes its existence to prior work that laid the foundations of the field, we argue that our work takes the field a step further and, being theoretically grounded, has the potential to itself become a foundation for future research.
- **W2: pros and cons of global vs local optimization.** We thank the reviewer for bringing this up. The answer is twofold: i) **global optimization is required to enforce cycle consistency**, as it is an inherently global property; ii) **global optimization removes the arbitrariness in the layer iteration order**, which, as we report in Figure 4 and Table 6, results in a marked variance in the results.
More in detail, it is not clear how cycle consistency, a global property of the model-level transformation, could be achieved using layer-local optimization. For this reason, we choose global optimization not only for the deterministic nature of its results but also because it is the only way to achieve our main and foremost objective: having cycle-consistent permutations between models. Agreeing with the reviewer about the importance of these considerations, we intend to use the extra page in the camera-ready version to clarify them.
- **Q1: why conditional gradient descent.** We thank the reviewer for the question. Conditional gradient descent (Frank-Wolfe) is a particularly suitable algorithm for the problem we are trying to solve, as it **only requires computing the gradient**, for which we derived a closed form, and enjoys two desirable properties: i) **monotonicity of the objective function** (see also Figure 11 in the appendix); ii) **guaranteed convergence rates**, for which we added a proof in the appendix. Regarding the difficulty of the process, the monotonicity of the loss makes it arguably stable, even if the step size decreases in an alternating manner. As briefly explained in Appendix A.5, we believe this behavior reflects a fixed pattern in the optimization rather than an instability in the overall process. Regarding the scalability of the approach to larger networks, we report here its wall-clock time when merging n=2, 3 ResNet20 models having 1x, 2x, 4x, 8x, and 16x width, together with their number of parameters.
| | 1x | 2x | 4x | 8x | 16x |
| --- | --- | --- | --- | --- | --- |
| # params | 292k | 1.166m | 4.655m | 18.600m | 74.360m |
| C2M3 time n=2 | 33.4s | 33.5s | 40.5s | 80.8s | 367.8s |
| C2M3 time n=3 | 32.9s | 83.18s | 91.0s | 162.0s | 715.8s |
| MergeMany time n=2 | 0.24s | 0.4s | 3.4s | 8.9s | 59.4s |
| MergeMany time n=3 | 1.2s | 4.1s | 19.5s | 105.8s | 892.3s |
As can be inferred from the table, the scaling laws depend on the complexity of the resulting matching problem and cannot be predicted merely from the number of parameters, with a 4-fold increase in parameters resulting in no increase in runtime for the first three columns, a double increase in the second-last column and a 5-fold increase in the last. Compared to MergeMany, our approach enjoys a milder increase in running time when increasing the number of parameters. **We also included a rigorous proof determining the convergence rate of the algorithm in the rebuttal document**.
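To make the optimization procedure concrete, here is a generic Frank-Wolfe sketch over the Birkhoff polytope, where the linear minimisation oracle is a linear assignment problem solved with `scipy.optimize.linear_sum_assignment`. The toy objective $\langle P, W_B W_A^\top \rangle$ and all names are illustrative assumptions; the actual C2M3 objective couples permutations across all layers and models:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frank_wolfe_matching(grad_fn, n, steps=50):
    """Frank-Wolfe over doubly-stochastic matrices. grad_fn(P) returns the
    gradient of the objective to *maximise*; the linear oracle over
    permutations is a linear assignment problem."""
    P = np.full((n, n), 1.0 / n)              # barycentre initialisation
    for t in range(steps):
        G = grad_fn(P)
        r, c = linear_sum_assignment(-G)      # argmax_S <S, G> over permutations
        S = np.zeros((n, n)); S[r, c] = 1.0
        gamma = 2.0 / (t + 2.0)               # standard FW step size
        P = (1.0 - gamma) * P + gamma * S
    r, c = linear_sum_assignment(-P)          # project final iterate to a permutation
    P_out = np.zeros((n, n)); P_out[r, c] = 1.0
    return P_out

# Toy check: recover the permutation relating two copies of the same weights
rng = np.random.default_rng(0)
n, d = 8, 200
W_a = rng.standard_normal((n, d))
Pi = np.eye(n)[rng.permutation(n)]
W_b = Pi @ W_a                                # B is a row-permuted copy of A
P_hat = frank_wolfe_matching(lambda P: W_b @ W_a.T, n)
print(np.allclose(P_hat, Pi))  # True
```

Because the feasible set's vertices are exactly the permutation matrices, each iterate stays doubly stochastic and only the gradient is needed per step, matching the properties discussed above.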
- **Q2: Efficiency of the method for n>2.** We thank the reviewer for the question. To answer it, we computed the matching and merging time when merging $n=2, \dots, 10$ ResNet20 models with 4x width, as can be seen in Figure 1 of the rebuttal document. Compared to the pairwise baseline that maps all the models to a fixed one, our approach incurs a significantly steeper cost. However, Figure 2 (rebuttal document) shows that the latter also suffers from a performance decrease when increasing $n$ which is much more pronounced, making it not advisable for the task.
- **Q3: Merging factor alpha.** We thank the reviewer for pointing out this missing detail. In all our experiments, we have used $\alpha = \frac{1}{n}$, where $n$ is the number of models to merge. This is indeed usually the hardest case, as it defines a point that is the furthest from the individual basins.
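The uniform merge with $\alpha = \frac{1}{n}$ can be sketched as follows (assuming the weights have already been aligned; `merge` is a hypothetical helper name):

```python
import numpy as np

def merge(aligned_weights):
    """Merge n aligned weight tensors with equal factor alpha = 1/n."""
    return np.stack(aligned_weights).mean(axis=0)

w1, w2, w3 = np.full((2, 2), 0.0), np.full((2, 2), 3.0), np.full((2, 2), 6.0)
print(merge([w1, w2, w3]))  # every entry is 3.0, the centroid of the three models
```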
- **Q4: cycle consistency.** We thank the reviewer for allowing us to clarify this aspect. Luckily, cycle consistency, i.e., the property for which applying a cycle of such transformations leads back to the starting point, holds in this case as long as each transformation is orthogonal. Looking at Figure 1 in the manuscript, it can be appreciated how mapping $A$ to another model $B$ means always passing through $U$ and then to $B$ with $P_B$, and that mapping $B$ to another model $C$ means passing through $U$ with $P_B^T$. This means that, as long as $P_B P_B^T = I$, the effect of mapping $A$ to $B$ is nullified. Since the proposed algorithm returns permutations (a subspace of orthogonal transformations), cycle consistency always holds for the resulting permutations.
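The argument above can be checked numerically: factorising every pairwise map through the universe $U$ (with $P_m$ mapping model $m$ into $U$, so the map from $A$ to $B$ is $P_B^\top P_A$ in this notational sketch) makes any cycle collapse exactly to the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_perm(n):
    """A random n x n permutation matrix."""
    return np.eye(n)[rng.permutation(n)]

n = 6
# P_m maps model m's neurons into the shared universe space U
P_A, P_B, P_C = random_perm(n), random_perm(n), random_perm(n)

# Pairwise maps are factorised through U: map(A -> B) = P_B.T @ P_A
map_AB = P_B.T @ P_A
map_BC = P_C.T @ P_B
map_CA = P_A.T @ P_C

# Cycle consistency: A -> B -> C -> A is exactly the identity,
# since P_B @ P_B.T = I cancels each intermediate hop
cycle = map_CA @ map_BC @ map_AB
print(np.allclose(cycle, np.eye(n)))  # True
```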
---
Rebuttal 2:
Comment: Thanking again the reviewer for their effort, we remain available for any further clarification.
### References
[1] Ainsworth, Samuel, Jonathan Hayase, and Siddhartha Srinivasa. 2022. “Git Re-Basin: Merging Models modulo Permutation Symmetries.” In *The Eleventh International Conference on Learning Representations (ICLR), 2022*
[2] Jordan, Keller, Hanie Sedghi, Olga Saukh, Rahim Entezari, and Behnam Neyshabur. 2023. “REPAIR: REnormalizing Permuted Activations for Interpolation Repair.” In *The Eleventh International Conference on Learning Representations (ICLR), 2023*
[3] Navon, Aviv, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, and Haggai Maron. 2023. “Equivariant Deep Weight Space Alignment.” in *The Forty-first International Conference on Machine Learning (ICML), 2024*
[4] Peña, Fidel A. Guerrero, Heitor Rapela Medeiros, Thomas Dubail, Masih Aminbeidokhti, Eric Granger, and Marco Pedersoli. “Re-Basin via Implicit Sinkhorn Differentiation.”, IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023
[5] Singh, Sidak Pal, and Martin Jaggi. 2020. “Model Fusion via Optimal Transport.” In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS), 2020*
[6] Horoi, Stefan, Albert Manuel Orozco Camacho, Eugene Belilovsky, and Guy Wolf. 2024. “Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis.”, in *The Forty-first International Conference on Machine Learning (ICML), 2024* | Summary: This paper further put forward a kind of cycle consistent Multi-Model Merging to merge models after permute simultaneously. It addresses the limitations in previous approaches that only handled pairwise merging. It uses a "universe" space to factorize permutations between models, optimizing all layer permutations simultaneously using the Frank-Wolfe algorithm.
Strengths: - Deterministic result: independent of the random choice of layers
- The "universe" space to factorize permutations between models solves issues with previous methods that could accumulate errors when applying cyclic permutations.
- The paper has sufficient analysis and discussion of model width, the number of merged models, and linear mode connectivity in the universal basin.
- The algorithm to factorize is data-free (based on the Frank-Wolfe algorithm)
Weaknesses: - Sensitive to hyper-parameters
- Lack of theoretical guarantees (I personally think there is not much that researchers can do in this model-merging sub-domain, especially for large networks)
- Experiments are done on small datasets and small networks. Experiments on larger networks and datasets would strengthen the paper.
- What is the convergence speed given a different number of parameters? What about comparing to other algorithms such as MergeMany?
Technical Quality: 4
Clarity: 3
Questions for Authors: - You mentioned: "the resulting models that we obtain are sensible to a wide variety of factors, from training hyperparameters to the optimization algorithm used" in the paper. Could you please elaborate more on this?
- What is your convergence speed compared to Git Re-Basin (MergeMany)? It would be great to see a Pareto front showing, for a given set of models, the trade-off between computation time and accuracy, and which algorithm occupies more Pareto-optimal solutions.
- Is it possible to do some experiments on larger CLIP-based/LLM networks (like Phi-3, which only has 3B parameters)? There are a bunch of CLIP-fine-tuned networks with the same architecture, as well as LLMs fine-tuned from Phi-3, etc. The experiments are done only on EMNIST, CIFAR10, and CIFAR100, and the largest networks used are ResNet-16/VGG-16, which are a bit out of date nowadays. (I know Git Re-Basin was working on VGG-16 and ResNet20, but still...)
- In figure 1.b, can you also calculate the cosine similarity? After all, L2 distance does not make much sense in high dimension space.
- Eq. 3 is confusing: I am not sure whether the author wants to sum $p$ from 1 to $n$ with $p \neq q$, or to sum over $(p, q)$ pairs $(1, 1), (1, 2), \ldots, (n-1, n)$ with $p \neq q$. I am asking this because the author mentioned "In order to generalize to n models, we jointly consider all **pairwise** problems", but Equation 3 does not show the "pairwise".
- What is the meaning of color in figure 3?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, the author have mentioned the limitations well in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and spot-on questions. We will now do our best to address each and every critique and question.
**Sensitivity to hyperparameters**
We thank the reviewer for raising this point. We would like, in fact, to clarify this aspect: **when, in the limitations, we refer to merging methods being sensitive to hyperparameters, we do not refer to this particular approach**, but to the whole linear mode connectivity phenomenon. Different works have, in fact, observed this in practice: among these, Ainsworth et al. [1], when listing the known failure modes of their approaches, mention models trained with SGD and too low a learning rate, or Adam coupled with too high a learning rate. Jordan et al. [2] show that the chosen normalization layer strongly affects the accuracy of the resulting merged model, while Qu et al. [3] observe learning rate, weight decay, and initialization method to play a strong role as well. We therefore argue that sensitivity to hyperparameters is not a peculiarity of our approach, but something we observed throughout all the approaches we used and that has been confirmed in the existing literature. We added this clarification to the main manuscript.
**Lack of theoretical guarantees**
We thank the reviewer for the comment. As explicitly stated in the limitations of the paper, we share with the reviewer an overarching wish for an improved theoretical ground for model merging and linear mode connectivity. This is mostly due to the whole permutation-based model merging field building upon a conjecture that cannot be proven or disproven due to the huge number of possible neuron permutations. We would like, however, to remark that **this is not something that has to do with our research in particular but with the field in general** and that, differently from several works in the field, we provide a **principled approach that indeed holds some guarantees**. While this may be just a brick on a bumpy road, this brick is stable and allows further work to build upon it.
**Small datasets and networks**
We thank the reviewer for the suggestion.
| Paper | Conference | Datasets | Architectures |
| --- | --- | --- | --- |
| Git Re-Basin [1] | ICLR22 | MNIST, CIFAR10, CIFAR100, ImageNet | MLP, VGG, ResNet |
| REPAIR [2] | ICLR23 | MNIST, CIFAR10, CIFAR100, ImageNet | MLP, VGG, ResNet |
| Deep Weight Space Alignment [3] | ICML24 | MNIST, CIFAR10, STL10 | MLP, VGG, CNN |
| Re-Basin via Implicit Sinkhorn [4] | CVPR23 | ad hoc polynomial regression dataset, CIFAR10 | MLP, VGG |
| Model fusion via optimal transport [5] | NeurIPS20 | MNIST, CIFAR10 | MLP, VGG, ResNet |
| CCA-Merge [6] | ICML24 | CIFAR10, CIFAR100 | VGG, ResNet |
As can be seen in the table, we are using the **most established set of architectures and datasets considered in all the previous and concurrent literature** in the field. This choice stems from two motivations: 1) for the **sake of comparison** and continuity with respect to previous works; 2) the **complexity of the architecture adds additional challenges** that must be taken into account and requires additional research that is not immediately relevant to the merging approach. In particular, the architecture suggested by the reviewer is a transformer-based architecture and requires ad hoc mechanisms to handle multi-head attention, positional embeddings, and the vast number of residual connections, in fact motivating stand-alone works to do this [4, 5]. We agree, however, with the reviewer that we are lacking a very large-scale case and, therefore, are also training ResNet50 endpoints on ImageNet. We aim to include these results as soon as they are available. Given our limited computational budget, these may be ready before the end of the discussion phase or for the camera-ready.
**Convergence speed**
We thank the reviewer for the question. We report here the wall-clock time when merging n=2,3 ResNet20 models having 1x, 2x, 4x, 8x, and 16x width, together with their number of parameters.
| | 1x | 2x | 4x | 8x | 16x |
| --- | --- | --- | --- | --- | --- |
| # params | 292k | 1.166m | 4.655m | 18.600m | 74.360m |
| C2M3 time n=2 | 33.4s | 33.5s | 40.5s | 80.8s | 367.8s |
| C2M3 time n=3 | 32.9s | 83.18s | 91.0s | 162.0s | 715.8s |
| MergeMany time n=2 | 0.24s | 0.4s | 3.4s | 8.9s | 59.4s |
| MergeMany time n=3 | 1.2s | 4.1s | 19.5s | 105.8s | 892.3s |
As can be inferred from the table, the scaling laws depend on the complexity of the resulting matching problem and cannot be predicted merely from the number of parameters, with a 4-fold increase in parameters resulting in no increase in runtime for the first three columns, a double increase in the second-last column and a 5-fold increase in the last. Compared to MergeMany, our approach enjoys a milder increase in running time when increasing the number of parameters. **We also included a rigorous proof determining the convergence rate of the algorithm in the rebuttal document**.
**Cosine similarity of the models**
We thank the reviewer for the suggestion. We report here the cosine similarity between a model and the model obtained by cyclically applying the permutations obtained with git re-basin and C2M3 respectively.
| | (a, b, c, a) | (b, c, a, b) | (c, b, a, c) | (a, c, b, a) |
| --- | --- | --- | --- | --- |
| git re-basin | 0.251 | 0.251 | 0.251 | 0.251 |
| C^2M^3 | 1 | 1 | 1 | 1 |
The result is analogous to that observed in the manuscript with the L2 distance, showing the lack of error accumulation in the proposed approach.
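For completeness, a minimal sketch of how such a similarity can be computed over flattened parameter tensors (illustrative; `cosine_similarity` is a hypothetical helper, and a cycle-consistent permutation yields similarity exactly 1):

```python
import numpy as np

def cosine_similarity(theta_a, theta_b):
    """Cosine similarity between two flattened parameter vectors."""
    a, b = np.ravel(theta_a), np.ravel(theta_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 16))

# A cycle of exact permutations composes to the identity: similarity is 1
P = np.eye(16)[rng.permutation(16)]
W_cycled = P.T @ (P @ W)
print(cosine_similarity(W, W_cycled))  # 1.0
```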
**Ambiguity in equation 3**
We thank the reviewer for pointing out this ambiguity. We confirm the second reading: the equation sums over all pairs of models `(1, 2), (1, 3), ..., (n-1, n)`. For the sake of clarity, we replaced the equation in the manuscript with a triple sum: for each model `p = 1, ..., n-1`, for each model `q = p+1, ..., n`, and each layer `l = 1, ..., L`.
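For concreteness, the disambiguated form reads as follows, where $c_l(P_p, P_q)$ stands in schematically for the layer-$l$ alignment cost between models $p$ and $q$ (the actual cost term is the one in the paper's Equation 3):

$$\sum_{p=1}^{n-1} \; \sum_{q=p+1}^{n} \; \sum_{l=1}^{L} c_l\left(P_p, P_q\right)$$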
(TBC)
---
Rebuttal 2:
Title: Rebuttal by Authors (2)
Comment: **Meaning of color in Figure 3**
The color in Figure 3 represents the value of the loss in the loss landscape given by interpolations of the models. Red values indicate low-loss regions (basins), while blue values indicate high-loss regions. We added this information in the caption.
Thanking again the reviewer for their effort, we remain available for any further clarification.
**References**
[1] Ainsworth, Samuel, Jonathan Hayase, and Siddhartha Srinivasa. 2022. “Git Re-Basin: Merging Models modulo Permutation Symmetries.” In *The Eleventh International Conference on Learning Representations (ICLR), 2022*
[2] Jordan, Keller, Hanie Sedghi, Olga Saukh, Rahim Entezari, and Behnam Neyshabur. 2023. “REPAIR: REnormalizing Permuted Activations for Interpolation Repair.” In *The Eleventh International Conference on Learning Representations (ICLR), 2023*
[3] Qu, Xingyu, and Samuel Horvath. "Rethink Model Re-Basin and the Linear Mode Connectivity." *arXiv preprint arXiv:2402.05966* (2024).
[4] Imfeld, Moritz, et al. "Transformer fusion with optimal transport." In *The Twelfth International Conference on Learning Representations (ICLR), 2024*
[5] Verma, Neha, and Maha Elbayad. "Merging text transformer models from different initializations." *arXiv preprint arXiv:2403.00986* (2024).
---
Rebuttal Comment 2.1:
Comment: Thanks for addressing my questions. I am happy to increase the score.
---
Reply to Comment 2.1.1:
Comment: We are glad to hear that the questions have been addressed. We thank the reviewer for their effort and consideration. | Summary: The paper introduces Cycle-Consistent Multi-Model Merging for merging neural networks by optimizing neuron permutations globally across all layers, ensuring cycle consistency when merging multiple models. Utilizing the Frank-Wolfe algorithm, this approach addresses inter-layer dependencies and guarantees that cyclic permutations result in the identity map. The method is generalized to handle more than two models by mapping each to a common universe space and thus enhancing alignment robustness.
Strengths: - The paper introduces a method for merging neural networks by ensuring cycle consistency through global optimization of neuron permutations, addressing a limitation in existing pairwise approaches
- The use of the Frank-Wolfe algorithm for simultaneous layer optimization and incorporation of activation renormalization demonstrates an innovation and advantages through later experiments
Weaknesses: - While the paper demonstrates the effectiveness of C2M3 across various (simple) architectures and (toy) datasets, it lacks a detailed analysis of the method's scalability to very large models or datasets. A more comprehensive evaluation of performance and computational requirements for larger-scale applications is desirable to understand its benefits
- Although the method shows promising results in experimental settings, the paper would be strengthened by including case studies or examples of real-world applications where C2M3 has been successfully implemented. This would help demonstrate the practical utility and robustness. Currently it is primarily focused on classification tasks.
- The paper lacks a deep theoretical analysis of the convergence properties and guarantees of the proposed method. Including theoretical insights or proofs regarding the convergence and stability of C2M3 would strengthen the work
- Other minor things:
- In abstract the number of models is denoted by $N$, while $n$ is used in the main text
- Fig 2 does not have "Figure 2" in the caption
- Definition 1 is quite ambiguous, stated as $A\approx B$. It is important to clarify the precise meaning of this approximation in a rigorous manner, especially in a formal definition.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the weakness part above
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their efforts in providing insightful comments and questions.
**Method’s scalability to large models or datasets.**
| Paper | Conference | Datasets | Architectures |
| --- | --- | --- | --- |
| Git Re-Basin [1] | ICLR22 | MNIST, CIFAR10, CIFAR100, ImageNet | MLP, VGG, ResNet |
| REPAIR [2] | ICLR23 | MNIST, CIFAR10, CIFAR100, ImageNet | MLP, VGG, ResNet |
| Deep Weight Space Alignment [3] | ICML24 | MNIST, CIFAR10, STL10 | MLP, VGG, CNN |
| Re-Basin via Implicit Sinkhorn [4] | CVPR23 | ad hoc polynomial regression dataset, CIFAR10 | MLP, VGG |
| Model fusion via optimal transport [5] | NeurIPS20 | MNIST, CIFAR10 | MLP, VGG, ResNet |
| CCA-Merge [6] | ICML24 | CIFAR10, CIFAR100 | VGG, ResNet |
As can be seen in the table, we are using the **most established set of architectures and datasets considered in all the previous and concurrent literature in the field**. This choice stems from two motivations: 1) for the **sake of comparison** and continuity with respect to previous works; 2) the **complexity of the architecture adds additional challenges** that must be taken into account and requires additional research that is not immediately relevant to the merging approach. In particular, transformer-based architectures require ad hoc mechanisms to handle multi-head attention, positional embeddings and the vast number of residual connections, in fact motivating stand-alone works to do this [8, 9]. We agree, however, with the reviewer that we are lacking a very large-scale case and, therefore, are also training ResNet50 endpoints on ImageNet. We aim to include these results as soon as they are available. Given our limited computational budget, these may be ready for the camera-ready.
**Real-world applications**. We thank the reviewer for the suggestion. Tackling the problem of model merging, we inherit all the applicative domains suggested in relevant prior works, such as federated learning [1, 3], incremental learning [4], and continual learning [10]. While its foundational nature, in our opinion, makes examples of real-world applications out of scope for our paper, we share the curiosity of the reviewer in assessing its effectiveness in such a scenario. Therefore, we are currently setting up a federated learning experiment whose results we hope to share during the discussion phase.
---
Rebuttal 2:
Comment: **Theoretical analysis: convergence properties and guarantees.**
Following previous literature on the Frank-Wolfe algorithm [7], we know that FW obtains a stationary point at a rate of $\mathcal{O}(1 / \sqrt{t})$ on non-convex objectives with a Lipschitz-continuous gradient. We now prove that the considered objective function
\begin{equation}
\sum_{p=1}^{n-1} \sum_{q=p+1}^{n} \sum_{\ell=1}^L \langle (P_{\ell}^p )^\top W_\ell^p P_{\ell -1}^p, (P_{\ell}^q)^\top W_{\ell}^q P^q_{\ell -1} \rangle
\end{equation}
has a Lipschitz-continuous gradient. We recall that, for each layer permutation $P^A = \{P_1^A, P_2^A, \ldots, P_{L}^A\}$ of model $A$, we can define the gradient of our objective function relative to the model $B$ we are matching towards:
\begin{equation}
f(P_l^A) = \nabla^{\text{rows},P_l^A} + \nabla^{\text{cols},P_l^A} + \nabla^{\text{rows},\leftrightarrows,P_l^A} + \nabla^{\text{cols},\leftrightarrows,P_l^A} = \left[W^A_l P_{l-1}^A (P^B_{l-1})^\top (W^B_{l})^\top + (W^A_{l+1})^\top P_{l+1}^A (P^B_{l+1})^\top W^B_{l+1}\right] P^B_{l} + \left[W^B_{l} P_{l-1}^B(P^A_{l-1})^\top (W^A_{l})^\top + (W^B_{l+1})^\top P_{l+1}^B (P^A_{l+1})^\top W^A_{l+1}\right] P^A_{l}
\end{equation}
To prove Lipschitz continuity, we need to show that there exists a constant $C$ such that, for all $p \in [1,n]$ and $l \in [1,L]$, $\lVert f(P_l^p) - f(Q_l^p) \rVert \leq C \lVert P_l^p - Q_l^p \rVert$. To simplify the exposition, we consider a fixed $l$ and perform a generic analysis. We begin by observing that
$$
f(P_l^p) - f(Q_l^p) =
\sum_{q\in[1,n]\setminus \{p\}}
\left[ W^p_{l} P_{l-1}^p (P^q_{l-1})^\top (W^q_{l})^\top \right. + \left. (W^p_{l+1})^\top P_{l+1}^p (P^q_{l+1})^\top W^q_{l+1}\right](P^q_{l} - Q^q_{l}) +
\left[ W^q_{l} P_{l-1}^q(P^p_{l-1})^\top (W^p_{l})^\top \right. + \left. (W^q_{l+1})^\top P_{l+1}^q (P^p_{l+1})^\top W^p_{l+1}\right] (P^p_{l}-Q^p_{l})
$$
The right-hand side above can be rewritten as the sum of two sums:
$$
\sum_{q\in[1,n]\setminus \{p\}} \left[W^p_{l} P_{l-1}^p (P^q_{l-1})^\top (W^q_{l})^\top \right. + \left.(W^p_{l+1})^\top P_{l+1}^p(P^q_{l+1})^\top W^q_{l+1}\right] (P^q_{l} - Q^q_{l}) +
\sum_{q\in[1,n]\setminus \{p\}} \left[W^q_{l} P_{l-1}^q(P^p_{l-1})^\top (W^p_{l})^\top \right. + \left.(W^q_{l+1})^\top P_{l+1}^q (P^p_{l+1})^\top W^p_{l+1}\right] (P^p_{l}-Q^p_{l})
$$
Since the first sum depends on neither $P_l^p$ nor $Q_l^p$, we may take its norm to be $0$.
Then, we remove transposes (since $\lVert M \rVert = \lVert M^\top \rVert$) and apply the triangle
inequality and the sub-multiplicative property of matrix norms:
$$
\lVert f(P_l^p) - f(Q_l^p) \rVert \leq
\sum_{q\in[1,n]\setminus \{p\}} \lVert P^p_l-Q^p_{l}\rVert \left( \lVert W^q_l\rVert \lVert P_{l-1}^q\rVert \lVert P^p_{l-1}\rVert \lVert W^p_l\rVert \right. + \left. \lVert W^q_{l+1}\rVert \lVert P_{l+1}^q\rVert \lVert P^p_{l+1}\rVert \lVert W^p_{l+1}\rVert \right)
$$
Let
$$
C = \max_{q\in[1,n]\setminus \{p\}} \left( \lVert W^q_l \rVert \lVert P_{l-1}^q\rVert \lVert P_{l-1}^p\rVert \lVert W^p_l\rVert + \lVert W^q_{l+1}\rVert \lVert P_{l+1}^q\rVert \lVert P^p_{l+1}\rVert \lVert W^p_{l+1}\rVert \right).
$$
Then,
$$
\lVert f(P_l^p) - f(Q_l^p) \rVert \leq C \sum_{q\in [1,n]\setminus \{p\}} \lVert P_l^p - Q_l^p \rVert = C (n-1) \lVert P_l^p - Q_l^p \rVert.
$$
We conclude that $f(P_l^p)$ is Lipschitz continuous for all models and all layers, with Lipschitz constant $C(n-1)$ depending on both the norms of the weight matrices and the number of models.
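For $n = 2$ the bound can be sanity-checked numerically. The sketch below is an illustrative check, not part of our implementation; all layer widths, weights, and permutations are arbitrary random draws. It verifies $\lVert f(P_l^A) - f(Q_l^A) \rVert \leq C \lVert P_l^A - Q_l^A \rVert$, using that permutation matrices have spectral norm 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # layer width (illustrative)

def perm(rng, d):
    """Random d x d permutation matrix."""
    return np.eye(d)[rng.permutation(d)]

# Weights of models A (p) and B (q) at layers l and l+1 (arbitrary draws).
WAl, WAl1 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
WBl, WBl1 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
# Fixed neighbouring-layer permutations.
PAlm1, PAl1 = perm(rng, d), perm(rng, d)
PBlm1, PBl, PBl1 = perm(rng, d), perm(rng, d), perm(rng, d)

# Coefficient matrix multiplying P_l^A in the gradient (second bracket above).
M = WBl @ PBlm1 @ PAlm1.T @ WAl.T + WBl1.T @ PBl1 @ PAl1.T @ WAl1

def grad(PAl):
    """Gradient w.r.t. P_l^A; the first bracket does not depend on P_l^A."""
    const = (WAl @ PAlm1 @ PBlm1.T @ WBl.T + WAl1.T @ PAl1 @ PBl1.T @ WBl1) @ PBl
    return const + M @ PAl

# Permutation matrices have spectral norm 1, so C reduces to products of
# weight-matrix spectral norms.
C = (np.linalg.norm(WBl, 2) * np.linalg.norm(WAl, 2)
     + np.linalg.norm(WBl1, 2) * np.linalg.norm(WAl1, 2))

# Check ||f(P) - f(Q)|| <= C ||P - Q|| on random permutation pairs.
for _ in range(100):
    P, Q = perm(rng, d), perm(rng, d)
    lhs = np.linalg.norm(grad(P) - grad(Q), 2)
    assert lhs <= C * np.linalg.norm(P - Q, 2) + 1e-9
```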
---
Rebuttal 3:
Comment: **Inconsistencies:**
We thank the reviewer for accurately spotting the naming inconsistency, we have promptly fixed the issue in the manuscript by making $n$ lowercase everywhere. Analogously, we added a figure-level caption to Figure 2 which incorrectly had only two subfigure-level captions. The caption states “*Existing methods accumulate error when cyclically mapping a model through a series of permutations, while $C^2M^3$ correctly maps the model back to the starting point.*”.
Regarding definition 2.1, we used the definition as presented in Git Re-Basin [1]. We believe, however, the reviewer’s comment to be spot on, as the current formulation may hinder clarity. We therefore replaced the $\mathcal{L}(\Theta_A) \approx \mathcal{L}(\Theta_B)$ assumption by instead asking that the two points correspond to weights of neural networks trained to convergence with SGD.
**References**
[1] Ainsworth, Samuel, Jonathan Hayase, and Siddhartha Srinivasa. 2022. “Git Re-Basin: Merging Models modulo Permutation Symmetries.” In *The Eleventh International Conference on Learning Representations (ICLR), 2023*
[2] Jordan, Keller, Hanie Sedghi, Olga Saukh, Rahim Entezari, and Behnam Neyshabur. 2023. “REPAIR: REnormalizing Permuted Activations for Interpolation Repair.” In *The Eleventh International Conference on Learning Representations (ICLR), 2023*
[3] Navon, Aviv, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, and Haggai Maron. 2024. “Equivariant Deep Weight Space Alignment.” In *The Forty-first International Conference on Machine Learning (ICML), 2024*
[4] Peña, Fidel A. Guerrero, Heitor Rapela Medeiros, Thomas Dubail, Masih Aminbeidokhti, Eric Granger, and Marco Pedersoli. 2023. “Re-Basin via Implicit Sinkhorn Differentiation.” In *IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023*
[5] Singh, Sidak Pal, and Martin Jaggi. 2020. “Model Fusion via Optimal Transport.” In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS), 2020*
[6] Horoi, Stefan, Albert Manuel Orozco Camacho, Eugene Belilovsky, and Guy Wolf. 2024. “Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis.”, in *The Forty-first International Conference on Machine Learning (ICML), 2024*
[7] Lacoste-Julien, S. (2016). Convergence Rate of Frank-Wolfe for Non-Convex Objectives. *ArXiv, abs/1607.00345*.
[8] Imfeld, Moritz, et al. "Transformer fusion with optimal transport." In *The Twelfth International Conference on Learning Representations (ICLR), 2024*
[9] Verma, Neha, and Maha Elbayad. "Merging text transformer models from different initializations." *arXiv preprint arXiv:2403.00986* (2024).
[10] Marczak, Daniel, et al. "MagMax: Leveraging Model Merging for Seamless Continual Learning." ECCV 2024
---
Rebuttal Comment 3.1:
Title: Additional results
Comment: As anticipated in the rebuttal, we have run our framework in a federated learning scenario. To this end, we have used the state-of-the-art federated learning library Flower and employed our matching scheme across a set of 10 clients on CIFAR10, each adopting a small CNN model. We observed the following:
1. When all the clients start from the same initialization, our approach has no benefit and falls back to standard averaging. In fact, the optimization process quickly returns identity matrices as permutations, suggesting the models already share the same basin.
2. When instead we initialize the clients from different random initializations, our approach visibly outperforms FedAVG.
We report here the accuracy for 50 aggregation rounds, with each client training for 20 local epochs. We report the results every 5 rounds for brevity.
| | 1 | 5 | 10 | 15 | 20| 25| 30| 35| 40| 45|
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --|
| FedAvg |0.0942 | 0.394 | 0.4972 | 0.5517 | 0.5699 | 0.5893 | 0.6018 |0.6063 | 0.6099 | 0.6136|
| C2M3 | 0.0941 | **0.4234** | **0.5193** | **0.5555** | **0.5783** | **0.5978** | **0.6077** | **0.6165** | **0.618** | **0.622** |
If we increase the number of local epochs, the benefits get more pronounced. This is in line with the intuition that standard averaging becomes less effective when clients drift due to prolonged local training and too infrequent aggregation.
| | 1 |2| 3| 4| 5| 6| 7| 8| 9| 10 |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --|
| FedAvg |0.0942 | 0.2638 | 0.3543 | 0.3825 | 0.4165 | 0.4505 | 0.4742 | 0.4994 | 0.5169 | 0.5317|
| C2M3 | **0.0947** | **0.3303** | **0.3899** | **0.4441** | **0.4764** | **0.4968** | **0.5184** | **0.5334** | **0.5434** | **0.5536**|
While these results are not sufficient to claim overall superiority of the approach for the task, due to the limited evaluation and choice of models, they show the approach to be promising for the problem and encourage further research.
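To illustrate why permutation alignment matters for aggregation (an illustrative sketch, not our C2M3 implementation): if one client's network is just a hidden-unit permutation of another's, naive coordinate-wise averaging scrambles the weights, while weight matching via a linear assignment recovers the permutation exactly and makes the average lossless. The single-hidden-layer architecture and all sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
d, h, o = 5, 8, 3  # input, hidden, output widths (illustrative)
W1a, b1a, W2a = rng.normal(size=(h, d)), rng.normal(size=h), rng.normal(size=(o, h))

# Model B: model A with its hidden units shuffled (functionally identical).
pi = np.roll(np.arange(h), 1)  # a fixed non-identity permutation
W1b, b1b, W2b = W1a[pi], b1a[pi], W2a[:, pi]

# Weight matching: assignment maximizing row-wise inner products.
S = W1a @ W1b.T                      # S[i, j] = <row i of A, row j of B>
_, col = linear_sum_assignment(-S)   # maximize total similarity

# Un-permute B's hidden units, then average.
W1m = 0.5 * (W1a + W1b[col])
b1m = 0.5 * (b1a + b1b[col])
W2m = 0.5 * (W2a + W2b[:, col])

# Aligned averaging recovers model A exactly...
assert np.allclose(W1m, W1a) and np.allclose(b1m, b1a) and np.allclose(W2m, W2a)
# ...while naive averaging does not.
assert not np.allclose(0.5 * (W1a + W1b), W1a)
```

Here the optimal assignment is provably the inverse of `pi`: for distinct rows, $\sum_i \langle a_i, a_{\tau(i)}\rangle$ is maximized only by the identity pairing, by Cauchy-Schwarz.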
Thanking the Reviewer for their suggestion, we kindly encourage them to revise their score if they consider their concerns to be addressed; otherwise, we remain available for any further clarification. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments and questions. We are happy to see the contributions of our work appreciated, with reviewers finding the work innovative, empirically advantageous, and addressing a limitation in existing approaches (djfE). The concept of a universe space was found to be neat and sensible, and a nice advancement to the field (mu2n). Moreover, we are happy to see the benefits of the approach acknowledged: it is deterministic and data-free, and it avoids the accumulation of error when permuting cyclically (TBfo). Finally, we are glad that the experiments and comparisons with the best existing methods were found to be extensive (mu2n) and the analysis and discussions to be sufficient (TBfo).
Having done our best to address all the raised concerns, we remain available for any further clarification or doubt.
Pdf: /pdf/f52d71cba3a144674fa020a558c2c81b1897e04c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
S-SOS: Stochastic Sum-Of-Squares for Parametric Polynomial Optimization | Accept (poster) | Summary: This paper is concerned with the following natural generalization of polynomial optimization:
Given a distribution $\nu(\omega)$ and a polynomial $f(x,\omega)$, find the tightest (in expectation) function $c(\omega)$ that lower bounds $f(x,\omega)$ everywhere.
Given the computational hardness of the problem, the authors focus on the sum-of-squares relaxation (of a given degree $s$).
For trigonometric polynomials, they show a $O(\frac{\log s}{s})$ convergence of the error of the relaxation w.r.t. the optimum (here we hide factors depending on the input in the $O$ notation). Under stronger assumptions on the optimal solution, they further achieve faster error convergence.
Finally, they complement these results with experimental evaluations (in which they apply certain SDP sparsification techniques to improve the scalability).
Strengths: I find the question investigated in this paper and the two main theorems (2.1 and 2.2) valuable and non-obvious.
The "Cluster basis hierarchy" is also a non-trivial original contribution.
Finally, I believe the experiments included in the paper will be of interest to the wider NeurIPS' audience.
Weaknesses: In my opinion, the main weakness of the paper lies in the fact that it fails to explain the *core* ideas behind its results. There is not a careful discussion of the theorems, their limitations and the insight required to prove them. This makes the paper easy to follow but also quite frustrating to read. I believe significant improvements can be made in this sense. Such improvements could greatly improve the quality of the manuscript.
The theoretical analysis result is limited to trigonometric polynomials. This is an important restriction, but a discussion addressing this topic, possible extensions and related obstacles is not present.
The cluster basis hierarchy (henceforth CBH) used to scale up the SDP is non-trivial and interesting. However it is never discussed in-depth and no theoretical result is provided. I understand that convergence results for CBH may be hard to prove, but a more comprehensive discussion seems needed.
Technical Quality: 3
Clarity: 2
Questions for Authors: What can the authors say for other classes of polynomials?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: I encourage the authors to further discuss the limitations of their theoretical results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer iKyk for their detailed comments.
We believe that the reviewer’s primary concerns are addressed by our global response:
- Limitations of trigonometric polynomial assumption and the possibility of extending to more general polynomial families
- Expanded cluster basis hierarchy discussion
- Small size of experiments
Regarding the core ideas behind our results and a more careful discussion of the theorems, limitations, and insight - we hope to improve and expand our discussion to make the journey easier and more fruitful.
Thank you again for your consideration. | Summary: The manuscript considers variants of stochastic-SOS hierarchy for parametric polynomial optimization (POP), which had been previously considered using similar (joint+marginal) methods (https://arxiv.org/abs/0905.2497). Unfortunately the (joint+marginal) methods did not work very well, perhaps because parametric POP is known to have a very complicated structure (https://doi.org/10.1287/moor.2021.0097). While the numerical experiments are not exactly conclusive, the present manuscript provides guarantees on the rate of convergence, which improve upon the joint+marginal method somewhat. The "cluster basis" variant of the hierarchy could be of independent interest.
Strengths: -- The convergence rate results are neat, although limited to 1-periodic trigonometric polynomials over compact sets.
-- The manuscript cites a number of interesting recent preprints (such as https://theses.hal.science/LAAS-POP/hal-04201167v1).
Weaknesses: -- The numerical experiments are poorly designed. The sensor network localization problem is known to be reducible (https://arxiv.org/abs/1002.0013) to trivial sizes. Even early papers (https://link.springer.com/article/10.1007/s10107-009-0338-x) considered n=10000 sensors. The 2010 paper tests on instances with n = 20000 to 100000 sensors, while the present authors consider instances on less than n=15 sensors. This is 4-5 orders of magnitude less than the state of the art.
-- It is not clear how the objective of the heuristically extracted solution (A.7.6) differs from the objective function of the relaxation.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you explain the application of the joint+marginal method as a starting point?
Could you comment on the rates of convergence of the joint+marginal method?
Could you demonstrate the behaviour of your S-SOS on the more complicated behaviours from https://doi.org/10.1287/moor.2021.0097? Cf. Definition 3.9 (Irregular accumulation point), but also Definition 3.7 (Discontinuous non-isolated multiple point) and Definition 3.6 (Discontinuous isolated multiple point).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The discussion of the limitations is misleading. It claims that all SDP-based algorithms fail to scale, while there clearly are (https://arxiv.org/abs/1002.0013) SDP-based algorithms that scale well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Dyr8 for their detailed comments. In particular, we appreciate all the external citations they flagged.
The reviewer highlights that the sensor network localization (SNL) problem can be reduced to trivial sizes citing work solving instances of 20k-100k sensors, while our numeric experiments only have up to 15 sensors. A few things need to be clarified on this front.
In the cited paper (Krislock and Wolkowicz 2010, https://arxiv.org/abs/1002.0013), the authors propose an algebraic reduction of the noiseless SNL problem so that they can analytically obtain the range of the PSD matrix. This simplifies the SDP dramatically; however, in the noisy setting (where observed sensor-sensor distances are perturbed with observation noise) this approach is unusable without significant modification. Krislock and Wolkowicz use this reduction to solve SNL problems of 10k-100k sensors, but they develop a highly specialized algorithm that does not use any SDP solvers, as per the abstract.
Noiseless SNL dramatically simplifies the problem. The intuition here is that in the noiseless setting, localizing even a small number of sensors near an anchor will propagate the correctly localized positions to sensors nearby. If one can find small groups of sensors that can be well-oriented w.r.t. each other, one can pursue localization of these groups in parallel and orient them globally at the very end. In the noisy setting, any error in localization can easily propagate to the whole instance, dramatically reducing the size of problems that can be effectively solved (c.f. https://link.springer.com/article/10.1007/s11276-007-0034-9, https://epubs.siam.org/doi/10.1137/100792366).
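The noiseless simplification can be made concrete with a minimal sketch (illustrative, with assumed anchor and sensor positions): with exact distances to three non-degenerate anchors, subtracting the squared-distance equations cancels the quadratic term, and localization reduces to a small linear solve with a unique solution.

```python
import numpy as np

# Three non-degenerate anchors and a hidden sensor (illustrative positions).
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_true = np.array([0.3, 0.6])
d = np.linalg.norm(A - x_true, axis=1)  # exact (noiseless) distances

# |x - a_i|^2 = d_i^2; subtracting the first equation cancels |x|^2,
# leaving the linear system 2(a_i - a_0) . x = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2.
M = 2.0 * (A[1:] - A[0])
rhs = (np.sum(A[1:] ** 2, axis=1) - np.sum(A[0] ** 2)
       - d[1:] ** 2 + d[0] ** 2)
x_hat = np.linalg.solve(M, rhs)
assert np.allclose(x_hat, x_true)  # exact recovery in the noiseless setting
```

With noisy distances the cancellation no longer produces a consistent system, which is one way to see why the noisy problem is qualitatively harder.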
The solution extraction in A.7.6 uses the moment matrix to find the mean/variance of the sensor positions. We did not clarify this in sufficient detail in the paper but Lasserre 2010 [[link](https://epubs.siam.org/doi/10.1137/090759240)] discusses and proves the convergence of a sequence of semidefinite relaxations (here, our dual). Our dual (their primal) has moment vector solutions converging to that of a probability distribution encoding all globally optimal solutions. Thus, as long as we solve the SDP with large enough degree $s$ to high accuracy, our extracted solutions will be “close” to the true solution of the infinite-degree relaxation. In our cluster basis hierarchy, we do not prove the convergence of the same moment vector. Numerics seem to suggest that it behaves similarly as the full basis hierarchy, but this is presently a limitation of our theory, and we hope to find a similar convergence proof.
Due to the limited space available, we could not add much detail on the joint+marginal method. The joint+marginal method is essentially our dual semidefinite program (our Eq 3 and 4). In that paper, no quantitative convergence rates are provided, although later works (such as the ones we reference) have them.
Thank you for bringing up the Bellon et al paper. In Bellon et al, a parametric SDP has a trajectory of solutions (a sequence of PSD matrices $X$ parameterized by $t$) and seeks to characterize the geometry of the trajectory $(X, t)$. In our work, the primal SDP has as its objective $\int c(\omega) d\nu(\omega)$ where $c(\omega)$ is guaranteed to be a lower-bound to the cost function $f(x, \omega)$ and $\nu(\omega)$ is the density of $\omega$. When we solve the finite-degree primal SDP (Eq 2), we obtain a PSD matrix $W$ that describes the “non-negative part” of the function $f(x, \omega)$ for all $\omega$. As formulated we cannot directly compare our results with that of Bellon et al.
We instead consider the related problems
$$ \max c \quad \text{s.t.} \quad f(x, \omega) - c = m(x)^\top W(\omega)\, m(x), \quad W(\omega) \succcurlyeq 0 $$
with dual
$$ \min \langle M(\omega), K \rangle \quad \text{s.t.} \quad \langle K_a, M(\omega) \rangle = 0, \quad M(\omega) \succcurlyeq 0 $$
which describes a series of $(c, W, M)$ values that depend on the parameter $\omega \in \Omega$. We must further restrict to an open interval $\Omega = (\omega_i, \omega_f) \subseteq \mathbb{R}$. Now we have a dual SDP that matches the parametric SDP of Bellon et al.
Let’s first analyze the simple function $f(x, \omega) = (x - \omega)^2 + (\omega x)^2$ with $\omega \in [-1, 1]$. The parametric SDP amounts to finding a SOS approximation to the function at every $\omega$. We can analytically solve for the lower-bounding path $c(\omega)$ and the SOS residual $m(x)^T W(\omega) m(x)$. We find:
$$ m(x)^T W(\omega) m(x) = f(x, \omega) - c(\omega) = \frac{1}{\omega^2 + 1} (\omega^2 x - \omega + x)^2 $$
The path $(W, c)$ taken as $\omega$ varies in $[-1, 1]$ sweeps out a smooth curve and we conclude that all points $(W, c)$ are regular for $\omega \in [-1, 1]$.
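The closed form above is straightforward to verify numerically; the sketch below (an illustrative check over an arbitrary grid) confirms both the algebraic identity and the non-negativity of the SOS residual.

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 101)
ws = np.linspace(-1.0, 1.0, 101)
X, W = np.meshgrid(xs, ws)

f = (X - W) ** 2 + (W * X) ** 2
c_star = W ** 4 / (1.0 + W ** 2)                  # optimal lower bound c*(w)
residual = (W ** 2 * X - W + X) ** 2 / (W ** 2 + 1.0)

assert np.allclose(f - c_star, residual)          # the SOS identity
assert np.all(f - c_star >= -1e-12)               # c* is a valid lower bound
```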
It is more difficult for SNL, although we have a few observations to start with:
- In the noiseless setting (parameter $\omega = 0$) there exists a unique configuration of the sensor positions.
- In the noisy setting, for small $\omega$, the recovered sensor positions will be “close” to the correct unique sensor positions.
- Consider 3 anchors (A, B, C) and 1 sensor (X) in 2D SNL. If the 3 anchors are not degenerate, then when $\omega=0$ the sensor is positioned at the intersection of three circles, each having radius equal to the noiseless distance. If we perturb one of the observed distances (X-C) along a line, we note that there are two perturbed values that lead to loss = 0, i.e., consistent distances. This corresponds to the circle intersecting the other two at two points, with mirror symmetry about the line through A and B.
This last observation helps us see that we should have a discontinuity when the perturbation places the sensor on the line through A and B, and thus we should have multi-valued $W, M$ on either side of that point.
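The two consistent perturbed values can be computed explicitly by intersecting the circles around A and B (an illustrative sketch with assumed positions): the two intersection points are mirror images of each other about the line through A and B, and each yields one consistent X-C distance.

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([1.0, 2.0])
X = np.array([0.7, 0.9])                       # true sensor position
r1, r2 = np.linalg.norm(X - A), np.linalg.norm(X - B)

# Intersect the circles around A and B: two mirror points about line AB.
dAB = np.linalg.norm(B - A)
a = (r1 ** 2 - r2 ** 2 + dAB ** 2) / (2 * dAB)
h = np.sqrt(r1 ** 2 - a ** 2)
mid = A + a * (B - A) / dAB
perp = np.array([-(B - A)[1], (B - A)[0]]) / dAB
X1, X2 = mid + h * perp, mid - h * perp

# The two X-C distances consistent with the A/B measurements.
d1, d2 = np.linalg.norm(X1 - C), np.linalg.norm(X2 - C)
true_d = np.linalg.norm(X - C)
assert np.isclose(d1, true_d) or np.isclose(d2, true_d)
assert not np.isclose(d1, d2)   # two distinct consistent values (X off line AB)
```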
It’s possible that we misunderstood the work somehow. Regardless, it’s quite interesting and we shall incorporate this into our paper.
Thank you for your consideration.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Many thanks for having worked out the example. I see the distinction between the separated and non-separated parametric problems, where Bellon considered the former and you consider the latter. Thank you!
Strengths: 1. This proposes S-SOS, a novel heirarchy of relaxations for solving stochastic polynomial optimization problems.
2. The paper provides asymptotic convergence results for the sequence of relaxations - that is, as the degree of the SOS program increases, the error between the lower bound obtained via the relaxation and the true optimal value reduces to 0. In particular, the finding that by using piecewise constant approximation of $c(\omega)$ achieves the $1/s^2$ convergence rate, outperforming the polynomial parameterization of $c(\omega)$, is quite interesting and surprising to me.
3. The paper is written quite clearly, with no major typos that I could find.
4. The proofs appear to be correct, with a couple of small typos such as in equation 6.
Weaknesses: 1. In equation (6), I believe the polynomial in question should be $f(x,\omega) = (x-\omega)^2 + (\omega x)^2$. Otherwise, the correct solution should be $c^*(\omega) = -\omega^4/(1-\omega^2)$.
2. Perhaps in the related work section, a little space could be devoted to other approaches to polynomial optimization, such as the LP/SDP relaxations constructed using the Polya and Handelman theorems, and the LP/SOCP relaxations obtained by using DSOS and SDSOS polynomials.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Are there problems for which the exact lower bound can be achieved in a "single step"? That is, suppose the polynomial cost and constraint functions are of degree $s$, are there problems for which a degree $s$ relaxation suffices to obtain the true minimum? Do such problems arise in practical circumstances?
2. While the DSOS/SDSOS hierarchies do not, in general, converge to global optima, have the authors considered using the associated LP relaxations, particularly in the context of the sensor localization problem? This is salient since the LP/SOCP relaxations can be used to solve much larger problems than the SDP solutions.
3. Is there any intuition as to how the result stated in Prop. 2.2. can be extended to the multivariate case?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the work are discussed, though they aren't stated in a single section. It would be helpful to add such a section, perhaps in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Cxce for their detailed comments.
With regards to the typos identified, we appreciate the flagging of these and will correct them in the next version. The reviewer is indeed correct in that Equation (6) should have the plus sign.
We thank the reviewer for flagging additional approaches to polynomial optimization and will include those citations. In particular, we find the DSOS and SDSOS papers of particular interest. It seems that the core idea is to consider further relaxing the SDP to a SOCP or LP problem, which constrains the PSD cone further. We didn’t consider further relaxations as our goal was to propose a framework using the tightest possible yet still solvable bound (hence the SDP relaxation). We expect that LP and SOCP relaxations would lead to more efficient and scalable algorithms, though at the expense of accuracy.
We observe that in the regular SOS hierarchy, if one starts with a degree-$2s$ SOS polynomial then one finds exact convergence when using the hierarchy at degree $2s$. This is true in all cases in that setting, but once we pass to the parameterized/stochastic setting of S-SOS, this no longer holds true generically (or even in most practical cases).
Just as an example, consider the case discussed in Section 3.1, where $f(x, \omega) = (x - \omega)^2 + \omega^2 x^2$. The cost is a degree-4 polynomial in two variables, but we show that no finite-degree polynomial will achieve the exact cost $c^*(\omega) = \omega^4 / (1 + \omega^2)$. Instead, we see that it can be well-approximated by a polynomial of degree $s$ with $s$ small. And indeed, in many practical scenarios, we anticipate that this is the case. Why this is true (that simple polynomials in general have non-polynomial best lower bounds but may be well-approximated by low-degree polynomials nonetheless) is out of scope for our analysis, but it is a very interesting practical fact.
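This practical fact is easy to check directly for the example above (an illustrative sketch; the Chebyshev least-squares fit is our stand-in for a polynomial parameterization, not the S-SOS solver): the non-polynomial bound $c^*(\omega) = \omega^4/(1+\omega^2)$ is captured to high accuracy by low-degree polynomials on $[-1, 1]$.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

ws = np.linspace(-1.0, 1.0, 2001)
c_star = ws ** 4 / (1.0 + ws ** 2)   # non-polynomial optimal lower bound

errs = {}
for deg in (2, 4, 8):
    fit = cheb.Chebyshev.fit(ws, c_star, deg)      # least-squares fit
    errs[deg] = np.max(np.abs(fit(ws) - c_star))

assert errs[8] < errs[4] < errs[2]   # error shrinks with degree
assert errs[4] < 0.05                # already small at degree 4
```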
The reviewer has a good point that it is possible to extend the idea behind Prop 2.2 to the multivariate case. We neglected this in our work, but it appears that as in the 1D case of Prop 2.2, one can create a piecewise grid approximation to the lower-bounding function and do a SOS decomposition at each point. However, the constants that prefix the rates will get worse and have a dependence on the spatial dimension, i.e. we will have a rate of the form $\frac{C \sqrt{d}}{s_p}$ where $d$ is the dimension of the noise space $\Omega$ and $s_p$ is the number of grid points per dimension. One should also observe that the number of degrees of freedom for this approximation scales exponentially in the space dimension. We shall adapt the paper accordingly.
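The 1D piecewise-constant construction can be sketched as follows (illustrative; we take the exact per-cell minimum of $c^*(\omega) = \omega^4/(1+\omega^2)$ in place of a per-cell SOS solve), showing how the integrated gap shrinks as the grid is refined:

```python
import numpy as np

def pw_gap(num_cells, samples_per_cell=200):
    """Integrated gap E[c* - c_pw] for a cell-wise-constant lower bound."""
    edges = np.linspace(-1.0, 1.0, num_cells + 1)
    gap = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = np.linspace(lo, hi, samples_per_cell)
        c = w ** 4 / (1.0 + w ** 2)
        # The per-cell constant min(c) is a valid lower bound on that cell.
        gap += np.mean(c - c.min()) * (hi - lo)
    return gap / 2.0   # normalize by |Omega| = 2 (uniform measure on [-1, 1])

gaps = [pw_gap(s) for s in (4, 8, 16)]
assert gaps[0] > gaps[1] > gaps[2]   # refinement tightens the bound
```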
Thank you for your consideration.
---
Rebuttal Comment 1.1:
Title: Huh?
Comment: The authors write:
**We observe that in the regular SOS hierarchy, if one starts with a degree-2s SOS polynomial then you will find exact convergence when using the hierarchy at degree-2s.**
I would suggest that the authors review https://arxiv.org/abs/2403.08329, which analyzes the following:
$$
\begin{array}{rl}
\min_{x \in \mathbb{R}} & x \\
\mathrm{s.t.} & 1-x^2 \geq 0 \ \textrm{ and } \ x+(1-\varepsilon)x^2 \geq 0,
\end{array}
$$
parametrized by a scalar $\varepsilon \in [0,1]$. There, the convergence of the moment-SOS hierarchy is finite, but arbitrarily slow as $\varepsilon$ goes to zero.
---
Reply to Comment 1.1.1:
Title: Reply to the "Huh?" by Reviewer Dyr8
Comment: We thank reviewer Dyr8 for their remark. In particular, we appreciate the external citation they flagged.
Indeed, our comment was not precise and fails to be true in general. To be clearer, we were referring to the unconstrained case $$\min_{x\in\mathbb R^n} p(x)$$ for $p$ already in sum-of-squares form. Hence, the observations of https://arxiv.org/abs/2403.08329 do not quite apply to the setting we wanted to stress.
We hoped for exact convergence with the same degree in the unconstrained setting. This will be true, for example, in the cases where the objective attains $0$ as its minimum. We would like to point out that the latter is the case for many physical objectives that are the focus of our research and, thus, we made the claim. One can also observe that if $p(x)-p_{\min}$ allows for an SOS representation, it will be of the same degree as the original objective in the unconstrained case, since no cancellation effects can occur.
So the remaining question is whether we can shift SOS polynomials by constants and stay SOS. This, indeed, seems not to be true in general, as the example $$p(x,y,z) = x^2y^2z^2+(x^2-1)^2+(y^2-1)^2+(z^2-1)^2$$ suggests (numerically verified). We would like to follow up on this question.
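To make the example concrete (an illustrative numerical check, not an SOS certificate): a multi-start search agrees with the symmetric candidate $x^2 = y^2 = z^2 = \sqrt{3} - 1$, giving $p_{\min} = u^3 + 3(u-1)^2 \approx 0.608 > 0$ with $u = \sqrt{3} - 1$. Verifying that $p - p_{\min}$ is not SOS would additionally require an SDP solver, which we omit here.

```python
import numpy as np
from scipy.optimize import minimize

def p(v):
    x, y, z = v
    return (x * y * z) ** 2 + (x**2 - 1) ** 2 + (y**2 - 1) ** 2 + (z**2 - 1) ** 2

rng = np.random.default_rng(2)
# One start near the symmetric candidate, plus random restarts.
starts = [np.array([0.8, 0.8, 0.8])] + [rng.uniform(-1.5, 1.5, 3) for _ in range(10)]
best = min(minimize(p, s).fun for s in starts)

# Symmetric candidate: x = y = z = t with t^2 = sqrt(3) - 1.
u = np.sqrt(3.0) - 1.0
candidate = u ** 3 + 3 * (u - 1) ** 2
assert best > 0.0
assert abs(best - candidate) < 1e-4
```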
As a general remark why assuming $\min p =0$ is of interest, we would like to point out that eventually we do not know the exact form of the objective but are given only function values at samples. However, one can assume a physical model that in many cases would allow an exact SOS recovery in one step as the minimum will be zero.
Furthermore, observe that in the parametric case the situation is more complex. Here we have to approximate the lower-bounding function by polynomials, which in general is a severe restriction; thus, we cannot expect to obtain finite convergence in any relevant case. Additionally, the degree we use for the SOS hierarchies is lower-bounded by the degree we use to approximate the lower-bounding function, and the question of finite convergence seems less relevant.
---
Rebuttal 2:
Title: Response to Author Rebuttal
Comment: I thank the authors for their thoughtful rebuttal. At this point, I'm happy to maintain my positive score for this work.
**Edit:**
I would like to elaborate on my justifications for keeping the score as is (weak accept): the work, while interesting, (generally) technically correct, and novel, has a few drawbacks that prevent me from granting a higher score. First, the scale of the experiments is far smaller than in previous work, as pointed out by Reviewer Dyr8. Second, the lack of a multivariate bound (which, I believe, nearly all problems fall under, including the experiments presented here) makes the analysis a little handwavy (the dependence on the square root of the dimension in a possible multivariate version of Prop. 2.2, as pointed out by the authors, slightly weakens the result from an applicability standpoint). | Summary: In their study, the authors investigate parametric polynomial optimization where the function to be minimized is \( f(x, \omega) \). Here, \( x \) represents the decision variable, and \( \omega \) signifies a noise parameter. The primary goal is to approximate the best lower bound, \( c^*(\omega) = \inf_x f(x, \omega) \), for each value of \( \omega \). To manage this setup, the Sum-Of-Squares (SOS) hierarchy is adapted, an approach first introduced by Lasserre in his seminal ``Joint and Marginal'' work in the late 2000s.
The paper elaborates on the derivation of this hierarchy and its dual, as well as discussing the rates of convergence under suitable assumptions about the optimal solution \( c^* \), and includes some applications towards the end of the paper. From a technical standpoint, the work seems correct and presents a rigorous approach to handling stochastic variables in polynomial optimization. However, it does not significantly deviate from established methodologies, which may limit its appeal in terms of novelty.
Regarding its relevance to the NeurIPS audience, while the modified hierarchy certainly adds practical value, the paper does not address applications that align closely with the core interests of the community. The experiments, focusing on sensor network localization, seem peripheral to the main areas of interest at NeurIPS, which typically centers around more direct applications to machine learning and artificial intelligence technologies.
In conclusion, although the paper is technically proficient and might captivate a niche audience interested in theoretical optimization, it appears to fall short of the high innovation standards and relevance to ML typically expected for NeurIPS publications.
Strengths: -- Modification of the SOS hierarchy to incorporate noisy parametric problems
-- Error bounds under assumptions
Weaknesses: -- Relevance
-- Novelty
Technical Quality: 3
Clarity: 2
Questions for Authors: In your paper, you discuss the adaptation of the Sum-Of-Squares (SOS) hierarchy to handle stochastic parameters in polynomial optimization, which is a significant theoretical advancement. However, the practical applications presented, such as sensor network localization, seem somewhat tangential to the core interests of the NeurIPS community, which often focuses on direct applications in machine learning and artificial intelligence.
Could you elaborate on how the methodologies developed in your study could be applied to more central problems in machine learning? Additionally, are there potential modifications or extensions to the SOS hierarchy that could make it more directly applicable to common challenges in neural network training or optimization under uncertainty?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer pMRv for their detailed comments.
Concerning our work’s novelty and relevance, we believe that our proposal is of specific interest to the ML/AI community.
Applied mathematicians have studied SDPs and convex optimization for some time, while ML/AI scientists are more interested in non-convex/scalable methods as they can be applied to deep learning and other topics of recent interest. We believe fruitful work is to be done at the intersection of these two. To give just two examples of papers that fuse ideas across polynomial optimization, SOS methods, and neural networks, consider:
- Agrawal et al 2019 [[link](http://arxiv.org/abs/1910.12430)]: Introduces differentiable optimization layers in neural networks, which combines ideas from practical convex optimization and produces a new tool to be used in deep learning.
- Jaini et al 2019 [[link](http://arxiv.org/abs/1905.02325)]: A framework for high-dimensional density estimation inspired by sum-of-squares polynomials.
Our work assumes that both $f(x, \omega)$ and the lower-bounding function $c(\omega)$ are polynomials. Once we characterize how the SOS hierarchy works in this simple case, we can generalize to the cases where both functions are more complex, perhaps even parameterized by neural networks. Insights from understanding the rate of convergence and the speed of solving a given finite-degree SDP can help us design algorithms in the general case where the functions are more complex.
Consider that we propose a general primal/dual program whose solution provides a tight lower bound. To solve it in practice, one may make the polynomial assumption and then truncate at finite degree, which gives rise to the SDPs we analyze here. Another direction may be to solve the entire problem in the neural network setting: parameterize the basis function $m(x, \omega)$ with a neural network, use a smooth interpolating function class for the lower bound $c(\omega)$, and solve the program with an iterative gradient method. Finally, inspired by Agrawal et al 2019, imagine using the lower bound (or extracted solutions) obtained as output from our solved SDP as input to another layer in a neural network.
The reviewer may not be impressed by the size of the problems that can be solved here, but note our discussion on scalability and size in the global response. We will call the reviewer’s attention to the fact that noiseless v. noisy SNL are two very different problems - in the former we can solve SNL instances of ~100k sensors, in the latter it becomes much more difficult. The resulting output of our program actually solves for the global optimum, which gives sensor positions for every possible configuration of parameter $\omega$. Finally, we only use general-purpose SDP solvers in this work. Many additional methods can be explored, such as exploiting more specialized solvers, low-rank structure, sparsity, and subspaces of the PSD cone. We also expect that leveraging GPUs and other methods of acceleration can immensely increase the size and practicality of the framework we propose.
Our work is also relevant to many problems commonly seen in the AI + Science area. Recent works often seek to learn some kind of molecular potential with deep learning, to do something like “deep molecular dynamics”. The goal here is to have some energy function that depends on some external parameters (e.g. noise). We believe that present approaches lack a principled understanding, and that grounding it in a polynomial optimization setting and sum-of-squares methods can lead to generally useful results, if not a new way of thinking about such problems. To such an end, our paper also proposes a cluster basis hierarchy, introduced and used to scale the size of problems we can solve.
Thank you again for your consideration. | Rebuttal 1:
Rebuttal: We saw the following points come up in multiple reviews, so we thought it would make sense to address them in a global response.
The reviewers chiefly seemed to have concerns about the following:
- The limitations of our theoretical assumptions (particularly the trigonometric polynomial one) and the possibility of extending to more general polynomial families
- Clarity, motivation, and further discussion (e.g. more on limitations, cluster basis hierarchy)
- The limitations of our numerical experiments, such as the relevance of sensor network localization and the small size of our experiments
**Limitations of theory.**
With respect to our theory, we agree wholeheartedly that the specialization to 1-periodic trigonometric polynomials is a severe limitation. To review, our work proves quantitative convergence of the S-SOS hierarchy in the setting of (i) a compact domain $X, \Omega$ and (ii) 1-periodic trigonometric polynomials. Periodic trigonometric polynomials are a natural choice to take advantage of Fourier convergence results on a compact domain. If the function $f(x, \omega)$ can be assumed to be only of interest for $(x, \omega)$ in some compact set, then we can rescale the compact set to be a 1-periodic compact domain and apply the 1-periodic trigonometric polynomial results. To use other polynomial families, one can apply a substitution argument, i.e. any result for the trigonometric polynomial hierarchy leads to a matching result for regular polynomials (2.2 in Bach Rudi 2023, link). We regret that we didn’t mention this and will add this to our paper.
**Improved clarity and discussion, esp. on the cluster basis hierarchy.**
The reviewers also mentioned that our paper could have been written more clearly and with much more discussion and elaboration, including on the motivations and intuitions behind the theory as well as the cluster basis hierarchy. Due to the limited space of the NeurIPS venue, we squeezed what we could into the main text and put everything else in the appendix. We hope to greatly expand the supplement as we continue our revisions.
As for the cluster basis hierarchy, we agree that more discussion is needed — as it is a core technique behind our goal of scaling up the S-SOS approach, particularly when the cost function is well-structured. We will expand our discussion of it in the supplement and hope to cover it in more detail in future work. Unfortunately, it is difficult to prove similar theoretical results for the cluster basis. But here we can also provide numerical support for the convergence of the cluster basis.
**Limitations of numerics: relevance and small scale.**
Finally, with respect to our numerics, the reviewers commented that the core problem (sensor network localization, SNL) is interesting but not of relevance to the broader NeurIPS community, and that we only demonstrated it on $N=15$ sensors, a small number compared to the state of the art. We commented in response to another reviewer but will reproduce our response here.
*In the cited paper (Krislock and Wolkowicz 2018, https://arxiv.org/abs/1002.0013), the authors propose an algebraic reduction of the noiseless SNL problem so that they can analytically obtain the range of the PSD matrix. This simplifies the SDP dramatically, however in the noisy setting (where observed sensor-sensor distances are perturbed with observation noise) this approach is unusable, requiring significant modification. Krislock and Wolkowicz use this reduction to solve SNL problems of 10k-100k sensors, but they develop a highly specialized algorithm that does not use any SDP solvers, as per the abstract.*
*Noiseless SNL dramatically simplifies the problem. The intuition here being that in the noiseless setting, localizing even a small number of sensors near an anchor will propagate the correctly localized positions to sensors nearby. As such, if one can find small groups of sensors that can be well-oriented with respect to each other, one can pursue localization of these groups in parallel and then orient them globally at the very end. In the noisy setting, any error in localization can easily propagate to the whole instance, dramatically reducing the size of problems that can be effectively solved (c.f. https://link.springer.com/article/10.1007/s11276-007-0034-9, https://epubs.siam.org/doi/10.1137/100792366).*
Note also that the $N=15$ number is deceptive. **Uncertainty makes the SNL problem considerably more difficult than the noiseless setting.** S-SOS handles this naturally while also solving the problem for its global optimum. This means that we get a solution for the sensor positions for every possible configuration of noise, via the probability distribution $\mu(x, \omega)$.
As for the relevance to NeurIPS: sensor network localization is a problem that this conference is quite unfamiliar with. It is an old problem in polynomial optimization but generally challenging in the noisy setting. Once lifted into the stochastic setting, we find that our S-SOS framework is a natural fit for this and many other possible problems in "AI + Science". We believe that taking this approach can be quite fruitful, and we hope to see this line of work through.
We want to thank all reviewers for their consideration. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The Sum of Squares (SOS) technique is well-known in polynomial optimization studies. This paper studies its variant, in which the function to be lower bounded has random parameters drawn from some probability distribution. The main contribution is a cluster-based SDP hierarchy for the Stochastic-SOS (S-SOS) method and a convergence proof for this and the standard Lasserre. Experiments demonstrate the effectiveness of the new hierarchy in the sensor network localization problem (SNL).
Strengths: - One strength lies in the convergence results on S-SOS, which is a vital generalization of the standard SOS. The proof and arguments in the Appendix seem non-trivial and thorough.
- The cluster-basis hierarchy is novel and practically relevant. Although SOS has good theoretical performance, the SDPs could be expensive to solve in real-world engineering problems. The proposed hierarchy reduces the sizes of the SDPs that need to be solved.
- Applications to the SNL problem complement the theoretical results in convergence and demonstrate the advantage of leveraging the S-SOS technique over standard Monte Carlo methods.
Weaknesses: - It would be better if the authors presented the assumptions in a subsection and explained their restrictiveness. For example, Theorem 2.1 assumes the objective polynomial to be trigonometric. I wonder how challenging it is to extend the results to more general polynomial families.
- The paper uses only two paragraphs to introduce and illustrate the cluster-basis hierarchy, a vital part of its contributions. A more comprehensive discussion of the hierarchy could enhance the paper's content and readability. Besides, I would suggest the authors polish the main paper more.
- Novelties in the proof techniques would often attract independent interests. I wonder if the convergence proof is established by applying only the standard techniques and existing results.
- The experiments seem to contain, at most, $N=15$ sensors, which are suitable for synthetic validation but may not be sufficient to demonstrate the technique's practical advantage.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please check the comments on the paper's strengths and weaknesses. The authors should also feel free to point out any inaccuracy or misunderstanding. Thanks.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: It may be beneficial if the authors could explain more about the paper's limitations in theory (restrictiveness of the assumptions) and practice (practicality of the experimental settings). But I do not find the current presentation concerning. Similarly, I do not think the paper would have any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer MyRu for their detailed comments.
The reviewer brings up a good point about presenting the assumptions in a subsection. We will set aside a section in the appendix for this purpose.
As for novelty in our proof techniques or lack thereof, it was not obvious to us that one could take the existing results and just apply them directly to a new framework. The way a proof is presented also matters and we hope that our detailed presentation in the appendix is instructive for similar paths taken in the future.
We believe our global response covers the remaining points brought up in the strengths/weaknesses sections, which center on the limitations of our theory and numerics:
- Limitations of trigonometric polynomial assumption and the possibility of extending to more general polynomial families
- Expanded cluster basis hierarchy discussion
- Small size of experiments
Thank you again for your consideration. | null | null | null | null | null | null |
ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing | Accept (poster) | Summary: This paper presents a task decomposition method that achieves more robust 3D scene editing. The main contribution is the concept of decomposing the desired task (represented by a prompt) and the adaptive 3D Gaussian Splatting training process. The edited appearance and geometry using decomposed tasks are better than the results generated by previous methods.
Strengths: The strengths of this paper are:
- The idea of decomposing the task is interesting and useful.
- The results demonstrated in the paper are convincing and support the claims.
Weaknesses: The weaknesses of this paper are:
- The simple linear decomposition scheme does not provide any semantic meaning, which makes it hard to evaluate the result of each subtask.
- Moreover, there are no results (except Fig. 1) for each subtask.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It is unclear whether the method can do more object manipulation instead of just appearance editing. For example, it would be nice to show more examples for ScanNet++ scenes that involve moving or adding objects. Current results mostly just change the appearance.
- I think it is important to show and evaluate the results of each subtask, instead of just showing the final results in the main paper (e.g., use a more complex task and test the alignment between the results of each subtask and the human task decomposition results). I understand this is still a limitation of the simple linear interpolation scheme, but the results would be very meaningful and helpful imho for future research.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I already discussed the main limitation, i.e., the task decomposition in the previous section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. The semantic meaning of decomposition and evaluation of subtasks
- In our method, the semantic meaning can be interpreted as "how IP2P acts with such interpolated embedding." Though we cannot write down the text instructions corresponding to the interpolated embedding, we can still visualize it with IP2P's editing results w.r.t. the interpolated embeddings. We provide a visualization of the alignment between the edited scene and IP2P's per-subtask editing results in **FigPDF.E**.
- From the first row, we can roughly interpret each subtask's goal. For example, $r_2$ roughly indicates, "give him blue eyes and pointy ears; make the background slightly green," while $r_4$ indicates, "make his eyes completely blue, his hair red, and his face slightly thinner; make the background dark green."
- Comparing the two rows, we can observe that the edited scene roughly matches the appearance/effect of the IP2P image editing, as especially shown in hair color. This shows a subtask semantic alignment for each subtask.
- Though our task decomposition is based on a linear interpolation of embeddings, our method does not depend on, and is actually agnostic to, its exact semantic meaning.
- Our key insight is to decompose a difficult task into several easier tasks to reduce the inconsistency in the distillation (Sec.3.2). Instead of focusing on the semantic meaning of the interpolated embeddings, we focus more on whether and how selecting an interpolation point can sufficiently decrease the difficulty and inconsistency.
- Therefore, our difficulty metric for adaptive subtask decomposition (L159) is designed based on the approximated task difficulty, instead of the difference of semantic meanings.
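As a concrete (hypothetical) illustration of the interpolation underlying the subtasks — the names and shapes below are illustrative assumptions, not the paper's actual implementation — each subtask target $r_i$ conditions the editor on a convex combination of the source and target instruction embeddings:

```python
import numpy as np

def subtask_embeddings(e_src, e_tgt, ts):
    """Linearly interpolate instruction embeddings: r_i = (1 - t_i) e_src + t_i e_tgt.

    e_src, e_tgt: (d,) embeddings of the original and the full target
    instruction; ts: increasing interpolation points in [0, 1], one per subtask.
    """
    ts = np.asarray(ts, dtype=float)[:, None]
    return (1.0 - ts) * e_src + ts * e_tgt

e_src, e_tgt = np.zeros(8), np.ones(8)
rs = subtask_embeddings(e_src, e_tgt, [0.25, 0.5, 0.75, 1.0])
# rs[-1] recovers the full target embedding; earlier rows are milder conditions.
```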
### W2/Q2. Results of each subtask
- We have provided the results of each subtask in our supplementary video at 0:03. Please refer to it.
- As shown at 0:03 of the supplementary video, a latter subtask results in more aggressive editing (e.g., redder hair, thinner face, and larger ears in the "Elf" editing task).
- We also provide a visualization of the alignment between the edited scene and IP2P's behavior on each subtask. Please refer to the response to W1 above and **FigPDF.E**.
- We will provide more results in the main paper in the revision.
### Q1. Tasks about moving or adding objects in the scenes
- We would like to clarify that, following previous works IN2N, ViCA-NeRF, ConsistDreamer, etc., our paper focuses on a framework to perform instruction-guided editing, that distills the editing signals from an existing pre-trained 2D diffusion model. We would like to humbly point out that the operation of moving or adding objects is a challenging task that is not yet solved by any of the baselines, and is out of the scope of this paper.
- The method we propose is a distillation-based, instruction-guided pipeline. Therefore, the editing capability of our framework, as well as that of the existing baselines, is entirely distilled from the 2D diffusion model, which would need to support instruction-guided object movement/creation. However, most current 2D diffusion models do not support instruction-guided object movement well, and multiple views may even require different instructions to perform the editing (e.g., "left" should be changed to "right" in an opposite view, different visible reference objects, etc.).
- Designing such a 2D diffusion model and/or a format of 3D-consistent editing instruction is out of the scope of this paper. We leave this task as an interesting future work. However, the idea of progressive editing is general, and can be potentially applied to such a task once we have an applicable 2D diffusion model for this task with appropriate instruction conditions. | Summary: This paper presents ProgressEditor, which decomposes the 3D scene editing task into multiple subtasks and progressively modifies the scene which is represented by 3D Gaussians. The subtask decomposition is defined as the linear interpolation of the encoding of the editing prompt. Given the editing instruction, the proposed approach recursively searches for the proper subtask decomposition, so that the difficulty of each subtask is uniformly distributed. Then it progressively completes the subtasks with the proposed adaptive Gaussian creation strategy and finally obtains the edited scene.
Strengths: The strengths of this paper include:
(1) It proposes a novel idea that decomposes the 3D editing task into multiple subtasks and progressively completes the editing.
(2) It provides an adaptive 3DGS tailored to the progressive 3D editing framework, which is able to refine the 3DGS more efficiently.
(3) The proposed approach generates high-quality editing results with clear texture and precise geometry.
Weaknesses: The weaknesses include:
(1) The experimental evaluation is insufficient to validate the effectiveness of the proposed approach, due to the lack of quantitative assessment.
(2) It lacks the ablation study on some technical designs described in the method section. Please see my questions below.
Technical Quality: 2
Clarity: 3
Questions for Authors: The key of the proposed approach is to decompose the 3D scene editing tasks to alleviate the multi-view inconsistency problem during the distillation process. However, what about the consistency of each subtask? The instruction encoding is not necessarily linear. So the inconsistency still exists in each subtask. Although each subtask should have less inconsistency compared to the original task, would the progressive editing (multi-stage editing) introduce additional burden to the process?
Lines 194-196 describe the subtask scheduling, where additional subtasks r_0 and r_n are added. There should be an ablation study to validate the necessity of this setting. There should also be more technical details about the Gaussian creation strategy described in lines 229-231.
It is difficult to measure the advantage of the proposed method compared to the other alternatives. It seems the generated results of the proposed method preserve the feature of the person better in Fig.3. But it’s hard to say that it generates the results with more geometry editing (line 283). A quantitative evaluation should be reported to validate the effectiveness of the proposed method. In addition, a comparison of the time cost of each method should also be presented.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: This paper includes a discussion of the limitations of the proposed approach. It's better to present some failure cases for a better understanding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. Quantitative assessment
- Please refer to the "Quantitative Evaluation" in the **global author rebuttal**. Thank you.
### W2/Q2.2. Additional subtasks $r_0$ and $r_n$
- Our framework is designed in a setting, where the input and output format of the scene can be in *any* scene representation (e.g., NeRFs, conventional 3DGS, etc.). However, our editing procedure requires the scene representation to be our *Adaptive* 3DGS, which is tailored to progressive editing.
- Therefore, the additional subtasks $r_0$ and $r_n$ represent the input and output states where the scene is in other representations. The corresponding subtasks $s_0 = S(s_\mathrm{input}, r_0)$ and $s_\mathrm{output} = S(s_n, r_n)$ are for the conversion between other scene representations and our adaptive 3DGS (i.e., re-reconstructions).
- Within these re-reconstructions, the diffusion model works as a simple refiner of the per-view images, which preserves most of the appearance and only refines some defects or abnormalities, and may also compensate for some minor insufficiently edited parts.
- We provide a visualization of results before and after the refinement of additional subtask $r_n$ in **FigPDF.D**, where the depth maps are the 3DGS-modeled depth, segmented to emphasize the foreground. We can observe that the two images have very similar appearances, while the refined version has more precise geometry and appearance near the ear. This shows that the additional $r_n$ can make minor refinements to the edited results but will not significantly change or improve the appearance.
### Q1. The consistency and non-linearity of subtasks
- In our method, we use *adaptive* task decomposition to reduce the difficulty or inconsistency of each subtask. As mentioned in L150-L180, we approximate the difficulty with the difference between original and edited images, and design an adaptive subtask decomposition upon this.
- With this method, even if the instruction encoder is not linear, we can still obtain a subtask decomposition with reduced difficulty in between, which is no larger than $\mathrm{d}_\mathrm{threshold}$ (L173), a preset threshold for subtask difficulty.
- As our method decomposes one editing task into multiple subtasks, we have to solve more editing (sub-)tasks in total. Though we may still need a longer overall running time to complete all these subtasks, each decomposed subtask is simpler to achieve and therefore takes a shorter time than completing the full editing task; with this trade-off, we can significantly improve performance and gain control over editing aggressivity. Notably, our ProgressEditor is significantly more efficient than the current state-of-the-art ConsistDreamer, as detailed in the reply to Q3.2 below.
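The adaptive selection described above can be sketched as a recursive bisection of the interpolation interval. This is a minimal toy sketch, not the paper's actual code: `difficulty` is a hypothetical stand-in for the image-difference-based difficulty estimate, and the example below simply uses interval length.

```python
def decompose(t_lo, t_hi, difficulty, d_threshold):
    """Recursively bisect [t_lo, t_hi] until each subtask's estimated
    difficulty drops below d_threshold."""
    if difficulty(t_lo, t_hi) <= d_threshold:
        return [(t_lo, t_hi)]
    mid = 0.5 * (t_lo + t_hi)
    return (decompose(t_lo, mid, difficulty, d_threshold)
            + decompose(mid, t_hi, difficulty, d_threshold))

# Toy difficulty: proportional to interval length.
subtasks = decompose(0.0, 1.0, lambda a, b: b - a, d_threshold=0.3)
print(subtasks)  # [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
```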
### Q2.1 Gaussian creation strategy in L229-L231
- This strategy controls the growth speed of the Gaussians. More specifically, if we culled $n$ Gaussians in the previous step, we only allow $t(n)$ Gaussians to be created at this step, where $t(n)$ is the threshold schedule w.r.t. $n$ and the total number of Gaussians.
- With this controlling strategy, the Gaussian creation (1) will not generate too many Gaussians for slightly inconsistent multi-view images, and (2) will create more Gaussians at the high-frequency parts of the scene, which improves the results.
- We will add these details in the revision.
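A minimal sketch of such a creation-throttling rule — all constants and names here are illustrative assumptions, not the paper's actual schedule $t(n)$:

```python
def creation_budget(n_culled, n_total, base_frac=0.02, cull_frac=0.5):
    # Hypothetical t(n): allow a small fraction of the current scene size
    # plus a fraction of the Gaussians culled in the previous step.
    return int(base_frac * n_total + cull_frac * n_culled)

def create_gaussians(candidates, n_culled, n_total, score):
    """Keep only the highest-scoring candidate Gaussians within the budget,
    so slightly inconsistent multi-view images don't inflate the count."""
    budget = creation_budget(n_culled, n_total)
    return sorted(candidates, key=score, reverse=True)[:budget]

kept = create_gaussians(list(range(10)), n_culled=4, n_total=100, score=lambda g: g)
print(kept)  # [9, 8, 7, 6]
```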
### Q3.1. Where our results have more geometry editing
- Our ProgressEditor performs the editing with more geometry editing, as shown in the following editing cases:
- In the "Tolkien Elf" task of Fangzhou scene in Fig.3, ours w/ high aggressivity makes the shape of the face thinner, while most baselines tend to preserve the original shape.
- In the "Lord Voldemort" task of Fangzhou scene in Fig.3, ours generates a face with more wrinkles and also edits the neck part.
- In the "Clown" task of the Face scene in Fig.3, the clown edited by ours is smiling more aggressively.
### Q3.2. Comparison of time cost
- As we are using a dual-GPU training strategy (L233), each subtask takes only slightly longer than training a 3DGS representation from scratch (10-15 minutes). Therefore, the whole editing process with 4 subtasks takes 1-2 hours, and the ones with 8 subtasks take 3-4 hours.
- Compared with baselines, ConsistDreamer takes 12-24 hours according to their paper; other baselines like IN2N may take comparable or shorter time than ours, but can only achieve lower-quality editing results with significantly worse 3D consistency.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. The response clarifies the details about "additional subtasks" and "consistency and non-linearity of subtasks", and "Gaussian creation strategy". My only remaining concern is the "ablation study on some technical designs". Specifically, the authors have already shown the "no subtask decomposition" variant in Fig.5 of the paper. Is there any quantitative evaluation (like the newly added comparison evaluation) to validate the effectiveness of the subtask decomposition, which is the main novelty of the proposed approach? I'm happy to increase my rating given any quantitative evidence about this ablation study.
---
Rebuttal 2:
Comment: We sincerely thank the reviewer for acknowledging our clarifications, and we are glad that our response has addressed most of the reviewer’s concerns. We thank the reviewer for the follow-up question and address the remaining concern here.
### D1. Quantitative ablation study
- We provide the quantitative comparison between our full method and the “No Decomposition” (“ND”) variant, as shown in the table below.
| Variant | GPT↑ | CTIDS↑ | CDC↑ |
| :---- | :---- | :---- | :---- |
| Ours ND | 72.87 | 0.0671 | 0.2902 |
| Ours Full | **82.80** | **0.0844** | **0.3833** |
- Note: To promptly address the reviewer's question, here we primarily focus on GPT and CLIP-based metrics, which we think are sufficient to validate the effectiveness of our method. We leave the user study later, as it requires additional time to gather user responses.
- In addition, the GPT score of our full method presented here is not directly comparable to that in the global author rebuttal. In this case, we compare our full method with the “ND” variant, whereas in the global author rebuttal, we compared our full method with the IN2N and ConsistDreamer baselines.
- Without subtask decomposition, the “ND” variant directly exposes the 3DGS to highly inconsistent edited multi-view images. This makes the 3DGS overfit to such inconsistent images with view-dependent effects and finally leads to consistently lower metrics. Together with the visualization (e.g., Fig.5), this quantitative evaluation validates the effectiveness of our proposed subtask decomposition.
If the reviewer has any follow-up questions, we are happy to discuss them.
---
Rebuttal 3:
Comment: ### D1 (Continued). Quantitative ablation study - User study
- Following our earlier response, we have now obtained the results from our user study involving 41 participants. The results are shown in the table below, with the following metrics: user study of overall quality ("USO"), user study of 3D consistency ("US3D"), and user study of shape plausibility ("USP", detailed below).
| Variant | USO↑ | US3D↑ | USP↑ |
| :---- | :---- | :---- | :---- |
| Ours ND | 68.46 | 61.72 | 60.73 |
| Ours Full | **92.70** | **90.48** | **88.72** |
- Please note that similar to the GPT scores, the USO and US3D metrics here are not directly comparable with those in our global rebuttal.
- For this user study, we further evaluate shape plausibility (USP) as an additional metric. We provide participants with the modeled depth maps, similar to those in Fig.5, along with the rendered RGB images. We then ask them to evaluate whether the shapes are reasonable and match the rendered images.
- Consistent with the conclusion in our earlier response, the "ND" variant performs significantly worse than our full method under the user study in all metrics. This further validates the effectiveness of our proposed subtask decomposition.
---
Rebuttal Comment 3.1:
Comment: Thank you for making the quantitative evaluation! The numbers validate the effectiveness of the subtask decomposition idea.
So I have improved my score to weak accept.
---
Reply to Comment 3.1.1:
Comment: We sincerely thank the reviewer for the positive feedback and for raising the score. Your constructive comments and suggestions have been invaluable in improving the paper.
---
Summary: This work focuses on instruction-based 3D scene editing. It proposes a progressive editing framework that decomposes the complex editing task into subtasks of increasing difficulty. In this way, it ensures multi-view consistency within each easy subtask and finally obtains consistent editing for the whole task.
Strengths: 1. The idea of decomposing a difficult task into several easy subtasks is interesting and makes sense. This can avoid inconsistent multiview edits based on a complex instruction.
2. The figure of pipeline clearly illustrates the methodology and motivation. Based on the visualization results, the proposed method demonstrates superior editing effects compared to the baseline methods.
Weaknesses: 1. I have some doubts about the main technique of the article. This work defines sub-tasks of different difficulties by weighting instruction prompts and empty prompts with r. I am uncertain whether the editing difficulty is sufficiently sensitive to the weight r. The authors should provide 2D multiview editing results for different r to illustrate that as r increases, the inconsistency across multiple views also increases, indicating a rise in editing difficulty.
2. The illustrations in Fig.3 are not clear. I don't know which row/column corresponds to which method. Please clarify this in the response.
3. This paper does not provide a quantitative comparison with baselines.
4. Most of the results are based on human faces. The edits conducted on Scannet scenes are style transfer, which does not require high levels of consistency. It would be better to give more visualizations of outdoor scenes such as the 'bear' and 'garden', etc.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness. My major concern is the reasonability of defining the difficulties based on the weight r.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the possible limitations and social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: ### W1/Q1. About the editing difficulty w.r.t. the weight $r$.
- We provide a visualization of per-view edited results (i.e., each image is edited *individually* with IP2P) w.r.t. different $r$'s as **FigPDF.A**. The multi-view inconsistency situations are as follows:
- $r_0$: All the views are the same as the original view, so it is perfectly consistent.
- $r_1$: The face begins to turn white. The only inconsistency is the different degrees of changing color.
- $r_2$: Some parts of the face become red, with a new inconsistency in different locations of red parts.
- $r_3$: More parts of the face change the color, and the nose changes the shape. More inconsistency emerges, including color distribution and nose shape.
- $r_4$: The final edited results with various inconsistencies in all parts, even including the hair color.
- This visualization shows that the editing inconsistency and difficulty increase when $r$ increases.
- In Sec. 3.2, we also show some analysis about the increment of difficulty when $r$ increases.
- As mentioned in L138, the task difficulty is proportional to the size of the feasible output space (FOS, the set of possible scenes that can be regarded as the edited result of the editing task).
- When $r=0$, the task is just "keeping original," and the FOS only contains the original scene. With the increment of $r$, the task becomes more aggressive, i.e., far from "keeping original," and brings a more significant change to the scene.
- As mentioned in L153, "intuitively, an editing task that brings a significant change typically has more degrees of freedom to apply such a change, leading to a larger FOS," and therefore, higher difficulty and more inconsistency.
- With subtask decomposition, we only need to solve subtasks from $r_{i-1}$ to $r_{i}$, which has a much smaller FOS compared with directly solving the subtask $r_{i}$ (from $r_0=0$), and, therefore, is much easier.
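For intuition only, the prompt weighting behind the subtasks can be sketched as a simple interpolation (a hypothetical illustration with toy embeddings; the actual conditioning in the diffusion-based editor is more involved):

```python
import numpy as np

def subtask_condition(e_instruction, e_empty, r):
    """Interpolate between the empty prompt (r=0, 'keep original') and the
    full instruction (r=1, the complete, hardest editing task)."""
    return r * e_instruction + (1.0 - r) * e_empty

# Toy prompt embeddings (real embeddings would come from the text encoder).
e_empty = np.zeros(4)
e_instr = np.ones(4)

# A schedule 0 = r_0 < r_1 < ... < r_n = 1 of progressively harder subtasks;
# each step only needs to bridge the small gap from r_{i-1} to r_i.
schedule = [0.0, 0.25, 0.5, 0.75, 1.0]
conditions = [subtask_condition(e_instr, e_empty, r) for r in schedule]
```

Each consecutive pair of conditions defines one easy subtask with a small feasible output space, matching the FOS argument above.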
### W2. The illustrations in Fig.3
- Please refer to the maps in **FigPDF.B** to help understand the organization of Fig.3.
- The upper part of Fig.3 contains the results of the Fangzhou scene. There are two sub-tables on the left and right. In each of the sub-tables, each row corresponds to a method (baseline or ours), and each two-column group represents an editing task (e.g., "Turn him into the Tolkien Elf").
- The lower part of Fig.3 contains two parts.
- The first row represents the original scene, and the editing results of the task "Turn him into a clown."
- In the first 6 columns of the table below, each row corresponds to a method, and each two-column group represents an editing task.
- The last 4 columns show the crucial editing task "Give him a plaid jacket." The two rows on the top-left continue the same rows of the first 6 columns, which correspond to baselines "IN2N" and "ConsistDreamer," and the two rows on the top-right are two other baselines "PDS" and "EN2N." The two rows on the bottom also continue the same rows of the first 6 columns, which are "ours" under two settings.
- We apologize for the confusion caused by the layout of Fig.3. As different baselines have different publicly available results (e.g., some methods provide code for reproduction, while others can only be compared by referring to the images provided in their papers), we have to organize Fig.3 in this way to show all of them concisely. We will revise the caption to include a detailed explanation of its organization in the revision.
### W3. Quantitative comparison
- Please refer to the "Quantitative Evaluation" in the **global author rebuttal**. Thank you.
### W4. Results of outdoor scenes
- In our original submission, we emphasized the results of the scenes which were mostly covered by the baselines, for a thorough comparison.
- We thank the reviewer for the suggestion and here we provide the results of two outdoor scenes: Bear from IN2N and Floating Tree from NeRFStudio, in **FigPDF.C**. As ConsistDreamer was not evaluated on the Floating Tree scene, we only compare with IN2N in the editing tasks of this scene.
- In the "grizzly bear" task, our ProgressEditor generates fur textures similar to ConsistDreamer's, both of which are much clearer than IN2N's, and ours also supports aggressivity control. Notably, our ProgressEditor achieves comparable editing results with only 1/4 to 1/6 of ConsistDreamer's running time and fewer GPUs.
- In the "snow" task, our ProgressEditor can also provide high-quality editing results by generating snow on the floor and making the sky whiter, while the baseline IN2N generates blurred floor and leaves. In the "autumn" task, our ProgressEditor also shows the aggressivity control ability by controlling the color of the leaves.
- These results demonstrate that our approach is effective for outdoor scenes as well. We will add these and more results of outdoor scenes in the revision.
- We would also like to clarify that high levels of consistency are also crucial in style transfer tasks, especially for large-scale scenes in ScanNet++. For such scenes, each object may occur in many different views from various viewing directions, and inconsistent multi-view editing results in more blurry and gloomy colors after averaging. As shown in the visualization figures in ConsistDreamer's paper, namely Fig.1 (Van Gogh painting) and Fig.B.4 (Ablation Task (B)/(D)), inconsistency results in gloomy colors and blurred textures in style transfer tasks, especially in ScanNet++ scenes.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses and clarifications. These responses have solved most of my concerns and questions. I finally decided to improve my score to weak accept.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the positive feedback and for increasing the score. If the reviewer has any follow-up questions, we are happy to discuss them.
---
Global Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive and insightful comments:
- We propose an "interesting, novel, and reasonable" idea (Mz9K, 49ym, PcMD) to solve the instruction-guided 3D editing task, in a well-presented and illustrated way with clearly stated motivations (Mz9K).
- The proposed method generates "high-quality editing results with clear texture and precise geometry" (49ym), which are "superior" compared to the baselines (Mz9K) and "convincing to support" the effectiveness of our method (PcMD).
We address all the reviewers' concerns in each reply. We also provide the following visualizations in our PDF content:
- **FigPDF.A**: Per-view edited results of different subtasks, as a visualization of the difficulty at different $r$'s. (Mz9K)
- **FigPDF.B**: Map of organization of Fig.3. (Mz9K)
- **FigPDF.C**: Results of outdoor scenes. (Mz9K)
- **FigPDF.D**: Visualization of results before and after additional subtask $r_n$. (49ym)
- **FigPDF.E**: Visualization of the alignment between subtasks and edited scene. (PcMD)
We also reply to some commonly asked questions here.
### Mz9K-W3 / 49ym-W1. Quantitative Evaluation
- In our original submission, we primarily focused on the extensive qualitative comparisons to show the advantages of our high-quality editing results, as it still remains an open question to design a metric to evaluate the 3D editing results in a fair and complete manner.
- We provide the quantitative assessment with the following metrics: user study of overall quality ("USO"), user study of 3D consistency ("US3D"), GPT evaluation score ("GPT"), CLIP Text-Image Direction Similarity ("CTIDS"), and CLIP Direction Consistency ("CDC"). The user study was conducted with 26 participants. The GPT score is detailed below, and the CLIP-based scores are from IN2N's paper. The results are shown in the table below.
|Method|USO↑|US3D↑|GPT↑|CTIDS↑|CDC↑|Running Time↓|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
|IN2N | 51.35 | 65.45 | 45.32 | 0.0773 | 0.3260 |**0.5-1h**|
|ConsistDreamer | _68.65_ | _75.23_ | _74.40_ |**0.0912**|**0.3912**| 12-24h |
|ProgressEditor (Ours) |**87.96**|**80.23**|**81.00**| _0.0844_ | _0.3833_ | _1-4h_ |
- From the table above, we observe that our ProgressEditor consistently outperforms IN2N with large margins. Our ProgressEditor also significantly outperforms the strong baseline ConsistDreamer in two overall-quality metrics and the user study-based 3D consistency metric, while achieving comparable CLIP-based metrics, with a running time of only 1/3 of that of ConsistDreamer.
- "GPT score": We provide GPT-4o with the original video, the editing prompt, and the video generated by three methods all together with random names and in random order (to enforce a consistent scoring mechanism across methods). Then, we ask it to provide a score between 1 and 100 for each evaluating the overall quality, including (1) editing completeness and accuracy, (2) original image preservation, (3) 3D consistency, and (4) image appearance, and return the scores of all baselines as a JSON array. We repeat multiple times and take the average.
- Our GPT score can be regarded as a Monte-Carlo implementation of the recently proposed "VQAScore" [*Evaluating Text-to-Visual Generation with Image-to-Text Generation*, In ECCV'24], a metric based on a vision-language model's evaluation of the generated image, which has been shown to outperform CLIP-based scores.
- As the vision-language model is the only model that is powerful enough to understand and evaluate all the aspects, this VQA-based metric, along with our GPT score, can be viewed as a relatively complete automated quantitative measurement compared to CLIP-based scores.
Pdf: /pdf/8a2416df9087231c857a483f79c8b11f2f09599a.pdf
Dataset Source: NeurIPS_2024_submissions_huggingface
Conference Year: 2024
---
Title: Similarity-Navigated Conformal Prediction for Graph Neural Networks
Paper Decision: Accept (poster)
Summary: This paper addresses the lack of reliable uncertainty estimates in semi-supervised node classification with Graph Neural Networks.
This paper shows that nodes with the same label as the ego node play a critical role in the non-conformity scores of the ego node.
The authors propose a method to aggregate the non-conformity scores based on feature similarity and structural neighborhood, to improve the efficiency of prediction sets and the singleton hit ratio.
Strengths: 1. This paper is well-motivated, with clear motivation that nodes with high feature similarity or direct connections tend to have the same label.
2. This paper provides theoretical guarantee that the proposed method can consistently generate a smaller prediction set than basic non-conformity scores functions while maintaining the marginal coverage rate.
3. The authors provide adequate empirical analysis, including ablation studies and comparisons with state-of-the-art methods.
Weaknesses: 1. While the paper demonstrates the effectiveness of the proposed method on various datasets, it lacks a detailed analysis of the scalability / computation cost of the algorithm. It would be interesting to see how the method scales with the number of nodes and edges.
2. The focus on transductive learning in the paper limits its applicability to inductive learning scenarios, which are common in real-world classification tasks.
3. It would be interesting to see the discussion and experiments of the proposed method on heterophily graphs, where nodes with different labels are more likely to be connected.
Technical Quality: 3
Clarity: 3
Questions for Authors: How will the proposed method perform on heterophily graphs?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed the limitations on transductive settings.
N/A potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate the reviewer for the insightful and detailed comments. Please find our response below:
> **1. Scalability / Computation cost [W1]**
Thank you for your valuable feedback. The time complexity of SNAPS is primarily determined by the computation of corrected scores. In this work, we use one-hop nodes and nodes with high feature similarity to correct the ego node. In the transductive setting, this complexity applies to the entire graph. Consequently, one-hop generalization requires $\mathcal{O}(E)$ runtime, and K-NN generalization requires $\mathcal{O}(NM)$, where $E$ is the number of edges, $N$ is the number of test nodes, $M$ is the number of nodes sampled to correct the scores of test nodes, with $M\ll N$ for large graphs. Finally, the time complexity of SNAPS is $\mathcal{O}(E+NM)$.
**Time complexity of k-NN.** Calculating pairwise similarities is inherently parallelizable, which makes k-NN significantly more efficient. Additionally, there have been some approximation methods that could be used to significantly speed up the computation for large graphs, such as NN-Descent [1] that can be easily implemented under MapReduce to empirically achieve an approximate k-NN graph in $\mathcal{O}(N^{1.14})$.
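For concreteness, a minimal sketch of the score correction whose cost is analyzed above (illustrative names and dense matrices for clarity; a practical implementation would use sparse operations):

```python
import numpy as np

def snaps_scores(s, adj, knn, lam, mu):
    """Correct non-conformity scores s (n_nodes x n_classes) by mixing in the
    mean scores of one-hop neighbors (adj) and of feature-similar nodes from
    a k-NN graph (knn); adj and knn are dense 0/1 adjacency matrices."""
    def mean_agg(A):
        deg = A.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0  # isolated nodes: the aggregated term is zero anyway
        return (A @ s) / deg
    return (1.0 - lam - mu) * s + lam * mean_agg(knn) + mu * mean_agg(adj)

# Toy example: two mutually connected nodes, default weights 1/3 each.
s = np.array([[0.2, 0.8], [0.4, 0.6]])
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
corrected = snaps_scores(s, adj, adj, 1/3, 1/3)
```

With $\lambda=\mu=1/3$, the ego score and the two neighborhood averages are equally weighted, matching the default setting discussed in our other responses.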
> **2. Inductive learning [W2]**
In the context of inductive learning, exchangeability is not maintained as changes in the graph affect the calibration nodes' conformity scores [2, 3]. Therefore, our primary focus is on transductive learning. Additionally, we present experimental results in the inductive scenario using CoraML dataset, illustrated in Figure 4 of the attachment. The results demonstrate that SNAPS generally achieves **valid coverage with comparable set sizes** in the inductive setting.
> **3. Performance on heterophilous graphs [W3 & Q1]**
We thank the reviewer for this intriguing question. Following the reviewer's advice, we conduct the experiments on two common heterophilous graph benchmarks. The benchmarks' details are as follows:
| Datasets | Nodes | Features | Edges | Classes | Homophily Ratio |
| :--: |:--: |:--: |:--: |:--: |:--: |
| Chameleon | 2,277 | 2,325 | 36,101 | 5 | 0.23 |
| Squirrel | 5,201 | 2,089 | 217,073 | 5 | 0.22 |
FSGNN [4] is used as the GNN model, and we adopt the dataset splits from Geom-GCN [5], i.e. splitting nodes into 60%/20%/20% for training/validation/testing. To evaluate the performance of the CP methods, we divide the test set equally into the calibration and evaluation sets. For DAPS, we set $\lambda=0.5$. For SNAPS, we set $\lambda=0.5, \mu=0$ and $k=20$, i.e., neglecting the structural neighborhood. To construct a cosine similarity-based k-NN graph for these two heterophilous datasets, we utilize the embeddings from FSGNN. In the following tables, empirical results at $\alpha=\{0.10, 0.15\}$ are presented respectively:
At $\alpha=0.10$:
| Datasets | FSGNN Accuracy | Coverage (APS) | Coverage (DAPS) | Coverage (SNAPS) | Size (APS) | Size (DAPS) | Size (SNAPS) |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| Chameleon | 78.09±0.93 | 0.904 | 0.905 | 0.904 | 1.95 | 2.75 | **1.70** |
| Squirrel | 73.72±2.19 | 0.900 | 0.897 | 0.900 | 2.41 | 3.05 | **2.27** |

At $\alpha=0.15$:
| Datasets | Coverage (APS) | Coverage (DAPS) | Coverage (SNAPS) | Size (APS) | Size (DAPS) | Size (SNAPS) |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| Chameleon | 0.850 | 0.853 | 0.851 | 1.62 | 2.23 | **1.32** |
| Squirrel | 0.851 | 0.848 | 0.850 | 1.89 | 2.37 | **1.64** |
We can observe that SNAPS still shows **consistent superiority** over the baselines on heterophilous networks, which demonstrates its weak dependence on homophily.
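For reference, the homophily ratio reported in the benchmark table above is typically the fraction of edges whose endpoints share a label (the edge-homophily variant; exact definitions vary in the literature). A minimal sketch:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges connecting two nodes with the same label."""
    edges = np.asarray(edges)
    labels = np.asarray(labels)
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return float(same.mean())

# Toy graph: 4 nodes, two classes; edges (0,1) and (2,3) are homophilous.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
labels = [0, 0, 1, 1]
ratio = edge_homophily(edges, labels)  # -> 0.5
```

Low values such as 0.23 (Chameleon) and 0.22 (Squirrel) indicate that most edges connect nodes with different labels.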
[1] Efficient K-Nearest Neighbor Graph Construction for Generic Similarity Measures. WWW'11.
[2] DAPS: Conformal Prediction Sets for Graph Neural Networks. ICML'23.
[3] Uncertainty Quantification over Graph with Conformalized Graph Neural Networks. NeurIPS'23.
[4] Improving Graph Neural Networks with Simple Architecture Design. arxiv'21.
[5] Geom-GCN: Geometric Graph Convolutional Networks. ICLR'20.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the extensive rebuttal, addressing most of my concerns.
I raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for increasing the score
Comment: We are pleased that most of your concerns have been addressed. We sincerely thank you for raising your score.
---
Summary: This paper introduces a novel algorithm, SNAPS, which enhances conformal prediction by aggregating non-conformity scores based on feature similarity and structural connections. Extensive experiments validate SNAPS' effectiveness, demonstrating its ability to produce more compact prediction sets with higher singleton hit ratios while maintaining rigorous finite-sample coverage guarantees.
Strengths: 1. The paper is clearly written and well-structured, facilitating easy comprehension.
2. It offers a new approach by utilizing similarity measurements based on node features to implement Conformal Prediction in node classification tasks, supported by a comprehensive theoretical analysis.
3. The paper conducts thorough experiments to test the validity of the proposed method, showing consistent improvements across various metrics.
Weaknesses: 1. Computing pairwise similarity is computationally demanding, especially for large-scale graph data.
Although the author notes in Appendix B. 1 that a subset was sampled to reduce computation costs, no details are provided on the sampling method or the size of the sample.
2. According to Figure 1(b), the difference in feature similarity between identical and different labels is minor, which does not convincingly justify the necessity of using feature similarity as an additional calibration method.
3. As shown in Figure 4, the success of this method hinges on the empirical selection of $\lambda$ and $\mu$, which restricts its broader application.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Figure 1c shows that the number of nodes with the same label among the $k$-th nearest neighbors decreases as $k$ increases, but lines 272-273 claim that more nodes with the same label are selected to enhance the ego node as $k$ increases. Does the explanation in lines 272-273 conflict with the observation in Figure 1c?
2. As shown in Table 9, the experiments were conducted on homogeneous datasets. Can this approach be extended to heterogeneous datasets? Will using high-similarity nodes for calibration remain effective in such settings?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful feedback. Please find our response below:
> **1. Sampling method for large-scale dataset [W1]**
To reduce the computation burden, we utilize a random subset of the original nodes (80,000 out of 2,449,029, OGBN-Products). Despite this simple sampling strategy and only partial access to the full dataset, the proposed SNAPS still achieves superior performance without high cost.
> **2. Necessity of using feature similarity as an additional calibration method [W2]**
We apologize for the misunderstanding caused by the limited numerical precision of the reported values. The features are high-dimensional and sparse in the vector space, so the absolute differences between similarity values are small. Therefore, we multiply the results shown in Figure 1(b) by 1000 and report, for each row, the difference between the similarity of identical labels and that of different labels, as shown in the following table:
|class|0|1|2|3|4|5|6|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|0|0|25.90|31.38|36.27|35.76|19.08|20.48|
|1|14.90|0|22.83|23.68|22.33|9.54|11.26|
|2|22.32|24.76|0|33.78|29.37|20.06|17.36|
|3|9.75|8.16|16.32|0|12.26|11.82|8.74|
|4|10.08|7.64|12.75|13.10|0|9.09|4.68|
|5|15.56|17.02|25.60|34.82|31.25|0|17.40|
|6|26.01|27.79|31.96|40.80|35.90|26.45|0|
The table demonstrates that the **relative difference in feature similarity between the identical and different labels is significant**.
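Such per-class similarity statistics can be computed with a sketch like the following (our own illustration, not the code used for the paper; note the diagonal here includes trivial self-similarity, which a full analysis would exclude):

```python
import numpy as np

def class_pair_similarity(X, y):
    """Mean pairwise cosine similarity between nodes of class i and class j."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T  # all pairwise cosine similarities
    classes = np.unique(y)
    out = np.zeros((len(classes), len(classes)))
    for a, ca in enumerate(classes):
        for b, cb in enumerate(classes):
            out[a, b] = S[np.ix_(y == ca, y == cb)].mean()
    return out
```

Scaling the resulting matrix by 1000, as above, makes the small absolute differences between identical-label and different-label entries easy to read.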
> **3. Choice of hyper-parameters $\lambda$ and $\mu$ [W3]**
In the manuscript, SNAPS uses a hold-out dataset to tune the hyper-parameters, which is a common practice [1,2]. However, **there exist good default hyper-parameters for SNAPS** on most datasets, i.e., $\lambda=\mu=1/3, k=20$, whose experimental results at $\alpha=0.05$ are shown in the following:
|Dataset|Coverage (APS/RAPS/DAPS/SNAPS)|Size (APS/RAPS/DAPS/SNAPS)|SH (APS/RAPS/DAPS/SNAPS)|
|-|-|-|-|
|CoraML|0.950/0.958/0.957/0.951|2.50/2.62/2.32/**1.74**|43.09/27.34/44.52/**54.11**|
|PubMed|0.950/0.968/0.967/0.950|1.82/2.10/2.09/**1.61**|33.39/14.66/23.27/**44.11**|
|CiteSeer|0.951/0.950/0.952/0.950|2.41/2.69/2.16/**1.90**|48.53/35.37/55.40/**58.22**|
|CS| 0.950/0.953/0.954/0.950|2.04/1.31/1.33/**1.13**|64.32/66.91/74.91/**85.21** |
|Physics|0.951/0.962/0.962/0.950|1.39/1.44/1.28/**1.07**|72.44/62.22/77.65/**88.58**|
|Computers|0.950/0.950/0.951/0.950|3.01/3.04/2.30/**2.01**|29.21/9.87/42.19/**45.98**|
|Photo|0.949/0.950/0.950/0.950|1.90/1.81/1.56/**1.30**|54.86/47.27/67.57/**79.50** |
The results show that SNAPS, with default hyper-parameters, outperforms RAPS and DAPS, even though both baseline methods are tuned on a hold-out dataset.
> **4. Conflict between the explanation in lines 272-273 and the observation in Figure 1c [Q1]**
Thank you for pointing the conflict out. There is indeed the ambiguity for the explanation in lines 272-273 of the manuscript. For clarity, we rephrase the statement as follows:
Figure 4(a) and Figure 4(b) show that the performance of SNAPS significantly improves as k gradually increases from 0. This improvement occurs because the increasing nodes with the same label are selected to enhance the ego node. Subsequently, as k continues to increase, the performance of SNAPS tends to stabilize.
> **5. Extension to heterophilous datasets [Q2]**
We assume the reviewer's concern is about extension to heterophilous datasets, where nodes with different labels tend to be linked [3], since "heterogeneous datasets" usually refers instead to graphs with multiple node types. To verify the efficiency of SNAPS on heterophilous graphs, we conduct experiments on two common heterophilous graph benchmarks, as shown below:
|Datasets|Nodes|Features|Edges|Classes|Homophily Ratio|
|-|-|-|-|-|-|
|Chameleon|2,277|2,325|36,101|5|0.23|
|Squirrel|5,201|2,089|217,073|5|0.22|
For the experiment setting, we choose FSGNN [4] as the GNN model and follow the dataset splits of Geom-GCN [5], i.e. splitting nodes into 60%/20%/20% for training/validation/testing. To evaluate the performance of the CP methods, we divide the test set equally into the calibration and evaluation sets. For DAPS, $\lambda=0.5$. For SNAPS, $\lambda=0.5, \mu=0$ and $k=20$, i.e., SNAPS neglecting structural neighborhood. To construct a cosine similarity-based k-NN graph for these two heterophilous datasets, we utilize the embeddings from FSGNN. In the following tables, empirical results at $\alpha=\{0.10, 0.15\}$ are presented respectively:
At $\alpha=0.10$:
|Datasets|FSGNN Accuracy|Coverage (APS)|Coverage (DAPS)|Coverage (SNAPS)|Size (APS)|Size (DAPS)|Size (SNAPS)|
|-|-|-|-|-|-|-|-|
|Chameleon|78.09±0.93|0.904|0.905|0.904|1.95|2.75|**1.70**|
|Squirrel|73.72±2.19|0.900|0.897|0.900|2.41|3.05|**2.27**|

At $\alpha=0.15$:
|Datasets|Coverage (APS)|Coverage (DAPS)|Coverage (SNAPS)|Size (APS)|Size (DAPS)|Size (SNAPS)|
|-|-|-|-|-|-|-|
|Chameleon|0.850|0.853|0.851|1.62|2.23|**1.32**|
|Squirrel|0.851|0.848|0.850|1.89|2.37|**1.64**|
[1] DAPS: Conformal Prediction Sets for Graph Neural Networks. ICML'23.
[2] Uncertainty sets for image classifiers using conformal prediction. ICLR'21.
[3] Graph Neural Networks for Graphs with Heterophily: A Survey. arxiv'22.
[4] Improving Graph Neural Networks with Simple Architecture Design. arxiv'21.
[5] Geom-GCN: Geometric Graph Convolutional Networks. ICLR'20.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing most of my concerns. As a result, I have updated my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for increasing the score
Comment: We appreciate the valuable suggestions and feedback from the reviewer. We are also glad that most of your concerns have been addressed. Thanks again for increasing the rating!
---
Summary: The authors propose a new score function for conformal prediction on graphs. Given any baseline score, the new score is aggregated from the neighbors in the given graph and from the neighbors in a secondary kNN graph constructed from the similarity between input features. The approach is motivated by the observation that augmenting the neighborhood of a node with other nodes from the same class improves performance.
Strengths: The approach is well motivated and the proposed score is simple and intuitive which is a pro.
While I think the experimental evaluation can be improved (see weaknesses and questions), the experiments that are carried out are well described, thorough, and help to support the claims made in the paper.
Weaknesses: The calibration set size of $\min\{1000, |V_{calib} \cup V_{test}|/2\}$ seems problematic. For example, for Cora with 20 labels per class we have 20*7=140 labels for the training/validation set. This means that a calibration set with size 1000 has an order of magnitude more labels. In practice it is much more likely to use most of the labels for training rather than calibration. At the very least, results should be reported where the calibration set size is the same as the training/validation set size. Similarly, for ImageNet, equally dividing the data into a calibration set and a test set is not realistic.
In practice we either need to use fixed hyper-parameters or split the calibration set into 2 subsets: one for calibrating and one for tuning h-params. The authors do not discuss this issue (see also question 5).
The similarity graph is constructed based on a single heuristic (cosine similarity between node features). Considering other heuristics would be interesting, especially ones that also incorporate structure information and not only feature information.
Given the simplicity of the approach (which is a plus), the experimental evaluation should be strengthened (see questions).
I am willing to increase my score if the authors adequately address my questions.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Figure 1a) you show that the set size decreases as you increase the number of nodes with the same (oracle) label. This is effectively adding additional edges between nodes from the same class before the aggregation, increasing the homophily. This will likely increase the accuracy of the underlying model which non-surprisingly leads to reduced set size. How does the accuracy change if you e.g. take the argmin of the aggregated APS scores as a prediction or e.g. do vanilla GNN prediction on the augmented graph?
2. CF-GNN (Huang et al., 2023b) can in principle learn to do a similar aggregation to the one your propose. Can you please compare with them?
3. How does the performance of SNAPS (and the baselines) change as you vary the calibration set size? Importantly, also for small (realistic) sizes.
4. How does the performance of SNAPS (and the baselines) change as you vary the significance level $\alpha$?
5. What are the optimal h-params for different datasets and is there a good default value that works for most datasets? Relatedly, how does Figure 4c and 4d look like for other datasets?
6. Have you considered other similarity metrics?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The approach is likely to only work for graphs that have homophily (similar to DAPS). While this often holds for graphs of interest in practice, clearly highlighting this as a limitation would be appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We deeply appreciate the valuable comments and will incorporate these suggestions into the final version. We are certain they will substantially improve the presentation of our work. Please find our response below:
> **1. Calibration set size [W1 & Q3]**
Here, we provide an analysis of the effect of various calibration set sizes:
1. **Same calibration set size as training set.** We conduct experiments where the calibration set size is the same as the training set size (20 per class). Moreover, the calibration set is equally split into two sets: one for tuning h-params of CP methods and one for conformal calibration. SNAPS employs fixed h-params, i.e., $\lambda=\mu=1/3$ (A detailed analysis of fixed h-params for SNAPS can be found in Response 2 [W2 & Q5]). Here are the average results on 7 datasets at $\alpha=0.05$:
|(APS/RAPS/DAPS/SNAPS)|Size|SH|
|-|-|-|
|Average|2.15/2.14/1.86/**1.54**|49.41/37.66/55.07/**65.10**|
The detailed results can be found in Table 1 of the attachment.
2. **Small calibration set size.** The results for the average set size are shown in the following table:
|Num.|50|100|200|300|400|
|-|-|-|-|-|-|
|APS|2.92|2.50|2.43|2.42|2.39|
|RAPS|3.12|2.76|2.35|2.33|2.26|
|DAPS|2.84|2.49|2.06|2.07|1.96|
|SNAPS|**2.02**|**1.76**|**1.72**|**1.71**|**1.70**|
The results above both show that **SNAPS consistently outperforms other methods across different calibration set sizes**.
In Figure 1 from the attachment, more results on different datasets can be found.
For the vision dataset ImageNet, we provide the average results across different models for SNAPS and APS in Table 2 of the attachment.
Despite the small size of the calibration set, SNAPS still outperforms APS for classification problems.
> **2. Choice of hyper-params [W2 & Q5]**
**Optimal h-params.** In the manuscript, we use a hold-out dataset to tune the h-params, which is a common practice [1]. Here, we report the mean of the optimal h-params for SNAPS on different splits of each dataset.
|Datasets|CoraML|PubMed|CiteSeer|
|-|-|-|-|
|$\lambda$|0.32±0.19|0.24±0.22|0.39±0.21|
|$\mu$|0.44±0.13|0.51±0.24|0.27±0.19|
**Good default hyper-params.** Moreover, there exist good default hyper-params for SNAPS on most datasets, i.e., $\lambda=\mu=1/3$, which indicates that three components of SNAPS are equally proportioned. The experiment results supporting this conclusion can be found in Response [W1 & Q3].
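To make the role of the default weights $\lambda=\mu=1/3$ concrete, here is a minimal sketch of an equally weighted three-component score aggregation in the spirit of SNAPS. The function name, array shapes, and mean-style neighborhood aggregation are our illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def snaps_aggregate(scores, adj, sim, lam=1/3, mu=1/3):
    """Convex combination of a node's own scores with the averages over its
    structural neighbors (adj) and its feature-similarity neighbors (sim).

    scores: (N, C) base non-conformity scores per node and class.
    adj, sim: (N, N) binary adjacency matrices.
    """
    deg_adj = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    deg_sim = sim.sum(axis=1, keepdims=True).clip(min=1.0)
    neigh = (adj @ scores) / deg_adj   # structural neighborhood average
    feat = (sim @ scores) / deg_sim    # similarity neighborhood average
    return (1.0 - lam - mu) * scores + lam * neigh + mu * feat
```

With `lam = mu = 1/3`, the node's own score and the two neighborhood averages contribute equally, which is the "equally proportioned" default discussed above.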
**How do Figures 4c and 4d look for other datasets?** We provide experimental results for other datasets in Figure 3 of the attachment.
> **3. Other methods for similarity graph construction [W3 & Q6]**
To incorporate structural information, we can use self-supervised learning to obtain embeddings of nodes [2]. Then, we use the cosine similarity between node embeddings to construct the similarity graph. To validate this method, we conduct experiments on the self-supervised model GraphACL [2] and follow its experimental setup. The experimental results are as follows:
|Datasets (SNAPS with Original/GraphACL)|$\alpha=0.05$|$\alpha=0.10$|
|-|-|-|
|CoraML|**1.68**/1.71|1.31/**1.28**|
|PubMed|1.62/1.62|1.35/**1.31**|
|CiteSeer|1.84/**1.68**|1.39/**1.23**|
The results demonstrate that the **similarity graph constructed based on self-supervised learning is applicable to SNAPS**.
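The cosine-similarity k-NN graph construction described above can be sketched as follows; the embeddings (whether raw features or self-supervised outputs such as GraphACL's) and the choice of `k` are placeholders:

```python
import numpy as np

def knn_similarity_graph(emb, k=20):
    """Binary k-NN adjacency built from cosine similarity of node embeddings."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cos = normed @ normed.T
    np.fill_diagonal(cos, -np.inf)            # exclude self-loops
    nbrs = np.argsort(-cos, axis=1)[:, :k]    # indices of the k most similar nodes
    adj = np.zeros(cos.shape)
    adj[np.arange(emb.shape[0])[:, None], nbrs] = 1.0
    return adj
```

Note the resulting adjacency is not necessarily symmetric, since k-nearest-neighbor relations are directed.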
> **4. Effect of the aggregated APS scores / the augmented graph on the prediction accuracy [Q1]**
Thank you for posing this insightful question. We take the argmin of the aggregated APS scores as a prediction and train a vanilla GCN on the augmented graph. The prediction accuracy for these methods is as follows:
|Methods/Datasets|CoraML|PubMed|CiteSeer|
|-|-|-|-|
|vanilla GCN|81.48|77.40|83.90|
|argmin of DAPS|81.25|78.70|83.88|
|argmin of SNAPS|81.83|79.43|84.08|
|augmented graph|77.50|75.89|74.31|
The results indicate that **SNAPS slightly enhances prediction accuracy**, which may be one reason for SNAPS's effectiveness.
> **5. Comparison with CF-GNN [Q2]**
To compare with CF-GNN, we randomly select 20 nodes per class for training/validation and set the calibration set size to 1,000 [3]. The APS score serves as the basic non-conformity score. The average results are calculated from 10 GCNs, each with 100 conformal splits. Moreover, we introduce a metric, i.e., **Time**, to evaluate the running time of each trial. As shown in the table below, SNAPS outperforms CF-GNN in both metrics.
|Datasets (CF-GNN/SNAPS)|Size with $\alpha=0.05$|Size with $\alpha=0.1$|Time|
|-|-|-|-|
|CoraML|2.60/**1.68**|1.68/**1.31**|142s/**0.73s**|
|PubMed|2.13/**1.62**|1.86/**1.35**|148s/**0.98s**|
|CiteSeer|3.07/**1.84**|1.96/**1.39**|124s/**0.71s**|
> **6. Different significance levels $\alpha$ [Q4]**
Here are the average set sizes across 10 GCNs, each with 100 conformal splits, for CoraML at different significance levels:
|Method \ $\alpha$|0.05|0.07|0.09|0.10|0.12|0.14|0.16|
|-|-|-|-|-|-|-|-|
|APS|2.42|2.10|1.90|1.81|1.70|1.60|1.52|
|RAPS|2.16|1.81|1.58|1.43|1.28|1.18|**1.10**|
|DAPS|1.92|1.66|1.47|1.44|1.32|1.25|1.20|
|SNAPS|**1.68**|**1.48**|**1.36**|**1.30**|**1.23**|**1.18**|1.14|
The results demonstrate that SNAPS outperforms other baselines at most significance levels. Additionally, we provide detailed results on PubMed and CiteSeer in Figure 2 of the attachment.
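The sweep over $\alpha$ above follows standard split conformal prediction: recompute the calibration quantile $\eta$ for each $\alpha$ and rebuild the prediction sets. A minimal sketch, with random scores standing in for real APS/SNAPS scores:

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0))

def prediction_sets(test_scores, eta):
    """A class enters the set when its non-conformity score is at most eta."""
    return test_scores <= eta
```

As $\alpha$ grows, $\eta$ shrinks and so does the average set size, which matches the monotone trend in the table above.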
> **7. Limitation to homophily**
We clarify that the proposed method does not depend on homophily. Specifically, we conduct experiments on heterophilous datasets and SNAPS still outperforms baseline methods under low homophily ratio. Details can be found in response 2 to Reviewer 8yy7.
[1] DAPS: Conformal Prediction Sets for Graph Neural Networks. ICML'23.
[2] Simple and Asymmetric Graph Contrastive Learning without Augmentations. NeurIPS'23.
[3] Uncertainty Quantification over Graph with Conformalized Graph Neural Networks. NeurIPS'23.
---
Rebuttal Comment 1.1:
Title: Updated score
Comment: Most of my concerns have been addressed. I have increased my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for raising your score
Comment: Thank you for checking our rebuttal and raising your score. We will incorporate the new results and explanations into the final version appropriately. Sincerely thanks for your valuable time on this paper! | Summary: The authors apply conformal prediction to graph neural networks by aggregating the non-conformity scores based on both one-hop neighbors and feature similarity. The framework is verified through various experiments on graph ML benchmark datasets, where it's shown to generate smaller prediction sets and higher singleton hit ratio (i.e. only the correct answer in the set).
Strengths: This paper is clearly presented and the results are fairly intuitive. It builds upon DAPS by adding a feature similarity term in eq (4). The experimental section is comprehensive in terms of number of datasets, ablation studies and parameter analysis.
Weaknesses: One suggestion is to add a discussion on the assumptions in Proposition 2 and a proof sketch (if possible). For example, what does "$\Delta$ very small" really mean and how reasonable is it?
Technical Quality: 3
Clarity: 3
Questions for Authors: I think it's possible that this method could more strongly outperform DAPS in heterophilous networks due to the addition of the term that doesn't depend on neighbors. Is there any chance the authors have tested their methods on a heterophilous network or observed any dependency of the performance on the degree of homophily?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition and valuable suggestions. Please find our response below:
> **1. Discussion regarding the assumption in Proposition 2 [W1]**
Thank you for the great suggestion. Here is our discussion regarding the assumption in Proposition 2.
Given a data pair $(\boldsymbol{x},y)$ and a model $f(\boldsymbol{x})$, we define a predicted probability estimator as $\pi(\boldsymbol{x})\_y$, where $\pi(\boldsymbol{x})\_y:=\sigma(f(\boldsymbol{x}))$ represents the predicted probability for label $y$ and $\sigma(\cdot)$ is an activation function, such as softmax. Let $\boldsymbol{S}$ denote APS scores of nodes, then we have
$$\begin{aligned}\boldsymbol{S}\_{ui}=\sum\_{j=1}^{|\mathcal{Y}|}\pi(\boldsymbol{x}\_u)\_j\mathbb{I}[\pi(\boldsymbol{x}\_u)\_j>\pi(\boldsymbol{x}\_u)\_i]+\xi\cdot \pi(\boldsymbol{x}\_u)\_i,\end{aligned}$$ where $\boldsymbol{S}\_{ui}\in[0,1]$ is the score corresponding to node $u$ with label $i$, and $\xi\in[0,1]$ is a uniformly distributed random variable. Let $E\_k[\boldsymbol{S}\_{ui}]$ be the average of scores corresponding to label $i$ of nodes whose ground-truth label is $k$. Suppose $T$ is the number of nodes whose ground-truth label is label $k$.
**a.** If $\pi(\boldsymbol{x}\_u)\_i$ is the largest predicted probability for node $u$, then $E\_k[\boldsymbol{S}\_{ui}]=E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i]=E\_k[\pi(\boldsymbol{x}\_u)\_k]+E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i]-E\_k[\pi(\boldsymbol{x}\_u)\_k]$. Suppose the number of nodes satisfying this case is A.
**b.** Otherwise, $E\_k[\boldsymbol{S}\_{ui}]\geq E\_k[\pi(\boldsymbol{x}\_u)\_k]+E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i]$. Suppose the number of nodes satisfying this case is B, where $A+B=T$. Therefore, summing up $E\_k[\boldsymbol{S}\_{ui}]$ for both cases, we have
$$A\cdot E\_k[\boldsymbol{S}\_{ui}]+B\cdot E\_k[\boldsymbol{S}\_{ui}]\geq (A + B)\cdot (E\_k[\pi(\boldsymbol{x}\_u)\_k]+E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i])-A\cdot E\_k[\pi(\boldsymbol{x}\_u)\_k], $$
i.e.,
$$E\_k[\boldsymbol{S}\_{ui}]\geq E\_k[\pi(\boldsymbol{x}\_u)\_k]+E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i]-\frac{A}{T}\cdot E\_k[\pi(\boldsymbol{x}\_u)\_k].$$
Let $\epsilon=\frac{A}{T}\cdot E\_k[\pi(\boldsymbol{x}\_u)\_k]$, which reflects the average error of misclassifying label $k$ as label $i$. Let $\eta$ be $1-\alpha$ quantile of APS scores with a significance level $\alpha$. Then we have
$$\begin{aligned}\eta=(1-\alpha)\frac{1}{|\mathcal{Y}|}\sum\_{j=1}^{|\mathcal{Y}|} E\_j[\pi(\boldsymbol{x}\_u)\_j].\end{aligned}$$ We can set $\eta - E\_k[\boldsymbol{S}\_{ui}]=\Delta$. If $\Delta\leq 0$, then $\eta \leq E\_k[\boldsymbol{S}\_{ui}]$, which means $\hat{\boldsymbol{S}}\_{vi}>\eta$ holds in Subsection A.2 of the manuscript. If $\Delta > 0$, then we have
$$0<\Delta=\eta - E\_k[\boldsymbol{S}\_{ui}]
\leq \eta - E\_k[\pi(\boldsymbol{x}\_u)\_k]-E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i]+\epsilon.$$
The upper bound of $\Delta$ is given here. In most cases, $\eta\leq(1-\alpha)E\_k[\pi(\boldsymbol{x}\_u)\_k]$ or $\eta\approx(1-\alpha)E\_k[\pi(\boldsymbol{x}\_u)\_k]$, and then $0<\Delta<-\alpha E\_k[\pi(\boldsymbol{x}\_u)\_k]-E\_k[\xi\cdot \pi(\boldsymbol{x}\_u)\_i]+\epsilon$. Since $\epsilon$ reflects the average error of misclassifying label $k$ as label $i$, '$\Delta$ very small' is reasonable.
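As a concrete companion to the APS score definition at the start of this response, here is a minimal vectorized sketch of computing $\boldsymbol{S}_{ui}$ for every node and candidate label (our own illustration, not the authors' code):

```python
import numpy as np

def aps_scores(probs, rng=None):
    """APS non-conformity scores S_ui for every node u and candidate label i.

    probs: (N, C) array of predicted probabilities pi(x_u).
    S_ui sums all probabilities strictly larger than pi(x_u)_i, then adds
    a uniform random fraction xi of pi(x_u)_i itself.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.random((probs.shape[0], 1))            # one xi per node
    mask = probs[:, None, :] > probs[:, :, None]    # mask[u, i, j] = p_uj > p_ui
    larger = (mask * probs[:, None, :]).sum(axis=-1)
    return larger + xi * probs
```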
> **2. Performance on heterophilous networks and dependency of the performance on the degree of homophily [Q1]**
We thank the reviewer for this intriguing question. To analyze the performance of SNAPS on heterophilous networks, we conduct the experiments on two common heterophilous graph benchmarks. The benchmarks' details are as follows:
| Datasets | Nodes | Features | Edges | Classes | Homophily Ratio |
| :--: |:--: |:--: |:--: |:--: |:--: |
| Chameleon | 2,277 | 2,325 | 36,101 | 5 | 0.23 |
| Squirrel | 5,201 | 2,089 | 217,073 | 5 | 0.22 |
We choose FSGNN [1] as the GNN model. We adopt the dataset splits of Geom-GCN [2], i.e. splitting nodes into 60%/20%/20% for training/validation/testing. To evaluate the performance of the CP methods, we divide the test set equally into the calibration and evaluation sets. For DAPS, we set $\lambda=0.5$. For SNAPS, we set $\lambda=0.5, \mu=0$ and $k=20$, i.e., neglecting the structural neighborhood. To construct a cosine similarity-based k-NN graph for these two heterophilous datasets, we utilize the embeddings from FSGNN. In the following tables, empirical results at $\alpha=\{0.10, 0.15\}$ are presented respectively:
At $\alpha=0.10$:
|Datasets|FSGNN Accuracy|Coverage (APS)|Coverage (DAPS)|Coverage (SNAPS)|Size (APS)|Size (DAPS)|Size (SNAPS)|
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|Chameleon|78.09±0.93|0.904|0.905|0.904|1.95|2.75|**1.70**|
|Squirrel|73.72±2.19|0.900|0.897|0.900|2.41|3.05|**2.27**|
At $\alpha=0.15$:
|Datasets|Coverage (APS)|Coverage (DAPS)|Coverage (SNAPS)|Size (APS)|Size (DAPS)|Size (SNAPS)|
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|Chameleon|0.850|0.853|0.851|1.62|2.23|**1.32**|
|Squirrel|0.851|0.848|0.850|1.89|2.37|**1.64**|
We can observe that SNAPS still shows **consistent superiority** over the baselines on heterophilous networks, which demonstrates its weak dependence on homophily.
[1] Improving Graph Neural Networks with Simple Architecture Design. arxiv'21.
[2] Geom-GCN: Geometric Graph Convolutional Networks. ICLR'20.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I have also read the other reviews and comments, and will keep my positive score.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive score and recognition
Comment: Thank you for your recognition and for keeping the positive score. We are glad that our responses addressed these concerns, improving the quality of this work. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are certain that they will make our work more complete. We are glad and encouraged that reviewers find the method is **well-motivated** (aWFN, xCn2) and **theoretical** (3cJc, xCn2), our method is **simple** (aWFN) and **effective** (8yy7, 3cJc, xCn2), and the experiments are **extensive** (8yy7, aWFN, 3cJc, xCn2). Besides, reviewers recognize that the writing is **easy to follow** (8yy7, 3cJc). We provide point-by-point responses to all reviewers' comments and concerns.
**Performance of SNAPS on heterophilous graphs**. We note that all reviewers are interested in the performance of SNAPS on heterophilous graphs. Thus, we add experiments on two heterophilous datasets and provide the experimental results in the responses below. For example, on the dataset Squirrel, SNAPS reduces the average set size from 1.89 (APS) to 1.64 when $\alpha=0.15$. The results demonstrate that SNAPS still outperforms other baseline methods on heterophilous graphs.
In summary, our method may reduce the set size for homophilous and heterophilous graphs. We will present the analysis in the final version.
Pdf: /pdf/2d54faade735bb87c4caa4144faebaac0ac052b7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Scaling Retrieval-Based Language Models with a Trillion-Token Datastore | Accept (poster) | Summary: The paper introduces a substantial datastore named MASSIVEDS for retrieval-in-context language models (RIC-LM). It details a comprehensive construction pipeline, with a notable deviation from the traditional datastore construction sequence. Specifically, it places indexing and retrieval at the initial stage, followed by the merging and filtering of retrieved results. The evaluation indicates that RIC-LM with MASSIVEDS, outperforms standalone language models in knowledge-intensive tasks. Additionally, the authors analyze the scaling behavior of the datastore under various configurations.
Strengths: 1. The exploration of how datastore configurations impact retrieval-based language models is both intriguing and significant. Some of the findings provide valuable contributions to the advancement of these systems.
2. The paper is well-written and accessible, with each step of the implementation thoroughly explained and presented.
3. The proposed datastore holds potential for related research in this field.
Weaknesses: 1. The paper attempts to draw critical and general conclusions, yet the scope and depth of their experiments lack the robustness required to fully support these assertions. The study focuses on the impact of various datastore configurations, such as size, filters, and decontamination strategies. Evaluations are carried out on OpenQA tasks using a retriever specifically trained for QA. However, several key questions remain unaddressed. For example, it is unclear whether the findings would remain consistent using a single data source, whether the conclusions apply to specific tasks that are distinct from broader tasks like QA, and how the interplay between different types of retrievers and language models might affect the results. Without exploring these areas, claiming a definitive trend is premature.
2. Regarding the proposed MASSIVEDS, there is a lack of comparative analysis with other existing datastores [1, 2]. Merely assembling a large and diverse datastore does not inherently indicate a significant contribution or guarantee enhanced performance. Evaluating the proposed datastore against these established frameworks could help justify claims of superiority or uniqueness.
3. Several major assertions may be incorrect or lack sufficient justification. For instance:
- The claim that indexing the entire datastore initially reduces computational overhead is debatable. In practice, dense vectors can be stored and reused, and there is no need to re-index any data twice. Therefore, the upper bound of computational cost remains indexing the entire datastore. This aspect could be also alleviated significantly by considering alternative indexing strategies [3, 4].
- The overlap between the datastore and the downstream evaluation dataset raises concerns about data leakage. The presence of RedPajama data in MASSIVEDS (Table 2), and its use in downstream evaluation (line 191) necessitates a clearer delineation of this overlap and its impact on the results. Clarifying this could address the validity of the claim that "Retrieval is strictly beneficial for language modeling" (line 214).
- The use of models such as TinyLlama and Llama-7B to conclude that "retrieval effectively closes the performance gap between small and large models" (line 225) appears inconclusive in the context of current advancements in LLM. Reevaluating this claim with additional models might provide a clearer picture.
Typos:
- line193 into into -> into
- line726: exmaples -> examples
[1] Soldaini, Luca, et al. "Dolma: An open corpus of three trillion tokens for language model pretraining research"
[2] Gao, Leo, et al. "The pile: An 800gb dataset of diverse text for language modeling"
[3] Li, Haitao, et al. "Constructing tree-based index for efficient and effective dense retrieval"
[4] Zhou, Jiawei, et al. "Semi-Parametric Retrieval via Binary Token Index"
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Could you elaborate on how few-shot exemplars are selected for QA tasks?
2. For downstream evaluation, how is the language model configuration determined? Is there any inherent randomness in the LM inference process, and if so, what measures are taken to mitigate its effects?
3. Following W3, have the authors verified whether the downstream evaluation dataset overlaps with the datastore? How is this ensured, and how do you analyze the extent to which observed improvements might be influenced by potential data leakage?
4. Given that many of the stages within MASSIVEDS, such as parallel indexing and aggregating retrieval results from different sources, are common practices in retrieval pipelines, how do you substantiate that the contribution of MASSIVEDS is significant and not merely an assembly of various data segments?
5. Suggestion: given the emphasis on streamlining the datastore construction pipeline, proposing multiple variants of the datastore setup (such as variations in source and chunk size) might be more advantageous than focusing solely on creating the largest and most diverse datastore. This could facilitate more comprehensive analyses and support broader conclusions for future works in this domain.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: In the limitation section, the authors did acknowledge their limited analysis concerning the systematic range of retrievers and language models, which is indeed crucial. However, it could be problematic to draw broad conclusions on the datastore scaling trend from such a restricted dataset and limited scope of experiments. A more comprehensive analysis is necessary, or alternatively, the conclusions should be more narrowly defined to ensure precision and avoid misleading interpretations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that our work makes valuable contributions to the advancement of these systems!
**Weakness 1.** It remains uncertain if the findings would hold with a single data source, whether they apply to specific tasks beyond general ones like QA, and how interactions between various retrievers and language models might influence the results.
**A1.** We will carefully adjust our claims in our final draft to highlight our choice of models and retrievers, and that our downstream tasks focus on QA. Given resource constraints, we chose to focus on the most widely adopted RAG designs and evaluation setups; we agree that there are many more questions to be studied, and we hope that our open-source data and code will facilitate such work in the future.
On the specific comments:
(1) It is hard to scale single data sources like Wikipedia to the trillion token regime, so we focus on multi-domain web-scale data; however, we compare with single in-domain data sources in Table 3.
(2) We agree it would be good to study more downstream tasks beyond QA, and we will emphasize this in the discussion. We note that it is common in prior work (RePlug, Shi, et al; Atlas, Izacard, et al.) to focus on QA as well, and that we are the first to study downstream tasks at large datastore scales.
(3) We have added results on new LMs (Llama3, Olmo, and Pythia families) and new retrievers (DRAGON and GTR) in Figure R1 and Table R1 in the general reply, respectively; these results are consistent with our submission.
---
**Weakness 2.** There is a lack of comparative analysis and comparison of MASSIVEDS with other existing datastores [1, 2] (Dolma and Pile).
**A2.** We would like to clarify that a datastore, an indexed resource ready for retrieval, differs from a pretraining corpus, which contains only raw data, like Dolma and Pile. Constructing a datastore from a pretraining corpus involves data cleaning, chunking, embedding, and indexing, which take substantial effort, and as such there are no comparable open-source datastores to MassiveDS.
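The construction steps named above (chunking, embedding, indexing) can be sketched end-to-end with toy components. This is purely an illustrative outline, not the MassiveDS implementation: the bag-of-words `embed` function stands in for a trained dense encoder such as Contriever, and a real system would use an approximate-nearest-neighbor index (e.g. FAISS) rather than exact search:

```python
import numpy as np

def chunk(text, size=6):
    """Split a document into fixed-size word chunks (passages)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_vocab(passages):
    """Vocabulary shared by indexing and querying."""
    return sorted({w for p in passages for w in p.lower().split()})

def embed(passages, vocab):
    """Toy bag-of-words embedder standing in for a trained dense retriever."""
    pos = {w: i for i, w in enumerate(vocab)}
    vecs = np.zeros((len(passages), len(vocab)))
    for r, p in enumerate(passages):
        for w in p.lower().split():
            if w in pos:
                vecs[r, pos[w]] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.where(norms == 0.0, 1.0, norms)

def search(query_vec, index_vecs, k=2):
    """Exact inner-product search over the flat 'datastore' index."""
    scores = index_vecs @ query_vec
    top = np.argsort(-scores)[:k]
    return top, scores[top]
```

The key point is that embedding is done once per passage; only the cheap `search` step runs per query.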
That said, the reviewer raises an interesting point about comparing different datastore sources. We ran an additional experiment using datastores constructed from DCLM-baseline and FineWeb-Edu (which have been shown to perform better than Pile and Dolma). As Table R2 in the general reply shows, MassiveDS achieves comparable performance.
---
**Weakness 3.** The claim that indexing the entire datastore initially reduces computational overhead is debatable. This aspect could be alleviated significantly by alternative indexing strategies [3, 4].
**A3.** The reviewer is correct that embedding should be done once to optimize efficiency; this is also the core of our pipeline. However, there are additional challenges: e.g., the retrieval process is I/O bound and often takes significant time to load passages from a large document pool. Also, using existing software like FAISS requires building separate indices for different sets of dense embeddings, significantly increasing storage demands for various configurations. Our open-source pipeline provides an efficient, ready-to-use solution that overcomes these challenges and that can be used as a foundation for future research. We appreciate the reviewer’s point and will expand our discussion of this in the paper.
The alternative indexing strategies cited by the reviewer do not fully overcome the challenges: [3] speeds up the search process but doesn't lower the cost of index construction; [4] improves datastore construction efficiency, but needs additional training and doesn't avoid redundant storage for different configurations. We will discuss them in our next revision.
---
**Weakness 4 and Q3.** The overlap between the datastore and the downstream evaluation dataset raises concerns about data leakage.
**A4.** We agree that data leakage is a potential concern. In Appendix B.1, we describe the strategies we used to avoid it. Specifically, we applied a strict data decontamination method to remove documents that overlap with the evaluation samples, which is stricter than common practice such as Dolma and RedPajama. In Figure 2(b), we also assessed perplexity (PPL) scaling on M2D2, which is not part of MassiveDS, and found scaling curves similar to those using decontaminated RPJ.
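Decontamination by document removal can be illustrated with a word-level n-gram overlap check. The 13-gram window and exact matching here are illustrative choices, not necessarily the exact procedure in Appendix B.1:

```python
def ngrams(text, n):
    """Set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def decontaminate(docs, eval_samples, n=13):
    """Drop any datastore document sharing an n-gram with an eval sample."""
    banned = set()
    for s in eval_samples:
        banned |= ngrams(s, n)
    return [d for d in docs if not ngrams(d, n) & banned]
```

At trillion-token scale the banned set would be held in a hashed structure (e.g. a Bloom filter) rather than a plain Python set, but the filtering logic is the same.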
---
**Weakness 5.** The author should reevaluate the claim with additional models.
**A5.** We thank the reviewer for suggesting this. We have added more results with different LMs in Figure R1 in our general reply and will include them in our paper.
---
**Q1.** How few-shot exemplars are selected for QA tasks?
**A6.** Following the popular evaluation repository LM-Evaluation-Harness, we use few-shot examples randomly sampled from the development set, which do not overlap with test samples.
---
**Q2.** How is the language model configuration determined? Is there any inherent randomness?
**A7.** We follow LM-Evaluation-Harness, which uses greedy generation for TriviaQA and NQ, and log-likelihood for MMLU and MedQA. Therefore, there is no randomness in the inference. We’ll clarify this in the paper.
---
**Q3.** (see Weakness 4)
---
**Q4.** How do you substantiate that the contribution of MASSIVEDS is significant and not merely an assembly of various data segments?
**A8.** Our primary contribution is enabling the scientific study of >1T-token RAG scaling on an academic budget, which extends beyond merely assembling the datastore and differs from previous systems that aimed to enhance search speed. Our identification of the commutability of different operations and the strategic ordering of these operations are crucial for facilitating accessible datastore scaling studies. Beyond this, we highlight our other contributions in presenting the first datastore scaling trends and analyses on various datastore design factors.
We thank the reviewer for their valuable suggestions and we’ll update our draft accordingly.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Dear Authors,
Thank you for your detailed response. I appreciate the efforts to address my initial concerns, and I have accordingly updated my review score. However, I still have some follow-up questions:
1. What are the critical factors to consider when expanding a datastore with diverse data sources to ensure performance improvements in RAG?
2. Regarding the application of MASSIVEDS to other complex tasks, would you recommend maintaining the same datastore while adapting the retriever and language model to be more task-specific?
3. Could additional experiments be conducted on TriviaQA, using a merged corpus such as <MedQA corpus + TriviaQA corpus>, to assess whether such integration enhances performance?
---
Reply to Comment 1.1.1:
Title: Response to the follow-up questions
Comment: We thank the reviewer for raising the score, which encourages us. Here are some discussions on the follow-up questions:
**Q1.** What are the critical factors to consider when expanding a datastore with diverse data sources to ensure performance improvements in RAG?
**A1.** This is a great question. In Section 5.2, we visualize the source distribution of retrieved results in Figure 3 and compare the performance of the multi-domain MassiveDS and single-domain datastores (Table 3). Our results show that the retriever tends to retrieve from relevant sources, and having more OOD data in the datastore does not have a negative impact on performance. Based on these observations, we hypothesize that having useful information in the datastore is the most critical factor. Therefore, our rule of thumb for data source selection is to **maximize the chance that there exists data in the datastore that can provide helpful information for future queries**. In other words, data sources that could potentially contain useful information for future queries are all desirable for better performance.
Based on the above principle, we manually selected two types of data sources:
The first type is general web data, such as data from CommonCrawl, which is a large and diverse data source that potentially covers various topics that a user may query at inference time.
The second type is domain-specific data, which is selected based on our prior knowledge of the evaluation domain. For example, we intentionally added more scientific data, such as pes2o scientific papers, math datasets, and biomedical data, to the datastore because we want the RAG system to perform better on scientific benchmarks such as MMLU and MedQA.
In summary, our data source selection was done empirically and we believe that exploring automatic data selection methods for datastore construction is a promising future direction. For instance, one potential follow-up work could involve training classifiers for source-level or document-level data selection to reduce the size of the datastore while maintaining its effectiveness for targeted tasks. We will include these discussions in the updated paper.
**Q2.** Regarding the application of MASSIVEDS to other complex tasks, would you recommend maintaining the same datastore while adapting the retriever and language model to be more task-specific?
**A2.** As discussed in A1, when adapting to new tasks, we recommend optimizing the datastore composition to include more data sources that are potentially helpful for these new tasks. Prior knowledge about the task distribution can help guide targeted data selection. For example, you may want to include more code data when building a datastore for coding tasks, such as HumanEval.
**Q3.** Could additional experiments be conducted on TriviaQA, using a merged corpus such as <MedQA corpus + TriviaQA corpus>, to assess whether such integration enhances performance?
**A3.** We would like to clarify that including more data in the datastore is likely, but not guaranteed, to improve performance. However, you can still try including everything in a datastore, because it is robust to OOD data, provided there are no storage or computational constraints on the datastore size. To illustrate this, we conducted additional experiments comparing the performance of different datastore combinations using the setup suggested by the reviewer. The results are shown in the table below.
| Datastore | LM-only | Wikipedia | Wikipedia + C4 | Wikipedia+PubMed | MassiveDS |
|---------|------|---|-------|---------|------|
| TQA | 64.1 | 72.6 | 75.8 | 72.6 | 77.0 |
As shown in the table, adding a new data source that is a hybrid of OOD data and potentially helpful data, such as C4, can further improve the performance of the in-domain datastore. Meanwhile, including a data source that contains only OOD data, such as PubMed, neither improves nor decreases TQA performance. However, we note that having PubMed in the datastore is helpful for biomedical tasks. Therefore, we recommend increasing the diversity of the datastore to enhance performance on both single and multiple tasks. We will clarify that the data sources should still be chosen carefully, with consideration of which sources would be potentially helpful to the targeted tasks.
We thank the reviewer for the insightful discussion. We are happy to follow up if the reviewer has any other questions! | Summary: This paper studies the effect of scaling datastores for retrieval-based language models. A trillion-token datastore, MassiveDS, is constructed and then filtered to remove contaminated and duplicate documents. A distributed pipeline is proposed to index and retrieve from MassiveDS with a modest compute budget. Evaluation on language modeling tasks and downstream QA tasks shows that scaling the datastore brings clear improvements, and careful datastore filtering is crucial for model performance.
Strengths: 1. This paper studies an important research question: the scaling of datastores for retrieval-based LMs.
2. This paper open-sources the 1.4T token datastore for retrieval-based LMs, which consists of diverse domains and is carefully filtered. This datastore could be a valuable resource for future retrieval-based LM studies.
3. Evaluation on multiple tasks demonstrates the importance of datastore scaling and highlights some interesting questions, such as how to improve retrieval performance and how to filter datastores.
Weaknesses: 1. The claim of proposing the 'largest' datastore could be reconsidered. For LM pre-training, there are larger datastores like RedPajama-v2 with 30T tokens, and for IR, there are corpora like ClueWeb22 with 16T tokens. The proposed MassiveDS is mainly sourced from the RedPajama, making it similar to other existing collections.
2. The proposed distributed index and retrieval pipeline is interesting but seems inflexible. How does it compare with existing search engine frameworks like ElasticSearch and Weaviate?
3. This paper lacks an analysis of the retriever. Contriever-msmarco is used, whose training data may differ from MedQA and MMLU. Considering the limited generalizability of dense retrievers (as shown by the BEIR paper), stronger retrievers like those in MTEB could be used.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. What is the recall rate of the retriever for different datastore sizes?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The limitations section is included and adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for acknowledging the importance of our research question and the value of our findings and open-sourced resources! We would like to address the reviewer’s concerns below:
**Weakness 1.** The claim of proposing the 'largest' datastore could be reconsidered, as there are other pretraining corpora that exceed even 10T tokens.
**A1.** We would like to clarify that a datastore is different from a pretraining corpus: by “datastore”, we refer to an index that is ready for retrieval [1], while a pretraining corpus contains only raw data. Building a datastore on an existing pretraining corpus requires further data cleaning, chunking, embedding, and indexing over the raw data, which is a non-trivial effort for the retrieval community. It has been challenging to conduct datastore scaling research at this scale before, and existing open-sourced datastores are much smaller than MassiveDS, as shown in Table 1. Although there are pretraining corpora that are larger than our datastore in terms of raw text, we are the first to construct and open-source a datastore with over 1 trillion tokens.
[1] Asai, Akari, et al. "Reliable, adaptable, and attributable language models with retrieval."
---
**Weakness 2.** The proposed distributed index and retrieval pipeline is interesting but seems inflexible. How does it compare with existing search engine frameworks like ElasticSearch and Weaviate?
**A2.** We would like to note that our contribution is orthogonal to these existing frameworks: prior work, such as ElasticSearch and Weaviate, focuses on efficiency at inference time, while our work focuses on the scientific evaluation of retrieval performance, which requires not efficient serving but efficient experimentation across various factors of the datastore. Even with the state-of-the-art nearest neighbor search used in production, studying the effect of datastore factors such as size, quality filters, and decontamination methods remains expensive. This is because such experiments require rebuilding search indices for every combination of the factors, regardless of which search algorithm is being used. Our study focuses on removing the need for such repetitive rebuilding of indices. The outcome is a comprehensive study of the impact of various datastore factors, as we demonstrate in the results, which is the novel part of our work.
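The idea of avoiding repeated index rebuilding can be sketched as follows: retrieve once (with a large candidate pool) from the full-scale index, then simulate smaller or filtered datastore configurations by post-hoc filtering of the retrieved candidates. This is a hypothetical illustration of the general approach; the function name, arguments, and filtering order are our assumptions, not the authors' released code.

```python
import random

def simulate_datastore(retrieved, keep_fraction=None, quality_filter=None,
                       k=3, seed=0):
    """Simulate a smaller or filtered datastore from one retrieval pass.

    retrieved: list of (score, doc) pairs, sorted by descending score,
               retrieved once with k' >> k from the full-scale index.
    keep_fraction: if set, each document survives with this probability,
                   emulating a randomly subsampled datastore.
    quality_filter: if set, a predicate that a document must pass,
                    emulating a datastore built with a quality filter.
    """
    rng = random.Random(seed)  # fixed seed -> reproducible subsampling
    kept = []
    for score, doc in retrieved:
        # Quality filtering: drop documents the filter rejects.
        if quality_filter is not None and not quality_filter(doc):
            continue
        # Subsampling: keep each document with probability keep_fraction.
        if keep_fraction is not None and rng.random() >= keep_fraction:
            continue
        kept.append((score, doc))
        if len(kept) == k:  # top-k for the simulated configuration
            break
    return kept
```

Because the expensive retrieval step runs only once, every combination of datastore size and filter can then be evaluated by re-running only this cheap filtering step.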
---
**Weakness 3.** This paper lacks an analysis of the retriever. Contriever-msmarco is used, whose training data may differ from MedQA and MMLU. Considering the limited generalizability of dense retrievers (as shown by the BEIR paper), stronger retrievers like those in MTEB could be used.
**A3.** We agree with the reviewer that we investigated only one base retriever, due to limited academic computational resources; instead, the paper studied the effect of improved retrieval by applying a cross-encoder reranker, which is more computationally efficient. In the table below, we run the evaluation with 2 more base retrievers of similar size that outperform Contriever on the MTEB benchmark, using 10% of randomly sampled MassiveDS. Our results show that, interestingly, the performance of similarly sized base retrievers on our general web data does not necessarily align with their ranking on MTEB. We hypothesize that this low correlation arises because the domain compositions of MassiveDS and MTEB are different. Due to limited computational resources, we defer the study of larger retrievers, such as GRIT-7B, to future work; we note that such larger embedding models are often prohibitively expensive to scale up to a trillion-token datastore (Wang et al., 2024).
| Name | Retriever Type | Size | Perplexity ↓ | Natural Questions ↑ | MMLU ↑ |
|------------|----------------|-------|--------------|----------------------|--------|
| Contriever | dense | 177M | 4.2210 | 0.3321 | 0.4922 |
| DRAGON | dense | 110M | 4.2373 | **0.3399** | 0.4875 |
| GTR-Base | dense | 110M | **4.2146** | 0.3080 | **0.4934** |
---
**Question 1.** What is the recall rate of the retriever for different datastore sizes?
**A4.** In our evaluation setup, neither the upstream language modeling nor the downstream evaluation sets provide gold documents for the questions, so we can only report end-to-end performance on these tasks. | Summary: This paper studies the impact of scaling the datastore (retrieval dataset) on retrieval-based language models.
The contributions are:
- MASSIVEDS a 1.4 trillion-token datastore for retrieval-based LMs that will be made open-source.
- A pipeline to study the impact of the datastore scaling on the language models at inference time
- An analysis of the impact of scaling and other pre-processing steps (decontamination, deduplication, and quality filtering) on the language models at inference time
Strengths: 1) OPEN SOURCE
Both the models and data used in the work are open source and when the code is made available, it will facilitate future work and additional research on retrieval-based LM at scale.
2) WRITING/PRESENTATION
The paper is easy to read, the contributions and findings are explicitly stated, and the tables and figures are easy to read.
3) IMPACT
Given the importance of retrieval augmented generation for making LM more trustworthy and adapted to a specific domain/environment, releasing a large-scale dataset and pipeline for retrieval-based LMs can facilitate future research in that domain.
Weaknesses: INFERENCE TIME ANALYSIS
This work does not study the impact of the scaling of the datastore on inference time. Since scaling the data store can make retrieval slower and therefore, make the generation slower, this is a crucial aspect to study, that was not addressed in this work.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) CODE NOT AVAILABLE?
Unless there is an oversight on my part, I could not find the implementation (no link to anonymous GitHub and no supplementary material) and I could not find the code attached to the paper. Although the authors indicated that the code and data will be made available, having an anonymous GitHub repo with the code already available would improve the strength of this submission.
2) RETRIEVAL AUGMENTED GENERATION?
Since the LMs used in the experiment are all generative language models, why is the term Retrieval Augmented Generation not used to describe this work?
3) DATA DECONTAMINATION/DEDUPLICATION
Can more details be provided about data decontamination/deduplication? In practice, what is the difference between the two? Since data decontamination filters out "documents with 80+% 13-gram Jaccard similarity or 32-gram longest sequence overlap", how is deduplication made? is it only on documents matching exactly at the string level?
4) FIGURE 4 (small detail):
Is there a reason why the baselines LM-only on the subplots (d), (e), and (f) are not extended with a dashed line?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss several limitations of their work:
- Focus on a particular class of retrieval-based LMs (RIC-LMs),
- Do not study very large LMs (only 1B and 7B parameters)
- Only consider dense retrievers.
- Only considers QA tasks
There is only one limitation to this work that was not mentioned by the authors: the lack of analysis of the impact on inference time (see weaknesses).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting our open-source contribution and acknowledging its potential impact on the community! We would like to address the reviewer’s concerns below:
**Weakness 1.** This work does not study the impact of the scaling of the datastore on inference time.
**A1.** Our main focus is the effect of datastore scaling on upstream and downstream model performance. While we agree that a larger datastore can introduce additional latency, there is active research on improving the speed of nearest neighbor search, and we believe our contributions are orthogonal to it. In addition, we show that a small LM with retrieval augmentation can match or even outperform larger LM-only models, as supplemented in Figure R1 in the general reply, which indicates a potential reduction of inference time by using a small LM augmented with MassiveDS. Given the complexity of inference speed optimization and its loose relationship with our focus, we defer the study of the inference-time efficiency-performance tradeoff to future work.
---
**Question 1.** The code is not available at the time of submission.
**A2.** We thank the reviewer for reminding us to provide anonymous code with the submission. We have uploaded our code to an anonymous GitHub repository and sent it to the AC according to the rebuttal rules. We hope the reviewer can easily access the code.
---
**Question 2.** Why don’t the authors use the term “retrieval augmented generation (RAG)”?
**A3.** We define retrieval-based LMs as a general family of LMs that leverage large-scale text data at inference time [1], and RAG [2] is one such model. We believe that our finding is generally applicable to other types of retrieval-based LMs such as kNN-LM style models, and therefore decided to use the term. More specifically, we focus on retrieval-in-context LMs as mentioned in Section 2, which is often used interchangeably with RAG [3] but more specifically describes off-the-shelf LMs that use retrieved context at inference time.
[1] ACL 2023 Tutorial: Retrieval-based Language Models and Applications https://acl2023-retrieval-lm.github.io/
[2] Lewis, Patrick, et al. "Retrieval-augmented generation for knowledge-intensive nlp tasks."
[3] Min, Sewon, et al. "Silo language models: Isolating legal risk in a nonparametric datastore."
---
**Question 3.** Can more details be provided about data decontamination/deduplication?
**A4.** The motivation for deduplicating a datastore is to remove documents that are exactly or approximately the same, so that the retrieved top-k documents do not contain repetitive information. To achieve this, a similarity score is computed between every pair of documents in the datastore, and one document is removed from every duplicate pair. Decontamination, in contrast, aims to remove documents in the datastore that are the same as, or near-duplicates of, the test samples, i.e., that directly contain information about them. The motivation is to prevent the model from achieving high evaluation performance simply by retrieving test samples from the datastore. Each test sample is compared against every document in the datastore, and documents with high similarity scores are removed from the datastore for that task.
In our paper, the 13-gram Jaccard similarity score is one metric we use to measure the overlap between two documents (in deduplication, both documents come from the datastore; in decontamination, one is the test sample and the other comes from the datastore). We set the threshold to 80% for both deduplication and decontamination: we remove a document if its similarity score with another document exceeds 80%. Since decontamination plays an important role in evaluation, we combined the 13-gram Jaccard similarity metric with another metric for decontamination to make our datastore less likely to be contaminated. More details can be found in Appendix B.1.
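A minimal sketch of this word-level n-gram Jaccard check (function names are illustrative; the actual implementation likely differs in tokenization and uses approximate techniques such as MinHash for efficiency at scale):

```python
def ngrams(text, n=13):
    """Return the set of word-level n-grams in a document."""
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard_similarity(doc_a, doc_b, n=13):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between the n-gram sets of two documents."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_contaminated(test_sample, document, n=13, threshold=0.8):
    """Flag a datastore document whose n-gram overlap with a test sample exceeds the threshold."""
    return jaccard_similarity(test_sample, document, n) > threshold
```

For deduplication, the same similarity function is applied to pairs of datastore documents, removing one document from each pair that scores above the threshold.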
---
**Question 4.** Is there a reason why the baselines LM-only on the subplots (d), (e), and (f) are not extended with a dashed line?
**A5.** We thank the reviewer for pointing out this detail! We forgot to add a dashed line for these 3 subfigures and we will add them back in the next revised version!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the further explanations. I do not have other concerns and will keep my original positive rating. | Summary: The paper introduces MASSIVEDS, the largest and most diverse open-sourced datastore for retrieval-based language models, containing 1.4 trillion tokens. The authors design a MASSIVEDS pipeline to efficiently explore the impact of different datastore features by reordering the datastore construction operations so that the most expensive operations, such as indexing and retrieval, are only run once. Extensive experiments demonstrate that model performance improves as the datastore size increases.
Strengths: (1) To my knowledge, this is the first work studying scaling laws regarding datastore size. The author introduces the largest and most diverse open-sourced datastore for retrieval-based language models, containing 1.4 trillion tokens. This large-scale datastore can serve as a promising testbed for developing new retrieval-based LMs.
(2) The author promises to release not only the data but also the code to reproduce the experiments. This could greatly facilitate research efforts within the community.
(3) The author proposes a simple yet effective data processing pipeline that enables efficient investigation of the impact of different datastore features.
(4) Extensive experiments show that the performance of retrieval-based LMs benefits from scaling datastore sizes. The evaluation includes both language modeling and downstream question answering. The results on MMLU and MedQA also highlight the need for developing retrieval-augmented LMs that can excel in tasks requiring reasoning abilities.
(5) Some ablation studies provide interesting and insightful findings, such as data deduplication being a crucial factor for enhancing language model performance when the datastore size is extremely large.
Overall, I believe this is a good paper studying an important problem supported by solid experiments.
Weaknesses: Overall, I do not find any significant weaknesses in the paper. The only minor issues I noticed are:
(1) The retriever and reranker used in this paper may be somewhat outdated. Recent models on the MTEB leaderboard might offer better performance. The benefits of scaling datastore sizes could be more significant with a more powerful retriever. However, this is a very minor point. Considering the computational cost of indexing all the data, trying other retrievers is not needed.
(2) Typo: L223 "Datascore" should be "Datastore".
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see my comments in Weakness.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations are properly stated in the Conclusion and Limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the importance of our study and the solidness of our work! We would like to address the only concern by the reviewer on the choice of retriever below.
**Q1.** The retriever and reranker used in this paper may be somewhat outdated. Recent models on the MTEB leaderboard might offer better performance. The benefits of scaling datastore sizes could be more significant with a more powerful retriever. However, this is a very minor point. Considering the computational cost of indexing all the data, trying other retrievers is not needed.
**A1.** We thank the reviewer for understanding our computational constraints, and we agree that the base retriever used in this paper isn’t new. We chose Contriever-MSMARCO because it was used in many previous works cited in L178-179. We agree that the benefits of scaling datastore sizes could be more significant with a more powerful retriever, which is consistent with our conclusion in Section 6.2 that improved retrieval could enhance scaling trends. In the table below, we tried 2 more base retrievers that outperform Contriever on the MTEB leaderboard, using 10% of randomly sampled MassiveDS. Our results show that, interestingly, the performance of similarly sized base retrievers on our general web data does not necessarily align with their ranking on MTEB. We hypothesize that this low correlation arises because the domain compositions of MassiveDS and MTEB are different. Due to limited computational resources, we defer the study of larger retrievers, such as GRIT-7B, to future work; we note that such larger embedding models are often prohibitively expensive to scale up to a trillion-token datastore (Wang et al., 2024).
| Name | Retriever Type | Size | Perplexity ↓ | Natural Questions ↑ | MMLU ↑ |
|------------|----------------|-------|--------------|----------------------|--------|
| Contriever | dense | 177M | 4.2210 | 0.3321 | 0.4922 |
| DRAGON | dense | 110M | 4.2373 | **0.3399** | 0.4875 |
| GTR-Base | dense | 110M | **4.2146** | 0.3080 | **0.4934** |
---
**Q2.** Typo: L223 "Datascore" should be "Datastore".
**A2.** We thank the reviewer for pointing out the typo for us. We will fix it in our next version. We are happy to address more questions from the reviewer during the rebuttal period. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' strong support for the contributions of the paper and their insightful comments. This general response outlines how we have responded to their concerns and provides the requested supplementary results.
**Summary of common concerns and our corresponding response.**
* Reviewer F6AB and Reviewer TsAT asked to compare our proposed pipeline with existing retrieval systems for efficient search at inference time, such as DiskANN. We would like to clarify that our pipeline is optimized for a different goal and is orthogonal to the existing systems.
* Our pipeline is designed to facilitate accessible datastore scaling studies, where the key challenge is how to efficiently study datastores of different configurations, such as datastore sizes, quality filters, data decontamination methods, etc. While prior work such as DiskANN focuses on search efficiency at inference time given a fixed configuration, we believe these are complementary to our work. Specifically, even with the state-of-the-art nearest neighbor search used in production, studying the effect of various factors of the datastore remains expensive. This is because such experiments require rebuilding search indexes for every combination of the factors, orthogonal to what search algorithm is being used. Our work focuses on removing the need for such repetitive rebuilding of indexes. This allows us to conduct a comprehensive study of the impact of various datastore factors, which is a novel contribution of our work.
* Reviewer TsAT and Reviewer AKfc asked about comparisons of MassiveDS with other datasets such as Dolma. We would like to clarify that the definition of a datastore is different from a pretraining dataset.
* By “datastore”, we refer to an index that is ready for retrieval; a pretraining corpus contains only raw data. Building a datastore on an existing pretraining corpus requires additional data chunking, embedding, and indexing over the raw data, which is a non-trivial effort. It has been challenging to conduct datastore scaling research at this scale before, and the existing open-sourced datastores are much smaller than MassiveDS, as shown in Table 1 of the submission. Prior larger-scale pretraining data is available only as raw text, while we open-source both the raw text and the resulting embeddings and index for searching the entire 1.4-trillion-token corpus. In addition, we open-source the codebase for using our released index in future studies.
* Inspired by the reviewer, we further compare the performance of datastores built with different sources in Table R2 below, which hasn’t been examined by any prior work. See the response below for more details.
* Requested supplementary results.
* As requested by Reviewer omwP and Reviewer AKfc, we supplement more scaling results with more language models, such as Llama3, Olmo, and Pythia models, in Figure R1. The new results show our conclusions hold across different LMs.
* As requested by Reviewer cEKQ, Reviewer TsAT, Reviewer vyTD, and Reviewer AKfc, we supplement the evaluation results using 3 different base retrievers in Table R1. The results indicate that retrievers of similar sizes have comparable performance on general-web data and that Contriever-MSMARCO is a reasonable pick for our main experiments.
* As requested by Reviewer AKfc, we compare the performance of datastores constructed using different data sources (DCLM-baseline and FineWeb-Edu) in Table R2. The results indicate that MassiveDS matches or even outperforms the datastores constructed with the latest high-quality pretraining data sources.
* Code: We uploaded our code to an anonymized repository and have sent the link to the AC.
We will go into more detail about each of these in the relevant individual responses below.
**Table R1. Results of different base retrievers on 10% randomly sampled MassiveDS evaluated with Llama2-7B.**
| Name | Retriever Type | Size | Perplexity ↓ | Natural Questions ↑ | MMLU ↑ |
|------------|----------------|-------|--------------|----------------------|--------|
| Contriever-MSMARCO | dense | 177M | 4.221 | 33.2 | 49.2 |
| DRAGON [1] | dense | 110M | 4.237 | 33.9 | 48.7 |
| GTR-Base [2] | dense | 110M | 4.214 | 30.8 | 49.3 |
[1] Lin, Sheng-Chieh, et al. "How to train your dragon: Diverse augmentation towards generalizable dense retrieval."
[2] Ni, Jianmo, et al. "Large dual encoders are generalizable retrievers."
**Table R2. Results of different data sources for the datastore evaluated with Llama2-7B.**
| Source | #Tokens (B) | NQ | MMLU |
|----------------|-------------|-------|-------|
| LM-only | / | 26.6 | 45.8 |
| DCLM-baseline [3] | 100 | 31.4 | 49.0 |
| FineWeb-Edu [4] | 100 | 30.4 | 49.4 |
| MassiveDS (ours) | 100 | 33.2 | 48.8 |
[3] Li, Jeffrey, et al. "DataComp-LM: In search of the next generation of training sets for language models."
[4] Penedo, Guilherme, et al. "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale."
**Figure R1. (See PDF) Scaling performance on downstream tasks with different language models, corresponding to Figure 2 (c)-(f) with additional models included. The trends remain consistent.**
Pdf: /pdf/4c5adf03c1219f07deac1de6826a90401b1d0f22.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" explores a new dimension of scaling language models (LMs) by considering the amount of data used during inference. The study focuses on retrieval-based LMs, which access a large external datastore during inference, and examines the impact of scaling the datastore. The authors introduce MASSIVEDS, a 1.4 trillion-token datastore, and an efficient pipeline to study various datastore features such as size, data filters, and decontamination strategies. The experiments reveal that datastore scaling follows a log-linear trend and significantly improves performance on various tasks. The authors also highlight the importance of aggressive decontamination, data deduplication, and improved retrieval techniques for better scalability. If accepted, the dataset MASSIVEDS will be the largest and also most diverse open-sourced datastore which will benefit the research community.
Strengths: - **Originality**: The paper tackles the important question of how scaling retrieval-based LMs by the size of the inference datastore impacts upstream and downstream task performance which is novel and adds a new dimension to LM scaling laws.
- **Quality**: The research is rigorous, with well-designed experiments and robust methodologies.
- **Clarity**: The paper is well-organized, and the key contributions are clearly articulated.
- **Significance**: The open-sourced resources will facilitate further research in this direction.
- **Experiment Design**: The design of the experimental set-up is novel and theoretically grounded, which will be of interest to the research community regarding how to efficiently experiment with multiple dataset configurations
Weaknesses: The following are some of the limitations of the paper (some of these have already been highlighted by the authors):
- **Task Diversity**: The evaluation is primarily focused on QA and knowledge-intensive tasks where one can expect the retrieval based LMs to work well. Inclusion of more diverse tasks, such as commonsense reasoning or open-ended text generation, would provide a more comprehensive assessment of the datastore's scalability.
- **Limited Scope of Analysis**: This is a generic comment about the overall analysis results in the paper. For each analysis dimension, the authors have scratched the surface by experimenting with a basic technique, which I am afraid makes the results limited in their scope and generalizability. Here are a few examples to illustrate the point: a) quality filtering: they explore basic filtering from DOLMA but leave higher-quality filters for future work; b) decontamination: 13-gram models for decontamination, leaving out techniques such as Min-K% Prob which are known to be stronger baselines; c) retrieval model: the paper uses a specific retrieval model (CONTRIEVER-MSMARCO) without exploring alternative models. This might be due to the fact that a major portion of the paper was focused on designing the scalable experimental set-up for this analysis.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Previous studies (https://arxiv.org/html/2307.07164v2) have shown that lack of diversity adversely impacts the results. What was the diversity of results retrieved by the underlying retrieval?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors adequately address the limitations of their work, such as focusing on a specific class of retrieval-based LMs, limited model sizes, and dense retrievers. They also acknowledge the need for further exploration of different retrieval architectures and a wider range of evaluation tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty and solidness of our work! We would like to address the concerns and questions below:
**Weakness 1.** The evaluation is primarily focused on QA and knowledge-intensive tasks where one can expect the retrieval based LMs to work well. Inclusion of more diverse tasks, such as commonsense reasoning or open-ended text generation, would provide a more comprehensive assessment of the datastore's scalability.
**A1.** We agree that more diverse tasks would make our study more comprehensive. Our study takes a step in this direction, as it is, to the best of our knowledge, the first analysis of datastore scaling on downstream tasks. Previous studies that examine retrieval-based language models with datastores exceeding 1 trillion tokens (RETRO; Borgeaud et al.) have focused solely on language modeling, and it remains unclear how datastore scaling and different datastore factors impact the performance of downstream tasks.
In addition to knowledge-intensive tasks, we also include tasks that involve reasoning on top of knowledge. In particular, we included MMLU, which is widely recognized as a standard metric for assessing the performance of modern LMs, in our evaluation. Given our resource constraints, it was challenging to cover more downstream tasks, as we discussed in our limitations section. We will emphasize this point in the next revision, and we hope that by open-sourcing our datastore and pipeline, we will enable others to build upon our work and evaluate large-scale retrieval-based models on a more diverse set of downstream tasks.
---
**Weakness 2.** For each of the analysis dimensions, the techniques are basic: e.g., a) quality filtering: basic filtering from DOLMA, leaving out higher-quality filters; b) decontamination: 13-gram models, leaving out techniques such as Min-K% Prob; c) retrieval model. This might be because a major portion of the paper was focused on designing the scalable experimental set-up for this analysis.
**A2.** We thank the reviewer for acknowledging that our main focus is on the design of a scalable experimental setup, as it is challenging to cover all possible datastore configurations given our computational constraints. Nevertheless, our analyses cover several key aspects that practitioners care about when designing datastores, and we believe our findings are informative.
Here are some discussions on the points mentioned by the reviewer:
(1) We agree with the reviewers that the DOLMA filters are basic and that higher-quality filters, such as fasttext filtering [1], are not covered in our study. We focus on standard filters that have been applied to both traditional and modern pretraining corpora such as RedPajama, C4, Dolma, etc. We believe these analyses are interesting to a broad community. In addition, we show that it is easy to test the effect of different quality filters using our pipeline and defer the study on more recent filters to future works.
(2) For data decontamination, we considered 13-gram Jaccard similarity-based decontamination because it has been widely adopted by many works, such as GPT-3, RETRO, Dolma, etc. Besides this, we also applied another decontamination method, longest-string decontamination, which enables us to experiment with different strictness levels of decontamination, as shown in Figure 4. By tuning the hyper-parameters of the decontamination methods, we show that datastore scaling can benefit the model’s performance across different decontamination levels (Figure 4), so our conclusions hold regardless of the strictness of the decontamination method. Separately, Min-K% Prob is designed for detecting whether a given LM has been trained on certain data, and isn’t intended to be a data decontamination method.
(3) We indeed only used a single base retriever model in the submission, so we add a new ablation on the base retriever using 10% of MassiveDS and supplement the results in the table below. We chose DRAGON and GTR-Base because they rank higher than Contriever on the MTEB benchmark and are similar in size. Interestingly, the results indicate that these base retrievers exhibit similar performance on MassiveDS. We hypothesize that this is due to the differing domain compositions of MassiveDS compared to the datasets used in MTEB.
| Name | Perplexity ↓ | Natural Questions ↑ | MMLU ↑ |
|------------|--------------|----------------------|--------|
| Contriever | 4.2210 | 0.3321 | 0.4922 |
| DRAGON | 4.2373 | 0.3399 | 0.4875 |
| GTR-Base | 4.2146 | 0.3080 | 0.4934 |
Due to limited compute, we leave it to future work to explore the performance with a larger and more capable retriever such as GRIT-7B. But we note that such larger embedding models are often prohibitively expensive to scale up to a trillion-token datastore (Wang et al., 2024). However, we are optimistic that significantly improving the retriever will lead to better scaling trends. This expectation is supported by the evidence in Figure 5, which shows that enhancing retrieval quality with a cross-encoder reranker improves performance.
[1] Li, Jeffrey, et al. "DataComp-LM: In search of the next generation of training sets for language models."
---
**Q3.** Previous studies (https://arxiv.org/html/2307.07164v2) have shown that lack of diversity adversely impacts the results. What was the diversity of results retrieved by the underlying retrieval?
**A3.** As shown in Figure 3, the retriever retrieves from diverse sources in MassiveDS, and it tends to retrieve from relevant domains more frequently than other domains. Our results also indicate that increasing diversity helps improve the performance—MassiveDS outperforms single-domain datastores on the evaluated tasks, as shown in Table 3. We will discuss this in our next version.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal from Authors
Comment: Dear Authors, Thanks for responding to my comments and also including the supplementary results. The response and the proposed revisions to the next version adequately address the concerns I had related to some of the weaknesses of this work. After going through their responses, I have no other major concerns, and due to this I am revising my rating. | Summary: The paper studies the effects of scaling the retrieval datastore in retrieval-augmented language models. The authors present the impacts of various design choices such as data size and data selection. A testbed dataset, MassiveDS, is also introduced.
Strengths: - The authors present a substantially larger retrieval datastore for retrieval augmented LMs.
- Extensive experiments have been conducted to study how datastore designs/properties can impact RAG results.
- Detailed analysis has been conducted on several eval datasets and models.
Weaknesses: - Large-scale embedding-based search is not a new topic but has been studied for years. Arguably it is a mature technology ready for production [1, 2]. The authors have made a rather arbitrary choice in the index design. Information like hardware latency and memory/CPU footprint was not reported; comparisons with popular techniques like DiskANN [2] are not presented. This could be confusing and/or misleading to people reading the paper: should they adopt the search pipeline introduced in the paper, or just use a more developed technology? Also, this makes it difficult to assess the contribution of the entire paper.
- While the paper shows a general improving trend in RAG tasks and with relatively capable LMs, the models in the paper still seem to underperform the popular 2-year-old RAG model ATLAS [3] by a significant margin. Specifically, the design of the RAG system used in the paper can be somewhat basic. How much the results transfer to stronger system/system designs remains relatively unclear.
[1] Huang, J., Sharma, A., Sun, S., Xia, L., Zhang, D., Pronin, P., Padmanabhan, J., Ottaviano, G., & Yang, L. (2020). Embedding-based Retrieval in Facebook Search. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
[2] Subramanya, S.J., Devvrit, Kadekodi, R., Krishaswamy, R., & Simhadri, H.V. (2019). DiskANN : Fast Accurate Billion-point Nearest Neighbor Search on a Single Node.
[3] Izacard, G., Lewis, P., Lomeli, M., Hosseini, L., Petroni, F., Schick, T., Yu, J.A., Joulin, A., Riedel, S., & Grave, E. (2022). Few-shot Learning with Retrieval Augmented Language Models. ArXiv, abs/2208.03299.
Technical Quality: 2
Clarity: 3
Questions for Authors: - It was discussed in the paper that MASSIVEDS can outperform in-domain datastores on two eval sets. I wonder how this happens. Are there pieces of information easier to retrieve and/or read from out-of-domain? If so, what forms can this information take to be easier to read and retrieve than the original information?
- The authors claim to observe an empirical log-linear scaling trend in the RAG system. It would be interesting to see some discussion on why this is the case and the implications of this. How does this happen? Does this mean retrieval datastore scaling has an extremely diminishing gain?
- The authors use token counts extensively across the paper. While this is common in describing pre-training data, the actual retrieval units, i.e. passage/chunk is arguably a better measurement to describe the statistics of the retrieval task. Can the authors provide some statistics based on chunks/passages?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out the comprehensiveness of our work! We would like to address the concerns and questions below, and we will edit the paper accordingly.
**Weakness 1.** Large-scale embedding-based search is not a new topic but has been studied for years. Arguably it is a mature technology ready in production [1, 2]. Information like hardware latency and memory/CPU footprint were not reported; comparisons with popular techniques like DiskANN [2] are not presented. This could be confusing and/or misleading to people reading the paper: should they adopt the search pipeline introduced in the paper, or just use a more developed technology? Also, this makes it difficult to assess the contribution of the entire paper.
**A1.** The work listed by the reviewer focuses on search efficiency at inference with a fixed datastore configuration, but we would like to clarify that this is orthogonal to the focus of the current paper. Our main goal is to conduct a thorough analysis of the effect of scaling the datastore for retrieval-based language models and the impact of various datastore factors, where inference speed isn’t a key bottleneck: even with state-of-the-art nearest neighbor search used in production, such as DiskANN, studying the effect of datastore factors like size, quality filters, and decontamination methods remains expensive. This is because such experiments require rebuilding search indices for every combination of the factors, regardless of which search algorithm is used. Therefore, we design a new pipeline for efficient experimentation with various datastore factors by removing the need for repetitive datastore construction and repetitive large-scale retrieval; this is orthogonal to and can be combined with prior work on efficient search algorithms.
---
**Weakness 2.** While the paper shows a general improving trend in RAG tasks and with relatively capable LMs, the models in the paper still seem to underperform the popular 2-year-old RAG model ATLAS [3] by a significant margin. Specifically, the design of the RAG system used in the paper can be somewhat basic. How much the results transfer to stronger system/system designs remains relatively unclear.
**A2.** Despite active research on advancing RAG designs, there have been few scientific studies of the scaling properties of datastores with more than a few billion tokens. As an initial open-source study on datastore scaling, we picked the most basic but widely adopted RAG design instead of any particularly optimized system. In Figure 5, we presented an analysis of the impact of improved retrieval on the datastore scaling performance, which shows that adopting a more advanced retrieval technique, i.e., a reranker, can further improve the scaling performance, indicating a positive sign that our conclusions could potentially be extended to more recent RAG designs. Overall, we focus on conducting a comprehensive analysis of datastore scaling and sharing our thoughts on how future works could further improve the scaling curves, rather than achieving state-of-the-art scores on specific datasets with a particularly optimized RAG design.
The purpose of Atlas is to finetune the retriever for best performance. Atlas finetunes the retriever and the LM to adapt to every specific task using the full task data (>10k examples) for its best-reported scores, while we evaluate the model in a training-free fashion by prepending only 5 examples in context. The purpose of our work is to disentangle the factors that affect performance and study them in detail. As such, Atlas is orthogonal work. The insights from our work can be combined with Atlas, but here we ask more basic questions about RAG scaling.
---
**Question 1.** It was discussed in the paper that MASSIVEDS can outperform in-domain datastores on two eval sets. I wonder how this happens. Are there pieces of information easier to retrieve and/or read from out-of-domain?
**A3.** MassiveDS is a hybrid multi-domain datastore that includes both “in-domain datastores” as subsets and general web data to enhance its knowledge base. The key finding here is that the model is robust to other out-of-domain data in the same datastore and the retriever still preferentially retrieves from the right domain (Figure 3), so it eliminates the need to develop domain-specific datastores for each task, which is often a costly and complex process. Consequently, users can establish a single, general-purpose datastore that is effective across various tasks, even in cases where creating a task-specific in-domain datastore is unfeasible. This flexibility greatly simplifies data management and expands the utility of MassiveDS in diverse applications.
---
**Question 2.** The authors claim to observe an empirical log-linear scaling trend in the RAG system. It would be interesting to see some discussion on why this is the case and the implications of this. How does this happen? Does this mean retrieval datastore scaling has an extremely diminishing gain?
**A4.** Many previous scaling law studies have shown significant gains by scaling the pretraining data and the model size, and they all present performance gains with the x-axis (the number of tokens for pretraining or the number of parameters) in log scale [1,2]. We find that the gains from scaling the datastore are similar to the gains from scaling pretraining data. Therefore, our results indicate that the datastore could be another dimension to scale in addition to the pretraining data and the model size.
[1] Gadre, Samir Yitzhak, et al. "Language models scale reliably with over-training and on downstream tasks."
[2] Kaplan, Jared, et al. "Scaling laws for neural language models."
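To illustrate what an empirical log-linear trend means in practice, the sketch below fits score = a·log10(tokens) + b; the sizes and scores here are made-up numbers for illustration, not the paper's results. The slope a is the roughly constant additive gain per 10x increase in datastore size, which is why a log-linear trend implies diminishing (but not vanishing) returns on a linear token axis.

```python
import numpy as np

# Hypothetical datastore sizes (tokens) and downstream scores,
# invented only to show the shape of a log-linear trend.
sizes = np.array([1e9, 1e10, 1e11, 1e12])
scores = np.array([0.30, 0.34, 0.38, 0.42])

# Fit score = a * log10(size) + b; slope a is the gain per decade.
a, b = np.polyfit(np.log10(sizes), scores, deg=1)
print(f"gain per 10x tokens: {a:.3f}")
```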
---
**Question 3.** Can the authors provide some statistics based on chunks/passages?
**A5.** We have 4B passages in our datastore, which average around 360 tokens. We will report detailed statistics in the next version of the paper.
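As a quick back-of-the-envelope consistency check (our arithmetic, not a figure from the rebuttal), these chunk statistics line up with the trillion-token scale discussed in A2 above:

```python
# 4B passages averaging ~360 tokens each comes to roughly
# 1.44 trillion tokens, consistent with a trillion-token datastore.
passages = 4e9
avg_tokens = 360
total_tokens = passages * avg_tokens
print(f"{total_tokens:.2e}")  # 1.44e+12
```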
---
Rebuttal 2:
Comment: Dear Reviewer,
As the discussion deadline approaches, we want to ensure that all your concerns and questions have been fully addressed. To address the weaknesses listed in the original review, we have clarified our contributions and the detailed settings of our experiments. We will revise our paper accordingly.
We deeply appreciate your insights and the time you have invested in reviewing our work. Your feedback is invaluable in refining our research and ensuring its quality and relevance. Please let us know if you have any further questions! | null | null | null | null |
HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation | Accept (poster) | Summary: This work introduces a ControlNet-based conditioning method to enable pre-trained diffusion models to be layout-conditioned. The conditioning model processes each object in parallel and fuses them as residual features for the pre-trained diffusion model. In addition to the commonly used COCO dataset for evaluation, the authors also introduce a new dataset, HiCo-7K, which is fine-grained from the GRIT-20M dataset. The generation results look good even when there are many objects, and empirical results suggest that the model is backbone-agnostic.
Justification: Although the method itself does not exhibit a high degree of novelty, there are notable contributions in dataset curation. However, the dataset part is not well elaborated, and the experiments lack comparison with cutting-edge methods.
Strengths: * The results validate that the proposed method can be applied to multiple pre-trained diffusion models, demonstrating its backbone-agnostic nature.
* Qualitative results are good, especially for layouts with many objects.
* The authors propose a new fine-grained dataset named HiCo-7K.
Weaknesses: * The custom dataset, HiCo-7K, is an important contribution but has not been elaborated on sufficiently. The paper only provides information about the total number of images and the average number of objects. Details such as how the filtering was conducted and the criteria for manual cleaning should be included.
* Table 1 does not include comparison with GLIGEN, and the SoTA method InstDiff[A], whose code was released several months before the NeurIPS submission deadline, is missing. Both of these methods can be applied on COCO and thus should be compared.
* The local CLIP score in Tab 3 being higher than the Ground Truth potentially indicates that this metric may be unreliable. (This is a minor issue and my rating is not affected by this point, the author can skip this point during rebuttal if out of space)
* The fuse layer is not explained in enough detail. While the sum and average cases are straightforward, the mask case requires more explanation, particularly regarding how features of overlapping objects are merged.
A. Wang, Xudong, et al. "Instancediffusion: Instance-level control for image generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 2
Clarity: 2
Questions for Authors: My major concerns are the weakness 1, 2, 4. I would like the authors to address these concerns. My final rating is subject to change based on the authors' feedback.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The author has discussed limitations and social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your affirmation and constructive comments. We address each of your comments below; additional figures and experimental results can be found in the Author Rebuttal material PDF.
## Weakness 1.
We have detailed the construction pipeline of the custom dataset HiCo-7K in Fig.3 of the Rebuttal PDF. We found that GRIT-20M has some issues, such as a low labeling rate for targets with the same description, and target descriptions being derived solely from the original captions. Compared to GRIT-20M, the pipeline of HiCo-7K is as follows.
1. Extracting noun phrases. We use spaCy to extract nouns from captions and an LLM VQA model to remove abstract noun phrases. Meanwhile, we use the GroundingDINO model to extract richer phrase expressions.
2. Grounding noun phrases. We use the GroundingDINO model to obtain the bboxes. After that, we use NMS and CLIP algorithms to clean the bboxes.
3. Manual correction. To address algorithmic missed detections of multiple targets with the same description in an image, manual correction is employed to further improve the labeling rate of similar targets.
4. Multi-captions with bounding boxes. We expand the basic text from the original captions and use GPT-4 to re-caption the target regions. The HiCo-7K dataset contains 7000 expression-bounding-box pairs with referring expressions and GPT-generated expressions.
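The NMS-based bbox cleaning in step 2 can be sketched as a greedy IoU filter. This is a generic illustration (function names and the 0.5 threshold are our assumptions), not the actual HiCo-7K pipeline code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap an already-kept box by more than the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```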
## Weakness 2.
We conducted comparative evaluations of the mentioned methods on COCO-3K and HiCo-7K, with detailed results shown in Tables 1~3 of the Rebuttal PDF. We also tested the inference time and GPU resource utilization of different models under the same conditions, as shown in Fig.2 of the Rebuttal PDF. Due to the distinct data quality and distribution of COCO versus DM-generated images, the grounding model and CLIP are better suited for COCO-3K evaluation.
Table 3 shows that HiCo outperforms methods like InstanceDiffusion in terms of image quality and layout controllability on HiCo-7K. However, on the COCO-3K dataset, our controllability is somewhat reduced.
The reason is that our model was trained on 1.2M fine-grained long captions, which are out-of-distribution for COCO data. HiCo has significant advantages over other methods in terms of inference time and GPU usage, with detailed results shown in Fig.2 of the Rebuttal PDF.
## Weakness 3.
In the evaluation based on the LocalCLIP Score dimension, the HiCo model exhibits slightly better performance than GroundTruth, primarily for the following reasons:
1. HiCo generates images that are clearer and more visually appealing.
2. HiCo produces clearer boundaries when generating overlapping target regions.
3. In the GroundTruth dataset, there are instances where adjacent identical targets are present, which neither algorithms nor human annotators have fully addressed, leading to inaccuracies in the detection results from the Grounding-DINO detection algorithm.
In summary, both the quality of generated images and the accuracy of detection models influence the LocalCLIP Score.
## Weakness 4.
We designed a mask fusion method, which not only decouples different targets from the background, but also provides a prerequisite for further image editing.
The main operation of the mask fusion method, as shown in Equation 1 of the Rebuttal PDF, is to multiply the features of different branches with the masks of the corresponding regions and then fuse them by summation.
However, regarding overlapping and occlusion among different targets, the current mask fusion method handles fusion by directly adding features together. The occlusion order of overlapping targets can only be specified via a text description in the global prompt, for example “bowl in front of vase”, as illustrated in Fig.1-(c) and Fig.1-(d) of the Rebuttal PDF. The current version of HiCo indeed lacks a more explicit mechanism for controlling occlusion order. We have recognized this problem as future work.
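The described operation (multiply each branch's features by its region mask, then sum) can be sketched in a few lines of numpy. Shapes and names here are hypothetical stand-ins; the real Fuse Net operates on UNet feature maps, and in overlapping regions the features simply add, as stated above.

```python
import numpy as np

def mask_fuse(branch_feats, masks):
    """Fuse per-object branch features: fused = sum_i(feat_i * mask_i).
    branch_feats: list of (C, H, W) arrays, one per object branch;
    masks: list of (H, W) arrays, 1 inside the object's region, 0 outside."""
    fused = np.zeros_like(branch_feats[0])
    for feat, mask in zip(branch_feats, masks):
        fused += feat * mask[None, :, :]  # broadcast mask over channels
    return fused
```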
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks author's feedback and is satisfied with author's response for weakness 1, 4.
Though, from the results in the rebuttal, the performance may not be superior to the current SoTA in all scenarios, the proposed method does show improvement in certain aspects.
Considering the contribution of the proposed dataset, the reviewer improves the score to 5. | Summary: This paper studies layout-to-image generation. It proposes HiCo, a diffusion model that supports a complicated, hierarchical set of bounding boxes as the layout condition. The authors also constructed the HiCo-7K benchmark to provide challenging tasks for evaluations. The experiment results show that the proposed method can effectively generate images matching the layouts in various complex scenarios.
Strengths: - The key insight of disentangling each object with one branch of HiCo-Net, a ControlNet-like conditioning branch, and fusing them into the same image, is novel, interesting, and inspiring.
- The authors provided various ways to augment the HiCo-Net branches with LoRA, which makes the model more powerful and extensible.
- The authors constructed their dataset HiCo-7K for evaluation.
- Abundant ablation study shows the reasonability of each design choice.
Weaknesses: - The method only supports axis-aligned bounding boxes. It might be more powerful if it also supports more free-formed or precise bounding boxes like rotated squares or even polygon bounding boxes, which will make it desirable in some 3D generative tasks like layout-guided room generation.
- Given that each bounding box requires an individual branch to compute intermediate features, the time complexity will be linearly growing with the number of bounding boxes. This makes the proposed method less efficient than previous method whose running time is constant.
- Following the previous point, I wonder if there is a more efficient way to utilize each bounding boxes, e.g., assign a small number of nearby bounding boxes to one branch, to trade-off between efficiency and per-branch task complexity.
- (Minor) In Fig.3, the "Encoder" and "Decoder" are not accurate terminologies in UNet. They should be "downsampling" and "upsampling" as in Fig.4.
- (Minor) The texts in math formulas should not be italicized, which actually means the multiplication of each letter. For example, $Instruction$ means $I\times n\times s\times t\times r\times u\times c\times t\times i\times o\times n$ instead of $\mathrm{Instruction}$.
- (Minor) $\times$ should be used instead of "*" in L130 and L131.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please see "Weaknesses".
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have addressed the limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your wonderful review and detailed comments. Here we address each point of your comments; additional figures can be found in the Author Rebuttal material PDF.
## Weakness 1
We are very grateful to the reviewer for presenting constructive ideas. Currently, HiCo only supports axis-aligned bounding boxes, mainly from the perspective of general scene applications, where it achieves simplicity, ease of use, and excellent performance.
For scenarios involving other controllable conditions, using the HiCo framework and designing reasonable training data and fusion methods can greatly expand its application scenarios. For example, in our ongoing work, we replaced the input layout image with a foreground image produced by a segmentation model, and achieved outstanding performance on inpainting and outpainting. We will also explore the scalability of conditional generation in future work.
## Weakness 2
For inference run time and GPU usage, we conducted two additional comparisons.
The evaluation environment is a 24GB VRAM 3090 GPU. We generate images at 512×512 resolution using 50-step inference on HiCo-7K.
For the vertical comparison, we assessed the inference time and GPU memory usage with different numbers of objects. Since each object is processed by a separate branch in HiCo, inference can be accelerated by inferring all the branches in one batch, i.e., in “parallel mode”, which, as shown in Fig.2-(b) of the Rebuttal PDF, is much faster than “serial mode”, i.e., inferring all the branches one by one.
For the horizontal comparison, among 6 different models (GLIGEN, InstanceDiff, MIGC, CAG, MtDM, and our HiCo), the results in Fig.2-(a) of the Rebuttal PDF show that HiCo has the shortest inference time and the 2nd lowest GPU memory footprint.
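The speedup of “parallel mode” comes from stacking the per-branch inputs along the batch dimension so that one batched pass replaces N serial passes. A toy numpy illustration of the equivalence (all names here are hypothetical stand-ins, not HiCo code):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                         # stand-in for shared branch weights
inputs = [rng.normal(size=(8,)) for _ in range(5)]  # one input per object

# "Serial mode": N separate forward passes, one per branch.
serial = np.stack([W @ x for x in inputs])

# "Parallel mode": stack inputs into one batch, run a single pass.
parallel = np.stack(inputs) @ W.T

assert np.allclose(serial, parallel)  # same result, one batched call
```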
## Weakness 3
The HiCo model currently does not support merging multiple small and nearby layout boxes for generation. On one hand, the descriptions and bounding boxes of different targets within one branch cannot be distinguished. On the other hand, branches of HiCo can be inferred in parallel, which alleviates the impact of this issue to some extent.
Of course, we are also actively exploring more concise and efficient structures to further balance inference efficiency and task complexity.
## Weakness 4 ~ Weakness 6
We will thoroughly review the writing and formatting issues in the paper, and update it with the new experimental results and conclusions.
---
Rebuttal Comment 1.1:
Comment: I sincerely thank the authors for their rebuttal. All of my concerns are addressed, and I would like to keep the rating of 7.
I really like the paper's results and ideas. I hope the non-axis-aligned bounding boxes (or masks) can be supported soon so that the work will also benefit 3D and video tasks. | Summary: This paper propose HiCo (Hierarchical Controllable) Diffusion Model for layout-to-image generation. HiCo Net is a multi-branch structure that is introduced to hierarchically generate the global background and foreground instances for different layout regions. The author further evaluate the performance of multi-objective controllable layout generation in natural scenes and introduce HiCo-7K benchmark.
Strengths: 1.The paper writing is clear and easy to follow.
2.The HiCo model achieves spatial disentanglement and could generate more coherent and accurate images in complex scenarios.
3.The HiCo model demonstrate excellent compatibility with rapid generation plugins LORA.
Weaknesses: 1.The generation requires more inference time and computational resources.
2.The bounding-box and per-object generation process could not handle more complex interactions between entities such as ‘A man on the left and his wife on the right is holding their dog in the middle.’
3.In the context of bounding-box based layout-to-image generation, addressing the issue of overlaps between bounding boxes corresponding to different entities has been a focal point of academic discourse. However, the authors omit any discussion on how such overlaps are managed.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1.How to manage the overlapping issue mentioned above?
2.How does the runtime compare with previous works ?
3.There are similar works like LMD[1] and SLD[2], discuss the differences between HiCo and these two works.
4.Could HiCo Net be considered a variant of ControlNet within the modality of bounding box-based layout?
[1]Lian, Long, et al. "Llm-grounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models."
[2]Wu, Tsung-Han, et al. "Self-correcting llm-controlled diffusion models."
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors are encouraged to answer the questions and address the weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your detailed review and valuable suggestions. We address each of your comments below, additional figures can be found in the Author Rebuttal material PDF.
## Question 1 & Weakness 2&3.
HiCo achieves hierarchical generation by decoupling each object’s position and appearance information into different branches, while controlling their overall interactions through a background branch with the global prompt and the Fuse Net. The Fuse Net combines features from foreground and background regions, as well as intermediate features from side branches, then integrates them during the UNet upsampling stage. As illustrated in Fig.1-(a) and Fig.1-(b) of the Rebuttal PDF, HiCo is capable of handling complex interactions in overlapping regions without any difficulty.
The occlusion order of overlapping objects is likewise specified via a text description in the global prompt, for example “bowl in front of vase”, as illustrated in Fig.1-(c) and Fig.1-(d) of the Rebuttal PDF. But since corresponding occlusion-order training data is lacking, the success rate is far from optimal. The current version of HiCo indeed lacks a more explicit mechanism for controlling occlusion order.
We recognize this problem as future work. In fact, we are already working on occlusion-order data curation, which is a quite challenging task as it requires reliable depth estimation in addition to the object detection bounding boxes. The process deserves a dedicated technical report in the future.
## Question 2 & Weakness 1.
For inference run time and memory usage, we conducted two additional comparisons. The first comparison is horizontal, among 6 different models: GLIGEN, InstanceDiff, MIGC, CAG, MtDM, and our HiCo. Specifically, we evaluated the inference time and GPU memory usage for directly generating 512*512 resolution images on the HiCo-7K test set using a 24GB VRAM 3090 GPU; the results in Fig.2-(a) of the Rebuttal PDF show that HiCo has the shortest inference time and the 2nd lowest GPU memory footprint.
To make the results more complete, the second comparison is vertical: we assessed the inference time and GPU memory usage for generating 512*512 resolution images on the HiCo-7K test set with different numbers of objects. Since each object is processed by a separate branch in HiCo, inference can be accelerated by inferring all the branches in one batch, i.e., in “parallel mode”, which, as shown in Fig.2-(b) of the Rebuttal PDF, is much faster than “serial mode”, i.e., inferring all the branches one by one.
## Question 3.
Thank you for the valuable references. LMD and SLD represent early works that integrate large language models (LLMs) with diffusion models for instruction-following enhancement and controllable image generation and editing.
Unlike HiCo, which is dedicated to layout-controllable image generation and requires the layout and image specification directly from user input, LMD and SLD resort to an LLM to automatically produce the scene description and layout arrangement from text.
For layout control, LMD and SLD adopt a training-free approach by manipulating the latent and cross-attention map of each object. The solution is quite economical, but the controlling effect is less satisfactory. In contrast, HiCo incorporates a dedicated conditioning network and learns the layout condition from millions of data samples, providing comparatively superior controlling capability.
LMD and SLD, on the other hand, can perform complex instruction understanding, on which HiCo cannot be directly compared. It’s worth pointing out that HiCo can be integrated with LMD and SLD by treating HiCo as a replacement for their training-free layout-controllable image generation module; we’ll add this discussion to the main manuscript.
## Question 4.
As mentioned in the introduction, adapter models such as ControlNet and IP-Adapter are representative works that introduce additional conditions into the diffusion model by incorporating an “adapter network” alongside the frozen pretrained Stable Diffusion model. HiCo is also a kind of adapter model in this respect, introducing the layout condition by incorporating a separate branch network for each object. Unlike ControlNet, however, the side-branch input of HiCo during training and inference uses paired text descriptions and conditional images, and the fusion method across HiCo’s multiple side branches plays an important role in its good generation performance. We believe HiCo’s current design is certainly not the most optimal, and we will conduct more exploration of the model structure in its next version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The authors have addressed most of my concerns in the rebuttal. I hold the firm belief that the current method still holds potential for improvement. Accordingly, I have raised my score from 5 to 6. I hope to see an optimized version in future open-source releases. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their thoughtful, overall positive, and encouraging feedback. We are particularly pleased that the reviewers believe that our method achieves spatial disentanglement by separating each object, making it seamlessly compatible with SD community plugins (Reviewer 72RR); that our method is novel, simple, and effective, with strong model scalability (Reviewer iE3j); and that our work on the custom dataset HiCo-7K makes significant contributions (Reviewer 6Tmf).
We would also like to thank the reviewers for all their insightful suggestions. We carefully read and analyzed all the weaknesses and questions, which can be summarized into the following aspects.
**Inference performance**. We tested the inference time and GPU resource utilization of different models under the same conditions, as shown in Fig.2.
**Target interaction**. Our method utilizes background branches to implicitly understand and generate complex interactions between targets, as shown in Fig.1-(a) and Fig.1-(b).
**Overlapping area**. Our method cannot generate overlapping areas according to a specified layer order; it can only rely on the model's prior knowledge to autonomously generate reasonable multi-target overlapping areas, as shown in Fig.1-(c) and Fig.1-(d).
**Fusion Network**. Our fusion network decouples different targets and backgrounds, improving the quality of complex layout image generation, and also provides feasibility for subsequent image editing.
**Dataset**. We have provided a detailed description of the processing procedure for the evaluation dataset HiCo, as shown in Fig.3.
**Comparison with other SOTA methods**. We conducted comparative evaluations of different methods on COCO-3K and HiCo-7K. Refer to Tables 1-3.
**Writing format**. We will conduct a detailed review and update of equations, symbols, terminology definitions, and method explanations.
Finally, we remind reviewers to refer to our Rebuttal PDF. At the same time, we will respond to each question and comment one by one. We are more than happy to discuss with the reviewers to address any other issues.
Pdf: /pdf/f5349dd222ef99b334350eaa71bc24db71bf254e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors | Accept (poster) | Summary: The authors propose a transformer-like NN by unrolling iterative optimization algorithms that minimize graph smoothness, which is used for imaging tasks.
Strengths: 1. The method is well-illustrated, and the theoretical details are convincing.
2. Experiments in imaging tasks show superior experimental results compared with SOTA methods.
Weaknesses: Some more discussions on the proposed method are needed. See my questions for details.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. The idea of unfolding traditional graph optimization methods has been popular in recent years. The authors may consider discussing more related works in GNNs, e.g.,
[1] Y. Yang, T. Liu, Y. Wang, J. Zhou, Q. Gan, Z. Wei, Z. Zhang, Z. Huang, D. Wipf. Graph neural networks inspired by classical iterative algorithms. International Conference on Machine Learning, pages 11773-11783.
[2] M. Chen, Z. Wei, Z. Huang, B. Ding, Y. Li. Simple and deep graph convolutional networks. International Conference on Machine Learning, pages 1725-1735.
2. Could you provide a convergence guarantee for the proposed model?
3. It seems that all parameters are trained and updated following ADMM. However, in LISTA and other LISTA-like methods, most parameters are trained with back propagation. ADMM is just a way to build optimization-inspired networks, where each iteration corresponds to a single block in the network. What’s the difference between your model and these LISTA-like methods? And what’s the advantage of the proposed model.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments.
1. We thank the reviewer for the two suggested references. The first reference [1] indeed provides unrolling techniques based on proximal gradient descent (Section 2 in [1]) and IRLS (Section 3). While the objective in equation (1) in [1] employs a quadratic graph smoothness term similar to our GLR term in equation (4) in our paper, their learned parameter W in (1) differs from our learned parameters alpha and beta in our unrolled conjugate gradient (CG) to solve a linear system. More importantly, [1] assumes a FIXED graph Laplacian matrix L, while the key insight in our paper is the graph learning module that learns an appropriate similarity graph from data---one that resembles the self-attention mechanism in transformer. This insight enables us to interpret our unrolled neural net as a transformer, and explains its good performance that is comparable to SOTA transformer implementations. The unrolling of IRLS in Section 3 aims at improving robustness to edge connectivity errors in their defined unweighted Laplacian matrix; in contrast, we assume that the learned graph is sufficiently reliable for us to build a low-pass filter to compute the output. Thus, we believe that IRLS unrolling is orthogonal to our contributions.
The second reference [2] addresses the problem of "oversmoothing" in GCN using two techniques: initial residual and identity mapping. This work also assumes a FIXED graph, while the key insight of our work is again the graph learning module from data that is akin to the self-attention mechanism in transformer. Thus, we believe [2] is orthogonal to our contributions. We will add an abridged discussion of the first paper in the final draft of this paper.
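For illustration, the quadratic graph smoothness (GLR) prior discussed above can be evaluated on a toy undirected graph as x^T L x. The sketch below is our own minimal example; the graph, signal values, and function names are illustrative and not from the paper:

```python
import numpy as np

# Toy undirected graph on 3 nodes: symmetric weighted adjacency matrix W.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # combinatorial graph Laplacian

def glr(x, L):
    """Graph Laplacian regularizer x^T L x = sum over edges of w_ij * (x_i - x_j)^2."""
    return float(x @ L @ x)

x_smooth = np.array([1.0, 1.0, 1.0])   # constant signal -> zero roughness
x_rough  = np.array([1.0, -1.0, 0.0])  # sign flip across an edge -> high roughness

print(glr(x_smooth, L))  # 0.0
print(glr(x_rough, L))   # 4.5
```

A small GLR value indicates a signal that varies slowly across strongly connected nodes, which is why minimizing it acts as a low-pass (smoothing) prior.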
2. Convergence of an ADMM algorithm for a strictly convex optimization objective is established in several notable works, for example:
Robert Nishihara, Laurent Lessard, Ben Recht, Andrew Packard, Michael Jordan, "A General Analysis of the Convergence of ADMM", Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:343-352, 2015.
We will reference these papers in the final draft of the paper.
3. We would like to clarify that to minimize the GTV objective in equation (11), we develop an ADMM algorithm that iteratively minimizes different terms until convergence. The unrolled algorithm dictates an architecture of a feed-forward neural net (see Fig. 1), whose parameters are optimized end-to-end via back-propagation and stochastic gradient descent. In this respect, our parameter tuning is no different from LISTA [15] and many other algorithm unrolling works in the literature such as [16]. Unlike previous algorithm unrolling works, however, the key insight in our work is to connect our graph learning module with the self-attention mechanism in transformer, so that our unrolled neural net, with periodic insertion of the graph learning module, can be interpreted as a transformer also, albeit with a drastic reduction in parameter size. To the best of our knowledge, we are the first to draw the connection between unrolled graph-based algorithms and transformer, and demonstrate competitive performance, while reducing the number of parameters significantly.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the response. I know that the convergence of the ADMM algorithm has been well established by prior work. However, I wonder if the convergence of the objective still holds when all network parameters are updated by the back propagation and SGD as you mentioned.
---
Rebuttal 2:
Title: Response to Reviewer rsiC on convergence of unrolled ADMM
Comment: We thank the reviewer for the 2nd round comment, which clarifies an earlier question. In our implementation, we first select the number of iterations, say K, until the model-based iterative ADMM algorithm typically converges, and then we correspondingly “unroll” the K iterations into a fixed number of K neural layers. This means that before back-prop and SGD, the unrolled neural net already achieves convergence to ADMM solution x* typically. Then, during SGD in our supervised learning setting, we empirically observe that the updated network parameters (including the graph learning parameters, conjugate gradient parameters, Lagrange multipliers, etc) only guide the converged x* towards an improved output z* that further reduces the defined end-to-end loss function, resulting in better signal interpolation performance. Thus, SGD does not affect the convergence of ADMM to a fixed point solution.
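As a hedged illustration of the unrolling step described above (a generic sketch, not the paper's ADMM algorithm), the snippet below shows how K iterations of an iterative solver map one-to-one onto K "layers" with per-layer step parameters; with identical parameters, the two compute the same output, and in an unrolling framework the per-layer parameters would then be refined by back-propagation:

```python
import numpy as np

def iterative_solver(A, b, step, K):
    """K iterations of gradient descent on 0.5 * ||Ax - b||^2."""
    x = np.zeros_like(b)
    for _ in range(K):
        x = x - step * (A.T @ (A @ x - b))   # one iteration of the algorithm
    return x

def unrolled_net(A, b, steps):
    """Same computation 'unrolled': layer k has its own step-size parameter.

    In a real unrolled network, `steps` would be trained by back-propagation;
    here they are fixed to show the iteration-to-layer correspondence."""
    x = np.zeros_like(b)
    for s in steps:                          # one 'neural layer' per iteration
        x = x - s * (A.T @ (A @ x - b))
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 2.0])
K = 50
x_iter = iterative_solver(A, b, 0.2, K)
x_unrl = unrolled_net(A, b, [0.2] * K)
assert np.allclose(x_iter, x_unrl)   # identical when all step sizes match
```

This matches the point made in the response: the unrolled network already reaches the iterative solution before training, and training only nudges the per-layer parameters toward a better end-to-end loss.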
We are aware of recent works that theoretically study the convergence of specific unrolled ADMM algorithms, for example:
W. Pu, Y. C. Eldar and M. R. D. Rodrigues, "Optimization Guarantees for ISTA and ADMM Based Unfolded Networks," ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, 2022,
As future work, we can similarly study the theoretical convergence of our graph-based unrolled ADMM network, in the particular case when a graph learning module is periodically inserted.
---
Rebuttal Comment 2.1:
Comment: Thanks for the reply. This solves my problems and I have no more questions. I will increase my rating. | Summary: This paper proposes a novel approach to build interpretable and lightweight transformer-like networks by unrolling graph-based algorithms. The authors propose unrolling iterative graph-based algorithms for signal restoration with graph smoothness priors (minimizing roughness) into interpretable neural network layers. Two graph smoothness priors are explored: graph Laplacian regularizer (GLR) and graph total variation (GTV). Experimental results demonstrate the advance of proposed methods.
Strengths: S1. The paper details the mathematical formulation for both GLR and GTV based optimization problems.
S2. The proposed methods achieve good restoration performance with significantly fewer parameters compared to conventional transformer-based methods.
S3. The authors show robustness of the proposed approach to covariate shifts.
Weaknesses: W1. My major concern is about the claim related to interpretability. The paper claims the proposed network is interpretable. However, the specific mechanisms for interpretability are unclear to me. Could the authors provide additional details or visualizations (e.g., activation maps) to support this claim?
W2. While the paper explains the connection between the proposed graph learning module and the self-attention mechanism in transformers, there is still a large gap between the two, as the authors acknowledged on page 7. Furthermore, it's unclear why the authors highlight the similarity. Is it because the performance advantage directly stems from this connection? Can the authors clarify the purpose of this comparison?
W3. While the paper shows a reduction in parameter size for the proposed methods, a runtime analysis compared to existing methods would be beneficial.
W4. The inclusion of commonly used metrics like cPSNR and SSIM would strengthen the evaluation. Besides, adding more baselines can be more convincing.
W5. The paper explores two priors, GLR and GTV. Can the authors discuss the advantages and disadvantages of each, and provide guidance on when to choose one over the other?
W6. Why is there only "iGTV" in the experimental results, how about "iGLR"?
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses section
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors don't discuss the limitations of the methods
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments, which we respond point-by-point below.
1. Interpretability in the context of "algorithm unrolling" (see [14, 15, 16]) means that each neural layer corresponds to an iteration in an iterative algorithm minimizing a formulated optimization objective. For example, each block in our ADMM layer in Fig. 1(c) corresponds to an equation we derive in our ADMM-based algorithm to minimize our GTV objective in equation (11). This is in stark contrast to off-the-shelf deep neural nets such as transformer, where individual neural layers and their combinations are not mathematically interpretable. [16] makes the same argument that the layers of their unrolled transformer-like network minimizing a sparse rate reduction (SRR) objective are interpretable. [14] provides a good recent overview of SOTA unrolled algorithms and the interpretability they provide.
2. First, while we acknowledge that there are MINOR differences between the notion of affinity in self-attention in equation (23) and the notion of similarity in graph learning in equation (25), we do argue explicitly that "the normalized edge weights in (25) are essentially the same attention weights in (23)" (second last paragraph in Section 5.2). THERE ARE NO LARGE GAPS. Second, drawing the connection between graph learning and conventional self-attention in transformer enables an "interpretation" of the graph learning module as a self-attention mechanism, and thus the unrolling of a graph-based algorithm (such as GLR-/GTV-based interpolation algorithms in our paper) can be interpreted as a transformer---the title of the paper. It also helps explain why our unrolled GTV neural net has comparable signal interpolation performance with SOTA transformer implementation, while employing a fraction of the network parameters.
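To make the symmetry point concrete, here is our own toy sketch (not equations (23)/(25) from the paper) of row-normalized edge weights computed from pairwise feature distances; the underlying distance matrix is symmetric, unlike the query/key affinities of conventional self-attention:

```python
import numpy as np

def graph_attention_weights(F, sigma=1.0):
    """Row-normalized edge weights w_ij proportional to exp(-||f_i - f_j||^2 / sigma).

    Illustrative sketch: the pairwise feature distance is symmetric
    (d_ij = d_ji), so the learned graph is undirected; conventional
    self-attention with distinct Q and K matrices is asymmetric instead."""
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)  # squared distances
    W = np.exp(-d2 / sigma)                              # symmetric similarities
    return W / W.sum(axis=1, keepdims=True)              # softmax-like rows

F = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])  # 3 tokens, 2-D features
A = graph_attention_weights(F)
assert np.allclose(A.sum(axis=1), 1.0)  # rows sum to 1, like attention weights
```

Note that only the similarity matrix before row normalization is symmetric; after normalization, the rows behave like per-token attention distributions.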
3. While we claim and demonstrated a drastic reduction of parameter size in our lightweight and interpretable transformer-like neural net relative to conventional transformers, we never claim a reduction of runtime (during inference), which is related to the number of neural layers. Note that both our GLR and GTV algorithms are linear time in complexity, and thus the number of unrolled neural layers are also linear w.r.t. the input size. During inference, this would mean that our execution time is comparable to other linear time algorithms / neural nets.
4. We are unsure what the reviewer meant by "cPSNR"; to the best of our knowledge, there is no standard and widely accepted image quality assessment metric by that name. We do clarify that for demosaicking, our posted PSNR numbers are averages computed over the three color components, R, G and B, while for interpolation, our posted PSNR numbers are for the Y-component in Y-Cr-Cb color space. We have added SSIM numbers in the attached PDF along with more baseline comparison schemes.
5. We did compare our GLR and GTV algorithms in the last paragraph in Section 4.2.3. The main point here is that unrolling of the GTV algorithm is more intricate and enables more network parameters for end-to-end learning, and thus results in better performance (as demonstrated in Section 6) if sufficient training data are available.
6. We have provided experimental results for iGLR (the GLR algorithm without unrolling and end-to-end parameter tuning) in the added PDF.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The authors' rebuttal has addressed my concerns, and I have no further questions. | Summary: The authors build a "white-box" Transformer-like neural network through algorithm unrolling and graph signal processing. This network utilizes convolutions to learn low-dimensional features per node, constructs sparse similarity graphs, and employs low-pass filtering at each layer, significantly reducing the parameter count compared to conventional transformers. The experimental results demonstrate its parameter efficiency and robustness to covariate shift in image interpolation tasks.
Strengths: 1. This paper utilizes algorithm unrolling and graph signal processing to construct a lightweight white-box Transformer-like model.
Weaknesses: 1. In reality, directed graph signal processing (DGSP) is much better than graph signal processing (GSP), as the latter is limited to undirected graphs. Therefore, the method proposed in this paper merely approximates symmetric self-attention, where the query equals the key, and cannot mimic or surpass general self-attention.
2. The experiments are not enough to fully validate the theory.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Since this is a white-box Transformer-like neural network, why not use some Transformer benchmarks such as long-range arena for the experiments?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. The reviewer raised a good point that the affinity notion in conventional self-attention mechanism in equation (23) using query and key matrices, Q and K, is asymmetric, while the feature distance in graph learning in equation (25) is symmetric, leading to an undirected graph. However, one key goal of our paper is interpretability, and given an undirected graph with well understood notion of graph frequencies (e.g., eigenvectors of eigen-decomposable symmetric graph Laplacian matrix), our GLR and GTV minimization algorithms both lead to interpretable low-pass filters of the up-sampled observations (see last paragraph of Section 4). In contrast, notion of graph frequencies for directed graphs is still actively being investigated; for example, see the following recent publications proposing different frequency definitions for directed graphs:
H. Kitamura, H. Yasuda, Y. Tanaka and S. Muramatsu, "Realization of Digraph Filters Via Augmented GFT," 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 2023.
S. Kwak, L. Shimabukuro and A. Ortega, "Frequency Analysis and Filter Design for Directed Graphs with Polar Decomposition," ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Republic of, 2024.
Thus, constructing an interpretable low-pass filter of the input for directed graphs is significantly more difficult at the current time. We leave the generalization of our framework to directed graphs for future work. Nonetheless, we demonstrated in this paper that the symmetric notion of similarity for an undirected graph is already sufficient to produce unrolled neural nets with SOTA image demosaicking and interpolation performance.
2. We thank the reviewer's suggestion to use benchmark datasets in long-range arena for comparisons. One key design that enables a lightweight and interpretable transformer-like neural net is the mapping from high-dimensional input embeddings to low-dimensional feature representations (from which we compute feature distances and edge weights), which we made possible using shallow CNNs--well-known in imaging for relevant feature generation. While we believe our framework can be generally applicable to other applications beyond imaging, analogous mapping functions to low-dimensional feature representations are needed for languages and other data types. We are currently engaging experts in other domains to build such lightweight transformers.
Nonetheless, in the attached PDF we provide a more comprehensive comparison with additional baseline schemes in image demosaicking and interpolation, as requested by another reviewer.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors response. Unfortunately, the authors did not base the theoretical explanation on directed graph signal processing, which makes the theoretical explanation in this paper insufficient.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer oH21 on sufficiency of undirected graph signal processing
Comment: We thank the reviewer for the 2nd round comment. With due respect, we fail to understand why theoretical analysis on undirected graphs is insufficient. We reiterate that our goal (as stated in our paper title) is to unroll an iterative optimization algorithm based on an undirected graph smoothness prior (GLR or GTV) into an interpretable “white-box” neural net, and show it is a form of transformer achieving SOTA results in signal interpolation. In so doing, for the first time in the literature, we provide a clear and intuitive connection between the plethora of undirected graph-based algorithms in the vibrant GSP community that studies them and the latest deep learning architectures like transformers. Further, our theoretically derived low-pass filters for signal interpolation enable us to eliminate the value matrix V in conventional self-attention, resulting in a drastic reduction in parameter size.
OUR GOAL IS NOT TO ANALYZE THE CONVENTIONALLY DEFINED SELF-ATTENTION MECHANISM using query and key matrices in standard “black-box” transformers, which is a form of directed graph. In fact, we demonstrate that (at least for image demosaicking / interpolation) the symmetric definition of self-attention is already SUFFICIENT to achieve SOTA performance—further generalization to a directed graph is not necessary here. Thus, we are confused why the reviewer presumes that “directed graph signal processing (DGSP) is much better … in reality” (1st round comment) and our “theoretical explanation is insufficient” (2nd round comment). As already stated in our rebuttal, defining frequencies on directed graphs is still an open problem in the GSP community, and thus expecting spectral analysis of signals on directed graphs is not reasonable, in our humble opinion. For more background, see:
A. G. Marques, S. Segarra and G. Mateos, "Signal Processing on Directed Graphs: The Role of Edge Directionality When Processing and Learning From Network Data," in IEEE Signal Processing Magazine, vol. 37, no. 6, pp. 99-116, Nov. 2020. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful and detailed comments. We responded to each reviewer individually in separate rebuttals below. We also attached a PDF containing extra experimental results requested by two reviewers.
Pdf: /pdf/4fcec15dcf9a6bae7730cd682945a7cf5e68ed04.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$\textit{Read-ME}$: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Accept (poster) | Summary: This paper proposes to perform data-specific pruning from a regular LLM and, with a small amount of continual training, builds a set of smaller LLMs. Then they are used as experts, and a router network is trained to route requests to them. Note that the routing decision is one per query and not per token or per layer, and hence the proposal is very different from regular MoE. Empirical results are presented with Llama-7B as the starting point and show that the resulting ensemble has good accuracy-latency tradeoff.
Strengths: Please see the summary. The results seem sound.
Weaknesses: The main results in Table 1 do not support the claim that the proposed method achieves a better accuracy vs. inference-cost trade-off. The average accuracy of the proposed method is only marginally better than Sheared-Llama's, yet its effective inference size of 4.7B is much larger than 2.7B. In the other direction, comparing against the Llama-7B starting point, the overall accuracy cost of pruning and ensembling is quite substantial, from 61.4% to 55.5%.
In addition to the tasks in Table 1, it would help to add a perplexity comparison.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort for the review. We answered your questions as follows.
**[W1 - Effectiveness of Read-ME]** Thank you for your question.
- We would like to first clarify that our method significantly outperforms Sheared-Llama. For example, Sheared-Llama nearly performs at random guess levels on the MMLU task (achieving only 26.4% accuracy on a 4-class classification), whereas Read-ME achieves a much higher accuracy of 38.9%.
- Second, our method is also more training-efficient, using only 1 billion tokens compared to Sheared-Llama's 50 billion (50x more than ours). To further back up our claim, we performed continued tuning of Sheared-Llama using the same number of tokens as ours. We show the downstream task performance in Table A. With only 1B tokens for tuning, the performance of Sheared-Llama drops to 48.2%, significantly lower than our method (55.5%). Note that we use the publicly released Sheared-Llama pruning checkpoint [1] as the initialization.
- Additionally, our method aims to convert a dense pre-trained model into a MoE for efficient inference, and such a compression method is typically not lossless. However, our method achieves a superior accuracy-cost trade-off compared to all baseline compression methods. To validate our approach, we plot the MMLU performance of our method alongside a large number of baseline methods in Figure 4. This data is also presented in tabular form in Table B. Our method outperforms all baseline models/methods of similar sizes.
Table A: Our method is more training-efficient, using only 1 billion tokens compared to Sheared-Llama's 50 billion (50x more than ours). When Sheared-Llama is run with only 1 billion tokens, its performance drops significantly.
| Method | Cost | MMLU | Hell. | Wino. | ARC-E | ARC-C | LogiQA | CoQA | avg. |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Sheared-Llama | 50B | 26.4% | 70.8% | 67.0% | 67.0% | 41.2% | 28.3% | 71.7% | 53.2% |
| Sheared-Llama-efficient | 1B | 25.4% | 59.4% | 61.9% | 65.8% | 35.8% | 25.1% | 64.2% | 48.2% |
| Read-ME | 1B | 38.9% | 68.5% | 67.7% | 66.6% | 42.3% | 29.7% | 74.8% | 55.5% |
Table B: Evaluation of Read-ME on MMLU benchmark, compared with other representative open-source models and compression techniques.
| Method | OpenMoE | Sheared-Llama | Pythia | Open-Llama-v2 | LLM-Pruner | Compresso | **Read-ME** | LaCo | SliceGPT | Pythia | Open-Llama-v2 | Llama-2 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| #Param | 2.1B | 2.7B | 2.8B | 3.4B | 4.5B | 4.5B | **4.7B** | 4.7B | 4.8B | 6.9B | 6.9B | 6.9B |
| MMLU | 26.2% | 26.4% | 26.9% | 25.7% | 23.9% | 25.9% | **38.9%** | 26.5% | 28.9% | 25.5% | 40.2% | 45.3%|
[1] https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned
**[W2 - Perplexity Comparison]**
Thank you for your suggestion. We tested the perplexity on Wikipedia, with the results presented in Table C. We set the sequence length to 1024, which is within the sequence limit of all models.
Table C demonstrates that our method shows negligible performance degradation (from 3.91 to 3.94) compared to Llama-2, the pre-trained dense model.
Furthermore, our method achieves better performance compared to most of the baseline methods of similar sizes, including Sheared-Llama, and Pythia. We will add the column to Table 1 in the future version, and we have also provided a tentative Table 1 in the uploaded PDF.
Table C: Comparison on Wikipedia perplexity.
| Model | # Params | Wikipedia PPL |
|:---:|:---:|:---:|
| Sheared-Llama | 2.7B | 6.77 |
| Pythia | 2.8B | 6.02 |
| Open-Llama-v2 | 3.4B | 3.69 |
| Read-ME | 4.7B | 3.94 |
| Pythia | 6.9B | 5.49 |
| Open-Llama-v2 | 6.9B | 2.85 |
| Llama-2 | 6.9B | 3.91 |
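For reference, perplexity numbers like those in Table C are conventionally computed as the exponential of the average per-token negative log-likelihood; a minimal sketch with illustrative values (not the evaluation code used for the table):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token).

    `token_logprobs` are natural-log probabilities the model assigns to
    each ground-truth token; the values below are made up for illustration."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model assigning probability 1/4 to every token has perplexity 4.
lp = [math.log(0.25)] * 10
print(perplexity(lp))  # 4.0
```

Lower perplexity means the model assigns higher probability to the held-out text, which is why a drop from 3.91 to 3.94 represents only a slight degradation.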
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for the discussion and has raised my rating from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thorough review and for the thoughtful comments you provided on our work.
We have made the necessary revisions in response to your comments, and we hope that our revisions meet your expectations.
If there are any remaining concerns or if you have any further suggestions that could help us improve the quality of our work, please do not hesitate to let us know. We would greatly appreciate any further guidance you can provide.
Once again, we are truly grateful for your support.
Authors | Summary: The paper proposes a novel framework to enhance the efficiency of pre-trained LLMs by transforming them into post-hoc Mixture-of-Experts (MoE) models. The key innovation lies in decoupling the router from the MoE backbone, which facilitates pre-gating and lookahead scheduling, thereby improving memory management and batching during inference. The proposed method, Read-ME, demonstrates significant improvements in both latency and task performance compared to existing models.
Strengths: 1. The paper nicely bridges the gap between algorithmic advancements and system-level optimizations. By addressing both fronts, the proposed framework ensures that the improvements in model architecture translate into real-world performance gains.
2. To the best of my knowledge, the introduction of a pre-gating, shared router decoupled from the MoE backbone is a significant innovation. Although breaking down dense LLMs into MoEs alone has been done before, this new gating approach allows for pre-computation and lookahead scheduling, addressing inefficiencies in traditional layer-wise routing systems.
3. The expert-aware batching and optimal expert caching algorithms are well-designed to leverage the pre-gating architecture, showing clear improvements in mean and tail latency.
4. The paper provides comprehensive experimental results that validate the effectiveness of the proposed approach. The improvements in MMLU performance and latency reductions are significant and clearly presented. The comparison with various baselines, including both dense and MoE models, is thorough and demonstrates the superiority of Read-ME across different metrics.
Weaknesses: 1. The experimental subject is limited. While the paper demonstrates impressive results on the LLaMA-2 model, the scalability and generalization of the proposed method to other model types and larger scales are not extensively explored. It would be essential to include experiments on more diverse datasets and larger models.
2. Although the paper claims minimal overhead for the auto-regressive router, the detailed analysis of its computational costs relative to traditional routers could be expanded. Can you provide a more detailed breakdown of the computational overhead introduced by the auto-regressive router compared to traditional routers? How does this impact overall inference latency, especially in high-throughput scenarios?
3. The paper could benefit from a more detailed explanation of the pre-gating and batching algorithms, potentially with pseudocode or flow diagrams to aid reproducibility. Sometimes I cannot fully follow the authors’ texts.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness. Overall, this seems to be reasonably solid LLM co-design work, but my main concern is its limited evaluation to Llama2 only. Reporting addition experiments on Gemma or Mistral would be strongly encouraged.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W1 - Additional experimental results]** Thanks for the suggestion! To validate that our method remains effective in other scenarios, we use the Mistral model as the pre-trained dense model and convert it to the MoE structure with the proposed method. The task is challenging because we do not have prior knowledge of Mistral's original training data, and our experiment in Table A shows that our method remains effective without knowledge of the original training domain.
Table A: Ablation study on Mistral[1] pre-trained model.
|Method | Pre. domain | FT. Domain | # Param | MMLU | Hell. | Wino. |ARC-E | ARC-C | LogiQA | CoQA | avg. |
|-|-|-|-|-|-|-|-|-|-|-|-|
| ReadME-mistral | N/A | Red-Pajama | 4.7B - (17B) | 39.2% | 79.1% | 68.2% | 77.1% | 49.3% | 30.9% | 76.2% | 60.0% |
| mistral | N/A | - | 6.9B | 62.1% | 84.5% | 79.3% |82.7% | 63.7% |33.5% | 80.3%| 69.4%|
| ReadME-Llama-2| Red-Pajama | Red-Pajama | 4.7B - (17B) | 38.9% | 68.5% | 67.7% |66.6% |42.3% | 29.7%| 74.8% | 55.5%|
| Llama-2 | Red-Pajama | - | 6.9B | 45.3% | 78.6% | 69.3% | 76.4% | 53.0% | 31.0% | 75.9% | 61.4%|
[1] Mistral 7b.
**[W2 - Computational cost of Auto-regressive Routers]**
Thanks for the suggestion!
For a detailed analysis, we added: (1) FLOPs comparison, (2) latency, and (3) latency breakdown with a larger batch size (high-throughput scenarios) of a Traditional Router (TR) and an Autoregressive Router (AR). To focus solely on the router’s impact on latency, we controlled other variables (e.g., the number of activated parameters) to be the same.
(1) FLOPs comparison
Table B. FLOPs comparison between the Traditional Router and the Auto-regressive Router
| | Traditional Router | Auto-regressive Router |
|-|-|-|
| FLOPs/sample | 4.7 KFLOPs | 3 KFLOPs |
(2) latency [ms]
Table C. Latency comparison between Traditional router and Auto-regressive router
| | bsz=5 | bsz=5 | bsz=10 | bsz=10 | bsz=20 | bsz=20 | bsz=30 | bsz=30 |
|-|-|-|-|-|-|-|-|-|
| | TR | AR | TR | AR | TR | AR | TR | AR |
| Router | 1.76 | 0.61 | 1.80 | 0.61 | 1.78 | 0.61 | 1.93 | 0.61 |
| Attention | 18.13 | 18.18 | 18.28 | 18.13 | 18.49 | 18.36 | 19.59 | 19.66 |
| Expert/MLP | 22.43 | 21.75 | 24.59 | 22.53 | 24.97 | 22.99 | 30.17 | 28.31 |
| Sum | 42.31 | 40.55 | 44.67 | 41.27 | 45.23 | 41.96 | 51.69 | 48.59 |
(3) latency breakdown [%]
Table D. Latency breakdown comparison between Traditional router and Auto-regressive router
| | bsz=5 | bsz=5 | bsz=10 | bsz=10 | bsz=20 | bsz=20 | bsz=30 | bsz=30 |
|-|-|-|-|-|-|-|-|-|
| | TR | AR | TR | AR | TR | AR | TR | AR |
| Router | 4.15% | 1.50% | 4.02% | 1.48% | 3.93% | 1.46% | 3.74% | 1.26% |
| Attention | 42.85% | 44.85% | 40.92% | 43.92% | 40.87% | 43.75% | 37.90% | 40.47% |
| Expert/MLP | 53.01% | 53.65% | 55.06% | 54.59% | 55.20% | 54.80% | 58.36% | 58.27% |
| Sum | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% |
Note that the computational cost of both the traditional router and the auto-regressive router is *theoretically linear in batch size*. Therefore, when the batch size is large (in high-throughput scenarios), the cost increases linearly. In both cases, the computation can be parallelized, so the router remains a negligible fraction of end-to-end latency even in high-throughput scenarios.
In fact, we would like to clarify that *the bottleneck in high-throughput scenarios is actually the expert layers*, as seen in the Expert/MLP row of Table D. This issue can be addressed by the methods discussed in Section 4. *Traditional layerwise routers do not allow for efficient system design*, which underscores the need for a careful co-design of routers.
**[W3 - Reproducibility and Pseudocode]**
In Appendix A, we provide the pseudocode for the batching algorithm and pre-gating. In summary, at each scheduling step, we find the expert with the most requests and select that expert for the current step. We then check whether the scheduled tokens exceed the maximum token length or the maximum number of requests that can fit. This process is repeated until no more requests can be scheduled. We will release the code publicly and refine the text to improve understanding as well.
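For illustration, the greedy scheduling loop summarized above might be sketched as follows. This is a simplified, hypothetical sketch: the request representation and names such as `max_tokens` and `max_requests` are illustrative only; Appendix A contains the exact pseudocode.

```python
from collections import defaultdict

def schedule_batch(requests, max_tokens, max_requests):
    """Greedy expert-aware batching sketch (illustrative, not the exact algorithm).

    Each request is a (request_id, expert_id, num_tokens) tuple whose expert
    assignment is already known from offline pre-gating.
    """
    by_expert = defaultdict(list)
    for req in requests:
        by_expert[req[1]].append(req)

    batch, tokens_used = [], 0
    while by_expert:
        # Step 1: pick the expert with the most pending requests.
        expert = max(by_expert, key=lambda e: len(by_expert[e]))
        scheduled_any = False
        # Step 2: schedule that expert's requests while they fit.
        for req in list(by_expert[expert]):
            rid, _, ntok = req
            if tokens_used + ntok <= max_tokens and len(batch) < max_requests:
                batch.append(rid)
                tokens_used += ntok
                by_expert[expert].remove(req)
                scheduled_any = True
        if not by_expert[expert]:
            del by_expert[expert]
        # Step 3: repeat until no more requests can be scheduled.
        if not scheduled_any:
            break
    return batch
```

Because all tokens in the batch share the same pre-computed expert path, this keeps the number of activated experts per step small, which is the property the latency tables above measure.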
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I thank the authors for their rebuttal and supplemented experiments. I appreciate the reply, and I found my concerns adequately addressed. I will raise the score to 7 in response to the authors' rebuttal.
---
Reply to Comment 1.1.1:
Title: Many thanks for raising the score
Comment: Thank you very much for your insightful suggestions, which have been greatly enlightening and are crucial for enhancing the quality of our paper! We will adhere to these suggestions in the final version and also revise the paper according to all other comments. | Summary: This paper proposes Read-ME, a novel framework for pruning large LLMs into smaller MoEs with minimal training cost. Read-ME separates the gating routers from the critical paths of the inference process and trains an individual expert subnetwork to perform offline pre-gating. With this refactorization of MoEs, the paper further demonstrates the effectiveness of Read-ME by designing optimal expert pre-fetching and caching and expert-aware batching for low-latency and high-throughput serving. Extensive experiments show that Read-ME produces small MoEs that outperform existing small dense models with significantly higher inference performance, lower latency, and higher memory efficiency.
Strengths: - The paper is well-written, well-organized, and easy-to-follow.
- The idea of pruning LLMs into smaller MoEs is quite interesting.
- Experiments are thorough and extensive.
Weaknesses: - Overall, this is a very interesting work. My biggest question is: what's the motivation behind pruning LLMs into smaller MoEs? If I need a smaller model, why don't I just prune an LLM and have a small but dense model, if the small dense model has approximately the same (or even less) number of activated parameters?
- The observation of Figure 2 is interesting. However, what's the dataset (or input tokens) you use for plotting this figure? Does this observation still hold if the dataset (or input tokens) changes significantly?
- Section 2.3, "The above observations suggest that among many routing paths, only a few of them are in use during the inference..." with only mutual information between adjacent layers may not justify this assumption. It would be better to visualize the end-to-end routing paths (from the first to the last layer) and explicitly show that only a few paths are used frequently.
- The idea of separating routing logic from the inference process is not new. For example, [1] tries to distill the knowledge of gating networks and performs offline routing. How do you compare your expert subnetwork with [1]? Does training the expert subnetwork cost more than KD-based methods?
- To follow up on the previous comment, the evaluation does not compare Read-ME with any existing offline routing approaches like [1]. If the major benefits of Read-ME come from separating routing logics, then the paper's contributions would decrease.
- In evaluation, comparing Read-ME 4.7B with those baselines with fewer parameters may be unfair. Since Read-ME is pruned from much larger LLMs and still holds more activated parameters than baselines, it wouldn't be surprising to see that Read-ME has the best performance. This performance increase comes at the price of requiring more GPU memory. Once again, this question goes back to the motivation: why would someone need to prune LLMs into smaller MoEs instead of smaller dense models? Comparing Read-ME MoEs with smaller dense models pruned from the same LLM may help address this question.
- This pruning process may need a theoretical justification on the performance guarantee, i.e., smaller MoEs are guaranteed to not suffer large performance degradations.
[1] SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Model. MLSys'24
Minor issues:
- Figure 2(c) is never mentioned.
- Figure 3 does not have indices for sub-figures.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to the Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please refer to the Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for all the interesting questions. Please see below.
**[W1 - MoEs Motivation]** We validated that MoE achieves better cost-accuracy trade-off than small dense models acquired by pruning, and provided the results in Appendix C.1.
We mentioned that prior compression efforts focus on converting large dense pre-trained models into smaller dense models. However, we argue that a smaller MoE model, with fewer activated parameters, is a better target architecture. To ensure a fair comparison, we (1) create a 4.7B-parameter dense model matching the size of a single expert network, and (2) fine-tune it for the same number of steps.
Table 5 shows refactorizing the pre-trained model into an MoE structure, rather than a smaller dense variant, leads to significant performance improvement.
Table 5: By adopting an MoE as the target structure instead of a dense model, our model achieves better overall performance.
|Eval|Arxiv|Books|C4|CC|Github|Stack.|Wiki.|MMLU|
|-|-|-|-|-|-|-|-|-|
|Dense|5.63|1.94|11.78|9.68|3.75|13.42|6.24|27.1%|
|Read-ME|4.18|1.31|10.57|7.72|2.39|12.52|3.94|38.9%|
**[W2 - Figure 2 Details]** For Figure 2, we used the Arxiv dataset, a subset of Red-pajama. The observation holds with Wikipedia and Github subsets as well. Please see the uploaded PDF for visualization results; we will add these in a future version.
The visualized high mutual information between two adjacent layers’ expert selection is sufficient to support our claim. Since $H(S_1, …, S_L) \le \sum_l H(S_l) - \sum_l I(S_{l+1}; S_l)$, high layer-wise mutual information implies low-entropy (deterministic) path selection.
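As a toy, self-contained illustration of this inequality's intuition (our own sketch, not the estimator used in the paper): when each layer's expert selection deterministically copies the previous layer's, the adjacent-layer mutual information equals the marginal entropy, so the joint entropy of the path collapses.

```python
from collections import Counter
import math

def entropy(samples):
    """Empirical Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) = H(X) + H(Y) - H(X, Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Layer-(l+1) selections that deterministically copy layer-l selections:
layer_l = [0, 1, 0, 1, 0, 1, 0, 1]
layer_l_plus_1 = layer_l[:]
print(mutual_information(layer_l, layer_l_plus_1))  # 1.0 bit = H(layer_l)
```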
Regarding visualizing end-to-end routing paths, firstly we would like to mention that since the model has 32 layers and each layer performs Top-2 selection out of 8 experts, there will be ${8 \choose 2}^{32}$ (approximately $2 \times 10^{46}$) possible paths.
Instead, we calculated routing statistics for 180k tokens, finding only 8.4k paths out of $2\times 10^{46}$ possible paths activated at least once. The top 20 paths are selected by 2805 tokens, and the top 40 paths by 4947 tokens. The observation validates that only a small fraction of routing paths are frequently selected. Please see the uploaded PDF for more visualization.
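The combinatorial count above is easy to sanity-check (a throwaway arithmetic sketch):

```python
import math

# 32 layers, each selecting Top-2 out of 8 experts:
per_layer_choices = math.comb(8, 2)     # 28 expert pairs per layer
total_paths = per_layer_choices ** 32   # C(8,2)^32 end-to-end routing paths
print(f"{total_paths:.2e}")             # on the order of 2e46
```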
**[W3 - Comparison with SiDA]** SiDA is an alternative approach that separates the router from the inference path. However, Read-ME offers better opportunities to improve inference efficiency because it is designed to enable ***expert-aware batching.*** Additionally, we would like to emphasize that, by design, the Read-ME router provides ***exact expert selection,*** while SiDA and prior works offer approximate selection [1,2]. In detail,
(1) Batching and latency:
With SiDA, *tokens cannot be batched together* because they do not share routing decisions across all layers, unlike Read-ME (since SiDA is distilled from the Switch Transformer, it is not possible to change the routers' decisions). This vastly magnifies the expert space when batching and makes it difficult to compose an efficient batch that is aware of expert selection.
In effect, SiDA activates more experts for each batch at each layer, leading to increased latency. Table A compares the inference latency between SiDA and Read-ME. (SiDA did not release a checkpoint, so we used SwitchTransformer-8, the teacher model from which SiDA was distilled, as a proxy)
Table A. Inference latency of SiDA and Read-ME.
| |SiDA|Read-ME|
|-|-|-|
|Latency[ms]|62.34|48.59|
|Avg # of Activated Experts|5.51|3.51|
(2) Router accuracy:
SiDA’s offline routing function is an approximation method that is distilled from original layerwise routers. Thus, the accuracy of routing is not 100%. Table B shows the “failure rate” of the SiDA router (defined as prediction miss rate on the expert activation of the trained router), and Table C shows the resulting degradation in task performance. With an incorrect guess, the entire expert selection can fail, leading to a collapse in inference, especially as the model scales. In contrast, *our method is exact, ensuring no performance drop during inference*, regardless of model size.
Table B. SiDA Router failure rate
|#experts|SST2|MRPC|MultiRC|
|-|-|-|-|
|8|-1.00%|-2.59%|-8.26%|
|128|-1.22%|-1.35%|-9.51%|
Table C. Accuracy drop due to SiDA’s Router Failure
|#experts|SST2|MRPC|MultiRC|
|-|-|-|-|
|8|-1.75%|-2.51%|-1.05%|
|128|-6.98%|-7.41%|-7.44%|
(3) Training cost:
Note that SiDA is based on an MoE model and only distills the router, whereas our model is based on a dense model and builds both the expert network and the router, training them together. This means that SiDA's training time only accounts for the router training cost, while our training time includes both the router training cost and the expert specialization cost. Additionally, SiDA did not report the training cost in terms of tokens, making a meaningful comparison of training costs impossible.
In addition, our work introduces novel contributions to caching and prefetching, which SiDA lacks. SiDA relies on on-demand expert loading and FIFO-based expert eviction, which can negatively impact performance. Please refer to Section 4 for a detailed discussion of our contributions in these aspects. We will add discussion of SiDA in the revised version.
[1] Fast inference of mixture-of-experts language models with offloading
[2] SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Model
**[W4 - Theoretical Analysis]** In this work, we empirically examine the redundancy of layer-wise routers and propose a system-oriented method to convert a pre-trained dense model to an MoE with minimal additional training costs. While our focus is on empirical validation, providing a theoretical justification is beyond the scope of our paper and something we hope to explore in future study.
Thanks for catching the missing references and indices. We will definitely correct it in our revised version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the thorough answers from the authors. The rebuttal has addressed all my questions, provided interesting new results, and revealed the novelty of this paper.
In response to the rebuttal, I would like to raise my rating to help advance the possibility of acceptance for this paper.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your thoughtful comments and suggestions. We value your support and will be incorporating all your feedback in our revised version | Summary: The paper proposes an inference-aware method to convert a pretrained dense model into a Mixture-of-Experts architecture, where each expert is smaller than the original dense model. To extract the different experts, they use a dataset from given subdomain to identify the top activated channels. To route among the experts, the authors first break down the limitations of current per-layer routing schemes (waiting for all tokens at the layer to finish, redundancy across layers). They propose to decouple the router from the base model, and instead train a 1-layer transformer block as the router, which predicts token expert assignment autoregressively.
Overall, these changes allow for inference friendly deployment, by enabling careful batching of examples that share similar experts, and with better caching of experts in resource constrained settings. The authors show that their approach strikes a good trade-off between efficiency and performance.
Strengths: 1. The authors do a good job at breaking down the current bottlenecks in deploying MoE models
2. The proposed external routing approach is simple and effective
3. The latency evaluation and experimentation is well designed and clear
Weaknesses: 1. I am a bit surprised that the gains in latency compared to the seed llama models are somewhat small (19%) given that the number of active parameters is reduced by 30%, and that there is also a decent performance drop from the seed llama model. For resource constrained settings, how would the approach of loading llama layer-by-layer fare against Read-ME? Like Read-ME, this approach can know in advance what layers to load, and could hopefully retain the performance of the full llama model.
2. Some parts of the analysis require further clarification (please see questions)
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. If you keep the top activated neurons, should equation 3 be an argmax ?
2. What is the impact of the routing distillation loss ? Can you provide an ablation experiment where it is not used to access its impact ?
3. How would figure 3 look with READ-ME instead of standard MoEs ? This would be a good visualization to compare the methods.
4. How were the baseline numbers in Figure 4 obtained ? Did the authors rerun the baselines ? Can you confirm that all these baselines start from the same base model ?
5. The "cost" column of table 1 is misleading; given that your model starts from llama-2, you must either include the compute to create llama-2 in your analysis, or only monitor the additional compute starting from llama 2 (in which case the cost for llama 2 would be 0).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W1 - Comparison with layer-by-layer loaded Llama ]**
Thanks for the question. We compared the latency of the layer-by-layer loaded llama and Read-ME model in the following table.
Table A. Latency comparison
|Method| Latency [ms] |
|---------|--------------|
| Layer-by-layer Llama-7b | 111.909 |
| Read-ME | 91.531 |
The difference arises from the size of the parameters to be loaded. The size of Read-ME's expert layer (including both the top-1 expert and residual expert) is approximately 25.0% that of the MLP counterpart in the LLaMA baseline. Regarding peak memory usage, Read-ME exhibits a 10% lower profile, though both models consume an insignificant amount of memory overall.
Our method converts a dense pre-trained model into a MoE for efficient inference. Although this compression is generally not lossless, please note that our approach offers a better accuracy-cost trade-off than all baseline compression methods.
**[Q1 - Understanding of Equation 3]** Sorry for the confusion. We found this is a typo in our original submission. As we are selecting the mask $\boldsymbol{M}$ to maximize the magnitude of activated channels, the operator should indeed be $\arg\max$ in Equation 3.
**[Q2 - Ablation on Routing distillation loss]** Thank you for the suggestion. We have ablated the routing distillation loss by removing the router training step and using a random routing mechanism instead, tuning only the expert weights using the language modeling loss. To ensure a fair comparison, we maintained the same number of training tokens and the same training schedule. The resultant performance is provided in Table B. The results show that with the routing distillation loss, the average downstream task accuracy increased from 51.5% to 55.5%, validating the necessity of the routing distillation loss.
Table B: Ablation study on routing distillation loss. We compare the performance with and without the routing distillation loss, while keeping the number of training tokens, to validate the necessity of routing distillation loss.
| Method | MMLU | Hell. | Wino. | ARC-E | ARC-C | LogiQA | CoQA | avg.|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Read-ME (w/ routing distillation loss)| 38.9% | 68.5% | 67.7% | 66.6% | 42.3% | 29.7% | 74.8% | 55.5% |
| Read-ME (w/o routing distillation loss)| 30.6% | 65.1% | 65.8% | 64.2% | 39.9% | 24.9% | 69.7% | 51.5% |
**[Q3 - Read-ME compared to Figure 3]**
This is a great suggestion! Please check Fig 3 of the PDF uploaded.
In Fig 3-left, Read-ME batches tokens directed to the same expert, so all arrows point to a single expert at each layer. Also, there is only one router instead of three, resulting in all arrows following the same path from layer 1 to layer 3, leading to a unique activated expert count of 1.
In Fig 3-middle, Read-ME shifts the distribution to the lower left, where most points stay within the range of x < 4.5 and y < 55.
Fig 3-right already compares the traditional approach with Read-ME.
**[Q4 - Baseline numbers in Figure 4]** Thanks for asking.
- MMLU is a common evaluation benchmark for LLMs, so most of the numbers in Figure 4 are collected from the original papers, including OpenMoE, Llama-2, LaCo, and Compresso. For Sheared-Llama, Open-Llama, and Pythia, which do not report MMLU in their original papers, we test their publicly released checkpoints with the lm-eval-harness library [1]; LLM-Pruner and SliceGPT neither reported MMLU nor released checkpoints, so we use the numbers reported in LaCo [3].
- Yes, all of the methods reported in Figure 4 use the same base model - Llama-2.
**[Q5 - Training Cost Calculation]** Thanks for the suggestion. First, our training cost calculation follows Sheared-Llama[2] (see Table 1 of the paper), a representative post-training method to generate a small model out of a large one by pruning. Second, the training cost here measures the computational resources needed by the LLM deployer to obtain a new model of an auxiliary architecture (a smaller model in the Sheared-Llama case, and a MoE in our case), given all the publicly available resources. Using the publicly released checkpoints (e.g. Llama-2) will not incur additional training costs. We appreciate the suggestion and will mention this in the caption. Please see the PDF for our tentative Table 1.
[1] https://github.com/EleutherAI/lm-evaluation-harness
[2] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
[3] LaCo: Large Language Model Pruning via Layer Collapse
---
Rebuttal Comment 1.1:
Title: Re: rebuttal
Comment: The authors have provided a very clear rebuttal and detailed answers to my initial set of questions. I will change my score accordingly
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your insightful comments and suggestions. We greatly appreciate your support of our work and will be incorporating all the feedback into our revised version. | Rebuttal 1:
Rebuttal: We thank all reviewers [R1(wHVe), R2(ZdPD), R3(EGvf), R4(AX77)] for their thoughtful and constructive feedback. We are grateful that the reviewers found our approach interesting and effective [R1,R2,R3], the paper well-written and well-organized [R2], and the experimental results thorough and extensive [R2,R3].
We have thoroughly addressed all of the concerns raised by the reviewers. As a model compression effort, we emphasize our superior accuracy-cost tradeoff and training efficiency compared to all baseline methods. We provided additional experiments on routing distillation loss (R1), dense counterpart comparisons (R1,R2, R4), other evaluation metrics (R4), cost analysis of the auto-regressive router (R3), and generalizability to other model families (R3).
For other questions, we added detailed discussions for each reviewer. Please check our PDF if it is mentioned in the answer.
Pdf: /pdf/0cc8da0d427ffd84bfd9746aa2fcf4f3668482da.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions | Accept (poster) | Summary: The authors present SkiLD, an unsupervised RL method that learns skills by augmenting DIAYN reward with a graph-state dependency reward to induce meaningful changes in object interactions.
Strengths: **Experiments:** Experiments are performed on a reasonably comprehensive set of 10 downstream tasks in 2 domains, and demonstrate that SkiLD is significantly better than all baselines.
**Novelty and Idea:** The idea of using explicit factors as skill learning reward is interesting and novel. At a high-level, it’s also pretty intuitive.
Weaknesses: **Clarity: Quite a few clarity issues**
- In the main paper, the method could be presented more clearly at a higher level. For example, introducing the Graph-Selection policy first (in 3.1 instead of 3.2) could help with understanding, at a higher level, what the method is trying to do. Currently the paper first talks about the skill selection policy but, by the time the reader gets to 3.1, it’s not fully clear yet (it’s mentioned at the top of section 3 but then gets into details a bit too suddenly in 3.1) that there is an initial unsupervised skill learning phase and what that phase is trying to do. Going from high level idea → high level policy → low level policy would make this more clear.
- More explicit details are needed. How is the dynamics model trained? What’s the high-level overview of the algorithm? When and how are diversity rewards applied? What exactly are the input/outputs features of the graph selection policy? All of this could be more clear if the authors provide some pseudocode linked to from the main paper and beef up the appendix.
- High-level details about environment assumptions (learned/given dynamics, what is given in the graph factorization, etc.) in section 4.1 should be given, even if details are already in the appendix.
**Experiments:**
- Are 10M timesteps really needed even in Mini-behavior domain? This seems extremely sample-inefficient. Same with requiring up to 5M timesteps to learn downstream tasks reasonably in these environments. Perhaps some details about why all methods need so many timesteps would be helpful.
- How are hparams selected and tuned? And for the baselines? How do we know this is a fair comparison?
**Minor issues:**
- L163: missing end of sentence period
- Fig 4: CaSk?
Technical Quality: 3
Clarity: 2
Questions for Authors: One limitation of all unsupervised skill learning methods is that a significant portion of the learned skills are meaningless. Qualitatively, what proportion of the skills ended up being meaningful? Was it more with SkiLD than baselines like DIAYN? This perhaps could be visualized by randomly sampling ~50 skills and visualizing them to see which ones are actually meaningful.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable and constructive feedback. We are particularly excited that the reviewer finds our idea of learning skills to induce diverse interactions between state factors novel and our evaluation sound.
Please find below our responses to address your concerns regarding our work:
>Regarding reorganizing Sec 3
We appreciate the reviewer's suggestions, and we will rearrange these sections for greater clarity.
We will also add more detail to the high-level description at the beginning of the section to make it clear what the high-level and low-level inputs are:
* the high-level policy outputs graphs and diversity parameters.
* the low-level skills take in both and output primitive actions.
>Regarding details on dynamics model training, when and how are diversity rewards applied, what exactly are the input/output features of the graph selection policy?
We appreciate this comment and will add the pseudocode to the main paper (it is also attached to the **global response**). We will also add the appendix section addressing the following components,
* Loss functions for the dynamics model: line 167 mentions that the loss is the prediction error. Specifically, we measure it as cross-entropy loss for discrete state spaces and mean squared error for continuous state spaces.
* When and how are diversity rewards applied: as shown in Eq. 3, when the induced graph matches the desired graph, the diversity is applied as **an additional reward** to the graph reward.
* The input/output features of the graph selection policy:
* As shown in Fig 2 and line 186, the input is the current state
* The output feature is a categorical distribution over N, where N represents the number of unique graphs in the history of seen graphs.
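As a generic illustration of the two dynamics-model losses mentioned above (a standalone sketch, not the authors' training code; function names are ours):

```python
import math

def cross_entropy(pred_probs, target_idx):
    """Dynamics loss for a discrete state factor: negative log-likelihood
    of the observed next state under the predicted distribution."""
    return -math.log(pred_probs[target_idx])

def mse(pred, target):
    """Dynamics loss for a continuous state factor: mean squared error
    between the predicted and observed next state."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# A confident, correct discrete prediction yields a small loss:
print(cross_entropy([0.1, 0.8, 0.1], 1))  # ≈ 0.223
print(mse([0.5, 1.0], [0.0, 1.0]))        # 0.125
```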
>High-level details about environment assumptions (learned/given dynamics, what is given in the graph factorization, etc.) in section 4.1 should be given, even if details are already in the appendix.
We agree with this constructive feedback and have added this information to Sec 4.1.
>Why do all methods need so many timesteps?
Though mini-behavior looks simple, the environments are challenging because of the following factors:
* The state factors are **highly sequentially interdependent**, making skill learning and task learning challenging: for example, in cleaning car environments, the agent can’t clean the car until it picks up the rag, turns on the sink, and soaks the rag. These interdependencies between state factors pose great challenges to the agent’s exploration ability.
* During the skill-learning stage, we would like the agents to learn **all possible skills** (e.g., manipulating all objects) rather than learning to manipulate a single object. Hence, it requires more samples. Furthermore, Fig. 4 shows that, with 10M timesteps, baselines still fail to learn to manipulate all objects.
* During task learning, we use **sparse rewards**, and thus it is further challenging for agents to explore. Our results match the results in the mini-behavior paper, which shows that it takes millions of timesteps to learn the task with sparse rewards.
* In addition, we use primitive actions, and **many actions have no effect if they are not applicable in the current state**. So the task is especially challenging for exploration.
>Regarding hyperparameters tuning.
We apply grid search on the following hyperparameters for the following methods to ensure their best performance. We will add these details to the paper Appendix. Due to space limits, we leave the values we searched in the Appendix.
* SkiLD (ours):
* skill-learning:
* low-level policy: exploration noise, skill horizon
* high-level policy: entropy coefficient
* task learning: skill selection policy’s entropy coefficient
* DIAYN:
* skill-learning: low-level policy exploration noise
* task learning: skill selection policy’s entropy coefficient
* CSD:
* skill-learning: low-level policy exploration noise, intrinsic reward coefficient, learning rate
* task learning: skill selection policy’s entropy coefficient, skill horizon
* ELDEN:
* dynamics model: regularization lambda and its annealing, gradient cutoff threshold
* task policy: entropy coefficient
* COInS:
* Dynamics model:
* Passive and active model cutoff thresholds
* Interaction state reweighting ratio
* Granger-causal score threshold
* Skill learning:
* Goal sampling and relative action sampling distance
* Hindsight reweighting ratio
* Vanilla RL (PPO): entropy coefficient, learning rate, batch size
>Regarding meaningfulness of learned skills.
Fig 4 in the paper (full version: Fig 1 of the global response) shows the semantic meaningfulness of skills acquired by different methods (in terms of which interactions they can induce).
Specifically, for each method, we randomly sample its learned skills (see details of # of skills below) and we record which interactions they induce in each episode. From the global response Fig 1, we can see that
* SkiLD induces all inducible graphs, suggesting that it learns skills that cover all possible interactions with objects (i.e., at least 15 skills are semantically meaningful), though there may also exist redundant or less meaningful skills.
* In contrast, for DIAYN, with a similar number of skills, most interactions it induces are agent moving (i.e., agent, action -> agent) and agent getting blocked by other objects (e.g., agent, car -> agent). It suggests that a large portion of skills are navigation skills, and they are less meaningful when solving tasks requiring inducing object interactions.
the number of skills:
* SkiLD (ours): in the Cleaning Car environment, we uniformly sample from the history of 15 seen graphs and 4 discrete diversity variables: leading to 60 discrete skills
* DIAYN: we uniformly sample from 64 discrete skills.
>Regarding typos
We appreciate the detailed feedback and have corrected these typos.
---
Rebuttal 2:
Comment: Thanks for the detailed response.
Due to the neurips rebuttal format, I can't verify the suggested changes will be made, so I can only partially consider those in whether to increase the score. However, the plan makes sense. I can reconsider if the authors explicitly list out quotes for what will be changed.
> Though mini-behavior looks simple, the environments are challenging because of the following factors:
This is a reasonable description, thank you. This information however is not presented in the actual paper, with the only hint at this being: "While conceptually simple, this domain has been shown to be extremely challenging for Vanilla RL."
> Fig 4 in the paper (full version: Fig 1 of the global response) shows the semantic meaningfulness of skills acquired by different methods (in terms of which interactions they can induce).
This doesn't fully answer my question of "what proportion of skills end up being meaningful" (though it is a helpful visualization), as it's easier to induce these interactions in a discrete environment over a long horizon of 500 steps. Randomly sampling skills and visualizing their behavior, truly at random, in the Gibson environment (cont. control) would be a better comparison, or perhaps plotting a graph of the state distribution of the skills (again sampled at random) against x-y positions in the minigrid environments to get more information about what the policy is doing with each skill.
I have currently raised my score in response to the other parts of the rebuttal.
---
Rebuttal Comment 2.1:
Comment: Again, we thank the reviewer for the constructive feedback, including the new suggestions of a more detailed description of the mini-behavior environment and the visualization of the state distribution! We will ensure that they are incorporated in the next version of the paper.
---
Summary: In the presented paper, the authors propose a novel skill discovery method called Skill Discovery from Local Dependencies (SkiLD). The method utilizes the concept of local dependencies to incorporate the interaction of factors in a factorized state space. A novel intrinsic reward signal is introduced to guide skill discovery. In the skill learning phase, different skill-conditioned policies are learned for the skills; in the task learning phase, these (frozen) skills are used to learn a task policy that solves more complex downstream tasks. The method is evaluated in two simulation environments and compared to several baseline methods.
---
**Post-Rebuttal/Discussion**: I appreciate the discussion with the authors and their effort in clarifying open questions; my main criticisms wrt presentation and evaluation are not resolved though, and I maintain my score of 5.
Strengths: The paper tackles an interesting, challenging and important research direction: equipping agents to acquire skills autonomously and in the absence of a concrete task that can then be used in given tasks.
The main idea behind the method – utilizing local dependencies/interactions (realized with dependency graphs) and diversity – is well motivated and introduced.
Overall the paper is written well and understandable, notations are solid, and the figures are of good quality (especially Figure 2 gives a good overview of the method).
The explicit formulation of the two main research questions is appreciated and gives a good structure and focus for the evaluation.
Similarly, the explanations of the baselines, and especially what each respective comparison highlights, are very nice (the explanations of differences in the related work sections as well).
Additional ablations show the effect of the main idea.
Weaknesses: My main concern is related to the use of overexaggerations and a mismatch between some claims and the experimental results. For example, ‘…resulting in a robust set of transferable skills’ – there is no evaluation or similar regarding robustness (in fact, that word only occurs in that claim). Or, more prominently, the domains are introduced with ‘…a LARGE number of state factors’, while the state factors range from only 3 to 6 (Section 4.1).
While mentioning the research questions is good, the chosen evaluations do not fit these questions very well. Q1 is about the diversity of the interactions, but Figure 4 then only shows a comparison of some ‘particular’ (how are they chosen?) interactions. For a diversity comparison a general metric comparing all found/used interactions should be compared. The number of skills is also not shown/discussed or evaluated.
Similarly, Q2 is about more efficient downstream learning. Efficiency in terms of what exactly? The corresponding evaluation in Figure 5 (and Section 4.4) measures performance. Is performance used as a proxy for efficiency here?
As there are many details, modules and methods, it is not completely clear what is assumed to be given/known, or how certain quantities are inferred/used. For example, the events of interest (every factor in just the next_state of the s,a,s’ tuple?) or the target dependency graph (a crucial parameter, and lines 157ff mention HER for that, but not for all cases?). What exactly does the diversity parameter do? What is the exact flow of the skill learning phase (getting the target graph, the intrinsic reward, learning the policies, and multiple policies can be learned for each skill?)? Some additional effort in clarifying such details would greatly benefit the paper.
Some critical details, like that for some setups the ground truth dependencies were used instead of the learned, are only provided and thus somehow hidden in the Appendix.
Additional comments are given in the Questions.
While computationally demanding and, hence, understandable, 5 seeds/runs is quite low for comparing RL-based algorithms (a known problem in the community, although often neglected/ignored). Recent frameworks and metrics have been proposed that tackle these problems related to the low number of runs and comparing overall algorithm performance [1]. It would be beneficial to add such metrics (e.g., performance profiles and IQM/IQR) to the paper for a better comparison of the proposed algorithm.
-----------
[1] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." Advances in neural information processing systems 34 (2021)
Technical Quality: 2
Clarity: 3
Questions for Authors: How can the approach deal with varying number of objects? The policies depend on the states and the state-specific graph?
Does the chosen representation allow for ‘infinite’ number of skills? How does the approach scale with increasing skill number? How does it effect the learned policies (skill and task policy)?
How are the two learning phases (skill and task) orchestrated? Is each phase done once? Or do they take turns, if so, how?
In the factored MDP, is the action space A defined over the full S, or does it take the factorization into account?
L168: ‘…we utilize a factorized lower-level policy, where there is a separate policy for each factor.’ So each skill is realized with multiple policies? How is this modelled and especially used? How are skills then chosen, does this affect the action set of the task policy?
Notation in Figure 4 (and same in text) is unclear, what does the ‘x,y -> x’ notation represent?
Figure 1 seems to be never referenced?
How are the skill policies modelled? Table 1 only gives learning details; is it a neural network? What are its details? Same for other methods – the presented parameters are not complete.
What does ‘state-specific’ dependency graph mean?
Figure 6 misses the plot description (is it mean and std as well?).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed, focusing on the assumption of known state factorization and accurate detection of local dependencies.
An additional (potential) limitation is related to the scaling of the approach wrt. the training (data, time, ..), the number of objects and skills (rather low currently), or the transfer of skills to novel objects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable and constructive feedback. We are particularly excited that the reviewer finds our method well-motivated and our presentation clear.
Please find below our responses to address your concerns regarding our work:
>Regarding the use of overexaggerations and mismatch in some claims and with respect to the experimental results
We appreciate the reviewer’s observation and will make sure to eliminate all overexaggerations. We have made adjustments to the language of the paper to better match the empirical results. In particular, we have removed any mention of a “large” number of state factors or “robustness”, as well as other possible exaggerations such as “object-rich” or “realistic”, and replaced them with more precise language such as “object-factored” and “3D real-world-based simulation”.
>Regarding Q1 evaluation
We appreciate this insightful comment and will incorporate the following clarifications into the paper.
Due to space limitations, we deliberately only show task-relevant interactions in Figure 4, since these interactions are more informative. That being said, we agree with the reviewer that a general metric comparing all found interactions against all inducible interactions will better showcase the diversity of interactions for different methods. To this end, **we additionally show *all inducible interactions* and whether each interaction is induced, in the global response Fig 1**.
**The diversity metric would be (# induced interactions) / (# inducible interactions)**. As shown in global response Fig 1, SkiLD induces all inducible interactions, while CSD only induces 80% and DIAYN only induces 60% of all inducible interactions.
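The metric above reduces to a simple ratio; as a minimal sketch (using the counts reported in the global response Fig 1, where an interaction counts as induced if it appears at least once across uniformly sampled skills):

```python
# Counts of distinct interaction graphs induced per method, out of 15 inducible
# ones (non-zero entries in global response Fig 1).
inducible = 15
induced = {"SkiLD": 15, "CSD": 12, "DIAYN": 9}

# diversity = (# induced interactions) / (# inducible interactions)
diversity = {method: count / inducible for method, count in induced.items()}
# SkiLD -> 1.0 (100%), CSD -> 0.8 (80%), DIAYN -> 0.6 (60%)
```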
Regarding the number of skills, we will add the following clarification:
* SkiLD (ours): in the Cleaning Car environment, we uniformly sample from the history of 15 seen graphs and 4 discrete diversity variables, leading to 60 discrete skills.
* DIAYN: we uniformly sample from 64 discrete skills
* CSD: CSD uses continuous skill space, so there are an infinite number of skills.
>Regarding efficiency in Q2
We measure sample efficiency in terms of performance after a certain number of environment transitions.
We appreciate this comment and will add this description to the paper.
>Regarding details, modules, and methods
We appreciate this comment and will add the following details to the manuscript. We also attach the pseudocode in the global response Alg. 1 to improve the clarity of the workflow.
* Regarding the events of interest, it is known from the collected transitions.
* Regarding the target dependency graph, it is sampled from the high-level policy and known.
* Regarding HER, following the original paper, we apply it to 87.5% of sampled trajectories, and we will add this hyperparameter to the paper.
* Regarding the diversity variable,
* Fig 1 mentions that the skill has two goals: (1) inducing the target dependency graph, and (2) inducing the graph in diverse ways (i.e., reaching different states).
* Lines 173 - 175 mention that the diversity variable fulfills the second goal, in the same way as DIAYN – the agent is rewarded for visiting states from which the diversity variable can be distinguished. Hence, the agent is trained to visit different, distinguishable states under different diversity variables.
* Regarding the flow of the skill learning phase: we describe it in the pseudocode in the global response and will include it in the paper.
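To make the diversity objective above concrete, here is a minimal, hypothetical sketch of a DIAYN-style reward of the kind described in Lines 173 - 175 (not the authors' implementation; the discriminator outputs are assumed given):

```python
import numpy as np

def diversity_reward(disc_log_probs, z, n_skills):
    """DIAYN-style diversity reward: log q(z|s) - log p(z).

    Positive when the discriminator identifies the diversity variable z from
    the visited state better than chance (uniform prior p(z) = 1/n_skills).
    """
    return disc_log_probs[z] - np.log(1.0 / n_skills)

# With a uniform (uninformative) discriminator the reward is zero:
uniform = np.log(np.full(4, 0.25))
r = diversity_reward(uniform, z=2, n_skills=4)
```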
>Regarding other metrics
We appreciate this insightful comment and have added IQM scores to the global response.
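For reference, the IQM from [1] (Agarwal et al., 2021) can be sketched as follows; this is a minimal illustration, not the exact evaluation code used:

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: mean of the middle 50% of runs.

    More robust to outlier seeds than the plain mean, as recommended by
    Agarwal et al. (2021) for comparing RL algorithms with few runs.
    """
    s = np.sort(np.asarray(scores, dtype=float))
    n = len(s)
    lo, hi = int(np.floor(0.25 * n)), int(np.ceil(0.75 * n))
    return float(s[lo:hi].mean())
```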
>Regarding varying numbers of objects
Please note that for the scope of this paper, we only consider MDPs, which are fully observable by definition, with a known state space (and therefore a known number of objects).
To handle varying numbers of objects, one possible extension is to pre-assign a large number of state factors and use placeholders for objects that are not present. The assumption is that objects that are not present do not involve interactions, and their states do not change.
>Regarding allowing for ‘infinite’ number of skills and scaling to it
A simple way to extend to an ‘infinite’ number of skills is to have the diversity variable be continuous instead of discrete. As the number of skills increases, the capacity of the agent’s behavior grows proportionally, allowing for more diverse skills.
Meanwhile, it will also be harder for the skill policy and the discriminator to process these skills, potentially hampering skill learning. Finding the “right” number of skills is often a domain-specific problem.
>Regarding two learning phases
Each phase is done once.
>Regarding the action space
It is defined over the full state space.
> Regarding the factorized lower-level policy
Each skill is realized by one policy network.
* As described in Appendix Sec A, there are N (N: # of state factors) parameterized low-level networks, one per state factor.
* When the target dependency graph is specified, we will identify which state factor should be influenced and use its corresponding policy to sample primitive actions
* As a result, only one of the N networks is activated at every time step.
* The task policy still selects the target graph from the history of seen graphs and thus is unaffected by this specific design of low-level policy.
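The factorized design above could be sketched roughly as follows (a hypothetical stand-in with linear policies, not the authors' networks):

```python
import numpy as np

rng = np.random.default_rng(0)

class FactoredSkillPolicy:
    """One small policy per state factor; the target dependency graph determines
    which factor should be influenced, and only that factor's policy acts."""

    def __init__(self, n_factors, state_dim, n_actions):
        # stand-ins for the N parameterized low-level networks, one per factor
        self.weights = [rng.normal(size=(state_dim, n_actions))
                        for _ in range(n_factors)]

    def act(self, state, target_factor):
        # only the network for the influenced factor is activated at this step
        logits = state @ self.weights[target_factor]
        return int(np.argmax(logits))
```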
>Regarding Fig 4 notations,
$x,y \rightarrow x$ represents that $x_{t+1}$ locally depends on $x_t$ and $y_t$.
>Regarding ‘state-specific’ dependency graph
A general dependency does not necessarily happen in every state. For example, though a knife can cut fruit, when they are far apart, the cutting will not happen. The state-specific dependency graph encodes this state-specific dependency between factors; an edge represents a dependency occurring in the given state.
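As an illustration of this idea (hypothetical factors and thresholds, not the paper's actual graph-inference model):

```python
import numpy as np

FACTORS = ["agent", "knife", "fruit"]  # hypothetical factor set

def state_specific_graph(state):
    """Boolean adjacency over factors; an edge exists only if the dependency
    actually occurs in this state (e.g., the knife is close enough to cut)."""
    g = np.zeros((len(FACTORS), len(FACTORS)), dtype=bool)
    g[0, 0] = True  # the agent's next state always depends on its current state
    if state["knife_fruit_dist"] < 0.1:  # cutting only happens when close
        g[1, 2] = True  # knife -> fruit
    return g
```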
>Regarding Fig 6
It is also mean and standard deviation, and we will add that clarification to the final paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response that clarified most points; the provided pseudo-algorithm helps to understand the flow better. However, it will be a challenge to incorporate all this necessary information in an updated paper.
One follow up question:
*"The diversity metric would be (# induced interactions) / (# induciable interactions). As shown in global response Fig 1, SkiLD induces all inducible interactions, while CSD only induces 80% and DIAYN only induces 60% of all inducible interactions."*
I do not understand how the mentioned 100% and 80% and the global response Fig 1 are connected? Fig 1 shows in how many episodes each graph was induced; except the first one, which all methods find, no other has 100%. Moreover, one graph was only induced in 0.5% according to the Fig 1. How does this fit together with the mentioned numbers? (And with 5 random seeds, how can the Figure numbers occur – shouldn't they be in steps of 20% then? The connection between seeds and episodes is unclear here.)
---
Reply to Comment 1.1.1:
Comment: We appreciate your quick response! Please find below our responses to address your concerns regarding our work:
>Regarding incorporating necessary information
We thank the reviewer for the heads-up. The main space-taking item we are planning to incorporate into the main text is the pseudocode. Since there will be one more page for the camera-ready version were this paper to be accepted, we believe we will have enough space to incorporate all feedback into the next version of our paper.
>Regarding diversity metric
Fig 1 in rebuttal shows all the induced interactions AND the percentage of times they appear at least once within an episode. Obviously, some interactions will be induced all the time while some hard interactions can rarely be induced, which is what the numbers reflect. The error bar shows the standard deviation of this percentage across random seeds (the number shows the mean).
Now, the new metric that we are reporting in the rebuttal (following the suggestions of the reviewer) is WHETHER an interaction appears at all through randomly sampled skills, which can be obtained by counting the non-zero entries in the chart – 15 for SkiLD, 12 for CSD, and 9 for DIAYN, which is why our method has a skill coverage of 100%.
Why should an interaction count even if it is induced rarely? Since we **uniformly sample skills**, it is very unlikely to fulfill the preconditions of a hard interaction and induce it. However, during task learning, one can **optimally select the sequence of skills** to induce it using planning/a learned task policy. Therefore, even if an interaction is induced infrequently in Fig 1, it shows the skills are useful for solving tasks relevant to this interaction.
---
Summary: The paper introduces SkiLD (Skill Discovery from Local Dependencies), an unsupervised skill discovery method. Unlike existing methods that focus on state diversity, SkiLD leverages state factorization to guide skill learning by inducing diverse interactions between state factors.
Why an interaction should count even if it is induced rarely – since we **uniformly sample skills**, it is very unlikely to fulfill the preconditions of a hard interaction and induce it. However, during task learning, one can **optimally select the sequence of skills** to induce it using planning/a learned task policy. Therefore, even if an interaction is induced infrequently in Fig 1, it shows the skills are useful for solving tasks relevant to this interaction. | Summary: The paper introduces SkiLD (Skill Discovery from Local Dependencies), an unsupervised skill discovery method. Unlike existing methods that focus on state diversity, SkiLD leverages state factorization to guide skill learning by inducing diverse interactions between state factors.
This method is designed to be more effective in complex environments, such as household settings with numerous objects. SkiLD uses local dependencies to model interactions and introduces a novel intrinsic reward mechanism.
The method is evaluated in several domains, including a realistic household robot simulation, demonstrating superior performance compared to existing methods.
Strengths: - The clarity, presentation and writing of the paper are great.
- The problem of unsupervised skill discovery is an important one.
- The paper presents strong empirical results, demonstrating that SkiLD outperforms other unsupervised reinforcement learning methods in various challenging tasks.
- The experimental setup is properly designed (i.e., right choice of baselines and domains).
Weaknesses: Minor weaknesses:
- The method's reliance on accurately detecting and modeling local dependencies adds a layer of complexity that may limit its applicability.
- The effectiveness of SkiLD hinges on the availability of a factored state space, which may not always be available or easily obtainable.
Technical Quality: 4
Clarity: 4
Questions for Authors: The two points raised within the weaknesses limit the applicability and thus the impact of the paper. Do you have insights on this? Especially, how the method can be made more general?
Nitpicking:
> L88: Formally, for an event of interest $Y$ and its potential causes $X = (X^1, ..., X^N )$, given the value of $X = x$, local dependencies focus on which $X$ is are the state-specific cause of the outcome event $Y = y$.
I think the vocabulary used in this sentence is wrong.
- $Y$ is not an event but a random variable.
- $Y = y$ is an event.
> L108: In this work, for a transition $(\mathcal{S} = s, \mathcal{A} = a, \mathcal{S}^\prime = s^\prime)$...
If I understand correctly the notations, this shouldn't be $\mathcal{S}$ and $\mathcal{A}$ but $S$ and $A$, because $\mathcal{S}$ and $\mathcal{A}$ are the state space and action space and are not random variables. Plus, you said in the Background section that "In this paper, we use uppercase letters to denote random variables and lowercase for their specific values...".
Same remark for L149-150.
> In this section, we describe SkiLD, which enhances the expressivity of skills using local dependencies. SkiLD represents local dependencies as state-specific dependency graphs, defined in Sec. 2.2.
State-specific dependency graphs are actually not defined in Section 2.2, there is just a sentence at the end of the section. It would be useful to better explain what are these graphs.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable and constructive feedback. We are particularly excited that the reviewer finds our presentation clear and our evaluation sound.
Please find below our responses to address your concerns regarding our work:
>The method's reliance on accurately detecting and modeling local dependencies adds a layer of complexity that may limit its applicability.
With respect to detecting and modeling local dependencies, two possible avenues offer promising directions:
* Relaxing the dependence of precise local-dependency identification by using a window of states rather than a per-state reward when incorporating dependency information.
* Since many state-specific dependencies fall into broad categories (contact, co-occurrence, etc.), employing a meta-learning strategy for identifying local dependencies in a lifelong setting can improve the effectiveness of dependency identification.
>The effectiveness of SkiLD hinges on the availability of a factored state space, which may not always be available or easily obtainable.
With respect to factored states, we agree that SkiLD requires a factored state space. In the meantime, we believe that advances in vision, especially related work such as SAM [1] and other object-centric modeling [2] offer a path forward for extracting a factored state space from a dense state such as pixels, thereby extending the application scope of SkiLD.
[1] Ravi, Nikhila, et al. "SAM 2: Segment Anything in Images and Videos." arXiv preprint arXiv:2408.00714 (2024).
[2] Aydemir, Görkay, Weidi Xie, and Fatma Guney. "Self-supervised object-centric learning for videos." Advances in Neural Information Processing Systems 36 (2024).
>Regarding the usage of symbols
We appreciate the detailed feedback and will make sure we use the correct symbol in the paper!
>Regarding the definition of state-specific dependency graphs
We appreciate this insightful comment and will add definitions of state-specific dependency graphs to section 2.2 to help it better integrate into section 3.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal. I will maintain my score.
---
Summary: The paper introduces SkiLD, a novel method leveraging state factorization to guide skill learning in unsupervised reinforcement learning. SkiLD emphasizes learning skills that induce diverse interactions between state factors, which are crucial for solving downstream tasks. The authors demonstrate that SkiLD outperforms existing unsupervised RL methods through empirical validation.
Strengths: 1. **Novel Approach**: SkiLD introduces a unique approach to skill discovery by focusing on local dependencies and interactions between state factors, addressing the limitations of state diversity methods.
2. **Empirical Validation**: The effectiveness of SkiLD is demonstrated through experiments in various environments, showcasing its superior performance compared to baseline methods.
Weaknesses: 1. **Assumption of Factored State Space**: SkiLD assumes access to a factored state space, which may not always be available or easy to obtain in real-world applications.
2. **Evaluation on Limited Domains**: The evaluation domains are somewhat restricted. Broader evaluation across more commonly used environments could emphasize the method's performance. Evaluating environments like Crafter, 2D Minecraft, or manipulation environments with multiple objects would better illustrate the benefits of focusing on local dependencies.
3. **Scalability**: There is a need to explore how SkiLD scales with an increasing number of state factors, which is crucial for practical applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to Weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to Weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable and constructive feedback. We are particularly excited that the reviewer finds our idea of learning skills to induce diverse interactions between state factors novel and our evaluation sound.
Please find below our responses to address your concerns regarding our work:
>Assumption of Factored State Space: SkiLD assumes access to a factored state space, which may not always be available or easy to obtain in real-world applications.
With the recent advances in object segmentation and object-centric representation techniques, such as SAM [1] and SOLV [2], one potential way to construct the factored state space is to use these models to extract factored representation from real-world image observations.
We agree that testing the effectiveness of SkiLD with learned factored representations would be an interesting next step, and will add it to the future work section of the next version of this paper.
[1] Ravi, Nikhila, et al. "SAM 2: Segment Anything in Images and Videos." arXiv preprint arXiv:2408.00714 (2024).
[2] Aydemir, Görkay, Weidi Xie, and Fatma Guney. "Self-supervised object-centric learning for videos." Advances in Neural Information Processing Systems 36 (2024).
>Evaluation on Limited Domains: The evaluation domains are somewhat restricted. Broader evaluation across more commonly used environments could emphasize the method's performance. Evaluating environments like Crafter, 2D Minecraft, or manipulation environments with multiple objects would better illustrate the benefits of focusing on local dependencies.
>Scalability: There is a need to explore how SkiLD scales with an increasing number of state factors, which is crucial for practical applications.
We appreciate the reviewer’s suggestions and agree that further empirical testing in other commonly used or larger-scale settings is an important direction for future work.
Meanwhile, we emphasize that, while exploring scaling further is valuable, we believe that it will not fundamentally change the scientific insights we introduced – compared to prior works in unsupervised skill discovery, by focusing on local dependencies between state factors, SkiLD learns skills that induce more diverse interactions and enable more sample-efficient downstream task learning, for the following two reasons:
1. The environments used in the paper are also challenging and baselines fail to learn diverse skills and solve downstream tasks.
* Specifically, the mini-behavior environments have the same **inter-dependency between state factors** as 2D Minecraft (e.g., in Cleaning Car, the agent cannot clean the car until it soaks the rag in the sink).
* For the iGibson environment, despite the four task-relevant factors, it also contains many interactable objects such as jars, microwaves, garbage cans, etc.
In contrast, **most prior works in unsupervised discovery are only evaluated in environments with one or two state factors (where skills are limited to moving the agent to different locations)**. As a result, as shown in Fig 4 and 5, when evaluated on Mini-behavior and iGibson, baselines fail to learn diverse skills and solve downstream tasks. Compared to them, SkiLD learns to induce those complex inter-dependency graphs and solves tasks successfully.
2. In principle, nothing in our method prevents it from scaling to more state factors.
* For methods that focus on reaching diverse states (like DIAYN), due to the **exponentially growing state space**, it would be more challenging to learn to induce meaningful interactions.
* In contrast, **the number of inducible interactions typically increases much more slowly**. As a result, by focusing on inducing diverse interactions, our method in principle has larger advantages than existing methods.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
While I appreciate the clarifications and explanations provided, I believe the current version of your paper still requires additional experimental validation to convincingly demonstrate the effectiveness of the proposed framework. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: scalability experiments on 2d minecraft
Comment: We agree with the reviewer that further evaluating our method in larger-scale settings (with more state factors) is important. To do so, we evaluate our method in **2D Minecraft with 15 state factors** following Andreas et al [1], and we hope this result addresses the reviewer's concern about the evaluation and scalability of our method.
Specifically, we use the **Mine Gold** task described below, measure the task success rate after **3M** time steps, and report the **IQM score** (the higher the better), as suggested by Reviewer3 c2kz, across **5 random seeds**. Again, our method SkiLD outperforms the following baselines during task learning.
| **Task** | **SkiLD** | **Elden** | **DIAYN** | **Vanilla** |
|-------------------|-------------------|-------------------|-------------------|-------------------|
| Mine Gold | **0.613 $\pm$ 0.065** | 0.000 $\pm$ 0.000 | 0.000 $\pm$ 0.000 | 0.000 $\pm$ 0.000 |
[1] Andreas, Jacob, Dan Klein, and Sergey Levine. "Modular multitask reinforcement learning with policy sketches." International conference on machine learning. PMLR, 2017.
We will attach the figure of task training curves in the next version of the paper (unfortunately, we can't upload figures during the discussion period). Also, due to time and computation limits, we will include the results of COInS and CSD in the next version of the paper.
In case you are interested, the **environment details** are listed below. We make sure they will be added to the appendix.
* state space (15 state factors): the agent (location and direction), 10 environment entities (the positions of 3 wood, 1 grass, 1 stone, 1 gold, and 4 rocks surrounding the gold), and 4 inventory cells (i.e., the numbers of sticks, ropes, wood axes, and stone axes that the agent has).
* action space (9 discrete actions):
* 4 navigation: going up, down, left, right
* pick up the environment entity in front, no effect if the agent does not have the necessary tool for collecting it
* 4 crafting: craft a stick/rope/wood axe/stone axe, no effect if the agent does not have enough ingredients
* Mine Gold task: the agent receives a **sparse reward** after finishing **all** of the following steps:
* collecting a unit of wood to craft a wood stick,
* collecting another unit of wood and combining it with the stick to craft a wood axe that is required for collecting the stone and for removing the rock,
* collecting a unit of wood and a unit of stone to craft a stick and then a stone axe that is required for collecting the gold,
* removing the rock surrounding the gold and collecting the gold with the stone axe.
---
Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback and suggestions. We are particularly excited that the reviewers find our idea of learning skills to induce diverse interactions between state factors well motivated (R3), novel (R1, R2, and R4), effective (R1, R2), and sound (R1, R2, R4). We also appreciate the reviewers for commending the clarity of the technical details (R2 and R3). In this thread, we summarize our responses to the common concerns shared by the reviews. Please refer to the rebuttal attached to each reviewer's comments for a reviewer-specific response.
1. To improve the clarity of the paper, in Alg 1 of the attached pdf, we show a pseudocode of how SkiLD learns diverse skills during the skill-learning stage, and we will add it to the paper.
2. To further evaluate the diversity of each method’s learned skills, as suggested by Reviewer3 c2kz, we show **ALL inducible interactions** and whether each interaction is induced by different methods, in Fig 1 of the attached pdf. Again, SkiLD (ours) induces all inducible dependency graphs, while baselines fail to induce hard graphs with challenging pre-conditions.
3. For a better comparison between different methods on their performance during task learning, as suggested by Reviewer3 c2kz, we compute and report the IQM scores as follows:
| **Task** | **SkiLD** | **Elden** | **COInS** | **DIAYN** | **CSD** | **Vanilla** |
|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| Install Printer | **0.996** $\pm$ 0.005 | 0.981 $\pm$ 0.035 | 0.000 $\pm$ 0.000 | 0.000 $\pm$ 0.000 | 0.000 $\pm$ 0.000 | 0.886 $\pm$ 0.000 |
| **Task** | **SkiLD** | **Elden** | **COInS** | **DIAYN** | **CSD** | **Vanilla** |
|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| Clean Rag | **0.016** $ \pm $ 0.031 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
| Clean Car | **0.177** $ \pm $ 0.286 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.177 $ \pm $ 0.286 | 0.000 $ \pm $ 0.000 |
| Soak Rag | **0.960** $ \pm $ 0.031 | 0.098 $ \pm $ 0.196 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.031 | 0.000 $ \pm $ 0.000 |
| **Task** | **SkiLD** | **Elden** | **COInS** | **DIAYN** | **CSD** | **Vanilla** |
|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| Thaw Olive | **0.223** $ \pm $ 0.157 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
| Thaw Date | **0.646** $ \pm $ 0.432 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.254 $ \pm $ 0.000 |
| Thaw Fish | **0.486** $ \pm $ 0.819 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
| Thaw any two | **0.101** $ \pm $ 0.093 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
| **Task** | **SkiLD** | **Elden** | **COInS** | **DIAYN** | **CSD** | **Vanilla** |
|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| Peach | **0.999** $ \pm $ 0.002 | 0.153 $ \pm $ 0.307 | 0.097 $ \pm $ 0.153 | 0.402 $ \pm $ 0.176 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
| Wash Peach | **0.990** $ \pm $ 0.013 | 0.000 $ \pm $ 0.000 | 0.005 $ \pm $ 0.010 | 0.001 $ \pm $ 0.002 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
| Cut Peach | **0.119** $ \pm $ 0.051 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 | 0.000 $ \pm $ 0.000 |
Pdf: /pdf/227ba2d66630c15537dec8bda54967c61812ce20.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How to Use Diffusion Priors under Sparse Views? | Accept (poster) | Summary: This paper mainly investigates the behavior of SDS in sparse-view 3D reconstruction, pointing out that SDS may unexpectedly harm the 3D reconstruction quality in this case. Compared to SDS in text-to-3D, sparse-view reconstruction requires leveraging visual cues encoded in input images (named "inline prior"), while the naive SDS fails to do so. To fill this gap, this paper proposes to use a diffusion inpainting model, taking a warped input image as input to guide SDS optimization. This approach is indicated as Inline Prior Guided Score Matching (IPSM). In addition, a 3DGS pipeline incorporating this prior is introduced in the paper, which achieves SOTA performance.
Strengths: 1. The perspective of this paper on the mode-seeking behavior of SDS in the sparse-view setting is interesting.
2. The results on two public datasets demonstrate the state-of-the-art performance of the proposed method.
3. Clear and detailed ablation studies are also provided.
Weaknesses: 1. Insufficient analysis of mode deviation. This paper analyzes the "mode deviation" problem of SDS mainly via Fig. 1 (empirical evidence) and Fig. 2 (intuitive explanations). While such evidence is appreciated, more in-depth analysis, such as theoretical analysis, would definitely help to make this statement more convincing. In Sec. 3.2 I cannot find math evidence as to why SDS cannot work, though it seems the authors wanted to show this.
2. Novelty of diffusion inpainting model. Leveraging a diffusion inpainting model with warped input images to guide novel views is interesting. However, this idea looks similar to that of [1] in a slightly different context (novel view synthesis w/o SDS). A detailed comparison with [1] in terms of methodology is needed.
References:
[1] Kant et al. iNVS: Repurposing Diffusion Inpainters for Novel View Synthesis. SIGGRAPH Asia, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. This paper limits the discussion of diffusion prior (used in SDS) to text-to-image models, which may reduce its contributions and make its motivation less persuasive -- we indeed have a better choice to do SDS! Compared to T2I models, a more natural alternative is view-conditioned diffusion models, e.g., Zero-1-to-3 [1], which takes images as input. A recent work, ZeroNVS [2], has shown the success of SDS with such view-conditioned guidance using only a single input image. The second row in Tab. 2 (the first part of IPSM) is essentially an SDS baseline using the diffusion inpainting model, showing improvements for all metrics. This indicates that $\textit{an appropriate selection of SDS guidance can avoid the problem stated in the paper}$.
2. Based on 1, additional results as follows would be highly appreciated.
(1) View-conditioned SDS baseline leveraging Zero-1-to-3 style image-based models, similar to the second row in Tab. 2 of the paper.
This aims to investigate if view-conditioned diffusion models for SDS also suffer mode deviation.
References:
[1] Liu et al. Zero-1-to-3: Zero-shot One Image to 3D Object. ICCV 2023.
[2] Sargent et al. ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image. CVPR 2024.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed their limitations in Sec. 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful review. We appreciate the concerns, valuable suggestions, and questions. Here are our corresponding responses.
* **Analysis of the motivation**. Building on the detailed discussions of mode-seeking in previous works [A, B], here we provide a theoretical analysis of our motivation. The optimization objective $\min _ {\theta} \mathbb{E} _ {t, \mathbf{v} _ j}[ \omega(t) D _ {KL}({q _ t^{\theta}(\mathbf{x} _ t^j)} \Vert{p _ t^*(\mathbf{x} _ t^j)})]$ drives ${q _ t^{\theta}(\mathbf{x} _ t^j)} \sim \mathcal{N}(\mathbf{x} _ t^j; \sqrt{\bar{\alpha} _ t}\mathbf{x} _ 0^j, (1 - \bar{\alpha} _ t)\mathbf{I})$ toward the high-density region of $p _ t^*(\mathbf{x} _ t^j)$. Consider two modes $\mathbf{m} _ 1, \mathbf{m} _ 2 \in \mathcal{M}(\mathbf{x} _ t^j, y)$ of $p _ t^*(\mathbf{x} _ t^j)$, where $\mathbf{m} _ 1$ is the target mode and $\mathbf{m} _ 2$ is a failure mode. $\mathcal{M}$ is the mode range of $p _ t^*(\mathbf{x} _ t^j)$, determined by $\mathbf{x} _ t^j$ and the text prompt $y$; for brevity, we do not elaborate on the conditions of the diffusion prior distribution. We denote the distance between the two modes as $D _ M = \Vert \mathbf{m} _ 1 - \mathbf{m} _ 2 \Vert _ 2$. We want $\sqrt{\bar{\alpha} _ t}\mathbf{x} _ 0^j \approx \sqrt{\bar{\alpha} _ t}\mathbf{m} _ 1$ for any $t$. When $t$ is small, i.e. $t \rightarrow 0$, we have $\sqrt{\bar{\alpha} _ t} \rightarrow 1$, so driving $\sqrt{\bar{\alpha} _ t}\mathbf{x} _ 0^j$ toward $\sqrt{\bar{\alpha} _ t}\mathbf{m} _ 1$ is not hard. However, when $t$ is large, i.e. $t \rightarrow 1$, we have $\sqrt{\bar{\alpha} _ t} \rightarrow 0$, and thus $\sqrt{\bar{\alpha} _ t}\mathbf{x} _ 0^j \rightarrow 0$, $\sqrt{\bar{\alpha} _ t}\mathbf{m} _ 1 \rightarrow 0$, and $\sqrt{\bar{\alpha} _ t}\mathbf{m} _ 2 \rightarrow 0$: the two modes become indistinguishable, so the optimization direction can still be pulled toward the failure mode, resulting in mode deviation.
Back to the proposed IPSM, since IPSM introduces the rectified distribution $\tilde{q} _ t^{\theta, \phi}(\mathbf{x} _ t^j)$ as the intermediate state for narrowing the mode range $\mathcal{M}^{'}(\mathbf{x} _ t^j, y, \mathbf{M}^{i \rightarrow j} \odot \mathbf{I} _ 0^{i \rightarrow j}, \mathbf{M}^{i \rightarrow j})$, optimization directions are constrained and guided, thus suppressing mode deviation and promoting reconstruction quality.
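This limiting behavior can be condensed into a single expression (same notation as above; this condensation is ours, added for clarity, not part of the original rebuttal):

```latex
% Distance between the two noised modes at timestep t:
\[
\left\| \sqrt{\bar{\alpha}_t}\,\mathbf{m}_1 - \sqrt{\bar{\alpha}_t}\,\mathbf{m}_2 \right\|_2
  = \sqrt{\bar{\alpha}_t}\, D_M
  \;\longrightarrow\;
  \begin{cases}
    D_M, & t \to 0 \quad \text{(modes separated; the target mode is identifiable)} \\
    0,   & t \to 1 \quad \text{(modes collapse; optimization can drift toward } \mathbf{m}_2\text{)}
  \end{cases}
\]
```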
* **Comparison with iNVS [C]**. **TLDR of the core difference**: iNVS uses re-projection and Inpainting Stable Diffusion (ISD) to construct a better conditioning scheme for the diffusion model, while ours builds an intermediate distribution to guide SDS optimization of 3D representations. **Difference**: **iNVS** aims to improve the conditional encoding of diffusion priors for *image-to-3D*. Unlike Zero-1-to-3 [D], which directly encodes the input image and the pose of novel views, iNVS uses the characteristics of ISD and re-projection techniques to encode the geometric prior of the input image on novel views into the conditional diffusion model, and fine-tunes ISD on a large-scale external 3D dataset, thereby further improving object consistency in generated novel-view images. **This work** aims to lift 3D information from the inline prior to boost SDS in *sparse-view reconstruction*. Unlike iNVS, this work starts from an analysis of the optimization objective of SDS, and builds an intermediate distribution between the diffusion prior and the rendered-image distribution by guiding the sampling trajectory of the pre-trained ISD with inline priors, thereby suppressing mode deviation and promoting improvements in reconstruction.
* **SDS using view-conditioned diffusion prior**.
* **An intuitive evaluation of Zero-1-to-3 on objects and scenes**. As shown in **Fig. 2** of the attachment, we provide novel-view results of Zero-1-to-3 given an image of an object and an image of a scene. As the azimuth changes, Zero-1-to-3 can generate satisfactory results for the object from different new perspectives, but not for the scene, where the output remains fixed at the input perspective. This is because Zero-1-to-3 is fine-tuned on the large-scale 3D object dataset Objaverse [E]. During this fine-tuning, inherent inductive biases about objects are introduced into the model, making it hard to produce satisfactory novel-view results for scenes.
* **Quantitative experimental results of SDS using Zero-1-to-3**. We conducted experiments of SDS using Zero-1-to-3 on LLFF with 3 input views, shown below. Please note that Zero-1-to-3 controls the camera position via polar angle, azimuth, and radius, but does not provide a way to control the camera orientation, so we do not add noise to the camera orientation when generating pseudo views with Zero-1-to-3. We can see that adding Zero-1-to-3 as a prior does not improve the quality of scene reconstruction, which is consistent with the intuitive visualization results mentioned above. 3D-aware diffusion priors lack large-scale 3D data for learning the distribution of the real 3D world, making it difficult for them to directly benefit sparse-view reconstruction.
Table 1. Quantitative experimental results of SDS.
| Setting | SSIM | LPIPS | PSNR | AVGE |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| Base | 0.625 | 0.254 | 19.00 | 0.125 |
| w/ SDS(Zero-1-to-3, CFG=3.0 default) | 0.566 | 0.361 | 17.65 | 0.160 |
| w/ SDS(SD, CFG=7.5) | 0.647 | 0.267 | 18.80 | 0.128 |
| w/ SDS(SD, CFG=100) | 0.576 | 0.367 | 17.53 | 0.162 |
| w/ SDS(ISD, CFG=7.5) | 0.636 | 0.245 | 19.22 | 0.121 |
| w/ IPSM(CFG=7.5) | 0.670 | 0.229 | 19.60 | 0.113 |
References:
[A] DreamFusion: Text-to-3D using 2D Diffusion. ICLR 2022.
[B] Stable score distillation for high-quality 3d generation. arXiv:2312.09305, 2023.
[C] iNVS: Repurposing Diffusion Inpainters for Novel View Synthesis. SIGGRAPH Asia, 2023.
[D] Zero-1-to-3: Zero-shot one image to 3d object. CVPR 2023.
[E] Objaverse: A universe of annotated 3d objects. CVPR 2023. | Summary: This paper introduces a novel approach for synthesizing novel views from sparse view inputs using diffusion priors.
The authors conduct a thorough analysis of SD optimization under sparse views and propose an inline prior guided score matching algorithm to rectify the distribution of rendered images.
The 3DGS is chosen as the 3D representation and rendering method to incorporate IPSM for generating novel view images.
The experimental results demonstrate that the proposed method significantly improves the results of novel view synthesis when given sparse views on several commonly used benchmarks.
Strengths: 1. The authors conduct a thorough analysis of SDS algorithms conditioned on sparse view images.
2. The proposed IPSM method is both novel and reasonable.
3. The paper is generally well-written, with clear and concise explanations
Weaknesses: 1. The paper solely compares the quantitative results of the proposed method with SDS, without providing qualitative results for SDS.
2. The experimental settings lack detailed information. For instance, the level of sparsity in the input views, the size of baselines in the input views, and the size of baselines between input and output novel views are not clearly specified.
3. The paper fails to clarify the necessity of using 3DGS.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors should address the questions raised in the Weaknesses section to provide a more comprehensive understanding of their work.
2. It would be beneficial for the authors to discuss how the proposed method performs in extreme circumstances, such as when the two input views are opposite.
3. It is important to clarify whether the proposed method can handle extrapolation scenarios. For instance, how does it perform when the azimuths of the two input views are 0 degrees and 90 degrees, and the azimuths of the new views are 180 degrees?
4. It would be interesting to explore whether the 3DGS can be replaced with NeRF and how the replacement influences the performance.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors discuss the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your effort and patience in carefully reviewing this paper. We appreciate the suggestions and questions. Here we provide detailed responses.
* **Qualitative results for SDS**. As shown in **Fig.1 (a)** of the attachment, when the diffusion prior is used directly, SDS guidance produces imaginary reconstructions caused by mode deviation, as we demonstrated in the main manuscript. This property is reasonable and acceptable in text-to-3D generation tasks, but it fails for reconstructing a specific scene from limited sparse views. As shown in **Fig.1 (b)** of the attachment, we can also observe that SDS produces large floaters during optimization, which indicates the training instability of SDS: it overlooks the inline prior of the sparse views and struggles to provide stable guidance toward the target mode. We provide more qualitative comparisons with SDS in the video of the supplementary material.
* **Detailed experimental settings**. Here we provide the detailed experimental settings below. It should be noted that *all baselines optimized per scene share an identical experimental protocol*.
* **LLFF Dataset** includes 8 scenes in total. For each scene of the LLFF dataset, following RegNeRF [A], we choose every 8th image as the test set and use 3 views uniformly sampled from the remaining images. The original resolution of the LLFF dataset is $4032 \times 3024$. Following DNGaussian [B], we downsample the images by $8\times$ for both training and testing. In **Tab. 1** of the attachment, we report the level of sparsity.
* **DTU Dataset** includes 124 scenes in total. Prevailing pre-training methods like PixelNeRF [C] utilize 88 scenes for training and 15 test scenes, i.e. IDs: [8, 21, 30, 31, 34, 38, 40, 41, 45, 55, 63, 82, 103, 110, 114], for per-scene fine-tuning and testing. Following the prevailing per-scene optimization works, i.e. RegNeRF [A] and DNGaussian [B], we directly optimize our model per scene on the 15 test scenes. For each scene of the DTU dataset, following RegNeRF [A], the IDs of the 3 input training views are [25, 22, 28], and the IDs of the test views are [1, 2, 9, 10, 11, 12, 14, 15, 23, 24, 26, 27, 29, 30, 31, 32, 33, 34, 35, 41, 42, 43, 45, 46, 47]. The original resolution of the DTU dataset is $1600 \times 1200$. Following RegNeRF [A], we downsample the images by $4\times$ for both training and testing.
* **Extreme circumstances**. For the sparse-view scene reconstruction task, researchers focus on using inline priors and external priors to suppress the overfitting issue, so methods of both kinds perform poorly in the mentioned extreme circumstances. We construct corresponding data and conduct experiments against the state-of-the-art method DNGaussian [B].
* **Two opposite input views**. We select 2 opposite views of each scene on the MipNeRF-360 dataset, i.e. the IDs of training views of each scene: [2, 26] of bicycle; [22, 151] of bonsai; [57, 185] of counter; [1, 57] of garden; [14, 171] of kitchen; [2, 79] of room; [26, 34] of stump. The test views are selected every 8th image following Mip-NeRF. The quantitative comparisons with state-of-the-art methods DNGaussian and FSGS are shown in **Tab. 2** of the attachment. We report the PSNR, SSIM, LPIPS, and AVGE for each scene and the average of all scenes. It can be seen that our method outperforms the state-of-the-art method DNGaussian on every scene and our model achieves improvements of $22.03$%, $19.21$% on average PSNR and AVGE scores respectively.
* **Extrapolation scenarios**. We select 2 views at 0 and 90 degrees for each scene of the MipNeRF-360 dataset, i.e. IDs: [2, 14] of bicycle; [22, 248] of bonsai; [57, 145] of counter; [1, 15] of garden; [14, 37] of kitchen; [2, 291] of room; [26, 28] of stump. The test views are selected at 180 degrees, i.e. IDs: [26] of bicycle; [151] of bonsai; [185] of counter; [57] of garden; [171] of kitchen; [79] of room; [34] of stump. The quantitative results, similar to Tab. 2, are shown in **Tab. 3** of the attachment. It can be seen that our method outperforms the state-of-the-art method DNGaussian on every scene and achieves improvements of $25.78$%, $21.34$% on average PSNR and AVGE scores respectively.
* **3DGS as the backbone**.
* **Reason for choosing 3DGS**. NeRF, as an implicit 3D representation, offers photo-realistic rendering quality but suffers from slow training and rendering speeds. 3DGS, as a newer explicit 3D representation, offers fast training and rendering speeds and provides researchers with a convenient 3D representation framework, which is why we chose it as the backbone.
* **How about NeRF?** In text-to-3D and image-to-3D tasks, score distillation methods are widely used with different backbones. For example, representative methods using NeRF as the backbone include DreamFusion [D], and representative methods using 3DGS as the backbone include DreamGaussian [E]. Since this paper proposes a score distillation method, it can in principle be migrated to NeRF, just as VSD [H] can be applied to both NeRF and 3DGS.
[A] M. Niemeyer, J. T. Barron, B. Mildenhall, M. S. Sajjadi, A. Geiger, and N. Radwan, “Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs,” CVPR 2022.
[B] J. Li, J. Zhang, X. Bai, J. Zheng, X. Ning, J. Zhou, and L. Gu, “Dngaussian: Optimizing sparse-view 3d gaussian radiance fields with global-local depth normalization,” CVPR 2024.
[C] A. Yu, V. Ye, M. Tancik, and A. Kanazawa, “pixelnerf: Neural radiance fields from one or few images,” CVPR 2021.
[D] B. Poole, A. Jain, J. T. Barron, and B. Mildenhall, “Dreamfusion: Text-to-3d using 2d diffusion,” ICLR 2022.
[E] J. Tang, J. Ren, H. Zhou, Z. Liu, and G. Zeng, “Dreamgaussian: Generative gaussian splatting for efficient 3d content creation,” ICLR, 2023. | Summary: This paper deals with the problem of novel view synthesis from a sparse set of input views. While this problem has been tackled with depth or semantic regularization in the past, the authors approach the problem by introducing priors from a pre-trained diffusion model following a few recent works like ReconFusion. However, these recent works require fine-tuning a diffusion model on multi-view data and do not directly apply pre-trained diffusion models as is common practice in text-to-3D, e.g. via an SDS objective. The authors observe that this is because when using SDS in a straightforward manner, the performance of the model decreases under standard CFG settings. Given their observation, they propose IPSM which decomposes SDS into two sub-objectives. The reasoning is that we can guide the SDS process by additionally supervising the predicted noise of the target view to be close to the noise predicted from an inpainted version of the target view obtained through re-projection and infilling. In addition to the IPSM objective, the authors supervise the model with a depth loss and an image-based rendering loss (using the same process as the re-projection in IPSM).
Strengths: - The paper provides a simpler alternative to ReconFusion, i.e. the geometric consistency is promoted through a simple reprojection-based guidance instead of a PixelNeRF that needs to be jointly trained with the diffusion model on external data.
- The paper writing is good, i.e. clear motivation, the related work focuses on dissecting the difference of existing work to this one, and clear method section.
- The experimental results show convincing improvements over the 3DGS baseline, naive SDS, and other few-shot methods such as FSGS
Weaknesses: - Ablation in Tab. 2: It is great to see that using both objectives in IPSM is superior, however, it would be great to see a version *without* IPSM and *with* $\mathcal{L}^\text{geo}$ and $\mathcal{L}^\text{depth}$ to judge if IPSM is needed or if the geometric regularization alone is sufficient.
- The experimental setup focuses on NVS from 3 input views. However, the method should be suitable for arbitrary sparse view setups. It would be interesting to investigate if the improvements hold with 6/9 views (ReconFusion setup) or even more. This would broaden the application scenarios of the method. It would be very interesting to see if this simpler approach can rival the performance of the more complex ReconFusion pipeline given their data is released.
- Fig. 3 is broken in the paper. Fortunately, the supplementary video also shows the figure.
Technical Quality: 3
Clarity: 3
Questions for Authors: I hope the authors can mainly answer the first question I listed in weaknesses, but I think also addressing the second concern could make the paper stronger.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations were sufficiently discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your efforts and patience in reviewing this paper. We appreciate the positive comments, valuable concerns, and suggestions on our work. Here are our responses to the mentioned weaknesses and questions.
* **Additional ablation study**. To provide more complete experimental results, we present an additional ablation study using 3 views on the LLFF and DTU datasets in Tables 1 and 2, respectively. We can see that $\mathcal{L} _ {\rm{depth}}$ presents a strong prior for optimization since it directly provides 3D geometric guidance on the 3D representations. Notably, although both $\mathcal{L} _ {\rm{geo}}$ and $\mathcal{L} _ {\rm{IPSM}}$ use re-projection techniques to introduce the 2D visual prior information of the sparse views, $\mathcal{L} _ {\rm{IPSM}}$ achieves satisfactory performance comparable to the direct 3D guidance of $\mathcal{L} _ {\rm{depth}}$, as shown in Table 2, while $\mathcal{L} _ {\rm{geo}}$ struggles to promote optimization on its own without the assistance of other regularizations.
Table 1: Additional Ablation Study on the LLFF dataset with 3-view setting
| Setting | SSIM | LPIPS | PSNR | AVGE |
| -----------| ----------- | ----------- | ----------- | ----------- |
| Base | 0.625 | 0.254 | 19.00 | 0.125 |
| Base + $\mathcal{L} _ {\rm{depth}}$ | 0.687 | 0.212 | 20.08 | 0.105 |
| Base + $\mathcal{L} _ {\rm{geo}}$ | 0.651 | 0.235 | 19.35 | 0.117 |
| Base + $\mathcal{L} _ {\rm{IPSM}}$ | 0.670 | 0.229 | 19.60 | 0.113 |
| Base + $\mathcal{L} _ {\rm{IPSM}}$ + $\mathcal{L} _ {\rm{depth}}$ + $\mathcal{L} _ {\rm{geo}}$ | 0.702 | 0.207 | 20.44 | 0.101 |
Table 2: Additional Ablation Study on the DTU dataset with 3-view setting.
| Setting | SSIM | LPIPS | PSNR | AVGE |
| -----------| ----------- | ----------- | ----------- | ----------- |
| Base | 0.836 | 0.134 | 19.11 | 0.087 |
| Base + $\mathcal{L} _ {\rm{depth}}$ | 0.849 | 0.122 | 19.77 | 0.079 |
| Base + $\mathcal{L} _ {\rm{geo}}$ | 0.835 | 0.135 | 19.28 | 0.086 |
| Base + $\mathcal{L} _ {\rm{IPSM}}$ | 0.853 | 0.122 | 19.67 | 0.080 |
| Base + $\mathcal{L} _ {\rm{IPSM}}$ + $\mathcal{L} _ {\rm{depth}}$ + $\mathcal{L} _ {\rm{geo}}$ | 0.856 | 0.121 | 19.99 | 0.077 |
* **Additional experimental results with more input views**. Thanks for raising the question of our method's performance with more input views. Experimental results with more input views can further explore the robustness of our method when working with sparse views. We provide additional experimental results under 6 and 9 input views on the LLFF dataset in Tables 3 and 4, respectively. Note that * denotes results reported in ReconFusion and # denotes results reported in DNGaussian. Notably, our method uses exactly the same training hyperparameters as in the 3-view LLFF setting.
* **6 input views**. As shown in Table 3 below, we achieve an improvement of $11.18$% on LPIPS compared to ReconFusion. It should be noted that ReconFusion requires additional computational resources for pre-training an encoder with external data, as we demonstrated in the main manuscript. Excluding methods that require additional resources for pre-training, our method achieves improvements of $8.12$%, $8.34$%, $31.82$%, $30.68$% on PSNR, SSIM, LPIPS, and AVGE respectively, compared to DNGaussian, the state-of-the-art method based on 3DGS.
* **9 input views**. Similar to the experimental results of 6 input views, our method still outperforms all state-of-the-art methods on SSIM, LPIPS, and AVGE scores and achieves comparable results on PSNR. As shown in Tab.2 of the attachment, compared to 3DGS-based DNGaussian, we achieve improvements of $7.94$%, $8.38$%, $26.11$%, $33.77$% on PSNR, SSIM, LPIPS, and AVGE respectively.
Table 3: Quantitative comparisons with 6 input views on the LLFF dataset
| Method | Pub. | Pretrain |PSNR | SSIM | LPIPS | AVGE |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Zip-NeRF * | ICCV 2023 | - | 20.71 | 0.764 | 0.221 | 0.097 |
| RegNeRF * | CVPR 2023 | - | 23.09 | 0.760 | 0.243 | 0.084 |
| DiffusioNeRF * | CVPR 2023 | ✔ | 23.60 | 0.775 | 0.235 | 0.079 |
| FreeNeRF * | CVPR 2023 | - | 23.72 | 0.773 | 0.232 | 0.078 |
| SimpleNeRF * | SIGGRAPH Asia 2023 | - | 23.05 | 0.737 | 0.296 | 0.091 |
| ReconFusion * | CVPR 2024 | ✔ | **24.25** | 0.815 | 0.152 | 0.063 |
| 3DGS # | SIGGRAPH 2023 | - | 20.63 | 0.699 | 0.226 | 0.108 |
| DNGaussian # | CVPR 2024 | - | 22.18 | 0.755 | 0.198 | 0.088 |
|Ours | - | - | 23.98 | **0.818** | **0.135** | **0.061** |
Table 4: Quantitative comparisons with 9 input views on the LLFF dataset
| Method | Pub. | Pretrain |PSNR | SSIM | LPIPS | AVGE |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Zip-NeRF * | ICCV 2023 | - | 23.63 | 0.830 | 0.166 | 0.067 |
| RegNeRF * | CVPR 2023 | - | 24.84 | 0.820 | 0.196 | 0.065 |
| DiffusioNeRF * | CVPR 2023 | ✔ | 24.62 | 0.807 | 0.216 | 0.069 |
| FreeNeRF * | CVPR 2023 | - | 25.12 | 0.820 | 0.193 | 0.063 |
| SimpleNeRF * | SIGGRAPH Asia 2023 | - | 23.98 | 0.762 | 0.286 | 0.082 |
| ReconFusion * | CVPR 2024 | ✔ | **25.21** | 0.848 | 0.134 | 0.054 |
| 3DGS # | SIGGRAPH 2023 | - | 20.44 | 0.697 | 0.230 | 0.108 |
| DNGaussian # | CVPR 2024 | - | 23.17 | 0.788 | 0.180 | 0.077 |
|Ours | - | - | 25.01 | **0.854** | **0.133** | **0.051** |
* **Broken figure of the pipeline**. We are sorry that Fig. 3 in the main manuscript may render incorrectly in some PDF viewers; it displays normally in Google Chrome. We will replace it with an image file.
Rebuttal: We thank all ACs and reviewers for their efforts in reviewing, valuable comments and suggestions for this paper. We addressed the reviewer's comments and questions in individual responses to each reviewer and provided supplementary figures and tables in the one-page pdf attachment. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal language modeling can elicit search and reasoning capabilities on logic puzzles | Accept (poster) | Summary: This work attempts to solve a filtered list of Sudoku puzzles by training a transformer model with data derived from solutions produced by a mechanistic 'simple solver' (rather than a sophisticated recursive planner). They show that the training regime of the transformer model can be engineered to enable the model to learn to do the task effectively. In addition, the authors perform linear probes to show the extent of the model's internal representation of the ongoing problem solution.
Strengths: This work tackles a problem that is readily understood by the public, using transformers which might be expected to struggle (compared to GOFAI solvers, which work on even harder Sudoku puzzles than tested here).
The number of experiments, looking at different aspects of the learnability of the task, and the probing of the resulting model internal states is very nicely done.
Weaknesses: Selecting only those solvable by a simple solver is quite a simplification: Puzzles that require a full-backtracking approach are excluded. This means that the results show that Transformers are capable of planning when the situation is simple, which is far from being fully capable of planning/reasoning.
Should link to dataset (https://www.kaggle.com/datasets/radcliffe/3-million-sudoku-puzzles-with-ratings).
According to the dataset description on Kaggle, 43% of the puzzles in the dataset have a difficulty of zero, meaning that they can be solved using a simple scanning technique. The filtered dataset is 1.9M of the original 3.0M (63%), so only 31% of the dataset being used is not amenable to the 'simple scanning technique'. Perhaps this should be highlighted.
Should mention the 42M param GPT-2 architecture earlier in the paper than Appendix B.
Minor quibbles:
* L32: "In this work, we aim to understand how complex a reasoning task can Transformers trained with next-token prediction solve by focusing on a synthetic task: Sudoku puzzles."
+ -> "In this work, we aim to understand how complex a reasoning task Transformers trained with next-token prediction can solve by focusing on a synthetic task: Sudoku puzzles."
* L70: "This ensures that all our puzzles are solvable in polynomial time." Should be 'reasonable time' - the polynomial time claim is beyond what's proven.
Technical Quality: 3
Clarity: 4
Questions for Authors: Could an analysis of model performance vs the 'rating' of the puzzle hardness (which I believe exists in the dataset) be done? i.e. Does the model get more puzzles wrong, the objectively harder they get?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The filtering of the initial dataset should have been emphasised more: It may be that the model learns only puzzles that would be in an 'Easy Sudoku' collection.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and constructive feedback. We will make a pass over the paper and fix the typos pointed out and any other errors we find. We address your comments below.
- *Selecting puzzles solvable by a solver*: We emphasize that we did this filtering on both our train and test sets to ensure that the puzzles can be solved in reasonable time. Additionally, although our solver only uses 7 strategies, many of the strategies are highly advanced (a hobby Sudoku solver might not be aware of some of them). We thus argue that our solver is a complex algorithm, albeit one which doesn’t use backtracking. We agree that this rules out puzzles which can be solved in reasonable time with backtracking, and hence our results currently do not show that causal language modeling on transformers endows them with planning capabilities in complex scenarios requiring extensive backtracking.
- *Difficulty rating of puzzles on Kaggle*: We wish to point out that the difficulty rating provided on Kaggle is an imperfect measure of the difficulty of the puzzle. This rating is computed as follows: to rate a puzzle, it considers a solver (different from the one we use to generate our solver-order data) which tries to iteratively make progress on the puzzle using some elimination techniques. When the solver gets stuck, it makes guesses and tries to solve the puzzle. The difficulty rating is the average number of guesses the solver had to make to solve the puzzle. Therefore, even a puzzle rated 0.5 can require complex strategies beyond simple scanning to solve without guessing. Moreover, even for puzzles with rating 0, the elimination technique employed by the solver includes the hidden single strategy, which is quite computationally intensive. That being said, we provide an analysis of how the complete puzzle accuracy changes as we increase the rating of the puzzle in the attached PDF of the global response.
- *“Polynomial time claim”*: To clarify the claim, by polynomial time we mean polynomial in $n$ when the puzzles are of size $n \times n$. We say that all our filtered puzzles are solvable in polynomial time because, if we consider the general $n \times n$ version of Sudoku and a generalized version of the solver that employs the same set of strategies, then all the examples retained by the solver can be solved in time polynomial in $n$: each strategy in the solver runs in polynomial time in $n$, and none of the filtered Sudokus requires guessing a value in an empty cell. We will clarify this claim in the revised version of the paper.
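As a hypothetical illustration only (not the authors' actual solver), the candidate-set computation that underlies elimination strategies such as the hidden single mentioned above can be sketched in Python; the function name and the 0-for-empty convention are assumptions:

```python
def candidates(grid, r, c):
    """Candidate set for empty cell (r, c) of a 9x9 grid: the digits not
    already used in its row, column, or 3x3 box (0 marks an empty cell)."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

# Toy grid: a partially filled first row plus two more given cells.
grid = [[0] * 9 for _ in range(9)]
grid[0] = [5, 3, 0, 0, 7, 0, 0, 0, 0]
grid[1][0] = 6
grid[4][2] = 8
print(sorted(candidates(grid, 0, 2)))  # [1, 2, 4, 9]
```

Strategies like hidden singles then look for a cell whose candidate set (or a digit within a unit's candidate sets) is uniquely determined.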
In addition to these comments, we have added additional ablations and error analysis in the general response above. We have also shown additional experiments in a different puzzle solving setting - solving Zebra puzzles (also known as Einstein riddles) which shares some similarity with Sudoku puzzles but has many differences. We observe a similar outcome with Zebra puzzles as well where causal language modeling on solver order data is able to teach the model to perform well on these puzzles. Please see the general response for more details on these experiments.
---
Rebuttal Comment 1.1:
Comment: * "Selecting puzzles solvable by a solver" : Makes sense. But wouldn't hurt to be upfront in the paper.
* "Difficulty rating of puzzles on Kaggle" : Good addition, makes the results stronger, by showing that success rates roughly align with difficulty (particularly since it's calculated independently)
* "Polynomial time claim" : I understood the scaling part at L46. By the time you get to L70, n is a fixed constant. So "polynomial" just sounds misplaced.
Sticking with my scores. | Summary: This paper applies causal language models (Transformers) to solve Sudoku puzzles, reporting a 94.21% accuracy rate in solving the puzzles completely. The authors claim to demonstrate that these models can develop human-like reasoning capabilities by employing insights from CoT prompting through carefully structured training data and probing analyses.
The contribution of the paper is:
1. Demonstration that causal language models can learn complex reasoning tasks like Sudoku solving through carefully structured training data.
2. Development of a novel training methodology that leverages solver-decomposed reasoning order to enhance model performance.
Strengths: Novel Methodology and Application: Applying Transformers to Sudoku Solving is an interesting extension of Transformers techniques, and the use of sequences mimicking human reasoning steps is a creative training method.
Weaknesses: 1: Limited novelty: While applying Transformers to Sudoku is new, the underlying techniques are not innovative. The paper lacks significant theoretical or methodological advancements.
2: Narrow scope: Focusing solely on Sudoku limits the paper's impact and generalizability. It's unclear how well this approach would generalize to other reasoning tasks or more complex Sudoku variants. Testing models across various types of reasoning tasks, especially those requiring different logical structures or knowledge types (rule-based vs. rule-less puzzles), could significantly enhance the understanding of the model's generalization capabilities. [1]
3: Inadequate comparisons: A major oversight is the lack of comparisons to traditional Sudoku-solving algorithms or other AI approaches. It's necessary to compare neural approaches with traditional algorithms to assess advancements meaningfully [1]
4: Overstated claims: The paper may overstate the model's "reasoning" capabilities. It is not yet convincing that what's described is actual reasoning rather than pattern matching, and distinguishing between genuine reasoning and sophisticated pattern matching can be challenging. Further evidence could be provided by testing the model's ability to solve puzzles it was not directly trained on, or by altering puzzle formats to see if the model can adapt its solving strategy without retraining.
5: Computational efficiency: There's insufficient discussion of the computational costs involved, comparing the different ordering/decomposition approaches as well as the beam search approach.
6: Lack of error analysis: A detailed examination of where and why the model fails would provide more insight than focusing on its successes.
[1] Giadikiaroglou, P., Lymperaiou, M., Filandrianos, G., & Stamou, G. (2024). Puzzle Solving using Reasoning of Large Language Models: A Survey.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How does this approach compare speed and efficiency to traditional Sudoku-solving algorithms?
2. What evidence supports the claim that the model is genuinely "reasoning" rather than pattern matching?
3. How well does this method generalize to other types of puzzles or reasoning tasks?
4. What is the computational efficiency of the different tested methods?
5. Can you provide a more detailed error analysis? What are the standard failure modes?
6. Have you investigated whether the model can explain its reasoning process, similar to how humans might describe their Sudoku-solving steps?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and constructive feedback. We address your comments below.
- *Lacks theoretical or methodological advancements*: We reiterate that our main contribution is not proposing a new theory or a new method to train LLMs. Rather it is an advance in our understanding of the precise capabilities and limitations of causal language modeling. Our work is similar to works such as [1], [2], [3] which show that Transformers can learn useful world models.
That being said, we posit that our insight on showing the right style of data to train models on for reasoning puzzles might be helpful in developing better pre-training data collection strategies for general LLMs.
Our contribution is novel because on general-purpose LLMs such as GPT-4, there is much evidence [5], [6], [7] that these models do not do well at challenging reasoning/planning tasks. Hence, a priori, it is not obvious that Sudoku puzzles can be solved by causal LM alone.
- *Narrow scope - evaluations on other reasoning puzzles/variants*:
- Firstly, we note that Sudoku is a challenging task where prior work had not shown success of the causal LM approach (see point in general response about performance of frontier LLMs). While Sudoku is a specific puzzle setting, this enables us to perform a controlled study where we can quantify the exact amount of generalization we observe. Taking the reviewer’s feedback, we added experiments with a second puzzle type: Zebra puzzles, where we observed similar results. Please refer to the general response and the attached PDF for more details. We will also add these to the paper.
- In addition, we believe that Sudokus comprise many sub-reasoning tasks involving deductive logic, mapping to abstract concepts, etc., and are representative of many challenging deterministic puzzles. Hence, we believe our main message is robust to changes to the puzzle, such as the logical dependency structure, while keeping the required components of logical inference, search, and planning the same.
- Rule-less puzzles require world knowledge to solve and complicate a principled and controlled study of the type we conducted. It is challenging to generate CoT style data for such puzzles on a large scale. We believe this is a very interesting direction for future research.
- We also like the idea of stochastic puzzles. It would be interesting to see how causal LM performs on them but we believe this is outside the scope of the current paper.
- *Comparison to traditional Sudoku solvers and AI approaches*: Please see the corresponding point in the general response above.
- *Overstated claims of reasoning vs pattern matching*: We disagree with this comment. Human reasoning also involves a great deal of pattern matching. However, this happens in an abstract concept space rather than the raw input space. For instance, when presented with a novel coding problem, one might perform an abstract pattern matching to understand whether dynamic programming or divide and conquer are applicable. A lot of frontier research also involves pattern matching. Hence, we argue that learning to pattern match in an abstract concept space consistently over a number of steps is reasoning. While we agree that our setting focuses on a single puzzle type, the model learns to solve unseen puzzles; it learns abstract “strategies” during training and applies them for a test puzzle. This is pattern matching in the abstract concept space. Moreover, some of these abstract strategies can involve O(n^3) computation to find the right square to apply the strategy to. The model is learning to do this search! In addition, it learns to do this consistently for over 50 steps. Hence, our claim of the model learning to reason.
- *Computational Efficiency*: The computational cost of the different orders of the data in our paper is the same. In particular, giving only the value filled in each cell in a step by step manner is more efficient than giving an entire search trace as is done in [4].
Beam search only adds a constant K^2 factor to the decoding process. While further improving the computational efficiency of LLMs is an important area of research, that is not our primary focus and we leave it for future work. We will add a detailed discussion on the computational efficiency aspects in the paper.
- *Error analysis*: Thanks for the feedback. We have a preliminary error analysis in the paper. Please refer to the general response for details on this. We will be adding a more extended error analysis in the paper.
- *Explaining model’s reasoning process*: Our toy model can’t generate English explanations. However, our probing study shows that the model implicitly keeps track of candidate sets (the set of possible values in a given cell). This is a commonly used technique by humans. In addition, we have done an edit distance analysis to study how much the choice of cell locations to decode varies between the model vs the solver. We find that there is a great deal of alignment between these two orders thus providing additional evidence of a human-like reasoning process.
[1] Emergent world representations: Exploring a sequence model trained on a synthetic task. Li et al.
[2] Physics of language models: Part 1, context-free grammar. Allen-Zhu et al.
[3] Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process. Allen-Zhu et al.
[4] Beyond a*: Better planning with transformers via search dynamics bootstrapping. Lehnert et al.
[5] Large language models still can’t plan (a benchmark for llms on planning and reasoning about change). Valmeekam et al.
[6] Travelplanner: A benchmark for real-world planning with language agents. Xie et al.
[7] Limits of transformers on compositionality. Dziri et al.
---
Rebuttal Comment 1.1:
Title: response to the rebuttal
Comment: Thanks for the detailed response, especially on the error analysis and the additional study on the Zebra puzzle; I still hope to see more experimental results on comparison with other solvers and models to justify its effectiveness, but I am willing to increase my rating for the current draft. | Summary: * This work presents a study of solving sudoku puzzles via causal language modeling.
* Given the sequence of filled places and their values in sudoku, the model must output the series of empty cell positions and the values that correspond to them.
* They study how the model performs with various input representations of a sudoku puzzle. (considering sudoku puzzle as a matrix)
* Fixed cell order (from top-left to bottom-right)
* Random cell order
* Chain-of-thought prompting (using a solver to provide the method to solve the sudoku)
* Through experiments, the authors have demonstrated that appropriate training data that breaks down the problem into smaller components is essential for the model to solve the puzzle correctly.
* The model's performance improved with CoT prompting. It's even enhanced with the use of position hints and beam search.
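The sequence encoding summarized above — representing each move as a (row, column, value) triple — can be sketched as follows; the function name and the 0-for-empty convention are illustrative assumptions, not the paper's actual tokenizer:

```python
def puzzle_to_sequence(grid):
    """Flatten the given (pre-filled) cells of a 9x9 Sudoku grid into a
    sequence of (row, column, value) triples, scanning from the top-left
    to the bottom-right cell; 0 marks an empty cell."""
    return [(r, c, grid[r][c])
            for r in range(9) for c in range(9)
            if grid[r][c] != 0]

# A mostly empty toy grid with three given cells.
grid = [[0] * 9 for _ in range(9)]
grid[0][4] = 7
grid[3][2] = 1
grid[8][8] = 9

seq = puzzle_to_sequence(grid)
print(seq)  # [(0, 4, 7), (3, 2, 1), (8, 8, 9)]
```

The model is then trained to continue such a sequence with triples for the empty cells, in whichever cell order (fixed, random, or solver-decomposed) the training variant prescribes.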
Strengths: * The problem definition, model description, and experimental setup are presented clearly, making the paper accessible and informative.
* Introduced a sudoku puzzle dataset with steps to solve the puzzles (1.9M puzzles).
* Probing analysis for tracking the candidate set of each cell.
Weaknesses: 1. No SoTA models are evaluated on the sudoku puzzle data.
2. A full ablation analysis is not included. This is to understand better how different settings (CoT, beam search, puzzle complexities) affected the model performance and where the model is struggling. Only improvements in accuracies are mentioned in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do models like GPT-4, Gemini-1.5, and Llama-3 perform on these sudoku puzzles? (with and without CoT)
2. Are the plots in Figure 3 in the CoT+Beam search setting?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and constructive feedback. We will make a pass over the paper and fix the typos pointed out and any other errors we find. We address your specific comments and questions below.
- *No SoTA models are evaluated on the sudoku puzzle data*: We reiterate that our main focus is not to compete with SoTA approaches to Sudoku solving. We are studying the capabilities and limitations of causal language modeling (which is accepted as the dominant approach to train language models today) in a controlled setting and not looking to compare with other approaches. Nonetheless in our general response above, we present a comparison with some other approaches to solve Sudoku puzzles and will add this discussion to the paper. Note that in our approach we don’t handcraft any parts of the architecture for Sudoku puzzles.
- *How do models like GPT-4, Gemini-1.5 and Llama-3 perform on these Sudoku puzzles (with and without CoT)?*: Frontier LLMs like GPT-4 and Gemini-1.5 are expected to perform very poorly on a challenging reasoning task such as Sudoku. The poor performance of these models on planning/reasoning tasks has been reported in [1], [2], [3] and others. For completeness, we performed a small-scale study on Gemini-1.5 and GPT-4o where we queried them with 3000 Sudoku puzzles each (in a 4-shot with CoT manner) and we analyzed how well they can solve the puzzle. Overall, we found that they got 0% of the puzzles completely right and their accuracy on a per-cell basis was around 8-10% which is close to random guessing. We will include these results in the paper.
- *A full ablation analysis is not included in the paper*: Thank you for this feedback. We will be extending the number of ablation studies we have in our paper. We wish to point out that we did include some ablations, such as experiments with CoT data vs. random order data vs. fixed order data, as well as ablations for different beam search widths. In addition, we have now performed ablations with respect to puzzle difficulty. The results are described in the general response.
- *Are the plots in Figure 3 in the CoT+Beam search setting?*: We apologize for the confusion. These plots are just CoT training without beam search. We will clarify this in the figure’s caption.
In addition to these comments, we have added additional ablations and highlighted some error analysis into where the model struggles in the general response above. We have also shown additional experiments in a different puzzle solving setting - solving Zebra puzzles (also known as Einstein riddles) which shares some similarity with Sudoku puzzles but has many differences. We observe a similar outcome with Zebra puzzles as well where causal language modeling on solver order data is able to teach the model to perform well on these puzzles. Please see the general response for more details on these experiments.
[1] Large language models still can’t plan (a benchmark for llms on planning and reasoning about change). Valmeekam et al. 2022.
[2] Travelplanner: A benchmark for real-world planning with language agents. Xie et al. 2024
[3] Limits of transformers on compositionality. Dziri et al. 2024.
---
Rebuttal Comment 1.1:
Comment: My concerns are addressed. Thank you for your response.
---
Rebuttal 2:
Comment: Thank you for your response! Let us know if there are any additional comments/concerns that we can answer.
If we have addressed your concerns, can you reconsider the score? Thank you. | Summary: This paper assesses causal language models', particularly transformer decoders', abilities to solve Sudoku puzzles. The authors encode Sudoku puzzles into sequences, representing each cell as a (row, column, value) triple, and train a model from scratch on 1.8M puzzles. They then evaluate the trained model on a 0.1M holdout test set. Results indicate that when unfilled cells in the training data are arranged in an easy-to-hard order (based on a solver's results), the model can solve Sudoku puzzles with a 94.21% full-puzzle solving rate. The authors also use linear probing to show that the model's activations contain information about possible values for any given cell. The paper concludes that causal language models may exhibit complex reasoning capabilities when trained on data that informs appropriate reasoning steps, without requiring techniques like Chain-of-Thought or external reasoning engines.
Strengths: - The selected task and settings are suitable for studying the reasoning and planning capabilities of language models;
- The paper presents strong results that causal LMs can solve the Sudoku puzzle by training with appropriate data, without the need of techniques such as using CoT, search or external solvers.
- The results indicate that causal LMs may be able to perform search and planning internally, which seems novel and insightful for further research.
- The writing is easy to follow.
Weaknesses: - The probing study's methodology is somewhat questionable. It merely compares the top-k predictions for each cell against the ground truth candidate set. This approach may not accurately be termed "probing" as it doesn't examine intermediate representations. Furthermore, this study might not conclusively demonstrate that the language model internally tracks candidate sets, given that the model is explicitly prompted to predict the cell. A more effective approach could involve probing potential values of one cell while prompting the model to predict another. Positive results from such a method could more convincingly show that the model can internally reason about other cells relevant to solving the current one.
- I’m not sure about what the takeaway of Sec. 3.5 is.
- It seems like the paper used the wrong template.
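One plausible reading of the criticized comparison — checking whether the ground-truth candidate set for a cell is contained in the model's top-k predictions there — can be sketched as follows; the logits are a hypothetical stand-in for the model's output at a cell it is conditioned to decode:

```python
import numpy as np

def topk_covers_candidates(logits, candidate_set, k):
    """True iff the ground-truth candidate set is contained in the k
    highest-scoring digits. `logits[i]` scores digit i+1; these logits
    stand in for the model's output head at a forced cell."""
    topk = set(np.argsort(logits)[::-1][:k] + 1)
    return candidate_set <= topk

# Hypothetical scores for digits 1..9 at one cell.
logits = np.array([0.1, 2.0, 0.2, 1.5, 0.0, 0.3, 1.8, 0.1, 0.2])
print(topk_covers_candidates(logits, {2, 4, 7}, k=3))  # True
```

The reviewer's point is that such a check reads off the prediction head rather than probing intermediate representations with a separately trained classifier.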
Technical Quality: 2
Clarity: 2
Questions for Authors: - Showing some example data in the appendix can be helpful.
- When trained with random order data, do you include tokens for cell locations for the loss calculations? Additionally, how do you ensure that the model predicts every cell during testing?
- I assume the input context always includes previously predicted cells and values, is this correct?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and constructive feedback. We will make a pass over the paper and fix the typos pointed out and any other errors we find. We address your comments below.
- *Clarifying our probing methodology*: We agree that the more common way to perform a probing analysis involves learning a linear/non-linear model which takes in the embedding and outputs a label indicating a concept. However, we use probing in a more general sense, to refer to understanding some of the inner workings of the model. In addition, we want to clarify some confusion about how we do our probing analysis. We DO NOT only evaluate the top-k candidates on the cell the model chooses to predict. Although during its natural course of decoding the model might wish to decode cell location A, we force it (by conditioning) to decode at every other location and evaluate the top-k candidates.
- *Probing potential values of one cell while predicting another*: We had considered this approach but felt it was harder to get it working for the following reasons. It is unlikely that the same probe will work across different cells (since the embedding conditioned for predicting a particular cell might suppress information about other cells). This might require us to train separate probes for the values of each cell for which the amount of data available becomes sparse. We did try an alternative approach instead where we take the whole sequence of embeddings the model has produced as input to the probe. However, this concatenated embedding becomes a ~25000 dimensional vector which is very high dimensional and makes it hard to train the probe.
- *Takeaway of Section 3.5*: This section shows the additional gains we get when we use beam search decoding instead of greedy decoding. Beam search decoding with beam width K, maintains K `plausible’ candidates to continue decoding at all times. Note that we prevent a combinatorial explosion with the decoding length by always truncating the list to what we believe are the K most plausible prefixes so far. Using this approach, we show that we are able to further boost the complete puzzle solving accuracy. The main takeaway from this section is that, even in situations where the model’s most likely next token is incorrect, the correct token is in the top-K most likely tokens.
- *Wrong Template*: We are unsure of what the reviewer meant by the wrong template. We apologize if a formatting error slipped past us. Please let us know what specifically you are referring to and we will fix it.
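The beam search decoding described in the Section 3.5 answer above — keeping the K most plausible prefixes at every step — can be sketched as follows; the scoring function is a toy stand-in for the model's next-token log-probabilities:

```python
import math

def beam_search(score_fn, vocab, start, steps, K):
    """Decoding that keeps the K highest-scoring prefixes at every step
    instead of only the single greedy one (toy sketch)."""
    beams = [(0.0, list(start))]  # (cumulative log-prob, prefix)
    for _ in range(steps):
        candidates = []
        for score, prefix in beams:
            for tok in vocab:
                candidates.append((score + score_fn(prefix, tok), prefix + [tok]))
        # Truncate to the K most plausible prefixes so far, preventing a
        # combinatorial explosion in the decoding length.
        candidates.sort(key=lambda sp: sp[0], reverse=True)
        beams = candidates[:K]
    return beams

# Toy stand-in for the model: token 1 is always more likely than token 0.
def toy_score(prefix, tok):
    return math.log(0.7) if tok == 1 else math.log(0.3)

best_score, best_seq = beam_search(toy_score, [0, 1], [], steps=3, K=2)[0]
print(best_seq)  # [1, 1, 1]
```

The point made in the rebuttal corresponds to the case where the greedy (top-1) continuation is wrong but the correct token survives inside the K retained prefixes.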
We answer the reviewer’s technical questions below:
- *Showing example data in the appendix*: We thank the reviewer for this feedback. Indeed, we agree that this would help explain the challenges in our work better and we will add examples to this end in the Appendix.
- *When training with random data, do you include cell location tokens for loss calculations?*: We have tried both including it and excluding it. The results are similar in both settings.
- *Additionally, how do you ensure that the model predicts every cell during testing?*: We deliberately don’t explicitly ensure this. The model is supposed to learn the basic rules of Sudoku as well purely from data. How well they are able to adhere to the rules during inference is a measure of their generalization ability.
- *I assume that the input context always includes previously predicted cells and values*: Yes this is correct. Note that the input puzzle context length can vary as different puzzles have a different number of filled cells to begin with.
In addition to these comments, we have added additional ablations and a detailed error analysis into where the model struggles in the general response above. We have also shown additional experiments in a different puzzle solving setting - solving Zebra puzzles (also known as Einstein riddles) which shares some similarity with Sudoku puzzles but has many differences. We observe a similar outcome with Zebra puzzles as well where causal language modeling on solver order data is able to teach the model to perform well on these puzzles. Please see the general response for more details on these experiments.
---
Rebuttal 2:
Comment: Thank you for your response and the additional results. My concerns are mostly addressed. I like the additional results on the Zebra puzzles and I encourage the authors to include them in the next revision.
- Regarding the probing methodology: I'm still unsure this is the best way to perform probing. However, I agree that the current setup can provide evidence that the model is solving the puzzle in a way that resembles that of a logical solver.
- Additional notes on the difficulty measurement: in Figure 1 of the general response, I notice that the model performs almost perfectly on the easiest puzzles and performs worse and worse as the difficulty increases. Although this behavior is expected, I think it's worth discussing the difficulty distribution of the training and test data. Especially, how will the training data mixing affect the generalization performance to other difficulties? If the model truly learns the reasoning strategy, should we expect it to generalize from hard to simple questions? I feel like there are many interesting points that can be explored here.
Regardless, I'm happy to increase the rating of the current version. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their careful reviews and constructive feedback. We first address comments raised by multiple reviewers.
- *Comparison to traditional solvers and AI approaches*: We reiterate that our main focus is not to propose a new approach to solve the sudoku puzzle. We study the capabilities and limitations of causal language modeling (which is the dominant approach to train LLMs today) in a controlled setting. Note that in our approach we *don’t handcraft any parts of the architecture or the loss function* for Sudoku puzzles. Nonetheless we present here a comparison with some other approaches and will include a discussion in the revised draft:
- Combinatorial (Traditional) solvers: SoTA combinatorial solvers can solve a much larger fraction of Sudoku puzzles than our method. In fact, we filter to train and test only on those puzzles solvable by one such combinatorial solver. So our performance on a more general test set will be worse than such solvers. But this is an unfair comparison and not the point of the paper as these solvers are handcrafted with human intellect.
- Frontier LLMs: Frontier LLMs like GPT-4 and Gemini-1.5 are expected to perform very poorly on a challenging search and reasoning task such as Sudoku. The poor performance of these models on planning/reasoning tasks has been reported in [1,2,3]. Also see point 2 below.
- “Large Language Model Guided Tree-of-Thought” by Jieyi Long: This paper studies tree-of-thought prompting of LLMs and gets it to work for 5x5 Sudoku puzzles. This is an expensive prompting scheme already for 5x5 puzzles and these puzzles are much easier than 9x9 puzzles.
- “Recurrent Relational Networks” by Palm et al.: This paper handcrafts the recurrent network to match the puzzle structure (and obey the constraints) and performs multiple rounds of message passing between cells of the sudoku puzzle to arrive at a solution. We evaluate our trained model (trained using causal language modeling) on the test dataset proposed in Palm et al. and we observe a comparable performance without handcrafting the network or loss function (see attached PDF for the result).
- “Solving Sudoku with neural networks” by Akin-David et al.: They study how well handcrafted CNNs or LSTMs work for solving Sudoku puzzles and observe worse results than us.
- *Experiments with Frontier LLMs*: We performed a small-scale study on Gemini-1.5 and GPT-4o where we queried them with 3000 Sudoku puzzles each (in a 4-shot with CoT manner). Overall, we found that they got **0% of the puzzles completely** right and their accuracy on a per cell basis was around **8-11% (close to random guessing)**. We will add these results to the paper.
- *Error analysis*: We have some preliminary error analysis in the paper. Figure 3 (Appendix) shows where during decoding the model makes the first error. It tends to make it more often in the first 10-15 steps of decoding than later. We have added a breakdown of accuracy vs puzzle difficulty in the attached PDF. We also found some examples where there exist some easy-to-decode cells but the model fails to find such cells and therefore tries to decode a harder cell and ends up making a mistake. We have added one such example in the attached PDF. This adds to the evidence (as already mentioned in the paper) that when we provide hints about easy-to-decode positions to the model, it predicts the value of the cell correctly ~100% of the time. This indicates that the model struggles to search for the positions that are easy to decode and sometimes selects positions which can’t be solved with the current information.
- *Additional experiments with Zebra puzzles*: To extend our analysis beyond sudoku puzzles as requested by the reviewers, we conduct our experiments for the Zebra puzzle (also known as Einstein's Puzzle) [4].
- *Background*: The Zebra puzzle is characterized by a number of entities and a number of attributes for each entity; e.g., each house in the original Einstein’s puzzle [4] is an entity, and color, nationality, drink, smoke, and pet are attributes associated with each house. In the puzzle, clues about relationships between entities and attribute values are given, and the task is to figure out the values of all attributes for all the entities. Please see [4, 5] for some example puzzles. Observe that each clue can be abstracted as a relationship between attribute values and entities; e.g., the clue “The Englishman lives in the red house” says that the house (entity) whose color attribute = red also has nationality attribute = Englishman.
- *Experimental details*: To generate a zebra puzzle, we start with a random permutation of entities, attributes and values of each attribute, and then we keep adding clues until the solver can solve the zebra puzzle without guessing/backtracking. To obtain a solution for a puzzle, the solver keeps track of the possible values of each attribute for each entity and tries to narrow them down using the clues. The solver tries to make progress with clues involving easier reasoning steps first, and if it isn’t able to make progress, it tries clues with harder reasoning steps. Thus, the solver fills in the values of attributes which are easier to decide before going to the harder ones. Similar to the sudoku puzzle, we use the order in which attribute values are filled by the solver to train a transformer model of the same size using causal LM (using the same hyperparameters as for the Sudoku puzzle). We report our results in the attached pdf. The trained model solves **92% of the puzzles completely** and for **96% of the attributes**, the model predicts the correct value.
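For illustration only (the solver described above uses clue-driven elimination, not brute force), a tiny Zebra-style instance can be solved by filtering joint assignments against the clues; the three houses, two attributes, and three clues below are hypothetical:

```python
from itertools import permutations

# Tiny 3-house Zebra-style instance: each house position gets one color
# and one nationality; the clues constrain the joint assignment.
colors = ['red', 'green', 'blue']
nations = ['Norwegian', 'Spanish', 'English']

def satisfies(color_perm, nation_perm):
    # Clue 1: the Englishman lives in the red house.
    # Clue 2: the Norwegian lives in the first house.
    # Clue 3: the green house is immediately right of the blue house.
    return (color_perm[nation_perm.index('English')] == 'red'
            and nation_perm[0] == 'Norwegian'
            and color_perm.index('green') == color_perm.index('blue') + 1)

solutions = [(c, n) for c in permutations(colors)
             for n in permutations(nations) if satisfies(c, n)]
print(solutions)
# [(('blue', 'green', 'red'), ('Norwegian', 'Spanish', 'English'))]
```

The solver-order training data corresponds to emitting (entity, attribute, value) assignments in the order an elimination-based solver pins them down, rather than enumerating assignments as done here.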
[1] Large language models still can’t plan. Valmeekam et al. 2022.
[2] Travelplanner: A benchmark for real-world planning with language agents. Xie et al. 2024
[3] Limits of transformers on compositionality. Dziri et al. 2024.
[4] Zebra Puzzle. Wikipedia page.
[5] Zebra puzzle on Brainzilla webpage.
Pdf: /pdf/061a588ca601d3530e9c5413cdbc52010bc0cc31.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning to Predict Structural Vibrations | Accept (poster) | Summary: The paper suggests a method of using operator network theory to predict the resultant steady-state vibrational patterns that appear on a beaded flat plate under several levels of external excitation. The authors first use numerical structural engineering software (rooted in FEM) to generate a diverse set of input-output relationships over these flat plates and the corresponding (i) velocity fields and (ii) frequency response functions (FRFs). The flat-plate model chosen for the simulations is based on Mindlin plate theory. This forms the basis of a benchmark dataset. They then use a bespoke deep operator network to derive the input-output relationships within the functional operator framework.
Strengths: - I think the largest strength of this paper is the development of the benchmark framework based upon 2D flat-plate theory. It is unfortunate that many structural/mechanical/aeronautical datasets are not publicly available, and that the problems faced in these fields are nowhere near as well explored as those in the CV and NLP fields; this makes the addition of an engineering benchmark 2D flat-plate dataset very welcome.
- The fact that several other methodologies were used on the benchmark dataset, and that the (anonymously shared) code is neatly written is great.
- Additional commentary in the appendix is welcome.
Weaknesses: - It appears that the proposed numerical plate simulations are still somewhat simplistic; for example, there is no high-frequency noise present in the FRFs (which would result from experimental limitations of sensors). Moreover, there is no investigation of difficult FRF phenomena that may arise in real life, such as the appearance of closely spaced modes (sometimes within 0.1 Hz of one another). While such a setting may not arise according to the geometry + Mindlin theory, the current wide spacing of the FRF modes makes the problem somewhat removed from the real-life difficulties that would be seen as incredibly valuable.
- The proposed FQO-UNet does not feel like a strong original contribution. It is constructed very much from "commercial off the shelf/plug-and-play"-type NN components.
- The necessity of the FQO-UNet model having to interpolate (predict within a neighbourhood) for computational reasons when demonstrating the FRF plots at varying frequencies doesn't seem like the most adaptable solution for real life settings. It could miss closely spaced modes, and in the presence of noise perhaps introduce spurious modes.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Are there any plans to extend the nature of the dataset? Perhaps to different plate geometries (I see you have referenced this point in your limitations section -- but do you have intentions to move in this direction)? Perhaps also to include the effect of damage in the plate? A simple model could be punched holes in the plate, representing a highly discrete form of damage, which could then also extend the range of tasks available in your benchmark to binary classification.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - Since it would appear that the main contribution is the benchmark dataset generation, I believe a good step forward would be to generate it even more comprehensively with respect to various machine learning tasks. For example, right now FRF and velocity field prediction is performed. Perhaps also the transfer learning section can be enhanced. A small study is performed with respect to subset design spaces, but in structural engineering a major point would be something akin to taking a pre-trained model on bridge A and zero/one-shot applying it to bridge B -- a different structure. This may be akin to demonstrating that you can take models pre-trained on a square plate and transfer them onto an entirely different geometry such as a triangular or circular plate.
- It is understood that training took place with a lot of available data. In engineering, we often do not have that much data on real-world structures. It would be good to show that a model pre-trained on the suggested computational dataset can be extended to a real-world model with an absolutely minimal amount of real-world data (low data cardinality).
- Moreover, in terms of pre-training, it is understood that, for example, even the available computational training data for fluid flow simulations within Navier-Stokes (Large Eddy Simulations) is scarce. It would be good to analyze the absolute minimal amount of computational training data necessary to produce maximal usability from the proposed operator training models, especially as it pertains to model transferability.
I understand these are all very difficult problems within the engineering field, but some commentary and addressing of these problems would be good.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thanks for taking the time to write your detailed and helpful review and for recognizing the contribution of our benchmark, given that there are scarcely any publicly available benchmarks in the mechanics domain.
**General Comments:**
> The necessity of the FQO-UNet model having to interpolate [...] doesn't seem like the most adaptable solution for real life settings.
We are able to drop this constraint. Please see Section 3 of our general answer for details.
> It appears that the proposed numerical plate simulations are still somewhat simplistic, as for example there is no high frequency noise present in FRFs
We agree that considering noisy data is an interesting task for applications like structural health monitoring, where sensor data is available. However, we target design space exploration / optimization of a numerical model. In this setting, we do not have noisy data, but deterministic simulation data in a high dimensional design space.
> Moreover there is no investigation of difficult FRF phenomenon which may arise in real life such as the appearance of closely spaced modes [...]
Conceptually, our model directly predicts deflection shapes, which are a superposition of all modes. Even for frequency responses with closely spaced peaks, the underlying physics and governing equations apply regardless.
For modeling frequency responses with dense modes, our frequency query approach is suitable since it allows us to sample the FRF at arbitrary resolution.
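To illustrate why per-frequency queries help with dense modes, consider the following toy sketch (the two-peak response below is invented for illustration and is not our model): a coarse frequency grid merges two closely spaced peaks into one, while densely querying the same range resolves both.

```python
import numpy as np

# Toy illustration of the frequency-query idea (invented response, not
# the paper's model): because the surrogate is queried one frequency at
# a time, the FRF can be sampled at arbitrary resolution, so closely
# spaced peaks that a coarse grid merges remain resolvable.

def toy_response(freqs, peaks=(100.0, 100.4), width=0.1):
    """Sum of two Lorentzian peaks 0.4 Hz apart (illustrative only)."""
    freqs = np.asarray(freqs, dtype=float)
    return sum(1.0 / (1.0 + ((freqs - p) / width) ** 2) for p in peaks)

def count_local_maxima(values):
    """Count strict interior local maxima of a sampled curve."""
    return sum(
        1 for i in range(1, len(values) - 1)
        if values[i] > values[i - 1] and values[i] > values[i + 1]
    )

coarse = toy_response(np.linspace(99.0, 101.5, 6))    # ~0.5 Hz spacing
dense = toy_response(np.linspace(99.0, 101.5, 501))   # ~0.005 Hz spacing
```

On the coarse grid only one peak is detected, while the densely queried grid recovers both modes.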
> The proposed FQO-UNet does not feel like a strong original contribution. It is constructed very much from "commercial off the shelf/plug-and-play"-type NN components.
We kindly disagree on this point. Our FQO-UNet results from systematic experimentation and exploration specifically for vibration prediction. It involves established as well as unconventional components, but combining them into a working system is far from trivial. Several choices are crucial:
* Transformation procedure between frequency response and velocity fields: The choice of our exact transformation (Eq. 1 in paper and code) is crucial to avoid numerical issues and enable gradient flow.
* Balancing compute between the shared embedding of the plate geometry and the frequency-specific decoder.
* Strategy for incorporating scalar geometry parameters and the frequency query: the geometry parameters are inserted in earlier layers to affect the shared embedding of the plate geometry. FiLM layers lead to better generalization than concatenating parameters or sinusoidal embeddings.
* Use of self-attention: the global information exchange through self-attention supports the prediction, since vibration patterns depend on the global plate geometry.
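As an illustration of the FiLM mechanism (feature-wise linear modulation; the shapes and names below are generic assumptions, not our exact implementation), the conditioning vector is mapped to a per-channel scale and shift that modulate a feature map:

```python
import numpy as np

# Generic FiLM sketch: scalar conditioning inputs (e.g. geometry
# parameters, frequency query) produce a per-channel scale gamma and
# shift beta that modulate a feature map. All shapes and names here
# are illustrative assumptions, not the paper's implementation.

rng = np.random.default_rng(0)

def film(features, cond, w_gamma, b_gamma, w_beta, b_beta):
    """features: (C, H, W) feature map; cond: (D,) conditioning vector."""
    gamma = cond @ w_gamma + b_gamma              # (C,) per-channel scale
    beta = cond @ w_beta + b_beta                 # (C,) per-channel shift
    return gamma[:, None, None] * features + beta[:, None, None]

C, H, W, D = 4, 8, 8, 3
features = rng.standard_normal((C, H, W))
cond = rng.standard_normal(D)
w_gamma = rng.standard_normal((D, C)); b_gamma = np.ones(C)
w_beta = rng.standard_normal((D, C)); b_beta = np.zeros(C)
out = film(features, cond, w_gamma, b_gamma, w_beta, b_beta)
```

With a zero conditioning vector and the bias initialization above, the layer reduces to the identity, which is a common way to make the modulation start out neutral.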
Our model clearly outperforms established architectures, like the Fourier Neural Operator and DeepONet. We believe that building on established components is indeed one of the reasons for the impressive progress in (applied) machine learning in recent years.
If you disagree with this perspective, we gladly elaborate on this.
**Questions:**
> Are there any plans to extend the nature of the dataset? Perhaps to different plate geometries [...]? Perhaps also to include the effect of damage in the plate? [...]
We extended the G5000 setting by variable boundary conditions and force position, please see Section 1 of the general answer. In the future, we intend to move to more complicated structures, such as lightweight plate structures with additional frames and stringers. Modeling structure-fluid interactions to study sound radiation is an additional area of interest.
Damage prediction would require taking the frequency response as an input for a classifier. While this is an important application, it is not directly compatible with our model design which maps from plate geometries to frequency responses.
Thus, we mainly consider the design optimization task (general answer Section 2) which is directly applicable to our existing dataset.
**Limitations:**
We identified two main points in the answer: transfer learning/domain shift and sample efficiency.
Transfer Learning / Domain Shift:
> Perhaps also the transfer learning section can be enhanced. [...] taking a pre-trained model on bridge A, and zero/one shot applying it to bridge B [...] pre-traind model [...], can be extended to a real world model [...]
We concur on the significance of transfer learning and conduct an additional experiment assessing transfer learning from a more constrained dataset to a more general dataset (see Section 1 of the general answer).
We hope you understand that addressing transfer to entirely new geometries and real world data is not possible within the scope of this paper.
Sample Efficiency:
> [...] It would be good to analyze what is the absolute minimal amount of computational training data [...]
We agree that data efficiency is crucial for engineering problems and generate a new dataset consisting of 50,000 plate geometries (V5000 setting) but only 15 frequency evaluations per geometry. We train models on subsets with a fixed amount of 150,000 data points by varying the ratio of frequencies to plate geometries (Table 4, Figure 2 of the PDF). Our original dataset has 1.5 million data points.
Reducing the frequencies per geometry drastically increases the data efficiency of our method. With a tenth of the data points, the MSE metric approaches the value of the original dataset, with slightly less favorable results for the other metrics. When training with the whole new dataset, half the size of V5000, the MSE is less than a third of our original model's. Note that this strategy is only compatible with a frequency-query approach.
**Table 4**
|# freqs.|# geometries|MSE|EMD|E_Peaks|E_F|
|---|---|---|---|---|---|
|300|500|0.48|13.16|0.32|5.7|
|150|1,000|0.31|11.18|0.22|4.3|
|30|5,000|0.12|8.74|0.14|2.1|
|10|15,000|0.10|11.18|0.17|1.6|
|3|50,000|0.10|11.47|0.20|1.4|
|15|50,000|0.02|3.61|0.04|0.08|
|original||0.08|4.24|0.07|1.7|
---
Rebuttal Comment 1.1:
Comment: I believe the majority of my queries have been addressed adequately in the reply. I think the detail you mentioned of
> Transformation procedure between frequency response and velocity fields: The choice of our exact transformation (Eq. 1 in paper and code) is crucial to avoid numerical issues and enable gradient flow.
was somewhat lost on me upon reading, and it would be good if more detail and significance is talked about in relation to Eq 1, especially in the enabling of gradient flow.
I agree that a lot of what I am asking for (or have general queries about) tend to be more pushing towards the boundaries of realistic engineering settings, and that this current paper tries its best to remain grounded and fundamental in its approach, so that it may be used as a spring board for future studies. And that therefore some of the ideas "I floated" may be too much too soon, however I appreciate the effort the authors went through to try to address these as such and to provide details of a few additional experiments.
Based on this I am willing to move my score from 4 --> 5; however, I hope that the details of this new experiment are placed in the camera-ready version of the paper, that the significance of Equation 1 is expounded upon a little more, and that the "fundamental-ness" of the paper (i.e., the addressing of queries such as my own, including noise / SHM applications / general real-life issues and how they don't readily pertain to this particular study) is discussed in some minor capacity so that readers better understand the overall angle of this paper.
---
Reply to Comment 1.1.1:
Title: Thank you for your answer!
Comment: Thank you for your answer and raising the score. We appreciate your perspective of our paper as a 'spring board' for deeper investigations of more advanced settings and plan to investigate such challenges in the future.
The additional experiments and discussions will be included along with further explanation of the transformation (Eq. 1 + code) in the camera-ready version. One aspect is that the log space enables the loss (and thus gradients) to be sensitive in off-peak regions of the frequency response. | Summary: This work presents a benchmark dataset of 12,000 rectangular plate geometries with different beading patterns and their corresponding vibrational responses. The authors suggest that the dataset can be used to construct surrogate models to aid in the design and optimization of plate structures for noise reduction purposes. In addition to the dataset, the authors propose evaluation metrics to measure prediction accuracy: mean squared error, earth mover's distance, and peak frequency error. Lastly, they introduce a new neural network architecture that can map plate geometry and excitation frequencies to vibration patterns.
Strengths: The paper introduces a novel dataset and neural network architecture.
Weaknesses: The study is limited in scope regarding the forcing terms, material properties, and boundary conditions. It focuses exclusively on rectangular plates with elliptical and linear beading patterns.
It is unclear how the beading patterns are integrated into the Mindlin differential equation.
Extending the methodology to more complex systems is not straightforward, and the assumption of simply supported boundary conditions is restrictive.
The practical significance of predicting vibration patterns is not well-justified. In many applications, the frequency response of the plate is more critical than the detailed vibration modes.
The motivation for the study is weak and lacks clear justification.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can the authors please justify their comparisons to the baselines?
Why are these specific architectures chosen?
How do they differ?
Why do these networks have gaps in performance?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Although the authors outline many limitations of their approach, they do not address the challenge of extending their neural network architecture to different geometries. The current design relies on an image-like grid, which is not easily adaptable to other shapes or forms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thank you for taking the time to write your thoughtful review. We appreciate that you recognize our dataset and neural network architecture as novel, as the area of engineering and more specifically vibration prediction is not well established in the ML community. Please also take a look at our general answer, where we detail some additional results and discuss extensions to our dataset.
**General Comments:**
>The study is limited in scope regarding the forcing terms, material properties, and boundary conditions.
Please note that the G5000 setting already consists of variable material properties (see Tab. 4 in the appendix). However, we agree with your assessment that a flexible model is desirable and extended the G5000 setting by variable boundary conditions and force position. At this complexity level of our dataset, there are already several challenging methodological questions in e.g. data efficiency, design optimization and transfer learning specifically for frequency response data. We believe developing an understanding of these issues is easier with our extended dataset than a more complex dataset (please see Section 1 general answer for further discussion).
**Possible misunderstandings and lack of clarity:**
> It is unclear how the beading patterns are integrated into the Mindlin differential equation.
Technically, beading patterns define the mid-surface of the global plate geometry. This geometry is discretized via FEM using shell elements. Shell theory combines a plate formulation (the Mindlin differential equation) and a disk formulation for in-plane loads. Thus, the beading patterns define the global position and orientation of the shell elements. Hence, the beading patterns do not have to be incorporated explicitly into the Mindlin differential equation, since the equation models the local behavior of a shell element.
>The practical significance of predicting vibration patterns is not well-justified. In many applications, the frequency response of the plate is more critical than the detailed vibration modes.
We agree that from a practical standpoint, the frequency response function (FRF) is an important quantity. Because of this, we evaluate our method only on FRFs (metrics). However, for training, a clear result from our experiments is that FRF predictions become more accurate when predicting velocity fields and then directly calculating the FRF from the fields. We hypothesize that velocity field prediction acts as a regularizer and prevents (some) physically impossible solutions.
Furthermore, from an acoustics perspective, the vibration pattern is crucial to predict the sound radiation characteristics. The normal velocity field of a vibrating structure induces pressure waves in the surrounding fluid. Assuming a weak coupling of the fluid-structure interaction, our DL model could be used to model the Neumann boundary condition of an adjacent fluid domain. This is only possible if we predict the velocity field.
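For intuition, the field-to-FRF step can be sketched as follows (the exact transformation is Eq. 1 in the paper; the aggregation below is only an assumption for illustration, not our precise formula):

```python
import numpy as np

# Illustrative sketch of deriving a frequency response from predicted
# velocity fields: aggregate the field at each queried frequency into a
# scalar (here: mean squared velocity magnitude, in log scale). This
# formula is an assumption for illustration; the paper's exact
# transformation is Eq. 1.

def frf_from_fields(velocity_fields, eps=1e-12):
    """velocity_fields: (F, H, W) velocity magnitudes, one field per
    queried frequency. Returns an (F,) frequency response."""
    mean_sq = np.mean(velocity_fields ** 2, axis=(1, 2))
    # log scale keeps off-peak regions visible to the loss/gradients
    return 10.0 * np.log10(mean_sq + eps)
```

A uniform unit-magnitude field maps to roughly 0 dB, and uniformly louder fields map to higher response values, so the ordering of vibration levels is preserved through the transformation.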
> The motivation for the study is weak and lacks clear justification.
As acknowledged by other reviewers (1y6b, 75jo), vibration prediction is an important problem. Specifically, panel structures, which can be simulated using our approach, are used in mobility, civil engineering, renewable energy technologies, household appliances, and many more. By providing a method for faster vibration mode prediction, we can contribute to noise-reduced designs. Our benchmark dataset and method represent a first step in applying deep-learning-based surrogate models to these vibration prediction problems.
Thank you for drawing our attention to these points. This will enable us to improve the relevant sections of our paper.
**Questions:**
>Can the authors please justify their comparisons to the baselines? Why are these specific architectures chosen? How do they differ? Why do these networks have gaps in performance?
There have been some prominent and successful works in the machine learning for differential equations community, where we position our work in a broader sense. The Fourier Neural Operator and DeepONet are arguably among the most well-known methods in this space, which is why we chose them as baselines. The k-nearest-neighbor approach serves as a tool to gauge the complexity of the learning task: we expect a deep-learning-based method to perform significantly better than k-nearest neighbors.
The remaining architectures were constructed to investigate central architecture components based on our analysis of the problem (our paper, Section 3, Q1 to Q3). Our reasoning for assuming, e.g., that the frequency-query approach is better than predicting the response for all frequencies at once is that there is intense variation between consecutive frequencies, and with frequency queries the neural network can focus on a single frequency per forward pass. In contrast to the baseline methods, our specific architectures are designed for the problem at hand and have useful inductive biases such as the frequency-query approach.
Feel free to ask if you have further questions regarding our choice of baselines.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response.
I have revised my score. | Summary: This paper proposes a new surrogate deep learning model, the Frequency-Query Operator (FQO), designed to study structural vibrations in excited plates by mapping these plates as well as specific excitation frequencies to the resulting vibrations patterns. It introduces a new benchmark featuring 12000 plate geometries with varying geometric and material properties, and associated velocity field responses to excitations computed numerically using the finite element method. The FQO’s performance is then compared with numerous other architectures.
Strengths: - The tackling of a new time-independent problem in solid mechanics using deep learning
- The introduction of a new FQO architecture to infer the structure vibrations, showing better performance than other classic neural operator models and a high speedup when compared against classical FEM.
- A new 12,000 element benchmark data-set of plate responses to various frequency excitations, for different plate geometries and material properties
- The paper is well written and organized
Weaknesses: - Only very simple plate geometries (rectangular shaped) are considered here.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the current architecture scale to larger grids (finer FEMs), which would be used for instance to capture/represent smaller beadings, both in terms of accuracy and runtime?
- Would adding a physics loss based on the equation be feasible and yield better accuracy?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - New encoder/decoders would need to be designed to deal with more complex geometries
- Costly FEM simulations would need to be run on complex geometries to extend the dataset in this case
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thanks for taking the time to write your informative and helpful review. Your recognition of the contributions of our work as well as finding it well written and organized is highly appreciated!
Please also take a look at our general answer, where we detail some additional results and discuss extensions to our dataset. There, we also discuss our reasoning for keeping our plate geometries comparatively simple. As you say, to extend the dataset to more complex geometries, costly simulations would need to be run. Our research currently focuses on data-efficiency, including active learning methods to keep the simulation costs reasonable, while extending our approach to more complex geometries.
**Questions:**
>How does the current architecture scale to larger grids (finer FEMs), which would be used for instance to capture/represent smaller beadings, both in terms of accuracy and runtime ?
Predicting the velocity fields at a higher resolution requires adding additional upscaling layers, which would cause moderate increases in runtime. The speed-up of one neural network prediction compared to one FEM simulation will increase, since the computational cost of FEM grows drastically with the number of degrees of freedom. In terms of accuracy, we would not expect large changes in prediction quality. As the velocity fields are spatially smooth, no additional information would be added. In the investigated frequency range, very small beadings are expected to have only very small effects on the vibration patterns and therefore not to affect the prediction accuracy much.
>Would adding a physics loss based on the equation be feasible and yield better accuracy ?
Adding a physics-based loss and setting up a physics-informed neural network (PINN) is a compelling idea and might yield better accuracy. However, using a PINN for shell structures is not straightforward, since it requires solving the shell equations on a non-Euclidean domain. Current research has investigated such PINNs for solving a single shell model and compared them to classical FEM. However, extending the approach to varying geometries, like our different beading patterns, remains an open challenge.
We will add this discussion to the paper.
---
Rebuttal Comment 1.1:
Title: My questions have been addressed
Comment: Thank you for these clarifications and for the rebuttal. I maintain my score as is for the review. | Summary: The paper reports the development of a surrogate model for predicting structural vibrations. The paper reports that their method outperforms physics-informed architectures such as DeepONet.
Strengths: The authors tackle an interesting and important problem in the engineering domain: the prediction of structural vibrations. This problem has traditionally been solved by means of numerical techniques, which are computationally costly. A surrogate model can improve the time to solution.
Weaknesses: It is not clear whether such a surrogate model is robust to domain shift, i.e., what happens if we expose the model to structural vibrations at a different scale (lower or higher) compared to what was used in the training phase.
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the network learn the vibration physics or just the mapping from input data to output? Does it learn to solve a second-order vibration ODE, or is it imitating the input training data?
Can the authors explain how the surrogate model can predict the structural vibrations of structures that have not been seen in the training phase?
How can one use the model to predict for larger-scale geometries? Have the authors utilized the model for extrapolation?
How would the model behave in the presence of a domain shift of the input data?
After the model is built, can it be used for frequency response prediction of a different system, such as a rotor dynamics system or a ball bearing system?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I still have doubts about the generalization and performance of the model beyond the domain of the training dataset. Can the authors perform experiments to evaluate the performance of the model on a system other than plates, such as rotary machine data or any other vibration data the authors can identify?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thank you for taking the time to write your thoughtful review. We are pleased that you also consider structural vibrations to be an interesting and important problem in the engineering domain. We discuss generalization and domain shift in more detail in Section 2 of our general answer.
**Questions:**
> Does the network learn the vibrations physics or just learn the mapping from the input data to the output? Does it learn to solve a second order vibration ODE or it is imitating the input training data?
Our numerical model can be considered a mapping from the input space to resulting vibrations, which is approximated by our network.
Our results on generalization, e.g., transfer learning and finding new beading pattern designs with properties outside the training data distribution (see General Answer Section 2), suggest that some physical knowledge is acquired.
> Can the authors explain how the surrogate model can predict the structural vibration of structures that has not been seen in the training phase?
Conceptually, analogously to other application fields, with the right inductive biases (e.g., predicting velocity fields, the simplicity bias of SGD) a neural network learns a function that approximates the ground-truth function (in our case, the numerical simulation).
Empirically, we find our model to generalize to unseen beading patterns within distribution (Table 1 general answer) and out-of-distribution (Table 2).
> How the can one use the model for scaling predicting for higher scale geometries?
> Have the authors utilized the model for extrapolations? How would the model behave in presence of domain shift of the input data?
We agree that these are interesting and challenging questions. In Section 4 of the paper, we split our dataset (based on the number of mesh elements that are part of a beading or not) and trained on either the 'more beadings' or the 'less beadings' subset while evaluating on the other. Although the performance decreased, our networks still performed better than several baseline methods despite never having seen data from this distribution.
Further, we report results from taking a pre-trained model on V5000 and fine-tune it on G5000 (Table 2 in Section 2 general answer). As G5000 includes a substantially larger design space than V5000, the gain in performance is promising for the potential of fine-tuning a model under domain-shift.
Without fine-tuning, we believe large extrapolations will fail with the current setup.
>After the model is built, Can the model be used for frequency response prediction of a different system such as rotor dynamics systems or a ball bearing system?
To extend our model to systems like ball bearings and rotors, we would have to construct it differently. Our network depends on the specific input space defined by the beading pattern and scalar parameters of a plate. A true foundation model capable of simultaneously modeling plates, rotors, and ball bearings would need to map all these systems to a common input space. Unfortunately, we are not yet at this point. However, our method could be used to simulate plate components in such systems, where rotating components can be modeled as harmonic excitations.
---
Rebuttal Comment 1.1:
Title: My questions are answered and addressed. Additional comments added.
Comment: Please make sure that in the revised version you include a section addressing the responses to the questions asked, which would be very beneficial for future readers to understand the limitations of the approach (domain shift evaluation, applicability to other datasets, scalability limitations, etc.).
Please, if possible, identify a method to quantify the difference between the distributions of the training dataset and the tested unseen dataset, so that it supports your statement on the applicability of the NN to a sufficiently different unseen dataset. Metrics for quantifying the distance between two distributions, such as relative entropy, might be useful.
Please also describe completely the architecture of the tested DeepONet model and the details of the training and evaluation of the physics-informed network. This is of utmost importance for the reproducibility of the research by others and for ease of verification of the model's contributions.
A section on the mentioned future work could also open new interesting topics for the community.
Thanks for the rebuttal. I keep my score as is for this review.
---
Reply to Comment 1.1.1:
Title: Thank you for your answer!
Comment: Thank you for your answer. We will make sure to include any missing information in our revised version. We trained DeepONet in a data-driven manner, as described in Lu et al. (2019, 2021), where it was introduced. This will be further described in our revised version. Thank you as well for pointing out the possibility of quantifying distribution differences between datasets; we will investigate this. | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you for the many valuable and thoughtful comments. We are pleased that the reviewers recognize the value of our novel benchmark (**ifbc, 1y6b, YeHT**) and method (**YeHT, ifbc**) for the important problem (**1y6b, ifbc**) of structural vibration prediction, as well as finding the paper (**ifbc**) and code (**75jo**) well-written.
First, we would like to comment on three key points mentioned by multiple reviewers:
## 1. Dataset complexity: Extension to more complex geometries, boundary conditions, or forcing terms (**ifbc, YeHT, 75jo**)
More complex geometries are an exciting research avenue. We deliberately constrained ourselves to rectangular plates to focus on methodological issues, as an investigation of data scarcity, design optimization and transfer learning (also suggested by the reviewers) is already challenging in this setting. This enables us to benefit from the comparatively fast numerical simulations for rectangular plates and obtain insights which we expect to generalize to more complex and less constrained geometries.
Therefore, in the scope of this paper, we opted for retaining the current plate geometries which have the direct engineering application of metal beading but extend the G5000 dataset (G5000 new) by two new aspects: (a) variable rotational stiffness at the boundary, allowing us to predict simply supported as well as clamped boundary conditions, and (b) a variable point force position. This extension makes the prediction task more challenging:
**Table 1**
| |MSE |EMD |E_Peaks|E_F|
|----|----|----|-------|---|
|G5000 - original|0.09|4.94|0.07|2.5|
|G5000 - new |0.11|7.47|0.08|3.1|
## 2. Deeper investigation of generalization, data efficiency and transfer learning (**1y6b, 75jo**)
Generalization and robustness to domain shift are critical for surrogate models to be applicable when the mechanical model changes. We reported results on transfer between different subsets of our dataset in Table 2 in the paper. To further investigate generalization we show new results when using our FQO-UNet for (1) design optimization in conjunction with a guided diffusion method (see Figure 1 in PDF), (2) an experiment on minimizing the amount of training data for a given prediction quality, and (3) transfer learning between V5000 and G5000:
1. For design optimization, the goal is to find beading patterns with the lowest possible mean frequency response in predefined frequency ranges. Based on the gradient information from the surrogate model, our guided diffusion method is able to generate beading patterns with a mean frequency response well below any plate in the training data (verified by numerical simulation). This highlights the potential for generalization beyond the training data distribution, as the resulting beading patterns look different from the training data, with more variety in thickness, form and number of beadings.
2. We generate a new dataset consisting of 50,000 plate geometries but with only 15 frequency evaluations per geometry. Reducing the frequencies per geometry drastically increases the data efficiency of our method. With a tenth of data points compared to our original dataset, the MSE metric approaches the original value (Figure 2 in PDF, and answer to reviewer **75jo**).
3. For the transfer learning experiment, we took a model trained on V5000 as an initialization for the G5000 setting. When initializing with the pretrained model, the performance improves in all metrics (Table 2).
**Table 2**
| | |MSE |EMD |E_Peaks |E_F|
|----|-----|-----|-----|-----------|---|
|G5000 | from scratch |0.086|4.94 |0.068 |2.5|
|G5000 |V5000 fine-tuned|0.061|4.01 |0.053 |1.9|
|G5000 - new | from scratch |0.111|7.47 |0.079 |3.1|
|G5000 - new | V5000 fine-tuned|0.095|4.63 |0.078 |1.9|
## 3. Limitations of FQO-UNet: Grids and Frequency (**YeHT, 75jo**)
Indeed, our architecture requires a regular grid structure with respect to the input plate, constraining the input geometries (see a discussion of this setting in general answer 1).
However, this limited setting enables us to focus on understanding which aspects are relevant for frequency response prediction. For example, some of our findings are:
* The frequency query approach leads to better predictions. It also enables evaluating the FRF at arbitrary frequencies (e.g., in 0.1 Hz steps or beyond the training frequency range).
* Predicting a field quantity (in our case velocity) first and then directly calculating a frequency response from it is more data efficient than directly predicting the frequency response.
* Convolutions pose a beneficial inductive bias for predicting the field quantity.
These insights can now inform the development of more flexible neural network architectures.
As mentioned by **75jo**, we originally mapped to five velocity fields per query. This constraint is technically not necessary and was only introduced to speed up training.
This goal can also be achieved by training on a subset of frequencies per geometry per batch, without frequency bundling. The results (Table 3) are close to identical, while frequency subset training is around 2 times faster.
**Table 3**
| |MSE |EMD |E_Peaks|E_F|
|-----|-----|-----|-----------|---|
|V5000 - original|0.09|3.90|0.08 |1.8|
|V5000 - one prediction per query |0.08|4.24|0.07 |1.7|
# Conclusion
In conclusion, we consider this work a solid foundation for the vibroacoustics and ML communities to make progress in applying ML methods. Interdisciplinary work like ours first needs to establish common problems and a common language, which we are grateful the reviewers acknowledge. With the design optimization task and the exploration of data efficiency, we highlight two promising future research directions for our work.
Pdf: /pdf/ad9570f12949b3ab28f3541ba61358dc2d7da1c0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable | Accept (poster) | Summary: This paper studies reconstruction attacks on machine unlearning. The authors propose a reconstruction attack that can accurately recover the deleted sample given the pair of linear models before and after sample deletion. This is made possible by leveraging the closed-form single-sample training algorithm for linear regression as well as the ability to accurately estimate the covariance matrix of the training data from a public dataset. They also extend the attack to the setting where the model consists of a (fixed and known) embedding function, followed by a trained linear layer.
Strengths: - The problem studied in this paper is interesting.
- The proposed attack achieves good performance for linear regression models.
Weaknesses: - The proposed attack is based on strong assumptions. This paper assumes that the attacker has knowledge of the training data distribution, the model’s loss function, and the embedding function. However, this information is usually not published by the model maintainer. It is unclear how the attacker can obtain this information in practice. The authors should provide more details to justify these assumptions.
- The proposed attack has limited applications. This paper mainly focuses on linear models, and it is unclear whether the proposed attack can maintain good performance with more complex models.
- It is unclear which unlearning method is adopted in the experiments. Will the attack performance vary when different unlearning methods are applied?
- The proposed attack relies on a public dataset with the same distribution as the training data. The authors do not provide information about the size of this dataset in their experiments. It would be more convincing if they could evaluate the effect of dataset size on the attack performance.
Technical Quality: 2
Clarity: 3
Questions for Authors: See above weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *[Response to Weaknesses 1]*
The assumption of having access to the training data distribution is standard throughout the membership inference literature. Because we are trying to compare the risks of data deletion to the known risks absent data deletion, we adopt the same model. In general we view this as an appropriately cautious assumption about what an attacker might know, especially when attacks are being used to assess model risk. You are right that it might sometimes be difficult for an attacker to find an appropriate sampling distribution, and understanding how much it can be relaxed is an important question in general for the entire literature on privacy attacks; but one that we view as outside the scope of this paper.
In our experiments, we assume the most commonly-used loss functions, including MSE, logistic loss, and hinge loss, and show positive results for each. Random fourier features are also widely used in areas where latency is a concern.
----
*[Response to Weaknesses 2&3]*
An important aspect of existing machine unlearning approaches is that almost all of them aim to approximate full retraining with reduced computational cost. These approximations are proposed due to the intense compute required for full retraining, especially for large neural networks and LLMs.
Our study is explicitly focused on exposing the risk present in even very simple models, such as linear regression, logistic regression, SVMs, and feature augmentation using random Fourier features. For such models, full retraining is feasible, and would be the expected solution to the data deletion problem, as computational approximations to retraining are not needed.
Full retraining for a single data deletion in linear regression is equivalent to updating the model parameters using Newton's update, as we discussed in Sec. 3.2. By leveraging this concept, our attack achieves almost perfect reconstruction, and the only source of error is the estimation of the Hessian matrix using public data. For more complex models, a number of popular unlearning methods approximate full retraining by taking a Newton update. When we apply our attack to more complex models, we act as if full retraining results in the model that is derived from taking a Newton step --- and the reason our reconstruction performance degrades is simply that this approximation becomes imperfect. However, if rather than full retraining, the machine unlearning method employed was one that simply took a Newton step, our reconstruction would again be near perfect. We will elaborate on this point in the revision.
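This equivalence can be checked directly. The following is our own minimal numpy sketch (not the paper's code), assuming squared loss: because the reduced objective after deleting one sample is quadratic, a single Newton step from the original parameters lands exactly on the fully retrained solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Least-squares fit on the full dataset.
theta_full = np.linalg.solve(X.T @ X, X.T @ y)

# Exact retraining after deleting the first sample.
Xm, ym = X[1:], y[1:]
theta_retrain = np.linalg.solve(Xm.T @ Xm, Xm.T @ ym)

# One Newton step on the reduced objective, starting from theta_full.
# Since the reduced loss is quadratic, a single step reaches its exact minimizer.
H = Xm.T @ Xm                          # Hessian of the reduced loss (up to a factor of 2)
g = Xm.T @ (Xm @ theta_full - ym)      # gradient at theta_full (same factor, cancels)
theta_newton = theta_full - np.linalg.solve(H, g)

print(np.allclose(theta_newton, theta_retrain))  # True
```

The attack's only error in practice comes from replacing `H` with an estimate from public samples, as discussed above.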
----
*[Response to Weaknesses 4]*
As we mention in Sec. 4, in all our experiments, the dataset is split into two halves; the first half is used for training the private model, and the second half is used for attacking the trained model.
The public dataset is used to estimate the Hessian matrix, which is the covariance matrix for linear regression. The quality of the estimation increases asymptotically with respect to the number of samples in the public dataset, and it is the only source of error in attacking linear regression, thus, the attack performance behaves similarly to the quality of the estimation. This grows with the dimension of the model; we can elaborate on this point in the revision. | Summary: This paper presents reconstruction attacks against Machine Unlearning in the following sense: the attacker is assumed to have access to a model's parameters before and after the removal of a single data point; they then produce a guess for this point, which is evaluated in terms of its cosine similarity to the original data point.
Strengths: This paper is nice and easy to read. The description of the attacks is easy to follow, and the theoretical derivations are interesting.
The specific application of reconstruction attacks against machine unlearning is, to my knowledge, novel; it stems from a large body studying the privacy of machine unlearning.
The presented problem is also well-scoped, and this research opens the space for new studies in the area.
Weaknesses: 1. The threat model is unrealistic: it is far-fetched to assume that the attacker has access to the model parameters (before and after unlearning), and yet at the same time to assume that they cannot see the target point x in this process. Now, strong assumptions such as this one have been used in prior literature, but usually their purpose is to set upper bounds on the adversary when proving the security of defenses. That is not the case here.
A second assumption that is quite strange is that the attacker somehow knows the model's parameters, yet they don't know the training set Xpriv, and they need to rely on a public one.
2. The authors focus on the very limited and quite simplistic scenario specified above. Yet they had various options for exploration:
- what if the attacker only has black-box access to the model? Based on similar prior work that evaluated both white- and black-box access (e.g., Balle et al.), I would expect your attacks to transfer well.
- You mentioned DP in several places, yet provided no evaluation of said defense: what parameter set can prevent these attacks?
- What if more than 1 points were unlearned at once? Would your attack apply?
Technical Quality: 4
Clarity: 4
Questions for Authors: Can you please explain your threat model choices (see Weaknesses above)?
Typo: "To simply the notation"
Baseline: an interesting baseline to consider would be the point from the public set that is closest to the target point in the private set; this would intuitively be a better baseline than MaxDiff. Could your methods beat this baseline?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: These were appropriately discussed, although the authors should better emphasize that the threat model is not realistic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *[Response to Weaknesses]*
We take the position that security assurances should be based on minimal assumptions. Here, we view the assumption that an attacker who has API access to the model does -not- have access to model parameters to be dangerously strong. Consider for example a d dimensional linear model, as we study initially in our work. Just from query access, an attacker can recover the model parameters by querying the model on d linearly independent points and solving a system of linear equations. This requires no knowledge of the data distribution. So, there is no meaningful difference between white box and black box access in such scenarios.
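This parameter-recovery argument can be made concrete with a small hypothetical sketch (the secret model and `query` function below are our own illustration, not the paper's setup): d queries on linearly independent points suffice to recover a d-dimensional linear model exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
w_secret = rng.normal(size=d)   # parameters the attacker never sees directly

def query(x):
    """Black-box API access: returns only the model's prediction."""
    return float(w_secret @ x)

# Query the model on d linearly independent points (here, the standard basis).
Q = np.eye(d)
responses = np.array([query(q) for q in Q])

# Solve the linear system Q w = responses to recover the parameters.
w_recovered = np.linalg.solve(Q, responses)
print(np.allclose(w_recovered, w_secret))  # True
```

A model with a bias term would need d+1 queries on affinely independent points, but the principle is the same: for linear models, query access and parameter access are equivalent.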
What about for more complex models? Even there the boundary between white-box and black-box access has been blurring. For example recent work [1] has shown that it is possible to reconstruct the embedding matrix from a production LLM model with only black-box access. And of course, open source models explicitly release parameters with each version of the model.
As for the fact that the attacker does not know the training set: first we note that the goal of a reconstruction attack is to recover data from the training set. Even for linear models, in which recovering the parameters given only black-box access to the model is trivial, it is in general not possible to recover the training set from the parameters of a single linear model. To see this, note that many datasets produce the same set of parameters: linear regression parameters are, e.g., invariant to rotations of the dataset. Moreover, a linear model can be expressed with only O(d) bits of information (after discretizing weights), whereas a dataset with n datapoints and d features requires Omega(n * d) bits to represent. For larger models, existing state-of-the-art methods such as [2,3] rely on creating a set of samples such that the realized model parameters satisfy the KKT conditions of the loss function (the parameters minimize the loss at the recovered samples). The success of these approaches requires that models are trained on a small number of samples (e.g., 500).
[1] Carlini, Nicholas, et al. "Stealing part of a production language model." arXiv preprint arXiv:2403.06634 (2024).
[2] Haim, Niv, et al. "Reconstructing training data from trained neural networks." Advances in Neural Information Processing Systems 35 (2022): 22911-22924.
[3] Buzaglo, Gon, et al. "Deconstructing data reconstruction: Multiclass, weight decay and general losses." Advances in Neural Information Processing Systems 36 (2024).
----
Re: multi-sample deletion, our approach recovers an approximation of the gradient sum over all the deleted samples. A determined adversary can poison an unlearning round by requesting deletion of n-1 samples known to the adversary, and so can recover the gradient of the single unknown point from this sum. Nevertheless, we agree that multi-sample recovery is an important question that our work does not address, and we think this is one of the most interesting questions for future work arising from our paper.
On defenses: differential privacy guarantees compose, so given two models (from before and after a deletion) each trained with $\epsilon$-differential privacy, we have the guarantees of $2\epsilon$-differential privacy. When examples are drawn from a distribution uniform on some data domain $\mathcal{X}$, then $\epsilon \leq \Omega(\log|\mathcal{X}|)$ is enough to provably prevent reconstruction. More generally, differential privacy bounds the advantage that an adversary has attempting to reconstruct a sample given the model parameters (compared to their success rate ``just guessing''), and so precise guarantees depend on the entropy of the data distribution (as low-entropy distributions allow high rates of "reconstruction" without the adversary even needing to see the model). These are standard/generic properties of differential privacy, which is why we did not devote space to it, but we are happy to elaborate in the revision.
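The composition property invoked here is the textbook basic-composition bound; written out for two independently trained mechanisms (our own illustration, not anything specific to the paper):

```latex
% If M_1 and M_2 are each \epsilon-DP, then for neighboring datasets D, D'
% and any pair of outcomes (s_1, s_2) of the two (independent) training runs:
\Pr[M_1(D) = s_1]\,\Pr[M_2(D) = s_2]
  \;\le\; e^{\epsilon}\,\Pr[M_1(D') = s_1]\cdot e^{\epsilon}\,\Pr[M_2(D') = s_2]
  \;=\; e^{2\epsilon}\,\Pr[M_1(D') = s_1]\,\Pr[M_2(D') = s_2],
% so releasing both models (before and after deletion) is 2\epsilon-DP.
```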
----
*[Response to Questions]*
Unless we misunderstand, the ``baseline'' you propose could not be implemented without already knowing the private dataset: otherwise the attacker would have no way of knowing what the ``closest point to the target point'' in the private set is. Of course with knowledge of the private training set, there is nothing left to do. If we misunderstand your proposal please let us know, and we'll be happy to discuss further!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and in particular for addressing my comments on:
- threat model: agreed on the linear model, and if I understand correctly also to the "Fixed Embedding Functions" setting. I'm on the fence as to whether this threat model is any useful for more general models, but I take your point.
- multi-sample: noted, although It seems to me that this could've easily featured in one of your experiments.
- DP: this simple derivation is actually quite interesting to me personally. "These are standard/generic properties of differential privacy which is why we did not devote space to it, but we are happy to elaborate in the revision.": it's of course entirely your call on whether to include them or not in the paper.
Regarding the "baseline": I meant that, as an evaluation baseline, you could consider an (optimal, to some extent) adversary, who outputs the point from the public dataset that is closest to the target. Of course, this would be unrealistic as an attacker; however, to my understanding, it should provide a good intuition as to how much better than "just using the public data" your method is doing.
This is not a requirement for acceptance; just a mere suggestion.
My (positive) score is unchanged.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks for the engagement, the helpful suggestions, and the positive score. We appreciate it! | Summary: The authors propose an attack that can accurately recover unlearned samples through reconstruction attacks on linear regression models. They extend this work to include linear models with fixed embeddings and generalize it to more generic loss functions and model architectures by employing Newton’s method for the reconstruction attack. This work significantly contributes to understanding the privacy vulnerabilities in machine unlearning.
Strengths: 1. The authors provide rigorous theoretical proof of the reconstruction attack.
2. They conducted thorough experiments across different tasks, datasets, and architectures, demonstrating the effectiveness of their attack.
3. The extension of the work to linear models with fixed embeddings and the generalization to other loss functions and model architectures showcase the adaptability and robustness of their method.
Weaknesses: 1. This work is limited to the exact unlearning scenario, i.e. retraining from scratch without the unlearned data, and focuses solely on unlearning a single data point. In contrast, the scenarios that have received more attention in unlearning research involve approximate unlearning and unlearning multiple data points at the same time.
2. The experimental evaluation lacks diversity in metrics, which could provide a more comprehensive understanding of the attack's effectiveness.
3. There is limited discussion on the potential defenses against the proposed attack.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is “Avg” a valid or commonly used baseline? It seems too straightforward to predict the deleted example as an average of the public samples, as described in Section 4. Can the authors elaborate on the principle or intuition behind this choice?
2. How robust is this attack across different configurations? For example, how does the attack's performance vary when using different model architectures on the same dataset?
3. Can the authors elaborate on the computational complexity and scalability of the proposed attack method when applied to various datasets and model architectures?
4. What practical countermeasures can be implemented to mitigate the identified privacy vulnerabilities (e.g. the proposed reconstruction attack in this paper) in machine unlearning?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See Weaknesses Section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *[Response to Weaknesses]*
Our study is explicitly focused on exposing the risk present in even very simple models, such as linear regression, logistic regression, SVMs, and feature augmentation using random Fourier features. For such models, full retraining is feasible, and would be the expected solution to the data deletion problem, as computational approximations to retraining are not needed.
Full retraining for a single data deletion in linear regression is equivalent to updating the model parameters using Newton's update, as we discussed in Sec. 3.2. By leveraging this concept, our attack achieves almost perfect reconstruction, and the only source of error is the estimation of the Hessian matrix using public data. For more complex models, a number of popular unlearning methods approximate full retraining by taking a Newton update. When we apply our attack to more complex models, we act as if full retraining results in the model that is derived from taking a Newton step --- and the reason our reconstruction performance degrades is simply that this approximation becomes imperfect. However, if rather than full retraining, the machine unlearning method employed was one that simply took a Newton step, our reconstruction would again be near perfect. We will elaborate on this point in the revision.
Re: multi-sample deletion, our approach recovers an approximation of the gradient sum over all the deleted samples. A determined adversary can poison an unlearning round by requesting deletion of n-1 samples known to the adversary, and so can recover the gradient of the single unknown point from this sum. Nevertheless, we agree that multi-sample recovery is an important question that our work does not address, and we think this is one of the most interesting questions for future work arising from our paper.
On the displayed metrics: We show both the full distribution of cosine similarity as well as randomly selected reconstructions for visual inspection. While other metrics such as similarity on a well-chosen embedding function could be of interest, we chose to simplify the presentation and show the more challenging (pixel-wise, for images) cosine similarity comparison. We are open to suggestions and are happy to engage if you have particular additional metrics that you think would be informative.
On defenses: Our work highlights the privacy risk of unlearning in standard models. As we discuss in our work, training models using differential privacy (at small epsilon values) provably prevents reconstruction, even in unlearning scenarios because of differential privacy's composition property. There is an exciting opportunity for research suggested by our work: is there a way to give unlearning methods which have meaningful privacy guarantees even when the additional model training procedure did not?
----
*[Response to Questions]*
On the use of “Avg” as a baseline: This is a straightforward way of leveraging information about the data distribution in a manner that does not incorporate any information about the model parameters. The “MaxDiff” baseline is also included, and this is motivated by the assumption that the overall performance of the update on held-out data would be, in relative terms, much smaller than the performance difference of the sample being deleted (since this sample transitions from being in the training distribution to being outside it).
On the robustness to model architectures: Our existing experiments already show results for a variety of simple models and loss functions on the same datasets (linear and logistic regression, ridge regression, ridge regression over random Fourier features, cross-entropy minimization over random Fourier features, as well as support vector machines over both raw features and random Fourier features).
On the computational complexity of the attack: Our attack in its most general form relies on a Hessian-vector product per public sample, as shown in Eq. 13. This can be efficiently implemented in all common deep learning frameworks with a computational complexity of $O(nd^2)$, with $n$ being the number of public samples and $d$ the number of parameters in the model. It is also possible to replace the Hessian computation altogether by using the Fisher information matrix, bringing the computational complexity down to $O(nd)$.
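As a framework-free illustration of a Hessian-vector product (our own toy sketch, not the paper's implementation): the product H v can be approximated by a finite-difference directional derivative of the gradient, so the full d x d Hessian is never materialized. Deep learning frameworks achieve the same thing exactly via double backpropagation.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10
x = rng.normal(size=d)     # features of a single (public) sample
y = 1.0
theta = rng.normal(size=d)

def grad(theta):
    """Gradient of the squared loss (theta.x - y)^2 with respect to theta."""
    return 2.0 * (theta @ x - y) * x

# Hessian-vector product via a directional finite difference of the gradient:
# H v ~= (grad(theta + eps*v) - grad(theta)) / eps, without storing H itself.
v = rng.normal(size=d)
eps = 1e-6
hvp = (grad(theta + eps * v) - grad(theta)) / eps

# For this quadratic loss the Hessian is 2 x x^T, so H v = 2 x (x . v).
print(np.allclose(hvp, 2.0 * x * (x @ v), atol=1e-4))  # True
```

Because the toy loss here is quadratic, the finite difference is exact up to floating-point error; for general losses it is an approximation, which is why double backprop or the Fisher-matrix variant mentioned above is preferable in practice.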
On countermeasures: The most straightforward countermeasure would be to train the original and updated models using differential privacy. This would provably prevent any reconstruction attack (as privacy composes across pairs of models if they are both trained privately). An exciting research direction suggested by our work is the study of unlearning methods in which the parameter update is itself differentially private with respect to the deleted samples --- and a fuller understanding of how this trades off with other unlearning desiderata.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and clarifications. I have carefully reviewed your feedback and do not have any further questions at this time. I will maintain my current (positive) rating.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you for engaging with our response and for your positive rating --- we appreciate it! | Summary: This work focuses on investigating privacy issues in machine unlearning. Specifically, assuming the availability of model parameters before and after unlearning, as well as the ability to sample data from the original data distribution, the proposed reconstruction attack aims to recover deleted samples. By analyzing the training objective of linear regression, the study found that the difference between the parameters before and after unlearning is proportional to the deleted sample. Based on this observation, this study proposes an algorithm to extract deleted samples accurately. The method extends to more complex models and arbitrary loss functions using Newton's method to approximate the parameter update process.
Strengths: 1. The topic of the study, concerning privacy risks in unlearning, is crucial. Since the data deleted in unlearning usually has high privacy sensitivity, recovering such data poses a significant threat.
2. The proposed algorithm is elegant and achieves near-perfect results in linear regression. It also has the potential to extend to more complex models.
Weaknesses: 1. The assumptions are too strong. The authors assume access to model parameters before and after unlearning and the ability to sample from the original data distribution. The authors need to clarify under what circumstances an attacker could have the assumptions mentioned in the paper, especially the sampling ability, since the deleted data is typically highly sensitive or inappropriate, making sampling difficult.
2. The goal of this paper is to explore the privacy risks in machine unlearning. In real-world scenarios, machine unlearning is achieved through existing unlearning methods. However, in this paper, unlearning is implemented by retraining after removing the samples to be unlearned. This creates a gap between the resulting model and the model obtained through actual unlearning methods. Therefore, it is questionable that the experimental results obtained from such a model can directly demonstrate the privacy risks associated with machine unlearning in real-world situations.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In what scenarios is the threat model proposed in the paper realistic?
2. Is the proposed method effective when unlearning is performed using existing unlearning methods?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *[Response to Weaknesses 1]*
We take the position that security assurances should be based on minimal assumptions. Here, we view the assumption that an attacker who has API access to the model does -not- have access to model parameters to be dangerously strong. Consider for example a d dimensional linear model, as we study initially in our work. Just from query access, an attacker can recover the model parameters by querying the model on d linearly independent points and solving a system of linear equations. This requires no knowledge of the data distribution. So, there is no meaningful difference between white box and black box access in such scenarios.
What about for more complex models? Even there the boundary between white-box and black-box access has been blurring. For example recent work [1] has shown that it is possible to reconstruct the embedding matrix from a production LLM model with only black-box access. And of course, open source models explicitly release parameters with each version of the model.
With regards to access to the data distribution, we only assume access to samples from the same distribution as the training data, not to the private samples actually used in training. This is a standard assumption underlying even simpler attacks such as membership inference. We agree with the reviewer that it may sometimes be difficult for an attacker to ascertain what this distribution is, but again, we take the view that when analysing attacks as a means to audit security vulnerabilities, we should be generous (within reason) about the assumed abilities of the attacker.
[1] Carlini, Nicholas, et al. "Stealing part of a production language model." arXiv preprint arXiv:2403.06634 (2024).
*[Response to Weaknesses 2]*
An important aspect of existing machine unlearning approaches is that almost all of them aim to approximate full retraining with reduced computational cost. These approximations are proposed due to the intense compute required for full retraining, especially for large neural networks and LLMs.
Our study is explicitly focused on exposing the risk present in even very simple models, such as linear regression, logistic regression, SVMs, and feature augmentation using random Fourier features. For such models, full retraining is feasible, and would be the expected solution to the data deletion problem, as computational approximations to retraining are not needed.
Full retraining after a single data deletion in linear regression is equivalent to updating the model parameters with Newton's update, as we discussed in Sec. 3.2. By leveraging this fact, our attack achieves almost perfect reconstruction, and the only error source is the estimation of the Hessian matrix from public data. For more complex models, a number of popular unlearning methods approximate full retraining by taking a Newton step. When we apply our attack to more complex models, we act as if full retraining results in the model derived from taking a Newton step --- and the reason our reconstruction performance degrades is simply that this approximation becomes imperfect. However, if rather than full retraining, the machine unlearning method employed was one that simply took a Newton step, our reconstruction would again be near perfect. We will elaborate on this point in the revision.
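The equivalence for linear regression is easy to check numerically. The sketch below (our own illustration, assuming an unregularized squared-error loss) compares full retraining after deleting one point with a single Newton step on the retained-data loss; because that loss is quadratic, one step is exact:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 4
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Original model trained on the full dataset
w = np.linalg.lstsq(X, y, rcond=None)[0]

# Delete the last point; (X_r, y_r) is the retained data
X_r, y_r = X[:-1], y[:-1]

# Full retraining on the retained data
w_retrain = np.linalg.lstsq(X_r, y_r, rcond=None)[0]

# One Newton step on L(w) = ||X_r w - y_r||^2, starting from w.
# L is quadratic, so a single Newton step lands exactly on its minimizer.
g = 2 * X_r.T @ (X_r @ w - y_r)      # gradient at w
H = 2 * X_r.T @ X_r                  # Hessian (constant for quadratic loss)
w_newton = w - np.linalg.solve(H, g)

print(np.allclose(w_retrain, w_newton))  # True
```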
---
Rebuttal Comment 1.1:
Comment: (1) I partially agree with the authors' viewpoint that when studying security risks, assumptions about the attacker's capabilities should be generous. However, if the assumed attacker's capabilities are too strong, the significance of the proposed method may decrease accordingly.
(2) I still think that perfect unlearning is impossible in real-world scenarios, and what is usually achieved is an approximation. If this approximation leads to a decrease in the effectiveness of the proposed method, it would undoubtedly undermine its value.
Based on the above reason, I raised my score to 5.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks --- we appreciate your engagement. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Pre-trained Large Language Models Use Fourier Features to Compute Addition | Accept (poster) | Summary: The authors utilize discrete Fourier transform to determine which Fourier components play the most important role in computing the addition of relatively small numbers (< 520) in Large Language Models, such as the GPT-2 family, and, _likely_, Phi-2, GPT-J, and others. They found out that MLP modules of GPT-2 use mostly low-frequency Fourier components to approximate the magnitude of the answer. In contrast, MHA modules utilize high-frequency components to approximate the last digit of the answer. Moreover, the model converges to the correct answer layer by layer.
The paper shows that this mechanism appears after pretraining the model; the weights of the embedding layer play a special role in it.
Strengths: The paper provides interesting insights into the addition mechanism inside of Large Language models:
- It contains original research about the role of Fourier components in small numbers addition inside of GPT-2 family models;
- it provides some observations that indicate that the same mechanism may appear in Phi-2, GPT-J, and bigger models;
- it demonstrates how the roles of MHA and MLP complement each other in this mechanism;
- it shows the connection between this mechanism and the pretraining of the model, especially the embedding layer.
Weaknesses: - Most of the experiments were devoted to the models from the GPT-2 family; very few experiments have been carried out on other models, so any arguments about other models are weaker;
- Only small (< 520) numbers were considered;
- The experimental setup is questionable (see "Questions" part of the review);
- The paper is not easy to follow. Some figures are confusing, and there are many typos (see "Questions" part of the review).
Technical Quality: 2
Clarity: 3
Questions for Authors: Questions:
- If I understood correctly, Table 1 contains the ablation experiments only for GPT-2-XL. Why didn't you repeat the experiments from Table 1 for other models? E.g. for Phi2, GPT-2-base, etc.
- How does the mechanism change when you add the bigger numbers, encoded in several tokens instead of one token?
- You wrote in line 91:
> ℓ ∈ [L].
Does it mean, that you use the outputs of all layers, preceding layer number L, for prediction?
- You wrote at line 594:
> We consider numbers in base 10 up to a maximum value 260. For each pair of numbers between 0 and 260, we generate various phrasings of addition questions and their corresponding answers. The different phrasings used are: “Total of num1 and num2.”, “Add together num1 and num2.”, “Calculate num1 + num2.”, “What is the sum of num1 and num2?”, and “Put together num1 and num2.”. The dataset is shuffled to ensure randomness and then split into training (80%), validation (10%), and test (10%) sets
Does it mean that train, valid, and test could contain addition questions for the same numbers, but in different phrasings? I.e., could the training set contain the phrase “Total of 11 and 12.”, while the test set contains “Put together 11 and 12.”?
---
Suggestions:
- GPT-2 is a model with absolute position embeddings. What about other types of position embeddings? (see also my question/suggestion about ablation experiments on Phi2 above). It would be interesting to research the connection of the addition mechanism with the position embedding type. See the paper "**Positional Description Matters for Transformers Arithmetic**" by Ruoqi Shen, Sébastien Bubeck, Ronen Eldan, Yin Tat Lee, Yuanzhi Li, and Yi Zhang.
- It would also be interesting to research the connection between the Fourier addition mechanism and outlier dimensions (see, for example, the paper "**Outlier Dimensions Encode Task-Specific Knowledge**" by William Rudman, Catherine Chen, Carsten Eickhoff, and many other papers about Outlier Dimensions in transformers).
---
Typos and presentation issues:
- Figures 1 (b) and (c) are very confusing because the word "Number" is closer to the color bar than the word "Logits". So, at first, I didn't understand that the color means the **Logit value**. As a result, I was confused by these diagrams, and it took me time to sort things out.
- You wrote in line 165:
> We show in Appendix A that this has a simple closed-form solution involving a linear projection.
I didn't find this in Appendix A; can you please elaborate on what you meant?
- You wrote in the description of Figure 13:
> Visualization of the Fourier component whose period is 520 analysis for the final logits.
Can you please elaborate on what you meant?
- You wrote in line 204:
> D is the size of the token embeddings, denote the token embedding for numbers
Can you please elaborate on what you meant?
- What does Fig.5(a) show? Are these some average number embeddings?
- Why does Fig.6 show the legend with the different colors of cos/sin, while only red bars are visible? Maybe the bars are too narrow, so their colors mix together?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There are two small "Limitation" and "Impact" sections: Appendix G Limitations, and Appendix H Impact Statement.
The authors addressed the limitations correctly overall. However, I would add that most of the experiments were devoted to the models from the GPT-2 family; very few experiments have been carried out on other models, so any arguments about other models are weaker. It is also a big limitation of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the supportive feedback and comments.
***Q1: Why not repeat the experiments from Table 1 for other models.***
The goal of this paper is to understand how pre-trained LLMs solve addition tasks. We conduct most of the experiments, such as those in Figures 1-5, on GPT-2-XL to provide a deep and comprehensive understanding of its mechanisms in solving these tasks. We focus on GPT-2-XL because it offers a balance between model complexity and interpretability, allowing us to draw meaningful insights.
To address the generalizability of our findings, we provide evidence for the existence of Fourier features in other pre-trained LLMs and tasks. Specifically, our Fourier analysis of the pre-trained embeddings for different models (as shown in Figure 14) reveals a consistent sparsity in Fourier space, even without fine-tuning. Additionally, the intermediate logits (Figure 8) and final predictions (Section 4.3) further demonstrate this sparsity across models.
These results suggest that the observed phenomena are not unique to GPT-2-XL but are likely intrinsic to the architecture of pre-trained LLMs in general. Therefore, while repeating all experiments on every other pre-trained LLM would be ideal, we believe the provided evidence sufficiently supports our claims across different models. This approach balances thoroughness with practicality, ensuring that our main findings are robust and broadly applicable without unnecessary redundancy.
***Q2: In 91 lines: $\ell \in [L]$. Does it mean that you use the outputs of all layers, preceding layer number L, for prediction?***
No, $[L]$ stands for $\{1,2,3,4….L\}$. Hence, $h^{(\ell)}$ where $\ell \in [L]$, refers to one specific hidden state on the residual stream at layer $\ell$.
***Q3: GPT-2 is a model with absolute position embeddings. It would be interesting to research the connection of the addition mechanism with the position embedding type. See the paper "Positional Description Matters for Transformers Arithmetic"***
Thank you for your comments. In our paper, each number is treated as one token, corresponding to one token embedding (as mentioned in line 58). Therefore, the type of positional embedding does not change our results and observations.
Regarding the paper "Positional Description Matters for Transformers Arithmetic," it's important to note that in the footnote of page 3, they mention: “In this paper, for every dataset used, a space is inserted before each digit. This ensures the tokenizer tokenizes each digit as an individual token.” This approach allows them to leverage the position of each digit to solve arithmetic tasks, which is not a common way to tokenize numbers in the GPT-2 model in real-world scenarios.
***Q4: It would also be interesting to research the connection between the Fourier addition mechanism and outlier dimensions (paper "Outlier Dimensions Encode Task-Specific Knowledge").***
Thank you for your comments. Outlier dimensions are the dimensions in the embedding space that exhibit large variance. In our work, we demonstrate that there are outlier components in the Fourier space that play a significant role in arithmetic tasks. We emphasize that the outliers live in two different spaces in these two cases.
***Q5: Figures 1 (b) and (c) are very confusing***
Thanks for your feedback. We have revised Figures 1(b) and 1(c) to improve clarity by adjusting the placement of labels and color bars.
---
***For Q6 - Q9, Reviewer mHEL requested further elaboration on the processes, figures, and concepts discussed in the paper. Due to space constraints, we have addressed these details in the official comment.***
---
***Q10: Why does Fig.6 show the legend with the different colors of cos/sin, while only red bars are visible?***
Thanks for noticing that. We used the same plotting method in Figure 3 and Figure 6. In Figure 3, we use the legend with different colors to show whether the outlier component is sine or cosine. However, for Figure 6, there are no outlier Fourier components and we do not need that legend. We have removed the legend in Figure 6 in the revised version of our paper.
***Q11: The authors addressed the limitations correctly overall. However, I would add that most of the experiments were devoted to the models from the GPT-2 family; very few experiments have been carried out on other models, so any arguments about other models are weaker. It is also a big limitation of this work.***
We acknowledge that this paper primarily focuses on the GPT-2 family. However, we believe that this already constitutes a significant step forward, highlighting our contribution rather than a limitation. As discussed in the related work section, prior studies [1,2,3,4] mainly concentrated on training shallow networks from scratch and analyzing their performance in modular addition. To the best of our knowledge, this is the first work analyzing how pre-trained LLMs solve addition.
We delve deeply into understanding the mechanisms by which GPT-2 leverages Fourier features. We then provide evidence of the existence of these Fourier features in other closed-source and open-source pre-trained LLMs. We demonstrate similar Fourier features in both pre-trained number embeddings and the intermediate hidden states. Additionally, these outlier Fourier components in other models exhibit almost the same periods as those in GPT-2. Based on these findings, we conclude that they leverage Fourier features to solve addition.
[1] Morwani, Depen, et al. "Feature emergence via margin maximization: case studies in algebraic tasks."
[2] Nanda, Neel, et al. "Progress measures for grokking via mechanistic interpretability."
[3] Zhong, Ziqian, et al. "The clock and the pizza: Two stories in mechanistic explanation of neural networks."
[4] Gu, Jiuxiang, et al. "Fourier circuits in neural networks: Unlocking the potential of large language models in mathematical reasoning and modular arithmetic."
---
Rebuttal 2:
Title: Detailed Elaborations: Part 1
Comment: ***Q6: In line 165: We show in Appendix A that this has a simple closed-form solution involving a linear projection. I didn't find this in Appendix A, can you please, elaborate, on what you meant?***
Thanks for your comments. In Definition A.6 (line 476 in Appendix A), we formally define the low/high-pass filter used in Section 3.3 as an optimization problem. We provide the closed-form solution to this problem in Remark A.5 (line 483 in Appendix A). To clarify this closed-form solution, let's walk through the derivation behind it step by step.
**Problem Statement**
We aim to filter the vector $x \in \mathbb{R}^D$ using a high-pass or low-pass filter, defined by the following optimization problem:
$
\min_y \| x - y \|_2^2 \quad \text{subject to} \quad B F W_U y = 0
$
Here, $B$ is a diagonal binary matrix that selects which frequency components to retain, $F$ denotes the Fourier basis, and $W_U$ is the output embedding matrix.
**Intuition Behind the Solution**
The objective is to find the vector $y$ that is as close as possible to $x$ while satisfying the constraint $B F W_U y = 0$. This constraint ensures that the filtered output $y$ lies in the null space of $B F W_U$.
**Null Space Projection**
To solve this optimization problem, we project $x$ onto the null space of $B F W_U$. The null space $N(B F W_U)$ consists of all vectors $v$ such that $B F W_U v = 0$. The projection of any vector $x$ onto this null space gives us the closest vector in the null space to $x$.
**Deriving the Projection**
The objective function $\| x - y \|_2^2$ measures the Euclidean distance between $x$ and $y$. Minimizing this function ensures that $y$ is as close to $x$ as possible. The constraint $B F W_U y = 0$ ensures that $y$ lies in the null space of $B F W_U$. To satisfy the constraint while minimizing the distance, we need to project $x$ onto the null space of $B F W_U$. The projection operator onto the null space $N(B F W_U)$ can be represented by the matrix $N(B F W_U) N(B F W_U)^\top$.
**Closed-form Solution**
The closed-form solution to the optimization problem is obtained by projecting $x$ onto the null space of $B F W_U$:
$
y = P_{N(B F W_U)} x = N(B F W_U) N(B F W_U)^\top x
$
Here, $N(B F W_U)$ is the basis for the null space of $B F W_U$, and the matrix $N(B F W_U) N(B F W_U)^\top$ represents the projection operator.
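For intuition, the projection can be checked numerically. The toy sketch below (our own illustration, with random stand-ins for $W_U$ and the Fourier basis rather than the paper's actual matrices) builds the null-space basis via the SVD and verifies both the constraint and the optimality condition:

```python
import numpy as np

rng = np.random.default_rng(2)
p, D = 9, 6                            # toy sizes: p tokens, D hidden dims
W_U = rng.normal(size=(p, D))          # stand-in output embedding matrix
F = rng.normal(size=(p, p))            # stand-in for the Fourier basis
B = np.diag((np.arange(p) < 3).astype(float))  # select 3 frequency rows

A = B @ F @ W_U                        # constraint matrix: require A y = 0

# Orthonormal basis N of the null space of A, via the SVD
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T                        # columns span null(A)

x = rng.normal(size=D)
y = N @ (N.T @ x)                      # closed-form projection of x

print(np.allclose(A @ y, 0))           # constraint B F W_U y = 0 holds
print(np.allclose(N.T @ (x - y), 0))   # x - y is orthogonal to null(A),
                                       # so y is the closest feasible point
```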
***Q7: You wrote in the description of Figure 13: Visualization of the Fourier component whose period is 520 analysis for the final logits. Can you please, elaborate, on what did you mean?***
Thank you for your comments. Let us emphasize the significance of Figure 13 and its implications. In our paper, we separate the frequency components into low-frequency and high-frequency components. We state that MLP layers primarily approximate the magnitude of the answer using low-frequency features, while attention layers primarily perform modular addition. One may ask: why can't we just use low-frequency components for classification? For example, ‘a+b mod 520’ will give the correct answers in our dataset. Figures 12, 13, and lines 515-526 in the appendix address this question.
When the model computes ‘a+b mod 2’, it is equivalent to a binary classification problem, which is simpler than ‘a+b mod 520’ (a 520-class classification). Figure 12b shows that the model accurately places the peak of the wave at the answer for ‘a+b mod 2’, ‘mod 5’, and ‘mod 10’. However, for large-period components, Figure 12a shows that the model only approximates the answer for ‘a+b mod p’, where p is the period of the components.
As we need to show all the large-period components in Figure 12a, we cannot display the entire number range. Therefore, the reader cannot tell whether the wave with a period of 520 correctly places its peak at the correct answer. Hence, we include Figure 13, which shows the period-520 wave over the entire number range. Figure 13 clearly demonstrates that the period-520 wave fails to place its peak on the correct answer and only approximates it.
As discussed in lines 129-136, the final logit is the accumulation result from the output of all the layers and is sparse in Fourier space. Figures 12 and 13 imply that the accumulation of low-frequency (large-period) components across all layers cannot make the correct prediction for ‘a+b mod p’ for large periods ‘p’. This provides evidence for why high-frequency (small-period) components are necessary for the model to make accurate predictions.
---
Rebuttal 3:
Title: Detailed Elaborations: Part 2
Comment: ***Q8: You wrote in line 204: D is the size of the token embeddings, denote the token embedding for numbers Can you please, elaborate, on what did you mean?***
In line 204, we wrote ‘Let $W_E \in \mathbb{R}^{p \times D}$, where $p = 521$ and $D$ is the size of the token embeddings, denote the token embedding for numbers.’ By this, we mean that $W_E$ denotes the token embedding matrix for numbers. Normally, the token embedding matrix has shape (vocabulary size, token embedding size). However, since we only consider the integers in [0, 520], the vocabulary size becomes the size of the number space, and we have $p = 521$.
***Q9: What does Fig.5(a) show? Are these some average number embeddings?***
Fig. 5(a) is not an average number embedding. Lines 203-216 explain Figure 5(a). Let us elaborate on the process of creating Figure 5(a) step by step to help you gain a better understanding.
1. For each pre-trained LLM, there is a pre-trained token embedding matrix. Since we only consider the embeddings for operands in the range [0, 520], we select the 521 rows corresponding to these 521 numbers from the token embedding matrix. We define this selected token embedding as the number embedding $E$ for simplicity. This number embedding matrix’s shape is (size of embedding dimension, 521).
2. Next, we apply the Discrete Fourier Transform to this number embedding matrix. We multiply this number embedding matrix by a Fourier basis, whose shape is (521, 521), resulting in the number embedding matrix in Fourier space. The shape of the number embedding matrix in Fourier space is (size of embedding dimension, 521). Each entry represents the magnitude of each Fourier component distributed to each embedding dimension.
3. Then, we take the L2 norm along the embedding dimension and obtain a vector ‘v’ with size 521. Each entry reflects the magnitude of each Fourier component distributed over the embedding dimension. Intuitively, each entry in this vector shows how important each Fourier component is in constituting the number embedding.
4. We ignore the constant term in that vector and plot the remaining values in Figure 5(a). According to the construction of the Fourier basis (Definition A.3), v[i] represents the sine wave when ‘i’ is even, and the cosine wave when ‘i’ is odd. Hence, we obtain the plot shown in Figure 5(a).
5. Figure 5(a) demonstrates that the number embedding has outlier components with periods of 2, 2.5, 5, and 10. The insight here is that these outlier Fourier components represent the numbers in the pre-trained GPT2-XL model. Compared with Figure 7(a), we show that using Fourier features to embed numbers is a learned behavior during pre-training. Additionally, we provide further evidence in Figure 14 to show that pre-trained LLMs tend to use Fourier features to embed numbers.
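The steps above can be sketched in a few lines of NumPy. The example below uses synthetic embeddings with an injected period-10 component (our own toy data, not the actual GPT2-XL weights) to show how such an outlier frequency surfaces in the norm plot:

```python
import numpy as np

rng = np.random.default_rng(3)
p, D = 521, 64                       # numbers 0..520, toy embedding size

# Step 1: toy "number embedding" matrix (p numbers x D dims): a shared
# period-10 signal plus noise, standing in for pre-trained embeddings
n = np.arange(p)
E = (np.sin(2 * np.pi * n / 10)[:, None] * rng.normal(size=(1, D))
     + 0.1 * rng.normal(size=(p, D)))

# Step 2: Discrete Fourier Transform along the number axis
E_hat = np.fft.rfft(E, axis=0)       # shape: (p // 2 + 1, D)

# Step 3: L2 norm over the embedding dimension -> magnitude per frequency
mag = np.linalg.norm(E_hat, axis=1)

# Step 4: drop the constant (zero-frequency) term; the injected period-10
# component (about p/10 ~ 52 cycles) appears as the outlier frequency
peak_freq = np.argmax(mag[1:]) + 1
```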
***If you need any further elaboration, please let us know.***
---
Rebuttal 4:
Comment: As the discussion period approaches the deadline, we would like to follow up to see if our response and elaboration above has addressed your questions. Again, thanks for your time and effort. Looking forward to hear from you. | Summary: This paper focuses on understanding the mechanisms the LLMs employ to carry out mathematical operations, in particular sum of two numbers. It demonstrates: a) Models utilize Fourier features, with different parts of the model utilizing different frequency ranges — attention mechanism uses high frequency components while MLP primarily uses low frequencies. This is further corroborated by conducting filtering studies. b) Model embeddings and the pre-training process play a critical role in learning these Fourier features. Models that are not pre-trained do not seem to be able to induce Fourier features and have lower accuracy. However, such models when seeded with only embeddings from pre-trained models do learn the Fourier features.
Strengths: * Well written paper on understanding basic mechanism of how LLMs carry out the specific task of addition of small numbers.
* Interesting findings on pre-training vs task specific training from scratch, demonstrating criticality of model pre-training.
Weaknesses: * Study is limited to addition and that too of numbers where the final sum can be represented by a single token, leaving a gap in understanding of how addition of larger numbers spanning multiple tokens is achieved.
* With use of tools / functions in conjunction with the LLMs, mathematical operations are often carried out using a calculator or similar tool. Thus attempting to improve LLMs' arithmetic abilities may not be very fruitful. However, do the discoveries made in the paper apply only to number processing by LLMs or can they have broader implications, say on logical reasoning?
Technical Quality: 3
Clarity: 3
Questions for Authors: Lines 154-156 : “We also illustrated that in one example, the high-frequency components primarily approximate the magnitude, while the low-frequency components are crucial for modular addition tasks, as depicted in Figure 4.“ Is it the other way around?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations sufficiently addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the supportive feedback and thoughtful comments.
***Q1: With the use of tools / functions in conjunction with the LLMs, mathematical operations are often carried out using a calculator or similar tool. Thus attempting to improve LLMs' arithmetic abilities may not be very fruitful. However, do the discoveries made in the paper apply only to number processing by LLMs or can they have broader implications, say on logical reasoning?***
Thank you for your feedback. We agree that using a calculator or similar tool can solve arithmetic problems easily. However, we believe that without understanding numbers and performing basic arithmetic, a model cannot fully grasp more complex concepts in physics or mathematics like humans do. With billions of parameters, LLMs should be capable of easily solving number-related tasks such as arithmetic problems and time-series prediction. Thus, we believe improving LLMs' arithmetic abilities is important.
Additionally, we emphasize that our observations have broader implications beyond arithmetic tasks. One key finding is that number embeddings after pre-training are sparse in the Fourier space for many pre-trained LLMs without fine-tuning (Figure 14). This suggests that the model inherently uses Fourier features to represent numbers and solve number-related tasks. This insight can inspire future research in several ways:
- Extending this number embedding strategy to handle larger numbers.
- Adding regularizers to help models learn high-frequency Fourier features to enhance performance on number-related tasks.
- Exploring the potential for these Fourier features to benefit logical reasoning and other tasks beyond arithmetic.
By understanding and leveraging these Fourier features, we can improve LLMs' overall capabilities in number processing and related areas, thereby contributing to their broader applicability and effectiveness. Further discussion of broader implications and future directions can be found in our response to all reviewers.
***Q2: Lines 154-156 : “We also illustrated that in one example, the high-frequency components primarily approximate the magnitude, while the low-frequency components are crucial for modular addition tasks, as depicted in Figure 4.“ Is it the other way around?***
Thank you for pointing out that typo. We have revised it to: “We also illustrated that in one example, the low-frequency components primarily approximate the magnitude, while the high-frequency components are crucial for modular addition tasks, as depicted in Figure 4.” | Summary: This paper analyzes how language models perform addition, showing that they use Fourier features. Most of the analyses are done on fine-tuned models. First, periodicity is shown with a logit-lens technique on different layers. Fourier analysis shows this nicely in the frequency space, and shows differences between MLP and attention components, the latter exhibiting only high-freq components. This is connected to "approximation" of the answer vs "classification" per small modulo. Ablation by removing low/high frequency supports a difference in behavior of the two components. The next section shows that models trained from scratch do not have such Fourier features, but plugging in trained number embeddings recovers this behavior. Finally some analysis shows a similar pattern in prompted models.
Strengths: 1. Seemingly novel observations about language models' mechanism of addition computation.
2. Clear and to-the-point analyses and visualizations.
3. Both correlational and causal experiments via ablations to raise hypotheses and test them.
Weaknesses: 1. Missing quantitative analysis of the superposition pattern in section 3.2. See below in questions.
2. While the main experiments in section 3 (and to a lesser extent also 4.1 and 4.2) are quite thorough and convincing, the ones in 4.3 and in some places in the appendix are less detailed and not sufficiently contrasted with the main results. See below in questions.
3. Potential problem with datasplit having leakage of expressions from training to test. See below in questions.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Data split seems to allow cases where the same expression (say 12 + 17) appears in both training and test sets, just with different templates. When looking at generalization, especially when fine-tuning a pretrained model, it could be beneficial to make sure no such overlap happens.
2. Section 3.1 assumes that "If the models merely retrieve and recombine pieces of information learned during training, certain layers will directly map this information to predictions". I am not convinced by this. I don't see why 'memorization and recombination' cannot be implemented by the full model, with this gradual refinement process as an implementation of memorization.
3. The periodicity analyses in section 3.2 are striking and insightful. That said, one concern is quantitative analysis - I found the example in figure 4 helpful, but would appreciate a quantitative evaluation of the superposition phenomenon. Also, given the residual stream view, one could say that this is trivial. That is, the final logits are always the sum of earlier ones. So, it would be useful to explain the significance of this finding and whether there is anything special in this case of addition compared to the general behavior of the model. Even better, some control test in a quantitative evaluation could make this point more compelling.
4. Section 3.3 is excellent - it shows which components are causally important for which roles. The experiments are well planned and convincing. One comment I have is on line 183, which says "the approximation tasks are primarily performed by the MLP modules alone". If I read Table 1 correctly, then, given that removing low-freq from both Attn and MLP leads to an even greater reduction in accuracy, we can conclude that both MLP and Attn are involved in approximation. This is essentially the same argument made in the paper just a few lines above.
5. Since the analysis in section 3 is done on fine-tuned model, I don't think we can conclude that "The previous section shows that pre-trained LLMs leverage Fourier features to solve the addition problem" (line 195). The rest of the section in fact gives evidence to this, but the phrasing at this point should be revised.
6. Figure 3 is very nice but other figures aren't as clean, specifically figure 8 (4-shot) and figure 15 (multiplication), figure 17 (other format), and figure 18 (GPT-J). I think the discussion of each of these cases was a bit too optimistic and naive, and the comparison to the main analyzed case should be made more carefully, and differences highlighted.
7. The related work section is well done. Clarifying the difference from [28] early on in the paper could be useful for the reader. And, one can also mention this paper on arithmetic mechanisms in the related work: Stolfo et al., A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Sufficient discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the supportive feedback and thoughtful comments.
***Q1: Why 'memorization and recombination' cannot be implemented by the full model, with this gradual refinement process as an implementation of memorization.***
Thank you for your feedback. Our assertion is based on prior research and specific empirical observations detailed in Section 3.1. [1,2] indicate that only a few late layers integrate information for prediction in fact-retrieval tasks. However, our findings in the addition task (Figure 1a) demonstrate that the model utilizes over 20 layers to compute the result, suggesting a more complex process than mere retrieval and recombination. This extensive use of layers aligns with the model performing actual computation, progressively refining the output across multiple layers, contrary to what would be necessary if it were merely memorizing and recombining pre-learned facts.
Furthermore, in Figure 1a, we can see the results gradually refining from far from correct to accurate as processing progresses through the layers, which clearly shows that the model is performing computation.
[1] Meng, Kevin, et al. "Locating and editing factual associations in GPT."
[2] Merullo, Jack, Carsten Eickhoff, and Ellie Pavlick. "Language models implement simple word2vec-style vector arithmetic."
***Q2: Requests quantitative evaluation of the superposition phenomenon (Figure 4) in periodicity analyses and questions the triviality of findings given the model's architecture.***
Thanks for your insightful comments. We emphasize that Figure 4 shows different dimensions in the Fourier space for the final logits. Hence, it is not the sum of the logits of earlier layers. The goal of that experiment (Figure 4) is to provide readers with an intuition of how the final logits are expressed with only a few Fourier components and why sparse Fourier components are sufficient to make the prediction.
In Figures 2 and 3, we perform quantitative analysis in Fourier space to show that each layer’s contribution to the residual stream is sparse in Fourier space. This Fourier space analysis may not intuitively explain how these outlier components form the final logits. Therefore, we believe that by showing the final logits with only a few Fourier components in number space for this specific example, readers can better understand how these outlier Fourier components contribute to the prediction.
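To give a concrete feel for this "few components suffice" claim, here is a small synthetic sketch (our own illustration with made-up data, not the paper's actual logits) of reconstructing a signal from only its top-k Fourier components:

```python
import numpy as np

# Synthetic "logit-like" signal over tokens 0..p-1, dominated by a few
# Fourier components (periods 10 and 2) plus small noise. This is an
# illustration of the sparsity idea, not the paper's actual logits.
rng = np.random.default_rng(0)
p = 200
t = np.arange(p)
signal = (3.0 * np.cos(2 * np.pi * t / 10)
          + 1.5 * np.cos(2 * np.pi * t / 2)
          + 0.05 * rng.normal(size=p))

# Keep only the k largest-magnitude Fourier components and invert.
k = 6
coeffs = np.fft.fft(signal)
keep = np.argsort(np.abs(coeffs))[-k:]
sparse = np.zeros_like(coeffs)  # complex array, zero everywhere else
sparse[keep] = coeffs[keep]
recon = np.fft.ifft(sparse).real
```

Here the reconstruction from 6 of 200 components stays within a few percent of the original signal, and its argmax (the "predicted token") still lands on a peak of the period-10 component, mirroring the intuition that a handful of outlier Fourier components can carry the prediction.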
***Q3: Critique of the statement that approximation tasks are primarily performed by MLP modules, suggesting involvement of both attention and MLP modules.***
Thank you for your comments. Regarding the statement on line 183, "the approximation tasks are primarily performed by the MLP modules alone," we would like to emphasize our findings as follows:
In Table 1, we observe that removing low-frequency components from the attention output alone does not lead to a significant reduction in accuracy. However, when we remove low-frequency components from both the attention and MLP modules, there is a larger reduction in accuracy compared to removing them from the MLP modules alone. This suggests that while both attention and MLP modules are involved in approximation, the MLP modules play a more critical role.
Furthermore, the minimal impact on accuracy from removing low-frequency components from the attention output indicates that the MLP modules are capable of performing the approximation tasks independently. During prediction, the low-frequency components of the attention output do not contribute significantly, as the MLP modules have the ability to handle the approximation effectively on their own.
Therefore, we conclude that “the approximation tasks are primarily performed by the MLP modules alone,” as the MLP modules are sufficient to achieve the desired approximation even in the absence of low-frequency components from the attention output.
***Q4: Suggestion to revise phrasing regarding the use of Fourier features in pre-trained LLMs due to the analysis being on a fine-tuned model.***
Thanks to the reviewer for pointing that out. You are correct. That is one of our observations in Section 4. We have revised that sentence to “The previous section shows that pre-trained LLMs leverage Fourier features to solve the addition problem after fine-tuning”.
***Q5: Figure 8,15, 17,18 are not as clean as Figure 3***
Thank you for your detailed feedback. We have updated the appendix in our paper to incorporate more discussion about the figures you mentioned, which will be reflected in the revised manuscript. We put the full version in the official comment. Here is a summary of the updated discussion.
In the analysis of various figures, Figure 17 highlights the effective learning of the 'mod 10' task by the last few layers, showing more prominent outlier frequency components compared to Figure 3. Figures 8 and 18 exhibit similar outlier components to Figure 3 but with less clarity, indicating that without fine-tuning, models like 4-shot Phi2 and GPT-J (with respective accuracies of 55.44% and 72.21%) struggle to fully utilize Fourier features for arithmetic tasks. This observation suggests potential improvements in model performance through enhanced leveraging of these features. Meanwhile, Figure 15 reflects the complexity models face with multiplication tasks, where despite data expansion and fine-tuning, accuracies remain relatively lower (74.58%) due to inadequate utilization of Fourier features, as also supported by recent studies from McLeish et al. (2024) and Dziri et al. (2024).
***Q6: Clarify the difference from Nanda et al. (2023) and add Stolfo et al. (2023) to related work.***
Thank you for your supportive comments. We have added more discussion for Nanda et al. (2023) and Stolfo et al. (2023) to the related work section, which will be reflected in the revised manuscript. Due to space limits, we put the fully revised version in the official comment.
---
Rebuttal Comment 1.1:
Title: Thanks for the response; missing one important question
Comment: Thank you for your response.
I'd love to see a response about my concern regarding the data split in my question 1.
1. Concerning memorization vs computation/recombination, I guess it's partly philosophical and hard to prove, but I don't see "direct output" as a requirement for memorization. I think memorization can be implemented by a gradual computation process (even if in other, retrieval-oriented tasks, it's more direct).
2. Thanks for the clarification about figure 4. Still, figures 2+3 don't quantify the superposition phenomenon, do they? Would there be a way to quantify this specifically?
3. I still think that the statement "primarily performed by the MLP modules alone," is inaccurate, since attention modules are involved, as you also state. There may be some redundancy, but the above statement seems misleading.
4. Thanks for a more careful discussion of other figures besides 3 and look forward to seeing it in the paper's next revision. This statement is excellent:
" we believe that without fine-tuning, the model does not fully leverage Fourier features to solve the addition problem. Even though the pre-trained model has learned to represent the numbers in Fourier space, it fails to fully understand the question phrasing and apply the Fourier method to solve it. We believe that better leveraging the Fourier features to improve the models’ performance on arithmetic tasks can be an interesting direction."
I believe something along these lines should appear in the main body of the paper, to accurately represent the contribution of the present work.
5. Your clarifications about differences from refs [1], [2] are helpful; thank you.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: We are glad that we addressed your concerns about ‘other figures besides 3’ and the ‘difference from references [1,2]’, and we will make sure to add these discussions to the revised version. We will also incorporate the statement you mentioned into the main body of the paper to highlight our contribution, which we believe will provide valuable insights for future work.
We have addressed the ‘concern about data overlap’ in our response to all reviewers. In summary, there is no overlap between the training data and the test data.
Thank you once again for your insightful questions. Below is our response to the rest of your questions:
***Q1: Concerning memorization vs computation/recombination***
We acknowledge that it is partly philosophical and difficult to prove that "direct output" is a requirement for memorization. However, as noted in other memorization tasks mentioned in our previous response, where the model uses only a few layers to make the prediction, we believe it is reasonable to hypothesize that the model is performing computation rather than memorization. In later sections of our paper, we correctly split the dataset to fine-tune the model. Through ablation studies and Fourier analysis, we provide additional evidence to support the claim that the model is indeed computing rather than memorizing.
***Q2: superposition phenomenon***
Thank you for your comments. We would like to clarify that the purpose of Figure 4 is to help readers understand how the results in Fourier space correspond to number space. Our experiments are primarily conducted in Fourier space, which may cause some confusion regarding how the results in Fourier space enable the model to identify the correct answer in number space. As explained in lines 137-150, we aim to help readers grasp, for example, why we assert that the Fourier component with a period of 2 is performing a ‘mod 2’ task. Although the superposition phenomenon is not the primary focus of our contribution, we believe that studying this phenomenon (using the top-k Fourier components to express the final logits) is an interesting direction. Since the model can accurately express the final prediction with only a few components, a potential direction for future research could be making the model process only these outlier Fourier components to enhance LLMs’ speed and performance on arithmetic tasks.
***Q3: "primarily performed by the MLP modules alone," is inaccurate***
Thanks for your feedback! We acknowledge that the word ‘alone’ is misleading. We have changed the sentence to ‘As shown in Table 1, the approximation tasks are primarily performed by the MLP modules, with contributions from the attention modules as well.’
Thanks for reading our responses! Please let us know if the above addresses the questions and concerns.
Sincerely,
Authors
---
Rebuttal 2:
Title: More details about Q5 and Q6
Comment: ***Q5: Figure 8 (4-shot) and figure 15 (multiplication), figure 17 (other format), and figure 18 (GPT-J) are not as clean as Figure 3***
**Figure 17 (alternative format)**: Compared to Figure 3, Figure 17 also clearly shows the outlier frequency components. However, the outlier components in Figure 17 are not distributed across as many layers as in Figure 3, especially for the attention output. From the color bars, we can see that in Figure 17, the magnitude for the 10-period component in the last few layers is larger, particularly for the attention output. This suggests that the last few layers in Figure 17 learn the ‘mod 10’ task well and contribute more to the final logits compared to other layers. Hence, if we clip the values above 600 to 600, we can achieve results similar to those in Figure 3.
**Figures 8 (4-shot) and 18 (GPT-J)**: Compared to Figure 3, we can see the same outlier components in Figures 8 and 18, but they are not as clean as in Figure 3. Our experiments show that the accuracy for the 4-shot Phi2 (Figure 8) is 55.44%, and the accuracy for 4-shot GPT-J (Figure 18) is 72.21%. Hence, we believe that without fine-tuning, the model does not fully leverage Fourier features to solve the addition problem. Even though the pre-trained model has learned to represent the numbers in Fourier space, it fails to fully understand the question phrasing and apply the Fourier method to solve it. We believe that better leveraging the Fourier features to improve the models’ performance on arithmetic tasks can be an interesting direction.
**Figure 15 (multiplication)**: We can clearly see the outlier components in Figure 15, but they are not as clean as Figure 3. Note that for multiplication tasks, even if we expand the dataset size for multiplication, the accuracy only reaches 74.58% after fine-tuning. Multiplication is more difficult for LLMs, as shown in [1,2]. Hence, we believe that since the model does not fully learn to leverage Fourier features to solve the problem, Figure 15 appears less clean compared to Figure 3.
[1] McLeish, Sean, et al. "Transformers Can Do Arithmetic with the Right Embeddings." arXiv preprint arXiv:2405.17399 (2024).
[2] Dziri, Nouha, et al. "Faith and fate: Limits of transformers on compositionality." Advances in Neural Information Processing Systems 36 (2024).
***Q6: Clarify the difference from Nanda et al. (2023) and add Stolfo et al. (2023) to related work.***
Mechanisms of pre-trained LMs …Furthermore, [1] helps to understand the mechanism of pre-trained LMs when solving arithmetic tasks, by illustrating how LMs transmit and process query-relevant information using attention mechanisms and MLP modules.
Fourier features in Neural Networks. …[2] notes that Fourier features occur in shallow transformer models trained on modular addition. In contrast, our work focuses on the addition task for pre-trained LLMs. We specifically explore the necessity of various Fourier components and their contributions to the predictions...
[1] Stolfo, Alessandro, Yonatan Belinkov, and Mrinmaya Sachan. "A mechanistic interpretation of arithmetic reasoning in language models using causal mediation analysis." arXiv preprint arXiv:2305.15054 (2023).
[2] Nanda, Neel, et al. "Progress measures for grokking via mechanistic interpretability." arXiv preprint arXiv:2301.05217 (2023). |
Strengths: 1. This is an in-depth analysis of one specific facade of the internal workings of the LLM.
2. While this does not seem like the first time Fourier features are studied in neural networks, I still find the analysis method quite interesting.
3. The writing quality is high. Findings are very nicely presented in a progressive and easy-to-follow way.
Weaknesses: 1. While the study is thorough, I find the scope to be quite narrow -- it's only focused on one specific arithmetic task. I'm skeptical about how much this will help downstream model development.
2. To make the study feasible, the paper focused on the single-token case, which in turn limits the range of addition to <520. This makes the setup very artificial. That said, this is unfortunately an inherent weakness of many similar analysis papers, and I realize it's tricky to reach a good solution.
Technical Quality: 3
Clarity: 4
Questions for Authors: ### Presentation Suggestions
- Section 3: I know this is mentioned in the existing literature but it would be nice to define "modular addition" before you use it here for the first time.
- L91: I'm confused by $[L]$. Do you mean a range $[0, L)$ or something similar?
- L114: "approximately" is a weird word in this context. I'm not sure what you are trying to say, but to me, it makes perfect sense even if you remove it (similar occurrence in L121).
### Questions
1. How much do you think the choice of positional embedding has played in the creation of Fourier features? To be more specific, if we use learned positional embedding rather than sinusoidal/rotary positional embedding, do you think you'll still observe these features?
2. I might have missed it but I don't think there's ever an explanation for why attention only learns high-frequency features, while MLP has both. Do you have any insights?
3. More of a suggestion: Did you look at stronger open-source models such as the Mixtral ones? It's a pity that you had to do in-context learning for the model to be able to conduct arithmetic addition.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors should address the limitations their experiment configurations have created, including but not limited to:
- single token case only
- small number addition only
- findings are constrained to a single arithmetic operation, and it's not clear whether they'll generalize to others
(To clarify, not asking for extra experiments, but just acknowledging the limitations)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s supportive comments and constructive suggestions.
**Q1: The term "modular addition" should be defined before its first use.**
We have added a definition of "modular addition" in Section 3.1 to the revised version of our paper for clarity: "Modular addition is an arithmetic operation in which the sum of two numbers is divided by a modulus, and the remainder of this division is taken as the result. This operation ensures that the resulting value always falls within the range from zero to one less than the modulus, effectively "wrapping around" to zero once the modulus is reached."
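As a one-line illustration of this definition (hypothetical helper name, our own sketch rather than code from the paper):

```python
def mod_add(a: int, b: int, p: int) -> int:
    """Modular addition: the sum wraps around to zero once the modulus p is reached."""
    return (a + b) % p

# 7 + 8 = 15 wraps around modulus 10 to give 5.
assert mod_add(7, 8, 10) == 5
```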
**Q2: Confusion about the notation [L].**
$[L]$ stands for $\{1, 2, 3, \ldots, L\}$. Hence, $h^{(\ell)}$ where $\ell \in [L]$, refers to one specific hidden state on the residual stream at layer $\ell$. We have included this definition in Section 3.1 in the revised version.
**Q3: The word "approximately" is potentially confusing in the context it's used.**
For line 114, we agree that the term "approximately" might cause confusion. We originally included it to indicate that while outlier components have significantly larger magnitudes, there are still small magnitudes for other components. However, we recognize that this might not be necessary for clarity in this context. Therefore, we have removed "approximately" from lines 114 and 121 to improve clarity.
**Q4: How much do you think the choice of positional embedding has influenced the creation of Fourier features?**
In our research, we treat each number as a distinct token with a corresponding token embedding. Consequently, the choice of positional embedding does not directly impact our results and observations. Moreover, it is important to note that the pre-trained embeddings have already shown Fourier features. We believe that when it comes to multiple tokens, these features will also enhance model accuracy. However, this may introduce additional complexity due to the increased number of tokens involved.
**Q5: Why do attention mechanisms tend to learn high-frequency features, whereas MLPs capture both high and low-frequency features?**
Thank you for your insightful question. While our paper focuses primarily on the interpretability of pre-trained LLMs, we can provide insights into this observation based on existing research and theoretical understandings.
- **The key to solving modular addition is computing trigonometric functions.** As shown in [2], to solve modular addition $a + b \mod p$, the Transformer model tends to address the equivalent problem $\arg\max_{c} \cos(2\pi(a + b - c)/p)$. The token embedding in transformers represents inputs as $\cos(wa)$, $\cos(wb)$, $\sin(wa)$, and $\sin(wb)$ with different frequencies $w$. Using trigonometric identities, the model computes $\cos(w(a + b))$ and $\sin(w(a + b))$, which are then processed to obtain $\cos(w(a + b - c))$ and $\sin(w(a + b - c))$.
- **Attention can compute trigonometric functions.** Assuming the key and query matrices are identity matrices for simplicity, the attention mechanism can be simplified as $\text{softmax}(XX^\top)V$. As $\cos(wx)$ and $\sin(wx)$ have been encoded in $X$, the attention can effectively compute the result of $\cos(w_1 a) \cdot \cos(w_2 b)$ with different frequencies $w_1, w_2$. More evidence can be found in [1], Section 5.2. This shows why attention mechanisms are well suited to computing trigonometric functions.
- **MLP can approximate magnitude.** According to [3], during arithmetic tasks, the information about numbers $a$ and $b$ is processed by MLPs in the later layers of a neural network. If the magnitudes of $a$ and $b$ are represented in specific entries of the hidden state, say $h[i]$ and $h[j]$, an MLP can compute $w[i]h[i] + w[j]h[j]$, which approximates the magnitude of $a + b$. This demonstrates that MLPs can approximate the magnitude of $a+b$.
In summary, the tendency of attention mechanisms to learn modular addition (high-frequency features) while MLPs primarily learn approximation (low-frequency features) can be attributed to their different structures. We hope these insights help clarify the observation and provide a deeper understanding of the underlying mechanisms.
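The cosine-argmax equivalence cited from Nanda et al. [2] above can be checked numerically; a minimal sketch (our own illustration, with an arbitrarily chosen modulus):

```python
import numpy as np

def fourier_argmax_add(a: int, b: int, p: int) -> int:
    """Predict a + b (mod p) as argmax over c of cos(2*pi*(a + b - c)/p).

    The cosine equals 1 exactly when a + b - c is a multiple of p, i.e.
    when c == (a + b) mod p, so the argmax recovers modular addition.
    """
    c = np.arange(p)
    return int(np.argmax(np.cos(2 * np.pi * (a + b - c) / p)))

# Matches ordinary modular addition on a grid of sampled input pairs.
p = 113
assert all(fourier_argmax_add(a, b, p) == (a + b) % p
           for a in range(0, p, 7) for b in range(0, p, 11))
```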
**References:**
1. Gu, Jiuxiang, et al. "Fourier circuits in neural networks: Unlocking the potential of large language models in mathematical reasoning and modular arithmetic." arXiv preprint arXiv:2402.09469 (2024).
2. Nanda, Neel, et al. "Progress measures for grokking via mechanistic interpretability." arXiv preprint arXiv:2301.05217 (2023).
3. Stolfo, Alessandro, Yonatan Belinkov, and Mrinmaya Sachan. "A mechanistic interpretation of arithmetic reasoning in language models using causal mediation analysis." arXiv preprint arXiv:2305.15054 (2023).
**Q6: Consider using stronger open-source models like Mixtral for arithmetic tasks.**
Thank you for your feedback. We have indeed tried our experiments on stronger open-source models. However, we found that the predictions were not controllable without in-context learning. The models sometimes generated intermediate steps or repeated the questions even when we appended ‘Answer is’ at the end of our prompts. In-context learning helped ensure that the model understood the task and provided the correct answers. We believe that in-context learning does not change the model’s underlying strategy for solving the tasks but rather helps the model respond appropriately. Additionally, we emphasize that for closed-source models, there is also evidence supporting the existence of Fourier features. This further validates our findings and suggests that the observed phenomena are not limited to the specific models we tested. | Rebuttal 1:
Rebuttal: **Response to All Reviewers:**
We thank reviewers [R1(YRn2), R2(XwiY), R3(wYe3), R4(mHEL)] for their thoughtful and highly supportive feedback! We are glad that the reviewers found the analysis method interesting [R1, R3], the observations about the application of Fourier features in language models performing addition tasks insightful and novel [R2, R4], the presentation of our findings easy to follow [R1, R2], and the experimental results compelling, especially regarding the roles of Fourier components in model behavior and the importance of model pretraining [R3, R4].
***Q1: Limitations pointed out by the reviewers: focusing on a single arithmetic task (addition) and the single-token case limiting the range to numbers less than 520.***
1. **Scope of Study**: As an interpretability paper, we focused on a single-token case with numbers less than 520 to provide a controlled and clear analysis environment. This setup allows us to dive deep into understanding the fundamental mechanisms at play. It is a starting point to inspire further research and improvements in the model’s performance on arithmetic tasks. These limitations are inherent to many interpretability studies. For example, [1,2] focus on modular addition, a simpler task than general addition, and their analyses are conducted on shallow neural networks or transformers. In contrast, our study investigates pre-trained LLMs on the addition task, offering insights into more complex and realistic scenarios.
2. **Broader Implications**: We emphasize that the observations on pre-trained number embeddings are not limited to addition tasks. The number embeddings after pre-training are sparse in the Fourier space for many pre-trained LLMs without fine-tuning (Figure 14). This suggests that the model does not learn to use Fourier features to embed numbers just for one task, implying that these learned Fourier features will benefit other number-related tasks as well. In this paper, we focus on addition to help readers understand how these Fourier features assist the model in solving addition problems. We also provide evidence that the model uses these Fourier features in multiplication (Figure 15, Section C.2).
3. **Future Research Directions**: The insights and observations from this paper can inspire many interesting future directions:
- Exploring the potential of adding a regularizer to help models learn high-frequency Fourier features to enhance performance on number-related tasks.
- Investigating the training dynamics of how models learn to use Fourier features to represent numbers and solve arithmetic problems during pre-training.
- Understanding why models tend to use these Fourier features to solve problems.
- Examining whether models still utilize these Fourier features when dealing with numbers tokenized using sub-word tokenizers.
- Investigating the use of digit-wise tokenizers, such as in Llama, to see if the model leverages Fourier features for arithmetic tasks.
- Developing strategies to help the model learn Fourier features in both cases mentioned above to improve performance on arithmetic tasks.
- Improving the model's performance on larger numbers or decimal numbers by finding strategies to correctly embed numbers.
By highlighting these points, we hope to demonstrate the broader implications of our findings and the potential for future research inspired by our work.
**References:**
1. Morwani, Depen, et al. "Feature emergence via margin maximization: case studies in algebraic tasks." arXiv preprint arXiv:2311.07568 (2023).
2. Nanda, Neel, et al. "Progress measures for grokking via mechanistic interpretability." arXiv preprint arXiv:2301.05217 (2023).
***Q2: Concern about data overlap in training and test sets due to the use of the same mathematical expressions with different templates.***
We apologize for the confusion. In line 595, the mention of generating "various phrasings" for addition questions was a typo. In the actual experiments, we used a distinct phrasing for each pair of numbers, selecting one template from five available templates. This ensures that every unique pair of numbers between 0 and 260 is presented with a consistent phrasing from these templates. We have fixed that typo in the revised version. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Set-based Neural Network Encoding Without Weight Tying | Accept (poster) | Summary: This paper proposes SNE, or Set-based Network Encoding, a method of encoding the weights of arbitrary neural networks in order to predict properties such as performance, generalization gap, training loss, number of epochs, etc. Specifically, it uses a chunking mechanism as well as layer-wise and block-wise positional encodings to generalize across different types of neural networks. Transferability experiments are conducted where a predictor is trained on one network model zoo, then evaluated on another, demonstrating an ability few predictors have. Additional experiments and ablations are performed, producing convincing results. Finally, the paper is decently well-written, but has a few fixable presentation flaws.
Strengths: - The contribution of the paper, using a network's weights to determine attributes about it, is quite novel.
- The Methodology section is quite clear and easy to follow. Specifically, the aggregation mechanism in Eq. 9 is analogous to graph aggregation in GNNs.
- The experimental results are generally convincing.
- Some visualization of the network representations from different methods are provided.
- There are ablation results included in the appendix.
- A limitations section is provided, highlighting reasonable restrictions on the experiments.
Weaknesses: - First, the motivation of this paper, specifically lines 29-30, is quite weak and fails to really provide examples of the application of this research other than asserting that they must exist.
- Overall, the presentation is a weak spot, and should be revised. Some examples:
- E.g., in line 31, the authors mention "Implicit Neural Representations (INR)" - what are these? No citation is provided. By contrast, "b) the performance of CNNs" is very easy to understand.
- "modelzoo", this should be "model-zoo" [1] or "model zoo" [2-4].
- Citations in the paper have issues. For NeurIPS it's generally just [num], or if you want to be fancier, "Author et al., Year [num]" but not "Author et al. [num]" without the year.
- Float captions need work, e.g., Tabs 2/3 need longer captions to fully explain the performance metric Kendall's Tau, etc. Should be longer than 1 sentence.
- Figure 2 needs work. The plots should be bigger and legend is clipped.
- NeurIPS checklist should be placed after the references and supplementary.
- The evaluation is done on simpler datasets/networks, however, the limitations section highlights why.
Technical Quality: 4
Clarity: 2
Questions for Authors: I find the idea of predicting end-to-end neural network attributes from the weight values to be quite fascinating. It is very similar to the concept of a neural predictor [5] in Neural Architecture Search [6] but quite orthogonal in the approach, relying on the weights instead of the graph structure. Similarly, most neural predictors suffer from a lack of generalizability/transferability [7, 8], like most of the baselines in this paper. Can the authors provide some commentary/comparison between these two umbrella fields?
Similarly, is it possible to leverage the encoding mechanism (Eqs. 1-9) to derive which types of weights/layers/values are responsible for certain end-to-end attributes, e.g., performance of epochs, e.g., as [9] do?
Refs:
[1] "Equivariant Architectures for Learning in Deep Weight Spaces" - ICML'23.
[2] https://modelzoo.co/
[3] https://github.com/onnx/models
[4] https://pytorch.org/serve/model_zoo.html
[5] "How Powerful are Performance Predictors in Neural Architecture Search?" - NeurIPS'21.
[6] "Efficient Neural Architecture Design via Capturing Architecture-Performance Joint Distribution" - AISTATS'24.
[7] "GENNAPE: Towards Generalized Neural Architecture Performance Estimators" - AAAI-23.
[8] "Bridge the Gap Between Architecture Spaces via A Cross-Domain Predictor" - NeurIPS'22.
[9] "Building Optimal Neural Architectures using Interpretable Knowledge" - CVPR'24.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: Yes, limitations and future work is discussed in App. B. The authors explain that their experiments are on simpler/smaller/older networks as they have limited computational resources. I find this to be an acceptable justification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for taking the time to offer constructive feedback for improving the paper. We respond to the questions raised below.
**Provide examples of the application of this research other than asserting that they must exist.**
We add the following to elaborate on the potential applications of this research to complement lines 29-30:
‘The ability to infer such fundamental properties of trained neural networks using only the parameter values has the potential to open up new application and research paradigms such as learning in the latent space of neural network weights for tasks such as weight generation [1, 2], latent space transfer of weights across datasets allowing for transferring weights from one dataset to another as was recently demonstrated in [3] and latent space optimization using gradient descent where optimization is performed on the weight embeddings [4].’
We hope this addition enhances the motivation for this line of research.
**Overall, the presentation is a weak spot, and should be revised. Some examples:**
- We have added a citation to Implicit Neural Representations (INR) [1] and added the following description, “INRs are neural parameterizations of signals such as images using multi-layer perceptrons”.
- All occurrences of ‘modelzoo’ have been replaced with ‘model zoo’.
- We have changed the citation format to ‘Author et al., Year[num]’.
- We’ve included the following in the captions of Tables 2/3: “Models are evaluated using Kendall’s Tau [5], a rank correlation metric.”, with a citation to the paper that introduces the metric.
- We have reduced the number of plots in Figure 2 to one per model and increased the size (see Figure 2 in the attached pdf). The remaining figures are moved to the appendix in enlarged form and the clipped legend fixed.
- The checklist has been placed after the references and supplementary as pointed out.
**The evaluation is done on simpler datasets/networks, however, the limitations section highlights why.**
Based on the suggestion of Reviewer nYWS we provide additional results on more complicated networks, vision transformers, in our response to nYWS and in the general response. We hope that these additional results on more complicated architectures demonstrate the utility of the proposed method.
**Can the authors provide some commentary/comparison between these two umbrella fields (NAS and weight encoding)?**
In NAS, the predictors mostly rely on the computation graph of generated architectures. This is quite different from the problem we consider, where we require access to the parameter values themselves, optimized for some number of epochs. However, the general underlying problem has commonality between the two fields: namely predicting some properties of the networks under consideration. Perhaps a new line of exploration would consider combining both fields. For instance, a neural network encoder, like ours, could be trained to conditionally take as input either the weights, the computational graph, or both. Such a pre-trained predictor could be applied to NAS where, in test mode, only the computational graph is given to estimate the performance of newly generated architectures. This would require the encoder to be agnostic to architectural choices like our model. The discovered architectures, when trained, could then be used to update the property predictor using both weights and computation graphs. Such a combination would find application in both fields. To the best of our knowledge, we are unaware of any works that attempt such a combination and we leave this exploration as future work and will draw attention to such a merger of both fields in our discussion and limitations sections.
**Similarly, is it possible to leverage the encoding mechanism (Eqs. 1-9) to derive which types of weights/layers/values are responsible for certain end-to-end attributes, e.g., performance of epochs, e.g., as [9] do?**
Technically, such an analysis should be possible. For instance, we could train SNE to predict multiple properties for a given network and at test time, we can remove the encodings of different layers (in Eq 9) and see how it affects the predictions compared to the full network. This will provide a mechanism for determining which layers/weights are responsible for certain attributes. Currently we are unable to do such an analysis since all the model zoos used in this paper only come with a single attribute in the meta-data. Such analysis will require regenerating the model zoos, a task which we are unable to undertake during the short rebuttal period. However, we are inclined to agree with the Reviewer that such an analysis will be very useful.
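To make the proposed leave-one-layer-out analysis concrete, here is a minimal, hypothetical sketch (the per-layer encodings, the mean combiner and the property predictor below are illustrative stand-ins, not the actual SNE model):

```python
import numpy as np

def attribute_layers(layer_encodings, predict):
    """Leave-one-layer-out attribution: drop each layer's encoding from the
    combined network encoding (here a simple mean, as a stand-in for Eq. 9)
    and measure how much the property prediction shifts."""
    combine = lambda encs: np.mean(encs, axis=0)
    full = predict(combine(layer_encodings))
    deltas = []
    for i in range(len(layer_encodings)):
        reduced = [e for j, e in enumerate(layer_encodings) if j != i]
        deltas.append(abs(full - predict(combine(reduced))))
    return deltas  # larger delta => that layer matters more for the property

# Toy example: three per-layer encodings and a stand-in property predictor.
encodings = [np.array([0.0]), np.array([0.0]), np.array([10.0])]
deltas = attribute_layers(encodings, predict=lambda v: float(v.sum()))
```

The layer whose removal shifts the prediction most would be attributed the largest responsibility for the predicted property.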
**References**
[1] Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights, 2022
[2] Wang, Kai, et al. "Neural Network Diffusion." arXiv preprint arXiv:2402.13144 (2024).
[3] Soro, Bedionita, et al. "Diffusion-based Neural Network Weights Generation." arXiv preprint arXiv:2402.18153 (2024).
[4] Rusu, Andrei A., et al. "Meta-learning with latent embedding optimization." arXiv preprint arXiv:1807.05960 (2018).
[5] Kendall, M. G. A new measure of rank correlation. Biometrika, 30(1/2):81–93, 1938.
---
Rebuttal 2:
Comment: I thank the authors for their detailed rebuttal, which satisfactorily addressed my concerns and questions. After reading the other reviewer comments and corresponding rebuttals I find that the reasons to accept outweigh concerns to reject, and thus raise my score. If accepted, I encourage the authors to integrate rebuttal experimental results as well as revise related work and limitations to reflect these past discussions.
Title: 5->6
---
Rebuttal Comment 2.1:
Title: Discussion
Comment: We would like to thank the Reviewer for the continued engagement. As requested, the additional experimental results as well as the revised related work, limitations and discussions of potential applications and similarities to NAS research will be incorporated in the paper. Thank you. | Summary: This work introduces a set-based neural network encoding that processes each chunk of an input model independently to predict network properties. The proposed model, SNE, is trained using Logit Invariance instead of weight tying to maintain generalizability to unseen architectures. Unlike previous approaches, SNE can be trained and tested on different architectures and it demonstrated superior performance in out-of-distribution experiments involving both datasets and architectures.
Strengths: SNE is more general than its predecessors and can be applied to any input architecture. Results indicate that it outperforms other encoding methods.
Weaknesses: While SNE shows improvements in novel tasks such as cross-architecture and cross-dataset scenarios, its comparison on traditional tasks is limited to predicting INR frequencies. Expanding Table 1 to include SNE's performance on additional outputs, such as accuracy, would enhance the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: The writing is generally good but could be improved in certain areas. For example, the abstract contains a redundant sentence: “by utilizing a layer-wise encoding scheme that culminates to encoding all layer-wise encodings to obtain the neural network encoding vector,” which should be revised.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations were properly addressed in appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for taking the time to offer constructive feedback for improving the paper. We respond to the questions raised below.
**The writing is generally good but could be improved in certain areas. For example, the abstract contains a redundant sentence: “by utilizing a layer-wise encoding scheme that culminates to encoding all layer-wise encodings to obtain the neural network encoding vector,” which should be revised.**
We have removed the sentence from the abstract and replaced it with ‘Furthermore, our $\textbf{S}$et-based $\textbf{N}$eural network $\textbf{E}$ncoder (SNE) takes into consideration the hierarchical computational structure of neural networks.’ We hope this resolves the redundancy pointed out.
**Expanding Table 1 to include SNE's performance on additional outputs**
Our request to [1] for the dataset of INRs for the ShapeNet dataset is currently pending, and this task will be added to Table 1 when the dataset is received. However, we note that fundamentally, there is no difference between the accuracy prediction task for INRs and the frequency prediction task that we present in Table 1, since these are all evaluations on MLPs.
We would also like to draw the Reviewer’s attention to the additional results we provide on generalization to vision transformers in our general response. We hope these additional evaluations further demonstrate the wide-ranging applicability of our method.
**References**
[1] De Luigi, Luca, et al. "Deep learning on implicit neural representations of shapes." arXiv preprint arXiv:2302.05438 (2023). | Summary: This work tackles an original and interesting challenge: predicting neural networks properties from their trained weight values. To do so, the authors propose to leverage set to set and set to vector transformations in order to encode weight values. Furthermore, they propose to account for the deep neural network architecture and operation order through the addition of positional encoding at several stages of the property predicting model. Ultimately, they anticipate the problem of various model sizes with a padding/chinking mechanism.
The proposed method is then evaluated on small-scale datasets in a cross-dataset fashion against recently published methods in the field.
Strengths: Such research studies can have a significant impact on other fields. For instance, a good accuracy estimator from model weight values can be leveraged as a proxy objective in DNN quantization or weight pruning.
The approach is well motivated and the solutions to the highlighted challenges are sound.
The paper is well written and simple to follow.
The authors provide a thorough evaluation and ablation study of the proposed method and its components.
Weaknesses: From my understanding of this work, I identified two major drawbacks:
1. the authors insist on the generalization across different model architectures and tasks. From my perspective, the former is the most important one. However, as it stands, there is little variation between Arch1 and Arch2 as presented in this study: both are simple CNNs. I wonder if the authors could provide numbers w.r.t transformer architectures against CNNs.
2. Since these experiments can be expensive (they require multiple trainings of DNNs to create the predictor training set), it would be interesting to measure the impact of the dataset size on the predictor's performance, in order to estimate how far the community is from being able to reasonably work on large-scale datasets such as ImageNet.
As it stands, I believe this work aims at providing empirical results which are not fully convincing yet. I look forward to the authors' response and other reviews and will update my rating accordingly.
Technical Quality: 3
Clarity: 3
Questions for Authors: I summarize the previous points:
1. could the authors provide results with a predictor trained on CNNs from arch1 and predict the performance of transformers?
2. could the author measure the importance of the training set size (pairs of DNNs and accuracy) w.r.t. the predictor's performance, in order to estimate the cost of working on ImageNet-like tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors made a relevant attempt at highlighting the limitations of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for taking the time to offer constructive feedback for improving the paper. We respond to the questions raised below.
**could the authors provide results with a predictor trained on CNNs from arch1 and predict the performance of transformers?**
We provide the requested results below. We generate a model zoo of transformer classifiers based on [1] and test the transfer from arch1 to the transformer model zoo as requested.
For this task, we are unable to benchmark against NeuralGraph[2] as was done in Table 2, since the model cannot process transformer weights when trained on Arch1. Hence we benchmark against the DeepSets baseline as in Table 2. We present the results in the table below:
| Arch1 $\rightarrow$ Transfer | DeepSets | SNE(Ours) |
|----------------------------|----------|------|
| MNIST $\rightarrow$ MNIST | 0.1975 $\pm$ 0.000 | **0.4625** $\pm$ 0.006 |
| CIFAR10 $\rightarrow$ MNIST | 0.1970 $\pm$ 0.000 | **0.3278** $\pm$ 0.029 |
| SVHN $\rightarrow$ MNIST | 0.1906 $\pm$ 0.000 | **0.3735** $\pm$ 0.009 |
From the results, the proposed SNE generalizes better to the unseen transformer architecture at test time than the baseline, showing strong architectural transfer. Additionally, here, the model encodes an architecture with about 5 times the number of parameters in Arch1, demonstrating the scalability of our approach. The DeepSets baseline fails to generalize on this task.
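For reference, the rank-correlation numbers reported above and in Table 2 are Kendall's Tau values; a minimal, dependency-free sketch of the metric (assuming no ties, unlike the tau-b variant most libraries compute) is:

```python
def kendall_tau(pred, true):
    """Kendall's Tau: concordant minus discordant pairs, normalised
    by the total number of pairs (no tie handling)."""
    n = len(pred)
    assert n == len(true) and n > 1
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (pred[i] - pred[j]) * (true[i] - true[j]) > 0 else -1
    return 2.0 * s / (n * (n - 1))
```

A predictor whose scores perfectly preserve the ranking of true accuracies scores 1.0; a fully reversed ranking scores -1.0.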
**could the author measure the importance of the training set size w.r.t. the predictor's performance, in order to estimate the cost of working on ImageNet-like tasks?**
We provide the requested results in Figure 1 of the attached pdf. From this it can be seen that SNE is more data efficient compared to the baselines (DWSNet[2] and NFT[3]) across all percentages of the full training data demonstrating that the proposed method learns a good embedding even in the limited data setting. We thank the Reviewer for suggesting this ablation as it further demonstrates the strength of SNE.
**References**
[1] https://github.com/s-chh/PyTorch-Scratch-Vision-Transformer-ViT
[2] Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, and Haggai Maron. Equivariant architectures for learning in deep weight spaces. arXiv preprint arXiv:2301.12780, 2023.
[3] Allan Zhou, Kaien Yang, Yiding Jiang, Kaylee Burns, Winnie Xu, Samuel Sokota, J Zico Kolter, and Chelsea Finn. Neural functional transformers. arXiv preprint arXiv:2305.13546, 2023.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for their response and the new experimental results they provided for this rebuttal.
Overall, I consider that my concerns have been addressed. Thus, I am likely to increase my rating for this work after the end of the discussion period. | Summary: The study introduces Set-based Neural Network Encoding (SNE) without weight tying, allowing for encoding network information into a compact form. SNE utilizes Logit Invariance Regularization to ensure the correct functional equivalence of network permutations, departing from traditional weight tying methods. The choice of Set-to-Set and Set-to-Vector functions, including attention mechanisms and pooling layers, are crucial in implementing the SNE framework. Experimental results demonstrate the effectiveness of SNE in predicting generalization performance for Convolutional Neural Networks (CNNs) and Incremental Neural Networks (INRs). SNE is compared with various baselines such as MLP, Deepsets, HyperRep, and others, showcasing its superiority in certain aspects.
Strengths: - New evaluation benchmarks of cross-dataset and cross-architecture evaluation.
- Solid empirical performance. SNE significantly outperforms HyperRep by very large margins in the cross-architecture task. Also, SNE demonstrates true agnosticism to architectural choices compared to NeuralGraph, and shows robust performance without being trained on the training set of a specific model zoo.
Weaknesses: - The proposed method does not ensure weight-space equivariance by design but instead uses what amounts to an augmentation plus a consistency loss. The theoretical contribution is therefore a bit weak.
- The padding and chunking designs look tricky, and it is not clear to me how they help with the performance.
- Fig 1 is not really clear. (1) The legend is not sufficient to understand the computation diagram presented in the figure. (2) The input is shown as neurons, but from the method I think it should be the weight matrices connecting neurons in adjacent layers.
Technical Quality: 2
Clarity: 2
Questions for Authors: - In table 1 NeuralGraph is not reported. Is it because the method is not applicable there?
- Does the method need more optimization steps than baseline methods that ensure weight-space symmetry by design? I suppose optimization would be harder because it requires to optimize for the logit invariance.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations of the method are discussed in Appendix B
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for taking the time to offer constructive feedback for improving the paper. We respond to the questions raised below.
**The proposed method did not ensure weight-space equivariance by design but use a way that is more like an augmentation plus a consistency loss. The theoretic contribution is therefore a bit weak.**
While we use a regularization approach to achieve approximate minimal equivariance in weight-space, our usage of the logit invariance regularizer [1] theoretically guarantees that we indeed learn the correct invariance property, similar to the weight-tying approaches used by the baselines. Additionally, our formulation is what allows us to deal with arbitrary architectures using a single model, as opposed to the baselines, since strict enforcement of weight-space equivariance by design requires crafting a new model for every distinct architecture. In this sense, our approach provides a general encoder which in principle is applicable to any architecture, resolving the limitations of purely weight-tying approaches.
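To illustrate the weight-space symmetry and the consistency-loss idea being discussed (a NumPy sketch, not the actual SNE implementation; the encoder `g` below is a hypothetical stand-in): permuting hidden neurons yields a functionally identical network, and a logit-invariance-style loss penalises an encoder for mapping the two weight settings differently.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, o = 5, 8, 3
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
W2, b2 = rng.normal(size=(o, h)), rng.normal(size=o)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permuting hidden units (rows of W1/b1, columns of W2) leaves the
# network's function unchanged -- the symmetry the regulariser targets.
perm = np.array([1, 0, 3, 2, 5, 4, 7, 6])
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d)
y, yp = forward(W1, b1, W2, b2, x), forward(W1p, b1p, W2p, b2, x)
assert np.allclose(y, yp)  # functionally equivalent networks

# Consistency loss for a (hypothetical) weight encoder g: minimising it
# pushes g towards invariance to such permutations without weight tying.
def g(W1, b1, W2, b2):
    return np.concatenate([W1.ravel(), b1, W2.ravel(), b2])

loss = float(np.mean((g(W1, b1, W2, b2) - g(W1p, b1p, W2p, b2)) ** 2))
```

Since the two weight settings realise the same function, any gap in the encoder's outputs is pure symmetry-breaking, which the regulariser suppresses during training.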
**The padding and chunking designs look tricky and it is not clear for me how they help with the performance.**
The padding and chunking operations are introduced in our pipeline to offer computational efficiency in cases where the parameters are large and hence infeasible to process all together. In Table 6 of the appendix we provide an ablation on the effect of the chunk size, which we reproduce below on the frequency prediction task:
| ChunkSize | MSE |
|-----------|---------------------|
| 4 | $0.095 \pm 0.012$ |
| 8 | $0.090 \pm 0.023$ |
| 16 | $0.118 \pm 0.019$ |
| 32 | $0.056 \pm 0.020$ |
Here we see that performance stays almost the same until the largest chunk size (which in this case is set to the full parameter size, hence requiring no chunk/pad operations). The chunk size is selected to suit the available GPU memory, allowing us to train our method even with limited GPU memory, which in our case is a single GTX 1080 GPU with 11GB of memory.
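As a minimal illustration of the chunk/pad idea being discussed (the layout below is a sketch, not the exact implementation), a flattened parameter vector can be zero-padded to a multiple of the chunk size and reshaped into fixed-size pieces that fit in memory:

```python
import numpy as np

def chunk_params(flat, chunk_size, pad_value=0.0):
    """Pad a flattened parameter vector to a multiple of chunk_size and
    reshape it into fixed-size chunks that can be encoded independently."""
    pad = (-len(flat)) % chunk_size
    padded = np.concatenate([flat, np.full(pad, pad_value)])
    return padded.reshape(-1, chunk_size), pad

# 10 parameters with chunk size 4 -> 3 chunks, 2 padding entries.
chunks, pad = chunk_params(np.arange(10.0), chunk_size=4)
```

The chunk size then becomes a memory knob: smaller chunks mean more, shorter sets to encode per forward pass.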
**Fig 1 is not really clear. (1) the legend is not sufficient to understand the computation diagram presented in the figure. (2) the input is shown as neurons but from the method I think it should be the weight matrix that connecting neurons in adjacent layers.**
We have modified the figure to show the weight matrix instead of the layer in compressed form as shown in the figure in the updated paper. We thank the Reviewer for pointing this out as this improves the clarity of the presented pipeline. Additionally, we link the relevant approach subsections to each stage in the figure to aid clarification of the computation diagram.
**In table 1 NeuralGraph is not reported. Is it because the method is not applicable there?**
Yes. We were unable to apply NeuralGraph for this task. Before submission, correspondences with the authors of NeuralGraph could not resolve the issue we encountered when applying the method for this task and that of Table 4.
**Does the method need more optimization steps than baseline methods that ensure weight-space symmetry by design?**
No. In all experiments, we train our method for the same number of epochs as all the baselines. The addition of the logit invariance regularizer introduces no further optimization steps in all our experiments and we encountered no difficulties in minimizing the general objective together with the logit invariance loss.
**References**
[1] Artem Moskalev, Anna Sepliarskaia, Erik J Bekkers, and Arnold WM Smeulders. On genuine invariance learning without weight-tying. In Topological, Algebraic and Geometric Learning Workshops 2023, pages 218–227. PMLR, 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
- I wonder what the issue was with NeuralGraph on the experiments in Table 1 and Table 4? If the NNs for the INRs are MLP architecture, I do not expect any issue there since NeuralGraph is able to work on classification of MNIST / CIFAR in INRs.
- The experiments on INRs now are on frequency predictions of INRs fitting simple sine waves, which is pretty low-level and toy-ish. I have the concerns about not following the conventions in previous weight space symmetry methods (DWSNet, NFT, NeuralGraphs, etc.) to test INR classification tasks on MNIST, FashionMNIST and CIFAR. This should be a more high-level and sophisticated task than sine wave frequencies, and direct comparison with performance reported by prior work would be more convincing.
- What is the NN architecture tested for the chunksize experiments?
---
Rebuttal 2:
Title: Discussion
Comment: We would like to thank the Reviewer again and respond to the new queries below:
**I wonder what the issue was with NeuralGraph on the experiments in Table 1 and Table 4? If the NNs for the INRs are MLP architecture, I do not expect any issue there since NeuralGraph is able to work on classification of MNIST / CIFAR in INRs.**
- Firstly, we utilize the official implementation of NeuralGraphs provided at [1].
In Table 1, we were unable to get NeuralGraphs to converge on this task despite multiple hyper-parameter runs.
Secondly, the experiments of Table 4 are on ConvNets (the architecture is outlined in Tables 10 and 11 of the appendix). In Table 4, we evaluate in terms of cross-dataset performance prediction, and the issue with NeuralGraph under this evaluation setting lies in the following: we are unable to obtain positive correlation coefficients when we evaluate it cross-dataset. As we elaborated in our response to Reviewer bv1s, in our correspondences with the authors of NeuralGraph, we were unable to find and resolve the issues with negative correlation values in Table 4. Interestingly, we do not observe these issues in our evaluation of NeuralGraph in Table 2.
These explain the absence of NeuralGraphs in Tables 1 and 4.
**This should be a more high-level and sophisticated task than sine wave frequencies, and direct comparison with performance reported by prior work would be more convincing.**
- As we elaborated in our response to Reviewer EgRw, there is no fundamental difference between the task of INR frequency prediction and INR classification, as these require the same guarantees in MLP weight space. Our request to [2] for the dataset of INRs for the ShapeNet dataset is currently pending, and this task will be added to Table 1 when the dataset is received, together with INR classification (a task which we are currently undertaking but do not expect to complete during this discussion period. This is partly due to the large datasets required for training INR classification for MNIST, FashionMNIST and CIFAR: all of DWSNet, NFT and NeuralGraph generate up to a million datapoints, as opposed to the 60000 samples in the original MNIST classification task, resulting in long training iterations).
Importantly, we point the Reviewer’s attention to the following: all the experiments presented in Tables 2, 3, 4 and 7 of the appendix, together with the newly introduced Transformer evaluation task suggested by Reviewer nYWS, are much more difficult tasks than classifying INRs, since they require respecting symmetry properties in both CNN (with MLP heads) and Transformer weight spaces. We hope this provides some clarity.
**What is the NN architecture tested for the chunksize experiments?**
- We utilize the architecture outlined in Table 15 of the appendix which consists of 3 linear layers each with hidden size of 32 and the first two followed by sinusoidal activation function. This is the architecture for generating the model zoo used in the experiments of Table 1.
Finally, we would like to thank the Reviewer for the continued engagement as both the discussions and our response to the original review concerns as well as the questions we’ve presently addressed greatly improves clarity of the paper and these will be incorporated in the paper.
**References**
[1] https://github.com/mkofinas/neural-graphs/tree/1f2b671ab4988ef212469363005a5b99eec16580
[2] De Luigi, Luca, et al. "Deep learning on implicit neural representations of shapes." arXiv preprint arXiv:2302.05438 (2023).
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
We thank you for taking the time to provide feedback. As the discussion period draws to an end, we would like to know if there are any further queries you'd like as to respond to.
Thank you for your time. | Rebuttal 1:
Rebuttal: **General Response**
We thank all Reviewers for taking the time to offer constructive feedback for improving the paper. Based on the suggestions of Reviewer nYWS, we provide the following additional results.
**Evaluation on Transformers**
We generate a model zoo of transformer classifiers based on [1] and test the transfer from arch1 to the transformer model zoo as requested.
For this task, we are unable to benchmark against NeuralGraph[2] as was done in Table 2, since the model cannot process transformer weights when trained on Arch1. Hence we benchmark against the DeepSets baseline as in Table 2. We present the results in the table below:
| Arch1 $\rightarrow$ Transfer | DeepSets | SNE(Ours) |
|----------------------------|----------|------|
| MNIST $\rightarrow$ MNIST | 0.1975 $\pm$ 0.000 | **0.4625** $\pm$ 0.006 |
| CIFAR10 $\rightarrow$ MNIST | 0.1970 $\pm$ 0.000 | **0.3278** $\pm$ 0.029 |
| SVHN $\rightarrow$ MNIST | 0.1906 $\pm$ 0.000 | **0.3735** $\pm$ 0.009 |
From the results, the proposed SNE generalizes better to the unseen transformer architecture at test time than the baseline, showing strong architectural transfer. Additionally, here, the model encodes an architecture with about 5 times the number of parameters in Arch1, demonstrating the scalability of our approach. The DeepSets baseline fails to generalize on this task.
**Measuring the Importance of training set size.**
We provide a plot of training set size versus error in Figure 1 of the attached pdf. From this it can be seen that SNE is more data efficient compared to the baselines (DWSNet[2] and NFT[3]) across all percentages of the full training data demonstrating that the proposed method learns a good embedding even in the limited data setting. We thank the Reviewer for suggesting this ablation as it further demonstrates the strength of SNE.
We hope that these additional results further emphasize the benefits of the proposed method.
**References**
[1] https://github.com/s-chh/PyTorch-Scratch-Vision-Transformer-ViT
[2] Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, and Haggai Maron. Equivariant architectures for learning in deep weight spaces. arXiv preprint arXiv:2301.12780, 2023.
[3] Allan Zhou, Kaien Yang, Yiding Jiang, Kaylee Burns, Winnie Xu, Samuel Sokota, J Zico Kolter, and Chelsea Finn. Neural functional transformers. arXiv preprint arXiv:2305.13546, 2023.
Pdf: /pdf/b25ed22a69e043b5abe4c5d71eb95bbf9e7e5e55.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Belief-State Query Policies for User-Aligned POMDPs | Accept (poster) | Summary: This paper proposes a method for computing optimal policy in a given POMDP and a specific policy class (the class of BSQ policies).
Strengths: In this work, to ensure compliance with user's preference, the policies are directly parameterized by the preferences (formalized by boolean formulas). This approach is quite different from the more well-known approach of including preferences in the objective function, and it is worth investigating.
Weaknesses: (1) Readability: This paper introduces many heavy notations. The definitions (of BSQ, policies, etc.) are pretty abstract.
(2) The policies are defined using BSQ. The expressiveness of such a policy class is not well-justified: it is not clear whether this policy class includes all policies that comply with the agent's preferences.
(3) The experiments are performed on several different environments, but the uniformly random baseline (RCompliant) is probably too weak.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Weakness 2:** We assume that a policy must comply with a BSQ preference to align with the user’s intentions. Therefore, we can prove that all aligning policies are included in the policy class.
**Weakness 3:** The baseline RCompliant is there to (1) show the BSQ preferences alone are insufficient for solving the evaluation problems and (2) to evaluate how much the solution of PRS can improve the solution.
We did run experiments searching for a competitive baseline that we excluded from the paper. We found that a brute-force grid search requires significantly more time than PRS because evaluating the expected cost of a point in the parameter space requires numerous samples. For smarter parameter optimization algorithms (Nelder Mead and particle swarm), we found that the sampling cost, along with optimizing over a non-convex continuous parameter space that lacks a gradient, made these algorithms too unreliable. POMDP solvers would be an unfair comparison to use as a baseline since PRS receives additional information on the problem. Otherwise, there is no existing work that we can directly compare against. We will add experiments and analyses on this into the supplementary for the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will keep my score and remain positive about this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your review and being optimistic about our paper. | Summary: This paper provides a method for computing policies with user preferences called belief state queries (BSQ). A BSQ consists of a condition and a desired action and can be used for expressions like: "Given a high likelihood of a, do b." To express this, the condition of a BSQ compares the probability of a first-order logic formula over the belief to a threshold. This paper aims to find the threshold values for a BSQ and the corresponding policy that minimizes the number of steps required to reach the goal states with a positive probability.
The authors give a theoretical analysis, showing that the expected cost function, i.e., the number of steps to reach the goal states for the first time, for a BSQ-compliant policy is piecewise constant. These piecewise constant areas of the expected cost function can be used to partition the parameter space. Using this result, they develop an algorithm for computing the threshold values for the optimal BSQ-compliant policy. They try various partition refinement algorithms and compare their resulting BSQ-compliant policies against a policy using uniformly selected random partitions (RCompliant). The results show that their BSQ-compliant policies have a lower expected cost and a higher probability of goal achievement than the RCompliant policies.
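To make the BSQ construction described above concrete, a minimal sketch (the belief representation and names are illustrative, not the paper's formalism) of a single-threshold BSQ-compliant action choice could look like:

```python
def bsq_action(belief, formula, theta, preferred_action, fallback_action):
    """Belief-state query: if the probability mass of belief states
    satisfying `formula` reaches the threshold theta, take the
    user-preferred action; otherwise fall back."""
    p = sum(prob for state, prob in belief.items() if formula(state))
    return preferred_action if p >= theta else fallback_action

# "Given a high likelihood the door is open, enter; otherwise peek."
belief = {"door_open": 0.7, "door_closed": 0.3}
act = bsq_action(belief, lambda s: s == "door_open", theta=0.6,
                 preferred_action="enter", fallback_action="peek")
```

Searching over theta in [0, 1] — the parameter space whose piecewise-constant cost structure the paper exploits — then amounts to finding the threshold values minimising expected cost.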
Strengths: - The problem of user-preference compliant policy computation in a partially observable setting is interesting. It allows specifying additional objectives for the agent that cannot be directly expressed with LTL, as the agent cannot observe the complete state of the model.
- The definition of BSQs allows the specification of user preferences in more natural language, making it accessible to non-experts.
- The paper removes limitations from previous works: the difficulty of guarantees in reward calibration approaches, the need for past traces in the works of Mazzi et al., and the lack of a formal analysis in the work of Srivastava et al.
Weaknesses: -- I raised my score after the authors' rebuttal, which provided an additional evaluation of the method --
- The experimental comparison is limited. It would have been interesting to see how the approach compares to the reward calibration approaches that do not guarantee preference compliance or the referenced works of Mazzi et al. There is no mention of why this comparison would not be possible. Another interesting comparison would be to compare to a non-BSQ-compliant policy and analyze the difference in probability of reaching the goal states and satisfying the BSQ. Such a comparison would give insight into the gains and losses of being BSQ-compliant.
- Information about the sizes of the problems used for the experiments is missing, making it difficult to estimate the strength of the approach.
- The definition of the goal-oriented POMDPs is hard to follow. It is still unclear to me where the constant symbols are used, why the states S are defined as valuations for Vf, how the set of state variables for a function F follows from instantiations of F, and what the type of F is. The second part, starting with the definition of the transition function, is clear. The part before that is not.
I list a couple of questions related to these weaknesses, and I am happy to reconsider the score depending on the answers.
Minor comments:
- The paper mentions relational goal-oriented POMDPs at the start of section 3, but afterward, only goal-oriented POMDPs are explained.
- Grounded state is mentioned without explanation (L59). This term is not familiar to me.
- Figure 2a contains a lot of information introduced in different parts of the text. Perhaps this can be split into two iterative figures for clarity.
- I think the input theta for the compound BSQ in lemma 1 (L220) should be a capital theta.
- Definition 9 (L204) contains one curly P too many.
- The R with superscript 1 and subscript 0 (L503) was not introduced.
- The union between the expected cost of a partition and the expected cost of a leaf in Algorithm 1 line 10 seems a bit weird. Is this the correct operator? If so, a small explanation would be helpful.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the motivation for minimizing the cost instead of maximizing the probability of reaching the goal states?
2. Is there a reason for not comparing your approach to other existing approaches? Can you give some intuition on how the approaches would compare to each other?
3. What are the sizes of the problems used for the experimental evaluation? What makes them challenging (L327)?
4. Why is the comparison sign not an input for the BSQ?
5. Definition 1 (goal-oriented POMDPs) uses unintroduced symbols (the non-curly F, O, and T). Is this correct? If so, what do they mean?
6. Can you clarify Definition 1 (goal-oriented POMDP)? See weaknesses for the unclear parts.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors addressed the limitations of their work, discussing the impact on their results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and questions.
**Q1:** Maximizing the probability of reaching the goal does not factor in the time required for goal completion. Therefore, the optimal strategy would be to use the maximum allowed time for gathering information before choosing to reach the goal. By minimizing the time to achieve the goal, we balance the probability of goal achievement and the average goal completion time without reward engineering.
**Q2:** We experimented with various algorithms for comparing against PRS that are not included in the paper, which can be categorized into parameter search and unconstrained POMDP solvers. The main challenge with parameter search is that we are optimizing over a non-convex continuous search space that does not have a gradient. Additionally, we observed that evaluating the expected cost accurately requires numerous samples of the same point and is computationally expensive. For example, evaluating a parameter value in Lane Merger by taking 200 samples on average requires 22.07 seconds, and the average coefficient of variation of the expected cost is 135.88%, meaning more samples are needed. Therefore, we explored using more advanced parameter search algorithms like Nelder Mead and particle swarm optimization. However, they were inconsistent, with Nelder Mead having a high expected cost coefficient of variation of 78.8% on a toy example. PRS is taking advantage of the structure of the expected cost function, allowing it to consistently and quickly converge on a solution.
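The sampling-noise problem described above can be illustrated with a small sketch. This is not the authors' code: a synthetic high-variance cost distribution stands in for real POMDP rollouts, and all names are illustrative assumptions.

```python
# Illustrative only: estimate the expected cost of one parameter value by
# Monte Carlo, and compute the coefficient of variation (CV) that explains
# why naive parameter search needs so many samples. A synthetic exponential
# cost distribution stands in for real POMDP rollouts.
import random
import statistics

def rollout_cost(theta, seed):
    # Stand-in for one simulated episode under the BSQ policy with
    # thresholds theta; exponential costs are deliberately high-variance.
    return random.Random(seed).expovariate(1.0 / 30.0)

def estimate_cost(theta, n_samples=200):
    costs = [rollout_cost(theta, seed=i) for i in range(n_samples)]
    mean = statistics.fmean(costs)
    cv = 100.0 * statistics.stdev(costs) / mean  # in percent
    return mean, cv

mean, cv = estimate_cost(theta=(0.6, 0.5))
# For an exponential distribution the true CV is 100%, so even 200 samples
# per point leave substantial noise in the mean estimate -- consistent with
# the high CV values the authors report.
```

With CVs near or above 100%, point estimates from a few hundred samples remain noisy, which is why gradient-free search methods that re-evaluate many points become computationally expensive.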
For unconstrained POMDP solvers, we evaluated POMCP [Silver and Veness, 2010] and DESPOT [Somani et al., 2013]. POMCP failed to converge. DESPOT was unable to run on Lane Merger. On Graph Rock Sample, DESPOT only reached the goal 0.5% of the time due to a farther goal and sparse rewards. On Spaceship Repair, DESPOT achieved a 7.57% higher goal achievement rate and 13.3% lower expected cost. On Store Visit, DESPOT achieved a 0.58% higher goal achievement rate and 20.0% lower expected cost. We evaluated the alignment with the BSQ preferences and found that DESPOT was compliant 7.3% and 0% of the time on Spaceship Repair and Store Visit, respectively. Therefore, we can conclude that (1) BSQ preferences can help solve problems that unconstrained solvers cannot and (2) the unconstrained solver can perform better on some problems, but their policies are unlikely to comply with the user’s intentions.
Mazzi et al.'s work solves a different problem of normalizing behavior, so we could not compare our approach to theirs.
We will include these experiments and analyses in the supplementary for the final version of the paper.
**Q3:** The sizes of the evaluation problems are as follows:
* Spaceship Repair: 44 states, 3 actions, 4 observations
* Lane Merger: 307,040 states, 4 actions, 32 observations
* Store Visit: 30 states, 5 actions, 27 observations
* Graph Rock Sample: 128 states, 13 actions, 3 observations
These problems are challenging due to the long horizon and safety risks. Evaluating for a horizon of 100 makes it intractable to explore the entire strategy tree (Definition 6). Each problem contains potential risks, such as merging into another car or entering unsafe areas, which would prevent the agent from completing the goal. As these risks are partially observable and relate to the goal, no policy guarantees goal completion. Therefore, the agent must balance the trade-off between taking longer to reach the goal (to gather more information) and the associated risks.
**Q4:** We excluded the comparison operator as an input to a BSQ since it is constant after initialization; only the belief state and the parameter values vary afterward. To remain consistent, we will either remove the first-order logic formula as an input for the same reason or add the comparison operator as an input. Thank you for pointing this out.
**Q5:** In Definition 1, non-curly $F$ and $T$ are syntax errors and have been corrected. Non-curly $O$ is the set of objects, which we will clarify in the next version of the paper. Thank you for pointing these errors out.
**Q6:** Our gPOMDP definition draws upon existing literature on expressive languages for specifying POMDPs [Sanner, 2010, Srivastava, 2012]. This representation allows us to specify large POMDPs by reusing functions from the set of functions $\mathcal{F}$ and instantiating them with the set of objects $O$. For example, whether each rock is safe to sample in Graph Rock Sample can be represented as $safe(rock_i)$. To build the set of states $\mathcal{S}$, we first construct all the state variables $\mathcal{V}_f$, which are all the combinations of function parameters using $O$. Each state in $\mathcal{S}$ is an assignment of values to the state variables.
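The grounding process described in this answer — instantiating functions over the object set to obtain state variables, then forming states as value assignments — can be sketched as follows. The names and arities are illustrative assumptions, not taken from the paper.

```python
# Illustrative grounding sketch: state variables are all instantiations of
# each function over the object set, and each state assigns a value to
# every state variable. Names and arities are assumptions for illustration.
from itertools import product

objects = ["rock_1", "rock_2"]
functions = {"safe": 1}  # function name -> arity (boolean-valued here)

state_variables = [
    f"{f}({', '.join(args)})"
    for f, arity in functions.items()
    for args in product(objects, repeat=arity)
]
# state_variables == ["safe(rock_1)", "safe(rock_2)"]

# Each state is one assignment of values to all state variables:
states = [dict(zip(state_variables, values))
          for values in product([True, False], repeat=len(state_variables))]
# 2 boolean state variables -> 4 states
```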
**References:**
Sanner, Scott. "Relational dynamic influence diagram language (rddl): Language description." Unpublished ms. Australian National University 32 (2010): 27.
Silver, David, and Joel Veness. "Monte-Carlo planning in large POMDPs." Advances in neural information processing systems 23 (2010).
Somani, Adhiraj, et al. "DESPOT: Online POMDP planning with regularization." Advances in neural information processing systems 26 (2013).
Srivastava, Siddharth, et al. First-order open-universe POMDPs: Formulation and algorithms. Technical report, EECS-2013-243, EECS Department, UC Berkeley, 2012.
---
Rebuttal Comment 1.1:
Title: Thank you for your response.
Comment: I thank the authors for their efforts in answering my questions. I urge the authors to add part of the additional evaluation to the main body of the paper. Trusting that this will happen, I will raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support and increasing the score. We are glad our response helped answer your questions. We will add further evaluation to the final version of the main paper, as you suggested. | Summary: This paper introduces a method for partially observable policy optimization under belief-state query (BSQ) policy constraints (or in the author's terminology, preferences). A BSQ preference, as defined by the authors, is a class of policies parameterized by probability thresholds $\theta$ on beliefs about certain propositions. Fixing the thresholds to particular values results in a BSQ policy --- a policy that branches on (conjunctions of) threshold-based predicates about the agent's belief state, taking certain actions if the agent believes with high enough probability that some proposition $\phi$ is true, and going into another branch if the condition is not met.
After introducing these definitions, the paper proves a number of theoretical results, including the result that the expected cost-to-go of a BSQ policy is a piece-wise constant function of the threshold parameters $\theta$ for finite-horizon goal-POMDPs. In particular, the parameter space can be partitioned into a finite number of intervals, within which the cost-to-go is constant. This motivates the development of a BSQ policy optimization algorithm based on partition refinement --- by repeatedly selecting intervals of the parameter space, then trying to refine them by sampling a concrete set of threshold parameters, it is possible to eventually find the optimal (multi-dimensional) interval of threshold values. In experiments, the authors show that this approach leads to better performance than a baseline with randomly sampled threshold values.
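To make this summary concrete, here is a minimal, hypothetical sketch of a BSQ policy in Python; the propositions, actions, and threshold values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a BSQ policy: the chosen action branches on whether
# the agent's belief in a proposition exceeds a tunable threshold.
# All propositions, actions, and thresholds below are illustrative only.

def bsq_policy(belief, theta):
    """belief: dict mapping proposition -> probability; theta: thresholds."""
    if belief["rock_is_safe"] > theta[0]:
        return "sample_rock"
    elif belief["path_is_clear"] > theta[1]:
        return "move_to_goal"
    else:
        return "gather_information"

# Fixing theta yields one concrete BSQ policy; varying theta sweeps the
# parameterized policy class. Since the branch taken changes only when a
# threshold crosses one of finitely many reachable belief values, the
# expected cost-to-go is piecewise constant in theta.
belief = {"rock_is_safe": 0.7, "path_is_clear": 0.2}
assert bsq_policy(belief, (0.6, 0.5)) == "sample_rock"
assert bsq_policy(belief, (0.8, 0.5)) == "gather_information"
```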
Strengths: BSQ policies haven't received attention for some time, so it's good to see a paper developing the theory and practice of BSQ policy evaluation and optimization. My concerns about the framing and motivation around "preferences" aside (see Weaknesses), the paper is technically sound, and elucidates the structure of the BSQ policy value function in a much deeper way than previously studied by Srivastava et al (2012). Given the piecewise constant nature of this value function, the partition-based policy optimization algorithm is a natural but principled approach. The proofs that I checked appear to be correct, and the experiments indicate that convergence to the optimal policy is indeed achieved. Insofar as BSQ policies provide a tractable and interpretable alternative to more direct approaches to solving POMDPs (though to me this remains to be seen), this paper is a step towards improving their practical applicability, and is likely of interest to others working on POMDPs.
**References:**
Srivastava, S., Cheng, X., Russell, S., & Pfeffer, A. (2012). First-order open-universe POMDPs: Formulation and algorithms. Technical report, EECS-2013-243, EECS Department, UC Berkeley.
Weaknesses: My concerns about this paper center around two main issues (i) motivation and framing around "preferences" (ii) the strength of the evaluation.
**Motivation and Framing**
Building upon the concept of BSQ policies introduced in Srivastava et al (2012), this paper introduces the concept of "BSQ preferences", which are basically BSQ policies without fixed probability thresholds $\theta$. In other words, a "BSQ preference" is just a parameterized family of BSQ policies -- a *policy class* -- which can alternatively be interpreted as a *policy constraint* (Hasanbeig et al, 2018), a *policy sketch* (Andreas et al, 2017; Verma et al, 2018) or a *partial (policy) program* (Andre & Russell, 2002; Hahn et al, 2022) where branch conditions take the form of belief-state queries.
Given these other interpretations of what a "BSQ preference" is -- which, in my opinion, are much more natural -- it is strange to me that the paper adopts the terminology of "preferences" instead. This leads to a motivation in the Introduction that feels quite at odds with the actual technical content: While it's undoubtedly the case that reward engineering is difficult, especially in partially observable settings, it's far from obvious that specifying BSQ policy constraints actually address these specification challenges. Indeed, the paper does not study whether humans find it easier to specify BSQ policy constraints rather than (a) rewards or (b) the more common notion of preferences as pairwise comparisons (used in e.g. RLHF).
I understand that the authors are building upon the literature of "planning with preferences" in the way they use the term "preferences", but IMO that choice of terminology was always unfortunate*, and people should really have called stuff like LTL specifications "constraints" rather than "preferences". This is especially important if the authors are hoping to learn BSQ policies from human preference data in the future (as they suggest in the Discussion) --- it will be very confusing for the term "preference" to both refer to a class of (BSQ) policies, and also the more standard notion of a pairwise comparison between trajectories / choices.
At the very least, I think the authors should discuss the alternative interpretations of a "BSQ preference" that I have listed above -- and cite the relevant literature -- though I think it would be ideal to just move away from the "preference" terminology altogether.
(*Note that even in the "planning with preferences" literature, a preference specification is understood to introduce an ordering relation over plans, in accordance with the standard notion of preferences. This notion of an ordering relation isn't discussed by the definition of a "BSQ preference" in this paper. I would be happier with the "preference" terminology if ordering relations were actually discussed, and if the paper said something about how to handle *multiple* BSQ policy constraints, each with different priorities, since this would introduce a richer ordering relation than the trivial one that prefers all policies that satisfy the constraint.)
**Strength of Evaluation**
While the Partition Refinement Search (PRS) algorithm is well explained and motivated, the evaluation is unfortunately quite limited, only comparing PRS variants to a baseline where the probability thresholds are sampled at random, with no policy optimization at all. I think minimally, there should be some kind of grid-search baseline similar to the one in Srivastava et al (2012), which more clearly shows the benefits of a search strategy based on partition refinement, compared to one which enumerates over the full space of thresholds until the time limit is up. I think there should also be a random search baseline --- i.e. one which repeatedly samples random threshold values, evaluates the resulting policies, and returns the best threshold combination so far. An even stronger baseline would be some kind of coordinate-ascent or Gibbs sampling algorithm, which optimizes one threshold at a time (via gridding or sampling) while keeping the other thresholds fixed.
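A random-search baseline of the kind suggested here is only a few lines; the sketch below is illustrative, with `evaluate_policy` standing in for the (noisy) expected-cost estimate obtained from POMDP rollouts, and the toy objective is an assumption for demonstration.

```python
# Illustrative random-search baseline for BSQ thresholds: repeatedly sample
# threshold vectors, evaluate the resulting policy, and keep the best.
import random

def random_search(evaluate_policy, dim, budget, seed=0):
    rng = random.Random(seed)
    best_theta, best_cost = None, float("inf")
    for _ in range(budget):
        theta = tuple(rng.random() for _ in range(dim))
        cost = evaluate_policy(theta)  # stand-in for a noisy rollout estimate
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta, best_cost

# Toy deterministic objective with a minimum near theta = (0.6, 0.5):
toy_cost = lambda th: abs(th[0] - 0.6) + abs(th[1] - 0.5)
theta, cost = random_search(toy_cost, dim=2, budget=1000)
```

Unlike partition refinement, this baseline ignores the piecewise constant structure of the cost function, so its sample efficiency degrades quickly when evaluations are noisy.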
The above suggestions only pertain to policy optimization under the assumption that we know for sure we want to satisfy the constraint induced by the BSQ policy class. But if the authors really want to show the benefits of BSQ policies as either (i) a way of specifying user constraints or (ii) a way of avoiding downsides of unconstrained policy optimization in POMDPs, then I think more experiments are in order.
For example, to show that BSQ policy constraints might be a more concise way of communicating user requirements than alternatives, it would be great to see a comparison with reward engineering, or with fitting and then optimizing a reward model learned from pairwise trajectory preferences (as in RLHF). I think this would be especially important if the authors want to keep the "preference" oriented framing of the paper.
And to show that BSQ policies have practical engineering value over direct POMDP solution approaches like POMCP or PBVI, it would be good to find some way of comparing the solution quality of PRS vs. those other approaches --- for example, it might be the case that BSQ policy optimization actually produces better outcomes in less solution time because it allows users/developers to constrain the space of policies (i.e. provide "policy guidance"), similar to the benefits of HTN planning over classical planning.
**References:**
Srivastava, S., Cheng, X., Russell, S., & Pfeffer, A. (2012). First-order open-universe POMDPs: Formulation and algorithms. Technical report, EECS-2013-243, EECS Department, UC Berkeley.
Hasanbeig, M., Abate, A., & Kroening, D. (2018). Logically-constrained reinforcement learning. arXiv preprint arXiv:1801.08099.
Andreas, J., Klein, D., & Levine, S. (2017, July). Modular multitask reinforcement learning with policy sketches. In International conference on machine learning (pp. 166-175). PMLR.
Verma, A., Murali, V., Singh, R., Kohli, P., & Chaudhuri, S. (2018, July). Programmatically interpretable reinforcement learning. In International Conference on Machine Learning (pp. 5045-5054). PMLR.
Andre, D., & Russell, S. J. (2002, July). State abstraction for programmable reinforcement learning agents. In AAAI/IAAI (pp. 119-125).
Hahn, E. M., Perez, M., Schewe, S., Somenzi, F., Trivedi, A., & Wojtczak, D. (2022). Recursive reinforcement learning. Advances in Neural Information Processing Systems, 35, 35519-35532.
Technical Quality: 3
Clarity: 2
Questions for Authors: POST-REBUTTAL: I have adjusted my score to a 6 given the additional baseline comparisons in the authors' rebuttal, which show that PRS outperforms less naive parameter search methods, as well as unconstrained policy optimization methods like POMCP and DESPOT.
===
In light of the issues with framing and evaluation, I am currently recommending a paper score of 4. If only the framing issues existed, I would give a score of 5. But unfortunately the combination of issues leads me to think this paper isn't ready for publication in NeurIPS yet, even though the theory portion is interesting and sound.
To address these issues, here are my recommendations / questions:
- Consider renaming "BSQ preference" to "BSQ policy class/sketch/constraint". Minimally, please discuss these alternative interpretations of a "BSQ preference".
- If the "BSQ preference" terminology and framing is kept, there should be more rhetorical and empirical justification for why BSQ policies are a useful way of specifying user preferences, and more discussion of the notion of preferences as ordering relations.
- Why is PRS not compared against stronger policy optimization baselines, e.g. grid search, random search, or coordinate ascent?
- Consider comparing PRS against unconstrained policy optimization to more clearly show the benefits of BSQ policy constraints as a form of policy guidance.
- If the "BSQ preference" framing is kept, consider comparing user specified BSQ preferences against user specified reward functions or reward models fitted to pairwise comparison data.
Minor Comments:
- In the description of PRS, the terms "partition" and "optimal partition" seem to be used in place of "interval" / "optimal interval", which led to some confusion when I was reading that section. I think the authors should try to keep to the standard notion of "partition" as a way of splitting a set into mutually exclusive and exhaustive subsets, and use "interval" to refer to one of those (contiguous) subsets.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: A number of important limitations are discussed in Appendix H. There are other limitations that I've pointed out above that should be either addressed or discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and for appreciating the strength of technical contributions in the paper. We agree that the BSQ framework deserves a much-needed deeper analysis, and the framework developed here takes key steps towards it. We believe that presenting this new analytical paradigm and the surprising result about piecewise-constant parameter spaces to a broad community such as NeurIPS will enable faster theoretical and algorithmic development in this direction.
**Q1 and Q2:** We agree with you on both points: (a) the notion of preferences used in this paper has been established in prior literature in AI, and (b) this existing choice of terminology is rather unfortunate for the reasons you elaborate. We used it to be consistent in situating our work and acknowledging past scholarship.
Partial programs [Andreas et al., 2017], Alisp programs [Marthi et al., 2005], and this work represent different evolutions of the idea of using partial specifications, and all these directions draw upon hierarchical abstract machines [Parr and Russell, 1997], albeit in fully observable settings. In fact, BSQ templates can express finite-state machine based partial specifications as well. They also specify an implicit ordering induced by the if-then-else structure. Considering this broader context, we will use the term BSQ guidelines instead of BSQ preferences to better reflect this nature.
We respectfully maintain that terminological choices aside, this work presents a novel technical contribution whose dissemination at this stage will greatly help advance research on the topic.
**Q3:** We extensively evaluated for competitive baselines, including methods such as grid-search on preliminary simpler toy problems. PRS is significantly faster than grid-search making it challenging to compare the two on the same time scale. For example, constructing the non-convex example of Spaceship Repair shown in Figure 1(c) is equivalent to doing a brute-force search, and it took 159.65 minutes to evaluate all parameter combinations with 0.01 precision. In comparison, PRS converged on a longer horizon version of Spaceship Repair within 4 minutes (Figure 4). Furthermore, to match the performance of PRS would require significantly higher precision (e.g., PRS consistently found the solution $0.600000024 < \Theta_1 ≤ 0.600000143$ and $0.49999994 < \Theta_2 ≤ 0.50000006$ for Spaceship Repair).
During our evaluation, we observed that estimating the expected cost is challenging due to the high variability in sampling noise, necessitating numerous samples per point, which is computationally expensive. For instance, evaluating a random point in the Lane Merger problem with 200 samples takes, on average, 22.07 seconds. The average coefficient of variation for the expected cost for this point is 135.88%, suggesting that even more samples are required.
To address this, we also tried Nelder Mead and particle swarm optimization, which are significantly more advanced than vanilla parameter search. PRS outperformed them both, with the better of the two, Nelder Mead, having an expected cost coefficient of variation of 78.8% on a toy example, 11 times higher than PRS.
These experiments and analyses are not in the paper, but we will add them to the supplementary for the final version.
**Q4:** We evaluated unconstrained policy optimization using both POMCP [Silver and Veness, 2010] and DESPOT [Somani et al., 2013]. POMCP failed to converge. DESPOT could not run on Lane Merger and only reached the goal 0.5% of the time in Graph Rock Sample. This supports your hypothesis that BSQ templates can help guide the solver. On the other two problems, DESPOT achieved lower expected costs (20.0% and 13.3% for Store Visit and Spaceship Repair, respectively). However, we evaluated the trajectories produced by DESPOT and only 7.3% and 0% of the time do they align with the user’s intentions in Spaceship Repair and Store Visit, respectively. We will add these experiments and analyses to the supplementary for the final version.
**Q5:** This paper focuses on developing a formal framework and key technical results, illuminating the nature of the problem and solution directions for further research in an area that has been rather understudied. Human subject studies constitute a valuable direction for future work beyond the scope of this current paper.
**Partition Terminology:** The PRS algorithm uses partition terminology to emphasize that the set of partitions $X$ is always mutually exclusive and covers the parameter space. Additionally, a partition can represent multiple disjoint intervals due to the disjunction or the negation of a conjunction of BSQs.
**References:**
Andreas, Jacob, Dan Klein, and Sergey Levine. "Modular multitask reinforcement learning with policy sketches." International conference on machine learning. PMLR, 2017.
Marthi, Bhaskara, Stuart Russell, and David Latham. "Writing Stratagus-playing agents in concurrent ALisp." Reasoning, Representation, and Learning in Computer Games (2005): 67.
Parr, Ronald, and Stuart Russell. "Reinforcement learning with hierarchies of machines." Advances in neural information processing systems 10 (1997).
Silver, David, and Joel Veness. "Monte-Carlo planning in large POMDPs." Advances in neural information processing systems 23 (2010).
Somani, Adhiraj, et al. "DESPOT: Online POMDP planning with regularization." Advances in neural information processing systems 26 (2013).
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: Thank you to the authors for their response, and for providing additional results showing that PRS is significantly faster than grid search, Nelder Mead, and particle swarm optimization, while also outcompeting unconstrained policy optimization in a number of cases. It would be ideal for these results to be expanded upon (i.e. don't just perform Nelder Mead / particle swarm optimization on a toy example, run it on all domains), and for them to be included in not just the Appendix but in the main paper (including the POMCP and DESPOT results). Renaming "BSQ preferences" to "BSQ guidelines/templates" is also appreciated, and I think it would be good to reframe the motivation accordingly (perhaps by highlighting how BSQ guidelines can provide solution guidance that render a POMDP solvable even when unconstrained methods would fail to solve the POMDP). If the authors can make all of these changes for the final paper, then I am happy to increase my score to a 6. In the meantime, I am increasing my score to a 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for improving the score. We are happy to hear that our response addressed your questions. We are currently evaluating Nelder-Mead and particle swarm optimization on our four test problems, as you suggested. The current results show that solutions found with Nelder-Mead for Spaceship Repair, Store Visit, and Graph Rock Sample have a higher expected cost, lower goal achievement rate, and significantly higher standard deviation compared to PRS, as expected. We will include these results, with the POMCP and DESPOT results, in the final version of the main paper using an extra page. We will also switch the paper over to guidelines-based terminology rather than preferences and adjust our work's motivation appropriately. | Summary: The paper presents a new way of encoding preferences using so-called belief state queries in goal-oriented POMDPs. The modelling allows for optimising agent behaviour while complying with those preferences. In a formal analysis, the paper shows that the expected value function, although non-convex (abstract) / non-concave (main body), is piecewise constant, enabling a finite partitioning of the parameter search space. The experiments show that the theoretical potential is realisable in an implementation for the given examples.
Strengths: - The paper tackles an interesting problem by looking at goal orientation and preferences in combination.
- The framework appears sound and well-rounded.
Weaknesses: - The presentation could be improved. There are some (minor) details that need fixing for a final version, should the paper be accepted:
* Please use abbreviations consistently. Introduce an abbreviation once when the term occurs for the first time in the main text and then use the abbreviation only (exception: headings). At least one abbreviation is introduced several times (BSQ, PRS), at least one is not introduced upon first occurrence of the term (gPOMDP), one is not introduced at all in the main body (POMDP).
* It usually is a sign of a weak text structure if sections have single subsections as seen with 5.1 and 6.1 (as also argued in most style guides). In the cases here, the first part of the sections deserve their own subsection headings. If the space does not permit this, consider making 5.1 and 6.1 part of the section, maybe using \paragraph to further structure the overall section.
* I recognise that space is scarce but the figures are very small, especially the experiments figures are hard to read. Please make sure that a colour-blind person can also read the figures.
* It could be more explicitly stated which parts of the formalisation is new and which part is already existing. The way it is formulated right now, it could be read as gPOMDPs and BSQ being existing concepts used for the first time for preference encoding or that both are new concepts.
* Again a question of available space but wrapping an algorithm in text is highly uncommon in academic AI papers.
(Not sure where this needs to go in the review form; minor comments:
- For the sake of consistency: align convexity / concavity between the abstract and the main body of the paper
- In Def. 1, the $F$ in line 104 is not defined. Should it be $\mathcal{F}$?
- p. 7, line 276: missing space between (PRS)(Algo. 1) // abbreviate Algorithm by Alg. to save the space of the 'o'?
)
Technical Quality: 4
Clarity: 2
Questions for Authors: -
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The limitations are discussed in the appendix. It would be great to have at least some discussion / acknowledgement of it in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and suggestions. We will incorporate these changes in the final version of the paper. We also plan to use the extra page to update the figures and add a portion of the limitations into the conclusion section.
Our work is the first to formally define a BSQ framework for partially observable environments and to conduct a formal analysis of BSQs. We are the first to prove the surprising result that the expected cost over the parameter space for these preferences is piecewise constant and non-convex. Leveraging these properties, we developed a probabilistically complete algorithm for computing policies that match user intentions while balancing goal achievement rate and average goal achievement time.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the response. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your review and for providing valuable feedback. We appreciate your support for our work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed reviews and comments. We answer the questions posed by the reviewers separately. Please find them in the response below the reviews. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders | Accept (spotlight) | Summary: This paper makes a novel observation (at least to me) that for the mask-based autoencoder paradigm for point cloud self-supervised pretraining, the centers of patches are important and the reconstruction objective does not necessarily rely on representations from the encoder. This differs from the 2-D case of mask-based autoencoding for images. The authors then introduce a simple yet strong scheme that directly learns to Predict Centers for Point Masked AutoEncoders, preventing the encoder from failing to learn semantic information. This approach is more efficient than baselines and achieves state-of-the-art performance on public datasets.
Strengths: 1) The observation is interesting and the motivation behind the approach is clear, though I think the abstract needs improvement to make it more logical, crisp, and convincing.
2) It shows the difference between 2-D and 3-D data for mask-based autoencoding, and the knowledge and evidence shown in this paper is worth publishing (in NeurIPS 2024) at this moment.
3) The devised approach is cost-effective, and shows strong empirical performance.
4) The paper is basically clearly written with comprehensive experiments.
I think the paper gives a very important observation that center-aware objective makes the pre-training trivial. It opens up space for improvement for the community as also reflected by their experimental results. And this is the major reason for me to vote for acceptance.
Weaknesses: The authors may discuss the limitation of their approach (if there was any).
The abstract can be improved, e.g. the numbers in the abstract are vague as there is no specific metric.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could this approach inspire developments in other areas of self-supervised learning?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time, detailed comments, and valuable suggestions. We are delighted that you recognize the clear motivation, high efficiency, and novelty of our PCP-MAE. Here are our responses to the questions you raised:
> Q1. The limitation of our approach.
We have discussed the limitations of our approach in Appendix C of the submitted paper. The limitations of our work can be summarized as follows:
1. PCP-MAE is a single-modal self-supervised method. However, current 3D point cloud datasets are constrained in size due to the challenges associated with collecting point cloud data, which in turn limits the wider applicability of our approach.
2. PCP-MAE capitalizes on generative learning but does not leverage the benefits of contrastive learning. Some other works, such as ReCon [1], show great performance by harmoniously combining generative and contrastive learning.
Future work could focus on developing a multi-modal PCP-MAE or a hybrid model that effectively combines the strengths of both generative and contrastive learning.
> Q2. The abstract can be improved, e.g. the numbers in the abstract are vague as there is no specific metric.
Thank you for your valuable advice. We have enhanced the abstract by adding a specific metric.
Regarding lines 17-20, we have modified the text from
- "Our method is of high pre-training efficiency compared to other alternatives and achieves great improvement over Point-MAE, particularly outperforming it by 5.50%, 6.03%, and 5.17% on three variants of ScanObjectNN."
to
- "Our method is of high pre-training efficiency compared to other alternatives and achieves great improvement over Point-MAE, particularly surpassing it by 5.50% on OBJ-BG, 6.03% on OBJ-ONLY, and 5.17% on PB-T50-RS for 3D object classification on the ScanObjectNN dataset."
If you have any further concerns, feel free to contact us.
> Q3. Could this approach inspire the development in other areas of self-supervised learning?
Yes. Our approach offers insights for the broader field of self-supervised learning. Specifically, it challenges the conventional use of positional encodings in the reconstruction process, highlighting the importance of semantic richness in these encodings in point cloud SSL. While our study focuses on point cloud data in 3D form, the implications extend to other 3D data representations such as meshes and voxels. In these contexts, the usage of positional encodings may need to be reconsidered if they contain case-specific information rather than only case-agnostic information.
Furthermore, our findings illuminate considerations for adapting methods across different domains. For example, when extending the Masked Autoencoder (MAE) approach to 3D from 2D, it is crucial to account for the distinct interpretations of positional encodings in these dimensions. This underscores the need for domain-specific adjustments to ensure the transferability and effectiveness of self-supervised learning techniques.
[1] Qi Z, Dong R, Fan G, et al. Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining[C], ICML 2023. | Summary: The paper proposes a novel self-supervised learning method called PCP-MAE for point cloud understanding. The key innovation of PCP-MAE is that it guides the model to learn to predict the positional embeddings of the centers of masked regions, rather than directly providing the coordinates. This approach encourages the encoder to learn richer semantic representations, leading to performance improvements on downstream tasks such as point cloud classification.
Strengths: 1.Instead of directly providing the coordinates of the centers of masked regions, PCP-MAE guides the model to learn to predict the positional embeddings of these centers. This idea encourages the encoder to learn richer semantic representations.
2.The authors have conducted a comprehensive set of experiments, comparing PCP-MAE against state-of-the-art methods on a variety of point cloud understanding tasks.
Weaknesses: 1. There are significant concerns about the actual effectiveness of the center point prediction method being discussed. Although the authors conducted numerous experiments to demonstrate the superiority of their method, these comparisons with previous methods are unfair. This unfairness is reflected in the following aspects: 1. Although this method has achieved significant improvement, it seems that the main improvement comes from the use of more complex data augmentation tricks during pre-training, without a fair comparison with previous methods when using the same tricks. Due to the lack of detailed code provided by the authors, and based on my personal experience, when using only the center point prediction method with the same settings or tricks, the proposed method does not show significant improvement and may even experience performance degradation. For example, when using the ground-truth center points without center point prediction and with the same settings such as position encoding and data augmentation, I don't think the authors' center point prediction will be better than using the ground-truth method.
2. On line 69, the authors draw the conclusion that 'the reconstructing objective may make the encoder unable to learn semantic features'. I would like to know how the authors came to this conclusion. Can a randomly initialized token with a 100% mask rate be used to reconstruct the approximate shape of a point cloud using only the center point as the position encoding through an encoder? I don't think so. This phenomenon does not mean that the encoder cannot learn semantic features. Secondly, the authors only conducted a qualitative analysis without providing a detailed quantitative comparison. In fact, the CD distance of the point cloud reconstructed by this method is much greater than that of the Point-MAE method, indicating that the quality of the point cloud reconstructed by this method is very poor.
3.The author's improvement is incremental and limited, so I do not think that this paper meets the contribution criteria of NeurIPS. The proposed method is only an improvement on Point MAE, in fact, Point MAE is just one method in the field of point cloud self supervised learning.
4.The author did not demonstrate the generality of the proposed method. Assuming that this central prediction method is effective, it should be applicable to all MAE based point cloud self-supervised methods, such as Point BERT, ACT, Point-M2AE, PointFEMAE, I2P-MAE, Recon, etc. The author lacks further analysis of the generality of the proposed ideas.
5.Insufficient Visualizations and Explanations: The paper lacks some key visualizations and explanations that would aid reader understanding. For example, the pipeline (Fig. 2) does not clearly annotate the "stop gradient" operation, which is a crucial component of the proposed method. Additional visualizations and explanations would help readers better comprehend the technical details of PCP-MAE.
Technical Quality: 3
Clarity: 3
Questions for Authors: Although the authors have demonstrated superior improvement in various experiments, this improvement seems to come more from tricks (such as data augmentation) than from the proposed method itself. Meanwhile, the authors lack a detailed analysis of the generality of the proposed method. Therefore, I reject this article and encourage the authors to conduct a more in-depth, thorough, and fair analysis and comparison.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The author has pointed out some issues in the Limitations section, and I hope the author can make more improvements based on the review comments, so that their work becomes truly solid rather than superficial incremental improvements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. There may be some misunderstandings on your part regarding certain aspects of our work. We hope that our response will address your concerns. Here is our clarification.
> Q1.1. Comparisons with previous methods are unfair. It seems that the main improvement comes from the use of more complex data augmentation tricks during pre-training, without a fair comparison with previous methods when using the same tricks.
Thank you for pointing out the lack of baseline performance using the same augmentation. Firstly, different methods suit different pre-training augmentations. During experimentation, we found our approach works well when using a composition of augmentations, *i.e.*, Scale&Translate+Rotation. This can be attributed to the fact that a composition of augmentations enables a variety of centers, thus allowing our PCP-MAE to learn richer center distribution information and be more robust. While this also slightly benefits the performance of the baseline Point-MAE, **it doesn't benefit the previous SOTA methods such as Point-FEMAE and ReCon.** The performance of Point-MAE under different augmentations is displayed as follows for clarity:
Pre-training Aug.|Fine-tuning Aug.|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-|-
Scale&Translate|Scale&Translate $^*$|90.02|88.29|85.18
Rotation|Rotation $^\dagger$|92.60|91.91|88.42
Scale&Translate+Rotation|Rotation $^\ddagger$|92.94|92.25|88.86
$^*$ Adopted by Point-MAE.
$^\dagger$ Adopted by other SOTA methods include Point-FEMAE and ReCon.
$^\ddagger$ Adopted by our PCP-MAE.
**When aligning the augmentations of Point-FEMAE and ReCon with ours, there are no performance gains, and they even suffer from a performance drop.** This phenomenon is caused by the features of different methods. The performance is shown as follows:
Method|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-FEMAE (Origin)|95.18|93.29|90.22
Point-FEMAE (Our augmentation)|94.32|92.94|89.38
ReCon (Origin)|95.18|93.63|90.63
ReCon (Our augmentation)|94.49|92.77|89.55
**Feel free to validate this through their official codebase.** We apologize for the lack of baseline performance in our paper, which we will definitely add in our next version to ensure fairness and clarity. However, it's also important to reclaim that the composition of augmentations benefits our method due to its features and hampers other SOTA methods, which means the main improvement comes from our approach, not only from the explored augmentations.
> Q1.2. Through your personal experience, you found when using only the center point prediction method with the same settings or tricks, the proposed method does not show significant improvement and may even experience performance degradation.
First, when you try our idea in person, please note there remain some subtle designs, and pay attention not to implement them incorrectly. These include the sin-cos positional embedding followed by an MLP projection, the detailed design of the projector, the shared-weight encoder with self/cross attention, *etc*.
Second, regarding the $\eta$ in line 210, it's important to select an appropriate $\eta$ to improve the performance. Empirically, a small $\eta$ is preferred, which still needs to be tuned carefully. So be careful to choose an appropriate $\eta$ when you try our idea.
Moreover, the sin-cos position embedding doesn't bring performance gain:
Method| OBJ-BG | OBJ-ONLY | PB-T50-RS
-|-|-|-
Point-MAE (Orig.)|92.94|92.25|88.86
Point-MAE (Sin-cos)|92.94| 92.42 | 88.65
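For clarity, the sin-cos center embedding compared above can be sketched as follows. This is a minimal NumPy illustration only, not our exact implementation; the embedding dimension (384) is an assumed example value, and the subsequent MLP projection mentioned earlier is omitted:

```python
import numpy as np

def sincos_center_pe(centers, dim=384):
    """Transformer-style sin-cos embedding of 3D patch centers, applied per axis.

    centers: (G, 3) array of patch center coordinates.
    Returns a (G, dim) embedding; in the full pipeline this would then be
    passed through a small MLP projection (omitted here).
    """
    assert dim % 6 == 0                            # 3 axes x (sin, cos) pairs
    d = dim // 6                                   # frequencies per axis per sin/cos
    freq = 1.0 / (10000.0 ** (np.arange(d) / d))   # (d,) geometric frequency ladder
    angles = centers[..., None] * freq             # (G, 3, d) via broadcasting
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (G, 3, 2d)
    return emb.reshape(centers.shape[0], -1)       # (G, 6d) == (G, dim)
```

The design choice worth noting is that the embedding is a fixed, smooth function of the raw coordinates, which is why swapping learnable embeddings for sin-cos ones alone does not change the information available to the decoder.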
The pre-training augmentation aligns well with our approach and may hinder other methods. This should be seen not as a trick but as a feature of our PCP-MAE. For more details, please refer to our response to Q1.1
**Regarding the effectiveness of our approach, the ablation results have confirmed its effectiveness when all other experimental setups are aligned:**
Reconstruction|w/ PCM|Using $PE_m^{\text{pred}}$|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-|-|-
✓|✗|✗|92.94|92.42|88.65
✓|✓|✗|94.32|93.11|89.38
✓|✓|✓|**95.52**|**94.32**|**90.35**
**We have provided our code and the corresponding checkpoints via an anonymous link to the AC.** Hope this will help you to better comprehend our method.
> Q2.1. Can a randomly initialized token with a 100% mask rate be used to reconstruct the approximate shape of a point cloud using only the center point as the position encoding through an encoder?
Yes. When only given positional embeddings for a masked patch and a global, randomly initialized **learnable** mask token, the point cloud can be reconstructed only using the decoder after pre-training. The visualization can be seen in Fig. 1. Furthermore, we found that this phenomenon also holds on a more challenging dataset, ScanObjectNN.
> Q2.2. This phenomenon does not mean that the encoder cannot learn semantic features.
This phenomenon indicates that the decoder does not necessarily rely on the representations of the encoder for reconstruction, which **may** make the encoder unable to learn semantic representations. Our PCP-MAE guides the encoder to learn to predict semantically rich centers to force the learning of richer representations, thus improving performance.
> Q2.3. The quantitative reconstruction analysis between Point-MAE and Mask 100%.
We use the Reconstruction Chamfer Distance L2 loss to evaluate the quality of reconstruction. The losses for them are shown as follows:
Methods|Epoch 1 Loss|Last Epoch Loss
-|-|-
Point-MAE|0.12921|0.00270
Mask 100%|0.15823|0.00324
The Mask 100% exhibits a slightly higher loss than that of the original Point-MAE, but continues to decline and can reconstruct the point cloud after pre-training. Though the encoder slightly benefits point cloud reconstruction, the reconstruction can still be achieved without it. This phenomenon aligns with our core idea that the decoder does not necessarily rely on the representations of the encoder.
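For readers unfamiliar with the metric used above, a minimal single-cloud NumPy sketch of the squared (L2) Chamfer Distance is given below; this is an illustrative variant, and the batched, scaled version used in actual training losses may differ in detail:

```python
import numpy as np

def chamfer_l2(p1, p2):
    """Symmetric Chamfer Distance (L2) between two point sets.

    p1: (N, 3) array, p2: (M, 3) array.
    Sums the mean squared nearest-neighbor distance in both directions.
    """
    # (N, M) matrix of pairwise squared Euclidean distances via broadcasting
    d = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
    # for each point, the squared distance to its nearest neighbor in the other set
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For example, two identical clouds give a distance of 0, and two single points one unit apart give 2.0 (1.0 in each direction).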
**Please check the official comment for the remaining questions.**
---
Rebuttal Comment 1.1:
Title: Quick Response to author
Comment: I believe you could provide me with your anonymous checkpoints and source code. I hope to verify the effectiveness of your method through a detailed analysis of your code.
---
Rebuttal 2:
Title: Continued rebuttal
Comment: > Q3.1. The author's improvement is incremental and limited.
We have demonstrated the effectiveness and superior performance of our method through ablations and experiments. We hope the rebuttal can help you better understand our method.
> Q3.2. The proposed method is only an improvement on Point-MAE, in fact, Point MAE is just one method in the field of point cloud self supervised learning.
We acknowledge that Point-MAE represents just one of many approaches within the field of 3D self-supervised learning. However, its contribution cannot be overstated. Point-MAE has served as a foundational model, sparking a wave of innovative research and developments aimed at enhancing its framework, including models such as Point-M2AE, Point-FEMAE, ACT, Joint-MAE, and ReCon, among others. Our proposed method builds upon these insights, originating from a critical observation that seeks to further refine this pivotal technique.
> Q4. The generality of the proposed method. And our approach should be applicable to all MAE based point cloud self-supervised methods.
Thank you for proposing a good question that inspires us to explore the generality of our approach. You mentioned that our method should fit all MAE-based methods. However, we designed our method to improve Point-MAE, not to implement a universal framework. Take Point-FEMAE as an example: it addresses the limited representation issue in Point-MAE, but that doesn't mean it fits all MAE-based methods and can improve their performance undoubtedly.
Intuitively, our method fits all MAE-based methods, but the concrete design should be carefully considered, because different MAE-based methods may optimize towards different directions, even within the same framework. Directly adding PCM (Predicting Center Module) may hamper this consistency.
To explore the generality of our approach, **we incorporated PCM into MAE-based methods including Point-BERT and Point-FEMAE with additional minor changes to maintain architecture consistency.** The experiment results show that incorporating our center point predicting task enhances performance when other experiment setups are aligned:
Method|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-|
Point-BERT|87.43|88.12|83.07
Point-BERT+PCP ($\eta=2.0$)|89.32|89.84|84.77|
Point-FEMAE $^\dagger$|94.32|92.94|89.38
Point-FEMAE+PCP ($\eta=0.1$) $^\dagger$|95.00|93.45|89.83|
$^\dagger$ Reproduced results adopting our augmentation $i.e.$, Scale&Translate+Rotation for pre-training.
Due to time constraints, exploration of various design choices and different $\eta$ was not feasible. We believe that further customization could lead to more substantial improvements.
> Q5. Insufficient Visualizations and Explanations.
Thank you for your valuable advice. We have added the stop gradient to Fig. 2 according to your suggestion. Please check the polished figure in the attached rebuttal PDF. If you have any further concerns, please contact us.
---
Rebuttal 3:
Title: Anonymous source code and checkpoints are available.
Comment: Thank you for your prompt reply and valuable feedback.
As per your suggestion, we have made the comments on the anonymous source code and checkpoints visible to you. You can now access this content at the following link:
https://openreview.net/forum?id=i1xjK5a0X8&noteId=PtuAJIdLVH
If you have any questions or need further clarification while reviewing, please don't hesitate to contact us at your earliest convenience.
---
Rebuttal Comment 3.1:
Title: Results reproduction
Comment: Dear Authors, Reviewers, and AC,
After running the code provided by the authors, I found that the proposed method does indeed show some improvement over the baseline Point-MAE. However, I think the authors have somewhat **overstated their results**. As shown in the following tables, considering that the method does not show improvements over the state of the art and is merely an incremental improvement over Point-MAE, with limited generality and minimal inspiration for other research, I am raising my score to a borderline reject. While I acknowledge that the authors' work has some merit, I consider it an incremental contribution and do not believe it meets the NeurIPS acceptance standards.
Specifically, I reran the authors' code on the classification task of ScanObjectNN using the recommended settings. I conducted a relatively fair comparison between PCP-MAE, the baseline Point-MAE, and the state-of-the-art method Recon. I repeated the experiments ten times on three different variants of ScanObjectNN, using ten identical random seeds for each method. I reported the average and highest values across the ten runs to further eliminate the impact of randomness. The experiments show that, while PCP-MAE indeed offers some improvement over the original Point-MAE, it underperforms compared to the state-of-the-art method Recon in both average and highest values.
Additionally, the authors reported their results in Table 2 as 95.52%, 94.32%, and 90.35%, which differ significantly from the results I reproduced: 94.32%, 93.28%, and 89.94%, respectively. The discrepancies are 1.20%, 1.04%, and 0.41%. Although random seeds can significantly impact the results, a fair comparison using ten identical seeds is sufficient to illustrate the issue. Therefore, I believe the authors have overstated their results.
**Best result of ten experiments.**
| Methods | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| ------------ | ------------ | ------------ | ------------ |
| Point-MAE | 93.12 | 92.77 | 89.04 |
| PCP-MAE | 94.32 | 93.29 | 89.94 |
| Recon | 95.19 | 93.12 | 90.25 |
**Average result of ten experiments.**
| Methods | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| ------------ | ------------ | ------------ | ------------ |
| Point-MAE | 92.67 | 92.31 | 88.59 |
| PCP-MAE | 93.99 | 92.62 | 89.34 |
| Recon | 94.37 | 92.51 | 89.95 |
---
Rebuttal 4:
Title: Response 1
Comment: Dear Reviewer GmbZ,
First, we want to express gratitude for your responsible and thorough review. **We are also pleased to have clarified, to some extent, a previous misunderstanding; specifically, that the improvements observed are attributable to our method, not to a "data augmentation trick."**
Given that there remain some concerns, we would like to address the issues you raised in your comment.
> Q1. The authors have somewhat overstated their results. The authors reported their results in Table 2 as 95.52%, 94.32%, and 90.35%, which differ significantly from the results I reproduced: 94.32%, 93.28%, and 89.94%, respectively. The discrepancies are 1.20%, 1.04%, and 0.41%.
We agree with you that random seeds can significantly impact experimental results. However, **it is important to consider that variations in hardware, such as different machines and GPU types, as well as differing computational environments, also play a crucial role in the performance outcomes.** This is particularly evident in the field of 3D SSL, where such factors are well-known to affect results. That's to say, discrepancies between the machines used for pre-training and those used for fine-tuning can lead to a performance drop. This is exemplified by our observations that directly downloaded checkpoints from Point-FEMAE and ReCon yield results that are consistently lower than the initially reported outcomes:
Methods|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
ReCon (reported)|95.18|93.63|90.63
ReCon (reproduced)|95.00|92.94|90.18
Point-FEMAE (reported)|95.18|93.29|90.22
Point-FEMAE (reproduced)|94.49|92.77|89.73
To address the discrepancies between your reproduced results and the reported results, we suggest the possibility of machine inconsistency between the pre-training machine (ours) and the fine-tuning machine (yours). For a truly fair comparison, we recommend pre-training our model directly rather than using the provided checkpoints, to prevent the performance drop caused by the difference between our machine and yours.
Additionally, it is quite unexpected that you obtained such low performance with our PCP-MAE. For instance, you reported only 94.32% accuracy on OBJ-BG using your setup, whereas we routinely achieve performances exceeding 95% with relative ease. Furthermore, for OBJ-ONLY, we replicate the methodology Point-MAE used to produce their results, which involves running multiple experiments at checkpoints 300, 275, and 250 (refer to the issues discussed on the official Point-MAE GitHub repository). We achieved 94.15% at ckpt-300 and 94.32% at ckpt-275. Both of these checkpoints readily yield performances above 93.45%, which is higher than the results you obtained.
**We provide the logs for three variants of ScanObjectNN (OBJ-BG, OBJ-ONLY, PB-T50-RS), which we obtained in April 2024. We have updated these logs at the anonymous code link https://anonymous.4open.science/r/2128-PCP-MAE-529D for you to check. The timestamps in these logs are continuous, indicating that the experiments were conducted continuously and fairly, without selectively choosing high-performing logs.** Specifically, there are 10 continuous logs for OBJ-BG, 6 for OBJ-ONLY (ckpt-300), 8 for OBJ-ONLY (ckpt-275), and 100 for PB-T50-RS. Please review the logs at the provided code link. The specific accuracies from these logs are detailed below:
Benchmark|1|2|3|4|5|6|7|8|9|10
-|-|-|-|-|-|-|-|-|-|-
OBJ-BG (ckpt-300)|**95.53**|94.14|94.83|94.14|94.32|95.35|94.49|94.83|93.45|94.49
OBJ-ONLY (ckpt-275)|93.29|92.94|93.29|92.60|93.12|**94.32**|93.63|93.63||
OBJ-ONLY (ckpt-300)|93.45|92.77|93.80|93.63|92.59|**94.15**||||
PB-T50-RS (ckpt-300)|89.49|**90.35**|88.86|89.55|89.49|89.83|89.63|89.24|89.83|89.38
The results of these logs can be summarized as:
Best results:
Methods|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-MAE|93.12|92.77|89.04
PCP-MAE|95.53 (ckpt-300)|94.32 (ckpt-275) / 94.15 (ckpt-300)|90.35 (ckpt-300)
Recon|95.19|93.12|90.25
Average results:
Methods|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-MAE|92.67|92.31|88.59
PCP-MAE|94.56 (ckpt-300)|93.35 (ckpt-275) / 93.40 (ckpt-300)|89.57 (ckpt-300)
Recon|94.37|92.51|89.95
**The results show that we do not overstate our results.**
We also understand that you directly ran the ReCon pre-training checkpoints on your machine, which may have also introduced architectural inconsistencies, leading to a performance drop, albeit less than ours. Therefore, comparing your results directly with ours might not provide a fair assessment. However, due to the more significant machine and GPU inconsistencies encountered when you use the checkpoints trained on our machine, we recommend that you run the pre-training PCP-MAE yourself and then fine-tune to conduct an entirely fair comparison.
**Finally, we wish to affirm that we are fully responsible for the logs (results) provided and guarantee their reproducibility.** We will definitely release the corresponding fine-tuning checkpoints for each benchmark.
---
Rebuttal 5:
Title: Response 2 (Response 1 continued.)
Comment: > Q2. Considering that the method does not show improvements over the state-of-the-art (ReCon) and is merely an incremental improvement over Point-MAE.
We previously addressed this question in our rebuttal. Initially, our method was motivated by the observation that point cloud reconstruction does not necessarily rely on the encoder's representation. Hence, we introduced the Predicting Center Module (PCM) to guide the encoder to predict masked centers effectively.
Our method enhances Point-MAE with reasonable additional computational needs and achieves state-of-the-art (SOTA) performance in some benchmarks. **Although our performance improvements over previous SOTA methods like ReCon are marginal, it is crucial to highlight the distinct advantages of our approach:**
1. **Independence from Pre-trained Models:** Unlike ReCon, which heavily relies on pre-trained models such as the text encoder from CLIP and ViT-B pre-trained on ImageNet, our PCP-MAE is simpler and more concise, as it does not depend on external pre-trained models. This simplicity facilitates easier implementation in real-world scenarios.
2. **Single-modal Data Utilization:** The pre-training of ReCon not only requires point cloud data but also paired images and text, whereas our PCP-MAE exclusively utilizes single-modal data, i.e., point cloud. The need for paired data significantly hampers the scalability of the ReCon model, as larger models require larger amounts of data according to the scaling law. To build datasets larger than ShapeNet, it is necessary to collect extensive amounts of unlabeled point cloud data, render paired images, and manually annotate corresponding text descriptors. The rendering process is time-consuming, and annotation is labor-intensive, contradicting the initial intentions of self-supervised learning. Currently, ReCon uses the labels from point cloud samples as text inputs in the ShapeNet pre-training dataset, which somewhat violates the principles of self-supervised learning. **In contrast, our PCP-MAE can scale easily using only unlabeled point cloud data.**
3. **Enhanced Efficiency:** ReCon introduces greater complexity with more trainable parameters, whereas our PCP-MAE significantly improves efficiency. We've detailed these comparisons in the rebuttal. Here comes the comparison again for clarity:
Method|Pre-training Params (M)|Pre-training Time (s/epoch)|Fine-tuning Params (M)|OBJ-BG (s/epoch)|OBJ-ONLY (s/epoch)|PB-T50-RS (s/epoch)|ModelNet40 (s/epoch)
-|-|-|-|-|-|-|-
PCP-MAE (ours)|29.5 (1.00x)|120 (1.00x)|22.1 (1.00x)|10 (1.00x)|10 (1.00x)|49 (1.00x)|30 (1.00x)
ReCon|140.9 (4.78x)|452 (3.77x)|43.6 (1.97x)|13 (1.30x)|13 (1.30x)|60 (1.22x)|42 (1.40x)
Though the improvements of our PCP-MAE over the cross-modal method ReCon are not statistically significant, our method notably excels in implementation feasibility and training efficiency.
> Q3. Our PCP-MAE is with limited generality.
We demonstrate the generality of our method through demo experiments, including incorporating the proposed PCM into Point-BERT and Point-FEMAE. Without extensive exploration of architecture and the hyperparameter $\eta$, we still achieve improvement over the original methods:
Methods|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-|
Point-BERT|87.43|88.12|83.07
Point-BERT+PCP ($\eta=2.0$)|89.32|89.84|84.77
Point-FEMAE $^\dagger$|94.32|92.94|89.38
Point-FEMAE+PCP ($\eta=0.1$) $^\dagger$|95.00|93.45|89.83
$^\dagger$ Reproduced results adopting our augmentation $i.e.$, Scale&Translate+Rotation for pre-training.
**We believe that simply by varying $\eta$, we can achieve further improvements, and additional exploration of the architecture could lead to even higher performance. This also indicates that there is substantial room for further exploration, contradicting the notion of limited generality as you suggested.** We leave the development of a universal PCP incorporation framework to future work.
---
Rebuttal 6:
Title: Response 3 (Response 1 continued.)
Comment: > Q4. Our PCP-MAE is with minimal inspiration.
We strongly believe our PCP-MAE is well-motivated and novel. And our approach aligns closely with the found motivation. **This perspective is shared not only by us but also by four other reviewers (Reviewer MoXR, nJYR, DT8p, rD3v) who agree that our method is innovative and sheds new light on the differences in positional encodings between 2D MAE and Point-MAE.** It opens up new directions for improvement in the field of point cloud self-supervised learning.
To the best of our knowledge, prior to our work, no one had observed that the decoder in 3D SSL does not necessarily rely on the encoder's representation, which differs significantly from the 2D counterpart. This led us to hypothesize that the distinction arises from the differing roles of positional encodings in 3D versus 2D contexts, prompting us to develop a straightforward yet effective approach, PCP-MAE.
Point-MAE has been a foundational model in the 3D SSL landscape, with most leading methods such as ACT, ReCon, I2P-MAE, and Point-FEMAE building upon it. Enhancing this influential model carries significant importance.
Our method not only highlights a potential drawback in existing MAE-based frameworks but also inspires the development of other 3D data representations, such as meshes and voxels. In these contexts, the application of positional encodings might require reevaluation, particularly if they contain context-specific information, as opposed to merely context-agnostic details.
> Q5. The words that the authors want to say.
Setting aside the fact that our method improves upon Point-MAE with affordable additional computational demands, achieves state-of-the-art results on some benchmarks, and offers enhanced training efficiency compared to other SOTA methods, **it is crucial to recognize that in high-quality research, performance is not the sole criterion. The core idea and its contribution to advancing thought in the field merit greater attention.** A valuable research work should be evaluated not only by its performance metrics and quantitative outcomes but also by its potential to inspire deeper understanding and innovation within the relevant community.
Thus, while you might focus on the marginal numerical improvements our PCP-MAE method offers over the much more complex cross-modal method, ReCon, we encourage you to also consider the deeper insights it provides. Exploring the true significance and advantages of our method will reveal the broader value of our work.
Kind regards, The Authors
---
Rebuttal 7:
Title: Look forward to further reply.
Comment: Dear Reviewer GmbZ:
We would like to express our sincere gratitude for your thorough and insightful review comments. Your feedback has been invaluable in helping us refine and improve our paper significantly.
In our most recent discussion, we have provided the **training logs** for our model. These logs offer a transparent and verifiable record of our method's performance, allowing you to easily confirm that our approach indeed achieves the results reported in the paper. This step towards greater reproducibility is part of our commitment to scientific rigor and openness.
We are particularly interested in understanding whether these newly provided training logs have influenced your perspective on our work.
We value your expertise and judgment, and we are eager to hear your thoughts on this new information. If you have any remaining questions or areas where you feel further clarification is needed, please don't hesitate to let us know.
---
Rebuttal 8:
Comment: Dear All,
Based on the authors' recommendation, I retrained PCP-MAE from scratch and conducted experiments on three variants of ScanObjectNN using ten different seeds for each. The experimental results are as follows:
**Best result of ten experiments.**
| Methods | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| ------------ | ------------ | ------------ | ------------ |
| Point-MAE | 93.12 | 92.77 | 89.04 |
| PCP-MAE (Reproduce) | 95.01 | 93.29 | 89.76 |
| PCP-MAE (Author) | 95.53 | 94.32 | 90.35 |
| ReCon | 95.19 | 93.12 | 90.25 |
**Average result of ten experiments.**
| Methods | OBJ-BG | OBJ-ONLY | PB-T50-RS |
| ------------ | ------------ | ------------ | ------------ |
| Point-MAE | 92.67 | 92.31 | 88.59 |
| PCP-MAE (Reproduce) | 94.46 | 92.65 | 89.37 |
| PCP-MAE (Author) | 94.56 | 93.35 | 89.57 |
| ReCon | 94.37 | 92.51 | 89.95 |
I only provided the latest reproduction results. My recommendation is borderline (neither accept nor reject), and I fully respect the final decision of the AC. | Summary: The paper studies point cloud pretraining under the self-supervised learning paradigm. The authors experimentally found that the position embedding in the decoder may decrease the learning ability of the encoder and propose a new method to overcome this issue.
Strengths: 1. The paper is clearly written and easy to follow.
2. The phenomenon revealed in this paper is new to me and sounds reasonable.
3. The proposed method seems reasonable and aligns with the core motivation of the paper.
Weaknesses: 1. In line 30, the authors mention the difficulty of collecting 3D datasets and want to use SSL to solve this problem. However, 3D data is different from 2D images, which can be collected easily on the internet. For the experimented datasets, such as scanning datasets, labels can be collected during scanning, so why is SSL for 3D important?
2. According to table 1, Point-FEMAE achieves almost the same performance as the proposed model, thus it is hard to say that the reconstructing objective may make the encoder unable to learn semantic features in Line 70.
3. The improvements of the proposed method over Point-FEMAE are marginal.
4. One possible reason behind the phenomenon of the position embedding may be the simplicity of the dataset. Would the phenomenon also hold for lidar datasets?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for providing your detailed feedback. Below, we answer your questions in detail:
> Q1. The importance of SSL for 3D.
Upon revisiting our statement, we find it more appropriate to remove "collect" in line 27 of the original paper, which we will update in the next version. Here are clarifications regarding your question.
We agree with you that collecting 3D data is easy through scanning. However, to our knowledge, most labeled 3D datasets, such as ScanNet [1] and NuScenes [2], which contain vast amounts of 3D data, require laborious annotation, and labels cannot be directly assigned or collected within them.
Take autonomous driving as an example, a data acquisition vehicle can collect more than 200k frames of point clouds within 8 working hours, but a skilled worker can only annotate 100-200 frames per day [3]. Thus, effectively leveraging unlabeled 3D data becomes a critical issue in practical applications. **Self-supervised learning in 3D can effectively leverage large amounts of unlabeled 3D data by designing specific pre-text tasks for pre-training, benefiting downstream tasks.** Therefore, 3D SSL is of significant importance.
> Q2. According to table 1, Point-FEMAE achieves almost the same performance as the proposed model, thus it is hard to say that the reconstructing objective may make the encoder unable to learn semantic features in Line 70.
Thank you for your insightful question. It's crucial to focus on the word **"may"** in our statement. We have observed that reconstruction does not necessarily depend on the encoder’s representation when positional embeddings (centers) are directly provided, which underscores the significance of these embeddings. Our proposed PCP-MAE model addresses this by guiding the encoder to learn distributions of semantically rich centers, replacing the directly provided centers.
**However, apart from directly guiding the encoder to enhance its understanding of the centers to address the identified position leakage issue, other approaches could indirectly alleviate it.** Take Point-FEMAE as an example. It adds a branch to the pre-training model which prevents the model from learning limited representations. This can improve the model's understanding of the point cloud and indirectly enhance its grasp of center distributions. Consequently, the position leakage issue may be alleviated. That is why we use **may** in our statement. In essence, while the reconstruction objective could impede learning semantic features, this can be alleviated with auxiliary tools.
**Most importantly, our approach is the first to directly address the position leakage issue**. Although Point-FEMAE could also possibly alleviate this issue, it significantly lags behind in efficiency (check the answer to the Q3).
> Q3. Marginal improvement over Point-FEMAE.
Our PCP-MAE and Point-FEMAE are two orthogonal works based on Point-MAE. Point-FEMAE proposes a global branch and a local branch to capture latent semantic features, rather than using just one branch. Our PCP-MAE, on the other hand, guides the model to learn to predict centers based on a motivating observation.
The reported results show that both PCP-MAE and Point-FEMAE significantly improve the baseline performance, with our PCP-MAE marginally outperforming Point-FEMAE.
**However, when it comes to training efficiency, Point-FEMAE significantly lags behind our approach.** Point-FEMAE not only adds an additional branch but also uses extra Local Enhancement Modules for modeling the local point clouds. This results in a significant increase in both parameter and time costs compared to Point-MAE. The comparisons among these three methods are shown as follows:
| Method | Pre-training Params (M) | Pre-training Time (s/epoch) | Fine-tuning Params (M) | OBJ-BG Time (s/epoch) | OBJ-ONLY Time (s/epoch) | PB-T50-RS Time (s/epoch) | ModelNet40 Time (s/epoch) |
|-|-|-|-|-|-|-|-|
| Point-MAE (baseline) | 29.0 | 88 | 22.1 | 10 | 10 | 49 | 29 |
| Point-FEMAE | 41.5 (1.43x) | 326 (3.70x) | 27.4 (1.24x) | 30 (3.00x) | 30 (3.00x) | 148 (3.02x) | 72 (2.48x) |
| PCP-MAE (ours) | 29.5 (**1.01x**) | 120 (**1.36x**) | 22.1 (**1.00x**) | 10 (**1.00x**) | 10 (**1.00x**) | 49 (**1.00x**) | 30 (**1.01x**) |
Point-FEMAE retains the Local Enhancement Modules during fine-tuning, which adds a non-negligible extra computational burden and thus greatly decreases the speed of model fine-tuning.
> Q4. One possible reason behind the phenomenon of the position embedding may be the simplicity of the dataset. Would the phenomenon also hold for lidar datasets?
Thank you for your nice suggestion. In addition to the pre-training ShapeNet dataset, we have experimented with setting the mask ratio to 100% on another dataset, ScanObjectNN [6]. This is a much more challenging dataset, sampled from real-world scans that include background and occlusions, and is built on two popular real scene mesh datasets, SceneNN [7] and ScanNet [1]. **When only using the decoder (mask 100%) for pre-training, our experiments show that the point cloud can still be reconstructed well.**
From a statistical perspective, the Chamfer Distance L2 Loss (Eq. 15) is even lower than that of the simpler ShapeNet dataset. The reconstruction losses are as follows:
|Dataset|Epoch 1 Loss|Epoch Last Loss|
|-|-|-|
|ShapeNet|0.15823|0.00324|
|ScanObjectNN|0.44190|0.00250|
We will add the visualizations in the next version of our paper. Additionally, this phenomenon is likely to hold for lidar datasets, which we will test and update the results in the next version.
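For reference, the Chamfer Distance L2 loss reported above (the paper's Eq. 15) can be sketched as follows; this is a minimal plain-Python version for single point sets, not the batched torch implementation used in practice:

```python
def chamfer_l2(a, b):
    """Symmetric Chamfer Distance with squared-L2 terms: for each point,
    take the squared distance to its nearest neighbour in the other set,
    average in both directions, and sum the two averages."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return a_to_b + b_to_a
```

A loss near zero, as in the table above, means every reconstructed point lies almost exactly on the target point set and vice versa.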
Please check the official comment for the Reference list.
---
Rebuttal 2:
Comment: Note that the pre-training time costs above contradict the statistics in our paper's Table 1, which were taken from Point-FEMAE. It appears Point-FEMAE reported incorrect time-efficiency comparisons, which we correct here and will update in the next version of our main paper.
To ensure a fair time comparison, the code for Point-MAE should be modified slightly in two ways:
1. Add "config.dataset.train.others.whole = True" to the training configuration to align with Point-FEMAE and our method.
2. Instead of using KNN_CUDA, switch to the knn_point function (refer to the official code of ReCon [4] or Point-FEMAE [5]), which directly uses torch operations, to align with Point-FEMAE and our approach. This will significantly increase the training speed.
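For illustration, the idea behind a `knn_point`-style lookup built from plain tensor/array operations (rather than a custom CUDA kernel) can be sketched as below; this single-cloud, pure-Python version is an assumption for clarity, while the official ReCon/Point-FEMAE code operates on batched torch tensors:

```python
def knn_point(k, xyz, query):
    """Return, for each query point, the indices of its k nearest
    neighbours in xyz (brute force, squared-L2 distances)."""
    neighbours = []
    for q in query:
        dists = [(sum((qi - pi) ** 2 for qi, pi in zip(q, p)), i)
                 for i, p in enumerate(xyz)]
        neighbours.append([i for _, i in sorted(dists)[:k]])
    return neighbours
```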
Reference
[1] Dai A, Chang A X, Savva M, et al. Scannet: Richly-annotated 3d reconstructions of indoor scenes[C], CVPR 2017.
[2] Caesar H, Bankiti V, Lang A H, et al. nuscenes: A multimodal dataset for autonomous driving[C], CVPR 2020.
[3] Mao J, Niu M, Jiang C, et al. One million scenes for autonomous driving: Once dataset[J]. arXiv preprint arXiv:2106.11037, 2021.
[4] Qi Z, Dong R, Fan G, et al. Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining[C], ICML 2023.
[5] Zha Y, Ji H, Li J, et al. Towards compact 3d representations via point feature enhancement masked autoencoders[C], AAAI 2024.
[6] Uy M A, Pham Q H, Hua B S, et al. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data[C], ICCV 2019.
[7] Hua B S, Pham Q H, Nguyen D T, et al. Scenenn: A scene meshes dataset with annotations[C]//2016 fourth international conference on 3D vision (3DV). Ieee, 2016: 92-101.
---
Rebuttal 3:
Title: Look forward to your further reply.
Comment: Dear Reviewer DT8p:
Thanks again for the valuable feedback and constructive suggestions. As the discussion phase is coming to an end, we have provided our understanding and perspective on the significance of self-supervised learning for point clouds in our rebuttal, along with new experimental results. We would like to know if our rebuttal has addressed your concerns, and if you have any further questions that need clarification from us. | Summary: This work examines representation learning of 3D point clouds using masked autoencoding. A known issue with such an approach is that the coordinates of the patch centers leak significant information about the geometry and semantics of the shape being reconstructed, which degrades the representations learned by these approaches. This work proposes to add an extra criterion, whereby the model predicts the positions of the masked patch centers. Pretraining is performed on ShapeNet. This simple change leads to significant improvements over Point-MAE when finetuning on ScanObjectNN and ModelNet40 for 3D classification, 3d scene segmentation, and object part segmentation.
Strengths: **Originality**
* The proposed point center prediction objective is novel to the best of my knowledge, and requires several subtle innovations on the implementation side to yield improved performance (e.g., choice of sin-cos positional embeddings before the MLP projection module, re-using predicted positions in the reconstruction objective with a stop-gradient operation, and the shared network architecture with cross-attention for point center prediction). Ablation experiments demonstrate the importance of each of these factors.
**Clarity**
* The work is reasonably clear, enough that the reader can understand basic ideas, motivation, and details of the proposed method. Releasing code with the paper will greatly improve reproducibility and impact.
**Quality**
* The work is of a reasonable quality; experiments and numerical results on standard benchmarks and standard evaluation setups.
**Significance**
* This work contributes to a large body of work on MAE-based representation learning for 3D point clouds, and provides an interesting and simple approach for addressing a known limitation of these approaches. Improvements over Point-MAE, the most similar baseline, are notable.
Weaknesses: * Improvements over previous MAE-based methods, such as Point-FEMAE, are quite marginal, and perhaps not statistically significant. While the contributions of other MAE related methods are orthogonal and could be potentially combined with the proposed method, it is unclear if the issue of 3D center point position leakage is still significant when combined with other tools.
* Numerical results are primarily restricted to end-to-end fine-tuning. Would be curious to see the results of linear or frozen evaluations to assess the quality of the representations, as finetuning from scratch (i.e., random representations) can already achieve much of the topline performance in many cases.
Technical Quality: 3
Clarity: 2
Questions for Authors: * If the authors have the capacity to conduct such an experiment, I am primarily curious to see the effect of the proposed center point prediction task with stronger baselines which provide advances orthogonal to the contributions in this work.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time, thorough comments, and nice suggestions. Your endorsement of our method and experiments gives us significant encouragement. Here are our clarifications.
> Q1.1. Improvements over previous MAE-based methods, such as Point-FEMAE, are quite marginal, and perhaps not statistically significant.
Although our PCP-MAE only marginally outperforms other advanced MAE-based methods such as Point-FEMAE and ReCon, **our approach offers significant efficiency advantages,** particularly regarding pre-training and fine-tuning time costs. The efficiency comparisons are shown below (experiments conducted on an idle RTX 3090):
| Method | Pre-training Params (M) | Pre-training Time (s/epoch) | Fine-tuning Params (M) | OBJ-BG Time (s/epoch) | OBJ-ONLY Time (s/epoch) | PB-T50-RS Time (s/epoch) | ModelNet40 Time (s/epoch) |
|-|-|-|-|-|-|-|-|
| Point-MAE (baseline) | 29.0 | 88 | 22.1 | 10 | 10 | 49 | 29 |
| Point-FEMAE | 41.5 (1.43x) | 326 (3.70x) | 27.4 (1.24x) | 30 (3.00x) | 30 (3.00x) | 148 (3.02x) | 72 (2.48x) |
| ReCon | 140.9 (4.85x) | 452 (5.14x) | 43.6 (1.97x) | 13 (1.30x) | 13 (1.30x) | 60 (1.22x) | 42 (1.45x) |
| PCP-MAE (ours) | 29.5 (**1.01x**) | 120 (**1.36x**) | 22.1 (**1.00x**) | 10 (**1.00x**) | 10 (**1.00x**) | 49 (**1.00x**) | 30 (**1.01x**) |
> Q1.2. While the contributions of other MAE-related methods are orthogonal and could be potentially combined with the proposed method, it is unclear if the issue of 3D center point position leakage is still significant when combined with other tools.
Thank you for your good question. First, 3D center point position leakage arises from Point-MAE, or more precisely from the reconstruction objective, because the centers are directly provided. Point-MAE remains the core component of MAE-based methods such as Point-FEMAE and ReCon, and point cloud reconstruction continues to be one of their pre-training objectives. **Thus, the issue of position leakage still exists in these methods.** Our approach directly guides the encoder to learn distributions of centers, which helps to mitigate this issue.
**However, the extent of the negative impact caused by position leakage varies across MAE-related methods.** Take Point-FEMAE [2] as an example. It adds a branch to the pre-training model which prevents the model from learning limited representations. This can improve the model's understanding of the point cloud and indirectly enhance its grasp of center distributions. Consequently, the position leakage issue may be alleviated. Although the performance of these methods can reveal the degree of influence posed by position leakage, a quantitative method to analyze this issue is necessary and could be addressed in our future research.
> Q2. Would be curious to see the results of linear or frozen evaluations to assess the quality of the representations.
Following ReCon [1], we assess the performance of MLP-Linear, which freezes the pre-trained encoder and only updates the classification head to evaluate the quality of the learned representations. Here are the experiment results:
| MLP-Linear | OBJ-BG | OBJ-ONLY | PB-T50-RS |
|-|-|-|-|
| Point-MAE | 82.77±0.30 | 83.23±0.16 | 74.13±0.21 |
| ACT (Cross-modal) | 85.20±0.83 | 85.84±0.15 | 76.31±0.26 |
| Point-FEMAE | 88.98±0.15 | 89.50±0.22 | 80.32±0.10 |
| ReCon (Cross-modal) | 89.50±0.20 | 89.72±0.17 | 81.36±0.14 |
| PCP-MAE (ours) | 89.41±0.13 | 89.41±0.26 | 80.63±0.08 |
The results show that our approach greatly outperforms the baseline Point-MAE, marginally surpasses Point-FEMAE, and lags behind the leading cross-modal method ReCon.
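The MLP-Linear protocol described above (frozen pre-trained encoder, trainable classification head) can be sketched as follows; the plain softmax-regression head and numpy training loop are illustrative assumptions, not the exact head used in ReCon:

```python
import numpy as np

def linear_probe(feats, labels, num_classes, steps=200, lr=0.5):
    """Train only a linear head on frozen features via gradient descent
    on the softmax cross-entropy, then return predictions."""
    n, d = feats.shape
    w = np.zeros((d, num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        logits = feats @ w + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / n   # dL/dlogits
        w -= lr * feats.T @ grad  # the encoder producing feats is never updated
        b -= lr * grad.sum(axis=0)
    return (feats @ w + b).argmax(axis=1)
```

Because only `w` and `b` are updated, the accuracy of such a probe reflects how linearly separable the frozen representations already are.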
> Q3. The effect of the proposed center point prediction task with stronger baselines which provide advances orthogonal to the contributions in this work.
Thank you for this nice suggestion. We incorporated our center point prediction task into Masked Point Modeling (MPM) family methods, including Point-BERT [3] and Point-FEMAE [2]; Point-BERT is parallel to Point-MAE [4], while Point-FEMAE provides improvements orthogonal to the contributions of this work on top of Point-MAE.
Although intuitively feasible, it is important to note that implementing our method into various MPM-based frameworks requires careful consideration due to the different optimization goals of each specific approach. **We incorporated PCP (learning predicting masked centers) into MPM methods including Point-BERT and Point-FEMAE with additional minor changes to maintain architecture consistency.** The experiment results show that incorporating our center point predicting task enhances performance when other experiment setups are aligned:
| Methods | OBJ-BG | OBJ-ONLY | PB-T50-RS |
|-|-|-|-|
| Point-BERT | 87.43 | 88.12 | 83.07 |
| Point-BERT+PCP ($\eta=2.0$) | 89.32 | 89.84 | 84.77 |
| Point-FEMAE $^\dagger$ | 94.32 | 92.94 | 89.38 |
| Point-FEMAE+PCP ($\eta=0.1$) $^\dagger$ | 95.00 | 93.45 | 89.83 |
$^\dagger$ Reproduced results adopting our augmentation $i.e.$, Scale&Translate+Rotation for pre-training.
Due to time constraints, exploration of various design choices and different $\eta$ was not feasible. We believe that further customization could lead to more substantial improvements.
[1] Qi Z, Dong R, Fan G, et al. Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining[C], ICML 2023.
[2] Zha Y, Ji H, Li J, et al. Towards compact 3d representations via point feature enhancement masked autoencoders[C], AAAI 2024.
[3] Yu X, Tang L, Rao Y, et al. Point-bert: Pre-training 3d point cloud transformers with masked point modeling[C], CVPR 2022.
[4] Pang Y, Wang W, Tay F E H, et al. Masked autoencoders for point cloud self-supervised learning[C], ECCV 2022. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their detailed and valuable suggestions and are grateful for the encouraging comments:
1. The paper is clearly written and easy to follow. (DT8p, nJYR, rD3v)
2. The core observation and motivation remain novel and clear. This paper is well-motivated and provides new insights into the differences in positional encodings between 2D MAE and Point-MAE. It opens up new directions for improvement in the field of point cloud self-supervised learning. (MoXR, nJYR, DT8p, rD3v)
3. The proposed method aligns well with the motivation. It is a simple and effective approach that successfully addresses the identified limitations. (MoXR, nJYR, DT8p, rD3v)
4. The experimental results sufficiently demonstrate the effectiveness of the proposed method. (MoXR, nJYR, rD3v)
5. Extensive ablation studies show the effectiveness of different components and designs. (MoXR, nJYR)
Based on the comments, we conclude some noteworthy replies for the reviewers including:
- **[Inappropriate details in the overview figure (Fig. 2). Reviewer MoXR, GmbZ]** We have clarified the details for the reviewers and modified the figure according to the feedback. Please check the attached rebuttal PDF.
- **[Marginal improvement over other SOTA methods. Reviewer MoXR, nJYR, DT8p]** In response, we highlighted the simplicity and efficiency of our method. Compared to other SOTA methods, we require significantly less training time and fewer parameters. Specifically, Point-FEMAE requires 326s per epoch, whereas our PCP-MAE only necessitates 120s per epoch. Our approach is 2.7x faster than the Point-FEMAE.
- **[The generality of our approach. Reviewer MoXR, nJYR, GmbZ]** We have incorporated our core idea into different methods, including Point-BERT and Point-FEMAE. Both demonstrate performance gains after integrating our idea, showcasing the great generality of our approach.
- **[Seemingly unfair comparison and concerns about the effectiveness of our approach. Reviewer GmbZ]** Given that Reviewer GmbZ tested our idea personally, we kindly remind them that there may be some subtle design aspects that should be carefully implemented. The reviewer also believes that all our improvements stem from the different pre-training augmentations rather than the method itself, leading to potentially unfair comparisons with previous SOTA methods. **To address this, our experiments show that other SOTA methods suffer performance drops when using the same augmentation, which we will add to the next version of our paper. Our method benefits from this due to its specific features, which we explain in the rebuttal.** We thank the reviewer for reminding us to display the performance of previous methods under the same settings. More importantly, we have confirmed through experiments that the effectiveness of our approach stems from the proposed method itself, rather than from the "data augmentation trick", as demonstrated by testing other methods with the same augmentation. To further enhance the reliability of our approach, we have provided the code and corresponding checkpoints to the ACs via an anonymous link.
- **[Additional experiments. Reviewer MoXR, DT8p]** We have added new experiments based on the comments, including frozen evaluations and validation of the core phenomenon on other datasets.
We sincerely hope that this work provides valuable insights into the field of point cloud self-supervised learning and that the misunderstandings raised by Reviewer GmbZ can be resolved through this rebuttal. Thanks again to all reviewers for their valuable time to help improve our work.
Pdf: /pdf/e3a75602bfb1aed911622968c93b2adb14caaea4.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The paper proposes a masked-autoencoding-based self-supervised approach for 3D point clouds. The approach, termed PCP-MAE, learns to predict centers for Point Masked AutoEncoders. The paper observes that the coordinates of the centers are essential in the point cloud field and that the decoder can even reconstruct the masked patches without the encoder's representation of the visible patches. Thus, the paper introduces another pre-training objective that predicts the centers and uses them to replace the directly provided ones, leading to improved performance. Moreover, this allows the network to not only encode visible centers effectively but also learn the inter-relationships between visible and masked centers. Finally, the approach is efficient and outperforms Point-MAE on multiple benchmark datasets.
Strengths: - Provide an insight on why only providing center is good enough for point cloud reconstruction in masked autoencoding.
- Propose an additional objective task that learns to predict the center of the masked patches and the use of predicted center of the masked patches in the decoder instead of ground-truth center.
- Experiments results on ScanObjectNN, ModelNet40 and S3DIS.
- Extensive ablation studies on different components in PCP-MAE, loss functions, masking ratio and other architecture related modules.
Weaknesses: - The paper's writing can definitely be improved; sometimes it is really hard to follow the underlying idea.
- Figure 2 can be misleading and doesn't correlate with Equation 10. In Figure 2, it looks like only the masked tokens are passed to the PCP module, but in Equation 10, which is given as $PE_{m}^{pred} = PCM(E_{m}, E_{v}, PE_{v})$, it looks like the PCP module uses the encoded representations of masked patches, visible patches, and positional embeddings of visible patches. Can the authors please clarify this?
- In Table 2, Point-MAE achieves 90.02, 88.29, and 85.18 on the three variants of ScanObjectNN, but in the ablation study (cf. Table 5), the 4th row (reconstruction only) achieves 92.94, 92.42, and 88.65. Isn't doing only reconstruction equivalent to Point-MAE? Does it mean the performance boost comes from the positional encoding?
- Although the approach looks simple and effective, it only shows improvement by incorporating the additional details into Point-MAE. Compared to other baselines like Point-FEMAE, it does not outperform them on multiple datasets. Can the authors please provide additional results by incorporating PCP and the new positional encoding into multiple MAE approaches for point clouds?
- Did the authors adopt the same set of augmentations for pre-training the baselines?
- The method looks very sensitive to $\eta$ (cf. Figure 4).
- Some additional results on indoor datasets like ScanNet would be helpful.
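For concreteness, one plausible reading of the Eq. 10 quoted above is a cross-attention in which masked-token embeddings query the visible tokens and aggregate their positional embeddings; this single-head, projection-free numpy sketch is an assumption for illustration, not the paper's exact module:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pcp_cross_attention(E_m, E_v, PE_v):
    """Predict positional embeddings for masked tokens.
    E_m:  (num_masked, d)  masked-token embeddings (queries)
    E_v:  (num_visible, d) visible-token embeddings (keys)
    PE_v: (num_visible, d) visible positional embeddings (values)"""
    d = E_m.shape[-1]
    attn = softmax(E_m @ E_v.T / np.sqrt(d), axis=-1)  # (num_masked, num_visible)
    return attn @ PE_v  # (num_masked, d): predicted PE for masked tokens
```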
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the questions and the suggestions in the weakness section.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time, thorough comments, and nice suggestions. We are pleased to clarify your questions one-by-one.
> Q1. The paper writing should be improved.
Thank you for your feedback. We will improve the manuscript's language and structure for better clarity and readability, and these updates will be reflected in the next version.
> Q2. The figure 2 can be misleading and it doesn't correlate with the equation 10.
Thank you for pointing out this misleading detail. We have changed Figure 2 to align with Equation 10. Please check the attached rebuttal PDF.
> Q3. Isn't only doing reconstruction equivalent to PointMAE? Does it mean the performance boost comes from the positional encoding?
Yes and no. They are equivalent except for a minor change in the positional encoding. We experimented on the effect of this behavior when aligning other setups:
| Method | OBJ-BG | OBJ-ONLY | PB-T50-RS |
|-|-|-|-|
| Point-MAE | 92.94 | 92.25 | 88.86 |
| Reconstruction_only | 92.94 | 92.42 | 88.65 |
The results show that it only slightly influences performance.
The performance boost mainly comes from the augmentations, i.e., the rotation augmentation explored by [1] for fine-tuning and the Scale & Translate + Rotation pre-training augmentation explored by us. The ablation results of Point-MAE are shown as follows:
| Pre-training Aug. | Fine-tuning Aug. | OBJ-BG | OBJ-ONLY | PB-T50-RS |
|-|-|-|-|-|
| Scale&Translate | Scale&Translate $^*$ | 90.02 | 88.29 | 85.18 |
| Rotation | Rotation $^\dagger$ | 92.60 | 91.91 | 88.42 |
| Scale&Translate+Rotation | Rotation $^\ddagger$ | 92.94 | 92.25 | 88.86 |
$^*$ Adopted by Point-MAE.
$^\dagger$ Adopted by other SOTA methods including Point-FEMAE and ReCon.
$^\ddagger$ Adopted by our PCP-MAE.
Note that Scale&Translate+Rotation pre-training augmentation benefits both Point-MAE and our PCP-MAE but does not benefit other SOTA methods, including Point-FEMAE and ReCon (experiment results are in Q5).
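As an illustration of the kind of augmentation discussed here, a Scale&Translate+Rotation transform for a single point cloud might look like the sketch below; the parameter ranges and rotation axis are assumptions, not the exact values used in the paper:

```python
import math
import random

def scale_translate_rotate(points, rnd):
    """Apply random anisotropic scaling, a random shift, and a random
    rotation about the up (y) axis to a list of (x, y, z) points."""
    scale = [rnd.uniform(2 / 3, 3 / 2) for _ in range(3)]
    shift = [rnd.uniform(-0.2, 0.2) for _ in range(3)]
    theta = rnd.uniform(0.0, 2 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y, z in points:
        # scale and translate, then rotate about the y axis
        x = x * scale[0] + shift[0]
        y = y * scale[1] + shift[1]
        z = z * scale[2] + shift[2]
        out.append((c * x + s * z, y, -s * x + c * z))
    return out
```

Combining the transforms yields a richer variety of patch centers per epoch, which is consistent with the explanation in Q5 of why PCP-MAE benefits from this augmentation.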
> Q4.1. Compared to other baselines like Point-FEMAE, it does not outperform them on multiple datasets.
Although our PCP-MAE only marginally outperforms previous SOTA methods like Point-FEMAE and ReCon or performs comparably, **our PCP-MAE significantly exceeds them in efficiency.** The comparisons among these three methods are shown below (using one RTX 3090):
| Method | Pre-training Params (M) | Pre-training Time (s/epoch) | Fine-tuning Params (M) | OBJ-BG Time (s/epoch) | OBJ-ONLY Time (s/epoch) | PB-T50-RS Time (s/epoch) | ModelNet40 Time (s/epoch) |
|-|-|-|-|-|-|-|-|
| Point-MAE (baseline) | 29.0 | 88 | 22.1 | 10 | 10 | 49 | 29 |
| Point-FEMAE | 41.5 (1.43x) | 326 (3.70x) | 27.4 (1.24x) | 30 (3.00x) | 30 (3.00x) | 148 (3.02x) | 72 (2.48x) |
| ReCon | 140.9 (4.85x) | 452 (5.14x) | 43.6 (1.97x) | 13 (1.30x) | 13 (1.30x) | 60 (1.22x) | 42 (1.45x) |
| PCP-MAE (ours) | 29.5 (**1.01x**) | 120 (**1.36x**) | 22.1 (**1.00x**) | 10 (**1.00x**) | 10 (**1.00x**) | 49 (**1.00x**) | 30 (**1.01x**) |
> Q4.2. Additional results by incorporating PCP and new positional encoding in multiple MAE approaches
Although intuitively feasible, it is important to note that implementing our method into various MAE-based frameworks requires careful consideration due to the different optimization goals of each specific approach.
**We incorporated PCP into MPM (Masked Point Modeling) methods including Point-BERT [4] and Point-FEMAE [5] with additional minor changes to maintain architecture consistency.** The results demonstrate that PCP enhances performance when other setups are aligned:
| Methods | OBJ-BG | OBJ-ONLY | PB-T50-RS |
|-|-|-|-|
| Point-BERT | 87.43 | 88.12 | 83.07 |
| Point-BERT+PCP ($\eta=2.0$) | 89.32 | 89.84 | 84.77 |
| Point-FEMAE $^\dagger$ | 94.32 | 92.94 | 89.38 |
| Point-FEMAE+PCP ($\eta=0.1$) $^\dagger$ | 95.00 | 93.45 | 89.83 |
$^\dagger$ Reproduced results adopting our augmentation $i.e.$, Scale&Translate+Rotation for pre-training.
Due to time limits, we couldn't explore different design choices and $\eta$, which could bring improvement.
>Q5. Did the authors adopt the same set of augmentations for pre-training baselines?
We used the Scale&Translate+Rotation augmentation for pre-training because we found that this combination benefits our PCP-MAE and slightly benefits Point-MAE, **which is different from previous baselines (Tab. 10).** This improvement can be attributed to the fact that the combination of augmentations enables a variety of centers, allowing our PCP-MAE to learn a richer distribution of center information and become more robust. **In contrast, previous SOTA methods like ReCon and Point-FEMAE do not benefit from this augmentation empirically.**
The detailed augmentations are illustrated in Q3. We further provide the results after aligning the augmentations of Point-FEMAE and ReCon with ours for clarity:
Method|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-FEMAE (Origin)|95.18|93.29|90.22
Point-FEMAE (Our augmentation)|94.32|92.94|89.38
ReCon (Origin)|95.18|93.63|90.63
ReCon (Our augmentation)|94.49|92.77|89.55
>Q6. The method looks very sensitive to $\eta$ (Figure 4).
We tried values from 0.1 to 1.0 in increments of 0.1 for convenience, and the results in Figure 4 show that a small $\eta$ is suitable for our PCP-MAE. When $\eta$ is chosen on a finer grid, our approach is less sensitive:
$\eta$|0.02|0.04|0.06|0.08|0.10|0.12|0.14|0.16|0.18
-|-|-|-|-|-|-|-|-|-
ScanObjectNN (OBJ-BG)|94.14|94.66|94.49|95.35|**95.53**|95.00|94.66|94.49|94.66
ModelNet40 (w/o voting)|93.56|93.48|93.64|93.72|**93.96**|93.84|93.52|93.44|93.52
>Q7. Some additional results on indoor datasets like ScanNet would be helpful.
We experimented with the indoor dataset S3DIS [2] (lines 275-279), which provides detailed indoor 3D point clouds covering over 6,000 square meters across six buildings. The scene segmentation results are:
Methods|mAcc|mIoU
-|-|-
Point-MAE|69.9|60.8
PCP-MAE (ours)|71.0|61.3
For ScanNet [3], the dataset's different statistics (e.g., 2k input points for ShapeNet vs. 50k for ScanNet) require changes to the model architecture and pre-training setups. Due to time and resource constraints, we couldn't complete this but will update the results in the next paper version.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thanks for the detailed rebuttal. The authors have addressed some of my concerns regarding the paper diagram and promise to improve the manuscript. I have also read the other reviewers' comments and the authors' rebuttals, and I conclude that although PCP-MAE shows some improvement compared to other SOTA methods, I agree with Reviewer GmbZ that most of the improvement comes from other tricks like augmentation. Moreover, I would encourage the authors to study the impact of the proposed approach in other methods more rigorously. Furthermore, I understand the randomness in current 3D datasets, so it would be important to study the impact of the approaches on datasets like ScanNet. Even on the S3DIS dataset, the approach doesn't show any improvement compared to ReCon and Point-FEMAE. So, I will maintain my original score.
---
Reply to Comment 1.1.1:
Title: Clarification on Method's Effectiveness: Beyond Augmentation Tricks
Comment: Thank you for your response, which gives us the opportunity to clarify that the benefits of our method do not stem from augmentation tricks.
Firstly, we attempted to apply the mentioned augmentation tricks to previous SOTA methods, but this did not yield improvements and even showed a declining trend, as demonstrated below:
Method|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-FEMAE (+ Our augmentation)|94.32|92.94|89.38
ReCon (+Our augmentation)|94.49|92.77|89.55
PCP-MAE (Ours) | **95.52** | **94.32** | **90.35**
Besides, we believe that different methods are suited to **different hyperparameters** (including augmentation), and it is common practice to tune hyperparameters for newly proposed methods. Moreover, we have also provided results for the baseline method under **our hyperparameter** settings for a **fair comparison**, which show that our method significantly outperforms both the baseline and other state-of-the-art methods:
Methods|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-MAE (baseline) |92.60|91.91|88.42
Point-MAE (+ our aug) |92.94|92.25|88.86
Ours | **95.52** | **94.32** | **90.35**
Furthermore, our method (PCP) can serve as a **plug-in** module, applicable to a wide range of models. This demonstrates that our approach is not merely a trick, but a method with broad applicability:
Methods|OBJ-BG|OBJ-ONLY|PB-T50-RS
-|-|-|-
Point-BERT|87.43|88.12|83.07
Point-BERT+PCP ($\eta$=2.0)|89.32|89.84|84.77|
Point-FEMAE + (Our Aug)|94.32|92.94|89.38
Point-FEMAE+PCP ($\eta$=0.1) + (Our Aug)| **95.00** | **93.45** | **89.83**
Additionally, we would like to reiterate that one of our contributions is revealing a vulnerability in the Point MPM (Masked Point Modeling) series of methods. Specifically, we showed that it is possible to reconstruct masked point clouds without needing an encoder or decoder. This challenges the assumptions of many researchers and can stimulate deeper reflection in the field.
---
Rebuttal 2:
Comment: References
[1] Dong R, Qi Z, Zhang L, et al. Autoencoders as cross-modal teachers: Can pretrained 2D image transformers help 3D representation learning? arXiv preprint arXiv:2212.08320, 2022.
[2] Armeni I, Sener O, Zamir A R, et al. 3D semantic parsing of large-scale indoor spaces. CVPR 2016.
[3] Dai A, Chang A X, Savva M, et al. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. CVPR 2017.
[4] Yu X, Tang L, Rao Y, et al. Point-BERT: Pre-training 3D point cloud transformers with masked point modeling. CVPR 2022.
[5] Zha Y, Ji H, Li J, et al. Towards compact 3D representations via point feature enhancement masked autoencoders. AAAI 2024. | null | null | null | null | null | null |
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Accept (poster) | Summary: This paper is motivated by the fact that traditional scaling approaches are computationally expensive and overlook the significance of efficiently improving model capabilities from the vision side. The authors introduce CuMo, a method to enhance MLLMs by incorporating sparsely-gated Mixture-of-Experts (MoE) blocks into the vision encoder and MLP connector. CuMo employs a three-stage training process and has achieved impressive results across multiple benchmarks.
Strengths: 1. This paper is well-written, and the proposed method is simple and easy to follow.
2. This paper has addressed an important issue: traditional MoE-based MLLMs are computationally expensive and overlook the vision encoder.
3. CuMo achieves good performance across various benchmarks.
Weaknesses: 1. The authors employ the MoE method in the vision encoder and connector but do not explain why this approach enhances the model's capabilities.
2. The use of the MoE method inherently expands the trainable parameters to some extent. The author should consider whether merely increasing the parameters of the vision encoder alone will also enhance the model's capabilities. This will provide readers with valuable insights.
3. CuMo's LLM is mainly limited to Mixtral-8$\times$7B. Since this paper primarily explores the expansion of the vision encoder and connector, I think the authors should include comparisons with other LLM backbones, such as Qwen [1] and LLaMA [2], to demonstrate that the model's improvements are not due to differences in the inherent capabilities of the LLM itself.
4. Since CuMo is three-stage training, the increased amount of training data may lead to an unfair comparison with other baselines, such as LLaVA1.5.
[1] Qwen Technical Report.
[2] LLaMA: Open and Efficient Foundation Language Models.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors mention that the main limitation of this work is the hallucination problem and indicate that it will be improved in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Why MoE in the vision encoder and connector enhances the model's capabilities**
A1: MoE has been widely used in LLMs to improve the capacity [1] of text generation, as it increases model size during training while keeping costs low at inference. In our work, we apply MoE in the vision encoder and connectors of multimodal LLMs, which has the potential to generate versatile visual tokens and further improve the model's capabilities on many vision-language instruction-following tasks. We verify our assumptions with detailed ablation studies on the effectiveness of the proposed MLP-MoE and CLIP-MoE.
[1] GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
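For intuition, a top-2-in-4 sparsely-gated MoE block of the kind applied to the MLP connector can be sketched in plain Python. The shapes, expert callables, and renormalized top-2 combination below are illustrative assumptions for exposition, not the authors' implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top2_moe(x, experts, router_weights):
    """Route one token's features x through the top-2 of len(experts) experts.

    x: list of floats; experts: callables mapping x -> list of floats;
    router_weights: one gating weight vector per expert.
    """
    # gating logits: dot product of x with each expert's router vector
    logits = [sum(xi * wi for xi, wi in zip(x, w)) for w in router_weights]
    probs = softmax(logits)
    # select the two highest-probability experts and renormalize their gates
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    norm = sum(probs[i] for i in top2)
    out = [0.0] * len(experts[top2[0]](x))
    # combine only the selected experts' outputs, weighted by gate values
    for i in top2:
        y = experts[i](x)
        out = [o + (probs[i] / norm) * yi for o, yi in zip(out, y)]
    return out
```

Only the two selected experts are evaluated per token, which is what keeps inference cost close to the dense baseline while the total parameter count grows with the number of experts.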
**Q2: Whether merely increasing the parameters of the vision encoder alone will also enhance the model's capabilities for comparisons**
| Model | ImageNet | Res. | Params | TextVQA | MMVet | SEED |
|----------------|----------|------|--------|---------|-------|------|
| CLIP-ViT-L | 76.2 | 336 | 0.30B | 57.6 | 32.1 | 66.4 |
| SigLIP-SO400M | 83.1 | 384 | 0.43B | 58.1 | 32.5 | 67.5 |
| CLIP-ViT-H | 78.0 | 224 | 2.52B | 49.2 | 29.5 | 58.2 |
A2: It depends. We compare the CLIP-ViT-L encoder with two larger vision encoders: CLIP-ViT-H and SigLIP-SO400M. CLIP-ViT-H has 8x more parameters than CLIP-ViT-L while performing much worse due to low-resolution inputs. SigLIP-SO400M has 0.13B more parameters and performs consistently better than CLIP-ViT-L. These findings have also been observed in recent multimodal LLMs [2,3]: model size is not the top factor affecting the overall performance of multimodal LLMs.
[2] MM1: Methods, Analysis, and Insights from Multimodal LLM Pre-training
[3] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
| Model | TextVQA | MMVet | SEED |
|--------------------------|---------|-------|------|
| SigLIP-SO400M | 58.1 | 32.5 | 67.5 |
| + CLIP-MoE & MLP-MoE | 59.4 | 34.1 | 69.8 |
We further apply CLIP-MoE and MLP-MoE to the stronger SigLIP-SO400M vision encoder and they can still make improvements upon a larger and more powerful vision encoder.
**Q3: CuMo's LLM is mainly limited to Mixtral-8x7B, and the author should include comparisons with other LLM backbones.**
| Model | TextVQA | MMVet | SEED |
|--------------------------|---------|-------|------|
| Mistral-7B | 57.6 | 32.1 | 66.4 |
| + MLP-MoE & CLIP-MoE | 59.3 | 34.3 | 69.6 |
| Vicuna-7B-v1.5 | 58.2 | 30.5 | 66.1 |
| + MLP-MoE & CLIP-MoE | 59.4 | 32.6 | 68.5 |
A3: CuMo mainly focuses on Mistral-7B and Mixtral-8x7B. Our ablations are based on Mistral-7B and verify the effectiveness of CLIP-MoE and MLP-MoE separately. We further evaluate CuMo on Vicuna-7B-v1.5, as shown in the table above, with added CLIP-MoE and MLP-MoE to show their effectiveness.
**Q4: Since CuMo is three-stage training, the increased amount of training data may lead to an unfair comparison with other baselines, such as LLaVA1.5.**
A4: In Table 2, we maintain a fair comparison to LLaVA1.5 under the same training data and CuMo-7B consistently performs better than LLaVA1.5, either under Vicuna7B or Mistral-7B. In Table 1, we compare CuMo with models that use various datasets even with private datasets like LLaVA-NeXT and MM1, while CuMo is trained under fully open-sourced datasets. We may remove LLaVA1.5 from Table 1 and keep LLaVA-NeXT for comparison in the updated version to avoid confusion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Some of my concerns were addressed during the rebuttal. I would like to raise my score. | Summary: The paper introduces CuMo, a novel approach to enhancing multimodal large language models (LLMs) by integrating sparsely-gated Mixture-of-Experts (MoE) blocks into both the MLP connector and vision encoder. CuMo addresses the challenge of scaling multimodal LLMs effectively by leveraging MoE's efficiency in parameter usage during training and inference. Key contributions include a detailed exploration of MoE integration strategies across different components of LLMs, a three-stage training methodology to stabilize model training, and the introduction of auxiliary losses for expert load balancing. Experimental results demonstrate that CuMo outperforms existing state-of-the-art multimodal LLMs on various benchmarks, showcasing its efficacy in enhancing model performance while managing computational efficiency.
Strengths: Innovative Integration of MoE: Integrating sparsely-gated Mixture-of-Experts (MoE) blocks into both the MLP connector and vision encoder of multimodal large language models (LLMs) represents a novel approach. This approach is not only innovative in terms of architecture but also in its application to enhance multimodal understanding.
Experimental Rigor: The authors provide a thorough experimental evaluation, comparing CuMo against state-of-the-art models on multiple benchmarks. This comprehensive evaluation includes ablation studies, which validate the effectiveness of their proposed approach.
Weaknesses: Issue: While the paper touches upon the scalability benefits of MoE, there's limited discussion on the computational efficiency and training time required for integrating MoE blocks into multimodal LLMs. This is crucial as MoE designs can potentially introduce additional computational costs during training.
While the paper focuses on performance metrics, there's a lack of discussion on the interpretability of MoE decisions within CuMo. Understanding how MoE blocks make decisions and whether they exhibit consistent behavior across different inputs is crucial for deploying models in real-world applications.
The paper shows that Mixtral-8×7B's CuMo does not consistently outperform Mini-Gemini on several datasets, particularly on TextVQA and MME. This discrepancy could be due to architectural differences, especially Mini-Gemini's specialization in high-resolution data. The author should conduct a comparative study with non-MoE architectures like Vicuna or Llama3, focusing solely on CuMo's projection MoE and Vision Encoder MoE. This would provide insights into how CuMo's MoE-based approach performs against architectures not leveraging MoE.
Table 1 highlights CuMo's results in datasets like VQAv2, SeedImg, and MMbench, even though it does not consistently achieve the best performance. This can mislead readers about CuMo's comparative performance against other models.
The experiments primarily focus on Mistral or Mixtral-8×7B architectures with MoE integration. There's a lack of exploration into how CuMo's MoE-based enhancements compare against non-MoE architectures like Vicuna or Llama3.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Computational efficiency and training time of MoE.**
| CuMo | CLIP | MLP | LLM | Total | Time |
|--------------------------|-------|-------|-------|--------|------------|
| Mistral-7B | 0.30B | 0.025B| 7.25B | 7.58B | ~16h |
| + Top 2-in-4 MLP-MoE | 0.30B | 0.10B | 7.25B | 7.65B | ~16h |
| + Top 2-in-4 CLIP-MoE | 0.91B | 0.10B | 7.25B | 8.26B | ~20h |
A1: We provided the breakdown of additional parameters during training in Table 6, and here we further include the training time of CuMo-Mistral-7B with MLP-MoE and CLIP-MoE. We used a single 8xA100 machine and LLaVA-665K data for reference here. Note that we only used model parallelism in DeepSpeed for our implementation; training could be faster if expert parallelism were added.
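The parameter accounting above follows simple arithmetic: with sparse top-k routing, all experts count toward training/storage parameters, but only the top-k experts run per token at inference. The helper below is a hypothetical illustration of that bookkeeping (router parameters ignored as negligible), not the paper's exact breakdown:

```python
def moe_param_counts(shared, per_expert, n_experts, top_k):
    """Total (training/storage) vs. active (per-token inference) parameter
    counts for a sparsely-gated MoE with top_k-in-n_experts routing.

    shared: parameters outside the MoE experts; per_expert: parameters in
    one expert; counts are in whatever unit the caller uses (e.g. billions).
    """
    total = shared + n_experts * per_expert
    active = shared + top_k * per_expert
    return total, active
```

For example, a 0.025B dense MLP connector upcycled into a top 2-in-4 MoE grows to 4 x 0.025B = 0.10B stored parameters, consistent with the table, while only 2 x 0.025B = 0.05B of those are active for any given token.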
**Q2: Lack of discussion on the interpretability of MoE decisions within CuMo.**
| Subset | Layer ID | Top 1 Expert Ratio |
|-----------|----------|--------------------|
| OCR | 8 | 31.54% |
| Color | 7 | 33.97% |
| Code | 18 | 34.49% |
| Reasoning | 1 | 35.01% |
A2: Following Section 5 of Mixtral-8x7B [1], we performed the expert distribution analysis in Figure 4, which shows that the expert load is fairly balanced overall across the layers. We further used the subsets of images in MME by topic and found that they show a preference towards certain experts in some layers, which may imply hidden patterns in expert assignment based on the application or topic. We may add these results to Section 4.4 as part of the analysis.
[1] Mixtral of Experts, https://arxiv.org/abs/2401.04088
**Q3: The paper shows that Mixtral-8×7B's CuMo does not consistently outperform Mini-Gemini on several datasets, particularly on TextVQA and MME.**
A3: CuMo-Mixtral-8x7B is better than Mini-Gemini-Mixtral-8x7B on MME (+0.5), MMMU (+3.2), and MM-Vet (+2.9), while worse on TextVQA (-3.2) and MMBench (-0.3), as shown in Table 1. One main reason is that Mini-Gemini takes high-resolution inputs, and benchmarks like TextVQA are sensitive to input resolution, as shown in Tables 2 \& 3 of Mini-Gemini.
**Q4: The experiments primarily focus on Mistral or Mixtral-8×7B architectures with MoE integration. The author should conduct a comparative study with non-MoE architectures like Vicuna or Llama3, focusing solely on CuMo's projection MoE and Vision Encoder MoE.**
| | TextVQA | MMVet | SEED |
|------------------|---------|-------|------|
| Mistral-7B | 57.6 | 32.1 | 66.4 |
| + MLP-MoE & CLIP-MoE | 59.3 | 34.3 | 69.6 |
| Vicuna-7B-v1.5 | 58.2 | 30.5 | 66.1 |
| + MLP-MoE & CLIP-MoE | 59.4 | 32.6 | 68.5 |
A4: Our ablation studies are mainly on the Mistral-7B, which is a non-MoE LLM to verify the effectiveness of MLP-MoE and CLIP-MoE. We further evaluate CuMo on Vicuna-7B-v1.5 to make comparisons as shown in the table above under the same LLaVA-665K training data. The CLIP-MoE and MLP-MoE also make improvements over the Vicuna-7B-v1.5.
**Q5: Table 1 highlights are misleading.**
A5: Thanks for the suggestions. We'll revise that and highlight the best performance across models in each session of Table 1. | Summary: The paper presents upcycling for large multimodal models (LMMs). It specifically looks at how to enable upcycling for the different components of an auto-regressive based multimodal model (e.g., LLaVA). It shows how the MLP and vision-encoder (in this case, CLIP) are the two modules that should be upcycled and instead of relying on upcycled LLMs, it is better to use pretrained MoE models (for their specific example of upcycled-Mistral vs Mixtral). The paper outlines the training recipe, which is based on 3 stages, with the first focusing on enabling a stable multi-modal model, and then incorporating their CuMo recipe in two stages. To enable stability and balance across the introduced experts, the authors enabled both load balancing loss for the experts and $z_{loss}$ (as suggested in the ST-MoE paper) as auxiliary losses. They apply these auxiliary losses to the two different upcycled MoEs separately. This is followed by a detailed training recipe and evaluation (both qualitative and quantitative) and ablations that explain their design choices.
Strengths: 1. The paper outlines a clear recipe for upcycling in the context of multimodal models, which has not been explored in the literature before.
2. The authors support their design choices through well-designed ablations - particularly for the MoE blocks (which form the core of their method) - for the MLP connector and CLIP model. They show the benefits of using pretrained MoE LLM models over upcycling a dense LLM.
3. They present the benefits of using different auxiliary losses (balancing + $z_{loss}$, which they term bzloss) to train the upcycled model.
4. The paper relies on fully open datasets, and it presents all training settings and hyper-parameters, enabling the reproduction of its results.
5. The authors show quantitative results, which show that for models of similar # active parameters (during inference), the models are competitive with other popular LMMs of similar sizes and outperform them on some benchmarks. They show this with two different LLMs (Mistral + Mixtral), showing their method is composable with different LLMs. For results that use the GPT4 API, they present the statistical average of 3 API calls to calibrate their results. They give a full breakdown of how the # active parameters are computed for their models based on the upcycled components.
6. They follow this up with some qualitative analysis of the balance in experts (relying on the bzloss), seeing an approximately equal distribution of tokens through the experts. They also show some dialogue examples based on sample images for 3 different LMMs, highlighting the benefits of using their method.
7. For some of their training recipes, they show studies of using high-fidelity data and how relying on the latest training methods, such as multi-resolution features, helps boost their overall model quality.
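For concreteness, the bzloss combination mentioned above (a switch-style load-balancing term plus ST-MoE's router z-loss) can be sketched as follows. The coefficient values and list-based tensor shapes are illustrative assumptions for this sketch, not the paper's exact settings:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def bz_loss(router_logits, alpha=0.01, beta=0.001):
    """Load-balancing loss plus router z-loss ("bzloss") for one MoE layer.

    router_logits: per-token gating logits, shape [num_tokens][num_experts].
    The balancing term pushes top-1 assignment fractions toward uniform;
    the z-loss keeps gating logits small for numerical stability.
    """
    n_tokens = len(router_logits)
    n_experts = len(router_logits[0])
    probs = [softmax(l) for l in router_logits]
    # f[i]: fraction of tokens whose top-1 expert is i
    f = [0.0] * n_experts
    for p in probs:
        f[max(range(n_experts), key=lambda i: p[i])] += 1.0 / n_tokens
    # P[i]: mean router probability mass assigned to expert i
    P = [sum(p[i] for p in probs) / n_tokens for i in range(n_experts)]
    balance = n_experts * sum(fi * Pi for fi, Pi in zip(f, P))
    z = sum(logsumexp(l) ** 2 for l in router_logits) / n_tokens
    return alpha * balance + beta * z
```

A fully collapsed router (every token sent to one expert with large logits) scores strictly worse under both terms than a near-uniform one, which is the behavior the auxiliary losses are meant to enforce.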
Weaknesses: 1. One main criticism of the upcycling setup is the limit on the gains from upcycling. The original setting [1] compares the upcycled model to the original MoE model in a 100% dense compute setting and shows that it takes ~20% additional capacity to catch up to the upcycled model (on the LLM setting). While it does take more computing on the ViT models, do the authors have any intuition on when training MoE-based ViTs (such as V-MoE [2]) and MoE connectors from scratch will be much better / potentially start outperforming the dense-only case?
2. Another criticism of the setup is the diminishing gains as the base model size increases (see Figures 2 & 3 in [1]). Do the authors have an intuition of how this will apply to the CuMo setup? If we scale the CLIP-ViT model or use a much stronger model like SigLIP-ViT, will the gains from CuMo still hold? Please note that I'm not expecting new experiments or results here — just a sense of the authors' intuition.
[1] Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints, https://arxiv.org/abs/2212.05055
Technical Quality: 3
Clarity: 3
Questions for Authors: __Suggestions__:
1. In Table 1, please highlight all the best numbers, not just where the CuMo model is the best (both for the 7B dense & MoE sections)
2. In Table 2, TextVQA is highlighted as the best with the CuMo model, but the QwenVL model has a higher score. How is the highlighting done in this case - if it is still the "best" result (despite # data samples used), then it will be good to highlight the appropriate model.
3. For both table captions, it would be good to mention "best results are highlighted in bold" so that it is clear to the user.
4. For Figure 6, please mention which model generates the responses - Cumo with Mistral or Mixtral.
5. In the Appendix intro paragraph, the authors mention the "M3" model in the last line, but no other reference has been made to this model before. It would be good to clarify this.
__Questions__:
1. The authors mention a non-society license in the impacts section - is there a specific license being applied that the authors cannot speak about as it will break double-blind? Or is it a general enough license so it is well understood how the release is regulated?
2. To clarify, the authors mention only the datasets in the checklist (question 5), but in the impacts section, they mention releasing both code and model. It would be good to be consistent on this front.
3. For the usage guidelines (question 11), there is nothing directly mentioned in the paper. Is the idea that the license, when released, will have clear guidelines for this?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The authors list hallucination as the main limitation of the model, which they propose can be corrected using RLHF and help improve reliability. While RLHF can help to some extent with the hallucination problem in the context of multi-modal LLMs, it firstly aligns models with human preferences on tasks of interest (e.g., improving captioning capabilities). Additional systems such as RAG also play a role in this case. I'd recommend potentially mentioning this.
2. In a similar vein to RLHF, since the model has not been aligned, I'd also recommend that the authors mention the potential for biased / non-helpful outputs from their model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Limit on the gains from upcycling and when training MoE-based ViTs or MoE connectors from scratch will be much better / potentially start outperforming the dense-only case?**
A1: The conclusion of needing 20% additional capacity to catch up to the upcycled model in the original sparse upcycling and V-MoE work is based on a much more compute- and data-intensive budget for LLM pre-training. However, in the CuMo setup, our motivation for using upcycling is to stabilize the training of the MoE module under a small data and compute budget, because training connectors or ViTs from scratch is not comparable to the dense models due to training instabilities, as shown in Table 3(a). To estimate the gains of upcycling compared to training from scratch, we would need to train a CLIP-MoE from scratch, which is outside our training budget and beyond the scope of our work.
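Sparse upcycling in this sense amounts to initializing every expert as a copy of the pre-trained dense weights and adding only a freshly initialized router, which is what makes early MoE training stable. The dict-based weight layout below is a hypothetical illustration of that initialization, not the authors' code:

```python
import copy
import random

def upcycle_dense_ffn(dense_ffn, num_experts=4, hidden_dim=8, seed=0):
    """Turn one pre-trained dense FFN into a sparsely-gated MoE block.

    dense_ffn: dict of weight lists for the original FFN (any nesting).
    Each expert starts as an exact, independent copy of the dense weights;
    only the router is newly initialized (small random values, so initial
    gating is near-uniform).
    """
    rng = random.Random(seed)
    experts = [copy.deepcopy(dense_ffn) for _ in range(num_experts)]
    router = [[rng.gauss(0.0, 0.02) for _ in range(hidden_dim)]
              for _ in range(num_experts)]
    return {"experts": experts, "router": router}
```

At initialization, all experts compute the same function as the dense model, so the upcycled network starts from the dense checkpoint's quality and the experts only diverge as training proceeds.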
**Q2: Diminishing gains as the base model size increases and how this will apply to the CuMo setup? If we scale the CLIP-ViT model or use a much stronger model like SigLIP-ViT, will the gains from CuMo still hold?**
| | ImageNet | Res. | Params. | TextVQA | MMVet | SEED |
|------------------|----------|------|---------|---------|-------|------|
| CLIP-ViT-L | 76.6 | 336 | 0.30B | 57.6 | 32.1 | 66.4 |
| + MoE | - | 336 | 0.50B | 59.3 | 34.3 | 69.6 |
| SigLIP-SO400M | 83.2 | 384 | 0.43B | 58.1 | 32.5 | 67.5 |
| + MoE | - | 384 | 0.72B | 59.4 | 34.1 | 69.8 |
A2: We think the diminishing gains also exist in the CuMo setup if we use larger and stronger pre-trained CLIP with MoE while keeping training data unchanged. Here we use the pre-trained SigLIP-SO400M as the vision encoder and add MoE to it as shown in the table above. SigLIP-SO400M has a much better performance on ImageNet zero-shot classification than CLIP-ViT-L (83.2 vs 76.6). The added MoE can still make improvements to this stronger vision encoder but the average improvement shrinks compared to CLIP-ViT-L. However, the training data here is limited to LLaVA-665K for quick verification, which may not show the full potential of the model if training with more data.
**Q3: Suggestions regarding Table 1, 2, captions, Figure 6 and Appendix.**
A3: Thanks for the suggestions. We'll update Table 1 by highlighting the best performance numbers in each section and Table 2 by highlighting QwenVL's TextVQA number, as well as the table captions to make them clear to the audience. For Figure 6, the responses are generated by CuMo-Mistral-7B. For the Appendix, the 'M3 model' refers to the CuMo-Mistral-7B, we'll revise it in the updated version as well.
**Q4: Non-society license, release, and usage guidelines.**
A4: We plan to release the code under Apache 2.0 and the weights of all CuMo checkpoints under CC BY-NC 4.0 for non-commercial use. All the datasets and pre-trained weights we used for training and evaluation are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses.
**Q5: Limitations.**
A5: Thanks for the suggestions. We'll add discussions of RAG and the potential for biased outputs in the limitation section.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response to the reviews and additional experimental results. After reading all reviews and response, and the overall global response, I will retain my score.
Please make the necessary changes for the figure and table captions and other recommended changes in the final version. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their thoughtful comments. We feel encouraged that the reviewers find that:
- The innovation of our method integrates MoE design into the vision side of current multimodal LLMs (NGUV, rTFY, 5mox), and the implementation is simple and easy to follow (5mox).
- We provide detailed ablations to validate the effectiveness of the proposed method (NGUV, rTFY) and achieve good performance across various benchmarks (rTFY, 5mox).
- Our work is based on fully open datasets and we present all training settings with hyper-parameters for reproduction of the results (NGUV).
We also appreciate the suggestions from all reviewers that help us continue to improve the draft. We attempted our best to address the questions using the individual responses below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unlocking the Potential of Global Human Expertise | Accept (poster) | Summary: This paper designs a framework called Realizing Human Expertise through AI (RHEA) to combine and distill solutions from a diverse set of models or solutions provided by experts. The steps are: Define Problem -> Gather Solutions -> Distill Model -> Evolve Solution. Examples were presented to demonstrate the framework's effectiveness in different domains.
Strengths: The authors propose a potential way to combine collective knowledge to discover better solutions. It could be a way to find a better solution when more than one candidate solution is available.
Weaknesses: 1. Some explanations of the framework are unclear and require additional details to better explain the "Distill" process.
2. The contribution paragraph needs more information to clearly describe the real contribution.
3. The conclusion is too broad and needs to be more specific.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) The concern is the efficiency of this framework, as the Gather step requires waiting for experts to respond. The framework involves a human-in-the-loop process. How can bias be reduced when gathering solutions from the experts? (if these experts' solutions are biased or limited)
2) During a crisis, what happens if there are insufficient solutions? If there are two opposing solutions, what will be the outcome?
3) The description of the Distill process lacks sufficient detail for the reader to understand how it works.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This work has limited real-world evaluation. Fairness also needs to be considered in developing this framework.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Response to question on details of the Distill process:_
We will move the core details of Distill from the Appendix to the main text and ensure that there is enough in the main text to have a clear idea of the process.
_Response to suggestions to add more specific details to the Contributions and Conclusion paragraphs:_
Thank you for the suggestion. We will revise both these paragraphs to focus more on the technical contributions of the paper, specifically, details of the framework, implementation, and experiments.
_Response to questions around the effects of having humans playing a role in the process:_
These are very interesting practical considerations. This work focuses on the technical aspect of how to set up the framework, but the problem of how to improve efficiency in gathering expert solutions is a social/civil problem. It could be improved e.g. through improved communication/solicitation methods. Such considerations are outside the scope of this paper, but constitute interesting avenues of future work, as will be noted in the discussion.
On the other hand, RHEA has a natural method to overcome bias in expert solutions, since, through the process of evolution, the objectively highest quality components can recombine and persist. If expert knowledge is limited, RHEA will simply revert to the evolution baseline.
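In spirit, seeding evolution with gathered expert solutions and letting recombination preserve the highest-quality components can be illustrated with the toy sketch below. It assumes fixed-length real-valued solutions and a single fitness score; RHEA's actual representation, distillation step, and multi-objective Pareto front are richer than this:

```python
import random

def evolve(expert_solutions, fitness, generations=50, pop_size=20, seed=0):
    """Toy evolutionary search seeded with expert solutions.

    Uniform crossover recombines components across parents, Gaussian
    mutation explores nearby variants, and elitist selection (the top half
    survives unchanged) guarantees quality never degrades. If the expert
    solutions are weak, this reverts to plain evolution from random seeds.
    """
    rng = random.Random(seed)
    n = len(expert_solutions[0])
    # population starts from gathered expert solutions, padded with
    # random individuals up to pop_size
    pop = [list(s) for s in expert_solutions]
    while len(pop) < pop_size:
        pop.append([rng.uniform(-1, 1) for _ in range(n)])
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: survive unchanged
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [ai if rng.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]   # uniform crossover
            children.append([c + rng.gauss(0, 0.05) for c in child])
        pop = parents + children
    return max(pop, key=fitness)
```

Because the best individual is never mutated away, the final solution is at least as fit as the best expert seed, which mirrors the rebuttal's point that biased or limited expert input cannot drag the result below the evolution baseline.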
_Response to: “During a crisis, what happens if there are insufficient solutions? If there are two opposing solutions, what will be the outcome?”_
This is exactly the problem that RHEA is designed to solve! RHEA yields a broad Pareto front of solutions, to maximize the chance that one will be useful in any particular scenario.
_Response to “This work has limited real-world evaluation”:_
While further real-world evaluation would always be useful, this paper actually includes an unusually substantial such evaluation already. The COVID-19 intervention application required soliciting, incentivizing, and gathering the input of over 100 teams of experts in over 20 countries. Such a real-world demonstration of the Define and Gather steps at scale is a major and unique contribution of this work. Further, the optimization results were evaluated with an expansive dataset of real-world pandemic interventions and outcomes covering over 200 countries over two years, the first such dataset to be created. We will clarify these contributions in the paper.
_Response to “Fairness also needs to be considered in developing this framework.”:_
RHEA has mechanisms for fairness, by assigning probabilities to expert knowledge for recombination, and providing traces of where contributed knowledge comes from. However, we agree that it is possible for fairness issues to enter the system, especially if actors with bad intentions submit solutions (as also outlined in the response to Reviewer 1). Detecting and preventing such adversarial behavior is an interesting avenue for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for responding to my questions. Their explanations should be considered for inclusion in the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for appreciating our explanations. Would you consider increasing your score based on the impact of these explanations, along with the new comparisons to state-of-the-art MORL methods and theoretical justification? | Summary: The authors introduce a new evolutionary framework for combining expert policy predictions called RHEA and evaluate performance in both a synthetic domain that is highly interpretable and in a predictor from the XPRIZE Pandemic Response Challenge. The solutions are analyzed by qualitatively and quantitatively and compared against less sophisticated baselines.
Strengths: The paper is clearly written and focuses on an important problem. I liked that the authors explain the framework in the context of a highly interpretable example and then do significant qualitative and quantitative analysis on the solutions discovered by in the XPRIZE domain.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: I did not get the sense that the “policy intervention problem” is something that is well studied. How is the problem statement here different from a contextual bandit or similar formalisms (knapsack problems?)
Are there other baselines for this problem besides lesioned versions of the current framework?
What are the computational differences between RHEA and the baselines? Was each algorithm run until convergence? Is it possible to show learning curves?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I did not get the sense that the “policy intervention problem” is something that is well studied. How is the problem statement here different from a contextual bandit or similar formalisms (knapsack problems?) Are there other baselines for this problem besides lesioned versions of the current framework? What are the computational differences between RHEA and the baselines? Was each algorithm run until convergence? Is it possible to show learning curves?
*Response to Questions above:*
These clarifications are indeed useful; thank you for pointing them out. The problem that the Evolve portion of the framework solves is formulated as a general decision-making problem. It can indeed be framed as a multi-objective reinforcement learning (MORL) problem by casting the predictor as an environment that prescriptors interact with as they learn. We ran such an experiment for this response, and describe the results in our Main Response. In short, the state-of-the-art MORL methods do not do well on these kinds of problems because they have a hard time recombining blocks of knowledge in useful ways, whereas this process is natural in evolutionary methods.
In the pdf attachment to our Main Response, we provide learning curves for RHEA, Evolution, and the MORL methods. Each was given the same number of evaluations, i.e., calls to the predictor. In terms of wall-clock time, RHEA and Evolution were also much faster than the MORL methods, especially model-based methods that require expensive gradient updates. The theoretical analysis in the Main Response also addresses questions of method convergence.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Great! We are glad we were able to address all your questions. Would you be willing to increase your score based on the information provided, especially the new comparisons to state-of-the-art MORL methods and the new theoretical justification? Thanks! | Summary: This paper introduces RHEA (Realizing Human Expertise through AI), a framework for combining diverse human expertise to solve complex problems using artificial intelligence. The key contributions are:
* Recognizing the challenge of integrating diverse human expertise to solve global problems like those in public health.
* Identifying requirements for an AI process to effectively combine human expertise.
* Proposing the RHEA framework to meet these requirements, consisting of four steps: 1) Define the problem formally 2) Gather diverse expert solutions 3) Distill solutions into a canonical form (i.e., neural networks) 4) Evolve the distilled solutions to discover improved solutions
* Demonstrating RHEA on a synthetic example.
* Applying RHEA to a real-world problem: optimizing COVID-19 intervention policies.
* Analyzing, for that real-world problem, how RHEA recombines and innovates upon human expertise to discover improved solutions.
The paper shows (at least for their examples) that RHEA can discover broader and more effective policy strategies than either AI or human experts alone. It highlights RHEA's ability to realize latent potential in diverse human expertise, even from solutions that may not seem immediately useful. The authors argue this approach could help bridge the gap between human-only and AI-only decision-making for complex global problems.
Strengths: Originality: The paper presents a novel framework, RHEA (Realizing Human Expertise through AI), which creatively combines human expertise with an evolutionary framework. This approach is original in several ways: The framework introduces a process of distilling human-created solutions into a canonical form (neural networks) that can then be evolved using population-based search.
Quality: The authors provide a step-by-step explanation of the RHEA framework, though it’s unclear to me how immediately useful those steps are, as they are quite high level.
* The framework is tested first on a synthetic example making it easier to understand.
* The analysis of results on the COVID example is thorough, using multiple performance metrics and visualizations to support their claims.
* The authors acknowledge some of the limitations and potential issues
Clarity: The paper is generally well-structured and clearly presented:
* The paper is well written.
* The introduction effectively sets up the problem and the paper's contributions.
* The illustration is helpful for explaining the concept.
* The use of figures, particularly in explaining the RHEA framework and visualizing results, aids in understanding.
Significance: The paper's significance lies in its potential impact on AI research and applications:
* It addresses the important challenge of leveraging human expertise in AI systems, which is crucial for tackling complex global problems.
* The RHEA framework could be applied to a wide range of domains beyond COVID-19 policy optimization.
* The work demonstrates a practical approach to combining diverse expert knowledge, which could be valuable for collaborative problem-solving in various fields.
Weaknesses: Here's a substantive assessment of the weaknesses of the paper:
* Limited generalizability: The paper's primary application focuses on COVID policy optimization, which, while relevant, may not sufficiently demonstrate the framework's broad applicability. The authors could strengthen their claims by: a) Providing a theoretical analysis for why RHEA should work in other domains. b) Discussing potential applications in other fields, with specific examples of how RHEA might be adapted.
* Novelty: The evolutionary algorithm comes from other work so the idea is basically to initialize the parameters with distilled neural networks. This is a powerful way to solve this specific problem but given the limited analysis of the algorithm both empirically (when does it work) and theoretically, I am not sure if this work should be published at an ML conference or if it would be better at a more general purpose venue.
* Insufficient comparison to existing methods: While the paper compares RHEA to evolution alone and distilled models, it lacks comparison to other state-of-the-art methods in policy optimization or ensemble learning. The authors should: a) Include comparisons to relevant baseline methods (e.g., reinforcement learning approaches, other multi-objective optimization techniques). b) Discuss how RHEA compares to other methods of combining expert knowledge, such as boosting or stacking.
* Generalizability concerns: The paper does not sufficiently address what the operating regime of RHEA is. On what problems does it work? At what scale does it work? Does it work if you have thousands of experts?
* Lack of Theoretical guarantees: While the empirical results are promising, the paper lacks theoretical analysis of RHEA's properties. The authors could strengthen the paper by: a) Providing theoretical bounds on the performance of RHEA under certain conditions. b) Analyzing the convergence properties of the evolutionary process. c) Discussing how the distillation process affects the solution space and optimization landscape.
* The structure of the paper: Some important details are in the appendix while there is a lot of analysis of the specific solutions and details (like the carbon footprint) which should be in the appendix. For example, as the specific ways this work is technically novel are unclear, the authors should move some of the related work to the main text.
Addressing these weaknesses would significantly strengthen the paper, providing a more comprehensive and robust presentation of the RHEA framework and its potential impact.
Post rebuttal update:
Technical Quality: 3
Clarity: 3
Questions for Authors: * How confident are you that RHEA can be applied effectively to domains beyond COVID-19 policy optimization? Could you provide examples of other complex problems where you believe RHEA would be particularly effective, and explain why? And also where it would not be effective?
* Have you conducted any comparisons between RHEA and other state-of-the-art methods in policy optimization or ensemble learning? If so, what were the results? If not, which methods do you think would be most relevant for comparison?
* How does the computational complexity of RHEA scale with the number of experts and the complexity of the problem space? Are there any foreseeable bottlenecks in applying RHEA to much larger or more complex problems?
* Have you conducted any theoretical analysis of RHEA's properties, such as convergence guarantees or performance bounds? If so, could you elaborate on these? If not, what kind of theoretical results do you think would be most valuable to pursue?
* How interpretable are the solutions produced by RHEA compared to the original expert inputs? Can you provide a detailed case study of a specific evolved solution, explaining how it combines and improves upon the original expert knowledge?
* Could you provide more details on the distillation process? How do you ensure that the neural network accurately captures the essence of the expert's solution, especially for complex or nuanced strategies? For example, you find a Spearman correlation of 0.7, which does not seem particularly high to me. Also, I assume this 0.7 is on the test set? What is the Spearman correlation on the training set?
* Does RHEA have mechanisms to incorporate new expert knowledge or feedback over time? How adaptable is the framework to changing conditions or new information?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss the potential for approximation errors when applying the policies to the real world.
The paper also mentions the need for user studies to fully evaluate the real-world effectiveness of RHEA prescriptors. The authors recognize that their cost measure was uniform over interventions, which may not reflect real-world scenarios.
I think this is generally a good set of limitations. Potential areas where the authors could improve:
* The authors could explore more deeply the potential long-term societal impacts of widespread adoption of AI-assisted policy-making tools like RHEA.
* While briefly mentioned, the paper could elaborate on how to ensure the RHEA process remains transparent and interpretable to non-expert stakeholders.
* The limitations section could address how RHEA might handle rapidly changing conditions or unexpected scenarios not covered in the initial expert knowledge.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Our responses to remaining comments are grouped by topic:
_NeurIPS vs. general science venue:_
We agree that this approach could be well-appreciated by broad audiences. Indeed, NeurIPS encourages application papers that have broad appeal, and we believe the introduction of this framework to the NeurIPS community could catalyze ML researchers to consider practical concerns that lead to interesting ML problems to solve. Consequently, in the revision we will add additional comparisons to multi-objective reinforcement learning (MORL) methods, expand the theoretical analysis, and showcase technical details and context in the main paper, thus strengthening this impact.
_Comparisons:_
W.r.t. comparisons to ensembling methods, the MoE comparison in the Illustrative Domain (Figure 2) serves as a multi-objective Oracle for stacking, since it shows the performance of a stacker that optimally selects experts based on context. Boosting does not apply to our problem: it relies on experts having access to each other’s models, whereas we assume that the expert models are submitted in a single tranche. We will clarify these points in the paper. This also relates to your question on adaptively adding new experts during optimization, which is discussed below.
_Paper structure:_
Thank you for this feedback. We will move the most relevant portions of the related work to the main text, specifically the discussion of multi-objective reinforcement learning and ensembling methods. We will also move technical methodological details to the main paper, such as those on distillation, to make the main paper technically self-contained.
_Scaling to more complex problems:_
The population-based search used by RHEA extends readily to thousands of experts; the theoretically optimal population size is often quite large [4, 5]. We will note these details in the final version of the paper.
Bottlenecks could come from adversarial attacks in submissions based on overwhelming the system with non-useful experts. In this paper we have assumed that the humans submitting solutions are well-intentioned. We will mention this potential concern in the discussion.
_Interpretable case study:_
First, the illustrative domain provides a clear case study exactly along the lines suggested. Second, in the COVID-19 domain we get at this quantitatively through the topological analysis of the final solutions (Figure 4) and evolutionary dynamics (Figure 5). Figure 4 illustrates several complete schedules generated from RHEA solutions, and Figure 5 shows how evolved RHEA solutions tend to lie between their parents in objective space. We will add a detailed case study of a single solution from Figure 4 and its expert parents to make the interpretation more detailed and concrete.
_Distillation process:_
The process is detailed in the Appendix; details will be moved to the main paper. While other implementations are possible, this one is simple, intuitive, and was shown to work well based on the quantitative results in the overall framework. In the future, we can investigate different distillation processes and their effects on the overall framework. In this kind of real-world scenario, correlation much closer to 1.0 is unlikely, since many solutions are close together in objective space, and may have different positions on the Pareto front depending on the evaluation context.
Full autoregressive schedule roll-outs were not generated in the Distill step, as the models were trained on next-step prediction. The mean MAE over distilled models on both the training and validation splits of the distillation dataset was ~0.1, i.e., the mean difference between the actual and predicted actions (which could range from 0-5) was small.
_Online adaptation:_
This is a great avenue for future work! The present implementation of RHEA naturally supports the incorporation of new expert knowledge over time, since it maintains a population of solutions over time, so new expert solutions could at any time be distilled and inserted into the populations. Since it is likely that the objective quality of the new knowledge falls somewhat behind the current Pareto front, a mechanism might be required to ensure the new knowledge survives for long enough in the population to be effectively exploited.
It would also be possible to update the predictor based on new data, as has been demonstrated in earlier work on evolutionary surrogate-assisted prescription [Francon et al 2020], and even increase the number of context or action variables online (e.g., in the case of pandemic interventions, new kinds of interventions could be implemented).
_Long term societal impacts:_
This is a very interesting social question. One risk with AI-assisted policy-making tools is that as human users come to rely on them more there may be less incentive to gather useful and diverse knowledge from human experts. The hope with RHEA is that, by centering the value of soliciting and gathering human expert knowledge, humans will be rewarded for the perspectives they share, and it will encourage expert diversity to thrive. We will clarify this discussion of risk/reward trade-off in the discussion of the paper.
_Transparency:_
RHEA can already assign contribution percentages of submitted experts to final solutions, as depicted in Figure 5. One can further imagine an interface, where upon selecting a solution for inspection, a decision-maker is alerted to who contributed to this solution, at which point they have the opportunity to investigate the contribution, e.g., raise a flag if many selected solutions have the majority of their contribution from a single actor or small group of actors. Analysis of such a scenario could reveal outstanding objective expertise by this group, or some unintended bias in the system. We will clarify this point in the discussion.
Thank you again for articulating these issues. We agree that our response strengthens the work and its impact. | null | null | Rebuttal 1:
Rebuttal: Main Response:
Thank you to the reviewers for the feedback. We were glad the reviewers agreed that the work was valuable, and suggested how it could be strengthened further by addressing some outstanding questions. This main response focuses on three main points brought up by multiple reviewers:
1. Comparisons: We have performed further experimental comparisons to multi-objective reinforcement learning (MORL) methods, highlighting how they fall short, creating a research opportunity for RHEA.
2. Theory: We have provided theoretical justification for the main methodology of evolutionary optimization.
3. Generalizability: We have clarified the scope of the method and detailed how it can be applied to other domains.
**Comparisons**
In response to questions about additional baselines, we performed comparisons to a suite of MORL techniques [1] in the Illustrative domain. We ran preliminary tests with several of the recent algorithms, namely, GPI-LS [2], GPI-PD [2], and Envelope Q-Learning [3]. Due to computational constraints, we then focused on GPI-LS for scaling up to larger action spaces because (1) it has the best recorded results in this kind of domain [1], and (2) none of the other MORL methods in the suite were able to outperform GPI-LS in our experiments.
In short, the baseline multi-objective evolution method strongly outperforms MORL (see plots in rebuttal pdf). The reason is that evolution inherently recombines blocks of knowledge, whereas MORL techniques struggle when there is no clear gradient of improvement. We will add this discussion and the code for the MORL comparisons to the paper and its online supplement. Further, we will move the most critical portions of Related Work from the Appendix to the main paper to clarify the motivation for these comparisons.
**Theory**
The theory will depend on the particular implementation of the RHEA framework. For this paper, we can provide a theoretical analysis based on recent convergence analysis of NSGA-II [4, 5], which is the multi-objective evolutionary algorithm used in the paper.
In theoretical settings, the performance of this algorithm has been shown to depend critically on the size of “jumps” in the optimization landscape, roughly, the maximum size of non-convex regions in the landscape. When these regions are minimal, the method converges to the full ground truth Pareto front in $O(N n \lg n)$ evaluations. When the jump size increases, the best known bound is $O(N^2 n^k / \Theta(k)^k)$, where $k$ is a measure of the jump size, $n$ is the problem dimensionality, and $N$ is the population size. In other words, an increasing jump size causes a roughly exponential slowdown of convergence.
Distilling useful, diverse experts can be viewed as a way of decreasing the jump size. This process is clearly shown in the illustrative domain, where the experts provide building blocks that can be immediately recombined to discover better solutions, but that are difficult to discover from scratch. This interpretation is borne out in the experiments, as RHEA continues to converge quickly as the action space (i.e. problem dimensionality) increases, whereas evolution regresses to only being able to discover the most convex (easily-discoverable) portions of the Pareto front.
We will articulate this theoretical framework and the requisite assumptions fully in the revision.
**Generalizability**
RHEA can be applied effectively to policy-discovery domains where (1) the problem can be formalized with contexts, actions, and outcomes; (2) there exist diverse experts from which solutions can be gathered; and (3) the problem is sufficiently challenging. In contrast, RHEA would not be effective, (1) if the problem is too easy, so that the input from human experts would not be necessary; (2) if the problem is hard, but no useful and diverse experts exist; (3) if there is no clear way to define context and/or action variables upon which the experts agree. These aspects of generality will be clarified in the revision.
The modularity of the overall framework means that different implementations of components can be designed for different domains, such as those related to sustainability, engineering design, and public health. One particularly exciting opportunity for RHEA mentioned several times in the paper is climate policy. The paper gives the example of defining “green hydrogen” as a concrete example of the kind of challenge RHEA could solve, but this could be just a small part of a climate policy application. For example, the En-Roads climate simulator supports diverse _actions_ across energy policy, technology, and investment; _contexts_ based on social, economic, and environmental trajectories; and multiple competing _outcomes_, including global temperature, cost of energy, and sea-level rise (https://en-roads.climateinteractive.org/). Users craft policies based on their unique priorities and expertise. RHEA could be used with a predictor like En-Roads to discover optimized combinations of expert climate policies that trade off across temperature change and the other outcomes that users care about most. We will add this discussion to the paper.
A point-by-point response to remaining comments in each review is provided below. The same reference numbers are used throughout the responses.
[1] Felten, et al. “A Toolkit for Reliable Benchmarking and Research in Multi-Objective Reinforcement Learning”. NeurIPS 2023. https://github.com/LucasAlegre/morl-baselines.
[2] Alegre, et al. “Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization”. AAMAS 2023.
[3] Yang, Sun, and Narasimhan. “A Generalized Algorithm for Multi-Objective Reinforcement Learning and Policy Adaptation”. NeurIPS 2019.
[4] Doerr and Qu. “From Understanding the Population Dynamics of the NSGA-II to the First Proven Lower Bounds”. AAAI 2023.
[5] Doerr and Qu. “Runtime Analysis for the NSGA-II: Provable Speed-Ups from Crossover”. AAAI 2023.
Pdf: /pdf/a2ff91bdceb2d84bdc1c1f7f16570a04a74a3a19.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Constant Acceleration Flow | Accept (poster) | Summary: The paper presents an extension to rectified flow by Liu et al., 2023, a method based on optimal transport that can match samples from two distributions. This class of methods may have different uses but is often employed for the efficient generation of new samples and for morphing one sample to another. As such, rectified flow and the extension proposed in this paper may be seen as finding efficient trajectories for use during generation by diffusion models. The method solves the transport flow with a simple Euler approximation of the ODE describing the flow. The paper identifies crossing of flow trajectories as a major obstacle to generating efficient paths in rectified flow and proposes constant acceleration flow (CAF). The paper proposes Initial Velocity Conditioning to initialize the acceleration field when solving the ODE. Rectified flow uses constant velocity to solve the ODE, with a reflow procedure to optimize the paths in a second step. Similarly, the proposed CAF also uses reflow to improve the initial paths. The paper reports an experimental evaluation of the model with synthetic distributions and with CIFAR-10. The experiments demonstrate improved flow between distributions for synthetic data and improved FID score on CIFAR-10. The paper also reports an increase in straightness of trajectories.
Strengths: The constant acceleration flow is novel to the best of my knowledge.
The paper describes the Initial Velocity Conditioning and related steps to make the approach work on CiFAR-10. The approach is suitably summarized in pseudo-code in two algorithms.
The synthetic experiments demonstrate the paper's motivation visually.
The ODE framework for the transport and the forward Euler solver makes the steps in the procedure easy to follow. The analogy makes it easy to see that constant velocity and constant acceleration lead to straight trajectories if the initial coupling of the samples x_0 and x_1 is correct.
A comparison with many alternative methods is provided in Table 1.
The proposed CAF visually leads to improved quality and stability on CIFAR-10 over rectified flow.
Weaknesses: The paper presents arguably a substantial improvement over results obtainable with rectified flow in terms of quality, at the price of increased computation. The paper does not attempt any argument for why this is significant, i.e., whether the improved quality has the potential to lead to wider adoption of rectified-flow-type methods. The paper has no real-world use case to motivate the work. The claim that experiments with CIFAR-10 at 32x32 pixels constitute a real-world example seems far-fetched.
The initialization of the method and its impact on the result are not well explored. The initial velocity calculation requires a sample x_1 drawn from the target distribution. Both the paper under consideration and rectified flow employ a pre-trained generative model to obtain that sample. The quality of the initial samples may impact the results of CAF. The authors should provide evidence to the contrary or provide an experimental evaluation of the influence.
There is no theoretical justification for why constant acceleration is superior to constant-velocity flow generation. While the experimental evaluation appears clear, it remains unclear whether this behavior is due to the tasks considered or holds more broadly.
The paper claims reflow as a contribution, but it was already proposed in rectified flow. The same is true for the measurement of flow-trajectory straightness with the Normalized Flow Straightness Score (NFSS) in Eqn 12.
The paper does not discuss how CIFAR-10 is used. One can speculate, based on earlier work such as GLOW, that unconditional generation and conditional generation may refer to the use of class conditioning, but a concise description of the experimental setup is required.
The use of a pre-trained generative model should also be taken into account in the required computational effort during training. It may be application-dependent whether such a model is available or whether it needs to be trained.
Technical Quality: 3
Clarity: 3
Questions for Authors: A discussion of the use of rectified flow on real-world tasks would enhance the justification for the proposed CAF. Can such examples be provided and discussed?
Are there other options to provide the initial velocity field rather than using a pre-trained generative model?
The paper is generally well written. The following are minor typos:
l. 20 a huge computation burden -> a large computational burden
l. 128 thet -> the
l. 176 Experiment -> Experiments or Experimental Evaluation
l. 188 128-dimensional dimensions
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations section should be moved from the appendix into the main paper as the computational impact of CAF over rectified flow is essential.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate the reviewer's effort for the detailed feedback and for spotting the typos. We will make corrections in the final version.
**Q.1 [Significance of work & real-world application]**
**Response to Q.1**
In response to the concern regarding the significance of our method, we would like to emphasize three key points:
- **[Fast sampling]:** To address the slow inference of diffusion models, there have been efforts to reduce inference steps at the cost of additional training, such as CM and CTM. Our work shares this motivation, effectively addressing slow inference without compromising quality. We believe our work has substantial potential to make a generative model both fast and accurate, enhancing its applicability.
- **[Efficient editing]:** Inversion has been an essential technique for real-world applications such as image/video editing [1,2]. However, current methods require 25-100 steps for accurate inversion, whereas our method can significantly reduce the inference time by a few-step inversion. We demonstrate this in two additional tasks: *reconstruction* and *box inpainting*. For the results, please refer to **4. [Inversion & Zero-shot image editing]** in the "**Author Rebuttal**" above.
- **[ImageNet 64x64 results]:** For real-world examples, we provide additional results on ImageNet 64x64. These results demonstrate the broader applicability of our framework. For the quantitative results, please refer to **1. [ImageNet 64x64 results]** in the “**Author Rebuttal**” above.
---
**Q.2 [Impact of pre-trained model]**
**Response to Q.2**
As the reviewer commented, the performance of CAF can be influenced by the quality of the pre-trained model. This phenomenon is common in other distillation methods, where the student's performance is bounded by the teacher's. To address this, we have incorporated an auxiliary adversarial loss with real data. This auxiliary loss can be considered as the divergence between the real data distribution and the learned distribution, helping the model surpass the performance of the pre-trained model. Our empirical results in the paper also support this, where CAF’s performance with distillation (FID 1.7) surpasses EDM (FID 2.01).
---
**Q.3 [Why constant acceleration is better than constant velocity?]**
**Response to Q.3**
CAF employs 2nd-order momentum by incorporating acceleration (2nd derivative of position w.r.t time). In contrast, constant velocity flow relies on 1st-order momentum, representing only the velocity. The inclusion of the 2nd-order term in CAF enables it to account for the time-varying nature of velocity, providing a more expressive approximation of the dynamics. This is analogous to higher-order terms in a Taylor series expansion, leading to lower numerical errors during the sampling phase. The superiority of higher-order schemes in reducing numerical errors is well-documented in numerical analysis literature (see LeVeque, 2002, Finite Volume Methods for Hyperbolic Problems).
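To make the truncation-error argument concrete, here is a minimal numerical sketch on a toy 1-D trajectory (purely illustrative, not the learned flow), comparing one large constant-velocity step against one large constant-acceleration step:

```python
import math

# Toy trajectory x(t) = cos(t): its velocity varies in time, so a
# constant-velocity (1st-order) step accrues more error than a
# constant-acceleration (2nd-order) step over the same interval.
def x(t): return math.cos(t)     # true position
def v(t): return -math.sin(t)    # true velocity,      dx/dt
def a(t): return -math.cos(t)    # true acceleration,  d^2x/dt^2

T = 0.5  # one large "sampling" step from t = 0 to t = T

x_const_vel = x(0) + v(0) * T                        # 1st-order update
x_const_acc = x(0) + v(0) * T + 0.5 * a(0) * T ** 2  # 2nd-order update

err_vel = abs(x(T) - x_const_vel)  # ~1.2e-1
err_acc = abs(x(T) - x_const_acc)  # ~2.6e-3
```

The 2nd-order step is roughly 50x more accurate here, mirroring the Taylor-expansion intuition above.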
---
**Q.4 [Contribution of reflow and NFSS]**
**Response to Q.4**
Thank you for your feedback. We would like to clarify that we do *not* intend to claim credit for the reflow procedure or the measurement of straightness, both of which were introduced in the rectified flow. In our paper, we have provided appropriate references and discussions of these concepts in the abstract, introduction, and preliminary sections to acknowledge their original sources and their use in our work. Our primary contributions lie in the development of the CAF framework, which leverages these established techniques to achieve and measure improved performance. We will ensure that our references and discussions are appropriately highlighted to avoid any misunderstanding.
---
**Q.5 [Details of CIFAR10]**
**Response to Q.5**
We will include the details in the revised paper. For training, we generated 12 million pairs from a pre-trained EDM model using 10 class labels. We do not use class labels for unconditional training. For class conditional training, the class labels are transformed into one-hot class embeddings and then element-wise added to the time-step embeddings.
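A minimal sketch of the conditioning described above. The embedding width and the learned projection `W` are illustrative assumptions (the response does not specify them); only the one-hot encoding and the element-wise addition to the time-step embedding come from the description:

```python
import numpy as np

num_classes, emb_dim = 10, 128  # CIFAR-10 classes; emb_dim is assumed
rng = np.random.default_rng(0)
W = rng.normal(size=(num_classes, emb_dim))  # stand-in for a learned class embedding

def condition(t_emb, labels):
    """Add one-hot class embeddings element-wise to time-step embeddings."""
    one_hot = np.eye(num_classes)[labels]  # (batch, num_classes)
    return t_emb + one_hot @ W             # (batch, emb_dim)
```

For unconditional training, the addition is simply skipped.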
---
**Q.6 [Additional training efforts for the pre-trained model]**
**Response to Q.6**
We acknowledge that the use of a pre-trained model should be considered for the required computational effort during training. To clarify this, we have considered the following cases:
- **[With pre-trained models]:** In scenarios where pre-trained models are readily available, leveraging them can significantly reduce the overall training time and computational resources required.
- **[Without pre-trained model]:** In scenarios where pre-trained models are not available, our CAF can be trained from scratch to serve as a model that generates deterministic couplings. To show this, we conducted experiments where CAF was trained solely on real data without any pre-trained model. For the results, please refer to section **2. [Training without pre-trained model]** in the "**Author Rebuttal**". This approach increases the computational effort but effectively addresses the scenario where pre-trained models are not available.
We will include these discussions in the revised version of our paper to clarify the required computational effort during training.
---
**Q.7 [Real-world applications]**
**Response to Q.7**
Please refer to our response to **Q.1 [Significance of work & real-world application].**
---
**Q.8 [Other options to provide the initial velocity]**
**Response to Q.8**
Thank you for pointing out possible directions for future study. We believe that improving generation quality in few-step regimes without relying on reflow is a promising direction for future research.
---
[1] Mokady, Ron, et al, Null-text inversion for editing real images using guided diffusion models, CVPR2023 \
[2] Huberman-Spiegelglas, Inbar, et al, An edit friendly DDPM noise space: Inversion and manipulations, CVPR2024
---
Rebuttal Comment 1.1:
Title: Rebuttal well-addressed my concerns
Comment: Thank you for the detailed rebuttal.
Based on the explanations and especially the demonstrations of additional applications, I have revised my recommendation to weak accept.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer’s response and are truly thankful for the constructive feedback that has helped enhance our work. Thank you for dedicating your time and effort to providing us with such valuable insights.
---
Summary: This work proposes Constant Acceleration Flow (CAF), which, instead of learning a velocity-based flow model like in Flow Matching, jointly trains an initial velocity model together with a constant time-dependent acceleration model. This framework aims to learn straighter paths and thus enable better results for few-step generation.
Additionally, they propose to condition the acceleration model on the initial velocity and propose to improve the initial velocity using a reflow procedure. CAF is empirically validated on toy data and CIFAR-10, including ablations on the different elements of the proposed framework.
Strengths: - The Constant Acceleration Flow is a new formulation for achieving straighter trajectories in flow-based generative models derived based on the assumption of constant acceleration.
- Good empirical results on CIFAR-10
- Includes important ablations on the velocity conditioning as well as on the magnitude of the initial velocity
Weaknesses: "Reflow" for initial velocity:
- The reflow procedure was proposed to straighten the paths of an existing generative model. The authors propose to use a "reflow" procedure for training CAF. However, they don't generate the couplings with a model trained with CAF but instead with a pre-trained EDM model, which is very different from how the reflow procedure was proposed. Additionally, it is unclear how dependent CAF is on this procedure and, thus, from the pre-trained generative model. In the CIFAR-10 ablation, the setting with constant acceleration and velocity conditioning but no "reflow" procedure (and thus no pre-trained generative model) is crucial to answer the question of whether CAF also works without a pre-trained model. If it does not, then I would argue that the proposed framework is more of a new distillation technique rather than a new generative model. This contextualization should also be more clearly explained in the main text.
Missing discussion and contextualization of related work:
- Acceleration Generative Modeling has been proposed in [1]. While CAF is based on Flow Matching and a constant velocity is chosen, [1] considers Bridge Matching in a stochastic optimal control framework with a changing velocity induced by the acceleration prediction. However, their trained model is also a parameterized acceleration prediction that takes as input the current data point $x_t$, current time $t$, and the current velocity $v_t$ effectively conditioning on the velocity. This framework is very related to the approach proposed by the authors, and the connection should be discussed in detail in the main paper. Additionally, an experimental comparison illustrating the differences between the two approaches could be beneficial.
- Coupling preservation through initial velocity conditioning: The authors refer to the reflow approach as existing related work. However, the reflow approach was proposed to straighten paths, not preserve couplings. Moreover, the problem of coupling preservation has been thoroughly analyzed in [2], which proposes to use a source point conditioning, i.e. conditioning on $x_0$. How does Augmented Flow/Bridge Matching (Flow/Bridge Matching with $x_0$ conditioning) compare to CAF? As mentioned, velocity conditioning is also used in [1].
- A possible way to empirically compare to these competing methods would be to extend the ablation study on CIFAR-10 to include [1] and [2].
Minor Weaknesses:
- Experiments are only conducted on CIFAR-10. Including at least one more dataset, e.g. 64x64 ImageNet/CelebA/etc, could strengthen the empirical results.
[1] Tianrong Chen and Jiatao Gu and Laurent Dinh and Evangelos A. Theodorou and Joshua Susskind and Shuangfei Zhai. "Generative Modeling with Phase Stochastic Bridges". In ICLR 2024.
[2] Valentin De Bortoli and Guan-Horng Liu and Tianrong Chen and Evangelos A. Theodorou and Weilie Nie. "Augmented Bridge Matching". In Arxiv 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: - As mentioned above, how dependent is CAF on the pre-trained model? Does it also work well without any pre-trained model?
- Does the 2-Rectified Flow comparison for the CIFAR-10 experiments also use generated samples by EDM for its first training round?
- Can the velocity and acceleration models be trained separately? E.g. first train the initial velocity model and then train the acceleration model?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: - Again, how dependent is CAF on the pre-trained model?
- Training both an acceleration and a velocity prediction model effectively doubles the amount of total parameters. This could be mentioned as another limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We truly appreciate the reviewer for the constructive feedback. We would like to clarify the reasoning behind our approach and address the concerns raised.
**Q.1** **[Why use EDM for reflow?]**
**Response to Q.1**
In response to the reviewer’s concern regarding the reflow procedure, we would like to emphasize that the reflow procedure can be flexibly applied to any pre-trained diffusion model, which is a notable advantage of reflow, as demonstrated in InstaFlow [1]. Based on these findings and the inherent flexibility of the reflow procedure, we utilized the pre-trained EDM model to ensure a fair comparison against other fast sampling models, such as CM and CTM, which also utilized pre-trained EDM models.
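For illustration, generating reflow couplings from any pre-trained deterministic sampler can be sketched as follows. `teacher_sample` is a stand-in for an EDM-like ODE sampler, not the actual EDM code:

```python
import numpy as np

def make_pairs(teacher_sample, n_pairs, shape, seed=0):
    """Draw source noise and pair it with the teacher's deterministic output."""
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal((n_pairs, *shape))  # source noise z ~ N(0, I)
    x1 = teacher_sample(x0)                      # deterministic: same z -> same sample
    return x0, x1                                # (noise, image) couplings
```

Because only the teacher's deterministic sampling interface is needed, the same procedure applies to any pre-trained diffusion model, which is the flexibility noted above.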
---
**Q.2 [Contextualization of work]**
The proposed framework seems to be more of a new distillation technique than a new generative model. This contextualization should also be more clearly explained in the main text.
**Response to Q.2** We acknowledge the importance of contextualizing the method. We will include the below discussions and contextualize our work more clearly in the main text of the final version. To address the reviewer’s concern comprehensively, we have considered two key points:
- **[Fast sampling as our main motivation]**: We would like to emphasize that our primary focus is to propose an ODE framework based on Rectified Flow that enables fast generation with high accuracy (as stated in the abstract, introduction, and preliminary sections), rather than introducing a new generative model family. To achieve this goal, we have proposed three techniques: constant acceleration, initial velocity conditioning (IVC), and reflow. Each of these techniques independently improves the estimation accuracy for single-step generation, as demonstrated in Table 2 of our paper.
- **[CAF as an independent framework]**: To address the reviewer’s concern regarding the reliance on reflow, we conducted experiments where CAF was trained solely from real data (CIFAR-10) without the deterministic couplings generated from the pre-trained model (denoted as 1-CAF). For the quantitative results, please refer to section **2. [Training without pre-trained model]** in the "**Author Rebuttal**". The results show that 1-CAF effectively learns the real data distribution without reflow, even outperforming 1-Rectified Flow (RF). However, similar to 1-RF, 1-CAF loses its accuracy in extremely few-step regimes. We believe that improving generation quality in these regimes without the deterministic couplings is an interesting direction for future research.
---
**Q.3 [Missing discussion with AGM]**
**Response to Q.3** Thank you for bringing up the related work that we have missed. We found that AGM elegantly formulates a new generative model using the acceleration term based on SOC theory. Since this concurrent work was published a few weeks before our submission, we unfortunately missed this important contribution. We will ensure that discussions of AGM and related works are included in the revised paper.
The main difference between AGM and CAF is that CAF assumes constant acceleration, whereas AGM predicts time-dependent acceleration. Our constant acceleration framework leads to a simpler sampling procedure (Eq. 11 in our paper) based on an analytically closed-form solution (Eq. 5 in our paper). This closed-form solution enables few-step (N<3) sampling with high accuracy when learned with deterministic couplings. To demonstrate this, we additionally compare the generation results of AGM and CAF on CIFAR-10. For the quantitative results, please refer to section **3. [Comparison with AGM]** in the "**Author Rebuttal**". The result demonstrates that CAF outperforms AGM in terms of FID scores in an extremely few-step regime. Moreover, we provide qualitative comparisons between CAF and AGM in the attached PDF file, where CAF generates images with more high-frequency details than AGM for few-step sampling. These results highlight the distinct advantages of our framework over AGM, particularly in terms of single-step sampling accuracy.
---
**Q.4 [Missing discussion with AugBM]**
**Response to Q.4**
Thank you for pointing out the related work that we have missed. We recognize that AugBM shares a similar motivation with our method in preserving couplings and addressing flow (bridge) crossing by conditioning auxiliary information to the network. However, while AugBM aims to improve coupling preservation specifically for image-to-image translation tasks, our main contribution lies in developing an ODE framework that enables fast sampling with high accuracy. We will include a detailed discussion of AugBM and related works in the revised paper to provide a more comprehensive comparison.
---
**Q.5 [Training setting of 2-Rectified Flow]**
**Response to Q.5**
Yes, in our experiments, we used the same dataset generated by the pre-trained EDM for both CAF and RF.
---
**Q.6 [Separate training]**
**Response to Q.6**
Thank you for the insightful suggestion. We investigated the possibility of training the velocity and acceleration models separately. In our experiments on the CIFAR-10 conditional setting, we found that training each network separately led to more stable training dynamics, improving the FID score from 4.18 to 3.32.
---
**Q.7 [Computational cost]**
**Response to Q.7** We agree with the reviewer that our framework requires additional learnable parameters for both the acceleration and velocity prediction models. This can be a limitation, particularly for low-resource systems. One potential mitigation strategy could be training a single model to learn both velocity and acceleration by conditioning the model appropriately. We will include this discussion in the limitations section of the revised paper.
---
[1] Liu, Xingchao, et al. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation, ICLR 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answers and added experiments.
> ***The result demonstrates that CAF outperforms AGM in terms of FID scores in an extremely few-step regime.***
For the rebuttal, the authors compare CAF with reflow vs AGM without reflow. I believe the contribution of the [i] constant acceleration flow should be disentangled from the [ii] use of reflow. [ii] Reflow should also significantly improve the few-step results of AGM. To fairly compare the two frameworks, a comparison of just [i] vs AGM is needed or/and a comparison of [i] + [ii] vs AGM + [ii]. In my opinion, the reported results in the author rebuttal do not constitute a fair comparison between AGM and CAF.
> ***This closed-form solution enables few-step (N<3) sampling with high accuracy when learned with deterministic couplings.***
Is there an intuition of why this closed-form enables few-step with deterministic coupling over AGM with deterministic couplings? As mentioned above, this should also be confirmed empirically.
> ***Since this concurrent work was published a few weeks before our submission.***
AGM was uploaded to Arxiv October 2023, which does not make it concurrent given the NeurIPS Contemporaneous Work guidelines but previous work.
---
Rebuttal 2:
Comment: Thank you for the response. We'd like to clarify the additional questions the reviewer raised.
> ***AGM was uploaded to Arxiv October 2023, which does not make it concurrent given the NeurIPS Contemporaneous Work guidelines but previous work.***
First, we would like to clarify and sincerely apologize for any confusion: at the time of our submission, we were *genuinely unaware* of the AGM. As mentioned in our previous response, we fully recognize the AGM's contribution to the field of acceleration modeling and will ensure that discussions of AGM and related works are included in the revised paper.
> ***Is there an intuition of why this closed-form enables few-step with deterministic coupling over AGM with deterministic couplings?***
Since the CAF ODE assumes that the acceleration term is *constant* with respect to time, there is no need to iteratively solve complex time-dependent differential equations. This simplification allows for a direct closed-form solution that supports efficient and accurate sampling in just a few steps, given that the learned velocity and acceleration models are sufficiently accurate.
In contrast, AGM’s acceleration term is *time-varying*. This variability means that the differential equation cannot be simplified in the same way, often necessitating multiple iterative steps to approximate the true solution accurately. The constant acceleration assumption in CAF is similar to the concept of rectified flow (constant velocity), which is known to make solving differential equations more efficient than in diffusion models.
Below are the details:
In CAF ODE (equation 4 in the paper), the solution for the final sample is given by:
$x_1 = x_0 + \int_0^1 \left( v(x_0,0) + a(x_t,t)\, t \right) dt = x_0 + v(x_0,0) + \int_0^1 a(x_t,t)\, t \, dt$
Thanks to the constant acceleration assumption, the integral simplifies to:
$x_1 = x_0 + v(x_0,0) + a(x_0,0)\int_0^1 t \, dt = x_0 + v(x_0,0) + \frac{1}{2}a(x_0,0)$
This result corresponds to the one-step sampling version of equation 11 in our paper.
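A hedged sketch of this sampler (our reading of the update described above; `v0` and `a` are plain callables standing in for the learned initial-velocity and acceleration networks):

```python
def caf_sample(x0, v0, a, n_steps=1):
    """Integrate dx/dt = v0(x0) + a(x, t) * t over [0, 1] in n_steps updates,
    holding the acceleration constant within each step (closed-form per step)."""
    x, t = x0, 0.0
    dt = 1.0 / n_steps
    v_init = v0(x0)  # initial velocity is fixed at t = 0
    for _ in range(n_steps):
        # exact update over [t, t + dt] under constant acceleration:
        x = x + v_init * dt + 0.5 * a(x, t) * ((t + dt) ** 2 - t ** 2)
        t += dt
    return x
```

With `n_steps=1` this reduces to `x0 + v0(x0) + a(x0, 0) / 2`, matching the one-step solution derived above.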
> ***For the rebuttal, the authors compare CAF with reflow vs AGM without reflow. I believe the contribution of the [i] constant acceleration flow should be disentangled from the [ii] use of reflow. [ii] Reflow should also significantly improve the few-step results of AGM. To fairly compare the two frameworks, a comparison of just [i] vs AGM is needed or/and a comparison of [i] + [ii] vs AGM + [ii]. In my opinion, the reported results in the author rebuttal do not constitute a fair comparison between AGM and CAF.***
Thank you for your valuable feedback. We understand your concern regarding the need to disentangle the use of the reflow.
First, we would like to clarify the rationale behind our comparisons. In our proposed method, the reflow procedure is an essential component designed to achieve an accurate solution of our CAF ODE. AGM, on the other hand, does not incorporate reflow in its original methodology. In our rebuttal, we have fully utilized the AGM method in its **standard form** as proposed by the authors, using their official code without making any alterations. This approach reflects the typical use cases and capabilities of both methods as they were originally intended to be used.
To address the reviewer's concern, we conducted additional experiments where AGM was trained with deterministic couplings generated from EDM (the same as our reflow setting). We replaced $\epsilon_0$ and $x_1$ in AGM with noise-image pairs from EDM and followed the experimental setup in the official AGM code. The results are summarized in the table below:
| | N | FID $\downarrow$ |
|------------------------|:-:|:-----:|
| AGM-ODE without reflow | 5 | 11.88 |
| AGM-ODE with reflow | 5 | 15.23 |
| CAF | 1 | 5.15 |
In fact, incorporating reflow into AGM did not necessarily improve its performance in the few-step regime. We suspect this is because AGM may require noise-velocity-image triplets. This indicates that effectively integrating the reflow mechanism within the AGM framework is not straightforward with a pre-trained diffusion model; building these triplets requires extra effort beyond the scope of our current comparison.
---
We hope our explanations address the reviewer’s concerns regarding AGM.
---
Rebuttal 3:
Comment: Dear Reviewer rYLJ,
We sincerely appreciate your dedication and constructive feedback on our work. As the reviewer-author discussion period ends soon, we would like to kindly ask if you have any remaining concerns or questions.
Best regards, \
Authors
---
Rebuttal Comment 3.1:
Comment: Thanks for the further explanations and results. This further clarified my concerns and I've decided to raise my score 3 -> 4. | Summary: This paper develops a new flow based generative model whose vector field is constructed similar to rectified flows (straight path between source and target samples), but instead of using a path with a constant speed, uses a path with constant acceleration. This requires learning a neural network to parametrize the initial velocity field and another neural network to parametrize the acceleration field. To improve their model's performance, the authors also propose parametrizing the acceleration neural network using the initial velocity and also applying reflows to the initial velocity. The proposed method helps mitigate the flow crossing problem at a model level, which should in theory avoid excess reflows to learn non-crossed flows. The experiments section demonstrates the ability of the proposed method to learn high performing generative models on single step generation on the CIFAR-10 dataset.
Strengths: The idea is novel, solves a relevant problem and fits nicely into the current landscape of the flow based generative modeling research area. The paper was easy to read and the method itself is simple enough to understand and implement after a single pass over the paper. The main contribution of the paper, in my opinion, is the conditioning of the acceleration neural network on the initial velocity. This gives the final trained model a non-markov path measure, which is something that is of current interest to the field (Augmented Bridge Matching, De Bortoli et al., 2023).
Weaknesses: - The related work should include references to building diffusion models with non-markov path measures (see De Bortoli et al., 2023 and its related work).
- For completeness, the paper should include proofs to show that the learned distribution is the same as the data distribution. This follows directly from interpreting Eq. 2 as a conditional flow matching objective (Lipman et al., 2023).
- In the initial presentation of the vector field $a(x_t,t)$ at the start of section $4.1$, it would make more sense to drop the dependence on $t$ to emphasize that the ground truth acceleration field is constant in time.
- The scope of the empirical evaluation is limited. The authors should have evaluated on more datasets than just CIFAR-10. There are other small scale image datasets, like ImageNet-32, that could have been evaluated.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is there a reason why algorithm 1, line 6, updates $\theta$ before computing the loss for $a_\phi$?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **Q.1 [Missing discussion with AugBM]** The related work should include references to building diffusion models with non-markov path measures (see De Bortoli et al., 2023 and its related work).
**Response to Q.1**
We appreciate the reviewer for pointing out the related works that we have missed. We recognize that AugBM shares a similar motivation with our method in preserving couplings and addressing flow (bridge) crossing by conditioning auxiliary information to the network.
However, while AugBM aims to improve coupling preservation specifically for image-to-image translation tasks, our main contribution lies in developing an ODE framework that enables fast sampling with high accuracy. To achieve this, we proposed a flow with constant acceleration that provides a one-step closed-form solution (Eq. 5 in our paper). This closed-form solution enables few-step (N<3) sampling (Eq.11 in our paper) with high accuracy when learned with deterministic couplings, as demonstrated by our experimental results.
We will include a detailed discussion of AugBM and related works in the revised paper to provide a more comprehensive comparison.
---
**Q.2 [Proof of marginal preserving]**
For completeness, the paper should include proofs to show that the learned distribution is the same as the data distribution.
**Response to Q.2**
We highly appreciate the feedback and acknowledge the importance of demonstrating the marginal preservation property of our approach. To address the reviewer’s concern, we provide a derivation that the flow induced by our Constant Acceleration Flow (CAF) ordinary differential equation (ODE) preserves the marginal of the data distribution, established by the definitions and theorems in [1].
***Definition 1***: *For a path-wise continuously differentiable process $\mathbf{x} = {\mathbf{x}_t : t \in [0,1]}$, we define its expected velocity $v^{\mathbf{x}}$ and acceleration $a^{\mathbf{x}}$ as follow:*
$$
v^{\mathbf{x}}(x,t) = \mathbb{E}\left[\frac{d\mathbf{x}_t}{dt} \ \bigg| \ \mathbf{x}_t = x\right], \quad a^{\mathbf{x}}(x,t) = \mathbb{E}\left[\frac{d^2\mathbf{x}_t}{dt^2} \ \bigg| \ \mathbf{x}_t = x\right], \quad \forall x \in \text{supp}(\mathbf{x}_t).
$$
*For $x \notin \text{supp}(\mathbf{x}_t)$, the conditional expectation is not defined, and we set $v^{\mathbf{x}}$ and $a^{\mathbf{x}}$ arbitrarily, for example, $v^{\mathbf{x}}(x,t) = 0$ and $a^{\mathbf{x}}(x,t) = 0$.*
***Definition 2*** [1]: *We denote that $\mathbf{x}$ is rectifiable if $v^{\mathbf{x}}$ is locally bounded and the solution of the integral equation of the form*
$$
\mathbf{z}_t = \mathbf{z}_0 + \int_0^t v^{\mathbf{x}}(\mathbf{z}_t, t)\, dt, \quad \forall t \in [0,1], \quad \mathbf{z}_0 = \mathbf{x}_0,
$$
*exists and is unique. In this case, $\mathbf{z} = {\mathbf{z}_t : t \in [0,1]}$ is called the rectified flow induced by $\mathbf{x}$.*
***Theorem 1*** [1]: *Assume $\mathbf{x}$ is rectifiable and $\mathbf{z}$ is its rectified flow. Then, $\text{Law}(\mathbf{z}_t) = \text{Law}(\mathbf{x}_t)$ for all $t \in [0,1]$.*
Refer to [1] for the proof of ***Theorem 1***. We will now show that our CAF ODE satisfies ***Theorem 1*** by proving that our proposed ODE induces $\mathbf{z}$, which is the rectified flow as defined in ***Definition 2***. In our paper, we defined the CAF ODE as
$$
\frac{d\mathbf{x}_t}{dt} = \left. \frac{d\mathbf{x}_t}{dt} \right|_{t=0} + \frac{d^2\mathbf{x}_t}{dt^2} \cdot t.
$$
By taking the conditional expectation on both sides, we obtain
$$
v^{\mathbf{x}}(x,t) = v^{\mathbf{x}}(x,0) + a^{\mathbf{x}}(x,t) \cdot t,
$$
from ***Definition 1***. Then, the solution of the integral equation of CAF ODE is identical to the solution in ***Definition 2***:
$$
\mathbf{z}_t = \mathbf{z}_0 + \int_0^t \left( v^{\mathbf{x}}(\mathbf{z}_0, 0) + a^{\mathbf{x}}(\mathbf{z}_t, t) \cdot t \right) dt = \mathbf{z}_0 + \int_0^t v^{\mathbf{x}}(\mathbf{z}_t, t)\, dt.
$$
This indicates that $\mathbf{z}$ induced by CAF ODE is also a rectified flow. Therefore, CAF ODE satisfies the marginal preserving property, i.e., $\text{Law}(\mathbf{z}_t) = \text{Law}(\mathbf{x}_t)$, as stated in ***Theorem 1***.
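A small numerical sanity check of the constant-acceleration path (scalar toy values; the relation `a = 2*(x1 - x0 - v0)` follows directly from the closed form `x1 = x0 + v0 + a/2`):

```python
x0, x1, v0 = 0.3, 1.7, 0.5
a = 2.0 * (x1 - x0 - v0)  # from x1 = x0 + v0 + a/2

def path(t): return x0 + v0 * t + 0.5 * a * t ** 2  # x_t under constant acceleration
def vel(t):  return v0 + a * t                      # dx_t/dt = v(x, 0) + a * t

assert abs(path(0.0) - x0) < 1e-12  # starts at the source
assert abs(path(1.0) - x1) < 1e-12  # reaches the target at t = 1

# finite-difference check that the path's velocity matches v0 + a*t
h = 1e-6
fd = (path(0.4 + h) - path(0.4 - h)) / (2 * h)
assert abs(fd - vel(0.4)) < 1e-6
```

This is only a consistency check of the kinematics used in the derivation, not of the learned models themselves.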
---
**Q.3 [Expression of acceleration model]**
In the initial presentation of the acceleration field $a(x_t,t)$ at the start of section 4.1, it would make more sense to drop the dependence on $t$ to emphasize that the ground truth acceleration field is constant in time.
**Response to Q.3**
Thank you for your suggestion. We agree that emphasizing that the ground truth acceleration field is constant in time can enhance clarity. We will consider revising the notation.
---
**Q.4 [Limited evaluation]**
The scope of the empirical evaluation is limited. The authors should have evaluated on more datasets than just CIFAR-10. There are other small-scale image datasets, like ImageNet-32, that could have been evaluated.
**Response to Q.4**
Thank you for your feedback regarding the scope of our empirical evaluation. To address the reviewer’s concern, we have extended our evaluation to include additional generation results on ImageNet 64x64. These results demonstrate the broader applicability and effectiveness of our framework beyond CIFAR-10. For the quantitative results, please refer to **1. [ImageNet 64x64 results]** in the “**Author Rebuttal**” section above.
---
**Q.5 [Training algorithm]** Is there a reason why algorithm 1, line 6, updates $\theta$ before computing the loss for $a_\phi$?
**Response to Q.5**
Thank you for your question. In our experiments, we found that updating the initial velocity model before computing the loss for the acceleration model leads to more stable training dynamics.
---
[1] Liu, Xingchao, Chengyue Gong, and Qiang Liu, Flow straight and fast: Learning to generate and transfer data with rectified flow, ICLR 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and for the updated experiments. My opinion on the paper is the same so I am going to keep the same score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s prompt response and are grateful for the constructive comments that have strengthened our work. Thank you for dedicating your time and effort to providing us with such valuable insights.
---
Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers' detailed feedback. Here, we present additional quantitative results to address their concerns.
---
**1. [ImageNet 64x64 Results]**
In response to the concerns about the limited evaluation of our method, we provide additional quantitative results (FID, Inception Score, and Recall) on ImageNet 64x64. The table below summarizes the results:
| Model | N | FID $\downarrow$ | IS $\uparrow$ | Recall $\uparrow$ |
|----------------------------------------------------|:---:|:-----:|:-----:|:------:|
| _GAN Models_ | | | | |
| BiGGAN-deep | 1 | 4.06 | - | 0.48 |
| StyleGAN-XL | 1 | 2.09 | **82.35** | 0.52 |
| | | | | |
| _Diffusion Model / Consistency Model_ | | | | |
| ADM | 250 | 2.07 | - | **0.63** |
| EDM | 79 | 2.44 | 48.88 | 0.67 |
| iCT-deep | 2 | 2.77 | - | 0.62 |
| iCT-deep | 1 | 3.25 | - | **0.63** |
| | | | | |
| _Diffusion Models - Rectified flow_ | | | | |
| **CAF (Ours)** | 1 | 9.29 | 42.73 | 0.627 |
| | | | | |
| _Diffusion Models - Distillation_ | | | | |
| CD | 1 | 6.2 | 40.08 | **0.63** |
| Diff-Instruct | 1 | 5.77 | - | - |
| CTM | 2 | 1.73 | 64.29 | 0.57 |
| CTM | 1 | 1.92 | 70.38 | 0.57 |
| | | | | |
| _Diffusion Models - Rectified Flow + Distillation_ | | | | |
| **CAF (+distill + GAN) (Ours)** | 1 | **1.69** | 62.03 | 0.621 |
These results demonstrate that our method achieves performance comparable to recent state-of-the-art models, highlighting the generalization capability of our approach to large-scale datasets. Additionally, we have included the ImageNet generation results from our trained model with a single step in **Figure 4** of the attached PDF file.
---
**2. [Training without pre-trained model]**
To address the reviewer’s concern regarding the reliance on a pre-trained model, we report additional results where CAF was trained solely from real data (CIFAR-10) without the deterministic couplings generated by the pre-trained model (denoted as 1-CAF).
| Model | N | FID $\downarrow$ |
|-------|:---:|:----:|
| 1-RF | 1 | 335 |
| **1-CAF** | 1 | 328 |
| 1-RF | 70 | 4.7 |
| **1-CAF** | 70 | **3.53** |
The results show that 1-CAF effectively learns the real data distribution without the deterministic couplings, even outperforming 1-Rectified Flow (RF). However, similar to 1-RF, it loses accuracy in extremely few-step regimes. We believe that improving quality in these regimes without the deterministic couplings is an interesting direction for future research.
---
**3. [Comparison with AGM]**
As the reviewer rYLJ requested, we additionally compare the generation results of Acceleration Generative Model (AGM) [1] and CAF in the table below.
| Dataset | Model | N | FID $\downarrow$ |
|----------------|------------|:--:|:-----:|
| CIFAR-10 | AGM-ODE | 50 | 2.46 |
| | AGM-ODE | 5 | 11.88 |
| | **CAF (Ours)** | 1 | **1.7** |
| ImageNet 64x64 | AGM-ODE | 30 | 10.07 |
| | AGM-ODE | 20 | 10.55 |
| | **CAF (Ours)** | 1 | **1.69** |
The results demonstrate that CAF outperforms AGM in terms of FID scores in an extremely few-step regime. Moreover, we provide qualitative comparisons between CAF and AGM in **Figure 3** of the attached PDF file, where CAF generates images with more high-frequency details than AGM for few-step sampling. These results highlight the distinct advantages of CAF over AGM, particularly in terms of few-step sampling accuracy. For a detailed discussion, please refer to our response to the reviewer rYLJ **Q.3 [Missing discussion with AGM]**.
---
**4. [Inversion & Zero-shot image editing]**
As the reviewer oUYA requested, we demonstrate our CAF's capability on real-world applications by conducting zero-shot tasks on the CIFAR-10 test set (*reconstruction* and *box inpainting*). Since our framework is based on ODEs, we can solve our CAF ODE in reverse to perform inversion like DDIM [2]. For reconstruction, we followed the same procedure as DDIM. For box inpainting, we inject conditional information (the non-masked image region) into the iterative inversion and reconstruction procedure. As demonstrated in the tables below, we achieve better reconstruction quality and zero-shot inpainting capability in only a few steps compared to the baselines. This improvement is due to the superior coupling preservation capability of CAF (Table 4 of our paper). Moreover, we provide qualitative results in **Figures 1 and 2** of the attached PDF file that align with the quantitative results.
| Reconstruction | N | PSNR $\uparrow$ | LPIPS $\downarrow$ |
|-------------|:-:|:-----:|:-----:|
| CM | - | N/A | N/A |
| CTM | - | N/A | N/A |
| EDM | 4 | 13.85 | 0.447 |
| 2-RF | 1 | 29.33 | 0.204 |
| **CAF (Ours)** | 1 | **30.27** | **0.171** |
| Box inpainting | N | FID $\downarrow$ |
|----------------|:---:|:-----:|
| CM | 18 | 13.16 |
| 2-RF | 10 | 16.41 |
| **CAF (Ours)** | 10 | **9.79** |
These results demonstrate that our method can significantly reduce the inference time required for recent methods that utilize inversion for various real-world applications such as image/video editing.
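For concreteness, the inversion-and-reconstruction idea described above can be sketched as follows. This is an illustrative, simplified sketch rather than our actual sampler: `velocity` stands in for the learned CAF velocity field, and we use plain Euler integration; running the solver with swapped time endpoints performs the DDIM-style inversion back to the latent.

```python
def euler_solve(velocity, x, t0, t1, n_steps):
    """Integrate dx/dt = velocity(x, t) from t0 to t1 with Euler steps.

    With t0 < t1 this generates (latent -> image); swapping t0 and t1
    runs the ODE in reverse, i.e., DDIM-style inversion of a sample.
    """
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        x = x + dt * velocity(x, t)  # one Euler step along the ODE
        t += dt
    return x
```

Reconstruction then amounts to inverting a sample to its latent and re-generating it; inpainting additionally injects the known (non-masked) region at each iteration.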
---
[1] Chen, Tianrong, et al, Generative modeling with phase stochastic bridges, ICLR 2024 \
[2] Song, Jiaming, Chenlin Meng, and Stefano Ermon, Denoising diffusion implicit models, ICLR 2021
Pdf: /pdf/86cd5507f15452bc13038112e0cdba9483878f89.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RGFN: Synthesizable Molecular Generation Using GFlowNets | Accept (poster) | Summary: This study modifies prior GFlowNet-based frameworks to produce synthesizable molecules. While previous GFlowNet-based molecular generation completes molecules by adding fragments or atoms, the proposed method generates molecules by repeating the following steps: (1) selecting a reaction template, (2) selecting reactants, and (3) performing the reaction and selecting one of the resulting molecules. In the experiments, the proposed method shows competitive performance while preserving synthesizability.
Strengths: - The proposed method first incorporates a reaction-based generation framework with GFlowNet for synthesizability.
- A chemical language for synthesizability seems to offer cost advantages in real-world applications, demonstrating that it acts as a promising inductive bias to generate promising molecules.
- The proposed method shows competitive performance compared to (1) FGFN in terms of synthesizability and (2) SyntheMol in terms of average reward and mode discovery. Although it finds fewer modes compared to FGFN, this seems to stem from constraints in the generation space.
Weaknesses: - The preliminaries are insufficient for understanding the components of the method; in particular, they mention the flow-matching condition while the method section actually describes the forward and backward policies.
- To improve synthesizability-related metrics, one may consider including them in the rewards. Can authors provide a comparison with FGFN including the SA score in the reward? (e.g., multiobjective GFN)
- The lower number of modes in RGFN (compared to FGFN) is due to constraints in the generation space (ensure high SA scores). Therefore, it would be interesting to compare the number of modes filtered by SA scores.
- Can authors compare the cost of generation time? Additionally, I am curious about time costs for (finding a synthesizability path + generation with FGFN) vs. (generation with RGFN).
**Minor:**
- Error in line 114 $m'i$
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have acknowledged the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Weakness 1 Response:
Thank you for this comment, we agree that the preliminary section present in the original manuscript was insufficient for understanding the method. We revised it to include the following information, which hopefully will address Reviewer’s remark:
> Another way to rephrase the flow-matching constraints is to learn a forward policy $P_F(s_{i+1}|s_i)$ such that trajectories starting at $s_0$ and taking actions sampled by $P_F$ terminate at $x \in \mathcal{X}$ proportional to the reward.
> Trajectory balance. Several training losses have been explored to train GFlowNets. Among these, trajectory balance [citation] has been shown to improve credit assignment. In addition to learning a forward policy $P_F$, we also learn a backward policy $P_B$ and a scalar $Z_\theta$, such that, for every trajectory $\tau = (s_0 \rightarrow s_1 \rightarrow \dots \rightarrow s_n = x)$, they satisfy:
\begin{equation}
Z_\theta \prod_{t=1}^{n} P_F(s_t|s_{t-1}) = R(x) \prod_{t=1}^{n} P_B(s_{t-1}|s_t)
\end{equation}
However, please note that we are still fairly constrained in terms of space, hence the description has to remain brief.
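To make the revised preliminaries concrete, the trajectory balance condition above is typically enforced as a squared log-ratio loss over sampled trajectories. The following is a minimal illustrative sketch in plain Python (hypothetical log-probability inputs; not our training code):

```python
import math

def trajectory_balance_loss(log_Z, log_pf, log_pb, reward):
    """Squared log-ratio form of the trajectory balance objective.

    log_Z  : learned scalar estimate of log Z_theta
    log_pf : list of log P_F(s_t | s_{t-1}) along one trajectory
    log_pb : list of log P_B(s_{t-1} | s_t) along the same trajectory
    reward : terminal reward R(x) > 0
    """
    lhs = log_Z + sum(log_pf)             # log of Z_theta * prod P_F
    rhs = math.log(reward) + sum(log_pb)  # log of R(x) * prod P_B
    return (lhs - rhs) ** 2               # zero iff the balance holds
```

The loss is zero exactly when the trajectory balance equation is satisfied for that trajectory, which is what training drives toward in expectation.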
# Weakness 2 Response:
Thank you for the suggestion. We included a fragment-based GFN that used the average of the proxy value and SA score as the reward as another baseline (specifically, we used $R(x) = \exp(\beta \cdot (0.5 \cdot \mathrm{proxy}(x) / \mathrm{max}_{\mathrm{proxy}} + 0.5 \cdot (10 - \mathrm{SA}_{\mathrm{score}}) / 10))$, with $\beta$ adjusted per proxy to match the original reward range). We attached the results in the PDF with the main answer to the reviews. As can be seen, using the SA score as a part of the reward slightly reduces the obtained proxy values while improving the SA scores (which is expected). However, it does not translate to AiZynthFinder scores on par with RGFN, which highlights the issue of poor reliability of SA scores that was discussed in the manuscript.
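For clarity, the combined reward above can be sketched directly (illustrative only; `beta` and `max_proxy` are per-task constants, and SA scores range from 1, easy, to 10, hard):

```python
import math

def combined_reward(proxy_value, sa_score, beta, max_proxy):
    """Reward mixing a normalized proxy score with a rescaled SA score.

    (10 - sa_score) / 10 maps easier-to-synthesize molecules (low SA)
    to higher reward, so both terms reward desirable molecules.
    """
    proxy_term = 0.5 * proxy_value / max_proxy
    sa_term = 0.5 * (10.0 - sa_score) / 10.0
    return math.exp(beta * (proxy_term + sa_term))
```

Both terms are bounded in [0, 0.5], so the exponent lies in [0, beta], which is what allows matching the original reward range by adjusting beta.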
# Weakness 3 Response:
Thank you for the suggestion. As requested, we investigated computing the modes based on SA score-thresholded molecules only (with SA <3, which should correspond to easy-to-medium synthesis difficulty; lower values yielded similar trends). We conducted this experiment specifically for ClpP, with the results attached. However, this did not turn out to change the results significantly, since as discussed in the original manuscript, SA scores are a poor approximation of synthesizability. We could not conduct a similar experiment with AiZynthFinder in time due to much larger computational cost. We agree with the Reviewer that selecting only synthesizable modes should significantly improve RGFN’s performance, but the issue is once again evaluating the synthesizability.
# Weakness 4 Response:
On a single RTX Quadro 8000 GPU, generating 1000 molecules from a trained FGFN model took ~7 seconds, while evaluating the top 100 molecules with AiZynthFinder took 2206 seconds. On the other hand, generating 1000 samples using RGFN took ~14 seconds. It’s worth noting that FGFN used a heavily optimized implementation [1], while our approach didn’t leverage multiprocessing for sampling, which might have produced additional overhead. Still, this illustrates that when taking into account the cost of synthesis, using RGFN becomes much more efficient.
# Minor
Thank you for spotting this, it was revised!
[1] https://github.com/recursionpharma/gflownet
---
Rebuttal 2:
Comment: Thank you for the detailed and informative clarification. Most of my concerns are resolved. I believe that the paper has a basic value to be accepted. Therefore, I'd like to increase my score (5->6). | Summary: This paper presents a model for molecule design called Reaction-GFlowNet (RGFN). RGFN is an extension of the GFlowNet framework [4] (which has previously been used to generate molecules by building them out of small fragments) to generate molecules through virtual chemical reactions, ensuring that the molecules RGFN generates are more likely to be synthesizable. The paper evaluates RGFN against GraphGA (a graph genetic algorithm), FGFN (the original fragment-based GFlowNet), and SyntheMol (a recent synthesis-based generative model of molecules proposed in [54] based around Monte Carlo tree search). The paper shows RGFN finds competitively scoring molecules on two proxy* and one docking task (as expected the unconstrained GraphGA does best), while also ensuring the molecules it suggests do better on important synthesizability metrics.
* proxy meaning the task is to optimize against a graph neural network (GNN) oracle trained on a small amount of target data.
(Edited Aug 14 to update the score: see comments below).
Strengths: ## Summary of review
This paper addresses one of the limitations of GFlowNets by explicitly incorporating synthesis plans into the molecular generative process. However, this seems to be a fairly straightforward extension of the GFlowNets framework with synthesis-based generative models of molecules proposed elsewhere. The experiments in this paper compare the proposed method against only one of these existing models.
## Originality/Significance
RGFN is an extension of GFlowNets [4] to build molecules through chemical reactions rather than through combining small molecular fragments. This addresses a key limitation of the original GFlowNet, which the experiments show often generates molecules for which no synthetic route can be found.
Having said that, there have been many synthesis-based generative models of molecules proposed (e.g., [8,17,54,19,22] and others not cited -- see below), and RGFN seems a fairly straightforward combination of these ideas with GFlowNets.
> Korovina, K., Xu, S., Kandasamy, K., Neiswanger, W., Poczos, B., Schneider, J. and Xing, E. (2020) ‘ChemBO: Bayesian Optimization of Small Organic Molecules with Synthesizable Recommendations’, in International Conference on Artificial Intelligence and Statistics. AISTATS 2020
> Nguyen, D.H. and Tsuda, K. (2022) ‘Generating reaction trees with cascaded variational autoencoders’, The Journal of chemical physics, 156(4), p. 044117.
> Vinkers, H.M., de Jonge, M.R., Daeyaert, F.F.D., Heeres, J., Koymans, L.M.H., van Lenthe, J.H., Lewi, P.J., Timmerman, H., Van Aken, K. and Janssen, P.A.J. (2003) ‘SYNOPSIS: SYNthesize and OPtimize System in Silico’, Journal of medicinal chemistry, 46(13), pp. 2765–2773.
> Button, A., Merk, D., Hiss, J.A. and Schneider, G. (2019) ‘Automated de novo molecular design by hybrid machine intelligence and rule-driven chemical synthesis’, Nature Machine Intelligence, 1(7), pp. 307–315.
## Quality
The optimization tasks done to evaluate RGFN seem interesting and it's nice to see the molecules suggested also evaluated in terms of their diversity and synthesizability (using a retrosynthesis planner). While I think it is great that the tasks were more complicated than some of the simple ones often used elsewhere, it might have been nice to include some more commonly-used (yet still challenging) benchmarks too (such as the Therapeutics Data Commons benchmark -- citation below). This would make comparisons with existing models easier.
> Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. Therapeutics data commons: machine learning datasets and tasks for therapeutics. arXiv preprint arXiv:2102.09548, 2021.
## Clarity
Overall I thought the paper was well-written and easy to follow. In particular, I thought Section 3.2 was helpful in giving an overview of the approach (along with the informative Figure 1), and Equations 2–7 were helpful in explaining the parameterizations of the various networks involved. The experiments also seemed to be clearly presented. Readers unfamiliar with GFlowNets [4] would likely have to read about them elsewhere (lines 70–82 provide a short, although limited introduction). However, this seems reasonable due to the overall space available.
Weaknesses: ## Originality
As mentioned under the originality/significance heading in the section above, RGFN seems a fairly straightforward combination of the ideas of GFlowNets with recent synthesis-based generative models of molecules. It would have been helpful if there had been more focus on why synthesis-based GFlowNets were a better approach to take than the currently proposed alternatives to better promote the paper's significance. (Hence why I have gone with a low contribution score).
## Only one other synthesis-based generative model of molecules is compared against
The only other synthesis-based generative model for molecules that is compared against in the experiments is SyntheMol, and this baseline has been limited here in the amount of compute it uses. I can understand no comparison is done against [22,19] due to the lack of open-sourced code, but it would have been nice to have had a comparison against some of the other ones cited, e.g., [17] with its open sourced code available on GitHub (https://github.com/wenhao-gao/SynNet). It would also be nice to have a comparison to the other mentioned methods for encouraging synthesizability, e.g. via using scoring methods [34].
(This along with the fact that no code is available -- see below -- is why I have gone with a lower soundness score).
## No code is shared
As far as I'm aware there is no code currently available for RGFN? It does not seem to be provided as part of the submission (despite this box being ticked in the paper checklist)? I would find it impossible to reproduce the results of the paper using the details provided in Appendices B-D alone (e.g., I'm unsure of what building blocks and reaction templates to use or even how the molecules fed into the GNNs were featurized). One of the proxies was initially trained in an unsupervised manner on the ZINC dataset (Appendix B), but there are scant details on how this was done.
On a similar note, the compute resources used to train the models also seems to be missing. This is again marked as included in the paper checklist, so would appreciate being pointed towards this if I have made a mistake and missed this?
## Method is currently limited to a restricted synthetic space
RGFN uses only 350 building blocks and 17 reactions. This is low compared to the ~150k building blocks and 49 reactions used in [19], or the 5000 building blocks and 90 reactions used in [22]. The experiments done in Section 4.3 suggest that RGFN scales badly to larger building block libraries even with architecture modifications, and larger reaction libraries do not seem to have been tried. This, combined with the fact that RGFN can only explore linear synthetic trees (i.e., multiple complex intermediates cannot be created and then combined) and bimolecular reactions, restricts the molecules that the approach could explore. This might be an issue in more challenging optimization tasks.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What is the distribution of the number of reaction steps typically used when finding useful molecules?
2. I don't completely follow how the size of the state space was calculated in Figure 2. Perhaps a cartoon would be useful in Appendix A. Is the state space size different to the total number of unique molecules that can be generated?
3. I'm confused as to why RGFN seems to often outperform FGFN in the docking/proxy scores of the molecules it finds. I would have thought the latter model is more flexible and so should be able to find better scoring molecules, even if they did not end up being synthesizable. The paper suggests that the library of fragments available for RGFN might be less suitable than those found in the building blocks picked for RGFN. Have different fragments been tried or other experiments carried out to investigate this hypothesis?
4. RGFN seems to produce molecules with lower QED scores than SyntheMol and FGFN: is there any intuition for why this might be the case?
5. Line 243 says:
> "All RGFN modes were additionally inspected manually by a chemist and confirmed as synthesizable, which indicates that AiZynth scores are likely underestimated."
Does this mean that a chemist took the proposed molecule and synthesized it in the lab? Were the proposed synthetic routes from RGFN used or were alternatives derived?
6. Line 282 states:
> "It is also important to recognize that RGFN does not explicitly generate synthetic routes to the molecules."
I did not understand what was meant by this. I thought the whole point of RGFN is that it did explicitly generate synthetic routes to the final molecule through the sampled trajectory?
7. Equation 4 versus 7:
a. My understanding is that the architecture expressed in Equation 4 is used instead of that of Equation 7 (apart from in Section 4.3)?
b. The architecture expressed in Equation 7 seems to do better in Figure 5 so why not always use this?
c. Could also a similar principle be used to scale up the number of reaction templates available?
8. Line 221 discusses a procedure for picking molecular modes from the list of proposed molecules. Given modes have to be a certain Tanimoto distance apart, I assume this procedure is run greedily and is somewhat dependent on how the molecules are ordered?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Two limitations are discussed in Section 5: (i) that the model is currently limited to a very small reaction and building block library, and (ii) the issues inherent with docking oracles. Regarding the first of these, the authors discuss and evaluate a modification of the architecture to try to enable the method to scale (see Section 4.3), but performance still degrades as the initial fragment library increases and so this seems to remain an outstanding limitation (see also the weaknesses section above). The second limitation (docking oracles are not perfect) is not limited to RGFN and is a problem among similar models more generally. The authors discuss incorporating additional, more-accurate oracles in a multi-fidelity framework to resolve this which seems reasonable.
Another limitation which is not discussed is the sample efficiency of the proposed approach (which is again also a limitation for the baselines). The number of molecules the non-GFlowNet baselines visit during optimization for the different tasks is described in Section 4. These are high and range from 70k-400k, and I would be interested in how this compares to the two GFlowNet models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Significance
We thank the Reviewer for pointing out additional references that were included in the revised version. While the idea of extending GFNs to operate in the reaction space might seem straightforward, significant work was needed to implement the approach, not emphasized enough in the original manuscript. These include:
- Handling multiple possible products of a given reaction, which results in a de facto non-deterministic environment. This is not an issue in FGFN, as operating on graphs allows us to specify the place of inserting a new fragment. We deal with this by introducing a product selection step (Eq. 5).
- Since GFNs require computation of parent states and masking invalid actions, this required implementing efficient recursive decomposition of molecules into BBs and crafting specific SMARTS templates for reactions that would limit the possible number of parents.
- Crafting a specific set of high-yield reactions and low-cost fragments that would enable synthesis.
- Improving scalability by introducing fingerprint-based action embeddings for fragment selection.
Additional discussion can be found in Overall Response 2.
# Quality
We included the DRD2 benchmark from TDC. We attach the results. The trends are comparable to other oracles.
# W1
Please refer to the answer above regarding the originality of the approach. The comment regarding needing to emphasize more the differences between our and existing methods, was addressed in the Overall Answer.
# W2
When implementing SynNet, we were unable to reproduce a decoder that reliably generated valid SMILES strings. As the initial trained weights provided did not match the most recent version of the US Enamine BB stock and generated nonsensical SMILES, we retrained the model’s four MLPs on the recent stock, which still did not result in reliable SMILES outputs. Due to time constraints, we instead opt for RXNGenerator, a VAE model designed for generating synthetic trees. For a general response regarding other synthesizable molecule generation methods, see Overall Response 2.
# W3
We indicated in the paper checklist that the code would be provided upon acceptance and hope it wasn’t misleading to mark it as such in the checklist. We provide the code as is in the AC comment.
We did mistakenly omit the computational resources in the original manuscript, we apologize for that. Evaluating fragment scaling took approximately 800 RTX4090 hours in total. Remaining experiments took roughly 24 hours on Quadro RTX 8000 for sEH, DRD2 and senolytic proxies, and roughly 72 hours on an A100 for docking-based proxies.
# W4
See Overall Response 1.
# Q1
The vast majority of the molecules were generated using 4 reactions (the maximum allowed number). We applied this constraint because larger molecules tend to achieve higher docking scores, and GFNs generate samples with probability proportional to the reward, leading to a higher likelihood of generating larger molecules. Achieving higher size diversity would be possible by penalizing larger molecules.
# Q2
The state space size is precisely the number of unique molecules that can be generated from the listed reactions and reactants. We have tried to explain this more clearly in the revised text.
# Q3
Unfortunately, we were unable to explore using different FGFN fragments in time for the rebuttal. Other factors we suspect were 1) larger state space size of FGFN (which makes convergence more difficult), and 2) implementation differences (we based FGFN on [1], while RGFN was implemented from scratch).
# Q4
Our hypothesis is that the QED differences stem mostly from different average molecular weights. While we tried to achieve comparable weight ranges for different methods, it’s still task dependent (Table 1). SyntheMol uses shallow synthesis trees which lead to smaller molecules. We note that QED could easily be improved upon by adding it directly as a reward term.
# Q5
Synthetic routes from RGFN were used. Costs of synthesis were estimated according to literature reaction yields, building block availability, and unit price. The molecules generated in the paper were not synthesized, but we are in the process of doing that for another batch.
# Q6
It is unlikely that actual synthetic routes would fully follow those generated by RGFN, due to the linear nature of RGFN compared to a retrosynthetic analysis that would aim to maximize the yield. RGFN is not intended to optimize the synthetic route but rather to generate synthesizable compounds.
# Q7
- Yes, we use the architecture expressed in Eq. 4 rather than Eq. 7, which outperforms it only with larger fragment libraries. For the fragment library used outside Section 4.3, which contains 350 molecules, both architectures perform on par, as illustrated in Figure 5, so we opt for Eq. 4 only because it is simpler. We note this in the text.
- The embedding scheme proposed in Eq. 7 can be used for reaction templates. We did experiment with using RxnRep [2] embeddings in the early stage of the project but observed only a negligible performance improvement.
# Q8
We employed the standard techniques of the Leader algorithm as implemented in RDKit. While in theory this process is dependent on the order of the molecules, in practice the molecules are always shuffled and the standard deviation for the number of modes across 10 shuffled runs is less than 0.5%. We now add a note to this in the manuscript.
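For concreteness, the greedy Leader procedure for mode counting can be sketched as follows. This is a simplified pure-Python version over fingerprint bit sets, not our actual RDKit-based implementation: a molecule becomes a new mode only if its Tanimoto similarity to every existing mode is below a threshold (equivalently, it is at least a fixed Tanimoto distance from all modes).

```python
def tanimoto(a, b):
    """Tanimoto similarity between two fingerprint bit sets."""
    if not a and not b:
        return 1.0
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def leader_modes(fingerprints, threshold):
    """Greedy Leader clustering: scan fingerprints in order and keep
    one as a new mode only if it is dissimilar to all existing modes."""
    modes = []
    for fp in fingerprints:
        if all(tanimoto(fp, m) < threshold for m in modes):
            modes.append(fp)
    return modes
```

As discussed above, the result depends in principle on input order, which is why we shuffle the molecules and report the (very small) variance across shuffled runs.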
# L1
Regarding limited library size, please refer to the Overall Response. We agree with the oracles being a limiting factor, but as the Reviewer states, this is not a unique problem to RGFN.
# L2
The GFlowNet models each visited 400k molecules, the budget set for baselines, with the exception of SyntheMol. However, our models were able to converge much earlier, and allowing RGFN/FGFN to run for the remaining time was meant to allow exploration. We attach convergence plots for RGFN to illustrate this.
[1] github.com/recursionpharma/gflownet
[2] doi.org/10.1039/d1sc06515g
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed rebuttal and responding to each of my points. Particularly appreciate the additional experiments provided to address some of my/other reviewers' concerns.
I have one minor follow up question regarding Q6, the answer to which I do not think I fully understand (partly also due to the response to Reviewer rpKJ's Question 2). I realize that RGFN might not necessarily generate the _best_ synthetic route to a proposed molecule, but I still do not understand why it does not generate _an_ explicit synthetic route. Are these not provided just by tracing back the sampling process for a product back to the initial building blocks?
Some general comments on the other points made:
* I appreciate that restricting to a smaller fragment library increases robustness (global response), although I am glad to see you recognise that the one used here is particularly small for many use cases and are taking steps to address this (and also support more complicated synthesis plans in future work).
* Glad you will mention the related work initially overlooked. Pushing back though slightly on the importance of the template-based approach for reaction prediction used here, many of these other works often seem fairly agnostic to the reaction oracles used.
* Thanks again for the new results (particularly with the quick turnaround), and also for discovering and fixing a mistake with the FGFN baseline. I still think it's a little weird that FGFN does not do particularly well on optimization (Q3), but thanks for the intuition regarding the fragment library/implementation provided in your rebuttal.
Will begin discussing the paper with the other reviewers and (unless anything new comes up from this) increase my score to reflect you addressing several of my points.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. We would also like to take this opportunity to thank you again for a very thorough review, as that didn’t fit in the official answer. It is very appreciated and definitely helped us improve the paper.
Answering the follow up question regarding Q6:
Let us preface by saying that we suspect some of the ambiguity comes from different common understanding of the term “synthesis route” as used in ML and chemistry communities. In the ML community, the synthesis route is a sequence of reactions transforming sets of reactants into products. According to this definition, RGFN framework can output a synthesis route (albeit not necessarily the optimal one). However, in the answers and the paper we refer to the chemistry-based understanding of the term, elaborated on below. We apologize for the lack of clarity, we tried to be precise in chemical terminology, but recognize that given the audience we should clarify this, which we will do in the revised paper.
To elaborate on the additional components needed for actual wet lab synthesis (which we refer to when we discuss a “synthesis route”):
An explicit route is not generated, but rather the likely building blocks necessary to construct a molecule (and the knowledge of the reactions encoded) will be readily identifiable. This does not mean that an explicit synthetic route has been generated. A synthesis is more than just the building blocks and reactions used.
1. Firstly, consider a molecule ABCD constructed from four fragments (building blocks). A synthesis of such a molecule might proceed as A + B -> **AB** + C -> **ABC** + D -> **ABCD**. An alternative linear route might start with a coupling of B + C as the first step. Another alternative might employ a convergent synthesis, i.e., A + B -> **AB**, then C + D -> **CD**, and then **AB** + **CD** -> **ABCD**. There are other possibilities. Even if the same four building blocks are used for each these are not the same syntheses, and in practice they would show different degrees of success. A trained synthetic chemist would be able to suggest the best order of couplings.
2. In some cases “protection group” strategies or the use of surrogate functional groups and reactions may be needed or be more efficient.
3. In some cases alternative strategies to the synthesis might still be preferred. For example, in the convergent 3 step synthesis of **ABCD** shown above. If the final coupling is somehow incompatible (or not optimal) it might be necessary to employ a synthesis such as C + E -> **CE** and then do the coupling **AB** + **CE** -> **ABCE** followed by **ABCE** -> **ABCD**. Although this adds an extra reaction, if it avoids some incompatibility (or inefficiency) between the component **D** and coupling used, then this would be an acceptable solution (indeed issues like this are common – more often than not!). Again a trained synthetic chemist would be able to easily suggest such strategies (or alternatively software packages that explicitly design syntheses).
4. Careful choice of reaction conditions, external reagents, catalysts, etc. are also essential aspects of any synthesis. These are not established with RGFN.
We hope that this answers the question.
Also, regarding another remark:
> Pushing back though slightly on the importance of the template-based approach for reaction prediction used here, many of these other works often seem fairly agnostic to the reaction oracles used.
Could we please ask for a clarification on whether the Reviewer refers to the discussion in the shared rebuttal, Originality / Similar Works paragraph, specifically the reliance on pre-trained neural networks to predict synthetic trees? As we are not entirely sure to which point does that comment refer to.
---
Rebuttal 2:
Title: Thanks for discussion and further results
Comment: Thank you for following up. Yes, while I agree your curated set of reactions and fragments could be an interesting contribution, currently it is hard for me to judge given that they are not actually included in the submission (related to point W3). I believe (from the initial rebuttal) you have provided this to the AC though (note I cannot see the AC comment) so leave it with them.
Anyway, I have edited my score to reflect many of my original points getting addressed and to reflect the additional experiments carried out (including the one recently posted), showing that the diversity of the produced molecules could be a reason to prefer the approach here over other previously proposed synthesis-based de novo design methods.
Thanks also for the discussion! | Summary: This work proposes a GFlowNet-based framework for synthesizable molecule design, which comprises building block selection stages and reaction selection stages. The GFlowNet-based model uses chemical oracle functions as the reward, and the goal is to generate synthesizable molecules with the desired property quantified by the scoring function.
Strengths: - This work tackles the synthesizability challenge in molecular design, which is crucial to experimental validation but has been long overlooked by previous work on molecular generation.
- Molecular synthesis pathways have been represented by directed acyclic graphs, which fit well the structure of GFlowNet sampling process. Therefore, this work choosing GFlowNet as the framework is proper and well-motivated.
- The experimental study is comprehensive. It demonstrates that RGFN can produce synthesizable molecules with higher docking scores. It also examines the diversity and docking structures of the generated molecules.
Weaknesses: - As acknowledged in the paper, the major limitation is scalability to a larger chemical space. It becomes much more difficult to discover hit scaffolds when the chemical space gets larger. In addition, the capability of this framework is limited by oracle functions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Molecules generated from building blocks and reactions are, by construction, highly likely to be synthesizable. Is it still necessary to evaluate empirical synthetic accessibility scores of these molecules, given that these scores are generally inaccurate and cannot tell much when a molecule already has a synthesizability guarantee?
- Can this framework be applied to bottom-up synthesis planning?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Weakness 1 Response:
Regarding limited library size, please refer to the Overall Response. We agree with the oracles being a limiting factor, but as the Reviewer states, this is not a unique problem to RGFN.
# Question 1 Response:
The reviewer raises a good point. We agree that the SA scores are inaccurate, and in our opinion using them is not really necessary. However, we have tried our best to demonstrate the synthesizability aspect in a rigorous way, and including the SA scores was one data point (in addition to the AiZynthFinder analysis and manually showing the synthesis paths of the top-10 molecules). Huge differences in SA scores can thus serve as a rough guideline. We also anticipated that not providing SA scores might lead to reviewers requesting the scores be included in our analysis, since the application of such scores is fairly common. We acknowledge that demonstrating synthesizability is difficult in general, and more work on this problem is required (but is outside the scope of this paper). We have added some text to highlight the caveats with current synthesizability assessment methods.
# Question 2 Response:
At this juncture the platform does not automatically recommend specific synthetic routes. However, a trained synthetic chemist can readily propose reasonable synthetic routes for any proposed structure from RGFN, particularly given that the reaction templates used are well known. Synthetic routes may also be proposed by well-developed stand-alone retrosynthetic software packages. | Summary: This work proposes a workflow to synthesize molecules with reaction templates starting from some building blocks. Through this pipeline, the synthesizability of the generated molecules can be improved, as supported by experimental evidence on molecular drug design tasks.
Strengths: (1) This reviewer agrees with the importance of researching the synthesizability of generated molecules;
(2) The presentation of the proposed approach is very clear. This reviewer can quickly understand the major idea of the whole workflow;
(3) The idea of generating molecules with chemical reactions is very valuable since this reviewer also thinks the most important thing is generating synthesizable molecules.
Weaknesses: (1) The related work discussion in this work is insufficient. This reviewer searched briefly on the web and found three highly related articles: "Bridging the gap between chemical reaction pretraining and conditional molecule generation with a unified model", "A generative model for molecule generation based on chemical reaction trees", and "Generating molecules via chemical reactions". This reviewer believes other relevant articles are not covered in this article. A more comprehensive and systematic related work section is required;
(2) The proposed method has many limitations, including reliance on reaction templates and powerful pre-trained models. We state these limitations in the following section;
(3) The evaluation of the proposed approach is not comprehensive enough. It does not include the conventional molecular generation benchmarks like novelty, validity, and diversity.
Technical Quality: 2
Clarity: 3
Questions for Authors: (1) This reviewer is concerned about the pre-trained oracle models. It seems that almost every step of the pipeline relies on a single pre-trained model. So how to derive these pre-trained models independently?
(2) This reviewer is concerned about the ability to generate truly novel molecules. It seems the proposed approach starts with a selected set of fragments and deduces products through reaction templates. However, since reaction templates are extracted from known reactions, it seems nearly impossible to generate purely novel molecules through this pipeline. How do the authors address and evaluate this issue?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: (1) It seems the whole pipeline of RGFN heavily relies on many different pre-trained models. Each stage can affect the overall process drastically.
(2) The proposed approach is based on reaction templates. The reaction templates must be updated frequently to cover more types of reactions. However, the template-based method has poor generalization to unseen reactions (new combinations of reactants).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Weakness 1 Response:
Thank you for pointing out the omission of the Qiang et al., 2023 manuscript (“Bridging the gap..”) and the work by Nguyen et al. (“A generative model … reaction trees”); we added the references to the revised version. In the related work section we cited a review by Meyers et al., 2021 (reference #42 in the original manuscript) that covers many other relevant works. The manuscript “Generating molecules via chemical reactions” by Bradshaw et al. was actually cited by us (reference #8). The cited manuscript (“A Model to Search for Synthesizable Molecules”) refers to the NeurIPS version, while the version pointed out by the reviewer (“Generating molecules via chemical reactions”) refers to an earlier ICLR workshop version. We also include several references mentioned by another Reviewer.
# Weakness 2 Response:
Please refer to Question 1 Response for the answer to the stated limitation being reliance on pre-trained models, and Limitation 2 Response to the stated limitation being reliance on reaction templates.
Reaction templates are necessary to avoid post facto evaluation of synthesizability for analogs generated solely based on physico-chemical parameters without regard to the realities of chemical synthesis. Synthetic intractability has in fact been a major issue with previous generative methods. The use of reaction templates solves this problem without imposing detrimental constraints on chemical diversity. Reaction templates are thus inherent to our approach as the only reliable means to avoid infeasible structures and/or to enforce focussing on robust chemical reactions (or those available at a preferred chemical supplier, like Enamine).
We have added additional text to the manuscript to clarify this important point on the challenges of real-world synthetic chemistry. We thank the reviewer for highlighting this issue and hope the reviewer concurs with our arguments.
# Weakness 3 Response:
We apologize for the lack of clarity on this point. By the definition of our search space, all molecules generated are inherently valid (e.g., correct valence, reasonable size, etc.) because all compounds are generated by bona fide synthetic reactions. We also demonstrate diversity of generated compounds in Figure 4, which shows RGFN finds more modes (i.e., molecules dissimilar to each other) compared to GraphGA and SyntheMol. We note that because the GFN model is trained from scratch, all generated molecules are “new” because the model is not trained on any known molecules. They are also highly different from known ligands, as demonstrated in Appendix L. We also note that the current largest space of synthesizable molecules at Enamine (~40 billion) is not systematic in terms of reaction combinations and is more than an order of magnitude smaller than even the limited RGFN space we present here. We have revised the text of our manuscript to better reflect these points.
# Question 1 Response:
We apologize for a lack of clarity on this key issue. While some of the considered oracles are indeed pre-trained ML models (sEH proxy and senolytic proxy), we specifically include the GPU-accelerated docking as an alternative reward function to alleviate this issue. Our approach is agnostic to the reward function, and could even use something like a QED score as the reward. We emphasize that all of the considered baselines also rely on the exact same oracle models (including the pre-trained proxies). We believe that a strength of our approach is its adaptability to all possible reward functions.
# Question 2 Response:
Although having a limited set of known reactions in principle does restrict diversity, the space of synthesizable molecules is still vast and greatly exceeds all synthesized molecules to date. A huge space of compounds remains to be queried for biological activity based only on established chemical reactions, due to the massive number of related building blocks that can be plugged into any given reaction. Thus, there are approximately 219M substances known to be produced and characterized in the literature [1] and the evaluated size of produced chemical space in RGFN is already larger, despite using only a modest number of input reactions and building blocks. The total chemical space encompassed by these reactions and associated reactants has barely been explored.
# Limitation 1 Response:
As above, we emphasize that RGFN does not heavily rely on pre-trained models. Please refer to Question 1 Response for the answer to this perceived limitation.
# Limitation 2 Response:
We would like to clarify the difference between reactions and reactants. The number of feasible reactions is limited, the number of reactants that can be joined together by these reactions is very large, and hence the number of combinations is extremely large. To give a simple example, naturally occurring peptides are generated by a single reaction type (amide coupling) with only 20 reactants (the L amino acids) yet can linearly generate tremendous chemical diversity. The reaction templates need not be updated as more reactants are added - the diversity of possible reactants that can be accommodated alone enables a combinatorial explosion. In practical terms, it is also possible to add any reaction to the dataset using the SMARTS language, and those that we present in the paper are known to be generally very robust and reliable. While this method obviously doesn't allow us to discover ligands that could be produced from yet unknown reactions, it is not an aim of this work to devise new synthetic reaction methods. On the contrary, our stated goal is to ensure "out-of-the-box" synthesizability by using selected reactions that are more likely to deliver the product and thus shorten the discovery process, while at the same time ensuring massive chemical diversity.
[1] https://www.cas.org/cas-data/cas-registry
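As a toy illustration of the combinatorial argument above (our sketch, not part of the rebuttal), even a single coupling reaction applied to a modest pool of building blocks generates a candidate space that grows exponentially with chain length:

```python
# Sketch (not from the paper): count ordered linear chains assembled from a
# fixed pool of building blocks via a single coupling reaction, mirroring the
# amide-coupling / 20-amino-acid example in the Limitation 2 Response.
def num_linear_chains(n_blocks: int, length: int) -> int:
    # each of the `length` positions can hold any of the n_blocks reactants
    return n_blocks ** length

for length in (5, 10, 15):
    print(length, num_linear_chains(20, length))

# Already at length 10, 20**10 (> 10**13) candidate chains far exceed the
# ~219M substances characterized in the literature cited in [1].
```

This is only a back-of-the-envelope count (it ignores reaction feasibility and duplicate products), but it illustrates why a small, fixed set of robust reactions still spans a vast synthesizable space.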
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thank you for providing a detailed rebuttal. Below are our reviewer's responses:
(1) The reviewer expresses concern regarding the omission of related works, which may impact the novelty and significance of the proposed research. Merely including the omitted literature as references is insufficient. Further writing improvements are necessary to elucidate the major contributions of the proposed method compared to previous methods, thereby justifying its significance.
(2) From a machine learning algorithm perspective, the technical contribution of the proposed method appears somewhat limited. The algorithm generates molecules using reaction templates, an approach widely adopted in template-based retrosynthesis analysis and forward reaction prediction. The reviewer does not identify significant algorithmic differences between the proposed method and template-based reaction modeling (both forward and backward).
(3) The evaluations appear somewhat confusing and not comprehensive enough. Table 1 suggests that the proposed RGFN method does not achieve state-of-the-art performance in most tasks, causing the reviewer to question the method's superiority. Moreover, since the title emphasizes "molecular generation," typical empirical comparisons should include molecular generation benchmarks to demonstrate the basic VUN (Validity, Uniqueness, Novelty) of the proposed method. Building upon these fundamentals, an additional reasonable metric for evaluating synthesizability should be incorporated. Although the reviewer genuinely believes that molecular generations based on chemical reactions can be more reliable and synthesizable, the current evaluations are insufficient to demonstrate RetroGFN's superiority.
In conclusion, the reviewer has decided to maintain their rating, and we kindly defer the decision to the AC.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for answering our rebuttal.
Regarding points 1 and 2, we again apologize for the omission of some mentioned papers and we are keen on improving the writing as per the Reviewer’s valuable suggestions. However, we believe that it does not fundamentally alter the presentation of our main novel contributions, which as discussed in the reply to the Reviewer 5soV are:
- Handling multiple possible products of a given reaction, which results in a de facto non-deterministic environment. This is not an issue in FGFN, as operating on graphs allows us to specify the place of inserting a new fragment. We deal with this by introducing a product selection step (Eq. 5).
- Since GFNs require computation of parent states and masking invalid actions, this required implementing efficient recursive decomposition of molecules into BBs and crafting specific SMARTS templates for reactions that would limit the possible number of parents.
- Improving scalability by introducing fingerprint-based action embeddings for fragment selection.
- Crafting a specific set of high-yield reactions and low-cost fragments that would enable fast and cheap synthesis experimentally, which has been a long-standing challenge in de novo small molecule discovery.
Regarding point 3, we would like to emphasize that based on the (noisy) synthesizability metrics, RGFN does in fact achieve state-of-the-art performance in terms of synthesizability (and cost of synthesis, as illustrated in the Appendix), while preserving high performance in terms of mode discovery (higher than baseline methods other than FGFN). It’s this favorable balance that makes RGFN, in our opinion, an impactful approach, as easy synthesizability is the most important practical factor (unsynthesizable molecules with high property scores are not useful experimentally). As discussed in the paper, QED and molecular weight are included for completeness, but are secondary to AiZynthFinder scores, which in our opinion are the best approximation of synthesizability currently available. We will further clarify this in the updated manuscript with the Reviewer’s comments in mind.
As discussed in the first answer, we would like to clarify that the metrics we utilize here are **chemically more rigorous than the typical VUN metrics**. VUN metrics are more suitable for methods such as 3D diffusion-based generative modeling where the model is trained on a dataset and inference can generate invalid molecules. All molecules generated by RGFN are valid by design (validity), RGFN is not trained on a dataset of molecules but on a reward function (i.e. novelty is not directly applicable, in the sense all of the molecules are “new”), and we use diversity which is a much stricter version of uniqueness. Typical graph-based generation methods, including most GFlowNet papers, use these more strict metrics as adopted here ([1-6]). We would argue that the ones used in our paper are more appropriate for illustrating a good trade-off between synthesizability and diversity of generated molecules.
Regarding the comment that “additional reasonable metric for evaluating synthesizability should be incorporated”, we are eager to improve our manuscript and would kindly ask for the Reviewer to provide some specific suggestions for what other chemically rigorous metrics could be used. We were unable to identify realistic approaches to reliably do this - lack of good synthesizability evaluation metrics is, in fact, the main motivation behind RGFN. As pointed out by Reviewer rpKJ, showing the synthesizability of RGFN might not even be strictly necessary, as it guarantees it out-of-the-box by operating directly in the space of high-yield chemical reactions, and synthesizability metrics are inherently noisy.
We hope you were convinced by our rebuttal to the other points in the first review (e.g. pre-trained models, usage of reaction templates). Please, let us know if you have any questions.
Thank you once again for your detailed review and for considering our revisions. We are active in any further discussions and would really appreciate it if AC and SAC could take our explanations and clarification into consideration!
[1] Zhang, Zaixi, et al. "Molecule generation for target protein binding with structural motifs." ICLR (2023).
[2] Guo, Jeff, et al. "Link-INVENT: generative linker design with reinforcement learning." Digital Discovery (2023).
[3] Zhu, Yiheng, et al. "Sample-efficient multi-objective molecular optimization with GFlowNets." NeurIPS (2024).
[4] Korovina, Ksenia, et al. "ChemBO: Bayesian optimization of small organic molecules with synthesizable recommendations." PMLR (2020).
[5] Nguyen, Dai Hai, and Koji Tsuda. "Generating reaction trees with cascaded variational autoencoders." The Journal of Chemical Physics (2022).
[6] Swanson, Kyle, et al. "Generative AI for designing and validating easily synthesizable and structurally novel antibiotics." Nature Machine Intelligence (2024). | Rebuttal 1:
Rebuttal: Thank you very much for your detailed feedback on our manuscript. We understand and have carefully considered your concerns with our work, and aim to provide a summarized response to the most common points here.
# Small Fragment Library
We agree with the Reviewers that the search space is indeed constrained as it stands. However, we wish to note that this constraint was purposeful, to ensure practical, fast synthesis of the generated molecules. That is, the end goal is to allow a typical chemical laboratory to quickly synthesize de novo molecules and test their properties experimentally. It is reasonable to expect such a laboratory to own hundreds of building blocks and conduct a handful of very reliable organic reactions, whereas it is impractical to employ thousands of building blocks and dozens of reactions, as many building blocks/reactions require specialized synthetic expertise. We agree with the Reviewers that the trade-off is a limited search space, but our method nevertheless yields billions to trillions of synthesizable molecules, exceeding all current experimental techniques (e.g., DNA-encoded libraries), which already yield strong drug candidate molecules for many targets. We do plan to incorporate non-linear synthetic trees and multi-component reactions, but these aspects need careful experimental validation to ensure true synthesizability and robustness.
# Originality / Similar Works
We thank the Reviewers for highlighting additional relevant papers that we overlooked. We note that the majority (UniRXN, RXNGenerator, SynNet, ChemBO) rely on pre-trained neural networks to predict synthetic trees and therefore require the curation of large datasets of druglike molecules and generating ground-truth synthesis routes. This is in part an additional drawback of fragment and reaction datasets too large to serve as discrete action spaces. Moreover, while template-free methods like UniRXN, ChemBO, and MoleculeChef are capable of accurately predicting the products of reactions given reactants, they do not specify exactly the reaction used at each step - valuable information for chemists to prioritize easy-to-synthesize molecules out of a large pool of generated candidates. We present a simple alternative to these methods that avoids both the time cost of pre-training and provides template-based synthetic routes.
# Scalability to Larger Fragment Libraries
As stated in the manuscript, we recognize that one of the limitations of our approach is degrading performance with very large fragment libraries. However, as explained before, it is not necessarily our goal to support very large libraries in the first place, as they can be impractical if one wants to achieve rapid in-lab synthesis. Still, we recognize that their support can be useful in some settings, and we provide a mechanism for improving scalability (using fingerprint-based action embeddings). We additionally would like to point out that 1) we believe decreasing performance is somewhat expected, since by adding more fragments we increase the state space size exponentially, and it takes the model longer to converge; 2) still, even in the considered setting, the number of discovered modes remained large, suggesting the feasibility of using large libraries to increase chemical diversity (while preserving a high number of discovered modes). The first point was illustrated in another experiment we conducted (attached with the figures), in which we count the number of discovered high-reward scaffolds in the last 10k samples throughout training. There we can see that as the model trains, the gap between small and large fragment sets decreases.
# Additional Results
As requested by the Reviewers, we included as one of the baselines FGFN with the SA score as one of the reward terms. We considered one more task, the DRD2 oracle from the TDC repository, as requested by another Reviewer (with results for SyntheMol still computing). We also included an additional baseline, RxnGenerator [2], but due to a very slow generation speed we have at this time only obtained results for the DRD2 task with a small number of generated molecules. While it achieves similar synthesizability, it discovers significantly fewer modes. Finally, we would like to indicate that there was an error in the original results presented for FGFN in the senolytic generation task, in which we did not scale the proxy values properly. We attach the updated plots, but would like to point out that this did not change the overall conclusions, as FGFN still discovers only a very small number of high-reward molecules. Still, we apologize for that mistake.
[1] Huang, Kexin, et al. "Therapeutics data commons: Machine learning datasets and tasks for drug discovery and development." arXiv preprint arXiv:2102.09548 (2021).
[2] Nguyen, Dai Hai, and Koji Tsuda. "A generative model for molecule generation based on chemical reaction trees." arXiv preprint arXiv:2106.03394 (2021).
Pdf: /pdf/c8c9d8e5685e74068ca0229fe6d35149a85fcd48.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Flexible mapping of abstract domains by grid cells via self-supervised extraction and projection of generalized velocity signals | Accept (poster) | Summary: This work concerns stimuli that come from an underlying low-dimensional latent space. It studies a path-integrating problem on this space, in which a network is shown two images, has to infer a displacement signal between these images, and use it to traverse the space in a few different ways specified by a series of losses. It uses this as a model of velocity extraction for abstract cognitive maps in neuroscience.
Strengths: I agree with the choice of problem. The brain must be extracting abstract velocity signals when it is traversing structured spaces. This is therefore an interesting and relevant problem.
Further, the stated network and losses do, indeed, nicely solve the problem.
Finally, the exposition is, for the most part, clear and compelling. The figures were very nice.
Weaknesses: First, I think the framing with respect to previous work is needlessly confrontational, leading to some silly claims. Currently it is framed as if the proposed approach were a fundamentally different way of constructing a cognitive map. I think this is wrong; rather, it seems like a useful addition that could coherently be included in existing models, making them more powerful. Relative to CSCG, TEM and models like them, I think this work is simply answering a different part of the cognitive mapping question. As the authors correctly point out, these models require a velocity signal (though that signal has no semantics). They then learn the meaning of the velocity signal (for example, that north is the opposite of south) and use that to create a representation that is claimed to match neural data. In contrast, the authors' model is a model of velocity extraction, of turning pairs of points into a velocity signal. You could well imagine a combined model that extracted velocities according to the authors' scheme and used them to drive a CSCG/TEM-like model (i.e. the function $g(i_1, v)$ would be CSCG/TEM). This would ideally have the cross-environment generalisation of CSCG/TEM (which the authors' model doesn't have) and the cross-velocity-modality generalisation of the current model. I think SR is not really a model of cognitive mapping, so I'm not too worried about the comparisons there.
Related to this, the claim that the velocity signals are learned independently of the stimuli also seems misleading (figure 1g). The same network, f, cannot transfer across sets of stimuli, even if they share the same underlying structure, unlike CSCG/TEM. Have I misinterpreted? Should I take this to mean the same velocity is extracted independent of where in the space that velocity begins?
The silly claims that I think this leads to are the claims of novel predictions. Everyone and their mother would have predicted, from the moment the stretchy birds experiment worked, that the cellular basis of the grid signal would be modules of grid cells behaving in the same way they do in physical space. Yet, in lines 78-82 and 257-265 the authors make exactly this series of claims and frame it as a novel contribution. This is certainly a prediction, but it is far from unique to this work! Every model of grid cells, of which there are likely hundreds, is ambiguous to whether the variable being encoded is physical or abstract space!
A further confusion for me was the requirement for the latent code to be a linear function of the true velocities. Why is it so important? (This is implicitly assumed in the loss calculated in the table, though I couldn't find the loss described in detail anywhere.) I see that the shortcut estimation loss and the isotropy loss build this linearity assumption into the velocities (though it seems from the ablation studies that it is actually not necessary to make the representation mostly linear), but I wonder whether it is not more impressive the fewer losses you use? It adds a lot of precision and clarity to be able to say that just the next-state prediction loss is enough to get a nonlinear encoding of the velocity, and, from the ablation study, that adding loop closure then gets you a linear representation.
Finally, I thought both the comparison to other techniques and the grid cell results were slightly gimmicky. That's fine, but I don't give them a huge amount of weight as a result. Of course, if you have a perfect velocity code and feed it to a hexagonal grid cell model it will look like a hexagon. Further, why should I expect other dimensionality reduction techniques to learn the velocity? It seems like that is an assumption about the dataset (that all pairs of points separated by the same velocity signal correspond to the most meaningful axes on which to describe differences between points). PCA extracts a different, but also meaningful, encoding. It is interesting that this is not the same subspace, and certainly it shows the proposed method is the best at this, but the phrasing made it sound like a key contribution is beating these other methods (lines 74-77). In reality, it seems like your loss is looking for a specific type of dimensionality reduction that your model is built for and the others aren't. It's still a non-trivial result, and don't get me wrong, using this technique for dimensionality reduction sounds interesting, but I think the framing is currently way off.
Two smaller points:
The stated equivalence between velocities summing to 0 and inputs being the same assumes no aliasing (i.e. no repeated stimuli at different positions) (bottom fig 2c); in reality the two are not equivalent (velocities summing to 0 implies $i_1 = i_N$, but not the other way around). It should either be relaxed to a one-directional implication, or the assumption of no aliasing should be stated.
I think the velocities in the latent space were literally added (like vector addition) to compute the shortcut loss; is that true? If so, it should be highlighted that this limits the spaces in which this can be used to ones in which velocities commute. For example, unless I've misunderstood, you cannot then use this for spheres; this should be mentioned as a limitation.
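A minimal sketch of the commutativity point above (ours, not the reviewer's): displacements in a flat latent space commute under vector addition, but displacements on a sphere (rotations) do not, so composing them as sums is only valid in the former case.

```python
import math

def rot_x(t):
    """3x3 rotation about the x-axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    """3x3 rotation about the z-axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Flat-space velocities commute: u + v == v + u, so summing them is safe.
u, v = (1.0, 2.0), (3.0, -1.0)
assert tuple(p + q for p, q in zip(u, v)) == tuple(q + p for q, p in zip(u, v))

# Spherical displacements (rotations) do not commute: R_x R_z != R_z R_x.
ab = matmul(rot_x(math.pi / 2), rot_z(math.pi / 2))
ba = matmul(rot_z(math.pi / 2), rot_x(math.pi / 2))
print(abs(ab[0][1] - ba[0][1]) > 0.5)  # True: composition order matters
```

This is exactly the reviewer's concern: a shortcut loss built on vector addition of latent velocities implicitly restricts the method to spaces with commuting (abelian) displacement groups.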
Technical Quality: 3
Clarity: 3
Questions for Authors: My questions are largely included in the weaknesses section. Further more details of the exact form of the losses would be nice.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations were not very clearly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and for raising excellent questions. We will try to address all questions one-by-one.
> **Relative to CSCG, TEM and models like them, I think this work is simply answering a different part of the cognitive mapping question**
Thank you for highlighting these points and for your careful review!
As we note in the text, the crucial difference between our approach and those of others such as TEM and CSCG, is the following: previous approaches perform simultaneous learning of representations of external states and transitions between states; in contrast, we propose that only transitions need to be learned, which are then processed by a reusable grid attractor to enable integration and mapping. Thus, velocity-extraction can be done more simply relative to models that would involve both representational embedding and transition learning. However, we completely agree with the reviewer’s point that our model can augment models such as TEM and CSCG, by taking advantage of our model’s state-independent extracted velocities (or equivalently, actions). We will note this in our revised text and we will correspondingly edit the framing of our comparison with background work, which we did not intend to be confrontational.
> **Claim that the velocity signals are learned independently of the stimuli also seems misleading (figure 1g)**
We claim that the extracted velocity signals are independent of the stimuli _within each environment_. We do not make any claims that a network trained on one environment should be out-of-the-box generalizable to other abstract domains. Fig. 1g shows the extraction of generalized velocity signals within an example environment, Stretchy Blob. What we are saying is that the same velocity is extracted at various points within a space, independent of the specific location in that space. We will clarify this additionally in the text.
> **Yet, in lines 78-82 and 257-265 the authors make exactly this series of claims and frame it as a novel contribution. This is certainly a prediction, but it is far from unique to this work!**
Thank you for the opportunity to clarify!
Our prediction here is not simply that grid cells can encode physical and abstract space, or that grid cell-like tuning exists across different spaces. Our prediction is, rather, that cell-cell correlation is preserved _across_ different spaces. The key nuance in our framework is the use of a single prestructured grid cell attractor network across different spatial and non-spatial environments through being able to extract generalized velocities in each task domain. Because we hypothesize that the brain generates velocity signals and pipes them into a rigid, prestructured set of attractor networks, we can predict that the cell-cell correlations within each grid network should be preserved across different modalities (i.e. if two grid cells are co-active for a spatial task, they should remain coactive for a non-spatial task). Since other models (such as TEM) do not use prestructured attractor networks, and learn the appropriate representations from scratch in different domains, it is unclear whether the cell-cell correlations will be preserved across domains/modalities. Although our conclusions may seem straightforward, other computational models that build cognitive maps (such as TEM, CSCG, SR) do not use fixed, prestructured attractors. We are unaware of these models being able to make such a prediction.
> **The requirement for the latent code to be a linear function of the true velocities**
Since grid cells perform path integration through linear addition of velocity vectors, constraining our latent space to also be a linear function of the true velocity signals was crucial. Indeed, as the reviewer correctly noted, linearity is built into our choice of losses. In fact, the loop-closure loss imposes a linearity constraint (as can also be seen through our ablation studies; we will add a brief explanation of why loop closure ensures linearity in the appendix), with the additional shortcut and isotropy losses refining the obtained linear representations of velocity estimates. What the reviewer noted is accurate: next-state prediction leads to nonlinear representations of velocities, loop closure makes the encoding linear, and additional terms then refine the representations. We thank the reviewer for this succinct description and will include it in the text.
> **I couldn't find the loss [calculated in the table] described in detail anywhere**
We apologize for not including these details. We will include additional details in the text describing calculation of the error used in Table 1. We provide a brief description of the error metric here.
As discussed above, we aim for the estimated velocities $\hat{\mathcal{V}}$ to be a linear function of the true velocities $\mathcal{V}$. Correspondingly, we first estimate the best-fit linear transformation $T$ from $\hat{\mathcal{V}}$ to $\mathcal{V}$ via a pseudoinverse, $T = \mathcal{V} \hat{\mathcal{V}}^\dagger$. Then, we compute our error metric as a normalized mean-squared error between the $N$ transformed points and the true distribution: $e = \frac{|| T\hat{\mathcal{V}} - \mathcal{V} ||_2}{N \text{var}( \mathcal{V} ) } $. The presence of outliers, particularly in some of the poorly performing baseline methods, can lead to particularly poor best-fits $T$. To alleviate this, we find the transformation $T$ after removing a small number of outliers in $\hat{\mathcal{V}}$ via the DBSCAN clustering algorithm (any outlier rejection tool will suffice); however, we report the error $e$ evaluated on the entire dataset including outliers.
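For concreteness, here is a minimal numpy sketch of the transformation-error computation described above (omitting the DBSCAN outlier-rejection step; the array shapes and the exact normalization are our illustrative assumptions, not our released code):

```python
import numpy as np

def transformation_error(v_hat, v_true):
    """Normalized error between estimated and true velocities after a
    best-fit linear map. v_hat, v_true: shape (d, N), one velocity per column."""
    # Best-fit linear transformation T from v_hat to v_true via pseudoinverse.
    T = v_true @ np.linalg.pinv(v_hat)
    n = v_hat.shape[1]
    residual = T @ v_hat - v_true
    # Mean squared residual normalized by the variance of the true velocities.
    return np.sum(residual ** 2) / (n * v_true.var())

# Estimates that are an exact linear function of the truth give ~zero error.
rng = np.random.default_rng(0)
v = rng.normal(size=(2, 100))           # true 2-D velocities
A = np.array([[2.0, 1.0], [0.5, 3.0]])  # arbitrary invertible linear map
assert transformation_error(A @ v, v) < 1e-10
```

As the final assertion illustrates, the metric is invariant to any invertible linear transformation of the estimates, which is exactly the property needed when the latent velocities are only required to be a linear function of the true ones.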
---
Rebuttal 2:
Title: Rebuttal, Part 2
Comment: > **Why should I expect other dimensionality reduction techniques to learn the velocity?**
> **PCA extracts a different, but also meaningful encoding. It is interesting that this is not the same subspace, and certainly it shows the proposed method is the best at this, but the phrasing made it sound like a key contribution is beating these other methods**
This is an excellent question.
By construction, the datasets that we have generated have a specified intrinsic dimensionality (1, 2 or 3, depending on the environment). For example, in the Stretchy Bird environment, there exists a minimal representation of the dataset that is captured by a two-dimensional coordinate: neck and leg lengths. Thus, within the high-dimensional space in which the raw inputs live, the dataset is explicitly constructed to lie on a (nonlinear) two-dimensional manifold. If a dimensionality reduction method were able to flatten this manifold and represent it in two dimensions, differences between points would produce a veridical representation of velocities.
While PCA does provide an encoding of the data (that describes the projection of the data onto the principal components), it does not provide a meaningful representation, since it is unable to find even a low-dimensional representation of the data. We demonstrate our viewpoint further through the new Fig. R3 in the attached PDF. Here, we examine a dataset generated through a single trajectory within our 2D Moving Blobs environment. Through integration of our velocity estimates, we find that the high-dimensional states collapse onto a two-dimensional plane, and are thus completely captured by two coordinates. In contrast, PCA appears to require 24 dimensions to capture 95% of the variance within the data, and appears to occupy a volume when plotted in three dimensions. Since PCA is unable to find a low-dimensional representation of a dataset that is intrinsically two-dimensional, we believe that the projection learned by PCA is not strongly meaningful for this dataset. These new results continue to point towards dimensionality reduction as a key contribution and strength of our model. As a result, we felt that it was reasonable to use the other dimensionality reduction techniques as necessary baselines, which the other reviewers have appreciated. In summary, we find that integrating estimates of low-dimensional transitions between high-dimensional states can be an effective tool for dimensionality reduction.
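To make the intrinsic-dimensionality point concrete with a self-contained toy example (synthetic data of our own construction here, not the Moving Blobs dataset): when an intrinsically two-dimensional latent space is nonlinearly embedded in high dimensions, linear PCA requires far more than two components to explain most of the variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Intrinsically two-dimensional latent coordinates (a stand-in for two
# generative parameters such as neck and leg lengths; purely synthetic).
z = rng.uniform(-1, 1, size=(2000, 2))

# Nonlinear embedding into 50 dimensions via random sinusoidal features.
W = rng.normal(scale=2.0, size=(2, 50))
b = rng.uniform(0, 2 * np.pi, size=50)
X = np.sin(z @ W + b)

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)
n_components_95 = int(np.searchsorted(explained, 0.95) + 1)

# Although the intrinsic dimensionality is exactly 2, linear PCA
# spreads the variance across many more components.
assert n_components_95 > 2
```

A method that instead integrates local transitions along trajectories can recover the two latent coordinates directly, which is the behavior shown in Fig. R3.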
> **Stated equivalent between velocities summing to 0 and inputs being the same assumes no aliasing**
Thank you for pointing this out. We had implicitly assumed that within the abstract cognitive spaces we investigate, states are unique at each point in space. We will make this assumption explicit in the text.
> **You cannot then use this for spheres**
Thank you for raising this subtlety in our model. We do in fact assume that the velocity vectors in the space of our latents commute. As a result, the estimated velocity vectors cannot directly represent tangent vectors in a non-Euclidean space, such as a sphere. We will add this note to the paper. However, this does _not_ entirely preclude the representation of these non-Euclidean spaces through our method, since we can consider these spaces as embedded in a higher dimensional Euclidean space wherein velocity vectors will commute again.
For example, in representing a point in an abstract domain with spherical geometry, our method will fail if we attempt to estimate two-dimensional velocity vectors representing transitions. However, if instead we estimate three-dimensional vectors, we expect our method to continue to work since the surface of a sphere can be embedded in three-dimensional space. For data arising from a general manifold, we will require a few extra dimensions (cf. Whitney embedding and Nash embedding theorems) but will always be able to consider a Euclidean space of sufficient dimensions that will embed the data generating manifolds. We thank the reviewer for leading us to consider this subtlety and will include this cost of requiring extra dimensions for non-flat spaces as a limitation in the discussion section of our paper.
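As a standalone numeric illustration of this subtlety (a toy check, not part of our model): finite displacements on the sphere, realized as rotations, do not commute, whereas displacement vectors in the three-dimensional Euclidean embedding space add commutatively.

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation about the y-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

p = np.array([0.0, 0.0, 1.0])  # a point on the unit sphere

# Two finite "velocities" on the sphere, applied in different orders.
ab = rot_y(0.4) @ rot_x(0.3) @ p
ba = rot_x(0.3) @ rot_y(0.4) @ p
assert not np.allclose(ab, ba)  # sphere displacements do not commute

# Euclidean 3-D velocities (chords in the embedding space) do commute.
u, v = np.array([0.1, 0.2, -0.1]), np.array([-0.3, 0.05, 0.2])
assert np.allclose(p + u + v, p + v + u)
```

This is why estimating three-dimensional (embedding-space) velocity vectors restores commutativity for spherical domains, at the cost of the extra dimensions noted above.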
---
Rebuttal 3:
Title: Rebuttal, Part 3
Comment: > **Further more details of the exact form of the losses would be nice**
Thank you for the comment. We have currently listed the description of each loss in L173-195 along with equations in Fig. 3 and are happy to explicitly write out our loss functions and other algorithmic details in the appendix; we briefly describe them here:
Given two states $i\_t$ and $i\_{t+1}$, the next-state prediction loss minimizes the distance between the predicted state and the true state: $\text{min} ~ || i\_{t+2} - \hat{i}\_{t+2}||\_2$.
The loop closure loss ensures that the predicted velocities along a trajectory sum to zero when the trajectory is a loop: $\text{min} ~ || \sum\_{0 \leq t \leq T-1} \hat{v}\_{t \rightarrow t+1} || = \text{min} ~ || \oint \hat{v} \, dt ||$.
The shortcut loss ensures that the decoder $g$ can generalize given a state and a composed velocity. For instance, $\text{min} ~~ || g(i\_{t+2}, \hat{v}\_{t+2 \rightarrow t+3} + \hat{v}\_{t+3 \rightarrow t+4}) - \hat{i}\_{t+4} ||$.
Finally, the isotropy loss induces an isotropy in the inferred velocity space: $\text{min} ~ \text{var} [ ||\hat{v}\_{t \rightarrow t+1} || ~ \mid ~ d(i\_t, i\_{t+1}) < \theta ]$, where $d$ is a similarity function in the input image space and $\theta$ is some small threshold.
All losses have a weight / prefactor, listed in Table 2 of our paper. As mentioned in L200, regardless of the training environment, the relative weighting of the two critical loss terms remains consistent (the loop-closure loss weighted ten times higher than the next state prediction loss). We also plan to discuss more about how each synthetic task was generated, as also recommended by $\textbf{sK54}$.
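As a minimal standalone illustration of the loop-closure term (a numpy sketch with illustrative shapes, not our training code):

```python
import numpy as np

def loop_closure_loss(v_hat):
    """Loop-closure penalty: estimated per-step velocities along a closed
    trajectory should sum to the zero vector.
    v_hat: array of shape (T, d), one estimated velocity per step."""
    return np.linalg.norm(v_hat.sum(axis=0))

# A trajectory that returns to its start: true velocities sum to zero.
rng = np.random.default_rng(2)
steps = rng.normal(size=(9, 2))
loop = np.vstack([steps, -steps.sum(axis=0)])  # final step closes the loop
assert loop_closure_loss(loop) < 1e-12  # closed loop incurs ~no penalty
assert loop_closure_loss(steps) > 0.0   # open trajectory is penalized
```

Only a linear encoding of velocity can make this penalty vanish on every closed trajectory simultaneously, which is the intuition behind the linearity argument above.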
---
Rebuttal 4:
Comment: We hope that you agree that our paper has improved given your detailed feedback and suggestions! Given that we have addressed the primary concerns raised in the review, we kindly ask you to adjust your score while taking our rebuttal into account.
---
Rebuttal Comment 4.1:
Title: Response
Comment: Thank you for your extensive response, I appreciate the assiduousness and engagement.
Broadly, the comments answer my questions, especially about aliasing and spheres.
I continue to disagree about the importance of the dimensionality reduction comparison to e.g. PCA. Sure, you generate datasets in which differences are a better way to extract out the underlying space than PCA. Your method, aimed at extracting velocities, therefore does better. That is wonderful, but I believe my original comments still hold.
Lastly, you got me musing about what it is we do when we build such models of the brain (cue authors' groan). I continue to think that the prediction of preserved cell-cell correlations across domains is what everyone and their mother (I checked with my mother) would have thought post-stretchy birds experiment, perhaps I am too new to the field and missed some 00/10s era debate on this. This model shows that, if you can design a scheme for extracting velocities, such an idea works.
But, could it ever not have done? If you are capable of extracting velocities from pairs of images correctly then of course you can pipe them into a reusable grid cell system in which such cell-cell correlations are preserved. Did we need a model to tell us that?
The novelty of your work seems to be in the design of a system to extract velocities, the losses etc. that can make it work, and using it to do dimensionality reduction on appropriate datasets. Hypothetically, you could make the same attack at models like TEM/CSCG/SR, that they take a computation, and simply show a model of it working, but (a) I think I learnt more from the algorithmic details needed than I did here and much more importantly (b) they predict neural behaviour!
So it seems the potential neuroscience value I am drawing from this model is about the losses that are required to make it work. I do find it interesting so many losses are needed to make it perform well, I would have guessed that just next-state prediction and your architectural choices might have done it (and who knows, with another setup it still might). But I'm afraid given my above thoughts, despite the paper being well-explained, thorough, correct, and generally nice science, I will keep my score.
If you think there is something that has still not made it through my thick skull please do let me know.
---
Reply to Comment 4.1.1:
Title: Response, Part I
Comment: Thank you for your response to our rebuttal, and for recognizing our work as being well-explained, thorough, correct and containing generally nice science.
> **I continue to disagree about the importance of the dimensionality reduction comparison to e.g. PCA. Sure, you generate datasets in which differences are a better way to extract out the underlying space than PCA. Your method, aimed at extracting velocities, therefore does better. That is wonderful, but I believe my original comments still hold.**
We do not believe our improved performance in comparison with PCA is simply because of our choice of datasets. Through the idea of velocity extraction as a means to perform dimensionality reduction, we can leverage local transition structures in the original data manifold that sequence-agnostic methods (e.g., PCA and UMAP) cannot use. Thus, our method can utilize the crucial trajectory information in the data to perform more efficient dimensionality reduction. In this context, we do acknowledge that our method assumes the presence of trajectories characterized by continuous variation in inputs; however, we believe this is not a restrictive assumption, as many datasets, including video data, typically fall into this category. An examination of larger, more realistic datasets lies within the scope of future work that we plan to pursue.
You mention that you believe that your original comments in this context still hold. Our understanding of your original comments was that you had three primary questions/concerns: (1) One may not expect other dimensionality reduction methods to extract a velocity. (2) PCA may still be extracting a useful embedding that happens to not correspond to the velocities that we extract. (3) Our loss terms targeted a specific type of dimensionality reduction that other methods did not.
In our earlier response, we attempted to respond to each of these concerns: (1) We argued that any dimensionality reduction method should be expected to extract a velocity, since the data lies on a very low-dimensional manifold embedded in the high-dimensional space. (2) Since PCA was unable to find a low-dimensional representation, we argued that it was not a meaningful dimensionality reduction. (3) _Yes_, our loss terms do look for dimensionality reduction based on transitions, which other methods are unable to take into account; this allows us to perform more effective dimensionality reduction, as evidenced by extraction of a two-dimensional embedding of the data in a dataset where other methods failed to extract low-dimensional representations, even though the original data lived on a two-dimensional manifold.
We hope this response has helped to clarify the salience of our approach and address the concerns you raised in your original comments. If there are any unresolved questions about the importance of our method as a dimensionality reduction technique, we would appreciate further discussion to understand your perspective.
---
Reply to Comment 4.1.2:
Title: Response, Part II
Comment: > **Prediction of preserved cell-cell correlations across domains is what everyone and their mother (I checked with my mother) would have thought post-stretchy birds experiment…But could it ever have not been done? If you are capable of extracting velocities from pairs of images correctly then of course you can pipe them into a reusable grid cell system in which such cell-cell correlations are preserved. Did we need a model to tell us that?**
Predicting preserved cell-cell correlations across domains of different modalities may appear straightforward; however, it critically depends on two key ideas: (1) that low-dimensional velocities can indeed be extracted from abstract domains, and (2) that the _same_ continuous-attractor-based grid module can perform integration across domains. The bulk of our work establishes (1), through constructing an SSL framework for this velocity extraction that performs better than (both deep and non-deep) baselines.
Note that simply the assumption of a single continuous-attractor-based grid module would not be sufficient, since typical continuous attractor models require a low-dimensional velocity input, and one would thus need to first establish that velocities can be extracted across inputs from different modalities. To summarize, in making predictions for grid cell correlations, we assume a continuous attractor network based grid model and provide a computational basis for linking such an attractor to inputs from different modalities through our framework for velocity extraction.
To establish that across-domain preservation of cell-cell correlations is a nontrivial prediction that does not follow directly from pre-existing literature, we point towards the other related models of cognitive mapping mentioned by the reviewer: (1) TEM learns grid cell representations from scratch in any given environment. If presented with a new environment, TEM requires re-learning of the affordances and statistics of the new environment, and as a result cells that were correlated within an environment of one modality are not guaranteed to be correlated in a different modality. (2) SR does not produce preserved cell-cell correlation structure across environments even within the same modality, and (3) CSCG does not provide grid responses with which to compare cell-cell correlations.
In the context of these above models, we hope it is clearer that cell-cell correlations being preserved across modalities does not trivially follow from previous modeling work. We are also unaware of such a conclusion being drawn in the literature in the light of previous experimental work, such as the stretchy bird experiments, sound modulation experiments, or any others. We would be grateful if the reviewer could point us towards such a reference that has already made the predictions that we have stated in our work.
> **I think I learnt more from the algorithmic details needed than I did here and much more importantly (b) they predict neural behaviour!**
The crucial difference between our approach and those of others such as TEM and CSCG, is the following: previous approaches perform simultaneous learning of representations of external states, and transitions between states; in contrast, we propose that only transitions need to be learned, which are then processed by a reusable grid attractor to enable integration and mapping.
Approaching the cognitive mapping problem from a velocity-extraction-first perspective means that the mapping of the environment to grid states can be performed simply by a prestructured continuous attractor network, instead of something more complex like TEM. Thus, in terms of mapping spaces, our model can be thought of as making predictions for neural behavior corresponding to those obtained from the continuous attractor model, which can be distinct from the predictions made by the grid cells in other hippocampal models. If one wishes to examine neural predictions for hippocampal cells rather than grid cells, our method points towards the usage of a hippocampal complex model that can use prestructured attractor dynamics, such as Chandra et al. 2023 bioRxiv 2023.11.28.568960v2.
---
Reply to Comment 4.1.3:
Title: Response, Part III
Comment: > **I do find it interesting so many losses are needed to make it perform well, I would have guessed that just next-state prediction and your architectural choices might have done it**
The number of losses needed for our model to “perform well” effectively depends on what criteria are being imposed in defining “well”.
If we are interested in extracting some nonlinear low-dimensional velocity between states in any abstract domain, then just the next-step prediction loss is sufficient, as seen in Fig. 9. If we additionally demand that the estimated velocities be linear functions of the ground truth velocities, then the loop-closure loss needs to be added. If we then additionally demand isotropic representations of velocities, we need to include our isotropy loss. And, if we require further refinement, with very precise encoding of velocities, we include our shortcut loss. As indicated in our earlier rebuttal, we will certainly include an updated description of our model results to clarify this point.
As such, we argued earlier that grid cells would require linear representations of velocities and thus posited that next-step prediction and loop-closure were essential losses to consider. We included isotropy and shortcut losses as additional auxiliary losses to help refine the obtained solution, and will make it clearer in the text that these are not essential to any of our key results. We are happy to include an elaboration of this discussion in text, and will also include a proof to show that the loop-closure loss imposes a strict linearity constraint on its inputs.
> **But I'm afraid given my above thoughts, despite the paper being well-explained, thorough, correct, and generally nice science, I will keep my score.**
We hope our responses have further established the key interesting aspects of our results and would appreciate a re-evaluation of our score in this context. | Summary: This paper develops a new dimensionality reduction approach based on velocity in latent space. It is inspired by, and tries to emulate, aspects of entorhinal grid cell activity. The critical ingredient is a "loop-closure" constraint that drives the model to build a metric map of the latent space. The results demonstrate advantages of this approach over traditional dimensionality reduction algorithms.
Strengths: - The paper proposes a novel solution to an interesting and important problem.
- Writing and visualizations are very clear (some minor comments on this below).
- The experimental results are impressive relative to baselines.
- The work will be potentially impactful within neuroscience. I'm less sure about AI (see Weaknesses).
- I appreciated that the authors laid out the predictions of their framework. It would be great to see these carefully tested.
Weaknesses: - With respect to the brain, I think the model makes some assumptions of questionable plausibility (see Questions below), but I would be interested to hear if the authors disagree.
- I didn't feel that the authors made a really compelling case for AI applications, though I understand the potential utility in principle.
Minor:
- p. 3: Technically the state-based SR doesn't assume action inputs, since it's implicitly taken an expectation over actions. It's true that the state-action SR does take actions as input.
- p. 3: "learns simultaneously learns" -> "simultaneously learns"
- p. 8: "These inferred velocities" -> "With these inferred velocities"
- p. 8: "allow reuse grid cells" -> "allow reuse of grid cells"
- Somewhere in the supplement, the authors should completely spell out their loss function and other algorithmic details.
Technical Quality: 4
Clarity: 4
Questions for Authors: - How does the model deal with boundary/obstacle effects? This seems like a situation where the effect of the velocity operator is not state-independent.
- Related to the previous point, if I've understood correctly, the model assumes that state variables live in R^D (it would be helpful if this was made explicit), so there are no boundary conditions. How do the authors think about this in relation to the toroidal topology that appears to constrain grid cell population activity?
- If the claim is that this model emulates how the entorhinal cortex learns velocity-based representations, I wonder how the authors think about the fact that often the training data animals receive will not have that many loops. For example, how many loops were there in the experiments on stretchy birds? What happens if the model doesn't have loops in its training set? What about trajectories that are *almost* loops?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discuss future directions at the end, but don't directly address limitations. There are no potentially negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and for raising excellent questions. We will try to address all questions one-by-one.
> **State-based SR doesn't assume action inputs**
Thank you for this note. We will clarify this in our text and distinguish between state-based SRs and state-action SRs. We additionally note that the original SR paper (Dayan 1993) and recent extensions (Stachenfeld 2017) use a discrete action space whether SR is state-based (i.e. expectation over actions) or state-action based.
> **Grammatical changes in text**
Thank you for suggesting these revisions! We will change them in text.
> **Somewhere in the supplement, the authors should completely spell out their loss function and other algorithmic details**
Thank you for the comment. We have currently listed the description of each loss in L173-195 along with equations in Fig. 3 but are happy to explicitly write out our loss functions and other algorithmic details in the appendix; we briefly describe them here:
Given two states $i\_t$ and $i\_{t+1}$, the next-state prediction loss minimizes the distance between the predicted state and the true state: $\text{min} || i\_{t+2} - \hat{i}\_{t+2}||\_2$.
The loop closure loss ensures that the predicted velocities along a trajectory sum to zero when the trajectory is a loop: $\text{min} || \sum\_{0 \leq t \leq T-1} \hat{v}\_{t \rightarrow t+1} || = \text{min} || \oint \hat{v} \, dt ||$.
The shortcut loss ensures that the decoder $g$ can generalize given a state and a composed velocity. For instance, $\text{min} || g(i\_{t+2}, \hat{v}\_{t+2 \rightarrow t+3} + \hat{v}\_{t+3 \rightarrow t+4}) - \hat{i}\_{t+4} ||$.
Finally, the isotropy loss induces an isotropy in the inferred velocity space: $\text{min} ~ \text{var} \left[ ||\hat{v}\_{t \rightarrow t+1} || \mid d(i\_t, i\_{t+1}) < \theta \right]$, where $d$ is a similarity function in the input image space and $\theta$ is some small threshold.
All losses have a weight / prefactor, listed in Table 2 of our paper. As mentioned in L200, regardless of the training environment, the relative weighting of the two critical loss terms remains consistent (the loop-closure loss weighted ten times higher than the next state prediction loss). We also plan to discuss more about how each synthetic task was generated, as also recommended by $\textbf{sK54}$.
> **Boundary/obstacle effects**
This is an excellent question. We clarify that while the decoder is state dependent by construction (outputs a predicted state given a state and velocity), the encoder is state-independent since it infers a generalized notion of velocity between any two high-dimensional states within our abstract domains. We investigate how the decoder performs at boundaries (Fig. R4 in the attached PDF) within the 2D Stretchy Bird environment. We find that the decoder implicitly understands the boundaries of the training data’s underlying manifold. That is, once a velocity produces a bird state that is unseen in the training distribution, i.e. a bird that cannot further shrink its legs or extend its neck, further transitions in the same direction do not produce any changes in the predicted state. Thus, the decoder understands the boundaries observed in the training data.
> **Model assumes that state variables live in R^D**
Thank you for raising this point! We do indeed assume that the input state variables occupy a region of $\mathbb{R}^D$ (where $D$ is the number of pixels in the input, for example); we further assume that the estimated velocities live in $\mathbb{R}^d$ for some smaller $d$. This in and of itself is not particularly constraining: while the states of a single grid module live on a torus, the module can integrate velocities drawn from all of $\mathbb{R}^2$ without bounds. While grid states wrap around, the velocities do not necessarily need to wrap around boundaries. Further, if we include additional grid modules, grid cells can integrate velocities in the higher-dimensional Euclidean space $\mathbb{R}^d$, as has been shown by recent theoretical work (Klukas et al. 2020, PLOS Computational Biology 16(4): e1007796). We are happy to add a discussion of this point in text.
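A toy numeric sketch of this point (an illustration of the general principle via modular arithmetic, not our attractor implementation): a periodic, torus-valued phase can integrate unbounded velocities, because only the state wraps around, never the velocity.

```python
import numpy as np

def integrate_on_torus(v, phase0=(0.0, 0.0), period=1.0):
    """Integrate unbounded 2-D velocities into a phase on a torus:
    positions wrap modulo the period; the velocities themselves do not.
    v: array of shape (T, 2), one velocity per step."""
    return (np.asarray(phase0) + np.cumsum(v, axis=0)) % period

# A long straight run: total displacement far exceeds the period, yet
# the toroidal phase remains well-defined in [0, 1) x [0, 1).
v = np.tile([0.3, 0.1], (20, 1))
phases = integrate_on_torus(v)
assert phases.shape == (20, 2)
assert np.all((phases >= 0) & (phases < 1))

# Displacements differing by whole periods map to the same phase.
assert np.allclose(integrate_on_torus(np.array([[0.3, 0.1]]))[-1],
                   integrate_on_torus(np.array([[2.3, 3.1]]))[-1])
```

Stacking several such modules with different periods extends this to unambiguous integration over larger ranges and higher dimensions, in the spirit of the multi-module argument above.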
> **Concerns on loops**
Thank you for raising these excellent points!
While our models were trained with data consisting of loops, this training paradigm is simply for convenience (L179-181) and is not a necessity of the model. If arbitrary random walks were selected instead for training samples, self-intersections would automatically lead to loops within sub-sequences of the walks (as detected by $k<m$ with $i_k=i_m$) — these loops could then be used for the loop-closure loss, with all other losses applied on all loop and non-loop trajectories. As a simple proof of concept, we generated a dataset of arbitrary trajectories with 50% of the trajectories as loops and the remainder as non-loops. We applied loop closure loss on trajectories that formed loops and all other losses on the entire dataset of trajectories. Fig. R1 in the attached PDF demonstrates this paradigm for the 2D Stretchy Bird environment, where we continue to get consistent velocity representations with low transformation error to the true velocity distribution.
In greater generality, we note that the loop-closure loss can also be applied to trajectories that are "almost loops" – e.g., by scaling its prefactor with how close a trajectory comes to forming a closed loop. However, running this analysis and tuning these parameters is beyond the scope of the current rebuttal period, and we will examine these results for the camera-ready version of our paper.
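A minimal sketch of the loop-detection step described above, i.e. finding pairs $k<m$ with $i_k=i_m$ in a random walk (illustrative code with a discrete grid walk; our actual data pipeline may differ):

```python
import numpy as np

def find_loops(states):
    """Return (k, m) index pairs with k < m and states[k] == states[m]:
    the sub-trajectory states[k:m] forms a closed loop usable for the
    loop-closure loss. States must be hashable (e.g. grid coordinates).
    Each revisit is paired with the first visit to that state."""
    first_seen = {}
    loops = []
    for m, s in enumerate(states):
        if s in first_seen:
            loops.append((first_seen[s], m))
        else:
            first_seen[s] = m
    return loops

# Random walk on a 2-D integer grid: self-intersections close loops.
rng = np.random.default_rng(3)
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
walk = np.cumsum(moves[rng.integers(0, 4, size=200)], axis=0)
states = [tuple(p) for p in walk]
loops = find_loops(states)
assert all(k < m and states[k] == states[m] for k, m in loops)
```

The loop-closure loss would then be applied to each detected sub-sequence, with the remaining losses applied to all trajectories, loop or not.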
> **Lack of limitations**
We apologize for not including sufficient discussion of limitations. We will add a discussion of the limitations to our model in text, as also mentioned to Reviewer $\textbf{sK54}$.
---
Rebuttal 2:
Title: Rebuttal, Part 2
Comment: > **Compelling case for AI applications**
We believe our work has potential application to machine learning beyond neuroscience, primarily through the context of dimensionality reduction.
A crucial step in analysis of large datasets of high-dimensional data is often the generation of dimensionality reduced representations of the data. Our method provides a novel SSL framework for dimensionality reduction and manifold learning, which appears to significantly outperform baselines on datasets that contain relatively lower-dimensional transitions. Video data, for example, naturally contains transitions between frames that are typically lower dimensional than the frames themselves, and thus may be particularly amenable to our techniques.
Moreover, typical nonlinear dimensionality reduction techniques, such as Isomap and UMAP, are non-invertible. In contrast, similar to an autoencoder, we can use the decoder in our model to generate the high-dimensional states that correspond to points within the low-dimensional space. In this sense our model also acts like a generative model, by being able to generate inputs that correspond to different regions of the low-dimensional space.
Furthermore, our method naturally lends itself to manifold alignment, which is particularly effective when the data exhibits a small number of continuous modes of variability. Through mapping two spaces onto the same low-dimensional space of transitions, our technique can perform equivalent transformations on datasets from different modalities. Moreover, with a small number of “gluing” points, our method allows for building one-to-one correspondences between different domains.
We will be glad to include these discussion points in the camera ready version of the text.
---
Rebuttal Comment 2.1:
Title: Good points
Comment: Thanks for clarifying this! I think these are reasonable points to add to the discussion.
---
Reply to Comment 2.1.1:
Comment: We hope that you agree that our paper has improved given your detailed feedback and suggestions! As we have addressed the primary concerns raised in the review, we kindly ask you to adjust your score while taking our rebuttal into account. | Summary: This work proposes a novel neural circuit model that explains how grid cells in the medial entorhinal cortex can map both spatial and abstract cognitive spaces. The model extracts low-dimensional velocity signals from high-dimensional abstract domains using self-supervised learning, leveraging grid cells’ spatial velocity integration capabilities. This allows the creation of efficient cognitive maps across different domains.
Strengths: 1. Although the idea of using grid cells as a scaffold for representing cognitive maps is not new, the ability to apply continuous actions (‘velocity’ in this paper) and learn from the data is indeed a highlight.
2. The paper conducts comprehensive ablation studies to verify the rationality of its objective function.
3. The comparison with baselines is also quite thorough.
Weaknesses: 1. The paper’s starting point is that grid cells can serve as a scaffold for different cognitive domains, as reflected in the title, but the main content focuses on the extraction of velocity. There is little discussion about grid cells, and the description of how the continuous attractor network of grid cells works is insufficient.
2. The paper compares the performance of various baselines. The experimental results show that the presented model significantly outperforms the baselines in dimensionality reduction, but there is no clear explanation for why this model is superior to the baselines.
3. The training process relies on specially constructed datasets (loops). It’s obvious that organisms do not need to return to the original point to achieve good representations.
4. The authors didn’t discuss the situation when the dimension of velocity is higher than grid cells (2D).
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and for raising excellent questions. We will try to address all questions one-by-one.
> **There is little discussion about grid cells, and the description of how the continuous attractor network of grid cells works is insufficient.**
Thank you for highlighting this. We briefly describe why extracting velocity signals from non-spatial, abstract domains is an important prerequisite challenge for grid cells to path integrate between states in these domains (L35-38, L95-98, and Fig. 1 caption). Here we provide an additional explanation: continuous attractor models for grid cells have gained much favor through repeated confirmation via experimental results. However, relating these continuous attractor models to mapping of abstract domains requires the extraction of a faithful representation of velocity in these domains. This is in contrast to other models for spatial mapping, such as TEM, which learn grid cell-like representations from scratch while mapping new abstract spaces. Our work shows how it can be sufficient to use a prestructured attractor network and the experimental predictions that can arise from this modeling choice. We will elaborate on this discussion and make the connection back to grid cells clearer in the text.
We apologize for not including a sufficiently detailed description of the continuous attractor network that we used. We provide here a brief description of the grid cell model used and will include a more detailed description in the appendix of the revised paper.
We use an approximation of a continuous attractor model for a module of grid cells: we simulate patterned activity on a lattice of neurons as the sum of three plane waves, resulting in a hexagonal pattern of activity. Input velocities are used to update the state of the activity on the lattice of neurons through updating the phases of the plane waves, leading to accurate integration of the input velocities.
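A minimal numerical sketch of this approximation (our own illustration; the wavevector angles, time step, and loop are assumed parameters, not the paper's): three plane waves at 60° separations sum to a hexagonal activity pattern, and an input velocity shifts the pattern by updating the three phases:

```python
import numpy as np

# Wavevectors of three plane waves at 60-degree separations; the sum of
# their cosines tiles the plane with a hexagonal activity pattern.
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
K = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2)

def activity(phases, coords):
    # Activity at the given 2D lattice coordinates for the current phases.
    return np.cos(coords @ K.T + phases).sum(axis=-1)

def integrate(phases, velocities, dt=0.1):
    # Each 2D input velocity updates the three phases via its projection
    # onto the wavevectors, translating the hexagonal pattern.
    for v in velocities:
        phases = phases + (K @ np.asarray(v)) * dt
    return phases

# Path integration check: a closed loop of velocities returns the phases
# (and hence the activity pattern) to their starting values.
loop = [(1, 0)] * 5 + [(0, 1)] * 5 + [(-1, 0)] * 5 + [(0, -1)] * 5
print(np.allclose(integrate(np.zeros(3), loop), 0.0))  # True
```

The phases here play the role of the attractor state: velocities that sum to zero leave the pattern unchanged, which is the accurate-integration property referred to above.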
> **Why this model is superior to the baselines**
Thank you for highlighting this point as well. While we have briefly discussed why our model outperforms other dimensionality reduction techniques in L102-105, we elaborate on this further here and will include this in the paper:
Typical dimensionality reduction techniques are agnostic to the transition structure between high-dimensional input states when drawn from a trajectory. They usually rely on the statistics of distances between points across the entire ensemble of inputs. In contrast, our method explicitly looks for a structured tangent manifold around each point that captures the low-dimensional transitions between successive states along a trajectory (see Fig. R3). Through using this additional sequential information, with a prior of assuming the existence of a tangent manifold along the manifold of input points, we obtain a superior performance. We have also previously included comparison to another technique that uses sequential information (MCNet). However, there too we find that our method is significantly better through assuming the existence of a low-dimensional description of transitions in a state-independent fashion, rather than a high-dimensional maximal description of the state-dependent transitions.
> **Training process relies on specially constructed datasets (loops)**
Thank you for raising this excellent point!
While our models were trained with data consisting of loops, this training paradigm is simply for convenience (L179-181) and is not a necessity of the model. If arbitrary random walks were selected instead for training samples, self-intersections would automatically lead to loops within sub-sequences of the walks (as detected by $k<m$ with $i_k=i_m$) — these loops could then be used for the loop-closure loss, with all other losses applied on all loop and non-loop trajectories. As a simple proof of concept, we generated a dataset of arbitrary trajectories with 50% of the trajectories as loops and the remainder as non-loops. We applied loop closure loss on trajectories that formed loops and all other losses on the entire dataset of trajectories. Fig. R1 in the attached PDF demonstrates this paradigm for the 2D Stretchy Bird environment, where we continue to get consistent velocity representations with low transformation error to the true velocity distribution.
More generally, we note that the loop-closure loss can also work on trajectories that are “almost loops” – i.e., with a prefactor term that scales with how close a trajectory is to forming a closed loop. However, running this analysis and tuning these parameters is beyond the scope of the current rebuttal period, and we will examine these results for the camera-ready version of our paper.
> **Situation when the dimension of velocity is higher than grid cells (2D)**
Thank you for raising this point. We have included an example of an environment where the dimension of the velocity was higher than two-dimensional, cf. the 3D Stretchy Bird environment. This, however, does not pose a problem for integration by grid cells. While a single grid cell module can only integrate two-dimensional velocities, previous theoretical work (Klukas et al. 2020, PLOS Computational Biology 16(4): e1007796) has shown that multiple grid cell modules can collectively represent large volumes of high-dimensional Euclidean spaces through integration of velocities in higher dimensions. We will add a brief discussion of this point in the text.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments. I believe my concerns are properly addressed and I would like to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your detailed feedback and suggestions. We hope you agree that our paper has improved as a result of this discussion. Please let us know if there are any additional questions that we may be able to address to consider a re-evaluated score.
---
Rebuttal 2:
Comment: We hope that you agree that our paper has improved given your detailed feedback and suggestions! Please let us know if there is anything else that we can clarify. We kindly ask you to consider revising your score if you agree that the primary concerns raised in the review were addressed. | Summary: This work explores how the brain could theoretically generalize its representations of velocity signals used to map spatial domains into abstract velocity signals that map abstract cognitive domains.
To do so, the paper proposes a self-supervised ANN algorithm and architecture to learn a low-dimensional latent space that describes abstract velocities (transformation parameters) between sensory inputs resulting from a parameterized low-dimensional generative process (where parameters are points in abstract space). For example, in a Stretchy Blob task, an image of a 2D Gaussian blob is the product of a generative process parameterized by [width, height], and the difference between two images/blobs can be succinctly captured as a 2D velocity corresponding to their difference in [width, height].
The self-supervised framework makes use of various loss terms that ensure the latent space is a well-formed / geometrically consistent metric space. The experimental evaluation shows that these methods successfully capture relationships across a variety of image and audio domains, better than standard dimensionality reduction baselines. When fed into a synthetic grid cell model, the learned upstream velocity signals result in much more grid-like firing patterns.
Strengths: **Significance:** The work addresses an important problem in reconciling how grid cells could be involved in both spatial and abstract tasks. This is relevant for both the neuroscience community interested in grid cells / integrators, and potentially the machine learning community interested in transfer / generalization.
**Novelty:** The algorithm builds on ideas from SSL regarding prediction and promoting locally linear / geometrically consistent latent spaces. The perspective of abstract velocities is interesting and differently motivated than typical SSL work.
**Technical Quality:** The qualitative results are convincing, with the learned velocities accurately mapping their abstract domains. The model is also elegantly simple, yet can be applied to a variety of modalities.
**Presentation Clarity:** The writing is excellent, as is the figure design. The font for annotations in the figures (e.g. Figure 4 var/axes, Figure 3 isotropy equation) could be larger, but no major revisions suggested from me.
Weaknesses: **Technical Quality**
1. **Dimensionality reduction:** Line 74. Is this a dimensionality reduction method? Can a single image be compressed into a low dimensional repr like the baselines that were compared to (e.g. autoencoder, PCA)? It seems to me that only a _pair_ of images could be compressed into a low-dimensional repr of the velocity. If trying to pass a single image twice into the encoder, a well-trained velocity would be 0.
2. **Preservation of cell-cell relationships:** Line 78, Section 2.6. I didn't follow the logic of this claim of what the model is predicting. Wouldn't showing that cell-cell relationships are preserved require showing that the velocity signals from multiple tasks can be fed to the same grid cell model simultaneously and use the same grid? Only one task is shown. Or is this claim presupposed by the grid cell model? In which case it wouldn't be a prediction.
3. **Orthogonality of velocities in triplet:** Line 163. The model computes velocity v12 from (i1, i2), then regresses g(i2, v12) to i3. But the actual velocity v23 between i2 and i3 could be quite different from v12, and in the worst case it is orthogonal. So would this introduce a systematic error? Or maybe its an assumption that velocity doesn't change too quickly along a path, in which case this assumption should probably be made explicit.
4. **Missing limitations:** I didn't find limitations in the Discussion section as indicated in the Checklist. Some limitations in particular that I'd like to see addressed are:
(a) The biological plausibility of the proposed method, especially if this is meant to describe brain mechanisms for generating abstract velocities.
(b) How naturalistic is the data collection process that requires closed-loops of random walks?
**Presentation Clarity**
5. **Missing grid cell model details:** Line 254, Is the grid cell model described in the paper/appendix? I couldn't find it or a reference to another work.
6. **Missing data generation details:** Line 153, It will be important to have these details on how the synthetic data was generated.
7. **Network vs circuit model:** Line 62 and elsewhere, I would suggest you rename this a "neural network" model, not a "neural circuit" model. The proposed SSL architecture is a deep learning style model. Typically, I think it's best practice to use "circuit" only if you are willing to provide specific commitments between parts of your model and the underlying neural circuit mechanisms in the nervous system. Meanwhile, "network" is less strict.
8. **Missing section number:** Line 207, Experimental Results should be differently numbered than Methods.
Technical Quality: 3
Clarity: 4
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I didn't find limitations in the Discussion section as indicated in the Checklist. See above for particular questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed feedback and for raising excellent questions. We will try to address all questions one-by-one.
> **Figure 4 var/axes and Figure 3 isotropy equation**
Thank you for the suggestions! We will make the necessary edits to the figure.
> **Dimensionality reduction**
Thank you for raising this point. Yes, our method is indeed a dimensionality reduction method. Our model estimates low-dimensional velocities between high-dimensional states, which can then be integrated to determine the low-dimensional representation of a given state. Dimensionality reduction methods typically apply over an ensemble of states, with the low-dimensional representation of a single state examined in relation to the low-dimensional representation of other states. Our method directly constructs these relative relationships by extracting low-dimensional velocities. As an example, we have generated a new figure (Fig. R3 in the attached PDF), showing states generated from a trajectory in the Moving Blobs environment. Through integration of our model-outputted velocities, the high-dimensional states within this trajectory fall on a plane (consistent with the data, since it can be minimally described through transitions in $\mathbb{R}^2$). PCA of the same set of states fails to capture this low-dimensional description, with ~24 principal components necessary to capture >95% of the variance.
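The spirit of this PCA comparison can be reproduced with a small sketch (our illustration with assumed parameters, not the paper's actual Moving Blobs environment): images of a Gaussian blob whose center moves in a 2D latent space need far more than two principal components to reach 95% variance, even though the underlying transitions are two-dimensional:

```python
import numpy as np

rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(-4, 4, 16), np.linspace(-4, 4, 16))

def blob(a, b, width=1.0):
    # High-dimensional state: a flattened 16x16 image of a blob at (a, b).
    return np.exp(-((xs - a)**2 + (ys - b)**2) / (2 * width**2)).ravel()

# States sampled from the 2D latent space of blob centers.
centers = rng.uniform(-2, 2, size=(200, 2))
X = np.stack([blob(a, b) for a, b in centers])
X = X - X.mean(axis=0)

# PCA via SVD: count the components needed for >95% of the variance.
s = np.linalg.svd(X, compute_uv=False)
var = s**2 / (s**2).sum()
n95 = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
print(n95)  # well above the 2 latent dimensions
```

Because the blob-translation manifold is curved in pixel space, linear PCA spreads the variance over many components, whereas velocity integration recovers the 2D description directly.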
> **Preservation of cell-cell relationships**
The key prediction of our model is that the cell-cell relationship between two cells is preserved across task modalities and across abstract cognitive domains. To simplify our explanation, we consider a grid cell model with a single grid module. Preserving cell-cell relationships does not require the model to _simultaneously_ receive inputs into the same grid module from multiple tasks. The only requirement is that the estimated velocities from each task are fed into the module, but not at the same time.
The question of cell-cell relationships is whether, if two cells fire in an overlapping way in one mapped space/domain, they continue to do so in other domains; and if they fire in a non-overlapping way in one domain, whether they continue to be non-overlapping in all other domains (cf. Yoon et al. 2013 Nat Neuroscience). Our hypothesis is that the brain generates velocity signals from these different domains and feeds these signals into a fixed, prestructured grid cell circuit. Following the structure of the continuous attractor model assumed for the grid cells, we observe that if two grid cells are co-active for one task, they remain co-active for all other tasks. While this claim relies on the underlying grid cell model, it is not presupposed by it: preservation of cell-cell relationships follows from our hypothesis that low-dimensional velocities can be extracted from abstract domains, and thus the same continuous-attractor-based grid module can perform integration across domains.
> **Orthogonality of velocities in triplet**
As discussed briefly in L127-129, we assume that $v_{23} = v_{12}$ and ask the model to predict an $i_3$ solely to demonstrate that the model does not memorize the image features of $i_2$ through some encoding in the predicted velocity / latent space. We will clarify this in our paper. We demonstrate that this assumption is not necessary for model training in Fig. R2 in the attached PDF. Here, we retrain our network on the Moving Blobs task with trajectories that vary in velocity randomly at each time-step. Here we only predict an estimated $\hat{v}$ from $i_1$ and $i_2$, and then predict $i_2$ from $\hat{v}$ and $i_1$. We show that qualitatively and quantitatively, the obtained results remain very similar to the training paradigm we previously employed.
> **Missing limitations: the biological plausibility of the proposed method**
This is a great question. We believe our core loss framework is biologically plausible. The two critical losses we propose are the next-state prediction loss and the loop-closure loss: (1) The next-state prediction loss may occur through sensory systems computing an error that describes the difference between true and predicted sensory input; (2) The loop-closure loss can be computed through a neural integrator circuit such as the grid cell system. Experimental results show that grid cells process these abstract domains, so it is plausible that they provide some error / feedback towards the accurate mapping / organization of these spaces. We include further discussion related to this plausibility in text.
In addition to the critical losses, we also used two auxiliary losses. The biological plausibility of these losses is not as clear as the critical loss terms; however, we note that they primarily aided in refinement of the obtained representations and were not crucial to our results (as demonstrated in our ablation results). We also fully agree that the optimization process we use (stochastic gradient descent and backpropagation) is not a process supported by biology. We will add these limitations to our discussion section.
---
Rebuttal 2:
Title: Rebuttal, Part 2
Comment: > **How naturalistic is the data collection process that requires closed-loops of random walks?**
Thank you for raising this question.
While our models were trained with data consisting of loops, this training paradigm was for convenience (L179-181) and is not a necessity of the model. If arbitrary random walks were selected instead for training samples, self-intersections would automatically lead to loops within sub-sequences of the walks (as detected by $k<m$ with $i_k=i_m$) — these loops could then be used for the loop-closure loss, with all other losses applied on all loop and non-loop trajectories. As a simple proof of concept, we generated a dataset of arbitrary trajectories with 50% of the trajectories as loops and the remainder as non-loops. We applied loop closure loss on trajectories that formed loops and all other losses on the entire dataset of trajectories. Fig. R1 in the attached PDF demonstrates this paradigm for the 2D Stretchy Bird environment, where we continue to get consistent velocity representations with low transformation error to the true velocity distribution.
More generally, we note that the loop-closure loss can also work on trajectories that are “almost loops” – i.e., with a prefactor term that scales with how close a trajectory is to forming a closed loop. However, running this analysis and tuning these parameters is beyond the scope of the current rebuttal period, and we will examine these results for the camera-ready version of our paper.
> **Missing grid cell model details**
Thank you for raising our attention to this. We provide here a brief description of the grid cell model used and will include a more detailed description in the appendix of the revised paper.
We use an approximation of a continuous attractor model for a module of grid cells: we simulate patterned activity on a lattice of neurons as the sum of three plane waves, resulting in a hexagonal pattern of activity. Input velocities are used to update the state of the activity on the lattice of neurons through updating the phases of the plane waves, leading to accurate integration of the input velocities.
> **Missing data generation details**
Thank you for raising our attention to this as well! We will provide additional details about the synthetic data generation in the appendix section. We briefly touched upon the training / testing set sizes and velocity distributions from which each synthetic environment was generated in the appendix, but we will add more details. We plan to include relevant details about the construction of the training / test set (e.g., how we generated random loop trajectories) and statistics about the environment states (e.g., size of image frames, maximum velocity allowed between consecutive states, etc.).
> **Network vs. circuit model; Missing section number**
Thank you for the excellent suggestions. We will change the terminology used to describe our model and fix the section numbering in our paper.
---
Rebuttal 3:
Comment: Thank you for your rebuttal comments, additional results, and manuscript improvements!
I urge you to clarify the "cell-cell relationships" discussion more in the paper, since the language can be a bit opaque, particularly for the non-grid-cell community. Maybe it would be good to discuss how this prediction could have been falsified (e.g. is it only if you weren't able to learn a good velocity signal to feed into the assumed grid cell model?). Also, in future work, it would be valuable to address the biological plausibility issues to make this a better model of the brain.
Overall, your rebuttal has addressed my feedback, which did not raise major concerns. I am happy to maintain my original high score and support this paper's acceptance.
---
Rebuttal Comment 3.1:
Comment: Thank you for the comments. We will certainly improve the discussion on grid cells and our predictions in the paper. We will ensure that the novelty of our work and predictions in the context of grid cells will be made more apparent to people outside the grid cell community.
Discussion of how our predictions can be falsified is an excellent idea, and we will be sure to include this. To restate from earlier, we predict that if two grid cells are co-active for one task, they remain co-active for all other tasks. Further, if they fire in a non-overlapping way in one domain, they will continue to be non-overlapping in all other domains. This is critically dependent on the following being true: (1) that low-dimensional velocities can indeed be extracted from abstract domains, which we demonstrate through our learning framework, and (2) that the _same_ continuous-attractor-based grid module can perform integration across domains.
This prediction may thus be falsified if the brain were to use _distinct_, independent grid cell modules to organize information from different modalities. It would also be falsified if the grid cells are not modeled by a pre-structured continuous attractor network which would function independently of the nature of the inputs. We will include and elaborate on the falsifiability of our predictions in the revised text.
We are certainly interested in constructing more biologically plausible architectures and learning rules that can perform the computation described in our work, but as correctly noted, this would fall outside the scope of our current work.
Thank you for your score and for supporting our paper's acceptance. We hope that you agree that our paper has improved, given your detailed feedback and suggestions. Please let us know if there are any additional questions that we may be able to address to consider a re-evaluated score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful and detailed review of our paper and their overwhelmingly positive feedback. In this general response, we address some common themes across the reviews. We will additionally provide detailed answers to each reviewer in subsequent comments. To go with this rebuttal, we provide a 1-page PDF with additional figures (labeled Fig. R1-R4).
All reviewers highlighted the importance and relevance of the problem, as well as the novelty of the proposed method. Reviewers also noted that our work was potentially impactful to both neuroscience and machine learning communities, found the model design to be simple yet elegant, the experiments and ablations to be thorough, and the writing and figure design to be excellent.
Two common critiques emerged across reviews: whether our training data consisting only of loops was justified, and whether our model is truly a dimensionality reduction method and should be compared with such methods.
To address these issues and other questions raised by reviewers, we run new analyses in this rebuttal that we include in the attached PDF. Specifically:
1. To show that training purely on loops is not necessary, we find relatively unchanged results from our model when trained on data consisting of both random walks (non-loops) and random loops ($\textbf{Fig. R1}$). We provide more details in our response to $\textbf{sK54, SMXS, SEuj}$.
2. To further demonstrate that our model can truly be considered a dimensionality reduction method, we show the low dimensional representations of an ensemble of states from our 2D Moving Blobs environments. The low-dimensional representations from our model occupy a 2D subspace, reflecting the true data structure, unlike a PCA embedding that we show for comparison ($\textbf{Fig. R3}$). We provide more details in our response to $\textbf{sK54, SMXS, AXig}$.
3. To show how our model deals with boundary effects, we analyzed a trained model on a 2D Stretchy Bird task. Interestingly, the model’s encoder can continue to generate state-independent velocities, but the model’s decoder is state dependent and does not produce state predictions that lie beyond the implicit boundaries imposed by the training data ($\textbf{Fig. R4}$). We provide more details in our response to $\textbf{SEuj}$.
We sincerely thank the reviewers for their extensive feedback and believe that our manuscript has significantly improved as a result. We hope you agree that our proposed revisions and new analyses have enhanced our work and that our research can lead to a positive impact on both the neuroscience and machine learning communities.
Pdf: /pdf/653f64a2b74b42b4b54dee5bf5e998910077badd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Convergence of $\text{log}(1/\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis | Accept (poster) | Summary: The paper provides smoothed analysis results for four gradient-based algorithms for two-player zero-sum games: OGDA, OMWU, EGDA and IterSmooth. The considered setting assumes that the payoff matrix is perturbed by noise, where each element of the noise matrix is i.i.d. Gaussian $\mathcal{N}(0, \sigma^2)$. In this setting, the linear convergence rate $O(\log(1/\epsilon))$ of three algorithms, OGDA, EGDA and IterSmooth, is proved to hold with probability $1- \frac{1}{mn}$, where $m$ and $n$ are the dimensions of the action spaces of the two players. The proofs rely on a key insight that with high probability over the randomness of the noise, the payoff matrix (and hence the game) satisfies a particular error bound that gives rise to the $\log(1/\epsilon)$ convergence rate.
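The smoothed setting summarized above can be illustrated with a minimal sketch (our own, not the paper's code; the step size, horizon, and base game are assumptions): add i.i.d. Gaussian noise to a payoff matrix and run an optimistic multiplicative-weights (OMWU) update, tracking the duality gap of the average iterates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Base payoff matrix (rock-paper-scissors) perturbed by i.i.d. N(0, sigma^2)
# noise, as in the smoothed-analysis model.
sigma = 0.1
A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
A = A + rng.normal(0.0, sigma, size=A.shape)

def duality_gap(x, y):
    # For min_x max_y x^T A y over the probability simplices.
    return (x @ A).max() - (A @ y).min()

eta, T = 0.1, 2000
x = np.full(3, 1 / 3)
y = np.full(3, 1 / 3)
gx_prev, gy_prev = A @ y, A.T @ x  # previous-round gradients for optimism
x_sum, y_sum = np.zeros(3), np.zeros(3)
for _ in range(T):
    gx, gy = A @ y, A.T @ x
    # Optimistic updates use the extrapolated gradient 2*g_t - g_{t-1}.
    x = x * np.exp(-eta * (2 * gx - gx_prev))
    y = y * np.exp(eta * (2 * gy - gy_prev))
    x, y = x / x.sum(), y / y.sum()
    gx_prev, gy_prev = gx, gy
    x_sum += x
    y_sum += y

gap = duality_gap(x_sum / T, y_sum / T)
print(gap)  # nonnegative, and small once the dynamics equilibrate
```

The duality gap is always nonnegative and vanishes exactly at a Nash equilibrium; the paper's question is how fast such gradient-based dynamics drive it toward zero on noise-perturbed instances.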
Strengths: The paper offers novel techniques to analyze the reliability of existing gradient-based algorithms. These new techniques sidestep the difficulty on condition numbers-like quantity in previous analyses, and instead focus on the geometric characteristics of (the equilibrium of) the game. This seems highly interesting and significant.
Weaknesses: I find the writing hard to understand. Theorems, lemmas and technical results are often discussed before they are explained and/or formally defined. In particular, the definition of the matrix $Q$ in Equation (5) is quite confusing, as it is not worded as a definition at all. The vectors $b, c$ and the value $d$ also seem to come out of nowhere.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the presentation of section 3.2, the matrix $Q$ (and subsequently $\tilde{x}$ and $\tilde{y}$) is defined based on a particular pair of indexes $(i, j) \in B \times N$. Are all subsequent claims hold for all $(i, j)$?
2. Is there a reason for the missing proof of claim C.2?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The work is of mathematical nature and has no societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and service.
*In the presentation of section 3.2, the matrix $Q$ is defined based on a particular pair of indexes $(i,j) \in B \times N$. Are all subsequent claims hold for all $(i,j)$?*
Matrix $Q$ is indeed defined with respect to a certain pair $(i,j) \in B \times N$, and all subsequent claims hold for any such $(i, j)$.
*Is there a reason for the missing proof of claim C.2?*
The proof of Claim C.2 is in the paragraph just before Claim C.2. We are happy to move the proof to after the claim if the reviewer believes the current version can cause confusion.
*The definition of the matrix $Q$ in Equation (5) is quite confusing as it is not worded as a definition at all. The vector $b, c$ and value $d$ also seem to come out of no where.*
Those quantities are defined in Eq. (10) of the appendix. Their exact definition is not important for the purpose of the main body, but we will make sure to include Eq. (10) in the main body of the revision as we see how the current version can cause some confusion.
Let us know if we can further make the writing easier to understand.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I will keep my score. | Summary: The paper is concerned with studying the convergence of some state-of-the-art gradient-based algorithms for solving zero-sum games. For these algorithms, it is known that in the worst case, the number of their iterations grows polynomially in 1/e, where e is the error bound.
The paper shows that for many of the aforementioned algorithms, their smoothed complexity is polynomial in the sense that their number of iterations grows polynomially in log(1/e).
Strengths: 1. The paper studies state-of-the-art algorithms that have significant importance in ML applications.
2. The paper provides meaningful justifications as to why some of the gradient-based algorithms for zero-sum games perform well in practice.
Weaknesses: No noted weaknesses. This seems to be a very solid paper that's very relevant to the scope of NeurIPS.
Technical Quality: 3
Clarity: 3
Questions for Authors: No questions.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and service. | Summary: This paper performs a smoothed analysis for zero-sum games. Existing convergence rate guarantees of gradient-based algorithms often depend on condition-number-like quantities, which can be exponential in dimension. This paper shows that for the average case or smoothed case (as opposed to the worst case), the error condition constant can be polynomial in dimension. This validates the practically observed success of gradient-based algorithms in solving zero-sum games.
The paper also discusses the relation between $\delta$-stable games and the error-bound condition constant.
Strengths: The main strength of the paper is Theorem 1.4, which shows that, in the smoothed case, the error-bound coefficient is polynomial in the problem dimensions. This deals with an important problem and bridges the gap between theoretical rates and the real-world success of gradient-based algorithms for solving zero-sum games.
The paper is easy to read.
Weaknesses: 1. The paper lacks adequate comparison with existing literature. There has been considerable research on smoothed analysis of optimization problems (see [1] for example). Since the standard (worst-case) analysis of gradient-based algorithms like OGDA, EGDA, etc. for solving (1) is quite similar to the analysis of gradient-based algorithms for solving constrained convex optimization problems, I expect the novelty required to deal with the smoothed version of (1) to be pretty similar to the established techniques for smoothed analysis of constrained convex optimization problems. If this is not true, I request that you provide a detailed discussion delineating the two scenarios and highlighting the novelties required in this paper on top of smoothed analysis of convex minimization problems.
2. Convergence in terms of Euclidean distance instead of duality gap should be less emphasized as contribution/novelty, as these types of results are already known for gradient-based algorithms.
Minor Suggestions:
3. The paper abstract and possibly the title should reflect the fact that Theorem 1.4 is the main message of the paper (instead of the convergence rates), as all the other results of the paper readily follow from Theorem 1.4.
4. An example where the constant $\kappa$ depends exponentially on $m,n$ could be helpful.
Reason for my score: Theorem 1.4 seems to be the main contribution, the proof of which mostly requires a change of variables and well-known concentration results for Gaussian random variables. This result, although very nice by itself, seems a little inadequate for a NeurIPS-level paper. But I am on the fence here and may change my score depending on the answer to the first question under the Weaknesses section.
[1]Cunha, Leonardo, Gauthier Gidel, Fabian Pedregosa, Damien Scieur, and Courtney Paquette. "Only tails matter: Average-case universality and robustness in the convex regime." In International Conference on Machine Learning, pp. 4474-4491. PMLR, 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weakness section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and service.
*The paper lacks adequate comparison with existing literature. There has been considerable research on smoothed analysis of optimization problems (see [1] for example).*
We will make sure to cite and discuss [1] (and some of the references therein) in the revised version. There are many crucial differences between our results and [1].
- First, [1] focuses on certain convex quadratic problems while we examine zero-sum games.
- The perturbation model between our paper and [1] are also different, as the latter has an average-case flavor (parameterized by the concentration of eigenvalues of a certain matrix) while we work in the usual smoothed complexity model of Spielman and Teng.
- Our main result establishes an iteration complexity scaling with $\log(1/\epsilon)$, while [1] obtains a complexity polynomial in $1/\epsilon$ (unless there is strong convexity, which is not the case in our problem).
- Moreover, the algorithms considered in [1] are distinct from ours, although they are also gradient-based, and the techniques are also very different. In particular, [1] assumes (see Problem 2.1 in that paper) that the underlying randomization is independent of the optimal solution. On the other hand, as we stress throughout our paper, the fact that in our setting the equilibrium depends on the randomization is the main technical challenge.
Overall, the technical challenges we faced are very different from the ones in [1], and we believe that the papers are generally orthogonal. We will include the above discussion in the revision.
*Convergence in terms of Euclidean distance instead of duality gap should be less emphasized as contribution/novelty as these types of results are already known for gradient-based algorithms*
The contribution/novelty of our results in terms of Euclidean distance is that they provide the first polynomial bounds in the smoothed complexity model. Many results in terms of Euclidean distance were known, as the reviewer points out and as we highlight throughout the paper, but they had to rely on exponentially large constants. We believe that this improvement, which is obtained in conjunction with those known results, is worth highlighting as an important contribution.
*The paper abstract and possibly title should be reflective of the fact that Theorem 1.4 is the main message of the paper (instead of convergence rates) as all the other results of the paper readily follows from Theorem 1.4.*
We will expand the abstract to reflect the fact that Theorem 1.4 is the main technical contribution.
*An example where the constant $\kappa$ depends exponentially on $m, n$ could be helpful.*
Starting from the $3 \times 3$ example of Proposition 3.1, one can make $\kappa$ inversely exponential in $m, n$ by suitably selecting $\gamma = \gamma(n, m)$ (and filling the rest of the matrix according to the identity matrix).
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer again for the valuable feedback. Given that the discussion period comes to an end tomorrow, we wanted to make sure that our response above adequately addressed the reviewer's concern regarding comparison with prior work. | Summary: This paper studies smoothed analysis of gradient-based algorithms for computing equilibria in zero-sum games. In general, regret minimization can be used to compute an $\epsilon$-equilibrium in a number of iterations polynomial in $1/\epsilon$. If inverse polynomial or better precision in $\epsilon$ is desired, this iteration count becomes prohibitively large. In this case LP solvers based on interior point methods can be used, which have iteration counts of $\log(1/\epsilon)$, but high per-iteration complexity and memory requirements. This paper shows that several standard gradient-based algorithms converge to an $\epsilon$-approximate equilibrium in a number of iterations polynomial in $\log(1/\epsilon)$ in the setting of smoothed analysis. In particular the runtime also depends polynomially on $1/\sigma$, where $\sigma$ is the variance of the smoothing noise.
The general approach taken in this paper is to prove that, for a smoothed zero-sum game, the duality gap of a pair of strategies is with high probability at most polynomially (in the size of the game $nm$ and the noise variance $\sigma$) smaller than the $\ell_2$ distance to the equilibrium strategy pair. Existing analysis of several standard gradient descent methods then imply convergence to an $\epsilon$-equilibrium in time polynomial in $nm,1/\sigma,\log(1/\epsilon)$.
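In symbols, the structural property at the heart of this approach is roughly the following (my notation, not the paper's; $\mathcal{Z}^\star$ denotes the set of equilibrium strategy pairs):

```latex
\mathrm{Gap}(x, y) \;\geq\; c \cdot \operatorname{dist}\!\big((x, y), \mathcal{Z}^\star\big),
\qquad
c \;\geq\; \frac{1}{\operatorname{poly}(n, m, 1/\sigma)}
\quad \text{with high probability over the smoothing noise.}
```

Driving the duality gap to $\epsilon$ then also drives the $\ell_2$ distance to equilibrium to $\operatorname{poly}(n, m, 1/\sigma)\cdot\epsilon$, which the existing analyses of the standard gradient methods convert into $\log(1/\epsilon)$ iteration bounds.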
Strengths: 1. This paper provides, to my knowledge, the first theoretical justification for using gradient-based methods as equilibrium solvers in zero sum games in the high-precision regime, i.e. where $\epsilon$ is inverse polynomial in the size of the game.
2. The approach via smoothed analysis is quite generic: it revolves around structural properties of the game relating distance to the equilibrium and the duality gap. This structural property appears in the analysis of several gradient-based algorithms, and so the results can be applied to directly obtain $\log(1/\epsilon)$ convergence to $\epsilon$-equilibria.
Weaknesses: The paper relies heavily on techniques for smoothed analysis of linear programming developed by Spielman and Teng. This is in some sense to be expected due to the strong connections between zero-sum games and linear programming. As the authors point out, the smoothed analysis of gradient-based methods is well-known in the unconstrained min-max setting, and the main problem is that for zero-sum games the strategies must be constrained to be probability distributions.
Clarity:
Line 260: The introduction of $Q$, $d$, and $c$ here as implicitly defined by equation (5) was not very illuminating. Going to the appendix to see the full definition eventually resolves this, but this part could be better written.
Line 774 and 776: Refs to Definition 3.1 should probably be refs to Theorem 3.6.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you clarify precisely how gradient based methods in the smoothed setting improve over, say interior point methods which also achieve $\log(1/\epsilon)$ iteration complexity? Probably you can claim lower space requirements, but is there anything else to say?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and service.
We also thank the reviewer for pointing out the issues with clarity. We will make sure to incorporate the suggestions in the revision.
*Could you clarify precisely how gradient based methods in the smoothed setting improve over, say interior point methods which also achieve $\log(1/\epsilon)$ iteration complexity? Probably you can claim lower space requirements, but is there anything else to say?*
The two main aspects on which gradient-based algorithms, such as OGD, are more appealing than interior-point methods are the per-iteration complexity and the memory requirements. An algorithm such as OGD requires a single matrix-vector product per iteration; this is nearly linear for sparse matrices, and can be even smaller when the game-matrix is more structured (for example, low-rank). The memory requirements of OGD are also minimal, as the reviewer points out. On the other hand, interior-point methods require solving a linear system in each iteration, which can be prohibitive in large games. In those regards gradient-based algorithms are more compelling.
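To make the per-iteration cost concrete, below is a minimal sketch of projected OGD with optimism (OGDA) on the simplex. This is illustrative only: the game matrix, step size, and iteration count are our choices for this example, not values from the paper. Note that each iteration requires just one matrix-vector product per player.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def ogda(A, eta=0.1, iters=2000):
    """Projected optimistic gradient descent-ascent for min_x max_y x^T A y."""
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(iters):
        gx, gy = A @ y, A.T @ x          # one matrix-vector product per player
        x = project_simplex(x - eta * (2 * gx - gx_prev))   # min player descends
        y = project_simplex(y + eta * (2 * gy - gy_prev))   # max player ascends
        gx_prev, gy_prev = gx, gy
    return x, y

# Small illustrative game; its unique mixed equilibrium is x* = y* = (1/3, 2/3).
A = np.array([[3.0, -1.0], [-1.0, 1.0]])
x, y = ogda(A)
gap = np.max(A.T @ x) - np.min(A @ y)    # duality gap at the last iterate
```

The last-iterate duality gap shrinks geometrically here, illustrating the $\log(1/\epsilon)$ behavior, while each step only touches `A` through `A @ y` and `A.T @ x`; for sparse or structured matrices those products are nearly linear-time, in contrast to the linear-system solves of interior-point methods.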
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I will maintain my score and continue to recommend acceptance. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors apply the smoothed analysis framework predominantly studied by (Spielman and Teng'04) to some common sequential algorithms for learning the Nash equilibria in zero-sum bimatrix games. To this end, they look at EGDA, OGDA, OMWU and IterSmooth. They look at these algorithms that have known last-iterate convergence properties, albeit with game dependent constants or do not have linear rates in the worst case. Their main techniques are to show important constants and condition numbers have polynomial dependence on the game dimensions and the noise parameter.
Strengths: This is one way to understand the average-case performance of some common algorithms for solving bimatrix games. Their results instill some confidence in these algorithms, as their average-case performance is polynomial in the dimensions and exhibits linear last-iterate behavior, except for OMWU, which perhaps indicates some intrinsic intractability, since the best obtainable constants are still game-dependent!
Weaknesses: 1) I think the authors could perhaps provide more justification for using smoothed analysis in this context. In the standard sense, it is about understanding perturbations of worst-case instances (e.g., for the simplex method). For games, we often do not clearly know the worst-case game matrices for certain algorithms.
2) Some assumptions related to perturbations of the matrix $A$, that is, in Definition 4.1, could be restrictive, in that the support of the equilibrium doesn't change. Can the authors comment on whether this can be relaxed to a certain extent?
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors comment if the smoothed analysis could give insights going beyond bimatrix games, such as the performance of the algorithms studied in this paper but for convex-concave settings or for low-rank bimatrix non-zero sum games?
Minor:
Typo on page 8, line 395: "...of the error bond" -> "...of the error bound".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and service.
*Can the authors comment if the smoothed analysis could give insights going beyond bimatrix games, such as the performance of the algorithms studied in this paper but for convex-concave settings or for low-rank bimatrix non-zero sum games?*
This is an interesting question. Beyond (two-player) matrix games, perhaps the most natural next step would concern polymatrix zero-sum games, for which we suspect that similar results should apply. For general convex-concave games, it is not clear to us how one should even define the perturbed instance, unless the underlying function is more structured. When it comes to nonzero-sum bimatrix games, even under a low-rank constraint computing Nash equilibria is hard (unless the rank is 1), and so iterative algorithms instead converge in a time-average sense to a so-called coarse correlated equilibrium (CCE). It is unclear whether the framework of smoothed complexity can lead to meaningful improvements when it comes to convergence to CCE. On the other hand, it seems plausible that our results can be extended to rank-1 games.
*Some assumptions related to perturbations of the A matrix, that is in definition 4.1 could be restrictive, in that the support of the equilibrium doesn't change. Can the authors comment on whether this can be relaxed to a certain extent?*
There are other natural notions of perturbation stability one could consider in the context of Section 4; for example, that the equilibrium of the perturbed game must be $\delta$-close to the equilibrium of the original game. It is not clear whether a result such as Theorem 4.2 applies to that notion, but we believe this is an interesting question for future work. One technical challenge we should point out here is that our characterization of Theorem 3.6 does not handle multiplicity of equilibria, which appears to be relevant for other notions of perturbation stability.
*I think the authors could perhaps provide more justification for using smoothed analysis in this context.*
Besides the usual justification for performing smoothed analysis in the context of linear programming and optimization, which applies to zero-sum games as well, we believe that in game-theoretic problems smoothed complexity is even more relevant since there is often noise/imprecision when specifying the players’ utilities. This puts into question some worst-case pathological examples, such as the one specified in Proposition 3.1. Those are the type of examples for which algorithms such as OGD perform poorly in practice.
Finally, we thank the reviewer for spotting the typo.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their rebuttal and I will maintain my existing positive score. | null | null | null | null | null | null |
HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models | Accept (poster) | Summary: This paper proposes a retrieval-augmented generation (RAG) method that is inspired by the hippocampal indexing theory of human memory to enable longer knowledge storage and efficient knowledge integration over new experiences.
Strengths: 1. This paper's idea is interesting and shows impressive performances.
2. The paper is well-written and the presentation is clear.
3. The metric `Node Specificity` is aligned with the intuition that humans may get better memorization of things that they are seeing over and over again. This is shown in Figure 2 where the logo of "Stanford" grows larger. I think this one is very interesting.
Weaknesses: 1. Extracting triplets from the passage strongly depends on the power of the triplets extracting model, which is a language model in the paper's implementation. I'm concerned that it may lose information when the passage becomes longer. Is this a common strategy in other retrieval methods?
2. Missing citations. Since the paper talks about long-term memory in related work, I believe the following papers may need to be cited.
[1] Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture.
[2] MEMORYLLM: Towards Self-Updatable Large Language Models.
[3] CAMELoT: Towards Large Language Models with Training-Free Consolidated Associative Memory.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent reviewing our paper. We are glad they found our method interesting and enjoyed the `Node Specificity` portion of our methodology. We also appreciate their suggestions and long-term memory references, which we will definitely include in our updated literature review.
- **W1: Extracting triplets from the passage strongly depends on the power of the triplets extracting model, which is a language model in the paper's implementation. I'm concerned that it may lose information when the passage becomes longer. Is this a common strategy in other retrieval methods?**
We refer the reviewer to the general response for an intrinsic and extrinsic length-dependent evaluation of our knowledge graph construction.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank the authors for the responses. I don't have more concerns. | Summary: This paper presents Hippo-RAG, which enables knowledge integration across retrieval results and supports long-term memory with a mechanism that resembles the hippocampal memory indexing theory. Hippo-RAG includes two steps: offline indexing to extract, encode, and index the passages to KG, and online retrieval to extract entities from the query and retrieve the results from the KG. Hippo-RAG outperforms various existing methods on the multi-hop QA benchmarks and is able to perform multi-hop retrieval in a single step.
Strengths: 1. The proposed method is novel, and it resembles the long-term memory mechanism of human beings. The offline indexing stage enables knowledge integration across passages and addresses the path-finding problems in RAG. The pipeline examples can help to better understand the process.
2. The proposed method achieves significant performance improvement from various challenging multi-hop QA benchmarks.
3. The paper provides sufficient details for reproduction.
Weaknesses: 1. The mechanism is great, but the generalization of the method is not good enough. Intuitively, the mentioned mechanism should work for several scenarios that require long-term memory. Still, the proposed method narrows it down and limits the experiments to multi-hop reasoning.
2. The framework's performance depends on the NER ability of LLM. However, [1] demonstrates a large performance gap between prompting LLMs for NER and fine-tuned NER models. The error analysis also indicates that most errors in the system are from NER.
3. The comparison of baselines may not be fair since the parameter sizes of Hippo-RAG are significantly larger than other methods. Contriever and ColBERTv2 achieve comparable results on MuSiQue and HotpotQA, but when combined with HippoRAG, the improvement is insignificant. ColBERTv2 is the second-best model, and the improvement from Hippo-RAG seems incremental on the two datasets.
[1] https://arxiv.org/abs/2304.10428
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can Hippo-RAG work for other tasks that require long-term memory?
2. Why does Hippo-RAG perform significantly better on 2Wiki?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately discussed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for the time and effort they dedicated to reviewing our paper as well as for their comments and questions.
- **W1: The mechanism is great, but the generalization of the method is not good enough. Intuitively, the mentioned mechanism should work for several scenarios that require long-term memory. Still, the proposed method narrows it down and limits the experiments to multi-hop reasoning.**
- **Q1: Can Hippo-RAG work for other tasks that require long-term memory?**
Long-term memory in humans is a remarkably complex and powerful cognitive faculty that forms the basis for our reasoning and decision making. It’s a holy grail for artificial intelligence to acquire a memory mechanism as powerful as humans’, but that’s bound to be a long journey that we as a community have to take, one step at a time. Before HippoRAG, the de facto solution for long-term memory for LLMs, i.e., for exposing a static LLM to new experiences, was RAG (see, e.g., this blog post that has had important impact in shaping this belief and practice: https://lilianweng.github.io/posts/2023-06-23-agent/). However, current RAG lacks many important properties of human long-term memory, such as the ability to store previously learned procedures and to store facts associated only with a particular time or place (episodic memory). HippoRAG focuses on another one of these important properties that current RAG lacks, knowledge integration across experiences, for which the hippocampus is believed to play an important role. For this property, multi-hop QA is the most natural task for evaluation because of its inherent knowledge integration requirement as well as its well-established datasets and baselines.
Can HippoRAG be applied to other tasks that require long-term memory, e.g., agents that need to maintain episodic or procedural memory? Maybe, but more likely new adaptations and innovations will be needed to properly handle new properties of such memories, e.g., handling of their temporospatial attributes. We are very interested in further extending HippoRAG to get even closer to the powerful human long-term memory and handle more tasks that require long-term memory. On the other hand, we also believe that HippoRAG still marks a meaningful and solid step in the long journey of trying to empower AI with the powerful long-term memory mechanism with which humans are equipped.
- **W2: The framework's performance depends on the NER ability of LLM. However, [1] demonstrates a large performance gap between prompting LLMs for NER and fine-tuned NER models. The error analysis also indicates that most errors in the system are from NER.**
We agree with the reviewer that understanding the effect of different NER components on HippoRAG’s performance is important. In order to do this, we compare our standard query NER prompting method with the state-of-the-art NER model UniversalNER [1] below:
| MuSiQue Retrieval | R@2 | R@5 |
| ------------------------------------------ | ---- | ---- |
| HippoRAG (Contriever) w/ UniversalNER-7B | 25.1 | 32.0 |
| HippoRAG (Contriever) w/ GPT-3.5-turbo NER | 41.0 | 52.1 |
We can see that GPT-3.5-turbo outperforms UniversalNER-7B on MuSiQue. We believe this is due to UniversalNER’s preference for a few entity types given its training data. As reported in their paper, the top 1% most frequent entity types account for 74% of all entities produced. Additionally, even though UniversalNER is one of the most flexible supervised NER models available, it still requires defining entity types a priori, in contrast to our prompting methodology.
Nevertheless, as the reviewer mentioned, we acknowledge that query NER issues are still HippoRAG’s largest source of errors. However, as we discuss in Appendix F.1 and F.2, these errors do not come from NER issues per se but rather from the use of NER exclusively to link queries to our KG. For example, in a query such as “When was one internet browser’s version of Windows 8 made accessible?”, the terms that must be extracted are not named entities but common noun phrases like “internet browser”. We believe that finding suitable starting points for graph search is an important avenue for future research.
---
Rebuttal Comment 1.1:
Comment: Thank you for the analyses provided. I raise the rating from 4 to 5 for they address most of my concerns.
My remaining concerns are about W3 experimental comparison.
The settings for RAPTOR are not stated in the paper (i.e., which retriever (SBERT/DPR/BM25) is used for its implementation). In Table 2, ColBERTv2 outperforms the two LLM-augmented methods, and this is not discussed in the paper. These are my concerns, but they are not weaknesses of the paper. I would appreciate it if the authors could include them in later versions.
---
Reply to Comment 1.1.1:
Comment: Thank you for considering our responses and kindly updating your score. Please feel free to let us know if you have any further questions or concerns.
We appreciate your suggestion to include more details about the LLM-augmented baselines. In order to ensure our experiments appropriately represent these baselines, we used the embedding methods which obtain the strongest performance in their respective experiments. Both of these were Sentence Transformer models, `sentence-transformers/multi-qa-mpnet-base-cos-v1` for RAPTOR and GTR (`sentence-transformers/gtr-t5-base`) for the Proposition-izer.
In order to provide a direct comparison with ColBERTv2 as the reviewer suggests, we ran these two baselines using ColBERTv2 as their retrieval component.
| Method | MuSiQue | MuSiQue | 2Wiki | 2Wiki | HotpotQA | HotpotQA |
|---------------------------|----------|----------|---------|---------|----------|----------|
| | R@2 | R@5 | R@2 | R@5 | R@2 | R@5 |
| ColBERTv2 | 37.9 | 49.2 | 59.2 | 68.2 | **64.7** | **79.3** |
| RAPTOR | 35.7 | 45.3 | 46.3 | 53.8 | 58.1 | 71.2 |
| RAPTOR (ColBERTv2) | 36.9 | 46.5 | 57.3 | 64.7 | 63.1 | 75.6 |
| Proposition | 37.6 | 49.3 | 56.4 | 63.1 | 58.7 | 71.1 |
| Proposition (ColBERTv2) | 37.8 | 50.1 | 55.9 | 64.9 | 63.9 | 78.1 |
| HippoRAG (ColBERTv2) | **40.9** | **51.9** | **70.7**| **89.1**| 60.5 | 77.7 |
We find that both baselines obtain stronger performance using ColBERTv2 than their original models. However, they still underperform ColBERTv2 itself in all results except for MuSiQue (R@5), where only the Proposition-izer model outperforms ColBERTv2 and gets 0.8% closer to HippoRAG.
We point out that RAPTOR's poor performance compared to ColBERTv2 demonstrates that their "cluster and summarize" methodology is mostly ineffective for the comprehensive knowledge integration required in these datasets. Finally, we note that the Proposition-izer's unstable performance, especially its poor performance in 2Wiki, illustrates that separating passages based on propositions can sharply diminish a retriever's knowledge integration capabilities.
We will include these results and discussion in the revised version.
---
Rebuttal 2:
Comment: - **W3: The comparison of baselines may not be fair since the parameter sizes of Hippo-RAG are significantly larger than other methods. Contriever and ColBERTv2 achieve comparable results on MuSiQue and HotpotQA, but when combined with HippoRAG, the improvement is insignificant. ColBERTv2 is the second-best model, and the improvement from Hippo-RAG seems incremental on the two datasets.**
In the response below, we present evidence and arguments that we hope will convince the reviewer that our performance improvements are substantial and our experimental setting is sound and fair.
#### **Improvements on MuSiQue**
We first highlight that our method “achieves significant performance improvement from various challenging multi-hop QA benchmarks”, as pointed out by the reviewer when listing our strengths.
HippoRAG improves R@5 on MuSiQue by 5.5% and 2.7% over Contriever and ColBERTv2 respectively, as well as 3.4% F1 score on QA compared to ColBERTv2. Many well-received and highly cited works such as IRCoT [2], Self-Ask [3], ITER-RETGEN [4] and MCR [5] highlight improvements similar to ours.
#### **HotpotQA’s Weaknesses as a Knowledge-Integration Benchmark**
As discussed in the subsection “Single-Step Retrieval Results” (Section 4) of our paper, our lower performance on HotpotQA is mainly due to its lower need for knowledge integration, which stems from existing shortcut signals. This HotpotQA limitation is referenced when constructing MuSiQue [6], but is also explored more deeply in Appendix B of our paper. In bridge multi-hop questions, the query and the second supporting passage should be linked only through a bridge entity. However, as shown in Figure 6, HotpotQA queries are on average as similar to the second supporting passage as to their distractors, instead of less similar as in the other datasets, making that second supporting document easier to detect without knowledge integration.
#### **Measuring the Impact of Improved Knowledge-Integration Directly**
Additionally, when we measure the impact of improved knowledge integration in MuSiQue and 2WikiMultiHopQA, our improvements are even larger than our overall results. In Table 8, we show that HippoRAG’s ability to find all supporting documents is key to its strong performance, obtaining a 6.3% and 38.6% improvement over ColBERTv2 in All-Recall@5, compared to a 2.7% and 21.3% improvement in standard Recall@5.
#### **Our Experimental Setting is Sound and Fair**
Finally, we refer to the reviewer’s concern that our comparison with Contriever and ColBERTv2 might not be fair given their small size compared to LLMs. We understand the reviewer’s apprehension; however, we argue that improving the performance of a system using an LLM and comparing it to the original system is a well-accepted paradigm in AI research. As a specific example, many well-received QA works such as IRCoT [2], Self-Ask [3] and RAPTOR [7] leverage a large LLM in different ways and compare with the original pipeline, which in these papers contains various retrieval methods like BM25, DPR [8] and ColBERTv2. We note that HippoRAG follows this same research paradigm, which is related to the large and growing body of recent work that leverages LLMs to challenge prior state of the art based on smaller models.
As a final remark, we also note that our method outperforms RAPTOR and is comparable with IRCoT (and considerably more efficient). Since both of these methods augment the original retrieval process with outputs from an LLM, they can be seen as HippoRAG’s most direct baselines in that sense.
- **Q2: Why does Hippo-RAG perform significantly better on 2Wiki?**
As discussed in subsection “Single-Step Retrieval Results” (Section 4) of our paper, 2WikiMultiHopQA’s construction is more entity-centric than the other two datasets, making it particularly well-suited for HippoRAG’s design.
To provide some extra context, 2WikiMultiHopQA was created by leveraging the Wikidata KG to determine which entities and relations could be found in Wikipedia passages to create compositional, comparison, inference or bridge-comparison multi-hop questions. Due to this construction process, queries in this dataset include at least one named entity, a characteristic that our methodology can leverage given its previously mentioned reliance on NER to link queries to the KG.
Title: Rebuttal by Authors (Continued)
---
Rebuttal 3:
Title: Rebuttal by Authors (Continued)
Comment: ### References
[1] Zhou et al. (2024). UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition.\
[2] Trivedi et al. (2023). Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions.\
[3] Press et al. (2023). Measuring and Narrowing the Compositionality Gap in Language Models.\
[4] Shao et al. (2023). Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy.\
[5] Yoran et al. (2023). Answering Questions by Meta-Reasoning over Multiple Chains of Thought.\
[6] Trivedi et al. (2022). ♫ MuSiQue: Multihop Questions via Single-hop Question Composition.\
[7] Sarthi et al. (2024). RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval.\
[8] Karpukhin et al. (2020). Dense Passage Retrieval for Open-Domain Question Answering. | Summary: The paper introduces HippoRAG, a retrieval framework inspired by hippocampal indexing theory to enhance large language models (LLMs) in integrating new information. The algorithm is a combination of LLMs, knowledge graphs (KGs), and the Personalized PageRank algorithm. HippoRAG outperforms existing retrieval-augmented generation (RAG) methods in multi-hop question answering (QA) by a significant margin. It achieves comparable or better performance than iterative retrieval methods like IRCoT while being significantly cheaper and faster.
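The Personalized PageRank step mentioned in this summary can be illustrated with a generic power-iteration sketch. Everything below is illustrative, not the paper's implementation: the toy knowledge graph, the entity names, and the damping factor are all assumptions made for the example.

```python
# Minimal Personalized PageRank via power iteration (illustrative sketch;
# the graph, damping factor, and node names are hypothetical, not HippoRAG's).
def personalized_pagerank(adj, personalization, damping=0.85, iters=50):
    nodes = list(adj)
    rank = {n: personalization.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        # Restart mass goes only to the personalized (query) nodes.
        new = {n: (1 - damping) * personalization.get(n, 0.0) for n in nodes}
        for n, neighbors in adj.items():
            if not neighbors:
                continue
            share = damping * rank[n] / len(neighbors)
            for m in neighbors:
                new[m] += share
        rank = new
    return rank

# Toy KG: seeding the walk at a query entity concentrates score on
# entities reachable from it, which is what makes graph-based retrieval work.
graph = {"messi": ["barcelona"], "barcelona": ["spain"], "spain": ["barcelona"]}
scores = personalized_pagerank(graph, {"messi": 1.0})
```

In a retrieval setting, per-node scores like these would then be aggregated over the passages each node appears in to rank documents.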
Strengths: - The idea of converting the corpus to a KG and then running the PageRank algorithm for better retrieval is interesting. The connections to hippocampal memory theory are inspiring.
- The method is shown to be more efficient than the iterative retrieval method IRCoT and can also use IRCoT to further boost performance.
- The experiment is well-executed, showing the improvements of HippoRAG.
- The paper is well-written and easy to follow. Analysis is clear.
Weaknesses: - Some baselines from the KG-LLM for multi-hop QA literature are missing. These could be beneficial for replacing PageRank in an ablation study or for comparison with KGQA on an open-source knowledge graph.
- Some citations are missing (see questions)
Technical Quality: 3
Clarity: 4
Questions for Authors: - Can we evaluate the performance of constructed knowledge graph alone?
- Some citations are missing:
- Park et al. Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models
- Jin et al. Improving embedded knowledge graph multi-hop question answering by introducing relational chain reasoning
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on reviewing our paper. We appreciate their helpful suggestions and the related work they brought to our attention.
- **W1: Some baselines from KG-LLM for multi-hop QA literature are missing. These could be beneficial to replace Page Rank for ablation study or comparing with KGQA on open-source knowledge graph.**
In order to address this important concern, we discuss previous works on multi-hop QA that combine graphs and neural methods by dividing them into two major categories: 1) graph-augmented reading comprehension and 2) graph-augmented retrieval.
#### **Graph-Augmented Reading Comprehension**
In this direction, graphs are used to provide structure to complement textual signals from the retrieved passages and improve QA performance.
Most supervised methods in this category [1,2,3] train a graph neural network (GNN) to introduce hyperlink or co-occurrence graphs into a language model for better QA. For works using LLM prompting, the graphs are usually constructed by extracting triples from retrieved passages, which are then added to the final LLM prompt [4,5,6]. The contributions from these works are mostly orthogonal to our own since graphs are not used in the retrieval process like they are in HippoRAG; however, they could be used in conjunction with our method to achieve complementary improvements.
#### **Graph-Augmented Retrieval**
In the second category, the graph is used to retrieve relevant documents rather than provide structured signals from a previously retrieved set. Many previous works in this direction train a re-ranker to traverse a graph primarily built from hyperlinks [7,8,9,10,11,12].
As far as we know, HippoRAG is the first method that successfully combines a KG and an LLM for retrieval in multi-hop QA without any supervised data or predefined Wikipedia hyperlinks, meaning it can be used in more scenarios than previous methods.
We hope that this discussion helps better contextualize our method and clarify its uniqueness. As discussed above, although much KG-LLM work has been done for multi-hop QA, most of it either does not use graphs in the retrieval process or relies on supervised data. However, as the reviewer mentioned, some KGQA methods are able to leverage an open KG for question answering using LLM prompting. For completeness, we carried out an ablation of our method that leverages our KG using Pangu [13], a state-of-the-art LLM-based KBQA method, for QA.
Pangu obtains less than 0.05% F1 score on MuSiQue, demonstrating that standard KGQA systems cannot perform question answering on our OpenIE KG. We observe that the following challenges, among others, lead to this dramatically low performance:
#### Entity Linking
This is a significant challenge even in settings with a strictly defined KG, making it even more difficult on our KG, where one entity can have several expressions. For example, for the question "When was the person who Messi's goals in Copa del Rey compared to get signed by Barcelona?", many entities are potentially relevant, such as Messi, the Spanish Messi, or Lionel Messi. This diversity makes it difficult to identify which starting point could lead to the answer.
#### OpenIE Coverage of Answers and Relations
Some relations as well as final QA answers are sometimes not present in the KG even if they are in the text. For instance, as we demonstrated in Table 14, this is the case for some complex expressions and numerical attributes.
We will expand our discussion of this literature accordingly in the revised version.
- **Q1: Can we evaluate the performance of constructed knowledge graph alone?**
We refer the reviewer to the general response for an intrinsic evaluation of our knowledge graph construction.
### References
[1] Fang et al. (2020). Hierarchical Graph Network for Multi-hop Question Answering.\
[2] Ramesh et al. (2023). Single Sequence Prediction over Reasoning Graphs for Multi-hop QA.\
[3] Qiu et al. (2019). Dynamically Fused Graph Network for Multi-hop Reasoning.\
[4] Park et al. (2023). Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models.\
[5] Li et al. (2023). Leveraging structured information for explainable multi-hop question answering and reasoning.\
[6] Liu et al. (2024). Era-cot: Improving chain-of-thought through entity relationship analysis.\
[7] Ding et al. (2019). Cognitive Graph for Multi-Hop Reading Comprehension at Scale.\
[8] Zhu et al. (2021). Adaptive Information Seeking for Open-Domain Question Answering.\
[9] Nie et al. (2019). Revealing the Importance of Semantic Retrieval for Machine Reading at Scale.\
[10] Das et al. (2019). Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering.\
[11] Asai et al. (2020). Learning to retrieve reasoning paths over wikipedia graph for question answering.\
[12] Li et al. (2021). HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions.\
[13] Gu et al. (2023). Don’t Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thank the authors for their detailed rebuttal. My concerns are addressed adequately and I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your helpful suggestions and for considering our response. | null | null | Rebuttal 1:
Rebuttal: We thank all of the reviewers for the time and effort they dedicated to reviewing our work; we believe our work will be significantly enhanced by incorporating their suggestions.
We are delighted to know that reviewers found the parallels between our methodology and hippocampal memory indexing theory inspiring. We are also happy that our proposed method was not only deemed interesting and novel but also recognized as achieving impressive performance while being efficient. Lastly, we appreciate the reviewers’ comments on the clarity of our writing, experiments and analysis.
### OpenIE Performance (R1 & R3)
We appreciate both reviewers’ questions concerning the intrinsic performance of our OpenIE methodology as well as its robustness to longer documents. We will address these related concerns below:
- **(R1-Q1): Can we evaluate the performance of constructed knowledge graph alone?**
We first note that our ideal IE output is quite different from that of the conventional ClosedIE or OpenIE settings, which are too constrained and too unconstrained, respectively, in terms of named entities and pre-defined relations. Therefore, to evaluate our KGC performance intrinsically, we extract OpenIE triples manually from a small dataset of 20 passages taken from the MuSiQue training set. Out of these passages, we extract 239 gold triples. We measure the quality of our KGC using the CaRB [1] metrics and compare different LLMs with the supervised closed information extraction model REBEL [2] and IMoJIE [3], a supervised OpenIE method.
| Model | AUC | Precision | Recall | F1 |
|---------------------|-------|-----------|--------|-------|
| gpt-3.5-turbo | 0.465 | 0.684 | 0.552 | 0.611 |
| gpt-4o | 0.544 | 0.675 | 0.650 | 0.662 |
| gpt-4-turbo | 0.571 | 0.724 | 0.662 | 0.692 |
| llama-3-8b-instruct | 0.412 | 0.622 | 0.508 | 0.559 |
| llama-3-70b-instruct| 0.512 | 0.710 | 0.599 | 0.650 |
| REBEL | 0.010 | 0.080 | 0.018 | 0.029 |
| IMoJIE | 0.192 | 0.402 | 0.273 | 0.325 |
We find that larger and newer LLMs perform slightly better than others, but they all outperform Open and Closed IE methods by large margins. This is due to IMoJIE usually extracting large portions of the passages being processed and being unable to perform coreference resolution. On the other hand, ClosedIE models like REBEL fail dramatically because they extract very few entities and relations. Additionally, we refer the reviewer to [4], which shows further evidence that LLMs can compete with supervised OpenIE methods in their training setting without further training.
We will add this important intrinsic evaluation to the camera-ready version of our paper.
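The precision/recall/F1 bookkeeping behind a triple-level evaluation like the one above can be sketched as follows. This is a simplified exact-match version for illustration only; CaRB itself uses token-level matching, and the example triples are hypothetical.

```python
# Illustrative exact-match scoring of predicted OpenIE triples against a
# gold set (simplified; CaRB's actual metric matches at the token level).
def triple_prf(predicted, gold):
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)                     # correctly extracted triples
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: one of two predictions matches one of two gold triples.
gold = [("Messi", "plays for", "Barcelona"), ("Barcelona", "located in", "Spain")]
pred = [("Messi", "plays for", "Barcelona"), ("Messi", "born in", "Rosario")]
p, r, f = triple_prf(pred, gold)  # p = 0.5, r = 0.5, f = 0.5
```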
- **(R3-W1): Extracting triplets from the passage strongly depends on the power of the triplets extracting model, which is a language model in the paper's implementation. I'm concerned that it may lose information when the passage becomes longer. Is this a common strategy in other retrieval methods?**
We first point out that this information extraction methodology has in fact become more common recently [5, 6]; however, it has not been directly applied to retrieval until now.
Second, we present both intrinsic and extrinsic experiments that help us understand the robustness of our OpenIE methods to passage length.
In our intrinsic length-dependent evaluation below, we present the gpt-3.5-turbo OpenIE results on the 10 shortest passages vs the 10 longest passages and find a substantial deterioration of OpenIE performance when extracting from longer passages.
| Data Subset | AUC | Precision | Recall | F1 |
|------------------|-------|-----------|--------|-------|
| 10 Shortest Docs | 0.589 | 0.792 | 0.657 | 0.718 |
| 10 Longest Docs | 0.390 | 0.607 | 0.485 | 0.539 |
In our extrinsic evaluation, we leverage the fact that the MuSiQue dataset contains several passages with the same Wikipedia article title. We therefore combine all passages with the same title into one document, creating longer documents for the OpenIE and DPR models to process. We find that our method outperforms Contriever by margins similar to those in our standard setting, even though the intrinsic knowledge graph quality is likely diminished. This result indicates that HippoRAG’s joint components make it more robust than DPR with regard to longer passages.
#### Original MuSiQue:
| Model | R@2 | R@5 |
|------------|-------|-------|
| Contriever | 34.8% | 46.6% |
| HippoRAG | 41.0% | 52.1% |
#### Longer Document MuSiQue:
| Model | R@2 | R@5 |
|------------|-------|-------|
| Contriever | 32.6% | 41.7% |
| HippoRAG | 45.7% | 57.7% |
We note that the above original results are not directly comparable to the longer document results given that the total number of documents available in the retrieval corpus changed. We include the dataset statistics for reference below:
| Statistic | Original MuSiQue | Longer Document MuSiQue |
|-----------|------------------|-------------------------|
| Count | 11,656 | 9,838 |
| Mean | 79.80 | 94.54 |
| Std | 47.93 | 146.01 |
We will add these length-dependent experiments to our updated paper.
### References
[1] Bhardwaj et al. (2019). CaRB: A Crowdsourced Benchmark for Open IE.\
[2] Huguet Cabot et al. (2021). REBEL: Relation Extraction By End-to-end Language generation.\
[3] Kolluru et al. (2020). IMoJIE: Iterative Memory-Based Joint Open Information Extraction.\
[4] Ling et al. (2023). Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty.\
[5] Park et al. (2023). Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models.\
[6] Li et al. (2023). Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection | Accept (poster) | Summary: The paper presents a Domain-Aware Adapter (DA-Ada) for DAOD based on VLMs, which aims to improve the model performance when applied to an unlabeled target domain. DA-Ada incorporates two types of adapters, including a Domain-Invariant Adapter (DIA) to learn domain-invariant knowledge and a Domain-Specific Adapter (DSA) for injecting domain-specific knowledge. It also includes a Visual-guided Textual Adapter (VTA) to encode the cross-domain visual feature into the textual encoder to enhance detection. Experiments across various tasks show that DA-Ada significantly outperforms existing methods, demonstrating its effectiveness in eliminating the domain gap.
Strengths: 1. The proposed method can significantly improve DAOD performance, surpassing state-of-the-art works by a large margin.
2. The general idea of using an adapter with VLMs to assist DAOD is moderately interesting.
3. The method is simple yet effective.
Weaknesses: 1. Some claims are confusing and lack sufficient justifications.
2. Some designs are similar to existing works and lack sufficient technical contributions.
3. The method is only deployed on Faster RCNN. The adaptability to more advanced detectors is unknown.
4. The paper writing and organization need to improve.
Details are in Questions.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In Line 139, are there any theoretical proofs or experimental justifications to support the claim that low-dimensional features have less information redundancy and are more suitable for domain adaptation? How can you ensure that low-dimensional representation is domain invariant? I don't think that the feature with less information redundancy means domain invariant.
2. For the Visual-guided textual adapter, this design seems to follow CoCoop [1], a generic technique for improving model learning. The difference needs to be clarified.
3. I am confused about the domain-specific adapter. The authors use the residual parts of the visual features to represent the domain-specific information. Is there any proof or literature that can justify the correctness of this assumption?
4. Can this method be applied to more advanced detectors, such as DETR and CenterNet v2?
5. The paper writing and organization need to improve. The authors propose DITA and DSTA but don't provide any information about them in the abstract. Additionally, the method descriptions and Fig.2/3 are not well matched, leading to the reading difficulty. For example, in Line 184, where are DITA and DSTA in Fig. 2(b)?
[1] Zhou, K., Yang, J., Loy, C. C., & Liu, Z. (2022). Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16816-16825).
Considering the significant performance gain of the proposed method, I'm happy to turn to a positive score if concerns are addressed.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comments:
We sincerely thank you for the valuable comments.
We are encouraged to see that our work is recognized as moderately interesting and effective.
We will explain your concerns point by point.
Q1: **In Line 139, ...low-dimensional features have less information redundancy and are more suitable for domain adaptation?**
A1: Thanks for raising an important point.
In fact, it is the combination of the dimension-reduction and dimension-increase processes under the constraint of the task loss that reduces information redundancy and makes the features more suitable for domain adaptation, rather than the low-dimensional features themselves.
In line 139, the low-dimensional features refer to the output of the down-projection layer, which is an intermediate feature of the adapter's bottleneck structure.
The bottleneck structure was first proposed in [2].
It reduces the computational cost and number of parameters through dimensional reduction-increase, efficiently learning feature representation.
In this structure, low-dimensional features function as intermediate vectors: when down-projecting the input into low-dimensional features, some redundant information is discarded; when mapping low-dimensional features back to the original dimension, the task-related features are retained with the constraint of task loss.
In DIA, we first down-project the input features into low-dimensional features $h^L$ then up-project to high-dimensional features $h^I$, optimizing with the adversarial loss $L_{dia}$ and detection loss $L_{det}$.
Therefore, by combining dimensional reduction-increase with the constraints of detection and adversarial loss, we enable DIA to extract domain-invariant features while reducing redundant features.
Experiments show that performance peaks at 57.1% when the bottleneck dimension is 1/2 of the input (Line 2 of Table 7 in the paper), indicating that appropriate dimension reduction can filter redundant features while extracting domain-invariant knowledge.
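The down-project / up-project bottleneck described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions and random initialization are placeholders, and the adversarial loss $L_{dia}$ and detection loss $L_{det}$ that actually constrain DIA are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def bottleneck_adapter(x, d_in=256, d_bottleneck=128):
    """Sketch of a bottleneck adapter: down-project, nonlinearity, up-project,
    residual add. Weights are random placeholders for illustration."""
    W_down = rng.standard_normal((d_in, d_bottleneck)) * 0.02  # down-projection
    W_up = rng.standard_normal((d_bottleneck, d_in)) * 0.02    # up-projection
    h_low = np.maximum(x @ W_down, 0.0)  # low-dimensional features h^L (ReLU)
    h_inv = h_low @ W_up                 # mapped back to the input dimension, h^I
    return x + h_inv                     # residual connection

x = rng.standard_normal((4, 256))        # a batch of 4 feature vectors
out = bottleneck_adapter(x)
```

With `d_bottleneck = d_in // 2`, this mirrors the 1/2 bottleneck ratio that performed best in Table 7.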
Q2: **The difference between Visual-guided textual adapter (VTA) and CoCoop [1] needs to be clarified.**
A2: Thanks for your nice suggestion.
CoCoOp [1] projects image features into input-conditional tokens to lessen sensitivity to class shift.
However, it generates tokens with a meta-net shared across domains, *i.e.* a domain-agnostic projector.
Ignoring the differences between images from different domains, [1] shows limited ability to distinguish inter-domain commonalities and characteristics, which are essential for DAOD.
In contrast, VTA explicitly injects domain-invariant and domain-specific knowledge into tokens.
It consists of a domain-invariant textual adapter (DITA) $P$ and domain-specific textual adapters (DSTA) $P^S, P^T$.
DITA $P$ is shared between domains to encode visual domain-invariant knowledge into prompt tokens, specially optimized by an adversarial loss $L_{dita}$.
The DSTA $P^S$ and $P^T$ are independent across domains to learn domain-specific knowledge with detection loss $L_{det}$ on each domain, respectively.
Moreover, a decoupling loss $L_{dec}$ is also designed to boost the DIA and DSA to learn cross-domain information.
VTA increases the textual encoder's discriminability with cross-domain information and outperforms CoCoOp by 2.3% and 2.2% on the C->F and K->C adaptation tasks, respectively (Table 8 in the paper).
It shows the superiority of our adapter over CoCoOp’s image-conditional prompt.
Q3: **I am confused about...the residual parts... to represent the domain-specific information**
A3:
Thanks for raising an important point.
[3] has studied utilizing residual parts to disentangle domain-invariant and domain-specific representations.
It considers the difference between the input and domain-invariant features as the domain-specific parts.
However, on the one hand, [3] uses the input and output of the whole backbone to compute the difference.
Since the output undergoes far more convolutions than the input, the two differ greatly in semantic level, leading to inaccurate domain-specific knowledge.
On the other hand, [3] only uses domain-invariant features for detection, ignoring the capability of domain-specific knowledge to improve the discriminability of the detector.
To solve this, we introduce the novel domain-specific adapter (DSA).
First, to ensure semantic-level consistency, the two features used for calculating the difference differ only by 3 convolutions.
Second, to ensure DSA learns domain-specific knowledge, we maximize the distribution discrepancy between DIA and DSA.
Moreover, the DSA is adaptively fused with domain-invariant features to further enhance the discriminability.
Experiments show that using residual parts can effectively extract domain-specific knowledge, yielding a 1.5% improvement over not using them (Lines 3 and 5 of Table 6 in the paper).
Moreover, DSA achieves an absolute gain of 4.5%, outperforming the 2.6% of [3].
These results show the effectiveness of the proposed domain-specific adapter.
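The residual decomposition underlying DSA can be sketched in a few lines: the domain-specific part is taken as the difference between a block's feature and the domain-invariant feature produced for it. The arrays below are random placeholders standing in for real encoder activations, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder activations at the same semantic level (the paper keeps the two
# features only 3 convolutions apart to ensure this consistency).
block_feature = rng.standard_normal((4, 256))       # feature entering the adapter
domain_invariant = rng.standard_normal((4, 256))    # DIA output at the same level

# Domain-specific knowledge as the residual part.
domain_specific = block_feature - domain_invariant

# The decomposition is lossless: the two components recombine additively,
# so fusing them back (as DSA does adaptively) can recover the full signal.
reconstructed = domain_invariant + domain_specific
```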
Q4: **Can this method be applied to more advanced detectors?**
A4: Thanks for your concern.
We apply the proposed method to DETR.
The vanilla DETR baseline achieves 41.6% mAP on C->F with domain discriminators.
Equipping the domain-aware adapter brings 5.2% gains, reaching 46.8% mAP.
To further verify the effectiveness, we also extend DA-Ada to DAPL [4] for the domain-adaptive classification task.
It increases accuracy from 74.5% to 77.1% on Office-Home, demonstrating its generalization ability.
Q5: **DITA and DSTA lack information in the abstract...in Line 184, where are DITA and DSTA in Fig. 2(b)?**
A5:
Thanks for the suggestion.
We will supplement the descriptions of DITA and DSTA in the abstract and revise the descriptions of the figures in the manuscript.
For example, the "Fig.2(b)" in Line 184 should be revised as "Fig.2(c)".
[1] Conditional prompt learning for vision-language models
[2] Deep residual learning for image recognition
[3] Vector-decomposed disentanglement for domain-invariant object detection
[4] Domain Adaptation via Prompt Learning
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. Most of my concerns have been addressed. Therefore, I would like to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive feedback. We really appreciate your precious time and valuable comments. | Summary: This paper presents a method to tackle the domain adaptive object detection (DAOD) task within the framework of visual-language models (VLM). The authors propose a Domain-Aware Adapter (DA-Ada) to enhance the visual encoder's ability to learn both domain-invariant and domain-specific features. DA-Ada consists of two components: the Domain-Invariant Adapter (DIA) and the Domain-Specific Adapter (DSA). The DIA learns domain-invariant features by aligning the feature distribution between the source and target domains, while the DSA recovers domain-specific knowledge from the differences between the input and output of the visual encoder blocks. Additionally, the Visual-guided Textual Adapter (VTA) embeds cross-domain information into the textual encoder to improve detection head discriminability. Experiments on various DAOD benchmarks indicate that DA-Ada improves performance compared to state-of-the-art methods.
Strengths: This paper is well-written and easy to follow. The motivation for explicitly learning additional domain-specific features to enhance cross-domain performance is clear and straightforward. The disentanglement of domain-invariant and domain-specific features proposed in the Domain-Specific Adapter, along with the regularization loss in Eq. 15, appears to be effective. The ablation study is comprehensive and detailed, and the experimental results indicate that the proposed model achieves significant performance improvements compared to other state-of-the-art methods across different domain shift scenarios.
Weaknesses: Despite the paper’s strengths, certain aspects of the discussion could be further refined for precision and clarity:
1. The performance of the source-only baseline in Table 3 outperforms (or achieves similar performance to) most of the non-VLM-based models in Table 1. For example, on the Cross-Weather domain shift, the source-only model achieves an mAP of 50.4, whereas methods like AT[35], which are backbone ImageNet-pretrained, show a performance of 50.9, and their source-only variants perform roughly >10 mAP lower. While using a VLM vision encoder to generate essential general knowledge is reasonable, it raises the question of whether this task should still be categorized as cross-domain (domain adaptive) learning or as transfer learning (or VLM-based domain adaptation). Additionally, it would be helpful to add a column in Tables 1 and 2 to indicate the pretraining.
2. It would be beneficial if the authors could also provide the comparisons about the computational speed
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the regression losses used in DA-Ada? Are they the same as those used in [28]?
2. The symbols of down-projectors in Eq.6 and Section 6.7 are different. Could you clarify this discrepancy? Additionally, for the down-up projections, which operate on both the channel dimension (Eq.5) and the spatial dimensions, a clearer explanation would be beneficial.
3. Since DA-Ada modules are added to different visual encoder blocks, the claim in line 139 that “Low-dimensional features have less information redundancy…” seems inappropriate. In my opinion, “low-dimensional” features typically refer to features from the earlier layers. In Section 2, Eq. 5 seems more related to “channel dimension” condensation. Additionally, in “Bottleneck Dimension” in Section 4.3 (also Table 7), it would be helpful if the authors could directly indicate the input channel dimensions.
4. It would be beneficial to add some visualizations for failure cases.
Some minor problems:
1. Typos in line 183-184?
2. Legend missing in Figure 3 (the red symbols)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations are discussed in Sec 6.11.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We sincerely thank you for your comprehensive comments and constructive advice.
We are pleased to see our work being regarded as effective, and the motivation as clear and straightforward.
We will explain your concerns point by point.
Q1: **The performance of the source-only baseline in Table 3 outperforms (or achieves similar performance to) most of the non-VLM-based models in Table 1..., it raises the question of whether this task should still be categorized as cross-domain ...to indicate the pretraining.**
A1: Thanks for the interesting question.
Our research focuses on injecting new domain knowledge into pre-trained VLMs, while protecting the generalization ability of the pre-trained knowledge.
Despite using a strong VLM backbone, our method still achieves significant absolute performance gains.
As shown in Table 10 in the paper, our method achieves an 8.0% improvement on the Cross-Weather adaptation task, surpassing the 7.9% of the SOTA method AT [35].
In addition, to properly evaluate the method, we apply DA-Ada to the weaker non-VLM baseline DSS [59] in Table 17.
With DA-Ada, DSS achieves a 48.1% mAP competitive with SOTA methods and attains a 7.2% improvement, indicating that the proposed DA-Ada performs well even with a weaker non-VLM baseline.
For ease of understanding, we will also add the description of pre-training to Tables 1 and 2.
Domain adaptation (DA) aims to transfer source domain knowledge to the target domain.
Compared with the backbones used in non-VLM DAOD methods, the VLM used in our method differs only in the pre-training data.
Essentially, we design the domain-aware adapter and visual-guided textual adapter to learn knowledge from the source domain and transfer it to the target domain, which follows the general definition of domain adaptation.
Specifically, DA-Ada analyses the source domain data to transfer the general knowledge of the VLM to downstream tasks, that is, the target domain.
In this case, the presence of the VLM makes the problem resemble a combination of traditional DA and source-free DA.
Therefore, it is also appropriate to list our method as a sub-problem of DA, such as VLM-based domain adaptation.
Q2: **It would be beneficial if the authors could also provide the comparisons about the computational speed**
A2: Thanks for your advice.
We have compared computational speed over global fine-tune, the SOTA method DA-Pro [28] and our proposed DA-Ada in Table 18 in the Appendix.
We initialize the three methods with the same VLM backbone.
Attaching only lightweight adapters, our DA-Ada introduces just 0.02s of extra time, about 5% of inference time, while significantly improving performance.
Global fine-tuning has the largest training time overhead but only achieves the lowest performance, indicating the limitations of traditional DAOD methods in optimizing VLM.
Compared with global fine-tuning, DA-Pro significantly reduces training time overhead while improving performance.
Furthermore, DA-Ada significantly improves mAP by 4.9% while using only 6% of the time and 47% of the memory, showing great efficiency in adapting cross-domain information to the VLM.
We will also add some computational time comparisons for non-VLM methods.
| Method |mAP |Inference time(s)/iter |Training time(s)/iter |Total iter |
|-|-|-|-|-|
| Global Fine-tune |53.6 |0.40 |2.67 |25000 |
| DA-Pro[28] |54.6 |0.40 |1.47 |1000 |
| DA-Ada |58.5 |0.42 |1.61 |2500 |
Q3: **What are the regression losses used in DA-Ada? Are they the same as those used in [28]?**
A3: Thanks for your question.
The regression loss is the smooth L1 loss, the same regression loss used in DA-Pro [28].
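For reference, the smooth L1 loss named above can be written as follows; this scalar version (a Huber-style loss with `beta = 1`) is only an illustration of the standard definition, not code from the paper.

```python
# Smooth L1 loss on a single regression residual x:
# quadratic near zero (stable gradients), linear for large errors
# (robust to outliers).
def smooth_l1(x, beta=1.0):
    ax = abs(x)
    if ax < beta:
        return 0.5 * x * x / beta
    return ax - 0.5 * beta

# smooth_l1(0.5) == 0.125 (quadratic region)
# smooth_l1(2.0) == 1.5   (linear region)
```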
Q4: **The symbols of down-projectors in Eq.6 and Section 6.7 are different...a clearer explanation would be beneficial.**
A4: Thanks for pointing this out.
The symbol in Eq.6 and Section 6.7 refers to the same down-projector.
Therefore, we will unify their descriptions and use the same font.
For the down projectors, Eq.5 operates only on the channel dimension.
As a multi-scale version, Eq.6 operates on both channel and spatial dimensions.
For the up projectors, Eq.7 operates only on the channel dimension.
We will provide these details in the manuscript.
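To make the projector description above concrete, here is a minimal sketch (our illustration, not the paper's code) of a channel-only down-projector and up-projector: a 1×1 convolution on a (C, H, W) feature map is just a matrix multiply over the channel axis. The 64→32→64 channel sizes and random weights are assumptions.

```python
import numpy as np

def down_project(x, w_down):
    # contract the channel axis: (C, H, W) -> (D, H, W), a 1x1-conv analogue
    return np.einsum('dc,chw->dhw', w_down, x)

def up_project(h, w_up):
    # expand back: (D, H, W) -> (C, H, W)
    return np.einsum('cd,dhw->chw', w_up, h)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 8, 8))      # input feature map, 64 channels (assumed)
w_down = rng.standard_normal((32, 64))   # 64 -> 32 channels
w_up = rng.standard_normal((64, 32))     # 32 -> 64 channels

h = down_project(x, w_down)              # low channel-dimensional feature
y = up_project(h, w_up)                  # restored channel dimension
```

The multi-scale variant of Eq. 6 would additionally apply spatial convolutions; this sketch only shows the channel-dimension behavior of Eq. 5 and Eq. 7.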
Q5: **“Low-dimensional features have less information redundancy…”...seems more related to “channel dimension” condensation...it would be helpful if the authors could directly indicate the input channel dimensions.**
A5: Thanks for the valuable suggestion.
The "low-dimensional features" in Line 139 refers to the output of down-projection layer $\mathscr{C}^{D}$ in the proposed adapter, which is an intermediate feature of the bottleneck structure.
Since it represents a reduction in channel dimension, we will revise "low-dimensional" to "low channel-dimensional" for better clarity.
As for the “Bottleneck Dimension,” the input channel dimension equals Line 4 of Table 7, *i.e.*, 64, 256, 512, and 1024 for the four DA-Ada blocks, respectively.
We will supplement the explanation of the number of input channels in Section 4.3 and highlight them in Table 7.
Q6: **It would be beneficial to add some visualizations for failure cases.**
A6: Thanks for your suggestion.
We provide some examples of failure cases on the Cross-Weather adaptation scenario in Figure 6 in the Global Rebuttal Part.
We visualize the ground truth (a)(b) and the detection boxes of DA-Ada (c)(d).
In (c.1), DA-Ada misses the car with its headlights on in the fog.
Since the source data (Cityscapes) is collected on sunny days, few cars have their headlights on in the training set.
Therefore, DA-Ada misses such out-of-distribution data.
In (d.1), DA-Ada misses the bicycle and person blocked by other foreground objects.
Since occlusion causes great damage to semantics, this type of missed detection is widely seen in object detection methods.
Q7: **minor problems**
A7: Thanks for pointing this out.
We will correct the typos and supplement the legend in Figure 3.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. While your explanations clarified most points, I still believe that the extensive pre-training on massive data gives the VLM backbone a strong ability to capture domain-invariant features. This shifts the problem away from pure (or conventional) domain adaptation to something more akin to "vision-language data pre-trained domain alignment," which might better reflect the nature of the task, even though the terminology itself may seem a bit clumsy.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive and insightful feedback.
As you suggested, in the era of pre-trained VLM, we do need to rethink the role of VLM in DAOD and explore new paradigms for the task itself.
In future work, we will continue to study this type of VLM-based domain adaptation tasks and investigate how the domain alignment of visual-language pre-training can help detectors adapt across different domains. | Summary: This work focuses on domain adaptive object detection (DAOD) with the vision-language models. The core idea behind this paper is the frozen visual encoder with a domain-agnostic adapter only captures domain-invariant knowledge for DAOD. To this end, this paper proposes a novel Domain-Aware Adapter (DA-Ada) to capture the domain-invariant knowledge and domain-specific knowledge. The experimental results on multiple DAOD tasks show the proposed method clearly outperforms existing DAOD methods.
Strengths: 1. DAOD is an important problem, especially, in the large vision-language era, we need to rethink and explore the new paradigm for DAOD.
2. The proposed method is reasonable, effective transfer performance should maintain both the transferability (domain-invariant) and discriminability (domain-specific) between source and target domains.
3. The experimental results over multiple benchmarks show the effectiveness of the proposed method. Extensive ablation studies have been conducted to investigate the proposed method.
Weaknesses: 1. Although the proposed method is reasonable, it still has limited novelty for the domain adaptation community. The domain-invariant and domain-specific knowledge have been exploited by many previous works.
2. The function of the dec loss is to maximize the distribution discrepancy between DIA and DSA. Why is the cosine similarity calculated between $h_i^I$ and $h_i^I \cdot h_i^S$ instead of between $h_i^I$ and $h_i^S$?
3. Lack of details for VTA: what are the structures of DITA and DSTA? Besides, the text description c should be added in Figure 2 (c), since the CLIP Textual Encoder adopts both the projected visual feature and the textual feature; this would improve clarity.
4. The author should add results of source-only baseline, i.e., RegionCLIP, for a fair comparison.
5. The Injection Operation has many variants in Table 6; can the authors provide some explanation of the difference between $h_i^I + h_i^S$ and $h_i^I + h_i^I \cdot h_i^S$?
6. In line 184, the figure reference should be Fig.2(c) instead of Fig.3(b).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Do we really need a source domain for adaptation when we already have a powerful vision-language detector that contains much general knowledge from large-scale data?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Nothing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We sincerely thank you for your comprehensive comments and constructive advice.
We are pleased to see our work being regarded as reasonable and effective.
We will explain your concerns point by point.
Q1:
**Although the proposed method is reasonable...the domain-invariant and domain-specific knowledge have been exploited.**
A1:
Thanks for your concern.
Recent works [50, 2, 1, 59, 60] propose multiple extractors [38, 36, 57, 61] and discriminators [63, 76] to decouple the domain-invariant and domain-specific knowledge, aiming to disentangle the knowledge unique to each domain.
However, on the one hand, they use only domain-invariant features for detection, ignoring the improvement in discriminability brought by the characteristics of each domain, *i.e.*, domain-specific knowledge.
On the other hand, applying existing DAOD methods to VLMs would overfit the model to the training data, compromising the generalization of pre-trained models.
In contrast, we take advantage of the generalization of VLM to assist in domain adaptation and propose a novel decoupling-refusion strategy.
While preserving the pre-trained essential general knowledge, it adaptively modifies domain-invariant features with domain-specific features to enhance the discriminability on the target domain.
Experiments show that DA-Ada surpasses the SOTA disentangling method [76] (Line 6 of Table 1 in the paper) by 6.4\~9.2\% on three benchmarks, indicating that our method explores the relationship between domain-invariant and domain-specific knowledge from a novel and effective perspective.
Q2: **The function of dec loss...Why is the cosine similarity calculated by h^I and h^I\* h^S instead of h^I and h^I.**
A2:
Thanks for your nice suggestion.
$\mathbf{h}^S$ is the output features of DSA block, expected to extract domain-specific knowledge.
To adaptively fuse the domain-specific with the domain-invariant knowledge, we explore two kinds of injection operations: direct addition $\mathbf{h}^I+\mathbf{h}^S$ and pixel-level attention $\mathbf{h}^I+\mathbf{h}^I\cdot \mathbf{h}^S$.
We find that domain-specific knowledge describes intra-domain properties and is more suitable for refining the extracted domain-invariant features.
As shown in Table 6 in the paper, $\mathbf{h}^I+\mathbf{h}^I\cdot \mathbf{h}^S$ achieves better performance than directly adding $\mathbf{h}^I+ \mathbf{h}^S$.
Therefore, $\mathbf{h}^S$ functions as pixel-level attention for $\mathbf{h}^I$.
In this case, $\mathbf{h}^I\cdot \mathbf{h}^S$ represents the features refined by domain-specific knowledge, and maximizing the cosine similarity between $\mathbf{h}^I$ and $\mathbf{h}^I\cdot \mathbf{h}^S$ can help further decouple the domain-invariant and domain-specific knowledge.
In addition, if we maximize the cosine similarity between $\mathbf{h}^I$ and $\mathbf{h}^S$, then the term $\mathbf{h}^I\cdot \mathbf{h}^S$ eventually converges to 0, which is meaningless for adaptation.
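For concreteness, the two injection operations and the cosine-similarity term discussed above can be sketched as follows (our toy illustration on flattened feature vectors; the 64-dim shape and random values are assumptions, not the paper's setup):

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two flattened feature vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
h_I = rng.standard_normal(64)     # domain-invariant features (DIA output)
h_S = rng.standard_normal(64)     # domain-specific features (DSA output)

direct_add = h_I + h_S            # injection variant 1: direct addition
attn_inject = h_I + h_I * h_S     # injection variant 2: pixel-level attention

# term entering the decoupling objective: similarity between h_I and the
# features refined by domain-specific knowledge, h_I * h_S
dec_term = cosine_sim(h_I, h_I * h_S)
```

In the attention variant, `h_S` acts element-wise on `h_I` rather than contributing features of its own, which matches the rebuttal's reading of `h_S` as pixel-level attention.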
Q3: **Lack of details for VTA.... Besides, it should be added the text description c in Figure 2 (c)....for better clarity.**
A3: Thanks for raising this question.
The structure of DITA and DSTA is a 3-layer MLP with a hidden dimension of 512.
The DITA and DSTA project visual embeddings into 8 tokens for the textual encoder.
We will supplement the details of VTA in the manuscript.
We will also add the textual description c as an input to the textual encoder in Fig.2(c).
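The DITA/DSTA structure described above (a 3-layer MLP with a 512-dim hidden layer projecting a visual embedding into 8 tokens) can be sketched as follows. The input dimension (1024), output token dimension (512), ReLU activations, and random weights are our assumptions for illustration only.

```python
import numpy as np

def visual_to_tokens(v, w1, w2, w3, n_tokens=8, token_dim=512):
    # 3-layer MLP; output reshaped into n_tokens tokens for the textual encoder
    h = np.maximum(w1 @ v, 0.0)          # layer 1 + ReLU
    h = np.maximum(w2 @ h, 0.0)          # layer 2 + ReLU
    out = w3 @ h                         # layer 3 (linear)
    return out.reshape(n_tokens, token_dim)

rng = np.random.default_rng(0)
v = rng.standard_normal(1024)                     # visual embedding (assumed dim)
w1 = rng.standard_normal((512, 1024)) * 0.01      # hidden dim 512, per the rebuttal
w2 = rng.standard_normal((512, 512)) * 0.01
w3 = rng.standard_normal((8 * 512, 512)) * 0.01   # 8 tokens of assumed dim 512

tokens = visual_to_tokens(v, w1, w2, w3)
```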
Q4: **The author should add results of source-only baseline, i.e., RegionCLIP.**
A4: Thanks for this point.
The zero-shot result of the baseline, *i.e.*, RegionCLIP, is 52.6\% mAP in Line 1 of Tables 4, 6, and 8 in the paper.
Fine-tuned with only the source domain data, the source-only baseline achieves 50.5\% mAP, suffering performance degradation on the target domain.
Equipped with domain-adaptive adapter and visual-guided textual adapter, our proposed method achieves the highest absolute gain of 8.0\% (Line 5 of Table 10 in the paper) over the source-only baseline, demonstrating remarkable efficiency.
We will modify the corresponding description in the manuscript to make it clearer.
Q5: **The Injection Operation...can the authors provide some explanation between h^I+h^S and h^I+h^I\*h^S**
A5: Thanks for the question.
$\mathbf{h}^I+ \mathbf{h}^S$ denotes directly adding the output of DIA and DSA and taking it as the output feature for the domain-aware adapter.
$\mathbf{h}^I+\mathbf{h}^I\cdot \mathbf{h}^S$ denotes that $\mathbf{h}^S$ is applied as pixel-level attention, multiplied with $\mathbf{h}^I$ element-wise and added with the original $\mathbf{h}^I$.
Experiments show that $\mathbf{h}^I+\mathbf{h}^I\cdot \mathbf{h}^S$ outperforms $\mathbf{h}^I+ \mathbf{h}^S$ (Lines 5 and 7 of Table 6 in the paper), revealing that $\mathbf{h}^S$ describes intra-domain properties and is better suited for refining $\mathbf{h}^I$ than for direct addition.
Q6:**In line 184, the figure reference should be Fig.2(c) instead of Fig.3(b).**
A6: Thanks for this point. We will fix this typo in the manuscript.
Q7: **Do we really need a source domain for adaptation when we already have a powerful vision-language detector that contains much general knowledge from large-scale data?**
A7:
Thanks for the intriguing question.
Pre-trained VLMs learn powerful generalization through large-scale data.
Although VLMs' zero-shot capabilities are strong, these capabilities stem from the general knowledge they learn, which acts like common sense acquired over a wide range of data.
When deployed to downstream tasks, VLMs show better performance when combined with task-specific knowledge.
Therefore, when credible labels for a downstream task are lacking, a manually annotated dataset that is highly relevant to the task, that is, source domain data, can help VLMs migrate to the downstream task properly.
In other words, the source domain exists to help VLMs better adapt to the target domain.
In this sense, the source domain is still needed.
---
Rebuttal 2:
Title: Thank you for your careful response
Comment: Thank you for your detailed answers to the other reviewers and me, which solved all my concerns.
---
Rebuttal Comment 2.1:
Comment: Thanks for your timely responses. We appreciate the valuable comment and insightful question! | Summary: This article proposes a Domain-Aware Adapter (DA-Ada) tailored for the DAOD task. The key point is exploiting domain-specific knowledge between the essential general knowledge and domain-invariant knowledge. The DA-Ada framework consists of the Domain-Invariant Adapter (DIA) for learning domain-invariant knowledge and the Domain-Specific Adapter (DSA) for injecting the domain-specific knowledge from the information discarded by the visual encoder. Comprehensive experiments over multiple DAOD tasks show the effectiveness of DA-Ada.
Strengths: 1. The paper is well-written.
2. The proposed method achieves significant performance improvement compared to several baselines on commonly used benchmarks. The proposed method not only improves the detection performance on the target domain, but also achieves improvement on the source domain.
Weaknesses: 1. The DIA module is a sequence of operations involving mapping, dimensionality reduction, slicing, and dimensionality-raising. It is challenging to understand why such operations can extract domain-invariant knowledge. The author should provide a stronger explanation to clarify the rationale behind this design.
2. In line 251 of the text, could the author elaborate on how the source-only adapter and the domain-agnostic adapter each function? This would help readers better understand.
3. Could the author design a quantitative experiment to further visualize which features represent domain-specific knowledge? Additionally, could they compare the traditional adapter with the method proposed in this paper to further demonstrate the effectiveness of the proposed method? The AP metric alone is insufficient to fully explain the motivation behind this method.
4. Since this paper focuses on adapter work, could the author further review the related work on VLM (Vision-Language Models) adapters in more detail? This would help readers better understand the contribution of this paper.
5. The paper mentions that the domain-agnostic adapter easily causes the model to bias towards the source domain. The latest work [1] also addresses the issue of source bias. In the related work and experiments sections, the author should include a discussion and comparison with the latest work.
6. During this training process, are the original CLIP network components (including the visual encoder and the text encoder) completely frozen, with only the adapter and VTA parts being trained?
7. minor error: There are several places between lines 91 and 102 where spaces are missing between sentences.
Reference:
[1] DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection[C]//Forty-first International Conference on Machine Learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We sincerely thank you for your comprehensive comments and constructive advice.
We are pleased to see our work being regarded as achieving significant improvement.
We will explain your concerns point by point.
Q1: **The DIA module is a sequence of operations...explanation behind this design.**
A1:
Thanks for your valuable concern.
In fact, it is the combination of the dimensional reduction-increase process with the constraints of the detection and adversarial losses that extracts domain-invariant features while reducing redundant ones.
The structure of DIA is motivated by bottleneck [2].
Bottleneck reduces the computational cost by dimensional reduction-increase, efficiently learning feature representation.
In this structure, when down-projecting the input into low-dimensional features, some redundant features are discarded; when mapping low-dimensional features back to the original dimension, the task-related features are retained with the constraint of task loss.
In DIA, we first down-project the input features into low-dimensional features $h^L$, then up-project them to high-dimensional features $h^I$, optimizing with the adversarial and detection losses.
Besides, we introduce the multi-scale scheme by slicing channels into different receptive fields, enabling it to capture domain-invariant knowledge in various scales.
Experiments show that the mAP peaks 57.1% when the bottleneck dimension is 1/2 of the input (Line 2 of Table 7 in the paper), indicating that appropriate dimensional reduction can filter redundant information while extracting domain-invariant knowledge.
Applying the scaling ratios of {1, 1/2, 1/4, 1/8} achieves 58.5%, demonstrating that multi-scale convolution can help deal with the domain bias in object scales.
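The multi-scale slicing scheme mentioned in A1 can be sketched as follows. This is our interpretation, not the paper's code: the bottleneck feature's channels are split into groups, one per scale, and each group would then pass through a convolution with a different receptive field (e.g. 1×1, 3×3, 5×5, 7×7); splitting the channels proportionally to the ratios {1, 1/2, 1/4, 1/8} is an assumption.

```python
import numpy as np

def slice_multiscale(h, ratios=(1.0, 0.5, 0.25, 0.125)):
    # split channels proportionally to the ratios; each group is intended
    # for a conv with a different kernel size (receptive field)
    c = h.shape[0]
    total = sum(ratios)
    sizes = [int(round(c * r / total)) for r in ratios]
    sizes[-1] = c - sum(sizes[:-1])      # absorb any rounding remainder
    groups, start = [], 0
    for s in sizes:
        groups.append(h[start:start + s])
        start += s
    return groups

h = np.zeros((120, 8, 8))                # bottleneck feature (assumed shape)
groups = slice_multiscale(h)             # channel groups of sizes 64/32/16/8
```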
Q2: **How the source-only adapter and domain-agnostic adapter each function?**
A2: Thanks for the suggestion.
The source-only adapter denotes the traditional adapter fine-tuned on the source domain.
The domain-agnostic adapter is tuned with detection loss on source domain and adversarial loss on both domains, as shown in Fig.1(b).
We will supplement the description in the manuscript.
Q3: **Quantitative experiment to visualize which features represent domain-specific knowledge?**
A3: Thanks for your constructive suggestion.
We visualize the output features of the traditional adapter, the domain-invariant adapter (DIA), domain-specific adapter (DSA) and the domain-aware adapter (DA-Ada) in Figure 5 in the Global Rebuttal part.
We sample image (a) a car and a person in the fog from Foggy Cityscapes.
The traditional adapter (b) roughly extracts the outline of the car.
However, affected by target domain attributes, such as fog, background areas are also highlighted in (b), and the person is not salient.
DIA (c) mainly focuses on the object area and extracts domain-shared task information.
DSA (d) mainly focuses on factors related to domain attributes besides the objects, such as foggy areas.
By combining DIA with DSA, DA-Ada (e) extracts the car and person while reducing the interference of fog in the background.
Compared with (b), objects are more salient in (e), indicating the effectiveness of DA-Ada.
Q4: **Further review the related work on VLM adapters**
A4: Thanks for your suggestion.
Recent works explore adapters to transfer pre-trained VLM to few-shot visual tasks.
[3] and [4] propose adapters to introduce image-related inductive biases into Transformer and CNN network.
[6] firstly integrates the adapter into the CLIP model, and [5] further analyzes the components to be frozen or learnable.
[7] combines self-supervised learning to enhance the ability to extract low-level features.
A recent work [8] explores injecting task-related knowledge into the high-resolution segmentation model SAM.
However, they also face the source-biased problem when applied to DAOD task.
To handle this, DA-Ada explicitly learns both domain-invariant and domain-specific knowledge.
We will provide a more detailed description in related work.
Q5: **Comparison with the latest work [1].**
A5:
As a semi-supervised method, [1] transfers source domain images into the target domain, aiming to train a style-unbiased classifier.
However, it requires a large amount of image pre-processing and three stages of training.
Moreover, [1] can only handle domain bias in image style.
In contrast, DA-Ada freezes the backbone and introduces lightweight learnable adapters.
It does not require data generation and tunes only a very small number (1.794M) of parameters to achieve a significant adaptation gain of +8.0\% (Line 5 of Table 10 in the paper).
Meanwhile, since DA-Ada adaptively learns cross-domain information from visual features, it appears to be robust in various adaptation scenarios.
Experiments show that DA-Ada outperforms [1] on three benchmarks by 6.3\~17.4\% mAP and achieves 5.1\~5.6\% higher absolute performance gains, indicating that DA-Ada is more effective and can handle a broader range of adaptation scenarios.
|Benchmark|DSD-DA|Abs. Gain|DA-Ada|Abs. Gain|
|-|-|-|-|-|
|C→F|52.2|+2.9|58.5|+8.0|
|K→C|49.3|+1.6|66.7|+7.2|
|S→C|52.5|+1.1|67.3|+6.5|
Q6: **During training process, are the original network completely frozen?**
A6:
Yes. Only the adapter parts are trained, and all other CLIP components are frozen.
Q7: **minor error in lines 91 and 102**
A7: Thanks for pointing that out. We will revise these typos.
[1] DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection
[2] Deep Residual Learning for Image Recognition
[3] AdapterFusion: Non-destructive Task Composition for Transfer Learning
[4] Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets
[5] VL-Adapter: Parameter-efficient Transfer Learning for Vision-and-Language Tasks
[6] CLIP-Adapter: Better Vision-Language Models with Feature Adapters
[7] SVL-Adapter: Self-supervised Adapter for Vision-Language Pretrained Models
[8] Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. Most of my concerns are addressed. I hope that all the experiments and discussions in the rebuttal can be added in the revised paper. After reading the author's response and considering comments from other reviewers, I would like to keep my initial rating.
---
Reply to Comment 1.1.1:
Comment: We really appreciate your precious time. As you kindly point out, we will carefully include the additional experiments and discussions in the manuscript. Thanks for your insightful suggestion!
Rebuttal: We thank all the reviewers for their insightful and valuable comments! Overall, we are encouraged that they find that:
1. The idea of learning domain-aware adapter is **moderately interesting** (Reviewer N5kf), **reasonable** (Reviewer nqrX) and the motivation is **clear and straightforward** (Reviewer xAKd).
2. This paper addresses an **important** problem in the large vision-language era and **explore the new paradigm for DAOD** (Reviewer nqrX).
3. The proposed method is **simple yet effective** (Reviewer N5kf, Reviewer xAKd, Reviewer nqrX), **significantly improves DAOD performance** (Reviewer N5kf, Reviewer DyVN, Reviewer xAKd), and **surpasses state-of-the-art works by a large margin**(Reviewer N5kf).
4. The paper is **well-written** (Reviewer DyVN, Reviewer xAKd), and the experiments are **extensive** (Reviewer nqrX), **comprehensive**, and **detailed** (Reviewer xAKd).
We will revise the manuscript according to the reviewers' comments. The main changes we made include:
1. We add more details and explanations about the low-dimensional features in DIA, the injection operation in DSA, and the structure of VTA.
2. We add discussions and comparisons with the latest research on prompt tuning, adapters, and representation disentangling.
3. We add quantitative experiments to explore the effectiveness of the proposed method. We provide feature visualization of the traditional adapter, the proposed domain-invariant adapter, domain-specific adapter and domain-aware adapter in Figure 5 in the attached PDF. We also analyze some failure cases in Figure 6.
4. We revise the details in Figures and Tables and fix some typos in the manuscript. We add the textual description c as an input to the textual encoder in Fig.2(c), and add legends in Fig.3. We indicate the pre-training scheme for the model in Table 1 and Table 2.
Next, we address each reviewer's detailed concerns point by point. We hope we have addressed all of your concerns. Thank you!
Pdf: /pdf/10952f878d3efe24f693efb15aa449ce15cbf777.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-Agent Domain Calibration with a Handful of Offline Data | Accept (poster) | Summary: The paper addresses the challenge of performance degradation when RL policies trained in one domain are deployed in another with different dynamics. The proposed solution, Madoc (Multi-agent domain calibration), uses a small amount of offline data from the target domain to adjust the source domain's physics parameters. This approach improves upon existing domain calibration techniques by employing cooperative Multi-Agent Reinforcement Learning (MARL) to handle a large parameter space more efficiently.
Strengths: - Strong empirical performance
Weaknesses: - The paper is so hard to follow.
- In 4.1, we start with the KL minimization objective. It is difficult to understand how it is optimized, as the paper lacks sufficient explanation and some derivations are incorrect. Eq. (4) seems to be a completely different formula compared to Eq. (3). Regarding $q_\phi$, in Eq. (4) the only relevant term is the third term, which makes no sense.
- Section 4.2 refers to a "calibration critic" and some terms that are not defined.
- Especially the latter part of 4.2, explaining the parameter grouping, is mostly not understandable. What does "the identity $i$ of each parameter" mean? How can $i$ itself determine $s'$ given $s,a,\zeta^i$ without knowing $\zeta^{-i}$?
minor comments:
- In the Figure 2 legend, blue dots are said to be MA, whereas the text on the left says blue dots are the single-agent method.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Regarding the objective at the start, eq. (1), why do we need to match action distribution as well? Usually if the transition is varying, the objective is to match the state distributions. If the transitions are different, the actions to make the similar state distributions will be different, and it seems natural to not match the action distributions.
- Why do we need the multi-agent method? in section 4.2. it is motivated in a way that shared critic struggles to optimize policy in a huge action space. Does it mean that it is beneficial to use multi-agent algorithm for any RL environments for better optimization of policy? As far as I know, multi-agent algorithms are harder to optimize to the optimal policy (often converge to sub-optimal), and I am curious why this case is different.
- Why can't we apply algorithms like "imitation learning from observation" for this problem setting after concatenating the domain parameter to the action space? How is this paper different from such algorithms?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The objective in Sec. 4.1.
**Eq. 4 is obtained through reasonable approximations.**
The explanations for each derivation step are as follows:
- Eq. 2 is derived from Eq. 1 based on the definitions of trajectory distribution and KL divergence, transforming the objective into the expectation under the parameter distribution and trajectory distribution of the source domain.
- Eq. 3 further decomposes Eq. 2. Since $\log \frac{q_{\phi}(\xi)}{p(\xi)}$ is independent of the trajectory distribution $d_{\pi,\mathcal{M}_{\xi}}$, the expectation can be decomposed into two terms.
- Eq. 4 approximates Eq. 3. As the parameter distribution and trajectory distribution used to calculate the expectation are difficult to compute, we use **Monte Carlo sampling** on the source domain to approximate the expected results. To enhance sampling efficiency, we store the samples in the replay buffer, **converting the trajectory-based objectives into transition-based objectives**, following the off-policy RL paradigm.
We will add the detailed explanations in the revised manuscript.
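The Monte Carlo step used in the Eq. 3 to Eq. 4 approximation can be illustrated with a toy example (ours, not the paper's code): an expectation that is intractable under the trajectory distribution is replaced by an average over transitions stored in a replay buffer. The true mean of 0.5 and the sample count are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the per-transition log-ratio terms whose expectation under
# the trajectory distribution d_{pi,M_xi} is needed; here we pretend the
# true expectation is 0.5 and the buffer holds 10,000 i.i.d. samples.
buffer_terms = 0.5 + 0.1 * rng.standard_normal(10_000)

# transition-based Monte Carlo estimate of the trajectory-based objective
mc_estimate = buffer_terms.mean()
```

Storing samples in the buffer and averaging in this way is what converts the trajectory-based objective into a transition-based one, following the off-policy RL paradigm.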
### Q2: Some terms are not defined in Sec. 4.2.
The calibration critic and the calibration policy (actor) together constitute a calibration agent. The calibration critic is responsible for evaluating the accuracy of the parameters output by the calibration actor. The absolute calibration error represents the absolute difference between the parameters output by the calibration actor and the target parameters.
### Q3: The latter part of Sec. 4.2 is not understandable.
"The identity $i$ of each parameter" represents a specific identification symbol for each parameter. Specifically, in the experiments, these identities refer to a set of one-hot encodings used to distinguish different parameters.
Besides, a key detail was omitted: the identity can determine $s'$ given $s, a, \xi^i$ by **keeping the other parameters $\xi^{-i}$ fixed**, similar to the approach in COMA [1]. By changing only $\xi^i$, the effect of this parameter on the dynamics can be isolated, thereby facilitating parameter grouping.
We apologize for the confusion and will add explanations in the revised manuscript for better understanding.
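The COMA-style counterfactual probe described above can be sketched as follows (our toy illustration; the transition function and its per-parameter sensitivities are stand-ins, not the real simulator): perturb one parameter $\xi^i$ while holding $\xi^{-i}$ fixed, and measure the induced change in $s'$.

```python
import numpy as np

def next_state(s, a, xi):
    # toy transition function whose sensitivity differs per parameter;
    # stands in for the real simulator dynamics
    weights = np.array([1.0, 0.1, 0.0])
    return s + a + float(weights @ xi)

def effect_of(i, s, a, xi, delta=1.0):
    # counterfactual effect of parameter i: perturb only xi_i, keep the
    # other parameters xi^{-i} fixed, and measure the change in s'
    xi_pert = xi.copy()
    xi_pert[i] += delta
    return abs(next_state(s, a, xi_pert) - next_state(s, a, xi))

xi = np.zeros(3)
effects = [effect_of(i, s=0.0, a=0.0, xi=xi) for i in range(3)]
# parameters with similar effect magnitudes can then be grouped together
```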
### Q4: Why do we need to match action distribution?
**We need to match the action distribution to make the dynamics of the source domain closer to that of the target domain.**
Eq. 1 is used to match the trajectory distribution (state distribution and action distribution). The underlying motivation is to minimize the transition dynamics gap. If only the state distribution is matched, multiple combined solutions of the policy and the transition function can exist. Therefore, to obtain the target transition function, we need to match the action distribution as well. To verify our idea, we conducted relevant experiments: for Eq. 1, we no longer matched the action distribution, denoted as Madoc_st. The experimental results, shown below, indicate performance degradation in all three scenarios.
| Task | Madoc | Madoc_st |
| -------------- | -------- | --------- |
| hfctah-med | 91.9±7.7 | 78.2±8.9 |
| hfctah-med-rep | 95.7±9.9 | 85.2±12.4 |
| hfctah-med-exp | 96.9±5.3 | 88.1±10.1 |
### Q5: Why do we need the multi-agent method?
**We use the multi-agent method for better optimization efficiency.**
The search space of domain parameters grows exponentially with the number of parameters, hindering the optimization of both single-agent methods and previous multi-agent methods. Fortunately, with the advent of linear value factorization algorithms like VDN [2], researchers have improved scalability by decomposing the joint action space theoretically and practically [3], resulting in a much lower action space for each agent. Consequently, Madoc has higher optimization efficiency and can calibrate parameters more accurately.
Certainly, the multi-agent method is not applicable to all RL environments. When there is only one agent in the environment with a low action dimension, a single-agent RL method might be more straightforward, providing satisfactory policy optimization without the added complexity.
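The scalability argument above rests on linear value factorization: if $Q_{tot}(\mathbf{a}) = \sum_i Q_i(a_i)$ as in VDN [2], the joint greedy action decomposes into independent per-agent argmaxes, so the search cost drops from $|A|^n$ to $n|A|$. A toy check of this property (our illustration; the agent and action counts are arbitrary):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_agents, n_actions = 4, 5

# per-agent utilities Q_i(a_i) for a fixed state; VDN models
# Q_tot(a_1, ..., a_n) = sum_i Q_i(a_i)
q = rng.standard_normal((n_agents, n_actions))

# per-agent greedy actions: one argmax each (n * |A| evaluations)
greedy = tuple(q.argmax(axis=1))

# brute-force joint argmax over |A|^n combinations for comparison
best = max(product(range(n_actions), repeat=n_agents),
           key=lambda joint: sum(q[i, a] for i, a in enumerate(joint)))
```

Because the sum is maximized term by term, `greedy` and `best` coincide, which is why each calibration agent can search a much smaller space without losing the joint optimum under this factorization.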
### Q6: How about methods like "imitation learning from observation"?
**Madoc is different from and superior to these methods.**
Imitation from observation [4] methods (denoted as IfO) can be formalized as learning a discriminator $D(s, s')$ and a policy $\pi(a,\xi|s)$ simultaneously, while Madoc involves training a discriminator $D(s, a, s')$, a calibration policy $q(\xi)$ and a running policy $\pi(a|s)$. There are mainly two differences:
- Madoc decouples tuning the domain parameters and the policy, making it more accurate and efficient in reducing the dynamic gap.
- IfO requires matching the state distribution of the expert trajectory, which imposes additional quality requirements on the dataset [5]. However, Madoc only requires datasets of arbitrary quality to match the transition dynamics.
We also tested the performance of IfO. The first table below reports the mean return during the test stage.
| Task | Madoc | IfO |
| -------------- | -------- | --------- |
| hfctah-med | 91.9±7.7 | 43.2±28.9 |
| hfctah-med-rep | 95.7±9.9 | 55.2±19.4 |
| hfctah-med-exp | 96.9±5.3 | 42.1±22.1 |
The second table below compares the mean absolute calibration error.
| Task | Madoc | IfO |
| -------------- | --------- | --------- |
| hfctah-med | 0.06±0.01 | 0.98±0.12 |
| hfctah-med-rep | 0.12±0.02 | 1.02±0.30 |
| hfctah-med-exp | 0.09±0.02 | 0.79±0.08 |
Reference:
[1] Counterfactual Multi-Agent Policy Gradients. AAAI, 2018.
[2] Value-Decomposition Networks for Cooperative Multi-Agent Learning Based on Team Reward. AAMAS, 2018.
[3] Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization. NeurIPS, 2021.
[4] Recent Advances in Imitation Learning from Observation. IJCAI, 2019.
[5] Robust Learning from Observation with Model Misspecification. AAMAS, 2022.
---
Rebuttal Comment 1.1:
Title: About author responses
Comment: Thanks for the detailed response. These responses address some of my concerns. However:
1. My question on (4) was that it no longer includes $q_\phi$ except in the last term. Optimizing it results in $q_\phi=p$, and I am sure that is not what the paper is aiming to do.
2. I am still not convinced that the multi-agent approach is mandatory in small-sized experiments on MuJoCo locomotion tasks. I believe that the optimization challenge addressed with the multi-agent approach can also be handled with complex neural architectures such as Transformers/Diffusion models. In this sense, considering the difficulty of optimizing multi-agent algorithms, there won't be many interested researchers willing to explore similar directions.
Due to these reasons, for now, I decide to keep the score.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your thorough review and enthusiastic discussion.
Regarding your first concern, we have decided to revise Eq. (4) from
$$\approx-\mathbb{E}\_{(s,a)\sim\mathcal{B}}\left[\log\frac{\mu(a|s)}{\pi(a|s)}\right] - \mathbb{E}\_{(s,a,s',\xi)\sim\mathcal{B}}\log\left[\frac{T(s'|s,a,\xi^*)}{T(s'|s,a,\xi)}\right]+D\_{\mathrm{KL}}(q\_{\phi}(\xi)||p(\xi)),$$
to
$$\approx-\mathbb{E}_{a\sim\pi(\cdot|s)}\left[\log\frac{\mu(a|s)}{\pi(a|s)}\right]-\mathbb{E}\_{\substack{\xi\sim q\_\phi(\cdot) \\\\ (s,a,s')\sim d\_{\pi,\mathcal{M}\_{\xi}}(\cdot)}}\log\left[\frac{T(s'|s,a,\xi^*)}{T(s'|s,a,\xi)}\right]+D\_{\mathrm{KL}}(q\_{\phi}(\xi)||p(\xi)),$$
where both the second and third terms optimize the parameter distribution $q_\phi$. While the underlying approach remains the same, the revised expression is now clearer. Our initial intention was to emphasize the off-policy training paradigm by introducing the replay buffer $\mathcal{B}$ in the equation, but this led to some confusion. We will update this section in the revised version and add an explanation to address your concern.
Furthermore, addressing the concern about the necessity of a multi-agent approach, it's important to emphasize that the motivation behind using multi-agent methods is to mitigate optimization challenges posed by the large number of parameters. By employing the widely used value decomposition approach, the multi-agent method reduces the search space for each individual agent while preserving cooperation between different parameters. Additionally, we have chosen MARL for the following reasons:
- With the advancement of deep learning and the development of techniques such as value decomposition in MARL [1, 2], research in multi-agent reinforcement learning (MARL) has made significant strides, overcoming numerous challenges and achieving notable progress across various fields. Consequently, the optimization challenges in multi-agent algorithms have been substantially alleviated in recent years.
- Transformers and Diffusion models have recently shown exceptional results across various domains of deep learning. In reinforcement learning, approaches like TT [3], Diffuser [4], and DD [5] have also achieved impressive outcomes using these models. However, their success heavily relies on large datasets—TT, Diffuser, and DD required 1e6, 1e6, and 2.5e6 transitions, respectively, for training on continuous control tasks, while our method uses only 5e4 transitions. Consequently, their performance may not be as strong in our small-sample setting. Nonetheless, in future work, we plan to explore integrating Transformers and Diffusion models into our framework to enhance the expressive power of the reward model, particularly in scenarios involving high-dimensional image and text inputs.
- Multi-agent methods can solve problems that single-agent approaches cannot. For example, MA-DAC [6] models dynamic algorithm configuration problems as multi-agent systems and addresses them using cooperative MARL algorithms, thereby improving optimization efficiency. Similarly, MA2ML [7] effectively tackles optimization learning challenges in the connection of modules in automated machine learning through MARL. The study in [8] introduces a generic game-theoretic credit assignment framework to enable decentralized execution in continuous control. MARLYC [9] presents a novel method called MARL yaw control, which optimizes the yaw of each turbine, ultimately enhancing the total power generation of the farm. Additionally, the work in [10] models image data augmentation as a multi-agent problem, offering a more fine-grained automatic data augmentation approach by dividing an image into multiple grids and determining a jointly optimal enhancement policy. These successful applications highlight the effectiveness of modeling problems as multi-agent systems.
Thank you for your valuable feedback. We will continue to refine our framework to enhance domain transfer, guided by the reviewers' suggestions, and further advance the practical application of MARL.
Reference:
[1] Multi-agent deep reinforcement learning: a survey. Artificial Intelligence Review, 2022.
[2] A survey of progress on cooperative multi-agent reinforcement learning in open environment. arXiv, 2023.
[3] Offline reinforcement learning as one big sequence modeling problem. NeurIPS, 2021.
[4] Planning with diffusion for flexible behavior synthesis. ICML, 2022.
[5] Is conditional generative modeling all you need for decision making? ICLR, 2023.
[6] Multi-agent dynamic algorithm configuration. NeurIPS, 2022.
[7] Multi-agent automated machine learning. CVPR, 2023.
[8] Multiagent model-based credit assignment for continuous control. AAMAS, 2022.
[9] Marlyc: Multi-agent reinforcement learning yaw control. Renewable Energy, 2023.
[10] Local patch autoaugment with multi-agent collaboration. IEEE Transactions on Multimedia, 2023. | Summary: This paper introduces Madoc, a domain transfer method that calibrates a source domain using small amount of offline data from the target domain via multi-agent reinforcement learning (MARL). More concretely, MARL is used to tune physics parameters governing dynamics in the source domain to more closely match those in the target domain. Empirically, Madoc enables transfer in D4RL and NeoRL benchmark tasks.
Strengths: 1. The domain transfer problem is clear and the method is well-motivated.
2. The empirical analysis considers many relevant baselines and appropriate benchmark tasks.
Weaknesses: I lean to reject primarily because the paper's empirical analysis does not seem to support its two core claims -- namely, that Madoc reduces the difference in dynamics between source and target domains and that it outperforms existing methods.
1. **Weak empirical results.** In most if not all tasks, the confidence regions for Madoc overlap with those for other baselines.
2. **It is unclear if Madoc is indeed reducing the dynamics gap between source and target domains, since this gap is not evaluated in experiments.** Additional experiments should be included showing that Madoc indeed reduces the dynamics gap. For instance, one could report the difference between source and target domain physics parameters and/or the KL divergence between source and target domains. As it stands, it looks like Madoc offers little to no improvement over baselines, which leads me to believe that Madoc might not be reducing the gap by much.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I'm a bit confused by Figure 2. Could the authors provide more detail on what is being plotted here and why it is being plotted? In particular, what exactly are the plotted metrics and how are they intuitively related?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are noted in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Weak empirical results.
**Our method Madoc has achieved a trade-off between high mean and low variance in return performance.**
- On the one hand, Madoc requires online interaction with the source domain to search for the domain parameters that best match the offline dataset. Consequently, the random seed significantly influences exploration and exploitation, leading to larger variance for Madoc. In contrast, pure offline algorithms like CQL and MOREC learn on a fixed dataset in a conservative manner and do not need to explore. Therefore, they are less influenced by the random seed and have smaller variance.
- On the other hand, algorithms with lower variance, namely H2O, DR+BC, CQL, and MOREC, obtain conservative policies by penalizing the value functions on OOD actions or directly constraining the policies against the behavior policies. As a result, their mean performances are also limited by the dataset. Our method has achieved a trade-off between high mean and low variance, attaining optimal performance compared to baselines in most scenarios.
Regarding the large variance problem of Madoc, we have made some improvements. Once domain calibration is completed, we no longer use pure SAC to train the policy on the source domain from scratch. Instead, we combine SAC with BC to impose appropriate constraints on the learned policy, referred to as Madoc+online_bc. The results are shown in the table below. We can observe that the mean performance of Madoc+online_bc decreased slightly but became more stable, confirming our approach.
| Task | MOREC | Madoc | Madoc+online_bc |
| -------------- | -------- | -------- | --------------- |
| hfctah-med | 73.9±3.0 | 91.9±7.7 | 88.9±4.6 |
| hfctah-med-rep | 74.1±2.8 | 95.7±9.9 | 90.1±4.8 |
| hfctah-med-exp | 72.0±3.1 | 96.9±5.3 | 97.2±3.3 |
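The SAC+BC combination described above follows the common pattern of adding a behavior-cloning regularizer to the actor loss. A schematic sketch of that pattern (ours, not the paper's implementation; `alpha`, `bc_weight`, and all numeric values are made up):

```python
import numpy as np

def actor_loss_with_bc(q_values, log_probs, policy_actions, dataset_actions,
                       alpha=0.2, bc_weight=1.0):
    """SAC actor objective plus a behavior-cloning term that pulls the
    learned policy toward the dataset (behavior) actions."""
    sac_term = np.mean(alpha * log_probs - q_values)
    bc_term = np.mean((policy_actions - dataset_actions) ** 2)
    return float(sac_term + bc_weight * bc_term)

# Hypothetical mini-batch of two transitions.
q = np.array([1.0, 2.0])
logp = np.array([-0.5, -0.3])
a_pi = np.array([[0.1], [0.2]])
a_data = np.array([[0.0], [0.25]])
loss = actor_loss_with_bc(q, logp, a_pi, a_data)
```

Raising `bc_weight` trades a little mean performance for stability, which is consistent with the Madoc+online_bc numbers reported in the table.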
The concern you mentioned is of crucial significance, and we will incorporate its discussion in the experimental section of the revised manuscript.
### Q2: It's unclear if Madoc is indeed reducing the dynamics gap between the source and target domains.
**Our method Madoc indeed reduces the dynamics gap between the source and target domains.**
Some experimental results are as follows:
| Task | DROPO | OTED | Madoc |
| -------------- | --------- | --------- | --------- |
| hfctah-med | 0.78±0.18 | 0.28±0.33 | 0.06±0.01 |
| hfctah-med-rep | 0.64±0.15 | 0.61±0.11 | 0.12±0.02 |
| hfctah-med-exp | 0.66±0.14 | 0.33±0.06 | 0.09±0.02 |
As shown in the table, we used "mean absolute calibration error" (i.e., the absolute difference between the physics parameters of the source and target domains) to measure the dynamics gaps of different algorithms. It can be observed that Madoc minimized the mean absolute calibration errors in all three tasks. The complete results are presented in Appendix E.1. We sincerely apologize for the confusion caused by the lack of appendix guidance in the manuscript and will add this guidance in the revised version.
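As a concrete illustration of this metric (the function and all parameter values below are our own hypothetical example, not the paper's data):

```python
import numpy as np

def mean_absolute_calibration_error(calibrated, target):
    """Mean absolute difference between the calibrated physics parameters
    of the source domain and the true parameters of the target domain."""
    calibrated = np.asarray(calibrated, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean(np.abs(calibrated - target)))

# Hypothetical parameter vectors (e.g., gravity and two body masses).
target_params = [-9.81, 6.36, 4.50]
calibrated_params = [-9.75, 6.40, 4.42]
err = mean_absolute_calibration_error(calibrated_params, target_params)  # ≈ 0.06
```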
### Q3: Could the authors provide more details about Fig. 2?
**Fig. 2 demonstrates that, compared to the single-agent method, the multi-agent method can more effectively evaluate the accuracy of each physics parameter output by the calibration policy.**
The calibration agent consists of a calibration critic that evaluates the accuracy of the physics parameters and a calibration actor (policy) that adjusts the output physics parameters. The plotted metrics are the values output by the calibration critic for a specific physics parameter (i.e., the gravity coefficient) under different parameter conditions. When the parameters output by the calibration actor are closer to the target parameters (indicating a smaller absolute calibration error), the evaluation value output by a "good" critic should be higher. The results in Fig. 2 show that our multi-agent method can train such a good critic, whereas the single-agent method fails to correctly evaluate the accuracy of the physics parameters due to the lack of reasonable reward allocation and the large search space.
We have corrected the legend of Fig. 2 **in the left half of the PDF of the "global" response** and will provide more detailed explanations about it in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response!
* Ah, I see, the paper *does* compare the source/target calibration gap in Appendix E. I must've missed this; I don't think there's a direct reference to Appendix E in the main body. I agree, Madoc is indeed achieving a lower calibration gap. I'm raising my score now that this has been clarified. As a side note, I suggest adding a vertical bar separating Madoc-S and Madoc from the other baselines. When I was comparing numbers, I found myself accidentally comparing Madoc with Madoc-S rather than the baseline methods.
* Fig 2 is now clear to me. If the discussion in your rebuttal is incorporated into the main paper, readers like myself should have a much easier time understanding the punchline here. I think my confusion primarily arose because the legend was incorrectly labeled in the original submission.
* I'll make another comment about the performance (return) of Madoc vs other baselines later today once I have some time to carefully look over the results again. I'll respond as soon as I can to ensure we can discuss if needed!
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback.
We appreciate your suggestion about the vertical bar and will certainly consider it for the final revision. If you have any additional comments or concerns regarding the performance (return) of Madoc versus other baselines, we'd be happy to discuss them. | Summary: The authors, introduce Madoc, a novel framework for domain calibration. By leveraging offline data from the target domain, it dynamically adjusts physics parameters, enabling direct policy deployment. To tackle the challenge posed by a large domain parameter space, the authors propose modeling domain calibration as a cooperative Multi-Agent Reinforcement Learning (MARL) problem. Experimental results demonstrate that Madoc surpasses existing techniques across most tasks in the D4RL and NeoRL benchmarks.
Strengths: The paper addresses the critical research topic of domain calibration, which plays a pivotal role in enabling effective domain transfer of reinforcement learning (RL) methods. The authors present a well-written and organized paper, meticulously explaining their ideas and methodology. The motivation behind their approach is clearly articulated, and the experimental section thoughtfully addresses key questions. The experimental results are particularly compelling, demonstrating significant improvements over baselines on two standard benchmarks (D4RL and NeoRL). One of the paper’s novel contributions lies in its use of Cooperative Multi-Agent Reinforcement Learning (MARL) for adjusting source domain parameters. The authors delve into the specifics of their method, conducting ablation studies to highlight the impact of different components. Overall, I find this paper both interesting and potentially useful for the community.
Weaknesses: *Major comments:*
- Method Complexity and Comparisons with Offline RL:
I noticed that the method appears more complex and computationally intensive compared to offline reinforcement learning (RL) methods like MOREC and CQL. This complexity might impact the fairness of the comparisons in Tables 1 and 2.
Adding discussion on the trade-off between complexity and performance in specific scenarios or domains could improve the paper.
- Surprising Results in Table 2:
I found it interesting that MOREC performs significantly poorly in HalfCheetah-M and Walker2d-H, which differs from the original paper results. It’s possible that differences in experimental settings, hyperparameters, or implementation details contribute to this discrepancy. Accurate baseline implementation is crucial for fair comparisons.
- Smallest Target Task Dataset Size:
The paper’s title suggests tackling domain transfer with a handful of offline data. The method seems to clearly perform well in small dataset regimes. However, the smallest target task dataset still contains 5e4 samples. It’s reasonable to expect results with much smaller data sizes to better support the claim about domain transfer in sensitive contexts.
*Minor comments:*
In Section 4.2, the first paragraph refers to multi-agent (MA) methods depicted by red dots. However, the legend in Figure 2 uses blue for MA (presumably short for Multi-agent).
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the computation cost of training your method compared to other Offline RL methods? Please add a discussion.
- The reward model seems to be a crucial component of this method. How good is the reward model? Is there a way to quantify its accuracy?
- Recently, in the MARL community, independent learning methods have gained traction again due to their surprising performance in some domains [1] and their robustness to large and complex tasks. How do you think independent SAC agents will perform in this problem?
[1] de Witt et al, "Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?"
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Method Complexity and Comparisons with Offline RL.
**Madoc adds more modules compared to CQL, but the model complexity and GPU memory cost remain acceptable.**
We conducted experiments on the hfctah-med-rep task of the D4RL benchmark to evaluate the model complexity of different methods. The results are shown in the table below, with all networks in the form of MLP. Our method achieves a balance between model complexity and performance.
| Method | GPU Memory Cost and Hidden Layers |
| ------ | --------------------------------------------------------------------------------------------------------------------------------------- |
| Madoc | 398MB: 2 * [256, 256] for reward models, 2 * [256, 256] for VAE, 6 * [64, 64] for calibration agents, 2 * [256, 256] for running agents |
| CQL | 286MB: 2 * [256, 256] for running agents |
| MOREC | 1053MB: [128, 256, 128] for dynamics reward function, 40 * [200, 200, 200, 200] for dynamics models, 2 * [256, 256] for running agents |
### Q2: Surprising Results in Table 2.
**The performance of MOREC is not as good as in the original paper because we used a smaller amount of offline data.**
MOREC is sensitive to the quantity of offline data as it initially needs to train a generalizable dynamics reward function from offline data and accordingly learn dynamics models for generating transitions. When the quantity of offline data is insufficient, both the dynamics reward function and the dynamics models tend to underfit. As depicted in Fig. 4(a), when using the same large dataset as MOREC, the performance gap is small. However, as the amount of offline data decreases, the gap becomes increasingly larger. We will provide an explanation of it in the revised manuscript.
### Q3: Smallest Target Task Dataset Size.
**Madoc utilized a sufficiently small amount of offline data.**
We present the smallest target task dataset size of different methods below:
| Method | Size |
| ---------- | ---- |
| Madoc | 5e4 |
| SSR [1] | 2e5 |
| CQL, MOREC | 1e6 |
SSR [1] proposes a data-efficient pipeline for offline RL with limited data and still used 2e5 training samples. This is because the continuous control tasks possess larger state and action spaces compared to discrete RL environments. Thus, the smallest target task dataset of Madoc, which contains only 5e4 samples, can be regarded as using a handful of offline data.
### Q4: What is the computation cost of training your method compared to other Offline RL methods?
**Madoc incurs higher computational costs compared to CQL, but this is acceptable given its performance improvement.**
We conducted experiments on the hfctah-med-rep task to evaluate the computational costs of different methods. The results are shown below:
| Method | Total Time | Seconds per Epoch (1000 epochs) |
| ------ | ---------- | ---------------------------------------------------------------- |
| Madoc | 5 hours | 1.8s for grouping (200 epochs), 14s for calibration, 4s for SAC |
| CQL | 2 hours | 7s for CQL |
| MOREC | 6 hours | 5s for dynamics reward function, 16s for policy training |
The table indicates that Madoc involves three training stages and consumes more computational cost compared to CQL. However, given the significant performance improvement, this additional computational cost is justifiable. We will also include this discussion in the revised appendix.
### Q5: How good is the reward model?
**The reward model can accurately reflect the dynamics gap under different parameters.**
To verify the role of the reward model, we designed an experiment to test whether it can act as a scoring function to evaluate the truthfulness of transitions. As shown **in the right half of the PDF of the "global" response**, we stored the checkpoint of the trained reward model and used it to test the reward results (the reward range should be [-4, 4]) it outputs under different dynamics.
In this simple test, we only changed two parameters: the gravity coefficient and body_mass_1 (the target values are -9.81 and 6.36, respectively). We sampled 256 transitions with the behavior policy under each parameter condition to calculate the mean reward. The results, shown in the heat map, indicate that when the domain parameters are closer to the target parameters, the output reward is higher. This suggests that it can accurately reflect the dynamics gap under different parameters.
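Schematically, this heat-map evaluation can be reproduced with a toy stand-in for the reward model (entirely our own sketch: the scoring function, noise level, and parameter sweeps are invented; only the target values -9.81 and 6.36 come from the text above):

```python
import numpy as np

def mean_reward(gravity, body_mass, n=256, rng=np.random.default_rng(0)):
    """Toy stand-in for the learned reward model: rewards peak at the
    target parameters (-9.81, 6.36) and are clipped to [-4, 4]."""
    noise = 0.1 * rng.standard_normal(n)  # stands in for per-transition variability
    score = 4.0 - abs(gravity + 9.81) - abs(body_mass - 6.36)
    return float(np.clip(score + noise, -4, 4).mean())

# Hypothetical parameter sweeps around the targets; each cell averages
# the reward over n sampled transitions, as in the described test.
gravities = np.linspace(-12, -8, 5)
masses = np.linspace(5, 8, 5)
heatmap = np.array([[mean_reward(g, m) for m in masses] for g in gravities])
```

In this sketch the highest cell of `heatmap` sits at the sweep values closest to the targets, mirroring the qualitative finding that rewards rise as domain parameters approach the target parameters.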
### Q6: Comparison with independent learning methods.
**Independent SAC agents perform worse than Madoc agents in domain calibration.**
Independent SAC (ISAC) treats each agent as an independent individual, optimizing each policy with shared rewards without considering the joint policy. We conducted relevant experiments and the results are shown below:
| Task | Madoc | ISAC |
| -------------- | -------- | --------- |
| hfctah-med | 91.9±7.7 | 81.1±14.7 |
| hfctah-med-rep | 95.7±9.9 | 75.0±13.1 |
| hfctah-med-exp | 96.9±5.3 | 75.2±16.1 |
The results indicate that ISAC performs worse than Madoc in domain calibration. We believe the main reason is that all physics parameters are interrelated and cooperative. The independent learning method ignores the policy changes of other calibration agents, leading to non-stationary problems [2]. IPPO [3] performs well on some SMAC tasks because these tasks do not have high requirements for cooperativeness.
Reference:
[1] Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data. NeurIPS, 2022.
[2] Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning. arXiv, 2019.
[3] Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? arXiv, 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. The responses addressed all of my concerns and integrating them to the paper could improve the paper. I still keep my score and support acceptance as the methodology of the paper is both novel and well communicated in my opinion. | null | null | Rebuttal 1:
Rebuttal: We appreciate valuable comments from all reviewers. We have carefully clarified the ambiguous parts and supplemented our work with additional experiments to address the raised issues. Our revisions can be briefly summarized as follows:
- Method.
- We have further elaborated on the motivation and derivation process of the optimization objective, supplemented the explanations of relevant definitions, and improved the representation to make it easier to understand.
- The supplementary PDF presents the revised version of Fig. 2.
- Experiments.
- We have provided a more detailed explanation of the empirical results and supplemented them with relevant experiments to demonstrate the superiority of our method.
- We have added the comparison results with new baseline algorithms in some scenarios, highlighting the differences and demonstrating the superiority of our method.
- We have presented the comparison of model complexity and computational cost with offline algorithms, reflecting the feasibility of our method.
The supplementary PDF consists of two parts:
- The left half presents the revised version of Fig. 2 from the manuscript, with corrected errors in the legend. Thank you for pointing out this crucial typo, which has helped us improve our work.
- The right half contains the verification of the reward model, demonstrating that it can reflect the dynamics gap under different physics parameters. The complete description can be found in Q5 for Reviewer Cdsr.
We hope that our responses address all the questions and concerns. If we have missed anything, please let us know. We are always willing to resolve any further issues and look forward to the ensuing insightful discussions.
Pdf: /pdf/6354d8255f49038f66c7a7abac82954ee393cd60.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GENOT: Entropic (Gromov) Wasserstein Flow Matching with Applications to Single-Cell Genomics | Accept (poster) | Summary: The paper proposes a multi-variable OT-based framework for solving Single-Cell related problems. The main idea behind the approach is to use the solutions for different variants of discrete entropy-regularize OT problems to learn a continuous parametric plan (measure) given by its conditional distributions, which approximates the true solution to the corresponding continuous problem. The authors propose utilizing conditional Flow Matching to parameterize the learned plan. The methodology is validated on various synthetic and real (single-cell) use cases. The obtained results testify the competitiveness of the framework.
Strengths: Despite the fact that the paper is full of specific technical details on single-cell stuff, the manuscript is clear and pleasant for reading.
The method is clearly introduced. It seems that the authors adequately and fairly exhibit both the advantages of the method - its universality (coverage of different variants of OT formulations), simplicity (both for understanding and I guess for implementation); and limitations of the methodology - good limitation section + discussed reliance on discrete OT.
Also, I would like to commend a good experimental section - a lot of interesting use cases are considered and interesting results (sometimes even negative) are presented. These experiment settings are worth considering regardless of the proposed method.
Weaknesses: * It seems the method could not (or hardly could) be adapted to high-dimensional (e.g., image-data) setups due to reliance on discrete EOT. At least, I expect that the learned plan in such setups will significantly deviate from the true EOT plan. Still, I understand that the proposed framework is targeted on single cell data, which is not that high-dimensional.
* Reliance on Discrete OT is something I personally treat as a weakness. It introduces biases which hardly could be analyzed. Still, in practice, the minibatch OT seems to work well.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The main of my questions is about the utilization of (conditional) flow matching for learning the parametric plan out of discrete minibatches. Why flow matching? As I understand, the proposed framework could be used on par with other generative techniques like GANs/normalizing flows/diffusion models. Why do you restrict yourself by flow matching?
* I found it interesting that GENOT achieves (on average) the best results on EOT benchmark [3] among the competitors. Could the authors provide the code for the benchmark experiment?
* Unbalanced Entropic OT work (to be mentioned): [1]. Also, a recent preprint you may find to be interesting: [2].
* Line 89: Do I understand correctly that $\mu$ and $\nu$ are not necessarily probability distributions, but could be positive measures with the same mass?
* Why does (UQEOT) utilize $\varepsilon \text{KL}^{\otimes}$, while (QEOT) uses just $\varepsilon \text{KL}$ (without $\otimes$)?
* Line 289: What is the “conditional mean” regime of the GENOT model?
* Why do the unfused methods work badly on the single cell modalities translation problem with $\ell_2^2$ intra-costs (Figure 4, FOSCTTM metric is almost everywhere near 0.5)?
Misprints:
- (QEOT) formula: $Q_{c_X, c_Y}$ -> $D_{c_X, c_Y}$
- line 101: complexity if -> complexity is
- line 351: Fig. 3 -> Fig. 4
[1] Pariset et. al., Unbalanced Diffusion Schrödinger Bridge, ICML’23 workshop.
[2] Gazdieva et. al., Light Unbalanced Optimal Transport.
[3] Gushchin et. al., Building the Bridge of Schrödinger: A Continuous Entropic Optimal Transport Benchmark
------------------------
POST REBUTTAL
I thank the authors for the answers provided. I am satisfied with them. I have decided to raise my grade.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Title: Additional Answer to Reviewer GBWj
Comment: > ***What is the “conditional mean” regime of the GENOT model?***
➤ This refers to mapping a point $\mathbf{x}$ to the target domain via the conditional mean, i.e., by averaging multiple samples from the conditional distribution $\pi^\star_\varepsilon(\cdot|\mathbf{x})$ instead of taking a unique one $\mathbf{y}\sim\pi^\star_\varepsilon(\cdot|\mathbf{x})$. This is detailed in lines 134-138. While we would like to highlight that taking the conditional mean has to be taken with a grain of salt (as in most cases, this is not the quantity we are interested in when computing EOT plans), it can be beneficial in some single-cell applications with respect to certain metrics. For instance, in Figure 4 (translating modalities), we see that the FOSCTTM of GENOT-Q CM is lower (hence better) than that of GENOT-Q. Importantly, numerous single-cell applications explicitly model the barycentric projection (i.e., conditional mean), e.g., for translating modalities of cells ([Demetci+2022]), mapping cells to their spatial organization [Nitzan+2019], and aligning slides of spatial transcriptomics [Zeira+2023].
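The distinction between a single stochastic draw and the conditional-mean (barycentric projection) map can be sketched as follows (our own toy illustration; the conditional distribution is a made-up Gaussian, not the learned plan):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_conditional(x, n_samples):
    """Toy stand-in for sampling y ~ pi*(.|x); here a Gaussian centered at 2*x."""
    return 2 * x + 0.1 * rng.standard_normal((n_samples, x.shape[-1]))

x = np.array([1.0, -1.0])
samples = sample_conditional(x, 256)
y_single = samples[0]          # stochastic map: one draw from the conditional
y_mean = samples.mean(axis=0)  # "conditional mean" map (barycentric projection)
```

The averaging collapses the conditional distribution to its mean, which explains both why it can score better on pointwise metrics such as FOSCTTM and why it should be interpreted with care when the conditional distribution itself is the object of interest.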
> ***Why does the unfused methods work badly on the single cell modalities translation problem with l2 intra-costs (Figure 4, FOSCTTM metric is almost everywhere near 0.5)?***
➤ This is an interesting question. It seems that pairwise Euclidean distances are not sufficiently characteristic for cells. The suitability of the cost might depend a lot on the dataset. These observations are also observed in the original publication for using Gromov-Wasserstein for aligning modalities of cells (SCOT, [Demetci+2022], Fig. 6). Unfortunately, we are not aware of any method on how to choose the “best” cost a priori, but our experiments (as well as the experiments shown in Fig.1, pdf) show that the choice of the cost can have an impact on the performance. Note that the poor performance of using the squared Euclidean cost motivated our "extension" of SCOT, i.e. constructing a fused term which empirically helps.
> ***Misprints.***
Thanks a lot for catching these typos. We adapted the relevant sections in the paper.
---
Rebuttal 2:
Rebuttal: > ***The method is clearly introduced [...] These experiment settings are worth considering regardless of the proposed method.***
➤ We thank the reviewer for appreciating our contributions.
> ***It seems the method could not (or hardly could) [...] which is not that high-dimensional.***
➤ We appreciate that the reviewer understands that our method is motivated by and targeted at single-cell data. As mentioned in the general response, we found this question very interesting and applied GENOT to image data. Again, we would like to highlight that we do not intend to include the results in the paper and leave further investigation and optimization of GENOT applied to image data to future work, as i) we believe it does not fit the story line, and ii) while reviewers appreciated the large number of experiments, reviewer GAFl also criticized that such a large number of experiments is in the appendix; hence, adding computer vision examples would hamper readability even more. In addition to what we wrote in the general response, we would like to add the following: We follow the same hyperparameter and architectural setup as the authors in [Eyring+24]. We only added the FiLM condition module and modified the number of iterations to 350k. Moreover, we consider the balanced scenario, because - as mentioned - we did not perform any hyperparameter optimization.
> ***Reliance on Discrete OT is something I personally treat as a weakness. It introduces biases which hardly could be analyzed. Still, in practice, the minibatch OT seems to work well.***
➤ We respect that the reviewer considers the reliance on discrete OT a weakness. We would also like to highlight that by no means do we assume there is no better way to solve some of these tasks without relying on discrete OT. Instead, we would like to encourage the community to build upon our method, as we hope to clearly motivate the challenges faced in single-cell genomics while providing, in some of these cases, a first method which can be benchmarked against. There are clear limitations of our method, which we explicitly discuss in Appendix A or which can be seen directly from the results. Yet, as stated by the reviewer, we also see that in some cases the performance of GENOT is good. We thank the reviewer for appreciating that the results of the experiments justify the usage of discrete OT.
> ***The main of my questions is about the utilization of (conditional) flow matching for [...]. Why do you restrict yourself by flow matching?***
➤ We do agree that our proposed method could be built with any generative model. The choice of flow matching is mainly practical, but it also has some methodological advantages. First and most importantly, training is fast (simulation-free, as opposed to normalizing flows) and stable due to its simple minimization objective (as opposed to GANs). Moreover, we found diffusion models harder to train on single-cell data (worse performance in terms of distributional matching). On the methodological side, flow matching allows for estimating densities (as opposed to GANs), which can be useful in single-cell genomics as demonstrated in Figure 21.
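To illustrate how simple the resulting regression objective is, here is a minimal numpy sketch of the distillation idea (our own toy illustration, not the paper's implementation): compute a minibatch entropic plan with Sinkhorn, sample source/target pairs from it, and regress a velocity field onto the straight-line displacement. The linear velocity model, batch size, $\varepsilon$, and learning rate are all hypothetical choices; GENOT itself uses a neural velocity field.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropic_plan(x, y, eps=0.5, iters=200):
    # Minibatch Sinkhorn plan (squared Euclidean cost, uniform marginals).
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    v = np.ones(len(y))
    for _ in range(iters):
        u = (1 / len(x)) / (K @ v)
        v = (1 / len(y)) / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy 2-D source/target; the velocity field is a linear model W @ [x_t, t, 1]
# purely for illustration.
W = np.zeros((2, 4))
lr = 0.05
for step in range(300):
    x = rng.normal(size=(64, 2))
    y = rng.normal(loc=2.0, size=(64, 2))
    P = entropic_plan(x, y)
    # Sample pairs (x0, x1) from the discrete plan: one target index per source point.
    idx = [rng.choice(len(y), p=row / row.sum()) for row in P]
    x0, x1 = x, y[idx]
    t = rng.random((64, 1))
    xt = (1 - t) * x0 + t * x1                     # straight interpolation path
    feats = np.hstack([xt, t, np.ones((64, 1))])   # features [x_t, t, 1]
    pred = feats @ W.T
    grad = 2 * (pred - (x1 - x0)).T @ feats / 64   # grad of the squared CFM loss
    W -= lr * grad
loss = float(np.mean((feats @ W.T - (x1 - x0)) ** 2))
```

The loop contains no simulation of an ODE or SDE: each step is a single least-squares regression onto sampled displacements, which is what makes training fast and stable.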
> ***I found it interesting that GENOT achieves (on average) the best results on EOT benchmark [3] among the competitors. Could the authors provide the code for the benchmark experiment?***
➤ We provide the code in an **anonymous** Google Drive: [https://drive.google.com/drive/folders/17DUZr_bnTY4gv4nd8szk1h0wJ5NdQjlH?usp=sharing](https://drive.google.com/drive/folders/17DUZr_bnTY4gv4nd8szk1h0wJ5NdQjlH?usp=sharing). The folder `genot_rebuttal` includes a self-contained notebook `benchmark_example_64_1.ipynb` to reproduce an example of the results for the benchmark pair in dimension $d=64$ and $\varepsilon=1$. We also included a file `requirements.txt` containing the package requirements.
> ***Unbalanced Entropic OT work (to be mentioned): [1]. Also, a recent preprint you may find to be interesting: [2].***
➤ We apologize for having missed this work, and now explicitly mention it by replacing the sentence in lines 56-59, "[Eyring+2024], [20], [Lübeck+2023], [Yang+2019] proposed a way to incorporate unbalancedness into deterministic linear OT maps, while unbalanced formulations for entropic OT in both the linear and the quadratic case have not been explored.", with: "[Eyring+2024], [20], [Lübeck+2023], [Yang+2019] proposed a way to incorporate unbalancedness into deterministic linear OT maps; [Yang+2019] extended their method to the entropic setting (but do not provide code), and recent preprints [Pariset+2023], [Gazdieva+2024] also suggested methods to approximate unbalanced linear EOT solutions. To the best of our knowledge, unbalanced neural formulations in quadratic OT have not yet been explored."
> ***Line 89: Do I understand correctly that \mu and \nu are not necessarily probability distributions, but could be positive measures with the same mass?***
➤ In the balanced OT setting, $\mu$ and $\nu$ are taken as probability measures, i.e., their mass is 1. This extends to all measures with arbitrary but equal masses, as they can always be normalized, *with the same renormalization constant*, without losing generality. The unbalanced setting is used for positive $\mu$ and $\nu$ measures with different masses.
> ***Why does one utilize $\mathrm{KL}^\otimes$ in (UQEOT), while in (QEOT) just $\mathrm{KL}$ (without $\otimes$)?***
➤ This is a very good question. For QEOT, we do not need to use $\mathrm{KL}^\otimes$ because, when $\mu$ and $\nu$ have the same total mass, as shown in [Séjourné+2024, Prop 8.], $\mathrm{KL}^\otimes(\pi|\mu\otimes\nu) = 2 \mathrm{KL}(\pi|\mu\otimes\nu)$. Therefore, using $\mathrm{KL}$ is equivalent to using $\mathrm{KL}^\otimes$, with the adjustment of replacing $\varepsilon$ by $\varepsilon/2$.
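The identity behind this answer can be checked numerically for discrete measures. The sketch below (our own illustration, assuming the tensorized divergence is defined as $\mathrm{KL}^\otimes(\pi|\rho) := \mathrm{KL}(\pi\otimes\pi\,|\,\rho\otimes\rho)$) verifies that it equals $2\,\mathrm{KL}(\pi|\rho)$ when $\pi$ and the reference $\mu\otimes\nu$ are probability measures, i.e. the equal-mass case:

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence KL(p | q) = sum p_i log(p_i/q_i) - sum p_i + sum q_i.
    The unnormalized form; it reduces to the usual KL when both sum to 1."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) - p.sum() + q.sum()

rng = np.random.default_rng(0)
# A random coupling pi and the product reference mu (x) nu, both probability measures.
pi = rng.random((5, 5)); pi /= pi.sum()
mu = rng.random(5); mu /= mu.sum()
nu = rng.random(5); nu /= nu.sum()
ref = np.outer(mu, nu)

# Tensorized divergence KL^x(pi | ref) = KL(pi (x) pi | ref (x) ref).
pi_flat, ref_flat = pi.ravel(), ref.ravel()
kl_tensor = kl(np.outer(pi_flat, pi_flat).ravel(),
               np.outer(ref_flat, ref_flat).ravel())

assert np.isclose(kl_tensor, 2 * kl(pi, ref))
```

The factor of 2 comes from $\log(a_i a_j / b_i b_j) = \log(a_i/b_i) + \log(a_j/b_j)$ with unit total mass, which is why replacing $\varepsilon$ by $\varepsilon/2$ makes the two regularizers interchangeable in the balanced case.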
For the remaining comments, we would like to kindly ask the reviewer to read the "Additional Answer to Reviewer GBWj" added.
---
Rebuttal 3:
Title: Thanks for the answers
Comment: I thank the authors for the provided answers. I have decided to raise the score (up to 7)
---
Rebuttal Comment 3.1:
Title: Many thanks for reading our rebuttal.
Comment: We are very grateful for your score increase, and we are happy to answer any further question on our work.
The Authors | Summary: The paper proposes a framework for realigning cells in single-cell genomics which lies in the field of neural Optimal Transport (OT) solvers. The method utilizes a generative flow-based model for computing entropic OT couplings and tackles several practical challenges, e.g., it allows for using arbitrary cost functions, learning stochastic transport plans, relaxing mass conservation constraint, and tackling challenges of the (Fused) Gromov-Wasserstein problem. Specifically, the method consists of training of the conditional flow matching model on top of estimated discrete entropic OT plans. The approach is tested in different tasks from the single-cell biology field.
Strengths: The paper proposes a flexible framework which allows for addressing different challenges of application of neural OT solvers in the single cell genomics field. It is shown that the approach provides good results in several tasks from the single cell genomics according to the metrics under consideration. The paper is well-written and structured.
Weaknesses: My major concern is that the provided approach is not guaranteed to learn the ground-truth plans. Most of the experimental details and theoretical results related to this topic are moved to Appendix which hampers the understanding of this question.
Indeed, GENOT is based on the idea of distilling discrete entropic OT (EOT) solutions using flow matching (which is close to, but, as the authors explain, different from the idea in work [1]). It raises a question: how well does such a distillation of discrete plans approximate the ground-truth continuous ones (assuming that the approach works with batches of finite size)? Some explanations regarding the mini-batch biases are provided in lines 267-280. However, as far as I see it, the provided discussion (and references) do not cover the statistical properties of the discrete *unbalanced* linear or *quadratic plans*. The situation with the quadratic setup is the most tricky here, since it is known that the GW problem admits multiple solutions and, thus, the optimization problem may differ at each step of training depending on the calculated discrete entropic GW plans.
From the empirical side, the authors address the ‘biasedness’ issue by testing their approach using a benchmark with analytically known EOT [2], U-EOT [3], and GW/Unbalanced GW [4] plans (all in Appendix E). According to these experiments, I tend to agree that the proposed approach provides meaningful results for the balanced (or unbalanced) linear case. However, the situation with the GW case is not clarified – according to the experiment description (Appendix E.3), the authors introduce some heuristics to “preserve the orientation of GW solution” and claim that in the case of biological tasks it is not needed. (Still, as a non-expert in biology, I can not assess the validity of this claim.)
**In summary**, there are some evident issues with the implementation of the proposed approach in the Gromov-Wasserstein case. I appreciate that the authors mentioned this issue in the limitations section of their work (Appendix A). However, it is not obvious for me to what extent this is a serious drawback hampering the applicability of the approach to biological tasks. Besides, I think that as an important aspect of the implemented approach, this issue should be mentioned in the main body of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the related work section you cited several works on unbalanced EOT [5,6,7]. Why did you include only one of them [7] in comparison in Gaussians experiment?
- In Table 1-3 you use abbreviations DBSM and CFM-SM which are not introduced in the text. Please introduce them in the relevant section.
- Could you explain in more details why the issues of your implementation in the GW case will not arise in biological tasks? I see your explanation in lines 1345-1346, however, it seems that the issue might arise not only in the case of highly symmetric distributions
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work in a specific section of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Comment: > ***The situation with quadratic setup is the most tricky here, since it is known that GW problem admits multiple solutions and, thus, the optimization problem may differ at each step of training depending on the calculated discrete entropic GW plans.***
➤ We agree that the quadratic setup is the most tricky one, and we thank the reviewer for explicitly mentioning this issue again. As mentioned in the general response, we set out to investigate this further (Alg. 1, pdf), and results can be found in Fig. 2, pdf.
We consider two datasets:
- 30-dim. Gaussians (20 dim. quadratic + 10 dim. fused term): for both the source ($\mu$) and the target ($\nu$) distribution we sample mean vectors from $\mathcal{U}([-1, 1])$ and construct the covariance matrices by sampling each entry of their square roots from $\mathcal{U}([0, 3])$.
- 30-dim. single-cell data (the one used for modality translation, but no train/test split): 20-dimensional PCA of the normalized ATAC data, 20-dimensional PCA projection of the gene expression data, and 10-dimensional PCA of the VAE space (shared space).
To assess the stability of the solution of discrete EOT, we fix one sample $X_0 \sim \mu$ from the source distribution and compute the variance of $\hat{\pi}(\cdot|X_0)[10:]$, i.e. of the incomparable space $\tilde{\mathcal{Y}}$ only, where $\hat{\pi}$ is obtained from a discrete EOT solver with different samples in source and target. We do this for $\varepsilon=0.01$ and $\varepsilon=0.0001$ with the squared Euclidean cost. We consider:
- outer coupling: this serves as orientation.
- Gromov-Wasserstein (GW): We use $\tilde{\mathcal{X}}$ and $\tilde{\mathcal{Y}}$, i.e., we use the last 20 dimensions of both the source and the target distribution.
- Fused Gromov-Wasserstein (FGW): We use $\Omega \times \tilde{\mathcal{X}}$ and $\Omega \times \tilde{\mathcal{Y}}$, $\alpha=\frac{1}{11}$.
- Gromov-Wasserstein with geodesic cost (Geodesic): We use $\tilde{\mathcal{X}}$ and $\tilde{\mathcal{Y}}$ with the geodesic cost.
- Gromov-Wasserstein with initialisation (with init): We use $\tilde{\mathcal{X}}$ and $\tilde{\mathcal{Y}}$. We use the initialisation scheme as discussed in paper (Appendix A, E).
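A linear-EOT simplification of this stability check (Alg. 1, pdf) can be sketched as follows. The Sinkhorn solver, the 2-D toy data, and the use of the conditional mean as a stability proxy are our own illustrative choices here (the actual experiments use quadratic/fused solvers and the variance of $\hat{\pi}(\cdot|X_0)$ directly):

```python
import numpy as np

def sinkhorn_plan(x, y, eps, n_iter=200):
    """Entropic OT plan between uniform empirical measures (squared Euclidean cost)."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    a = np.full(len(x), 1 / len(x)); b = np.full(len(y), 1 / len(y))
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x0 = rng.normal(size=2)           # the fixed source sample X0
cond_means = []
for _ in range(20):               # resample source/target batches, keep X0 fixed
    xs = np.vstack([x0, rng.normal(size=(63, 2))])
    ys = rng.normal(loc=1.0, size=(64, 2))
    P = sinkhorn_plan(xs, ys, eps=0.5)
    cond = P[0] / P[0].sum()      # conditional distribution pi(. | X0)
    cond_means.append(cond @ ys)  # barycentric projection of X0
spread = np.var(np.stack(cond_means), axis=0).sum()
print(f"variance of conditional mean across batches: {spread:.4f}")
```

In the linear case the conditional stays stable across resampled batches; for a pure GW problem the analogous quantity can blow up when the solver alternates between symmetric optima, which is exactly what the fused term and the initialisation scheme are meant to suppress.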
> ***However, the situation with the GW case is not clarified – according to the experiment description (Appendix E.3), the authors introduce some heuristics to “preserve the orientation of GW solution”***
➤ We visualise what we mean by "preservation of the orientation" of the GW solution in Fig. 1 of the provided pdf, where we transport a 1-dimensional Gaussian to a 1-dimensional Gaussian. The LHS of Fig. 1 shows the result obtained without initialisation, where the learnt coupling is clearly a mixture of solutions, while the RHS of Fig. 1 shows that GENOT-Q learns a coupling which appears not to be a mixture of solutions. While this is only a 1-dimensional example, the results in Fig. 2, pdf, highlight that the proposed initialisation scheme stabilises the orientation of the GW solution also in high dimensions, for both the Gaussian and the single-cell experiments.
> ***[...] in the case of biological tasks it is not needed. (Still, as a non-expert in biology, I can not assess the validity of this claim.)***
➤ We thank the reviewer for pointing this out, apologize for this bold statement, and would like to correct ourselves and weaken the statement to: "In the case of single-cell data, choosing the right cost might help to overcome the need for the initialisation scheme to a certain degree.", as can be seen from Fig. 2, pdf, where the geodesic cost seems to help make the solution more unique with $\varepsilon=0.0001$, as well as from the results in Fig. 4, manuscript, LHS. Moreover, we will add the sentence "Additionally, most single-cell applications are an FGW problem (as opposed to a GW problem), and empirically, the fused term helps to make the set of solutions of the EOT coupling less diverse, as can be seen from Fig. 5, Fig. 19, and [provided pdf, Fig. 2].". Indeed, the only pure GW problem in common single-cell applications we are aware of is SCOT [Demetci+22]. As we realised the performance of both SCOT and GENOT-Q is poor, we introduced a novel way to include a fused term into the task of modality translation by constructing a joint space, i.e. a joint conditional VAE embedding from gene activity (source space) and gene expression (target space).
> ***However, it is not obvious for me to what extent this is a serious drawback hampering the applicability of the approach to biological tasks.***
➤ We hope to have clarified this with the additional experiments and the answers above.
Title: Additional Answer to Reviewer NoBU 1/2
---
Rebuttal 2:
Comment: > ***Besides, I think that as an important aspect of the implemented approach, this issue should be mentioned in the main body of the paper.***
➤ We fully agree with the reviewer, and will definitely include the discussion mentioned above in the main body of the paper, highlighting this limitation. Moreover, we will include the results on the stability of quadratic EOT couplings as outlined in the provided pdf document.
> ***In the related work section you cited several works on unbalanced EOT [5,6,7]. Why did you include only one of them [7] in comparison in Gaussians experiment?***
➤ We assume the reviewer is alluding to lines 56-59: "[Eyring+2024], [Lübeck+2023], [Yang+2019] proposed a way to incorporate unbalancedness into deterministic linear OT maps, while unbalanced formulations for entropic OT in both the linear and the quadratic case have not been explored." In fact, [Eyring+2024], [Lübeck+2023] are unbalanced Monge map (i.e. deterministic) estimators, and hence do not solve EOT. [Yang+2019] suggest a solver for both deterministic and entropic OT couplings, but their implementation only allows for the deterministic OT coupling. We now clarify this, and also include references to other preprints (thanks to reviewer GBWj), by adapting the statement in lines 56-59 to: "[Eyring+2024], [Lübeck+2023], [Yang+2019] proposed a way to incorporate unbalancedness into deterministic linear OT maps; [Yang+2019] extended their method to the entropic setting (but do not provide code), and recent preprints [Pariset+2023], [Gazdieva+2024] also suggested methods to approximate unbalanced linear EOT solutions. To the best of our knowledge, unbalanced neural formulations in quadratic OT have not yet been explored."
> ***In Table 1-3 you use abbreviations DBSM and CFM-SM which are not introduced in the text. Please introduce them in the relevant section.***
➤ We thank the reviewer for pointing us to this. We will add all references to Table 1.
> ***Could you explain in more details why the issues of your implementation in the GW case will not arise in biological tasks? I see your explanation in lines 1345-1346, however, it seems that the issue might arise not only in the case of highly symmetric distributions***
➤ As mentioned above, this has two main reasons: first, empirically, the geodesic cost seems to overcome this limitation to a certain extent (see the superior performance in Fig. 4, paper, and the lower conditional variance in Fig. 2, pdf). Second, most single-cell applications can be extended to a fused problem. Yet, we apologize for our formulation that this issue only arises in highly symmetric distributions, and will correct the statement to: "Empirically, both changing the cost and including a fused term can help to account for this limitation. If neither of these approaches works, GENOT can be run with its quadratic initialisation scheme, which empirically helps to make the set of EOT solutions less diverse, at the cost of losing simulation-free training."
Title: Additional Answer to Reviewer NoBU 2/2
---
Rebuttal 3:
Rebuttal: > ***It is shown that the approach provides good results in several tasks from the single cell genomics according to the metrics under consideration. The paper is well-written and structured.***
➤ We thank the reviewer for the positive feedback.
> ***My major concern is that the provided approach is not guaranteed to learn the ground-truth plans.***
➤ We agree with the reviewer that distilling mini-batch coupling introduces a bias, which means that we can, in theory, only learn the true coupling in the asymptotic setting of infinite batch size. We discussed this aspect in detail in lines 267-280. On the other hand, it's important to remember that we are not aiming to learn a deterministic Monge map or a Benamou-Brenier velocity field. Instead, we operate in the entropic regime with $\varepsilon \gg 0$. In this regime, the estimation rates are parametric in $\varepsilon$, helping to avoid the curse of dimensionality and significantly reducing the bias introduced compared to estimating deterministic maps (see the discussion related to statistical estimation below.)
Moreover, solving a continuous OT problem in practice is very challenging, and most neural OT techniques introduce either a bias or a significant computational challenge into the learning process. In this paper, we have developed a method that is easy to train, even though it introduces a bias. This approach proved effective, as GENOT outperforms 12 baselines on a recent benchmark [Gushchin+2023], where we estimate a ground truth linear EOT known in closed form (see Tables 1-2).
➤ Finally, we want to emphasize that, to the best of our knowledge, there is only one other (unpublished) neural OT solver targeting quadratic OT, which comes with a number of limitations, especially for single-cell genomics. As stated in lines 62-64 of the manuscript: *"To our knowledge, the only neural formulation for GW proposed so far by [57] learns deterministic and balanced maps for inner product costs, using a min-max-min optimization procedure, which severely limits its application in single-cell genomics."* In general, solving a quadratic OT problem is extremely challenging. The only case where we can satisfactorily approximate GW couplings is in the discrete setting, using [Peyré+2016]'s scheme. Therefore, with GENOT, we aimed to develop a method that builds on these discrete GW couplings.
All in all, we would also like to highlight that by no means do we assume there is no better way to solve some of these tasks without relying on discrete OT. Instead, we would like to encourage the community to build upon our method, as we hope to clearly motivate the challenges faced in single-cell genomics while providing, in some of these cases, a first method which can be benchmarked against.
> ***Most of the experimental details and theoretical results related to this topic are moved to the Appendix, which hampers the understanding of this question.***
➤ We agree with the reviewer that it is not ideal that experimental details and theoretical results are moved to the appendix. As the development of GENOT arose from the need for flexible neural OT estimators in single-cell genomics, we decided to highlight the applied point of view. We admit that the readability of the paper would benefit from having e.g. more experiments or propositions in the main body. Yet, as we hope this paper will motivate single-cell biologists to build upon these methods for analyzing their data, we prioritised the application, but we still hope to include a few more experimental details and/or theoretical results in the final version of the manuscript.
> ***How well does such a kind of distillation of discrete plans approximate the ground-truth continuous ones? [...] Some explanations regarding the mini-batch biases are provided in lines 267-280. However, as far as I see it, the provided discussion (and references) do not cover the statistical properties of the discrete unbalanced linear or quadratic plans.***
➤ This is a very good question. Most papers dealing with the estimation of EOT objects provide estimation rates for (Gromov)-Wasserstein distances. As stated in lines 267-281:
- In the linear EOT setting, both balanced [Genevay+2019] and unbalanced [Séjourné+2021] settings achieve a parametric rate in $\varepsilon$ dodging the curse of dimensionality.
- In the quadratic EOT setting, for the balanced case, [Zhang+2023] demonstrate that a similar parametric rate in $\varepsilon$ can also be achieved. They prove this result by linearization of the quadratic EOT problem. Specifically, this rate depends on the minimum dimension of the source and target domains, indicating that estimation is easier when one of the domains is low-dimensional.
There is only one very recent paper [89] that provides a rate estimate for the ground truth coupling $\pi^\star_\varepsilon$ by the empirical coupling $\hat{\pi}^n_\varepsilon$ in the balanced linear EOT setting. This paper shows the same type of parametric rate in $\varepsilon$, mitigating the curse of dimensionality.
Extending this result to (i) the unbalanced EOT setting and (ii) the quadratic EOT setting is a very exciting perspective. For (i), since we have shown that an unbalanced EOT coupling is essentially a balanced EOT coupling between re-scaled measures (see Prop B.1), this result seems intuitively extendable. For (ii), the extension could be achieved by linearizing the quadratic EOT problem, similar to the approach used by [Zhang+2023] for costs.
For the remaining comments, we would like to kindly ask the reviewer to read the "Additional Answer to Reviewer NoBU 1/2" and "Additional Answer to Reviewer NoBU 2/2" added.
---
Rebuttal Comment 3.1:
Comment: I thank the authors for providing the answers to my concerns and raise my score to 6.
---
Reply to Comment 3.1.1:
Title: Thanks for the response to our rebuttal
Comment: We are glad we could address all concerns and are more than happy to address any further questions you may have about our work.
The Authors | Summary: The authors present Generative Entropic Neural Optimal Transport (GENOT), a flexible method for learning entropic couplings for linear and quadratic entropic optimal transport (EOT), unbalanced OT, and OT across incomparable spaces. At its core, their method uses conditional flow matching (CFM) to solve for the OT couplings. Through extensive empirical experiments, the authors show the use-case of their method for learning couplings of cell trajectories, predicting single-cell response to perturbations, and translating between single-cell data modalities.
Strengths: The authors present a promising and comprehensive set of empirical results that support the applicability of their method for OT-related problems in single-cell biology. They demonstrate a novel use case of learning OT couplings between different data modalities in single-cell biology, i.e. from RNA-seq to ATAC-seq. Their proposed method is also supported with sound theoretical background and justification. In general, the paper is well motivated and addresses important and challenging problems in single-cell biology.
Weaknesses: Overall, this work has all the elements that constructs a complete and thorough application-based contribution to the field of neural optimal transport and single-cell biology. However, there remain several items that I feel hinder the understanding and impact of this work:
- It seems to me that a key novelty of this work is the use of quadratic OT for learning couplings between incomparable spaces -- i.e. from ATAC-seq to RNA-seq for the single-cell case considered in this work. The authors consider a large quantity of experiments for both linear and quadratic OT over various settings, with many of the experiments being reported in the appendix. This makes some of the results and discussion hard to follow, overall diluting the presentation of the key items of this paper. Given the large quantity of experiments and dense content of this paper, maybe it would be beneficial to shift more focus onto these aforementioned experiments and move others to the appendix. This may help the reader better understand some of the key contributions of this work.
- A large amount of items that are placed in the appendix are referenced in the main text and seem to be important. This in general disrupts the overall fluidity of the paper. For instance, propositions in section 3 are listed in the appendix. Moreover, some experiments discussed in the main text only reference figures that are in the appendix. For example, experiments in section 5.1 "GENOT-L on simulated data" and "U-GENOT-L predicts single-cell responses to perturbations". In general, there seem to be a large proportion of experimental results that are placed in the appendix and referenced in section 5, which seem to be given equal weight to the experiments included in the main text. This makes it difficult to decipher the significance/importance of the respective results. As mentioned in my previous comment, possibly focussing the paper slightly may be beneficial to the overall presentation of this work.
Technical Quality: 3
Clarity: 3
Questions for Authors: - For the experiment on single-cell response to perturbations, it appears there are no comparisons to baseline methods? Is the reason for this because existing baseline methods are deterministic? Is there intuition on why the stochastic approach is helpful (or necessary) for this task?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss limitations and broader impacts in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > ***The authors present promising and comprehensive set of empirical results [...] and addresses important and challenging problems in single-cell biology.***
➤ We thank the reviewer for the positive feedback.
> ***It seems to me that a key novelty of this work is the use of quadratic OT for learning couplings between incomparable spaces [...] This may help the reader better understand some of the key contributions of this work.***
➤ We are grateful to the reviewer for providing this constructive criticism. We acknowledge that the paper includes a large number of experiments, which could hinder the reading flow. We decided to focus on the motivation of this work, which is the need for widely applicable and flexible neural OT estimators in single-cell genomics. In particular, we tried to motivate each of the 4 necessities **N1**, **N2**, **N3**, and **N4** (see lines 67-70 of the submission) as incrementally as possible. Yet, we will try to include more experiments on the fused Gromov-Wasserstein case, which is indeed the major novelty, in the final version of the manuscript.
Additionally, it is important to note that GENOT satisfying **N2**—approximating EOT coupling for **any cost function**—is a significant novelty. As detailed in lines 156-161 and illustrated in Figure 5, previous simulation-free linear EOT solvers, such as [Shi+2023], [Liu+2023], [Pooladian+2023], and [Tong+2024;a,b], are limited to the squared Euclidean cost $c(\mathbf{x},\mathbf{y}) = \|\mathbf{x}-\mathbf{y}\|_2^2$. This flexibility is crucial. We have demonstrated that in both linear (Figure 2) and quadratic EOT settings (Figure 4), using data-driven cost functions that approximate the geodesic distance on the data manifold yields more meaningful results from a biological perspective.
> ***A large amount of items that are placed in the appendix are referenced in the main text and seem to be important. [...] As mentioned in my previous comment, possibly focussing the paper slightly may be beneficial to the overall presentation of this work.***
➤ We agree with the reviewer that the large number of references in the main text to the appendix might hinder readability. As mentioned above, we intended to highlight the need for flexible neural OT estimators in single-cell genomics, and thus prioritized the motivation from a single-cell point of view. We admit that the readability of the paper would benefit from having e.g. more experiments or propositions in the main body. Yet, as we hope this paper will motivate single-cell biologists to build upon these methods for analyzing their data, we prioritized the applied point of view.
> ***For the experiment on single-cell response to perturbations, it appears there are no comparisons to baseline methods? Is the reason for this because existing baseline methods are deterministic?***
➤ This experiment serves as a motivating example for the need of neural unbalanced EOT estimators. The reason for not having included baselines in the perturbation prediction task is the lack of unbalanced entropic OT estimators. In fact, [Yang+2019] proposes a neural OT estimator in both a stochastic and a deterministic version, but their implementation only allows for the deterministic version. We decided to still include it in the benchmark of learning an unbalanced EOT plan between Gaussians (Figure 11) to demonstrate that learning an unbalanced EOT coupling is not trivial and cannot be replaced by estimators learning an unbalanced deterministic Monge Map.
> ***Is there intuition on why the stochastic approach is helpful (or necessary) for this task?***
➤ As motivated in the introduction (line 45: “[...] cells evolve stochastically […]”), cells evolve stochastically rather than deterministically, and hence obtaining stochastic predictions is relevant to model trajectories of cells. For example, stochastic evolution is relevant in developmental single-cell data (measured across different time points) where a homogeneous progenitor population (in the most extreme case, a single fertilized egg cell) stochastically proliferates into more mature cells. This is the motivation for the use case we consider in Figure 2.
Another source of stochasticity can be introduced from technical/experimental errors and biases, as we state in line 325: “imbalances might occur due to biases in the experimental setup or due to cell death”. The topic of uncertainty estimation is prevalent in single-cell genomics. For example, [Laehnemann+2020], state in “Eleven grand challenges in single-cell data science”: “Optimally, sc-seq analysis tools would accurately quantify all uncertainties arising from experimental errors and biases.” In particular, the perturbation data we consider e.g. in Figure 3, LHS, measures cells before and after perturbation. While in such experimental setups, it is common to perturb the same number of cells with each drug, the final data rarely has the same number of cells for each condition due to experimental errors. This is what motivates the calibration score in Figure 13, where we showed that cells with a high variance are mapped incorrectly (see Appendix C.2 for the description of the metric), resulting in a good calibration score.
[Laehnemann+2020] Laehnemann et al., Eleven grand challenges in single-cell data science, 2020
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response and answers to my questions. I will keep my positive score.
---
Reply to Comment 1.1.1:
Title: Thanks for the response to our rebuttal
Comment: We appreciate the positive feedback and are more than happy to address any further questions.
The Authors | null | null | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their encouraging feedback, constructive criticism, and thoughtful comments, as well as for pointing out typos.
In response to questions about the uniqueness of quadratic OT solutions raised by reviewer NoBU, we included an analysis of the stability of discrete OT solutions (see pdf document). We perform experiments on both simulated Gaussian data and single-cell data (from Fig. 4). In both cases the data has 20 dimensions (+10 dim. for fused exp.). The empirical analysis (Alg. 1, pdf) suggests:
1. Including a fused term in GW makes the solution more stable in both datasets (Fig. 2, pdf). Intuitively, the addition of this fused penalty on extra features $\mathbf{u},\mathbf{v}$ enables the "selection" of an optimal GW coupling. Essentially, we "trade off a bit of GW optimality" by choosing a coupling that minimizes the distortion of structural information $|c_\mathcal{X}(\mathbf{x}, \mathbf{x}') - c_\mathcal{Y}(\mathbf{y}, \mathbf{y}')|^2$ **while also minimizing** the cost on features $c(\mathbf{u}, \mathbf{v})$. Empirically, this mitigates the problem of the non-uniqueness of (pure) GW coupling to a large extent, making our procedure more stable.
2. The geodesic cost makes the solution more stable for sufficiently small $\varepsilon$ in single-cell experiments, but barely in Gaussian experiments (Fig. 2, pdf)
3. The initialization scheme (App. A/E) for quadratic OT solvers makes the solution more stable (Fig. 2, pdf)
We intend to include these results in the final version of the manuscript.
For experimental details and a more detailed discussion, we refer the reader to the responses to reviewer NoBU.
While we thank reviewer GBWj (as well as reviewers GAfL and NoBU) for appreciating that GENOT is built for single-cell data, reviewer GBWj expressed skepticism about the applicability of GENOT to high-dimensional data like image data. We agree with reviewer GBWj that GENOT is built for single-cell data specifically, but we found the question interesting and thus conducted experiments on image data. Importantly, we would like to highlight that we do **not intend to include computer vision experiments in the final manuscript**, as in accordance with all reviewers, GENOT is built for single-cell data. We set out to apply GENOT-L to the common task of image translation on the CelebA dataset, translating females to males. We leveraged the flexibility of using any cost function and used CLIP [Radford+21] embeddings for both the discrete matching and the conditioning (together with FiLM [Perez+17]).
While FID scores and examples of translated images can be found in the provided pdf, we made the following observations:
1. In terms of FID, GENOT-L performs comparably, but slightly worse than other flow-matching based methods (Table 1, pdf)
2. Visually, the generated images look realistic, but the coupling information is not always well preserved (Fig. 3, 4, pdf), as predicted by reviewer GBWj.
[Liu+14] Liu et al. “Deep Learning Face Attributes in the Wild.”, 2014
[Perez+17] Perez et al. “FiLM: Visual Reasoning with a General Conditioning Layer.”, 2017
[Eyring+24] Eyring et al., "Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation", 2024
[Radford+21] Radford et al., “Learning Transferable Visual Models From Natural Language Supervision” 2021
Pdf: /pdf/94c50aa278834a8827bca53ee563bc94aae6f20f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpretable Image Classification with Adaptive Prototype-based Vision Transformers | Accept (poster) | Summary: The authors present a novel method for interpretable image classification by incorporating a vision transformer (ViT) into the prototypical neural network framework, which provides case-based reasoning for neural network based image classifiers. They claim that most existing prototypical methods are convolutional neural network (CNN)-based, and those methods are limited by spatially rigid prototypes, thus failing to handle geometric variations of objects. Existing methods that try to handle this geometric variation either rely on a continuous latent space, which is not compatible with ViTs, or are other prototype-based ViTs that fail to provide inherently interpretable explanations. Due to these existing problems, the authors present ProtoViT with the following contributions:
Incorporates a ViT backbone that can adaptively learn interpretable prototypes that can handle geometric variation of different sizes.
They achieve the above with a greedy matching algorithm utilizing an adjacency mask and an adaptive slots mechanism.
They give empirical evaluation showing SOTA accuracy and a qualitative analysis showing the faithfulness and coherence of the learned prototype representations.
Strengths: Soundness:
The methods are clean and sound with ample ablation experiments, and it appears their approach can perform better than others (marginally) and have interpretable and coherent prototypes.
Presentation:
The paper was easy to follow with clear claims, ideas and methods. While some of the figures could be a bit cleaner (such as the boundaries and borders in Figure 4), and some notation seemed a bit odd, they communicated their ideas/methods well.
Contribution:
The paper gives a clean method for utilizing ViTs in the prototypical framework of deep learning for interpretability and even incorporates existing methods for making prototypes more flexible by utilizing the approach from Deformable ProtoPNet. They incorporate a novel coherence loss that encourages sub-prototypes to be similar to each other. In addition, this paper utilizes a greedy matching algorithm with an adaptive mask to learn geometrically local sub-prototypes. This method also allows for an adaptive number of sub-prototypes through the slot pruning mechanism.
Weaknesses: Soundness:
The lack of qualitative comparison with other ViT methods. I know the authors state that these other vision transformer methods do not project the learned prototypical features to the closest latent patches, but they still provide explanations. Could more be expanded on this and/or a figure showing this lack of reasoning/inherent interpretability?
- This is my biggest concern
Presentation:
In section 3.4, under “Optimization of last layers,” did you mean “... l-th class prototypes…” with a plural on “prototypes”? This was unclear to me.
Contribution:
However, due to already preexisting ViT prototype methods (and the lack of comparison to them), the contribution this ViT makes compared to others is unclear.
Technical Quality: 2
Clarity: 3
Questions for Authors: Examples in the paper only show good outcomes. What do incorrect predictions look like, and does the explanation (prototypes) show a reason for why the model was incorrect?
Do the authors have additional comments about potential information leakage between image patches in the attention step occurring in the activation of the prototypes? I ask this in context of a latent patch containing more information regarding other patches around it contributing to the learned prototype. When we project on this prototype, we look at its spatial position and project to the real image, but it may not tell the whole story.
When determining the next sub-prototype, why do you only consider the last sub-prototype when creating the adaptive mask given radius r? Why not all the currently selected sub-prototypes? Wouldn’t you get more coverage and cohesive information if considering them all?
Why can’t prototypes be shared across classes? While I still think the prototypes learned are good, I think a limitation that wasn’t stated is the lack of across class prototype sharing.
I feel like the coherence loss could limit your overall prototype representation. If all sub-prototypes have to be similar, then wouldn’t that hurt a prototype representation that has distinct parts. I’m thinking of a potential example of three differently colored stripes being next to each other. If the sub-prototypes are each of different colors, wouldn’t they be very different; thus discarded due to this loss? This is even shown in E.3. I know that is where the adaptive mask could combat this problem, but I would like more discussion on this.
I’m curious how sensitive your method is to masking / perturbation. For example, if you mask out some or all of the sub-prototypes for the top activated area in the test image, can the model still find the prototype elsewhere if it still exists in the image? A particular case I’m interested in is if a prototype is about the blue on a bird’s feathers, so you mask the part that the model initially detected. Does the model still pick up the blue elsewhere?
Could you expand more on why you “believe that simply adding prototype layers to an architecture without well defined “cases” does not make the new architecture more interpretable.” In particular, what do you mean by ‘well defined cases’?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The method lacks across-class prototype sharing.
The coherence loss lacks discussion of its limitation: making sub-prototypes similar may hinder the diverse representations that can be important in a prototype.
The paper lacks comparison to other interpretable ViT methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and comments. We are happy that you find our method sound and our experiments ample. We address your comments below:
> The lack of qualitative comparison with other ViT methods..
We attempted to compare our visualizations to those from ProtoPFormer [1], but are unable to show any reasoning from ProtoPFormer, since they do not have explicit prototypes and their code does not include functionality to perform analysis on their prototypes. From their code, we were only able to extract prototype-wise activation patterns on a given test image and the masked out version of the test image from the global branch. We could not find any way to visualize the prototypes yielding these activation maps, which is likely because the prototypes are not projected.
We also attempted to visualize ViT-Net [2], but found that they have not provided any visualization code. We reached out to the authors requesting such code, but have not yet received any response. We are happy to discuss this point further during the discussion period if desired. We would like to emphasize that the other prototype-based ViTs are not able to address geometric variations. Those methods incorporate non-deformable CNN layers after the ViT encoder. Unfortunately, those methods do not project prototypes to the closest latent patches and thus have no visualizations. Moreover, even if they had a visualization of their prototype, it would be similar to ProtoPNet which we directly compared in Fig. 1. In short, we agree that such visual comparisons would be useful, but as discussed above, this is unfortunately not possible. We will however add parts of the discussion above to our revised paper. We hope this addresses your concerns.
> In the section 3.4 for “Optimization..
We agree that the current phrasing is confusing – we will change it to “... associated with class b for each prototype from class l”.
> wrong prediction examples
Thank you for the great suggestion. We have added some examples of incorrect reasoning to the shared response. Interestingly, we found that in some cases where the algorithm appeared to have gotten it wrong, the dataset was actually mislabeled and the algorithm was correct. We present examples of this in the shared response and will add them to our revised paper.
> .. why only consider the last sub-prototype..
Considering the neighbors of only the most recently selected part helps encourage prototypical parts to have a consistent relationship – i.e., the third part must always be adjacent to the second. If a prototype represents a bird’s ankle, it should always be true that the leg part is adjacent to the foot part, which is adjacent to the toe part.
Moreover, we have found that further restricting where sub-prototypes activate improved the semantic consistency of prototypes. The extreme case of this – removing adjacency masking entirely – is shown in Appendix Fig 8, where a prototype without any masking confuses a bird’s beak with another’s feet. As such, we aimed to keep adjacency masks tight by selecting only neighboring cells to the most recently activated sub-prototype.
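As a rough illustration of the constraint discussed above, here is a hypothetical sketch of greedy sub-prototype matching with an adjacency mask restricted to the neighborhood of the most recently selected part. The function name, flattened grid layout, and similarity input are assumptions for illustration only, not the paper's actual implementation:

```python
import numpy as np

def greedy_match(sim, grid_w, k, r=1):
    """Greedily pick k patch indices from a similarity vector `sim`
    (one score per patch on a grid of width grid_w). Each pick after
    the first is restricted to patches within Chebyshev radius r of
    the *last* pick, so consecutive sub-prototypes stay adjacent.
    Hypothetical sketch of the adjacency-masking idea."""
    chosen = [int(np.argmax(sim))]  # first sub-prototype: best patch overall
    for _ in range(k - 1):
        ly, lx = divmod(chosen[-1], grid_w)
        masked = np.full_like(sim, -np.inf)
        for idx in range(sim.size):
            y, x = divmod(idx, grid_w)
            # adjacency mask: only unchosen patches near the last pick
            if max(abs(y - ly), abs(x - lx)) <= r and idx not in chosen:
                masked[idx] = sim[idx]
        chosen.append(int(np.argmax(masked)))
    return chosen
```

Keying the mask to only the last selection, as described in the rebuttal, keeps the relationship between consecutive parts consistent (e.g., the third part is always adjacent to the second), whereas masking around all previously chosen parts would allow looser, less consistent layouts.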
> Information Leakage:
This is another great question. Due to the character limit, please refer to the global response and the response to reviewer **GC9W**. We have used empirical evaluation to show that the theoretical possibility of information leakage does not occur in our model.
> shared across classes?
Prototypes could indeed be shared across classes if desired – ProtoViT is compatible with methods like ProtoPool [3] and ProtoPshare [4]. For the purposes of this work, we used a simple linear layer between prototype activations and class predictions for ease of training. We wanted to highlight the impact of the novel elements of this work without combining too many features from the literature, in order to keep our contribution clear.
Thanks again for the question, we will clarify this in our revised paper.
> the coherence loss could limit overall prototype representation.
Coherence loss only encourages the sub-prototypes to be semantically similar so that each prototype will represent only one semantic concept. This makes prototypes simpler to understand, since they represent a single, intuitive concept.
It is important to note that this coherence is enforced **in the latent space**. The sub-prototypes within a prototype do not need to be visually identical; they simply have to be semantically similar. If the three stripes described are really important for identifying the species, we might expect them to be encoded close to each other in the latent space, and thus be allowed to be grouped together by the coherence loss. Finally, if so desired by the user, the coherence loss can be de-emphasized by tuning its corresponding hyperparameter. The inclusion of the coherence loss allows users to control this aspect of the prototypes, as shown in Appendix E.3.
> perturbation?
This is a great question. We have included examples in the shared pdf and discuss this in our global response.
> ‘well defined cases’?
This is a very key point that we are happy to further clarify. By well-defined cases, we mean that every prototype should be explicitly tied to one or more images so that we can visualize them. In both ProtoViT and the original ProtoPNet, this is achieved by projecting each prototype to be exactly equal to part of the latent representation of some training image. We can then refer back to the training image for a visual representation of the prototype. Without such a mechanism to enable visualizations, prototypes are just arbitrary learned uninterpretable tensors in the latent space of the network, which are not that different from a convolutional filter in any black box model.
We again thank the reviewer for their thoughtful comments. We hope we have addressed all your concerns in our response and are happy to provide additional clarifications if needed.
---
Rebuttal 2:
Title: citations
Comment: [1] Xue, Mengqi, et al. "Protopformer: Concentrating on prototypical parts in vision transformers for interpretable image recognition." arXiv preprint arXiv:2208.10431 (2022)
[2] Kim, Sangwon, Jaeyeal Nam, and Byoung Chul Ko. "Vit-net: Interpretable vision transformers with neural tree decoder." International conference on machine learning. PMLR, 2022.
[3] Rymarczyk, Dawid, et al. "Interpretable image classification with differentiable prototypes assignment." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[4] Rymarczyk, Dawid, et al. "Protopshare: Prototype sharing for interpretable image classification and similarity discovery." arXiv preprint arXiv:2011.14340 (2020).
---
Rebuttal 3:
Title: Rebuttal Response
Comment: Thank you for a detailed rebuttal!
I understand that other methods may not project to a latent of a training example, but this doesn't seem like a big step / modification to add to the other methods for comparison. If this was done then having this comparison would have added strength to this paper. I understand that they would just be similar to ProtoPNet if this was added, but they wouldn't be the same. Thank you for emphasizing that other ViT-based methods are unable to handle geometric variations. Thank you for adding parts of this discussion in the paper if accepted. On this point overall, having a unifying comparison between these approaches would have contributed well to this area.
Thank you for the perturbation and misclassification examples! Please do add this to the final paper if accepted!
I appreciate the response on the coherence loss. I think further analysis of this loss and its limitations would be desirable. I understand that this is in the latent space, but the concern is still the same. If several parts are semantically different, but their combination is essential for the downstream task, then I still think this loss would restrict that. I understand this loss adds more control, which is a reason I listed it as a strength, but I still think it comes with limitations (unless there is more analysis that says otherwise).
Thank you for addressing some of my other concerns.
I've updated my score to reflect the added strength from the rebuttal.
---
Rebuttal 4:
Comment: Thank you for your response!
> I understand that other methods may not project to a latent of a training example, but this doesn't seem like a big step / modification to add to the other methods for comparison. If this was done then having this comparison would have added strength to this paper.
We agree that adding visualizations from other ViT-based models would be very interesting and could directly show the strength of our method. However, since those methods also learn prototypes on the class tokens, which have no correspondence to the image as we discussed in the related work, it is unclear how those prototypes could be visualized even with projection. The main reason that those methods do not perform projection is that they observe a dramatic performance drop. This is explicitly stated on page 11 of ProtoPFormer, where they say:
"Our proposed ProtoPFormer does not employ the “push” process for two main reasons. One is that this process causes the performance degradation with ViT backbones, and the other is that our global and local prototypes are the high-level abstraction of associated visual explanations, representing their features based on the whole training set."
This drop occurs because their method is not able to learn a well-clustered latent space, a point reviewer **GK9W** agreed with. Thus, it is extremely unclear how those methods produce the visualizations used in their manuscripts.
>Thank you for the perturbation and misclassification examples! Please do add this to the final paper if accepted!
We will make sure to do so!
>If several parts are semantically different, but their combination is essential for the downstream task, then I still think this loss would restrict that.
When parts are semantically different, they would be captured by other prototypes associated with that class, as encouraged by the orthogonality loss, which is why we adopt it in our method. The coherence loss encourages similarity inside a prototype (which consists of some sub-prototypes), and **the orthogonality loss encourages prototypes to be different from each other (to have diverse representations)**. So, if there are several semantically different parts that are each necessary for classification, a different prototype should learn to identify each one of them. We will make sure to discuss this more in the final manuscript.
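To make the division of labor between the two penalties concrete, here is a hypothetical sketch of a within-prototype coherence term and a between-prototype orthogonality term. The shapes, function names, and exact formulas are assumptions for illustration, not the paper's Equation 4 or its actual orthogonality loss:

```python
import numpy as np

def coherence_loss(protos):
    """Encourage sub-prototypes *within* each prototype to be similar.
    protos: array of shape (P, K, D): P prototypes, K sub-prototypes
    each, D latent dims. Returns the negated mean pairwise cosine
    similarity among sub-prototypes of the same prototype (lower is
    more coherent). Illustrative sketch only."""
    p = protos / np.linalg.norm(protos, axis=-1, keepdims=True)
    sims = np.einsum('pkd,pld->pkl', p, p)  # per-prototype K x K cosine matrix
    K = p.shape[1]
    # average over off-diagonal pairs, then negate so lower = more similar
    off_diag = (sims.sum(axis=(1, 2)) - K) / (K * (K - 1))
    return -off_diag.mean()

def orthogonality_loss(protos):
    """Encourage *different* prototypes (here summarized by the mean of
    their sub-prototypes) to be dissimilar from each other."""
    means = protos.mean(axis=1)
    m = means / np.linalg.norm(means, axis=-1, keepdims=True)
    gram = m @ m.T  # P x P cosine similarities between prototypes
    P = m.shape[0]
    return (np.abs(gram).sum() - P) / (P * (P - 1))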
We hope this would further address your concerns. And thank you again for updating your score and engaging with us! We greatly appreciate your feedback. | Summary: The authors introduce ProtoViT, a model that leverages the Visual Transformer (ViT) architecture and integrates prototypical parts for case-based reasoning. This method is self-explainable, adhering to the rule "this looks like that." A novel aspect of ProtoViT is the use of prototypical parts of varying sizes, utilizing a ViT backbone. The authors claim that these prototypical parts are coherent and inherently interpretable. They evaluate ProtoViT on the CUB and Stanford Cars datasets, using accuracy as the metric for comparison. The training process for the model involves five loss components in addition to cross-entropy.
Strengths: The paper is well-written, with images effectively illustrating the intended concepts. The introduction of the greedy matching algorithm is particularly engaging and holds significant importance for the community. The introduction section is well-crafted, clearly outlining the contributions. Additionally, the computational experiments are thorough and demonstrate comprehensive accuracy.
Weaknesses: The primary concern lies in ensuring that the ViT backbone can maintain prototypical parts that are both local and inherently interpretable. Since ViT uses attention mechanisms that mix information from all patches, there is a risk of confusion for the end user. To address this, I suggest conducting a spatial misalignment benchmark [1] to analyze its influence.
Additionally, there are no metrics related to explainability demonstrating whether the model improves interpretability, such as with FunnyBirds [2] or through a user study [3].
[1] Sacha, Mikołaj, et al. "Interpretability benchmark for evaluating spatial misalignment of prototypical parts explanations." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024.
[2] Hesse, Robin, Simone Schaub-Meyer, and Stefan Roth. "FunnyBirds: A synthetic vision dataset for a part-based analysis of explainable AI methods." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[3] Kim, Sunnie SY, et al. "HIVE: Evaluating the human interpretability of visual explanations." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
Technical Quality: 2
Clarity: 3
Questions for Authors: How are you measuring all the claims, especially coherence and inherently interpretable models?
Could you provide a prototype purity metric for PIP-Net?
Is it possible to analyze whether ViT can be a backbone for prototypical parts or not, test with spatial misalignment benchmark?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: There is no quantification of interpretability, no user study, and no reference to XAI benchmarks such as FunnyBirds or the spatial misalignment benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and comments. We are happy that you find our matching algorithm engaging and a significant contribution and our experiments thorough and comprehensive. We address your comments below:
> The primary concern lies in ensuring that the ViT backbone can maintain prototypical parts that are both local and inherently interpretable. Since ViT uses attention mechanisms that mix information from all patches, there is a risk of confusion for the end user. To address this, I suggest conducting a spatial misalignment benchmark [1] to analyze its influence.
Thank you for the great suggestion. As suggested, we have run experiments on the benchmark [1] and found that our model outperforms CNN-based models such as ProtoPool [2] and ProtoTree [3]. We present the complete results in our shared response. Our model achieves performance in PLC very close to ProtoPNet [4] and strictly better performance in PAC and PRC. Our model is also robust to adversarial attacks, with only roughly a 2-3% drop in accuracy. These results suggest that our model with a ViT backbone can create prototypical parts that are both local and inherently interpretable as well as or better than many influential existing models with CNN backbones. We will add these results to our revised paper.
Further, in section G of the appendix, we present several examples of global analysis. These figures show the highest activations of a variety of prototypes on training and test images, and consistently show that each prototype activates highly on a single semantic concept. This consistency suggests that our visualizations are **a good representation of the prototypes**, and not just spuriously selecting nice patches.
> Additionally, there are no metrics related to explainability demonstrating whether the model improves interpretability, such as with FunnyBirds [2] or through a user study [3].
We apologize for the confusion, we did indeed run a user study based on the agreement task from HIVE, which is your reference [3]. It’s in Appendix I, labeled in the table of contents in the appendix as a user study. The study shows that our model improves interpretability, and shows statistically significantly improvements in user understanding and confidence on the model reasoning process. We will make sure the user study is clearly discussed in the main body of the revised paper.
> How are you measuring all the claims, especially coherence and inherently interpretable models? Could you provide a prototype purity metric for PIP-Net? Is it possible to analyze whether ViT can be a backbone for prototypical parts or not, test with spatial misalignment benchmark?
We agree that these are important clarifications. For coherence, we have a definition in the training algorithm, Section 3.4, Equation 4. “Inherently interpretable models” are constrained to make their reasoning processes easier to understand; see [5]. Our paper’s prototype-based reasoning is a constraint on the network that makes its reasoning process easier to understand. The results from the newly run misalignment benchmark described above and in our global response show that a ViT can work as well as CNN backbones in our proposed method.
Finally, while we agree that a purity metric would be useful, given the limited time to prepare our rebuttal, we decided to focus on other experiments, such as the misalignment experiments on the new benchmark [1] suggested by the reviewer (which we deemed more important). In our understanding, the purity metric simply quantifies whether prototypes consistently activate on the same concept, which we show to be the case through many examples of global analysis. We expect the purity metric to be strong based on our global and local analysis results, and we are working on computing the purity metric to add to our revised paper.
Thanks again for the thorough review and the great suggestions. We believe we have addressed the main concerns raised by the reviewer. We are happy to provide additional clarifications if needed.
[1] Sacha, Mikołaj, et al. "Interpretability benchmark for evaluating spatial misalignment of prototypical parts explanations." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024.
[2] Rymarczyk, Dawid, et al. "Interpretable image classification with differentiable prototypes assignment." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[3] Nauta, Meike, Ron Van Bree, and Christin Seifert. "Neural prototype trees for interpretable fine-grained image recognition." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[4] Chen, Chaofan, et al. "This looks like that: deep learning for interpretable image recognition." Advances in neural information processing systems 32 (2019).
[5] Rudin, Cynthia, et al. "Interpretable machine learning: Fundamental principles and 10 grand challenges." Statistic Surveys 16 (2022): 1-85.
---
Rebuttal Comment 1.1:
Comment: That is indeed a great response to my review. Thank you for the clarifications and discussion. If your work is accepted, please include values for the purity metric, as it is an established metric for evaluation. Including this will enhance the consistency of the research.
The results of the spatial misalignment benchmark are surprising, as I did not expect the ViT to be more robust in terms of explanation quality and faithfulness. Good job!
Regarding the user study, I really appreciate the presentation of results from Ma et al. [1], who also tested if users performed better than random. It would be worthwhile to check this for your method as well.
After consideration, I am leaning toward increasing my grade, but I still need to digest the information from other reviewers and their discussions.
[1] Ma, Chiyu, et al. "This looks like those: Illuminating prototypical concepts using multiple visualizations." Advances in Neural Information Processing Systems 36 (2024).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
> If your work is accepted, please include values for the purity metric, as it is an established metric for evaluation. Including this will enhance the consistency of the research.
We will make sure to include the purity metric to the final manuscript if the paper is accepted.
> Regarding the user study, I really appreciate the presentation of results from Ma et al. [1], who also tested if users performed better than random. It would be worthwhile to check this for your method as well.
We agree that this could be an interesting addition to our paper, but it will take a long time to design and complete such a user study. We don't believe that we can finish it before the end of the discussion period. We will add it to the final manuscript if the paper is accepted.
Thanks again for engaging with us! | Summary: This paper presents a novel strategy to learn interpretable visual prototypes for visual transformers, with a good property of offering spatially deformed prototypes. The method also introduce an slot mechanism which can learn an adaptive number of prototypical parts. The proposed are wisely designed for visual transformer architectures.
Strengths: 1. It is nice to draw inspiration from the focal similarity when computing the patch features.
2. Different from Deformable ProtoPNet, the proposed method presents a new way to accommodate geometric variations of objects.
3. The proposed method is validated on two benchmarks and with extensive ablation studies.
4. In general, the paper is well written.
Weaknesses: 1. The first concern is about the use of deformable prototypes containing K sub-prototypes, which means the proposed method will have K times more learnable prototype vectors than ProtoPNet, TesNet, ProtoPFormer, and so on. These previous approaches only use 1*1 prototypes. Does the performance improvement of the proposed method come from the largely increased number of sub-prototypes?
2. The idea of the greedy matching algorithm is similar to the greedy prototype projection proposed in [1, 2]. The authors should state the differences from these related works. Also, some important work using prototypes for interpretable image classification should be reviewed and discussed, such as [3, 4, 5].
3. Regarding the adaptive slots mechanism, the motivation of learning an additional indicator to measure the importance of sub-prototypes is good. From my understanding, the learnable vector v acts like a gate, which should be saved as a model parameter after training. One potential limitation is that this mechanism introduces an extra gate parameter v compared with previous methods.
4. The adjacency masking is not very clear. Since the prototypes are not initialized to have position information, how do you choose the patch/feature tokens around the prototypes within r?
5. The authors mention the issue of performance degradation after prototype projection. Does the proposed method still suffer from such issue? What extent will the performance drop?
6. The method has too many loss coefficients, which are selected without a detailed tuning procedure.
References:
[1] Knowledge Distillation to Ensemble Global and Interpretable Prototype-Based Mammogram Classification Models
[2] Pixel-grounded prototypical part networks
[3] PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification
[4] Learning support and trivial prototypes for interpretable image classification
[5] Concept-level debugging of part-prototype network
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and comments. We are happy that you find our method novel and well-designed for transformers. We address your comments below:
> The first concern is about the use of deformable prototypes containing K sub-prototypes, which means the proposed method will have K times more learnable prototype vectors than ProtoPNet, TesNet, ProtoPFormer, and so on. These previous approaches only use 1*1 prototypes. Does the performance improvement of the proposed method come from the largely increased number of sub-prototypes?
Thanks for the important question. We discuss this point in section F of the appendix. While our prototypes do consist of more parts than prior work, they represent a similar proportion of each input, since the latent space of our backbones has a finer spatial resolution. We are not sacrificing locality or changing the general reasoning process. We only add flexibility to the prototypes by decomposing them into multiple, flexible parts. In fact, when using 1*1 prototypes, the model performance is **85.28 +/- 0.11**, which is comparable to our final model. So, the improved performance is not necessarily because of the increased number of sub-prototypes.
However, **the interpretability of the earlier method is very limited because the 1*1 patches are too small to convey much information**, which is exactly where the motivation of our work comes from. We are open to moving this section to the main body for the camera ready if the reviewer thinks that is warranted.
> Regarding the adaptive slots mechanism, the motivation to learn an additional indicator that measures the importance of sub-prototypes is good. From my understanding, the learnable vector v is like a gate, which should be saved as a model parameter after training. One potential limitation is that such a mechanism introduces an extra gate parameter v, compared with previous methods.
The reviewer is correct that there is an additional parameter associated with each part of each prototype (the slot indicator), but we argue that this is not a limitation - it’s a **benefit**. This indicator allows us to learn prototypes with different numbers of parts, increasing the **flexibility** of the network and helping us reduce the overall number of prototypical parts when possible. Like adding any parameter to any network, there are benefits and drawbacks, but we confirm empirically that the benefits outweigh the drawbacks for performance. And because of this, we can learn prototypes that capture more specific features, such as the red eyes in Fig. 4. Without these additional parameters, instead of learning the exact red eye feature, the model would only learn the head of the bird as a whole. This is an important discussion which we will include in our revisions.
> The adjacency masking is not very clear. Since the prototypes are not initialized to have position information, how do you choose the patch/feature tokens around the prototypes within r?
We apologize for the confusion. Adjacency masking is tied in with the greedy matching procedure: we match the first part of a prototype to anything in a given image, mask out every latent patch more than radius r away from the selected patch, and greedily select the best match for the next part from among the cells that were not masked out. We hope this clarification helps.
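To make the procedure concrete, here is a minimal NumPy sketch of this match-then-mask loop. The grid size, radius, and function names here are our own illustrative assumptions for a hedged example, not the authors' implementation:

```python
import numpy as np

def greedy_match(tokens, proto_parts, grid=14, r=1):
    """Greedily match sub-prototype parts to latent patches with adjacency masking.

    tokens: (grid*grid, d) latent patch features
    proto_parts: (K, d) sub-prototype vectors
    r: adjacency radius (in grid cells) used to mask out distant patches
    """
    allowed = np.ones(grid * grid, dtype=bool)          # all patches allowed at first
    ys, xs = np.divmod(np.arange(grid * grid), grid)    # row/col index of each patch
    matches = []
    for part in proto_parts:
        sims = tokens @ part                            # similarity of this part to every patch
        sims = np.where(allowed, sims, -np.inf)         # exclude masked-out patches
        idx = int(np.argmax(sims))                      # best remaining match
        matches.append(idx)
        # mask out every patch more than radius r away from the selected patch,
        # so the next part is matched only among nearby cells
        y0, x0 = divmod(idx, grid)
        allowed &= (np.abs(ys - y0) <= r) & (np.abs(xs - x0) <= r)
    return matches
```

Each iteration picks the best-matching patch for one part, then shrinks the allowed region to a radius-r neighborhood of that patch, which is how position information arises from the matches themselves rather than from any prototype initialization.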
> The authors mention the issue of performance degradation after prototype projection. Does the proposed method still suffer from such issue? What extent will the performance drop?
Yes, like other prototype based methods, we do see a small drop in performance after projection relative to before. The drop in performance is usually around 0.8 to 1% accuracy, but we report only accuracy values taken following the projection step. That means the strong performance we report already accounts for this slight drop in performance.
As described in Theorem 2.1 from ProtoPNet [1], under a perfect training setting, the drop in performance should be negligible. The big drop in performance typically seen is mainly caused by the fact that the prototypes are not trained semantically close enough to the latent patches. Since the prototypes trained in our method are semantically much closer to the latent patches, our drop in performance is minimal, as discussed above.
We will make this clear in our revised paper.
> The method has too many loss coefficients, which are selected without a detailed tuning procedure.
Like any hyperparameter, these loss coefficients can be tuned by a variety of techniques. For simplicity, we used the values suggested in prior work for terms that already existed, and used a small grid search to select the other values. No dedicated tuning procedure is necessary for these coefficients. Additionally, we note that the number of coefficients in our loss (five) is comparable to other similar work, such as TesNet [2]. We will clarify this in the revisions.
[1] Chen, Chaofan, et al. "This looks like that: deep learning for interpretable image recognition." Advances in neural information processing systems 32 (2019).
[2] Wang, Jiaqi, et al. "Interpretable image recognition by constructing transparent embedding space." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. I am happy with your work.
It is great to have an exhaustive discussion about the performance drop caused by prototype projection. I hope such a discussion can be included in the main paper, if your work is accepted, since a few prior works have noticed this practical issue. Good job!
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response! We will make sure to include the discussion of the performance drop caused by projection in the body of the final manuscript. It will be _discussed in the training algorithm section after the discussion of projection starting on page 7_. It is encouraging to know that our detailed revisions have addressed your concerns. Your recognition of our work in terms of contribution, novelty, soundness, and presentation is greatly appreciated.
We would also appreciate it _if you could consider updating the rating_, as this would reflect your most recent evaluation and kind recognition of the paper and acknowledge our efforts.
Thank you again for engaging with us! | Summary: The paper presents ProtoViT, a method for interpretable image classification. ProtoViT incorporates ViT backbone with deformed prototypes that explains its predictions. ProtoViT consists of three components:
(1) a feature encoding layer with a pre-trained ViT backbone, which computes a latent representation of an image;
(2) a greedy matching layer which compares the latent representation to learned prototypes; and
(3) an evidence layer which aggregates prototype similarity scores into a classification using a fully connected layer.
Quantitatively, ProtoViT achieves better performance than previous prototype-based methods. Qualitatively, it identifies meaningful prototypes to explain the prediction.
Strengths: 1. The paper is well-written and illustrates the technical details. The qualitative visualizations are helpful in understanding the method. The attached appendix provides a lot of meaningful details.
2. The overall scheme of interpretable image classification is nicely presented with small patches!
Weaknesses: 1. It is understandable that one goal of interpretable image classification is to tell why something works. However, a bigger goal is to understand why something didn't really work. Is it possible to highlight the examples where the method could not predict the correct class? It would be then interesting to understand the reasons for failure.
2. The method relies on a strong assumption of a solid backbone model that can already achieve good performance on the given task. This restricts the applicability of the method to limited scenarios where we do not need interpretable image classification in the first place. In such scenarios, the nearest neighbors obtained using latent feature representations and pixel correspondences may be sufficient.
3. The limitations of the method are not clear from the paper. I could guess at certain places in the method section where things could go wrong. It would be better if the authors could illustrate those points with suitable qualitative analysis.
4. User study is not properly designed and is not statistically significant.
Overall: The paper is interesting. On a first reading, everything looks good. But then you start asking yourself questions about the different scenarios where the method will not work (and there are definitely such scenarios, as can be judged from the quantitative evaluation and user study) and why it will not work. The paper falls short in explaining them in detail.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses Section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors did provide limitations but a far-fetched one that does not really talk about the limitations of the current method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We are glad that you find our paper interesting, well-written and nicely presented. We address your points below:
> ... Is it possible to highlight the examples where the method could not predict the correct class? It would be then interesting to understand the reasons for failure.
Yes! Per your suggestion, we present examples of this in the shared pdf response. Interestingly, we found that in some of the cases where the algorithm appeared to have gotten it wrong, the dataset was actually mislabeled and the algorithm was correct (see the first example in Figure 1 of the pdf rebuttal). Appendix Fig. 8 also includes an example where the model misclassified a Black Tern and a Pigeon Guillemot. This misclassification is consistent across different models because of the red feet that both birds have. We will include these additional examples in the revised paper.
> The method relies on a strong assumption of a solid backbone model that can already achieve good performance on the given task. This restricts the applicability of the method to limited scenarios where we do not need interpretable image classification in the first place. In such scenarios, the nearest neighbors obtained using latent feature representations and pixel correspondences may be sufficient.
This is an important question that needs to be clarified. Nearest neighbors is not actually sufficient. First, calculating nearest neighbors for a full dataset is extremely expensive, which means training is extremely difficult since you’d have to calculate nearest neighbors at each iteration (or at least at many iterations); this makes it impractical for any complicated task.
Second, nearest neighbors are much more sensitive and difficult to troubleshoot. With prototypes, a person can look at all the prototypes and audit them, whereas you can’t do that with nearest neighbors for all the points. As we mentioned in our response to your first question, we found points that are mislabeled - if those were learned prototypes, we would have pruned them or relabeled them. We wouldn’t be able to check all those examples with nearest neighbors.
Third, we argue that the method is not limited to scenarios where we don’t need interpretability. A good example is IAIA-BL, which uses prototypes for analyzing breast lesions in mammograms [1]. We **definitely** need interpretability here, as the stakes are high. Additionally, even when a black box model performs well, it doesn’t necessarily mean that it learns well. The example in our ablation section, Appendix Fig. 8, shows a version of our model that has higher performance than the original; however, it is clear that the model confused the beak in the test image with the feet in the prototypes.
For applications that are high stakes, having an interpretable network allows us to troubleshoot the data and the model so that we can improve accuracy **above and beyond** the black box. The black box is just a good starting point for training our interpretable networks. In fact, we show that our network already has better performance than the black boxes, including the ViT-based and CNN-based models (see Table 2 of the main paper).
> Limitations of the method is not clear from the paper...
In addition to what is discussed in the paper, the following limitations could apply to our method:
1. Even though our method is comparable with other prototype models in terms of training speed, we should note that all prototype models take additional time compared to their black box backbone.
2. Location misalignment is also a common problem in CNN-based models. As we discuss in our global response, this issue is still present in our model, but to a much lesser extent.
We will add these to the revised paper.
> User study is not properly designed and is not statistically significant.
We apologize for the confusion. The user study results are actually significant at the alpha=0.05 level (all our p-values for the t-tests are below 0.05), as shown in Table 8 of the appendix.
As for the design of the user study, we followed a similar structure of the agreement task to that of the HIVE [2] paper, which is known for the quality of its human-studies experiments. However, we would be happy to consider suggestions to improve our user study.
> Overall: The paper is interesting. In a first reading, everything looks good. But then you start asking yourself questions about the different scenarios where the method will not work ... The paper falls short in explaining them with details.
We thank the reviewer for their kind words regarding our paper. We hope that with additional clarification to the manuscript based on your and other reviewers’ comments, we have provided more detail about our method and its benefits and potentially downsides.
We believe that our method makes it easier to audit and understand models, and that the prototype-based reasoning gives us information on how to solve a problem, such as by augmenting the data to learn a specific concept. Importantly, it also tells us when we should not trust the prediction, as demonstrated above. Through comprehensive experiments and ablations, we have shown that our method performs reliably well at this task and is a strict improvement over other prototype networks, as its localized parts are totally faithful to the reasoning process (we do not require upsampling for our visualizations, unlike other prototype-based methods).
We thank you for your comments and hope we have addressed them all. Please let us know if there is anything you would like us to further clarify.
[1] Barnett, Alina Jade, et al. "A case-based interpretable deep learning model for classification of mass lesions in digital mammography." Nature Machine Intelligence.
[2] Kim, Sunnie SY, et al. "HIVE: Evaluating the human interpretability of visual explanations." ECCV 2022.
---
Rebuttal 2:
Comment: We hope that our response has helped explain our work's contributions and address your concerns. Please feel free to let us know if you have any further questions. We are very happy to discuss!
---
Rebuttal Comment 2.1:
Title: Thanks!
Comment: Thank you for the rebuttal! I have raised my scores. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their comments. In addition to addressing all of your concerns individually, we believe the following clarifications and additional experiments may be of interest to all of you. These additional experiments further reinforce the strength of our method and will be added to our revised paper.
**(1) Location misalignment benchmark:**
To ensure that the ViT backbone can maintain prototypical parts that are both local and inherently interpretable, we performed analysis on the benchmark [1], as suggested by reviewer **GK9W**. The quantitative results can be found in the shared pdf. Overall, we found that by integrating a ViT backbone with our algorithm, we achieve equal or better performance than influential CNN-based prototype models such as ProtoPNet [2], ProtoPool [3], and ProtoTree [4] in terms of percentage change in location (PLC), percentage change in activation (PAC), and percentage change in ranking (PRC). For these metrics, lower is better. To ensure a fair comparison, we strictly follow the procedure for performing gradient-based adversarial attacks as described in the paper. However, we would like to point out that since our prototypes contain only 4 (out of 196) or fewer patches, the adversarial attack is stronger against our model than against other CNN-based models, as a larger proportion of each image is perturbed for our model. Results can be found in Table 1 of the shared pdf.
**(2) Perturbation Analysis:**
Additionally, we have provided several instances of the perturbation analysis suggested by reviewer **Mc2e**, in which we mask out the region selected by each prototype, as shown in Figure 2 of the shared pdf. In each row, we mask out all matched locations for a prototype (middle-left column) using a white mask on the black wings and a black mask on the red parts, and check where that prototype activates after masking (shown in the leftmost column). We then confirm that the activated region for other prototypes remains reasonable when the mask from another prototype is applied (right two columns). We observed that after removing the preferred region by each prototype, it activates on another reasonable alternative (e.g., a red belly prototype might activate on a red back as a second choice).
Moreover, we performed the analysis on the test image with a modified background, and we observe that altering the background does not substantially impact the activation location of our ViT-based prototype, with all parts still solidly focused on the bird’s breast (again please refer to Figure 2 of the pdf). This is unlike the observation made in the original Location Misalignment Benchmark paper, where a CNN-based prototype model suffers from a dramatic shift in the activation location when there is a background change. This shows that our ProtoViT has a better location alignment than the CNN-based prototype models.
**(3) Misclassification examples:**
We provide the reasoning process of how our model misclassified a test image of a summer tanager and a slaty backed gull in Figure 1 of the shared pdf. We found that the misclassification of the given summer tanager example may be because of data mislabeling in the original dataset. A summer tanager does not have a black colored wing. That test image should indeed belong to the scarlet tanager class as the model predicted. Similar mislabeling cases also happen for Red Headed Woodpecker with image ID Red_Headed_Woordpecker_0018_183455 and Red_Headed_Woordpecker_0006_183383, as we found out when randomly selecting examples to present for the paper.
These examples showcase the ability of our method to help us understand the reasoning process of the model, not just when it is right, but also when it is wrong.
**(4) Other prototype-based ViT models:**
Finally, we would like to emphasize that some models suggested by the reviewers for comparison (e.g., ProtoPFormer [5], ViT-Net [6], etc.) do not perform any form of prototype projection, and as such do not have any explicit visualization associated with their prototypes. Without such a mechanism to enable visualizations, prototypes are just arbitrary learned tensors in the latent space of the network, not that different from a convolutional filter in any black box model. In contrast, our network does perform projection, allowing each prototype to be tied directly to one well-defined training image. This allows us to present clear, faithful visualizations of model reasoning.
[1] Sacha, Mikołaj, et al. "Interpretability benchmark for evaluating spatial misalignment of prototypical parts explanations." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024.
[2] Chen, Chaofan, et al. "This looks like that: deep learning for interpretable image recognition." Advances in neural information processing systems 32 (2019).
[3] Rymarczyk, Dawid, et al. "Interpretable image classification with differentiable prototypes assignment." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[4] Nauta, Meike, Ron Van Bree, and Christin Seifert. "Neural prototype trees for interpretable fine-grained image recognition." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[5] Xue, Mengqi, et al. "Protopformer: Concentrating on prototypical parts in vision transformers for interpretable image recognition." arXiv preprint arXiv:2208.10431 (2022).
[6] Kim, Sangwon, Jaeyeal Nam, and Byoung Chul Ko. "Vit-net: Interpretable vision transformers with neural tree decoder." International conference on machine learning. PMLR, 2022.
Pdf: /pdf/6c6e6c1ee40f2b39ef9567e2b0d36fcc6ac07ac0.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Disentangled Generative Graph Representation Learning | Reject | Summary: The paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework that aims to guide graph mask modeling through disentangled latent factors to enhance the disentanglement of learned representations. Extensive experiments across 11 public datasets for node and graph classification tasks demonstrate the framework's effectiveness, significantly outperforming many existing self-supervised methods.
Strengths: Innovative Approach: The DiGGR framework innovatively utilizes disentangled latent factors to guide graph mask modeling, a novel contribution in generative graph representation learning that significantly enhances the model's explainability and robustness.
Comprehensive Experiments: The paper conducts extensive experiments on multiple datasets and tasks, showing significant performance improvements over existing methods, thus providing strong empirical support for the proposed approach.
Weaknesses: Complexity and Scalability: The framework appears computationally complex, which might limit its scalability to very large graphs or real-time applications. Unfortunately, this aspect is not extensively discussed in the paper.
Lack of Theoretical Analysis: While the empirical results are strong, the paper lacks a detailed theoretical analysis of why the disentanglement process improves performance, which could provide deeper insights into the method’s efficacy and limitations.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1: Computational Complexity
We are grateful for your comments. We will expand the discussion on complexity and scalability in our revisions, focusing on the following three aspects:
1. Complexity Analysis: We discussed the network's complexity in Section 3.4 and will later compare the time and space complexity of additional models to demonstrate that DiGGR is comparable to previous work in terms of complexity.
2. Empirical Results: We compared DiGGR and SeeGera [1] on the Cora dataset. All experiments were conducted on an RTX 3090, with all models adhering to their default settings, and each performed a single run with the same random seed on the same PyTorch platform.
| Method | Training Time | Iterations per Second | Model Size | Training GPU Memory |
|:---------:|:-------------:|:----------------:|:-----------:|:-------------------:|
| SeeGera | 200 seconds | 5.28 it/s | 25 MB | 588.02 MB |
| **DiGGR** | **59 seconds** | **29.35 it/s** | **23.2 MB** | **566.83 MB** |
The experimental results indicate that our model has lower training time and resource consumption than SeeGera. In particular, DiGGR's training time is less than a third of SeeGera's. These results collectively demonstrate that DiGGR maintains a manageable overall complexity while achieving excellent performance.
3. Lastly, we are actively working on extending DiGGR with a focus on scaling our algorithm to extremely large graphs. We conducted additional experiments on the larger dataset ogbn-arxiv. During the reconstruction of the adjacency matrix, we used the method from [5], sampling only a portion of the matrix for computation each time. The results are as follows:
| Methods | ogbn-arxiv |
|:-------------------:|:----------------:|
| GraphMAE [2] | 71.75 ± 0.17 |
| GraphMAE2 [3] | 71.95 ± 0.08 |
| DiGGR | **72.12 ± 0.08** |
DiGGR outperforms both baselines on the ogbn-arxiv dataset, validating our approach's effectiveness and scalability. Previous works like GraphMAE initially focused on smaller datasets, but its successor, GraphMAE2, used the PPR-Nibble technique to extend to large graph data. We believe that DiGGR also has the potential to scale to large datasets, and we are actively working on advancing this capability.
## Q2: Theoretical Analysis
Thank you for your valuable suggestion. We would like to provide a detailed theoretical analysis as follows:
DiGGR is built on a graph autoencoder (GAE)-based framework. Recent studies [6, 7] have demonstrated a direct connection between GAE and contrastive learning through a specific form of the objective function. The loss function can be rewritten as follows:
$$
L^+ = \frac{1}{|\varepsilon^+|}\sum_{(u, v) \in \varepsilon^+}{\log f_{dec}(h_u, h_v)}
$$
$$
L^- = \frac{1}{|\varepsilon^-|}\sum_{(u', v') \in \varepsilon^-}{\log (1 - f_{dec}(h_{u'}, h_{v'}))}
$$
$$
L_{GAE} = -(L^+ + L^-)
$$
where $h_u$ and $h_v$ are the node representations of nodes $u$ and $v$ obtained from an encoder $f_{enc}$ (e.g., a GNN); $\varepsilon^+$ is the set of positive edges while $\varepsilon^-$ is a set of negative edges sampled from the graph, and $f_{dec}$ is a decoder. Typically, $\varepsilon^+ = \varepsilon$.
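As an illustration of the objective above (not the authors' implementation), the two terms can be computed as follows, assuming the usual inner-product decoder followed by a sigmoid:

```python
import numpy as np

def gae_loss(h, pos_edges, neg_edges):
    """Binary cross-entropy form of the GAE objective L_GAE = -(L^+ + L^-).

    h: (n, d) node embeddings produced by the encoder f_enc
    pos_edges / neg_edges: lists of (u, v) node-index pairs
    """
    def f_dec(u, v):
        # inner-product decoder with sigmoid: predicted edge probability
        return 1.0 / (1.0 + np.exp(-h[u] @ h[v]))

    l_pos = np.mean([np.log(f_dec(u, v)) for u, v in pos_edges])
    l_neg = np.mean([np.log(1.0 - f_dec(u, v)) for u, v in neg_edges])
    return -(l_pos + l_neg)
```

Written this way, the positive edges play the role of positive pairs and the sampled non-edges the role of negatives, which is the contrastive-learning reading of the GAE loss used in the analysis below.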
Building on recent advances in information-theoretic approaches to contrastive learning [8, 9], a recent study [6] suggests that for SSL pretraining to succeed in downstream tasks, task-irrelevant information must be reasonably controlled. Therefore, the following proposition is put forward:
The task irrelevant information $I(U; V| T)$ of GAE can be lower bounded with:
$$
I(U; V| T) \geq \frac{(E[N_{uv}^{k}])^2}{N_k} . \gamma^2
$$
minimizing the aforementioned $L_{GAE}$ is, in population, equivalent to maximizing the mutual information between the k-hop subgraphs of adjacent nodes, and **the redundancy of GAE scales almost linearly with the size of the overlapping subgraphs**.
The above proposition is proved in detail in [6], where $I(\cdot\,;\cdot)$ denotes mutual information, $U$ and $V$ are random variables of the two contrasting views, and $T$ denotes the target of the downstream task. $N_{uv}^{k}$ is the size of the overlapping subgraph of $G^k(u)$ and $G^k(v)$, and the expectation is taken with respect to the generating distribution of the graph and the randomness in choosing $u$ and $v$.
According to this lower bound, we need to reduce task-irrelevant redundancy to design better graph SSL methods. In DiGGR, we first factorize the input graph based on latent factor learning before feeding it into the masked autoencoder. Take **Figure 1 from the PDF in the global rebuttal** as an example. Nodes $a$ and $b$ have overlapping 1-hop subgraphs. However, after graph factorization, the connection between $a$ and $b$ is severed, thereby reducing large-scale subgraph overlap and lowering the lower bound on task-irrelevant information. As shown in Table 3 of the paper, after factorization, the latent factor groups extracted by DiGGR exhibit lower normalized mutual information (NMI), indicating reduced overlap between the latent factor groups. This result aligns with our theoretical analysis and highlights the advantages of our proposed method.
[1] Seegera: Self-supervised semi-implicit graph variational auto-encoders with masking. WWW 2023.
[2] Graphmae: Self-supervised masked graph autoencoders. KDD 2022
[3] GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner. WWW 2023
[4] Local graph partitioning using pagerank vectors. (FOCS'06). IEEE, 2006
[5] Fastgae: Scalable graph autoencoders with stochastic subgraph decoding. Neural Networks 142 (2021): 1-19
[6] What’s Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders. KDD 2023
[7] How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders. NeurIPS 2022
[8] What Makes for Good Views for Contrastive Learning?. NeurIPS 2020
[9] Self-supervised Learning from a Multi-view Perspective. ICLR 2021 | Summary: The paper introduces a framework called DiGGR, aimed at improving the robustness and explainability of generative graph models by addressing the issue of entangled graph representations.
Strengths: 1. The paper tells the story in an easy-to-read way, and the whole paper is quite easy to follow.
2. The problem of disentangled learning is a very popular yet important task.
3. The paper conducts comprehensive experiments to evaluate their method.
Weaknesses: 1. Lack of novelty. Graph disentangled learning is not a new task. There are many existing methods for disentangled representation learning, such as those maximizing KL divergence or minimizing mutual information between two sets of representations. A number of related works, such as [1], [2], [3] and [4], are not discussed. Also, node factorization is not a new idea, as seen in the node clustering of [3].
[1] Disentangled graph collaborative filtering. SIGIR 2020.
[2] Disentangled Graph Convolutional Networks. ICML 2019.
[3] Deep Generative Model for Periodic Graphs. NeurIPS 2022.
[4] Disentangled contrastive learning on graphs. NeurIPS 2021.
2. The motivation of the proposed method is not clear to me. For example, why should we use mask? Also, why the proposed method sticks to GAE, not VGAE or other types of GNN, such as GCN, GIN or GAT?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Line 21: Does "other modalities" mean the GMAE itself, or a different kind of graph? What's the difference between GMAE and other famous GNNs such as GCN, GIN, GAT, etc.?
2. Line 37, Why should we disentangle different pieces of information of a single node? Will this help improve GNN's expressiveness?
3. Line 120: The full name of EPM should be introduced before using its abbreviation.
4. The paper mentions "mask strategies" for quite a few times, but I'm still confused about the purpose of masking. The paper just mentions that the proposed approach uses masking but does not tell readers the intuition behind it. Will masking produce more expressive graph representation? Will it help disentanglement?
5. What's the difference between the proposed approach and a lot of existing approaches such as [1], [2], [3] and [4]? It seems that the proposed one factorizes nodes and edges and finally pulls them together, while other techniques may rely on KL divergence or mutual information. I don't think those works are well discussed by the paper.
[1] Disentangled graph collaborative filtering. SIGIR 2020.
[2] Disentangled Graph Convolutional Networks. ICML 2019.
[3] Deep Generative Model for Periodic Graphs. NeurIPS 2022.
[4] Disentangled contrastive learning on graphs. NeurIPS 2021.
6. I understand that the number of latent groups can be pre-defined, but can we theoretically guarantee that those groups are identifiable in terms of the learned representation? As shown in Figure 1, are those groups permutation-invariant with respect to their positions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes. Limitations have been discussed in the paper.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1: Novelty
Thank you for the time and effort you have dedicated to our paper. It is true that DiGGR and [1-4] use some common techniques in disentangled learning. However, we still believe our approach offers novelty in the following aspects:
(i) The main goal of our paper is to use disentangled learning to enhance self-supervised feature learning, rather than focusing solely on disentanglement itself. To the best of our knowledge, most current work on graph masked autoencoders (GMAE) overlooks the disentanglement of representations in node classification and graph classification tasks: when performing mask modeling, these approaches treat the entire graph as a whole and ignore its underlying structure.
For example, a node $n$ might belong to both community $A$ and community $B$, playing different roles in each. Consequently, different learning strategies should be adopted for these roles. However, most GMAE methods ignore these latent relationships and uniformly sample nodes from the entire graph for masking, disregarding the heterogeneous factors present within nodes. Therefore, we introduced disentanglement into GMAE and used a tailored probabilistic approach to improve the learned representations.
(ii) Our method differs significantly from those in [1-4]. Please refer to the **global rebuttal**. Briefly, the main differences are as follows:
(a) Our approach models the latent factor $z$ as a probability distribution rather than a variable. This differs from most prototype-based factor learning methods [1, 2, 4]. By modeling $z$ as a distribution, our method introduces randomness during network training, leading to faster convergence and more distinctive latent factors.
(b) We model the latent factor using a Gamma distribution, leveraging its non-negative property to transform the factorization problem into a non-negative matrix factorization problem. This contrasts with [3], where the distribution nature prevents such a conversion. According to [5], this approach results in representations that are more disentangled and expressive.
### W2: Why not other frameworks
What we provide in this article is a general framework method. Within the DiGGR framework, any other GNN, such as GCN, GIN, and GAT, can be used for representation learning. Additionally:
1. In graph SSL, contrastive methods dominate node and graph classification but face challenges: (a) reliance on manually constructed negative samples, (b) need for high-quality data augmentation. Graph autoencoders (GAEs) avoid these issues by directly reconstructing the input graph. Therefore, our work is based on a GAE framework.
2. Due to an overemphasis on structural information, GAEs have lagged behind contrastive learning in node and graph classification tasks. Recently, [6] demonstrated that masked autoencoders achieve performance comparable to contrastive learning in graph SSL, with [7,8] confirming this. Thus, we incorporated mask modeling into our network.
However, Graph mask modeling is not our main focus. Our goal is to use disentangling to improve self-supervised representation learning, supported by a tailored probabilistic latent factor learning method.
### Questions
**Q1**: 1. To clarify, "other modalities" refers to the graph domain. Line 21 means that while MAE techniques are well-established in the language and image domains, they are now gaining attention in graph learning; 2. GMAE and GNNs are analogous to MAE and Transformers. GMAE employs an encoder-decoder architecture to reconstruct input graphs and can use GNNs as either encoders or decoders.
**Q2**: Disentangling node latent structures has proven effective in graph representation learning [1,2]. The node latent structure determines the aspects of node information. For example, in a social network, node $n$ might represent both a student and a club member, showing different traits in each context. Thus, the ideal approach is to learn distinct node properties for each group and aggregate them to represent the node's overall information. Such an approach has proven effective in GNNs. For example, [3] addresses the issue of latent factor entanglement and shows that disentangled node representations, achieved through iterative neighborhood segmentation, are effective.
**Q3**: We will introduce the full name of EPM, which is Edge Partition Model, in the revision.
**Q4**: We use mask strategies to boost the learning capability of the GAE framework. Please see our response in W2 for details.
**Q5**: We have thoroughly reviewed [1-4]. Please refer to our detailed response in the **global rebuttal** section.
**Q6**: Theoretical guarantees for identifying latent groups pose challenges both for our study and for the graph learning field at large. Our algorithm encourages latent group discovery only if it improves the explanation of the defined loss terms. However, it does not guarantee identifying groups that may not be relevant to specific tasks. Thus, while it identifies meaningful groups, it may overlook less impactful ones.
Our algorithm does not enforce permutation invariance for the discovered latent groups. Instead, it relies on the data to determine whether such invariance is necessary for the task. This approach allows for a data-driven determination of the relevance and utility of permutation invariance in the latent groups identified by our model.
[1] Disentangled graph collaborative filtering. SIGIR 2020
[2] Disentangled Graph Convolutional Networks. ICML 2019
[3] Deep Generative Model for Periodic Graphs. NeurIPS 2022
[4] Disentangled contrastive learning on graphs. NeurIPS 2021
[5] Learning the parts of objects by non-negative matrix factorization. Nature 1999.
[6] Graphmae: Self-supervised masked graph autoencoders. KDD 2022.
[7] Seegera: Self-supervised semi-implicit graph variational auto-encoders with masking. WWW 2023.
[8] Gigamae: Generalizable graph masked autoencoder via collaborative latent space reconstruction. CIKM 2023.
---
Rebuttal Comment 1.1:
Title: reviewer response
Comment: Thank the authors for the time spent on the rebuttal. I think they partially addressed my concerns. As a result, I'm willing to increase my score. Here are the remaining questions:
1. I don't think treating $z$ as a distribution has any difference from existing works that rely on VAEs, such as [1] and [2], since a VAE also treats latents as random variables that are supposed to follow a Gaussian distribution. Basically, latents are sampled from a Gaussian in a VAE.
2. I don't see obvious benefits of modeling latents using a Gamma distribution, as even for a regular matrix whose values can be either negative or positive, we can still perform matrix factorization, such as SVD. I would suggest the authors clearly explain this in the paper. Also, I think with the Gamma distribution assumption, e.g., the last layer of the encoder needs to produce non-negative values. Will that cause any instability during training? Also, it would be great if the authors could explain the benefits brought by the sparsity of the Gamma distribution. Will this cause loss of information? Why not just use the L1 norm to enforce sparsity?
3. For Q2, are there mathematical metrics to measure the expressiveness of GNN? If so, I believe involving those metrics in experiments would be more persuasive.
4. Ignoring permutation invariance will introduce huge complexity for the model parameters to capture all kinds of graphs. If a graph has $n$ nodes, I guess it's $O(n!)$ to permute those nodes for a graph generation task.
[1] Deep Generative Model for Periodic Graphs. NeurIPS 2022.
[2] Multi-objective Deep Data Generation with Correlated Property Control. NeurIPS 2022.
---
Reply to Comment 1.1.1:
Title: We sincerely appreciate your efforts in reviewing the manuscript and hope our response will address your concerns effectively.
Comment: ### **Q1 and Q2**:
Thank you for your valuable advice. We will incorporate the benefits of using the Gamma distribution in the next revision. Essentially, we model $z$ as a Gamma distribution rather than a Gaussian distribution for the following reasons:
1. Compared to the Gaussian distribution, the Gamma distribution possesses **non-negative** and **sparse** properties, both of which we leverage in our approach.
a) Based on the non-negative characteristic, the latent factor learning can be converted into **non-negative** matrix factorization (NMF). Unlike canonical matrix factorization, NMF ensures that different features will not cancel one another out when calculating feature similarity [4]. Moreover, the non-negativity constraint naturally leads to a degree of sparseness, which has been shown to be a highly effective representation, distinct from both the completely distributed and the solely active component descriptions [5]. According to [6], this property inherently benefits the learning of disentangled representations, which is unattainable with canonical matrix factorization under a Gaussian distribution.
b) The sparsity induced by the Gamma distribution is akin to imposing an additional sparsity constraint on NMF. This constraint aids in enhancing the uniqueness of the decomposition while enforcing a locality-based representation, making the factorization sparser and improving its robustness against potential offsets caused by additive noise [6].
2. **Training Stability of Gamma**: We adopted the variational inference method proposed in [7], using the Weibull distribution to approximate the Gamma distribution for reparameterization. This method has been shown to enable stable training in graph networks [8]. We want to emphasize that the sparsity introduced by the Gamma distribution does not compromise the model's performance. On the contrary, recent studies [4, 5, 7] have shown that the increased feature sparsity enhances the network's nonlinearity, leading to improved model expressiveness and generalization ability. (Note that non-negative values are common in deep learning models, particularly as outputs from non-linear functions like ReLU. These non-linear functions can enhance the model's expressive power and generalization ability by increasing its nonlinearity, rather than bringing information loss.)
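The Weibull-based reparameterization mentioned above can be sketched as follows. This is only an illustrative NumPy example of the standard reparameterized Weibull draw (inverse-CDF form), not the authors' actual implementation; the function name and interface are our own:

```python
import numpy as np

def weibull_reparam_sample(k, lam, rng):
    """Reparameterized draw from Weibull(shape=k, scale=lam).

    Uses the inverse-CDF transform z = lam * (-ln(1 - u))**(1/k) with
    u ~ Uniform(0, 1), so the sample is a differentiable function of
    (k, lam). The Weibull serves as a surrogate for a Gamma-distributed
    latent factor: its support is non-negative, matching the Gamma.
    """
    u = rng.uniform(size=np.shape(lam))
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(0)
# 10,000 draws from Weibull(k=2, lam=1); all samples are non-negative.
z = weibull_reparam_sample(2.0, np.ones(10000), rng)
```

In a deep model the same transform is applied to encoder outputs (k, lam), which is what allows gradients to flow through the sampling step during training.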
3. **Why Choose Gamma instead of L1**:
The Gamma distribution introduces probabilistic sparsity, which has the following benefits:
a) Sparsity can be controlled more flexibly by adjusting parameters such as the shape and scale of the Gamma distribution [9]. Unlike the L1 norm, which forces some parameters to zero, this method allows for a balanced trade-off between sparsity and information retention [10], reducing the risk of information loss due to excessive sparsification.
b) The optimization of Gamma's probabilistic sparsity is smoother than that of the L1 norm [12], which causes non-smoothness and issues like "dead neurons" as parameters approach zero [11]. The sparsity introduced by the Gamma distribution can improve convergence and reduce the risk of local minima or unstable solutions [9].
c) Probabilistic sparsity provides nonlinearity, analogous to a ReLU activation function, which is often helpful for the model's expressiveness, whereas the L1 norm often hurts the original model's expressiveness.
### **Q3**
Similar to other works [1, 2, 8], we evaluate the expressiveness of our framework using various downstream task metrics. In Tables 1 and 2 of this paper, we use accuracy for comparison. In the global response, we specifically use the Gamma latent variable model for classification and compare it with the Gaussian latent variable model from [1] on downstream tasks. These comparisons demonstrate the expressiveness of the Gamma latent variable model. In future revisions, we will also include experimental results on the link prediction task to more comprehensively validate its expressiveness.
| Model | IMDB-BINARY (%) | MUTAG (%) |
|---------|-----------------|-------------|
| PGD-VAE | 54.37 ± 0.21 | 84.06 ± 0.52|
| Gamma latent variable model | 70.79 ± 0.34 | 85.12 ± 0.64| | Summary: The work proposes a disentangled generative self-supervised learning method for graphs. The authors introduce a latent factor learning module to capture the heterogeneous factors in the nodes. The proposed method factorizes the graph into factor-specific subgraphs, and jointly trains a disentangled Graph MAE applying distinct masks for each subgraph. Experimental results demonstrate that DiGGR outperforms traditional methods that treat the graph holistically, without accounting for its latent structure.
Strengths: 1. The proposed method first explores a factorization method for generative graph SSL.
2. The authors provide extensive experimental results and analysis on both node and graph-level tasks to show the improved effectiveness, interpretability, and generalization by using the proposed method.
Weaknesses: - The computational complexity of the proposed method is quite high. Could the authors provide a training time comparison against the baseline methods to help us get a sense of the real complexity?
- Could the authors provide more insights on how to find an optimal factor number K according to the statistics of diverse datasets? This might be useful for real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness part above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1: Computation Complexity
We genuinely appreciate the time and effort you dedicated to a thorough reading of our paper. Based on your suggestions, we conducted a training time comparison experiment. We believe it will be very helpful for readers to understand the actual model complexity, and we will include this experiment in a later revision of the paper.
We have provided the time required for training on IMDB-BINARY, IMDB-MULTI, and Cora in the following tables. All experiments were conducted on an RTX 3090, with all models adhering to their default settings, performing a single run with the same random seed on the same PyTorch platform.
In Section 3.4 of the paper, we analyze the time complexity of DiGGR and find it consistent with SeeGera [1]. Therefore, we included it in the comparison. Since the public code of SeeGera is only implemented on the Cora dataset, we also compared the training time of VEPM [2] on graph classification datasets for a comprehensive comparison.
| Method (Cora) | Training Time | iterations / second |
|---------|---------------|----------------|
| SeeGera | 200 seconds | 5.28 it/s |
| DiGGR | **59 seconds** | **29.35 it/s** |
| Method | IMDB-BINARY Training Time | IMDB-BINARY s/epoch | IMDB-MULTI Training Time | IMDB-MULTI s/epoch |
|-------|----------------|-----------------|----------------|-----------------|
| VEPM | 363 seconds | 1.21 s/epoch | 314 seconds | 1.57 s/epoch |
| DiGGR | **340 seconds** | **1.13 s/epoch** | **290 seconds** | **1.45 s/epoch** |
From the table, it is evident that DiGGR's training time is significantly shorter than SeeGera's. On the Cora dataset, DiGGR's overall training time is almost 4 times less than SeeGera's, and it processes 5.56 times more iterations per second. For graph classification, DiGGR trains faster than VEPM, with a shorter runtime per epoch.
Overall, these experimental results revalidate the conclusion in Section 3.4 of the paper: the complexity of DiGGR is comparable to previous works.
[1] Li, Xiang, et al. "Seegera: Self-supervised semi-implicit graph variational auto-encoders with masking." Proceedings of the ACM web conference 2023.
[2] He, Yilin, et al. "A variational edge partition model for supervised graph representation learning." Advances in Neural Information Processing Systems 35 (2022): 12339-12351.
## Q2: Optimal factor number K
Thank you for your valuable comments. As you pointed out, the hyperparameter $K$ is indeed a crucial factor in our model. We would like to share our view on this parameter from two aspects:
(1) Empirical Analysis: We initially analyzed the impact of different $K$ values on the datasets. We conducted ablation studies on $K$ across four different datasets (see Figure 5 in Appendix A) and documented the optimal hyperparameter $K$ used for each dataset (see Tables 5 and 6 in Appendix A). It was observed that the optimal $K$ for most datasets typically falls within the range of 2 to 4. Therefore, we believe the optimal number of latent factors depends on the dataset's complexity. In practical applications, we use NMI to measure the similarity between latent factors. If the NMI between a new latent factor and the existing ones is high, we opt to retain the original number of latent factors.
(2) Performance Stability: Although the optimal $K$ for achieving the best performance may vary across different datasets, the performance variation due to different $K$ values is not significant in practical applications. For instance, in the ablation study of $K$, the performance fluctuation remained relatively small as $K$ varied from 1 to 16. On the MUTAG dataset, the standard deviation of accuracy across different $K$ values was only 0.58, and on Cora, this value was just 0.25.
In summary, the optimal number of latent factors depends on the dataset's complexity. While the optimal $K$ may differ across datasets, the resulting performance variation is minimal. | Summary: The paper proposes a self-supervised learning framework DiGGR, aimed at enhancing the disentanglement of learned graph representations. The authors argue that existing generative graph models tend to overlook the entanglement of learned representations, leading to non-robust and non-explainable models. DiGGR addresses this by introducing a latent factor learning module and a disentangled graph masked autoencoder, allowing for factor-wise graph representations. The framework is tested on various benchmarks, demonstrating its effectiveness in outperforming previous self-supervised methods.
Strengths: 1. This paper studies an interesting research problem that is disentangled graph representation learning. This research problem is very hot recently.
2. The model design is easy to understand. The paper provides a detailed explanation of the proposed model.
3. The experiments demonstrate the effectiveness of the model. The performance improvement on some comparisons seems to be significant.
Weaknesses: 1. One of my concerns is from the novelty. I think the model design is a little similar to the works [1-2]. The authors should make more comprehensive discussions to show the differences between them.
2. The experiments ignore some recent or related contrastive baselines [1-4] for comparisons. The improvements on some datasets seem to be not significant.
3. More large-scale benchmarks should also be considered, e.g., OGB. The experimental settings are not very clear for reproducing the results.
[1] Disentangled contrastive learning on graphs. NeurIPS 2021.
[2] Disentangled Graph Contrastive Learning With Independence Promotion. TKDE 2022.
[3] Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations. TNNLS 2023.
[4] MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning. AAAI 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you provide the experimental settings for reproducing the results (e.g., hyper-parameter configurations for the model and baselines)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate the time and effort you dedicated to thoroughly reading our paper. Our code is uploaded in the **Supplementary Material**, with optimal hyperparameters in the config file. Set "use_best_cfg = True" to reproduce our results. Specific training hyperparameters and dataset details are in **Tables 5 and 6 in Appendix A**. If you have any further questions about reproducing the results, please do not hesitate to contact us. We are more than happy to address any further concerns you may have.
### W1: More discussion on Novelty
Thank you for your insightful questions. We first highlight a notable difference: DiGGR uses a different learning strategy compared to DGCL [1] and IDGCL [2]. They use a contrastive learning framework requiring negative samples and complex data augmentation methods ([1, 2] tested four different augmentation methods). In contrast, our model employs a Graph Masked Autoencoder, with input-node masking as the only pretext task, eliminating the need for explicit negative samples.
Due to character limitations, we thoroughly discussed the differences between DiGGR and [1-2] in the **global rebuttal** section. Please refer to that section for details.
In short, the differences primarily arise from three aspects: the general framework we use, the factor learning method we designed, and the probabilistic techniques specifically tailored for the generative model. We will also incorporate this comparison into future revisions of the paper.
### W2: recent or related contrastive baselines
We are grateful for your valuable suggestion. In the subsequent revision of our paper, we will include additional disentangled methods based on contrastive learning. For now, we have re-listed our tables below, showing the results for [1-4] as reported in their respective papers.
| Methods | IMDB-B |IMDB-M |MUTAG|NCI1|REDDIT-B |PROTEINS |COLLAB |
|-----|------|---------|---------|-----|------|------|------|
| DGCL | 75.9 ± 0.7 | 51.9±0.4 | 92.1±0.8 | 81.9±0.2 | 91.8±0.2 | 76.4±0.5 | 81.2±0.3 |
| IDGCL | 76.1 ± 0.2 | 52.3 ± 0.4 | 92.5 ± 0.6 | 82.4 ± 0.3 | 91.9 ± 0.3 | 77.1 ± 0.2 | 81.3 ± 0.3 |
| DiGGR | **77.68 ± 0.48** | **54.77 ± 2.63** | 88.72 ± 1.03 | 81.23 ± 0.40 | 88.19 ± 0.28 | **77.40 ± 0.05** | **83.76 ± 3.70** |
|Methods | Cora | Citeseer | Pubmed | PPI |
|-|-|-|-|-|
| MA-GCL | 83.3 ± 0.4 | 73.6 ± 0.1 | 83.5 ± 0.4 | - |
| DiGGR | **84.96 ± 0.32** | **73.98 ± 0.27** | 81.30 ± 0.26 | **78.30 ± 0.71** |
1. Due to large variations across different validation folds, we agree that for each individual dataset the improvement might not appear that significant (this observation applies not only to us but to almost all previous state-of-the-art methods; e.g., for node classification, SeeGera [5] outperforms GraphMAE [6] by only 0.1\% on the Cora dataset; for graph classification, IDGCL [2] outperforms DGCL [1] by 0.2\% on the IMDB-B dataset).
However, we would like to highlight that our model employs a generative self-supervised learning framework. Compared to contrastive learning-based methods [1-4], it achieves comparable results across 11 datasets, with **7 of them reaching optimal performance**. This is unlikely to occur by chance and thus can be used to verify its appealing performance.
2. We chose generative self-supervised learning method, specifically a Graph Masked Autoencoder (GMAE) as the foundational framework, and incorporated disentangled learning into GMAE to help bridge the gap between generative and contrastive methods in node classification and graph classification task. As demonstrated in Tables 1 and 2 in our paper, **DiGGR outperformed most methods based on generative frameworks**. We aim to further reduce the performance gap between generative SSL and contrastive methods in the graph domain, capitalizing on its advantage of not requiring complex, task-specific pretext tasks, and to extend its applications in graph-related fields.
### W3: Larger Datasets
Following your advice, we tested our model on the ogbn-arxiv dataset. Since our model involves reconstruction of the adjacency matrix, to further save computational cost on larger graphs we adopted the scalable sampling method from [7], computing only a portion of the adjacency matrix at a time. Our experimental setup was as follows: factor number $K=4$, learning rate = 0.0005, training epochs = 600. All other experimental settings and environments were kept consistent with those described in the main text. The results are shown below:
| Methods | ogbn-arxiv |
|:-------------:|:----------------:|
| GraphMAE | 71.75 ± 0.17 |
| GraphMAE2 | 71.95 ± 0.08 |
| DiGGR | **72.12 ± 0.08** |
It can be seen that DiGGR outperforms GraphMAE and GraphMAE2 on the ogbn-arxiv dataset, further validating the effectiveness of our approach. In the future, we will consider extending our model to other larger benchmarks. For extremely large graphs, we plan to use techniques such as PPR-Nibble[8] to generate relatively smaller local subgraph clusters. We believe that DiGGR has the potential to handle large-scale data and we are continuously working on this.
[1] Disentangled contrastive learning on graphs. NeurIPS 2021
[2] Disentangled graph contrastive learning with independence promotion. TKDE 2022
[3] Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations. TNNLS 2023.
[4] MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning. AAAI 2023.
[5] Seegera: Self-supervised semi-implicit graph variational auto-encoders with masking. WWW 2023
[6] Graphmae: Self-supervised masked graph autoencoders. KDD 2022
[7] Fastgae: Scalable graph autoencoders with stochastic subgraph decoding. Neural Networks 142 (2021): 1-19
[8] Local graph partitioning using pagerank vectors. (FOCS'06). IEEE, 2006 | Rebuttal 1:
Rebuttal: ## **Global Response**
We thank all the reviewers for their valuable suggestions on our paper. We will first address a common issue: **Differences with Previous Work**.
We have carefully reviewed the paper [1-5] you provided, and while it is true that both our method and them utilize KL divergence to optimize the latent factor, we believe our work has fundamental differences with them.
While these methods [1-5] have proven effective in graph contrastive learning, directly applying them to masked graph autoencoders poses challenges. Specifically, as Table 3 illustrates, using these methods (Non-probabilistic Factor Learning) in masked graph autoencoders hinders the learning of meaningful factors and yields little performance improvement. This difference may arise because contrastive learning, aided by data augmentation, more readily facilitates the convergence of latent factor models, whereas generative self-supervised learning, which relies solely on masking, may struggle to achieve such convergence.
In addition to experimental findings, we further clarify the following three key aspects, starting with the model itself:
1. Compared with prior works [1-5], our approach treats the latent factor as a distribution rather than a point vector. In [1-5], a prototype-based method is employed where the node's hidden representation is directly mapped to a simplex point vector via a softmax function. In contrast, our method models the latent factor $Z$ using a Gamma distribution. Leveraging the sparsity properties of the Gamma distribution, our model extracts more distinctive latent factors. To validate this claim, we conducted quantitative experiments. The table below shows the normalized mutual information (NMI) between the top two latent factors extracted by different models.
| NMI | COLLAB | IMDB-M |
|:--------------------------------:|:--------:|:--------:|
| Non-Probabilistic Factor Learning | 1.00 | 0.94 |
| DiGGR | **0.35** | **0.24** |
A smaller NMI indicates lower similarity between the latent factors, meaning that the extracted factors are more distinguishable. It can be observed that the NMI between the latent factors in DiGGR is lower than that of the non-probabilistic methods, suggesting that DiGGR can extract more distinguishable latent factors and achieve better model convergence.
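For reference, NMI between two hard factor assignments can be computed as sketched below. This is an illustrative NumPy implementation with arithmetic-mean normalization; the exact normalization convention the authors used may differ:

```python
import numpy as np

def nmi(a, b):
    """Normalized mutual information between two hard assignments.

    Normalizes by the arithmetic mean: I(a; b) / ((H(a) + H(b)) / 2).
    A low NMI between two latent factors indicates they carry
    distinct (more disentangled) information.
    """
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for x, y in zip(a, b):
        joint[x, y] += 1.0
    joint /= n                                   # empirical joint distribution
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)  # marginals
    nz = joint > 0
    mi = (joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz])).sum()
    ha = -(pa[pa > 0] * np.log(pa[pa > 0])).sum()
    hb = -(pb[pb > 0] * np.log(pb[pb > 0])).sum()
    return mi / ((ha + hb) / 2) if (ha + hb) > 0 else 1.0

# Identical assignments score 1.0; statistically independent ones score 0.0.
print(nmi([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 1.0
print(nmi([0, 0, 1, 1], [0, 1, 0, 1]))  # -> 0.0
```

In practice a library routine such as scikit-learn's `normalized_mutual_info_score` computes the same quantity.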
2. In DiGGR, learning latent factors considers both the structural information of the graph and task-related information, whereas previous works [1-4] rely solely on task-related information. Specifically, as shown in Equation 7 of the paper:
$$
L_z = E_{q(Z|A,X)}[\ln{p(A|Z)}] - \sum_{u=1}^NE_{q(z_u|A,X)}[\ln \frac{q(z_u|A, X)}{p(z_u)}]
$$
where $L_z$ includes a reconstruction term for the adjacency matrix, $\ln p(A|Z)$, addressing the graph's structural information; this distinguishes our approach from previous works [1-4].
3. Both DiGGR and [5] employ graph factorization, yet they differ notably in their treatment of latent factors. DiGGR models these factors using a Gamma distribution, leveraging its non-negative and sparse characteristics. This approach aligns DiGGR with Bayesian non-negative matrix factorization principles, known for enhancing feature disentanglement, as highlighted in literature [6]. In contrast, [5] adopts a Gaussian distribution for latent factors, allowing for the inclusion of negative values during sampling. This departure from non-negativity, as discussed in [6], potentially undermines the benefits observed in DiGGR. To validate this assertion, we extracted latent factors from both methods and evaluated their performance on downstream tasks using datasets such as IMDB-BINARY and MUTAG. The outcomes are as follows:
| | IMDB-BINARY | MUTAG |
|:-------:|:----------------:|:----------------:|
| PGD-VAE | 54.37 ± 0.21 | 84.06 ± 0.52 |
| DiGGR | **70.79 ± 0.34** | **85.12 ± 0.64** |
It can be observed that the latent factors from DiGGR outperform those from [5] on downstream tasks. This further underscores the distinctiveness and effectiveness of our latent factor learning method.
In summary, we believe that DiGGR differs from [1-5] in several key aspects. In the subsequent revisions of our paper, **we will revisit and discuss the work of [1-5] in the related work section**. Additionally, we will include a discussion section in the methodology part of the paper to help readers better understand the distinctions between DiGGR and these prior methods.
[1] Disentangled contrastive learning on graphs. NeurIPS 2021
[2] Disentangled graph contrastive learning with independence promotion. TKDE 2022
[3] Disentangled graph collaborative filtering. SIGIR 2020.
[4] Disentangled Graph Convolutional Networks. ICML 2019.
[5] Deep Generative Model for Periodic Graphs. NeurIPS 2022.
[6] Learning the parts of objects by non-negative matrix factorization. Nature 401.6755 (1999): 788-791.
Pdf: /pdf/a13e1d656b66f873b76cbbb8ced37ee0c4121b60.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Bayesian Approach for Personalized Federated Learning in Heterogeneous Settings | Accept (poster) | Summary: The authors designed a federated learning algorithm that leverages a Bayesian network to enhance the model's robustness, and uses data distillation methods to reduce overhead and adapt to scenarios with different client model structures. The authors theoretically discussed the differential privacy characteristics of the method and demonstrated its effectiveness through experiments.
Strengths: 1. The paper formally discusses the characteristics of differential privacy, ensuring privacy protection.
2. Experiments about distillation dataset sizes are quite insightful.
Weaknesses: 1. The authors stated that the paper has two contributions: 1) to address the issue of small data volumes per client by using a Bayesian network; 2) to solve the overhead problem brought by the Bayesian network and the model heterogeneity issue by using a distillation method. However, these two methods seem relatively independent and both have their own rich related works. It appears the authors simply combined these two works directly, resembling an A+B work, which is less compelling.
2. The authors used a distillation scheme, but the distillation scheme requires the server to maintain public datasets. However, in many scenarios, this is very difficult to achieve, which is a major criticism of similar distillation methods.
3. The experiments are insufficient. As a paper that mainly measures performance through experimental results, the experimental part of the main text is only 2.5 pages (of which 1.5 pages are explanations of the experimental setup). Although the authors added some additional experiments in the appendix, it is still far from sufficient, and the overall experiment was completed under only one setting. It is recommended that the authors at least consider adding the following experiments: 1) Since the paper involves data heterogeneity, different data heterogeneity scenarios (such as Dirichlet non-IID) should be tested; 2) Expand to more common neural network structures instead of using the authors' self-constructed simple CNNs. Even if the client's computing power is insufficient, lightweight networks such as MobileNet should be considered.
4. From the perspective of paper presentation, this paper has considerable room for improvement, for example: 1) The paper does not provide a framework diagram of the method, making it very difficult to understand the authors' method; 2) The writing style is not concise and clear enough, such as the second point in the contributions; 3) The margins between formulas and text and the widths of tables are not well adjusted.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Please introduce the difficulties and improvements in integrating the Bayesian network into the data distillation-based federated learning framework.
2. What is the performance of this method under other data heterogeneity or model structures?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors discussed possible social impact and application and also touched on the potential limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful comments and suggestions. We are encouraged by the reviewer's positive remarks about the formal privacy guarantee and the experiments on the alignment dataset.
We address the reviewer's specific concerns below -
-- *Clarification on the contribution*
We agree with the reviewer that Bayesian frameworks and knowledge distillation have been independently integrated into FL by previous works. However, our contributions are particularly significant for the following reasons -
1. *Knowledge distillation in Bayesian learning itself is non-trivial*. While knowledge distillation is a well-established method, the transfer of knowledge between Bayesian models has remained inadequately addressed. Our work introduces a novel method that facilitates collaboration across client-specific Bayesian models by transferring knowledge through a prior specification mechanism in the output space. This contribution significantly advances the field of Bayesian knowledge distillation, in both centralized as well as federated learning settings.
2. *Bayesian learning across heterogeneous clients in FL is novel.* The designed collaboration mechanism is pivotal in enabling efficient collaboration across models of heterogeneous architectures (encompassing varying size and shape), significantly advancing FL and Bayesian-FL procedures for heterogeneous settings. By means of such a procedure, small-compute-capacity clients are able to gain about a 10% increase in performance through collaboration with large-compute-capacity clients (as shown in Table 2 and Figure 1 in the paper). This is in contrast to conventional Bayesian-FL procedures, which work with homogeneous models and would either leave the small-compute clients out of the FL process or reduce the model capacity on the large-compute clients, leading to under-performance.
3. *Incorporating a privacy-preserving Bayesian FL mechanism is novel*. As the reviewer correctly points out our method also incorporates privacy along with Bayesian learning in heterogeneous FL settings, making it more usable for a diverse set of critical applications.
-- *On model structures in Experiments*
We would like to kindly clarify to the reviewer that we did not only use "self-constructed simple CNNs" for the experiments, as mentioned in the review. In fact, we used a combination of popular VGG-based models and CNNs to demonstrate the heterogeneity in our setting. This approach highlights the stability and performance of our method under such drastic heterogeneity. Also, under the homogeneous client setting in the experiments (detailed in subsection 5.1 of the paper) all the clients train VGG based models, which aligns with the reviewer's comment on using "more common neural network structures".
-- *On different data heterogeneity in Experiments*
Thanks to the reviewer for this suggestion; we include more experiments on varying degrees of heterogeneity below. Given the limited time for the author response, we compare the performance of our method against key non-Bayesian baselines and the most competitive Bayesian baselines identified from the previous experiments. To evaluate performance under different degrees of heterogeneity, we use the Dirichlet distribution. Specifically, for each class in the dataset, we distribute the data among 20 clients according to a Dirichlet distribution with parameter $\alpha$; smaller values of $\alpha$ create more heterogeneous distributions, while larger values create more homogeneous ones. We conduct this experiment on the CIFAR-10 dataset in a 20-client setting and report the results in the table below. Note that when $\alpha$ is low, the number of classes per client is less than 10, leading to an easier task and higher accuracy for all methods, compared to the setting where $\alpha$ is high and all 10 classes are present on each client, resulting in a more complex task. We have also added these new results to the revised version of the paper.
| Method | $\alpha$ = 0.1 | $\alpha$ = 1 | $\alpha$ = 10 |
| ---------------------- | -------------- | -------------- | -------------- |
| Local Training | 70.1 $\pm$ 1.5 | 54.5 $\pm$ 1.8 | 47.3 $\pm$ 0.7 |
| FedAvg (non-Bayesian) | 66.2 $\pm$ 1.4 | 53.7 $\pm$ 1.2 | 49.0 $\pm$ 0.5 |
| FedProx (non-Bayesian) | 66.9 $\pm$ 1.1 | 56.8 $\pm$ 0.9 | 50.1 $\pm$ 0.9 |
| pFedGP (Bayesian) | 65.7 $\pm$ 1.2 | 58.3 $\pm$ 1.9 | 51.8 $\pm$ 1.1 |
| pFedBayes (Bayesian) | 69.3 $\pm$ 1.4 | 60.1 $\pm$ 1.7 | 52.2 $\pm$ 1.2 |
| FedBNN (Ours) | 72.3 $\pm$ 1.9 | 62.5 $\pm$ 1.4 | 54.3 $\pm$ 0.7 |
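The Dirichlet-based client partitioning described above can be sketched in a few lines. This is an illustrative implementation (the `dirichlet_partition` helper and the Gamma-based sampling are our own, not taken from the paper):

```python
import random

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients: for each class, draw client
    proportions from a symmetric Dirichlet(alpha) and cut the class's
    shuffled index list accordingly. Smaller alpha -> more heterogeneous."""
    rng = random.Random(seed)
    partition = [[] for _ in range(n_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        # A Dirichlet sample is obtained by normalizing independent Gamma(alpha, 1) draws.
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(n_clients)]
        total = sum(gammas)
        # Convert cumulative proportions into cut points over the class's indices.
        cuts, acc = [0], 0.0
        for g in gammas[:-1]:
            acc += g / total
            cuts.append(int(acc * len(idx)))
        cuts.append(len(idx))
        for k in range(n_clients):
            partition[k].extend(idx[cuts[k]:cuts[k + 1]])
    return partition
```

With `alpha = 0.1` most clients receive data from only a few classes; with `alpha = 10` every client sees a roughly uniform class mix, matching the rebuttal's description of the experimental settings.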
-- *Presentation Issues*
We sincerely thank the reviewer for these valuable suggestions. We have included a new figure providing an overview of our method in the revised version of the paper. For the reviewer's reference, the figure and its caption are available at this link: https://i.imgur.com/ZBC2sMu.png.
We also apologize for the discrepancies between the margin and text in certain sections. These adjustments were made to fit the content within the 9-page limit. With the additional page allowed for the camera-ready version, these issues will be easily resolved.
---
Rebuttal Comment 1.1:
Comment: Overall, I believe the authors' rebuttal has provided some valuable new information. My updated comments are as follows:
**Experiments**
I appreciate the additional experimental data, which enhances the paper. However, the scope remains somewhat limited, with one dataset and a small number of clients. While I acknowledge the authors' efforts during the rebuttal period, I believe the experiments could be further strengthened.
**Model Structure and Presentation Issues**
Thank you for the clarifications. I apologize for any earlier misunderstandings. These explanations are helpful, and I look forward to seeing them incorporated into future revisions of the paper.
**Novelty**
I feel that the authors did not directly address my concerns regarding the novelty (e.g., concern 1). While I agree that some of the innovative aspects highlighted in the rebuttal are indeed non-trivial and novel to some extent, as I mentioned in my first weakness point, there are two issues: 1) Each point, when considered individually, lacks sufficient insight or difficulty, and the approach seems relatively straightforward (in my opinion); 2) When taken together, these innovative points do not seem to be well connected, making the overall contribution less compelling.
In summary, while some concerns have been addressed, others remain unresolved, leading me to adjust my score from 3 to 4. I hope this feedback is helpful as the authors continue through the review process.
---
Rebuttal 2:
Title: Response to Official Comment by Reviewer PtiT
Comment: Dear Reviewer PtiT,
We thank you for your detailed response to our rebuttal and sincerely appreciate your thoughtful comments. We further hope to effectively address your concerns on novelty and additional experiments through the points we make below.
-- *Additional Experiments*
We are glad to hear that you believe the additional experimental results enhance the paper and we thank you again for suggesting these new experiments. Although the limited time during the rebuttal period prevented us from completing the additionally suggested experiments on more datasets and settings, we want to assure you that we have already established the necessary experimental setup which will allow us to easily incorporate results on more datasets and clients in the next revision, and we are fully committed to doing so. We appreciate your understanding and look forward to sharing these updated findings in the final submission.
-- *Novelty*
We apologize if it seemed we did not fully address your concerns about the novelty of our work. We would like to take this opportunity to provide further clarification.
The key motivation of this work is to enhance the applicability of FL across diverse real-world settings by addressing several interconnected and prevalent challenges in federated environments. These challenges include: i) the issue of small data volume per client; ii) compute heterogeneity across clients (both of which are well-known limitations of federated learning environments); iii) the need for uncertainty quantification to measure and manage uncertainty in predictions, which is crucial for critical applications such as healthcare and legal domains; and iv) the need for privacy preservation, which is essential for ensuring the confidentiality of sensitive information in those same critical domains. To tackle these issues, we have developed an integrated end-to-end framework in which each client performs local training using Bayesian learning with its own tailored BNN architecture (which may differ across clients), and collaboration is achieved by the novel mechanism of knowledge distillation through prior specification. We also emphasize that conventional Bayesian learning methods, whether centralized or federated, often place priors in the weight space, which restricts their use in FL settings where clients have heterogeneous data and compute resources, and is also communication-intensive. Our framework is designed to effectively address these issues as well (issues that would arise from merely applying conventional Bayesian learning methods in FL) through a novel collaboration mechanism.
We recognize that some components of our approach may seem individually straightforward when viewed in isolation. However, it is crucial to understand that our approach is not just a collection of techniques but a carefully designed system that addresses multiple challenges in FL. Its appropriate formulation into a cohesive and effective framework for personalized federated learning, without introducing additional complications, represents a significant advancement in the field. We hope this further explanation clarifies the novelty and impact of our contributions, and we appreciate your thoughtful consideration of our response.
In essence, the key contributions of our work are not just in - *"1) to address the issue of small data volumes per client by using a Bayesian network; 2) to solve the overhead problem brought by the Bayesian network and the model heterogeneity issue by using a distillation method."*. We thank the reviewer for highlighting the importance of this discussion in the paper, and we will use the additional space available in the final manuscript to include this discussion.
---
Rebuttal Comment 2.1:
Title: Kindly Request for Reviewer's Feedback
Comment: Dear Reviewer PtiT,
We sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes. This is a gentle reminder that the Author-Reviewer Discussion period ends at 11:59 pm AoE on August 13. We are happy to answer any further questions you may have before then, but we will be unable to respond after that time.
If you agree that our responses to your reviews have addressed the questions you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments!
Sincerely,
Authors | Summary: The authors propose FedBNN, a novel personalized federated learning (FL) framework that leverages Bayesian principles to address challenges posed by heterogeneous data and computational resources among clients. FedBNN uses Bayesian neural networks to enable robust local training on small datasets by quantifying uncertainties. A novel collaboration mechanism involves sharing priors in the functional space of the networks, rather than directly sharing model parameters, to accommodate clients with varying computational capacities. The approach also includes a differentially private version with formal privacy guarantees. Experiments on standard FL datasets demonstrate that FedBNN outperforms existing methods, particularly in heterogeneous settings and under strict privacy constraints. The main contributions include improved robustness and efficiency in personalized FL, a novel collaboration method using functional space priors, and a formal differential privacy guarantee applicable to general settings.
Strengths: The paper presents a comprehensive evaluation of their FedBNN framework across multiple dimensions. The authors demonstrate its effectiveness on three major datasets (MNIST, CIFAR-10, CIFAR-100) under various heterogeneous settings, comparing against both Bayesian and non-Bayesian baselines. They provide detailed ablation studies to justify their design choices, including the use of functional space priors and the auxiliary dataset for collaboration. The experiments thoroughly explore the method's performance under different types of heterogeneity (data resources, compute resources, and statistical distribution), showcasing its robustness and adaptability. The authors also present a formal privacy analysis, demonstrating the framework's applicability in privacy-sensitive scenarios. Overall, the extensive experimental evaluation, coupled with the theoretical foundations, provides strong evidence for the effectiveness and significance of their proposed approach in addressing real-world challenges in federated learning.
Weaknesses: While the authors present an innovative Bayesian approach to personalized federated learning (FL), several improvements are needed. The novelty is somewhat diminished by existing methods like FedPop [1]; clearer differentiation and more comprehensive comparisons are necessary. The experimental evaluation is robust but lacks diversity in dataset types and real-world applications; incorporating more varied and complex datasets would better demonstrate generalizability. Scalability concerns are not adequately addressed, particularly regarding the computational overhead of Bayesian neural networks. The privacy analysis should delve deeper into the trade-offs between noise levels and model accuracy. More thorough ablation studies are needed to isolate the impact of individual framework components, such as functional space priors versus traditional weight space priors.
[1] Kotelevskii, N., Vono, M., Durmus, A., & Moulines, E. (2022). Fedpop: A bayesian approach for personalised federated learning. Advances in Neural Information Processing Systems, 35, 8687-8701.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed feedback. We are encouraged to see that the reviewer acknowledges the strength of our work, stating that "*Overall, the extensive experimental evaluation, coupled with the theoretical foundations, provides strong evidence for the effectiveness and significance of their proposed approach in addressing real-world challenges in federated learning*".
We address the reviewer's key concerns below -
-- *Differentiation from FedPop*
While a capability-wise contrast of our method with existing methods, including FedPop, is provided in Table 2 in the Appendix, here we provide a detailed comparison of our method with the reference suggested by the reviewer, FedPop.
1. *FedPop can only work in homogeneous FL settings, as opposed to our method, which also works in heterogeneous settings*. The FedPop method in [1] assumes a hierarchical statistical model for personalised FL across clients. The hierarchical model consists of an unknown population parameter, also referred to as the prior (denoted by β), from which the individual local parameters for each client are sampled (denoted by z1, z2, ...), where the variance of the population parameter determines the heterogeneity of the client-specific parameters. Since the clients' local parameters are sampled from the same distribution (the prior), they have an identical shape and form. As a result, these methods are effective only in settings where the clients' architectures are homogeneous. This is also evident in the aggregation mechanisms used in the algorithms, which perform element-wise aggregation over the parameters. However, the problem setting we address involves Bayesian learning in a heterogeneous FL setting, where clients have different computational and data resources. In such a scenario, it is not feasible to utilize FedPop.
2. *Our method is much more communication efficient*. Since the FedPop method involves transmitting model parameters between clients and the server in each communication round, its communication cost is proportional to the number of model parameters, which can run into millions. In contrast, our method only transmits the outputs on the alignment dataset between clients and the server in each communication round. This significantly reduces the communication cost, as it involves outputs for approximately 5000 data points, which is much smaller in size.
3. *Incorporating a privacy-preserving Bayesian FL mechanism is novel*. Our work presents a privacy-preserving method for Bayesian FL along with a bound on the privacy loss; no privacy analysis or privacy-preserving algorithm is included in FedPop.
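The communication argument in point 2 can be made concrete with back-of-the-envelope arithmetic. The model size and class count below are illustrative assumptions (the rebuttal only states "millions" of parameters and roughly 5000 alignment points), not figures from the paper:

```python
# Per-round communication: transmitting model parameters (FedPop-style)
# versus transmitting output probabilities on an alignment dataset.
BYTES_PER_FLOAT = 4

def param_exchange_bytes(n_params):
    # Full parameter vector sent each round.
    return n_params * BYTES_PER_FLOAT

def output_exchange_bytes(n_alignment_points, n_classes):
    # One probability vector per alignment point sent each round.
    return n_alignment_points * n_classes * BYTES_PER_FLOAT

params_cost = param_exchange_bytes(10_000_000)   # assumed ~10M-parameter model
outputs_cost = output_exchange_bytes(5_000, 10)  # ~5000 points, 10 classes
ratio = params_cost / outputs_cost               # 200x smaller payload
```

Under these assumed numbers the output-based exchange is two orders of magnitude cheaper; the exact factor of course depends on the model size and number of classes.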
-- *Privacy Analysis*
We clarify to the reviewer that the trade-off between model accuracy and the privacy guarantee is included in Table 4 in the Appendix. Since there is a direct relationship between the noise parameter and the privacy budget, as established in Theorem 4.1, the results in Table 4 also capture the trade-off between model accuracy and the noise parameter.
-- *Functional space priors versus traditional weight space priors*
We appreciate the reviewer's suggestion for more thorough ablation studies to isolate the impact of individual framework components, such as functional space priors versus traditional weight space priors. Knowledge transfer between Bayesian models through priors in functional space is one of the key contributions of this work. These functional space priors are essential to our method, enabling effective knowledge transfer and collaboration across client-specific models in heterogeneous federated learning settings.
To isolate the effect of functional space priors versus weight space priors, one can compare our method (which uses functional space priors) against a baseline with traditional weight space priors. We direct the reviewer to the results in Table 1, which compare the pFedBayes method (which uses weight space priors) with our method in the homogeneous setting. Additionally, the results in Figures 1(a) and 1(b) can be used to compare the impact of functional space and weight space priors across varying degrees of heterogeneity among clients. These results demonstrate the impact of functional space priors in achieving superior performance.
---
Rebuttal Comment 1.1:
Title: Kindly Request for Reviewer's Feedback
Comment: Dear Reviewer 4fxY,
We sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes. This is a gentle reminder that the Author-Reviewer Discussion period ends at 11:59 pm AoE on August 13. We are happy to answer any further questions you may have before then, but we will be unable to respond after that time.
If you agree that our responses to your reviews have addressed the questions you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments!
Sincerely,
Authors | Summary: This work integrates Bayesian neural networks, knowledge distillation, and differential privacy into the federated learning framework. The resulting framework can maintain privacy during training and provide uncertainty estimates for its predictions. Empirical results demonstrate that the proposed method outperforms previous works in the given experimental setting.
Strengths: 1. This work is well-written. The authors have clearly explained the motivation, methodology, and experimental settings.
2. This work addresses data and system heterogeneity and privacy, touching on all the major challenges of federated learning.
Weaknesses: 1. Bayesian frameworks and knowledge distillation have been independently integrated into federated learning and studied by many works. Although this work combines them with differential privacy effectively and shows good results, it has limited novelty as an academic research paper.
2. There are limited experimental results. The main results are presented in Table 1, where the experimental setting considers 20 clients and label distribution shift. Given the complexity of practical situations, the authors could consider other experimental settings to demonstrate the generality of the method, such as different types of data heterogeneity, varying numbers of clients, and different numbers of data points across clients. I note that in the appendix, the authors also provide results for 500 clients, but there are only a few numbers. A more comprehensive study similar to Table 1 would be preferable.
Technical Quality: 2
Clarity: 3
Questions for Authors: Since only features of the public dataset are transferred, this framework should be communication efficient. The authors could consider plotting accuracy versus transmitted bits to emphasize the strength of the proposed method.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s valuable feedback. We are pleased to see that the reviewer finds the work well-written, noting that "*The authors have clearly explained the motivation, methodology, and experimental settings*" and recognizing that "*This work addresses data and system heterogeneity and privacy, touching on all the major challenges of federated learning*".
We address the reviewer's key concerns below -
-- *"Bayesian frameworks and knowledge distillation have been independently integrated into federated learning and studied by many works. "*
We agree with the reviewer that Bayesian frameworks and knowledge distillation have been independently integrated into FL by previous works. However, our contributions are particularly significant for the following reasons -
1. *Knowledge distillation in Bayesian learning itself is non-trivial*. While knowledge distillation is a well-established method, the transfer of knowledge between Bayesian models has remained inadequately addressed. Our work introduces a novel method that facilitates collaboration across client-specific Bayesian models by transferring knowledge through a prior specification mechanism in the output space. This contribution significantly advances the field of Bayesian knowledge distillation, in both centralized as well as federated learning settings.
2. *Bayesian learning across heterogeneous clients in FL is novel.* The designed collaboration mechanism is pivotal in enabling efficient collaboration across models of heterogeneous architectures (encompassing varying size and shape), significantly advancing FL and Bayesian-FL procedures for heterogeneous settings. By means of such a procedure, small-compute-capacity clients are able to gain about a 10% increase in performance through collaboration with large-compute-capacity clients (as shown in Table 2 and Figure 1 in the paper). This is in contrast to conventional Bayesian-FL procedures, which work with homogeneous models and would either leave the small-compute clients out of the FL process or reduce the model capacity on the large-compute clients, leading to under-performance.
3. *Incorporating a privacy-preserving Bayesian FL mechanism is novel*. As the reviewer correctly points out our method also incorporates privacy along with Bayesian learning in heterogeneous FL settings, making it more usable for a diverse set of critical applications.
-- *"Experimental Results"*
We would like to clarify that, apart from the results in Table 1 and the Appendix, we also present results across many different settings in Figure 1, where performance is compared across varying degrees of heterogeneity in the non-IID and IID settings. For example, as the reviewer suggests, performance with different numbers of data points across clients is depicted in Figure 1(c) of the paper, and varying compute heterogeneity is depicted in Figures 1(a) and 1(b).
Thanks to the reviewer for this suggestion; we include more experiments on varying degrees of heterogeneity below. Given the limited time for the author response, we compare the performance of our method against key non-Bayesian baselines and the most competitive Bayesian baselines identified from the previous experiments. To evaluate performance under different degrees of heterogeneity, we use the Dirichlet distribution. Specifically, for each class in the dataset, we distribute the data among 20 clients according to a Dirichlet distribution with parameter $\alpha$; smaller values of $\alpha$ create more heterogeneous distributions, while larger values create more homogeneous ones. We conduct this experiment on the CIFAR-10 dataset in a 20-client setting and report the results in the table below. Note that when $\alpha$ is low, the number of classes per client is less than 10, leading to an easier task and higher accuracy for all methods, compared to the setting where $\alpha$ is high and all 10 classes are present on each client, resulting in a more complex task. We have also added these new results to the revised version of the paper.
| Method | $\alpha$ = 0.1 | $\alpha$ = 1 | $\alpha$ = 10 |
| ---------------------- | -------------- | -------------- | -------------- |
| Local Training | 70.1 $\pm$ 1.5 | 54.5 $\pm$ 1.8 | 47.3 $\pm$ 0.7 |
| FedAvg (non-Bayesian) | 66.2 $\pm$ 1.4 | 53.7 $\pm$ 1.2 | 49.0 $\pm$ 0.5 |
| FedProx (non-Bayesian) | 66.9 $\pm$ 1.1 | 56.8 $\pm$ 0.9 | 50.1 $\pm$ 0.9 |
| pFedGP (Bayesian) | 65.7 $\pm$ 1.2 | 58.3 $\pm$ 1.9 | 51.8 $\pm$ 1.1 |
| pFedBayes (Bayesian) | 69.3 $\pm$ 1.4 | 60.1 $\pm$ 1.7 | 52.2 $\pm$ 1.2 |
| FedBNN (Ours) | 72.3 $\pm$ 1.9 | 62.5 $\pm$ 1.4 | 54.3 $\pm$ 0.7 |
---
Rebuttal Comment 1.1:
Title: Kindly Request for Reviewer's Feedback
Comment: Dear Reviewer yjRB,
We sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes. This is a gentle reminder that the Author-Reviewer Discussion period ends at 11:59 pm AoE on August 13. We are happy to answer any further questions you may have before then, but we will be unable to respond after that time.
If you agree that our responses to your reviews have addressed the questions you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments!
Sincerely,
Authors | Summary: The paper proposes FedBNN, a Bayesian approach for personalized federated learning. The approach relies on the availability of an auxiliary small public unlabelled dataset, called Alignment Dataset, that can be used as a mean to distill knowledge across clients. In FedBNN, clients maintain an estimate of a posterior distribution over the model parameters, which are updated locally using Bayes-by-Backprop. The clients' posteriors are aggregated through Monte Carlo sampling on the Alignment Dataset. FedBNN has a differentially private implementation which consists in adding a Gaussian noise to the output of the forward pass of the local models on the Alignment Dataset.
The paper conducts thorough numerical experiments to quantify the performance of the FedBNN, and demonstrates that it outperforms SOTA baselines on a wide range of datasets and heterogeneity levels and types.
----
Post Rebuttal
The rebuttal addresses my concerns on the DP guarantees, as such I raise my score to 6.
Strengths: * The paper is overall well written and easy to follow.
* The proposed FedBNN handles both statistical and system heterogeneity. In particular, it allows each client to have a personalized architecture.
* The numerical experiments are rigorous and considers a relatively large number of settings and competitors.
Weaknesses: * FedBNN relies on the availability of a public auxiliary dataset. It is true that an auxiliary dataset is often available, but sometimes none is.
* I am not sure how to interpret Theorem 4.1 and if the DP mechanism of the paper is correct. On one hand, Theorem 4.1 does not take into consideration the noise multiplier of the DP mechanism, which should be related to $\sigma_g^2$. On the other hand, the DP mechanism presented in the paper does not have any kind of clipping or normalization, which is usually expected. The proof in Appendix C, claims that $\Delta$ is smaller than two, but it only provides a short justification. (*Note: I am not a DP expert*).
**Given the current doubt on Theorem 4.1, I am leaning towards rejection, but I am willing to adjust my rating when this point is clarified.**
Technical Quality: 3
Clarity: 2
Questions for Authors: * Could you elaborate more on the DP analysis? In particular, I would appreciate it if you could formally prove why $\Delta < 2$ in the proof of Theorem 4.1 in Appendix C. Also, could you please explicitly show the effect of $\sigma_g$ on the DP guarantees in Theorem 4.1.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: FedBNN relies on the availability of a public auxiliary dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer rtXt,
We are thankful to the reviewer for the thorough feedback on our work.
It is encouraging for us to see that the reviewer found the paper easy to read and follow, appreciated the rigorous "*experimental evaluation that considers a wide range of settings and competitors*", and recognized the method's effectiveness in addressing both statistical and system heterogeneity.
Below we address the reviewer's concerns on the DP analysis -
-- *Dependence on $\sigma$*
We acknowledge the concern regarding the omission of the noise parameter $\sigma$ from the main theorem, Theorem 4.1, of the paper. However, we elaborate on this in the proof of the theorem presented in Appendix C (Theorem C.7). The first line of the proof states that $\rho = \dfrac{\Delta^2}{2 \sigma^2}$, establishing the relationship between the noise parameter $\sigma$ and the $(\epsilon, \delta)$ parameters of differential privacy. We have now also stated this relationship in Theorem 4.1 itself in the revised version of the paper.
-- *Clarification on the bound on $\Delta$*
The sensitivity, denoted as $\Delta$, is defined in Definition C.4 of the paper as the $L_2$-sensitivity: the maximum change in the $L_2$ norm of the algorithm's output between two neighboring datasets differing in at most one data point. Let $D$ and $D'$ be two neighboring datasets that differ in one data point present at the $i^{th}$ row (without loss of generality), let $\Phi(D(i,:))$ be the $n_c$-dimensional (number of classes) output probabilities from the model $\Phi$ for the $i^{th}$ row datapoint in $D$, and let $\Phi(D'(i,:))$ be the output probabilities for the $i^{th}$ row datapoint in $D'$. The $L_2$ sensitivity of $\Phi$ is -
$\Delta (\Phi) = || \Phi(D) - \Phi(D')||_2$
Since all other data-points between $D$ and $D'$ are identical, the $L_2$ sensitivity of $\Phi$ becomes -
$\Delta (\Phi) = || \Phi(D(i,:)) - \Phi(D'(i,:))||_2$
Now, $\Phi(D(i,:))$ and $\Phi(D'(i,:))$ are both probability distributions, therefore it can be seen that the squared $L_2$ norm of their difference is bounded by 2, i.e., $\Delta(\Phi)^2 \leq 2$ (the maximum occurs when $\Phi(D(i,:))_k = 1$ and $\Phi(D'(i,:))_l = 1$ for two separate indices $k \neq l$).
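This bound is easy to sanity-check numerically. The sketch below (our own illustration, assuming NumPy; not part of the paper's analysis) draws random probability vectors and confirms that the squared $L_2$ distance never exceeds 2, with the extreme attained by one-hot vectors on different indices:

```python
import numpy as np

rng = np.random.default_rng(0)

# For any two probability vectors p, q over n_c classes: ||p - q||_2^2 <= 2.
worst = 0.0
for _ in range(5000):
    n_c = int(rng.integers(2, 20))    # arbitrary number of classes
    p = rng.dirichlet(np.ones(n_c))   # random probability vector
    q = rng.dirichlet(np.ones(n_c))
    worst = max(worst, float(np.sum((p - q) ** 2)))

# The supremum 2 is attained only by one-hot vectors on different indices.
extreme = float(np.sum((np.array([1.0, 0.0]) - np.array([0.0, 1.0])) ** 2))
print(worst, extreme)
```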
We greatly appreciate the opportunity provided by the reviewer to clarify the privacy analysis. We have now included these details in the revised version of the paper.
Best,
Authors
---
Rebuttal 2:
Title: Kindly Request for Reviewer's Feedback
Comment: Dear Reviewer rtXt,
We sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes. This is a gentle reminder that the Author-Reviewer Discussion period ends in just around 12 hours from this comment, i.e., 11:59 pm AoE on August 13. We are happy to answer any further questions you may have before then, but we will be unable to respond after that time.
If you agree that our responses to your reviews have addressed the questions you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments!
Sincerely,
Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Precipitation Downscaling with Spatiotemporal Video Diffusion | Accept (poster) | Summary: This paper presents a novel framework for spatio-temporal precipitation downscaling, comprising two modules: a deterministic downscaling module and a diffusion module. The model is able to outperform the SOTA models, especially in extreme events and in mountainous areas.
Strengths: 1. Multiple losses, such as PE and EMD, are used to measure the effectiveness of the results. At the same time, the discussion of the trade-off between realism and bias is novel and informative, using MSE to represent the average accuracy of predictions and PE to represent the model's ability to reproduce extreme events.
2. The model outperforms six strong super-resolution baselines and can be established as a new standard for data-driven precipitation downscaling.
Weaknesses: Experiments are insufficient. For example, the effectiveness of sharing features across the modules is not proven by ablation experiments.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In line 121, "bicubic interpolation" does not have an explanation. Can you explain it?
2. In line 121, what are "pixelated features"?
3. Is the input, additional climate states, the L1 data or L2 data?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Is this work of practical value? The authors acknowledge that switching to a different dataset requires retraining due to differences in data distribution. However, can the model be trained if high-resolution ground truth images are not available? If high-resolution images are available, does it make sense to reduce their resolution before training the model?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Thank you for taking the time to read our work. We are happy you find novelty in the realism-distortion tradeoff and appreciate the PE and EMD metrics. We address your concerns as follows:_
- ___“...the effectiveness of the sharing features across the modules is not proven by ablation experiments.”:___
We thank the reviewer for suggesting an additional ablation study. Due to time constraints, we are unable to run this ablation for the rebuttal, but we will include this ablation in an updated version of our paper. However, we feel that this design choice is a relatively minor part of our overall architecture, and our existing experiments demonstrate the strength of our proposed method (reviewer EgC8) and ablate the key components of our model (reviewer St3G). We would be glad to discuss any additional experiments the reviewer feels would further strengthen our submission.
- ___Explanation of "bicubic interpolation":___
Bicubic interpolation is a resampling method that uses the values of the 16 nearest pixels (4x4 grid) to estimate the value of each pixel in the upsampled image. This method ensures a smoother and higher-quality image than simpler methods like nearest-neighbor or bilinear interpolation. In our context, since the input and output dimensions of our UNet are the same, we use bicubic interpolation to upsample the low-resolution input to match the high-resolution output dimensions (scale factor of 8). We will include this explanation in the revised manuscript for clarity.
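As a concrete illustration of this upsampling step (our own sketch, assuming Pillow is available; the field values are arbitrary, not from the paper's data):

```python
import numpy as np
from PIL import Image

# Hypothetical 16x16 low-resolution precipitation field
low_res = np.random.rand(16, 16).astype(np.float32)

# Bicubic upsampling by a scale factor of 8: each output pixel is
# interpolated from a 4x4 neighborhood of input pixels.
scale = 8
img = Image.fromarray(low_res, mode="F")  # 32-bit float image
high_res = np.asarray(img.resize((16 * scale, 16 * scale), Image.BICUBIC))

print(high_res.shape)  # (128, 128)
```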
- ___Explanation of "pixelated features":___
By "pixelated features," we refer to traditional pixelation artifacts or block artifacts that can occur in super-resolution tasks. These artifacts are characterized by a blocky appearance in the upscaled image, where individual pixels or groups of pixels become visibly distinct, leading to a loss of detail and smoothness. We will clarify this in the manuscript.
- ___“Is the input, additional climate states, the L1 data or L2 data?”:___
We appreciate the reviewer's question about the input data and additional climate states. However, we are unsure about the specific reference to "L1 data" or "L2 data" in this context. Could you please clarify what you mean by these terms? In our study, the input includes both the primary low-resolution precipitation data and additional climate states. These additional covariates are provided at a low resolution. Refer to the STVD-single experiment to highlight the importance of this side data. Additionally, we clarify in the appendix how these side data were selected.
- ___“Is this work of practical value?...”:___
As discussed in [1], fluid-dynamical emulators of the global atmosphere are too expensive to run routinely at such fine scales. So the climate adaptation community relies on “downscaling” of coarse-grid simulations to a finer grid. Our work builds on vision-based super-resolution methods to improve statistical downscaling by allowing a cheap run of fluid-dynamics emulators on a coarse grid followed by a downscaling using our model on the region of interest. Our model's practical value lies in its ability to provide quick and cheap high-resolution downscaling for regions of interest. Additionally, our model emulates long-term annual trends pretty well, which is critical for applications like water availability and management. The main motivation behind this project was to create computationally efficient and realistic proxies for climate emulators:
- Generate high-resolution training data for a limited time horizon (e.g., one year).
- Train the super-resolution model using this data.
- Generate 'cheap' climate predictions at low resolution and use the super-resolution model for focused regional predictions.
Please also refer to our global response for a discussion of this point.
We will also add a discussion in the paper revision on the general issue of data distribution shifts and the potential for developing domain adaptation techniques for climate applications, highlighting it as an interesting future research direction. This context will underscore the practical value and broader applicability of our approach.
_Again, we appreciate the time you took reading our work. We will integrate our responses, along with additional clarifications, into the paper. These enhancements will contribute to the overall readability, clarity and completeness._
_[1] Stevens, B., Satoh, M., Auger, L., Biercamp, J., Bretherton, C. S., Chen, X., ... & Zhou, L. (2019). DYAMOND: the DYnamics of the Atmospheric general circulation Modeled On Non-hydrostatic Domains. Progress in Earth and Planetary Science, 6(1), 1-17._
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. I acknowledge I have read the rebuttal. | Summary: The method proposes a diffusion model to statistical downscale precipitation. The model requires a combination of high resolution and low resolution video data for training (a common scenario in weather and climate modeling). The diffusion model takes in a video of low resolution atmospheric variables and outputs a sample video of high-resolution precipitation.
In experiments, the model is trained on an ensemble of year-long runs from the standard US forecasting system (FV3GFS). Comparisons are made to deterministic downscaling models. The models are evaluated using mean squared error (MSE), continuous rank probability score (CRPS), earth mover distance, 99.999th percentile error (PE), and spatial autocorrelation error (SAE). The results confirm the advantage of using a probabilistic method in estimating the risk of extreme events.
Strengths: - The use of machine learning to downscale atmospheric variables, particularly precipitation, is a hot topic in weather forecasting and climate modeling. This paper will be of interest to the community.
- Nice ablation studies showing the importance of the temporal dimension and additional atmospheric variables.
Weaknesses: - There have been a number of papers using diffusion models for downscaling (Hatanaka, et al. 2023 for solar irradiance) and nowcasting (Gao, et al. 2023 and Yu, et al. 2024 for precipitation). These are missing from the related work section.
- The use of transformer-based models is not justified. It seems likely that vision transformer models have a poor inductive bias for this task, which probably has a lower degree of long-range spatiotemporal dependency than in natural images/video. The authors mention "key adaptations to the attention mechanism" but these appear to be simplifications that reduce computation and increase the locality bias --- have the authors tried removing attention all together?
- Some typos that could be caught with a spell checker.
- The baselines used for comparison are weak. They are all deterministic, so it is expected that they will fail to capture the extremes. I don't think it is necessary to have comparisons, but I think this point could be made more clearly in the text.
- In experiments, only 10 samples are taken from the diffusion model. This seems small in the context of estimating the risk of extreme events.
- Some clarifications would be helpful (see questions below).
Technical Quality: 3
Clarity: 3
Questions for Authors: - 280: For an annual average, the diffusion model isn't really necessary. Wouldn't any of the deterministic statistical downscaling models perform just as well? This should be clarified.
- 294: It was unclear from the text why additional sampling steps in the STVD increases MSE. I imagine this is because the deterministic downscaling model predicts the mean of the possible rainfall (low MSE) and sampling from the distribution of residuals with the diffusion model will almost certainly increase this. So more sampling steps means a better sample of the residual, which means a higher MSE?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - As the authors note, the model is demonstrated on simulation data where high-resolution video inputs are provided. In many applications of statistical downscaling, the input and output are from different sources (e.g. observed high-res vs. GFS model output or reanalysis), and there may be inconsistencies. So the experiments done here are on "clean" data, and further experiments are needed to evaluate the method on applications where it will be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Thank you for taking the time to read our manuscript. We are happy you see the utility of our work in climate modeling and appreciate the ablation study. We address your concerns as follows:_
- ___Missing related work:___
We appreciate the reviewer's suggestion, and will include these references. Hatanaka et al. focus on solar irradiance, and Gao et al. and Yu et al. focus on nowcasting. Our work addresses the challenges posed by the temporal dynamics in precipitation downscaling by leveraging temporal attention mechanisms to encourage consistency over time. We emphasize that downscaling fundamentally differs from nowcasting, as we are provided the entire low-resolution video when doing downscaling and thus do not predict future frames.
- ___“use of transformer-based models not justified… have the authors tried removing attention?”:___
Before addressing this, we would like to clarify that our model's backbone is not a Vision Transformer but a Conv UNet with Spatial and Temporal Attention blocks applied to the convolutional features. Removing attention would remove the temporal context, resulting in our “STVD-1” ablation, where the performance degrades appreciably. Although 3D convolutions are an alternative, they are very expensive. To visualize the inner workings of temporal attention, we have added Figure 2 in the attached file in the global response section. It visualizes the temporal attention map from the bottleneck layer of the downscaler. We have averaged this map over the whole validation data and multiple attention heads. As one may expect, we see that, on average, attention decays as a function of time lag, meaning that the model learns to assign more weight to features that are temporally closer.
- ___Typos:___
Thank you for pointing this out. We will correct all typos in the final draft.
- ___“comparisons are weak... all deterministic... expected that they will fail to capture the extremes…”:___
Note that not all baselines we used are deterministic. Swin-IR-Diff and VDM are diffusion-based baselines and thus stochastic. Despite being diffusion-based, our experiments reveal that these methods do not capture extreme events as effectively as our model. As shown in Table 1, our method outperforms these baselines in both the PE and EMD metrics. Furthermore, Figure 4 demonstrates that the precipitation distribution produced by our model closely matches the actual distribution, particularly in the extremes, better than the distributions generated by the baselines. Additionally, we have used more competitive vision VSR baselines, acknowledging that most state-of-the-art baselines for VSR remain deterministic.
- ___“only 10 samples, seems small…”:___
We may have miscommunicated this. To clarify, the histogram in Figure 4 is calculated across *all grid points* over the entire one year of validation data (~$10^8$ points). The 10 samples referred to in the context are the 10 stochastic samples generated from the same input condition, which are used specifically for calculating CRPS.
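For reference, CRPS over a set of stochastic samples can be estimated with the standard empirical ensemble formula. The sketch below is our own illustration (assuming NumPy), not the paper's implementation:

```python
import numpy as np

def crps_ensemble(obs: float, ens: np.ndarray) -> float:
    """Empirical CRPS of an ensemble against a scalar observation:
    E|X - obs| - 0.5 * E|X - X'|, averaged over ensemble members."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return float(term1 - term2)

# A perfect deterministic ensemble scores 0; a biased one scores its error.
print(crps_ensemble(0.0, np.zeros(10)))  # 0.0
print(crps_ensemble(1.0, np.zeros(10)))  # 1.0
```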
- ___“annual average, the diffusion model isn't necessary…”:___
We stress that predicting annual averages was not our prediction goal but rather serves as a *diagnostic* with high relevance to climate modeling practitioners. The primary use case of our model is to provide high-resolution instantaneous downscaling, which captures detailed local behavior and dynamic precipitation patterns. While deterministic methods may capture broad annual trends, our model simultaneously captures both local behavior and annual patterns, which are crucial for practical applications such as water availability and management.
- ___“unclear why additional sampling steps in the STVD increases MSE…”:___
Yes, precisely, your interpretation is correct. We agree that this could have been explained more clearly in the submission, and we will include a more detailed discussion in the final draft. To elaborate, the conditional mean is the theoretical minimizer of the MSE, and any deviation from this conditional mean would result in a higher MSE -- even if the resulting deviation is more realistic. Regarding the relationship with sampling steps, fewer sampling steps correspond to taking larger time steps in the diffusion process. At one extreme, taking a single step would correspond to predicting the conditional mean (see [1] for a discussion), minimizing the MSE. Conversely, increasing the number of sampling steps results in a more accurate simulation of the diffusion process, producing more diverse and realistic samples, and thus increasing the MSE while decreasing the PE as shown in Figure 3.
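This MSE effect can be reproduced in a toy setting (our own illustration, assuming NumPy; the Gaussian target is an assumption, not the paper's data): predicting the conditional mean minimizes MSE, while drawing a sample from the true conditional distribution, although statistically faithful, roughly doubles it because the variances add.

```python
import numpy as np

rng = np.random.default_rng(1)

y = rng.normal(0.0, 1.0, size=200_000)        # ground truth ~ N(0, 1)
mse_mean = np.mean((y - 0.0) ** 2)            # predicting the mean: ~1.0
samples = rng.normal(0.0, 1.0, size=200_000)  # "realistic" stochastic prediction
mse_sample = np.mean((y - samples) ** 2)      # ~2.0 (variances add)
print(mse_mean, mse_sample)
```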
- ___“the experiments done here are on "clean" data, and further experiments are needed…”:___
As discussed in [2], fluid-dynamical emulators of the global atmosphere are too expensive to run routinely at such fine scales. So the climate adaptation community relies on “downscaling” of coarse-grid simulations to a finer grid. Our work builds on vision-based super-resolution methods to improve statistical downscaling by allowing a cheap run of fluid-dynamics emulators on a coarse grid followed by a downscaling using our model on the region of interest. Please also refer to our global response for a discussion of this point.
_Thank you again for taking the time to read our work, as well as for the many suggestions and typo highlighting. We will incorporate these changes in the paper revision to improve readability, as well as the additional content prompted by your feedback._
_[1] Karras, T., ... & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems, 35, 26565-26577._
_[2] Stevens, B., Satoh, M., Auger, L., Biercamp, J., Bretherton, C. S., Chen, X., ... & Zhou, L. (2019). DYAMOND: the DYnamics of the Atmospheric general circulation Modeled On Non-hydrostatic Domains. Progress in Earth and Planetary Science, 6(1), 1-17._
---
Rebuttal Comment 1.1:
Comment: Hi, we wanted to follow up on our recent rebuttal submission to ensure that our responses adequately addressed your concerns. If there are any remaining issues or if you have any further questions, we would be grateful for your feedback.
Thank you for your time and consideration. | Summary: This paper extends video diffusion model to precipitation super-resolution, where a deterministic downscaler is used to produce initial results and a temporally-conditioned diffusion model is utilized to refine previous coarse results. By combing deterministic and statistical downscaling models, "mode averaging" problems are obviously alleviated. Experimental results demonstrates its effectiveness.
Strengths: 1. The paper is well organized, the motivations and method details are clearly described.
2. The explanation of professional terms is very good, so that researchers in other fields can easily read the paper.
3. It is very reasonable to predict the low-frequency part with a deterministic model and then generate the high-frequency residual with a statistical model.
4. The comparisons with other methods in experiment parts seems sufficient.
Weaknesses: There are some statements, method and experiment details remain to be clear.
1. Is there any design that guarantees that the output high-resolution frames are smooth over the time series?
2. For the high-frequency prediction part, if given the same conditions but different sampling noise, will output completely different results?
3. The authors state that the generative models can capture multimodal conditional distributions and alleviate underestimation of extreme precipitation. Could the authors show experimentally that their approach is better at modeling extreme precipitation than traditional supervised methods?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to Weaknesses part.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The authors clearly point out the limitations of their work and the negative social impact after Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Thank you for taking the time to read our work. We are pleased that you find the paper easy to read, appreciate the residual nature of our model, and find the experiments sufficient. We address your concerns as follows:_
- ___“...any design that guarantees that the output high-resolution frames are smooth over time?”:___
To encourage the temporal smoothness of high-resolution frames, our model employs temporal attention mechanisms in both the mean downscaling and residual diffusion modules. This smoothness is not explicitly hard-coded, but the fact that the input is a frame sequence and processed with temporal cross-attention enables the model to generate a temporally-coherent output sequence. This approach aligns with other video diffusion works [1] that achieve temporal coherence without hard-coding. For reference, note that we have provided sample output __videos in the supplementary zip__ (california.gif, himalaya.gif), which are as temporally smooth as the ground truth.
- ___“...same conditions but different sampling noise, will output completely different results?”:___
Yes, as a conditional generative model, our approach produces different, stochastic high-resolution output sequences under the same input conditions. These samples differ in their high-resolution details but are all consistent with the low-resolution input. This variability is crucial in climate modeling as it captures the inherent uncertainty and variability in weather patterns, essential for generating ensemble forecasts. This is why we include CRPS as an evaluation metric since it helps us compare a set of stochastic samples against a deterministic ground truth. For better visualization, we have included __Figure 1 in the attached file in the global response section__. The figure depicts five stochastic samples that our model (STVD) generated for the same precipitation event in the Sierra Nevada region, as shown in the main paper. Additionally, we provide a variance map that depicts variability across these samples. Interestingly, we find that high variance correlates with high precipitation regions, reflecting the fact that precipitation is a highly stochastic and hard-to-predict phenomenon.
- ___“...show experimentally that their approach is better at modeling extreme precipitation than traditional supervised methods?”:___
We agree that demonstrating our model's effectiveness in capturing the precipitation distribution, particularly extreme precipitation, is essential. In our paper, we address this through two metrics, Earth Mover Distance (EMD) and the 99.999th percentile error (PE), as highlighted in Table 1. These methods refer to the annual precipitation distribution, shown in Figure 4. Table 1 indicates that our method results in a low PE and EMD, indicating that we capture the annual precipitation distribution well and better than other methods. Figure 4 visually reveals that our approach also captures the tail end of this distribution better than other methods. This is what we mean by the statement that our approach is “better at modeling extreme precipitation”. We will clarify this in the paper.
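Both metrics are straightforward to compute for one-dimensional precipitation distributions. The sketch below is our own illustration with synthetic heavy-tailed samples (assuming NumPy/SciPy; the gamma distributions and sample sizes are assumptions, not the paper's data):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed "truth" and "prediction" precipitation samples
truth = rng.gamma(shape=0.5, scale=2.0, size=500_000)
pred = rng.gamma(shape=0.5, scale=1.8, size=500_000)

# Earth Mover Distance between the two 1-D precipitation distributions
emd = wasserstein_distance(truth, pred)

# 99.999th percentile error: discrepancy at the extreme tail
pe = abs(np.percentile(truth, 99.999) - np.percentile(pred, 99.999))
print(emd, pe)
```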
_Thank you again for taking the time to review our work. We hope that our clarifications and additional figures on stochastic samples address your concerns. We will incorporate the additional content prompted by your feedback in the paper revision._
_[1] Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., & Fleet, D. J. (2022). Video diffusion models. Advances in Neural Information Processing Systems, 35, 8633-8646._
---
Rebuttal Comment 1.1:
Comment: Hi, we wanted to follow up on our recent rebuttal submission to ensure that our responses adequately addressed your concerns. If there are any remaining issues or if you have any further questions, we would be grateful for your feedback.
Thank you for your time and consideration. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to review our work and appreciate the detailed feedback and constructive comments. In this response, we address a common point about a claimed limitation of the paper. All other questions are addressed in reviewer-specific responses.
Some reviewers pointed out the use of “clean data” for training (e.g., the availability of paired high-resolution/low-resolution samples) as a potential practical limitation. We stress that this setup is an established way of performing precipitation downscaling, commonly called the 'perfect prediction' paradigm according to the taxonomy of [1]. In this approach, the coarse-grid features used for super-resolution are derived by coarse-graining the fine-grid data used for training. This most basic super-resolution task is an important first step toward a longer-term goal: training super-resolution models to enhance coarse-grid features from an inexact emulator of coarse-grid meteorology to super-resolve the fine-grid data. This process can be broken into two steps: (1) correcting biases in the emulator compared to the coarsened fine-grid data, and (2) performing the ‘perfect prediction’ task. The first step is emulator-specific, while the second is more generic and of broader interest.
In addition, some reviewer questions asked for additional investigations. We provide here a pdf containing the results of those investigations, containing
- Figure 1: Visualization of stochastic samples and variance map
- Figure 2: Visualization of temporal attention
Specific context and discussions are presented in the corresponding review rebuttals.
Many thanks,
The authors
_[1] Rampal, N., Hobeichi, S., Gibson, P. B., Baño-Medina, J., Abramowitz, G., Beucler, T., ... & Gutiérrez, J. M. (2024). Enhancing Regional Climate Downscaling through Advances in Machine Learning. Artificial Intelligence for the Earth Systems, 3(2), 230066._
Pdf: /pdf/dadf4ac317d94579bff2f2f0cd5ca4d23d9e0087.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Inverse M-Kernels for Linear Universal Approximators of Non-Negative Functions | Accept (poster) | Summary: The authors combine
- the notion of M-matrix from linear algebra
- the notion of positive semidefinite kernel from kernel learning theory
to create the notion of an M-kernel. The M-kernel can be used to solve non-negativity-constrained least squares in an RKHS.
Strengths: - The idea is novel, to my knowledge
- The fusion of these two ideas is an elegant way to efficiently solve the problem of non-negative regression in an RKHS
- The experiments are thorough and demonstrate the theory well
Weaknesses: - There aren't many kernels that are known to be M-kernels (it looks like we only know of one-dimensional kernels). The authors acknowledge this.
- There are many small grammar/writing errors.
Technical Quality: 4
Clarity: 2
Questions for Authors: - What is $\gamma$ on L209?
- On L213, is this tridiagonal formula well-known? what is the main proof idea/reference citation to this result?
Typos/word choice/grammar. Number at the front denotes line number.
- 45, "reminiscent" -> "analogue"/"generalization"/"extension" ("reminiscent" isn't used much in mathematical writing to my knowledge)
- 45, "belong" -> "belonging"
- 48 "input space" -> "input spaces"
- 49 "shed the light" -> "shed light"
- 65 "invokes" -> "is given by"
- 71 "which is known as the" -> "a property known as"
- 73 "the computation of O(N 2) and O(N )" -> "O(N 2) and O(N ) kernel evaluations"
- 94 "represents the positive semi-definiteness of matrices" -> "is the positive semi-definite constraint"
- 176 "notorious" -> "well-known" ("Notorious" means "famous for something bad". Unusual for usage in a mathematical context.)
Confidence: 5
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the highly positive comments, by which we are strongly encouraged! We feel sorry for the grammatical flaws in this paper. Despite this, we appreciate your thorough understanding of our work's content and value. We promise to improve the grammatical quality of this paper using a professional English proofreading service. Below we provide a detailed response to each of the comments.
**There isn't a lot of kernels that are known to be M-kernels (it looks like we only know for 1-dimensional kernels).**
As the reviewer pointed out, this paper succeeded in only showing two examples of strict inverse M-kernels, which could be a limitation of this paper. However, we are optimistic about the future. This paper successfully, for the first time, clarified a novel condition that allows a linear model to achieve both non-negativity and good representation power, that is, the condition of strict inverse M-kernels. By presenting the condition to a high-level NeurIPS audience, we believe that the open question of how to construct strict inverse M-kernels in a more general manner would be addressed eventually.
**There are many small grammar/writing errors**
We apologize for the inconvenience and appreciate the kind suggestions about writing, which we will reflect in our paper. We will ensure that the writing quality of the paper is improved through professional proofreading.
**What is $\gamma$ on L209?**
$\gamma$ on L209 is the hyperparameter of the intersection kernel $k_{\text{int}}(x,x')$, which should be determined based on data. We will ensure that it is clearly stated in the paper. If $\gamma$ on L339 confused the reviewer, we apologize for it: $\gamma$ on L339 is a typo for $s(N+1)$.
**On L213, is this tridiagonal formula well-known?**
We derived the tridiagonal formula using known properties of symmetric Toeplitz matrices (e.g., see Trench 2001) as a reference. The derived tridiagonal formula is a straightforward generalization of the inverse of Toeplitz matrices, but we could not find any references that explicitly state the tridiagonal formulae for the one-dimensional exponential and intersection kernels. We will add the above discussion to the paper.
Trench, W. F. (2001). Properties of some generalizations of Kac-Murdock-Szego matrices. Contemporary Mathematics, 281, 233-246.
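As an illustrative numerical check (our own sketch, not the paper's derivation): for the one-dimensional exponential kernel $k(x,x') = \exp(-|x-x'|)$, the Gram matrix on sorted points is a Kac-Murdock-Szego-type matrix whose inverse is tridiagonal, which is the kind of structure the tridiagonal formula refers to.

```python
import numpy as np

# Sorted 1-D inputs and the exponential-kernel Gram matrix.
x = np.array([0.0, 0.7, 1.1, 2.0, 2.6, 3.3, 4.1, 5.0])
K = np.exp(-np.abs(x[:, None] - x[None, :]))
Kinv = np.linalg.inv(K)

# Zero out the main diagonal and the two adjacent diagonals;
# everything that remains should be numerical noise.
off = Kinv.copy()
for d in (-1, 0, 1):
    off -= np.diag(np.diag(Kinv, d), d)
print(np.max(np.abs(off)))  # close to machine precision
```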
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. I maintain my opinion that this paper is a nice contribution to the community, including posing new interesting questions. I will keep my score. | Summary: The paper considers learning in RKHS with non-negativity constraints $f(x) \geq 0$, which turns up with surprising regularity in areas of machine learning. The proposed approach uses an elegant "trick" involving restricting the kernel selection to the so-called inverse $M$-kernels, which allows the constraint to be enforced in a straightforward manner (i.e., adding a simple linear constraint to the optimization).
Strengths: This paper addresses a problem which I have encountered numerous times in my own research, as such global constraints on GP models seem ubiquitous in applied Bayesian optimization. I have therefore seen (and used) any number of ad-hoc approaches to dealing with such constraints, typically involving either inducing point grids (which can add significant computational complexity) or latent-space constructs - log-space, function squared etc - that may distort the model in unhelpful and unintuitive ways. This paper is the first proposal I have seen that appears to offer what I would consider a satisfactory solution to the problem.
- the problem is well-motivated.
- presentation is mostly readable (though see my concerns below) and the flow of the paper is clear.
- as far as I can tell the solution is mathematically sound.
Weaknesses: The role of the function $s(N)$ needs to be discussed in more depth. Reading the paper I am left with the impression that $s$ is a bit of a fudge-factor: ideally you would only want to work with strict inverse M-kernels, but in practice they're hard to find so you compromise by including $s$. My problem here is that the set of functions $s$ is too broad, including potentially arbitrary scaling with $N$. Speculatively speaking, maybe you could discuss $s$ in big-O terms - you could even categorize inverse M-kernels this way, which would give you a sort of hierarchy (you might argue that a kernel for which $s \sim O(N)$ suffices is in a sense better than one requiring $s \sim O(N^2)$). Alternatively, would it be reasonable to borrow from NTK theory (where weight matrices are scaled according to width) and scale the kernel itself by $\frac{1}{N}$ and replace $s$ with a constant?
I am a little disappointed that the experiments don't include multidimensional examples but it is clear from the discussion that this is a difficult problem so I am willing to overlook this point.
Minor point: please take the time to run this paper through a spell-checker!
Technical Quality: 4
Clarity: 2
Questions for Authors: See weaknesses section.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the highly positive and constructive comments. It is incredibly encouraging to know that a reviewer with practical experience in the estimation of non-negative functions finds value in our work! Below, we provide a detailed response to each of the comments.
**The role of the function needs to be discussed in more depth...**
We thank the reviewer for the pertinent comments. We agree with the reviewer that the role of $s(N)$ is ambiguous in the paper. Following the reviewer's suggestion, we will introduce $s$ as a non-negative factor in the definition of inverse M-kernels and discuss a possible scaling of $s$ with the data size $N$: the ideal case is $s = 0$ (strict inverse M-kernels), where the inverse M-kernel model $f_{\text{IMK}}$ has the greatest representation power, but we can only find such kernels in the one-dimensional input scenario; by exploiting the strict path product condition (for details, see Future Work and Limitations), we can find inverse M-kernels with $s \sim \mathcal{O}(N)$ regardless of the input dimensionality, where $f_{\text{IMK}}$ has limited representation power for large $N$; an essential next step is to investigate a more scalable factor (e.g., $s \sim \mathcal{O}(N^{1/2})$) adapted to the observed data, which could relax the limited representation power of $f_{\text{IMK}}$ in multi-dimensional input settings.
**please take the time to run this paper through a spell-checker!**
We apologize for the inconvenience. We will improve the grammatical quality of the paper using English proofreading services. We appreciate the reviewer's effort in reading our paper despite the writing issues.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am happy to keep my recommendation unchanged as I think this is a good (though not perfect) contribution tackling an important problem. | Summary: The paper proposes a new kernel family that can be used to fit non-negative functions tractably, which has theoretical novelty. The universal approximation properties are analysed, and a connection is made to permanental processes. The method is applied to univariate regression, density estimation and intensity estimation.
Strengths: The method contribution has clear originality and novelty.
The paper is clearly written and presented.
The results are overall good.
Weaknesses: The method is limited to fitting univariate non-negative functions, which is disappointing. Scalar functions can already be handled with existing techniques. This limits the paper's significance to low in its current form.
The results are a bit uneven, and sometimes the new method does not seem to fit that well. In Fig1 the QNM is slightly better, in Fig2 the QNM is arguably much better. In FigB2 the new method is much better than baseline, but the fit is still quite poor. Furthermore, all experiments are very simple, and baselines quite simple as well. For instance, there are no GP-based or Cox-process based competing methods. There are also no comparisons to non-negative splines.
I'm afraid the paper has not been able to find a purpose for their good idea, or show what is the open problem that only it can solve.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No issues
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for carefully reading our paper and giving valuable comments. While the reviewer has positive evaluations of the core part of this paper (Soundness: good, Presentation: good, Contribution: good), we understand that the critical concern of the reviewer lies in the ambiguity of the paper's purpose and significance. We can clearly state the purpose and significance of our work, and therefore, we believe that we can fully dispel the concern raised by the reviewer. Below, we provide a detailed response to each of the comments.
**The method is limited to fitting univariate non-negative functions, which is disappointing. Scalar functions can already be handled with existing techniques. This limits the paper's significance to low in its current form...**
**I'm afraid the paper has not been able to find a purpose for their good idea, or show what is the open problem that only it can solve.**
This paper tackles the *open problem of whether a linear model can simultaneously satisfy non-negativity and universal approximation*. Through the novel concept of inverse M-kernels, we have, for the first time, demonstrated that the answer is 'Yes', at least in the one-dimensional input scenario. While non-linear models that satisfy both non-negativity and universal approximation have been proposed, no such linear model exists among existing techniques. The linearity of a model offers significant computational advantages in both learning and prediction of the latent function, and as Reviewer wAiw kindly commented, its practical value is significant, even in the case of one-dimensional inputs. A comparison of CPU times between QNM (a non-linear model) and our linear model in Tables 1 and 2 clearly demonstrates that our model achieves computational speeds tens to hundreds of times faster than QNM.
Based on the reviewer's comments, we found that our explanation of the paper's purpose and significance was inadequate. We will add the above discussion in the introduction.
**In Fig1 the QNM is slightly better, in Fig2 the QNM is arguably much better.**
This is a misunderstanding, likely because the reviewer overlooked that the evaluation metric $d_{\text{KL}}$ is the Kullback-Leibler divergence (the lower, the better): in Figure 2, QNM is slightly inferior to our model in terms of $d_{\text{KL}}$. We apologize for the unclear explanation. We will add the statement "the lower, the better" to the legend and the main text.
**In FigB2 the new method is much better than baseline, but the fit is still quite poor.**
Actually, the estimation result achieved by our model in Figure B2 is by no means poor. Event sequence data generated from a point process is inherently noisy, and it is challenging to estimate the intensity function from such data accurately. For example, the center of Figure 2 in (John & Hensman, 2018) shows the estimation result using a variational Bayesian method for the same underlying intensity function, where the result is similar to that of our model in Figure B2. To clarify it, we implemented and evaluated one of the SOTA methods, the structured variational approach with sigmoidal Gaussian Cox processes (Aglietti et al., 2019). Please see the following response about intensity estimation for more details.
**Furthermore, all experiments are very simple, and baselines quite simple as well. For instance, there are no GP-based or Cox-process based competing methods.**
*Regression and Density Estimation*:
In this paper, we focus on point estimation rather than interval estimation, where GP-based models usually reduce to kernel methods that include our model. An exception is the approach by (Pensoneault et al., 2020), which imposes the non-negativity constraints on finite virtual points by utilizing the confidence intervals of posterior distributions. It is a very different approach from the kernel method that cannot access posterior distributions. However, as mentioned in Section 2.2, the approach is out of the scope of our paper because it does not guarantee non-negativity at locations other than the points. We believe that NCM, QNM, and SNF are appropriate baselines in the literature.
*Intensity Estimation*:
As the reviewer pointed out, Gaussian Cox processes (GCPs) are the gold standard for intensity estimation. Actually, the intensity estimator defined in Section 3.2 is the MAP solution of the permanental process, a variant of GCP assuming that the square root of the intensity function is generated from a Gaussian process. Therefore, the reviewer's claim that there are no Cox process-based methods in the experiments is inaccurate. However, from the reviewer's comments, we found that other GCP-based methods should be included in the baselines to clarify the contribution of our paper within the context of intensity estimation: in the literature on GCPs, the advantage of permanental processes over other GCPs has been considered to be the efficient estimation algorithm, and our work improves the predictive performance of the fast-to-compute permanental processes. We added the estimation result of the structured variational Bayesian approach with sigmoidal Gaussian Cox processes (Aglietti et al., 2019), denoted by STVB. We implemented STVB with the TensorFlow code provided by (Aglietti et al., 2019), with 100 inducing points regularly aligned within the observation domain. Table C1 and Figure C1 (please see the pdf file in the global response) show that our model achieves comparable predictive accuracy while being hundreds of times faster than STVB, which highlights our work's contribution.
John and Hensman (2018). Large-scale Cox process inference using variational Fourier features. International Conference on Machine Learning.
Aglietti et al. (2019). Structured variational inference in continuous cox process models. Advances in Neural Information Processing Systems 32.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response!
I can’t follow the response wrt the linear+universal models. Linear functions are not universal, since they can only fit simple straight functions. Maybe you mean non-linear kernel functions, but they are then not linear (in the ambient space). There is clearly some confusion about terminology.
Also, can’t we do universal non-negative kernel functions already eg. by having an exponential link function on a (latent) kernel function?
Based on the response I think in FigB2 you take Aglietti’s GP model, and change the kernel. Is this accurate? But surely this can’t be true: the Cox-GP is already non-negative by exponential link function construction, so adding M-kernels there should do nothing. You are probably doing some bigger changes to the CGP model, perhaps by removing the exp-function. This does not seem to be defined in the paper. The M-kernel+CGP combination is mysterious to me.
It also seems that Aglietti’s model is not properly executed: the Fig. B2 IEK gives a pretty bad fit, and it does not look like the plots in Aglietti or Hensman. Maybe the code is not that great, but one should still try to tune the results until they look roughly like those in the papers.
Overall I’m still not yet convinced of the paper’s merits. The theoretical contribution seems substantial, but if it’s limited to univariate cases I don’t see benefit. I think the main empirical contribution is that the M-kernels can make simple univariate fitting tasks faster, but I don’t think it really matters whether we can do it in 5 seconds or 0.05 seconds. I would assume that if we really care about having a good model, one would happily spend hours doing eg. MCMC to build a robust model.
Also, doing non-negative spline fitting is well known, and does not seem to suffer from computational bottlenecks. The paper should at least discuss them.
Maybe a good demonstration of the method could be some really long time-series (eg. millions of points). Or perhaps the main contribution could be accelerating the CoxGP models, which would be very useful since GP inference is a major bottleneck. However to show this I think the paper should give a much more comprehensive treatment of how the M-kernels can be used to adapt CGP, and how they improve over the CGP domain in general [ie. comparing against a single CGP method from 5 years ago is not sufficient].
I still vote for rejection, but I will raise my score to 4, and would not object acceptance given others' positive reviews.
---
Reply to Comment 1.1.1:
Comment: We really appreciate the reviewer's responses. Unfortunately, the reviewer still seems to have misunderstood some points of our work, which we believe can be dispelled fully by the following discussion.
**I can’t follow the response wrt the linear+universal models. Linear functions are not universal, since they can only fit simple straight functions. Maybe you mean non-linear kernel functions, but they are then not linear (in the ambient space). There is clearly some confusion about terminology.**
We use the term "linear model" to refer to a finite linear combination of (non-linear) kernel functions evaluated at data points. As the reviewer suggests, the term never means linear regression or linear kernels. Although the term is consistent with the terminology used in the most relevant reference (Marteau-Ferey et al. 2020), we recognize that it may be confusing. Would the reviewer find it acceptable to adopt "linear representation," which is often used in the literature on kernel methods and the representer theorem? We are willing to follow the reviewer's suggestion!
**can’t we do universal non-negative kernel functions already eg. by having an exponential link function on a (latent) kernel function?**
If the reviewer is stating that $f(\cdot) = \exp \Bigl( \sum_n \alpha_n k(x_n,\cdot) \Bigr)$ is a universal non-negative model, then it may be correct, but it is NOT what we are aiming for: we try to find a universal non-negative model of the form, $f(\cdot) = \sum_n \alpha_n k(x_n,\cdot)$, in this paper.
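To make this concrete, here is a small numerical illustration of our own (a sketch, assuming, as the rebuttal's examples suggest, that the one-dimensional exponential kernel is a strict inverse M-kernel): with $f(\cdot) = \sum_n \alpha_n k(x_n,\cdot)$, imposing non-negativity only at the data points, i.e., the linear constraint $K\alpha \geq 0$, keeps $f$ non-negative everywhere, even though the coefficients $\alpha_n$ themselves may be signed.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 12)
K = np.exp(-np.abs(x[:, None] - x[None, :]))  # 1-D exponential kernel

# Interpolate arbitrary non-negative target values at the data points,
# i.e., impose the linear constraint f(x_n) = v_n >= 0.
v = np.abs(rng.normal(size=x.size))
alpha = np.linalg.solve(K, v)  # coefficients are signed in general

# The linear representation stays non-negative on a dense grid,
# well beyond the data points themselves.
grid = np.linspace(-1.0, 5.0, 2000)
f = np.exp(-np.abs(grid[:, None] - x[None, :])) @ alpha
print(f.min())  # >= 0 up to numerical error
```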
**Based on the response I think in FigB2 you take Aglietti’s GP model, and change the kernel. Is this accurate?**
No. IEK in Figure B2 is not Aglietti's GP model, but the intensity estimator with reproducing kernels/permanental process (we adopted Gaussian kernels). If the reviewer is confusing Figure B2 (in Appendix) with Figure C1 (in the pdf file uploaded in rebuttal), then we did not change the kernel in the provided code, where Gaussian kernel is implemented: Figure C1 shows the result of Aglietti’s GP model with Gaussian kernel.
**the Cox-GP is already non-negative by exponential link function construction, so adding M-kernels there should do nothing. You are probably doing some bigger changes to the CGP model, perhaps by removing the exp-function. This does not seem to be defined in the paper. The M-kernel+CGP combination is mysterious to me.**
The reviewer is mistaken about Aglietti's model: the model employs a sigmoidal link function, not an exponential link function. Furthermore, we did not change the structure, link function or kernel function of Aglietti's model.
**It also seems that the Aglietti’s model is not properly executed: the FigB2IEK is giving pretty bad fit, and it does not look like the plots in Aglietti or Hensman. Maybe the code is not that great**
Please see Figure C1 (uploaded at the rebuttal period) for the result of Aglietti's model, not Figure B2 which does not contain Aglietti's model. Also, IEK in Figure B2 is a different model from Hensman's model (and of course, Aglietti's model), and thus IEK's poor performance is not inconsistent with Aglietti's and Hensman's papers.
**Overall I’m still not yet convinced of the paper’s merits... I don’t think it really matters whether we can do it in 5 seconds or 0.05 seconds. I would assume that if we really care about having a good model, one would happily spend hours doing eg. MCMC to build a robust model.**
First and foremost, we would like to express that we respect the reviewer’s perspective. However, we would also like to emphasize that in the field of machine learning, the development of faster estimation methods is widely recognized as an essential research topic, and many researchers are actively working on this issue. We hope that the reviewer will take this fact into consideration. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their valuable comments. We responded in as much detail as possible. We believe that we can entirely dispel the concerns raised by the reviewers. Please see the added figure (Figure C1) in response to Reviewer WaA4's suggestion.
Pdf: /pdf/9636818fa26c0c68152b6da4dff4cc018571421f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Policy-shaped prediction: avoiding distractions in model-based reinforcement learning | Accept (poster) | Summary: The paper addresses the problem of distractions in model-based RL by proposing a policy-shaped prediction (PSP) method, which combines segmentation and adversarial learning to accurately identify and prioritize policy learning on crucial parts of the dynamics, incorporating saliency maps from image-based environments. The authors achieve this by deriving importance signals from the gradients of the policy w.r.t. input images and aggregating the importance signal into a gradient-based weighting via the SAM segmentation network. Along with the new distraction-suppression algorithm, the paper introduces a novel distraction benchmark. Overall, the authors show that the proposed PSP demonstrates up to 2x improvement in robustness to distractions while attaining scores similar to baselines in distraction-free settings.
Strengths: The motivation behind proposed approach is sound and clearly explained.
Empirical results support the main claims of the paper, proving the efficiency of proposed method.
Main novelty stems from enhancing the application of policy-gradient based weighting of the important features by reformulating weight factor in terms of mean of different segmentation masks, provided from powerful segmentation models.
Detailed description of novel distraction benchmark is provided, showcasing the ability of PSP to be robust against static adversarial perturbations, surpassing previous methods.
Authors provide concise and dense experiments, ablation studies, along with all the necessary reproducibility details.
Weaknesses: 1) It is not clear whether much simpler segmentation models would work, because the method seems to be agnostic to the choice of the segmentation model. Evaluation of additional segmentation models would support the paper and could potentially increase the overall speed of PSP training.
2) In the new benchmark description, Section 3.2, the authors claim that the background distractions are predictable and dependent on the agent’s actions and dynamics. However, the most natural distractions are independent of the agent, and ignoring such dynamic cases greatly limits the usefulness of the proposed benchmark.
3) The paper does not mention domain adaptation approaches in RL, which solve exactly same task of learning general policy, invariant to distractions or other kinds of domain shifts.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) It would be beneficial to include images of saliency maps during training and to discuss how they vary.
2) Could the authors change SAM in PSP to another segmentation model? How would the final results change?
3) How well the algorithm performs on dynamically changing distractions? (e.g noisy TV)
4) How accurate are the SAM masks? How said inaccuracies affect the weight factor in Eq (2) on the overall objective in Eq (1)?
5) Could the proposed method be applied in model-free RL setting?
6) What are the benefits of using PSP against domain adaptation algorithms, which are also robust to perturbations as they learn general invariant representations of agent dynamics/environment (known to be robust to domain shifts)? (e.g., refer to Raychaudhuri et al, "Cross-Domain Imitation Learning from Observations")
7) In Table 2, the results for Reafferent environment with gradient weighting and with the adversarial action head seem to be very similar to the case when both of those are removed (383.1 +- 23.8 and 379.0 +- 46.8). Could the authors provide some reasoning for why is this the case?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors fully addressed possible limitations and potential societal impacts in Section 5 of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their excellent comments.
**Evaluation of additional segmentation models**:
Thank you for this great suggestion. We now include an evaluation of the 'tiny' variant of the recently released SAM2 segmentation model. This increases the speed of the segmentation step by over 2x (in a very unoptimized implementation) and yields similarly performant results (Fig. R1A).
**Dependent vs. independent distractions**:
Existing benchmarks such as Distracting Control Suite target the described scenario -- with distractions that are independent from the agent, and natural videos as the distracting background. The goal of our Reafferent environment is to complement these existing benchmarks by focusing on scenarios where distractions are predictable and dependent on the agent. The Reafferent environment is not meant to replace the Distracting Control Suite, but rather to augment it in an important and challenging way.
**Relation to domain-adaptation algorithms**:
Thank you for bringing up the interesting connection to domain adaptation algorithms. In general, the domain adaptation setting is somewhat different from our problem: in domain adaptation, (e.g. Raychaudhuri et al) there is an expert environment (which in our case might be an environment without distractions) and an agent environment (perhaps with distractions). The task is to adapt policies from the non-distracting environment to the distracting environment. This is an interesting and potentially fruitful approach, but it is also quite different from our setting in which the distractor is potentially always present. Additionally, even if the setting were directly applicable, Raychaudhuri et al. only uses a low dimensional joint-level state, and it is unclear how or if that would scale to high dimensional visual observations. Nevertheless, we will add discussion of the conceptual relevance of domain adaptation to the manuscript. Thank you.
**Images of saliency maps during training**:
Thank you for the great suggestion, we now show example saliency maps from across training (Fig. R1F). Qualitatively, the saliency maps appear to improve their specificity across training.
**Dynamically changing distractions (e.g. noisy TV)**:
Thank you for raising the interesting scenario of a noisy TV. Interestingly, existing algorithms such as baseline DreamerV3 actually have no problem dealing with such distractions: because the distraction is not predictable, the model learns to simply ignore it. Thus as the distractions become more dynamically changing and less predictable, they may tend to become less problematic. In Fig. R1K, we provide an example training curve from a baseline DreamerV3 agent in a cheetah environment with randomly-changing background distractions, showing that it is able to succeed in the face of this white noise distraction.
**Accuracy of SAM masks**:
We investigated this important question in two ways. First, we implemented PSP with the tiny variant of SAM2 and observe similar results as with SAM. Second, we investigated the segmentation masks directly and observe evidence that precise details of the segmentation may not matter much. In Fig. R1B, for example, in a performant PSP agent, the cheetah is sometimes split into two, three, or four segments, which suggests a degree of insensitivity to the exact details of the segmentation. Because the segmentation only acts as a way to average together gradient signals (Eq. 2), as long as it groups things of relatively similar importance, the precise details and exact accuracy do not appear to matter much.
**Application to model-free RL**:
This is an interesting question, thank you. In model-free methods the latent state is already learned based on the gradient signals from the policy and value function, in contrast to the model-based settings (such as Dreamer) where the latent state is also informed by a reconstruction loss. Indeed, as we show, the model-free baseline DrQv2 performs quite well in the distracting settings. Our work is focused on bringing some of the advantageous features of the model-free approach into the model-based realm.
However, it is quite possible that incorporating segmentation via an additional channel in the input image or adding an action-prediction head could clean up the image embedding in the model-free RL setting. This would be very interesting to investigate in future work.
**Ablation results**:
This is a great observation that some of the ablation results appear similar to each other. This observation touches on three key points.
First, the latter score (379.0) *does* have policy gradient weighting, it just does not use the segmentation model to aggregate gradients. The comparison with no gradient weighting and no action head would be the unaltered DreamerV3 of Table 1 (which scores substantially worse, 158.4 ± 45.7), which we have now included in the ablation table.
Second, while there is a similar score in both cases on the Reafferent environment, there is a substantially better score with the full PSP approach on the Unaltered environment (712.3 vs 418.2).
Third, while the gradient signal alone is adequate to boost performance on the reafferent environment, the gradient signal alone is noisy and this can disrupt the higher scores that are attainable on the unaltered environment. By incorporating the additional methods, especially the segmentation, we can reduce the noisiness of the gradient signal. We now include an additional ablation: policy gradient weighting + segmentation but without the action head (Fig. R1I), showing that the segmentation improves performance on the unaltered environment (compare the second and fifth row). Combining the gradient signal, segmentation, and adversarial action prediction as PSP then yields good performance on both environments.
---
Rebuttal Comment 1.1:
Comment: I appreciate all the clarifications. As a reviewer who gave the highest score to this work, I do *not* think I should be increasing the score any higher. For the discussion period, however, I do want to underscore that the authors conducted a lot of new meaningful experiments for their rebuttal. On the other hand, the response to 1AxW on the added complexity appears to be too empirical, without any attempts of assessing the complexity analytically (even if taking into account a sound ablation effort). Hence, I keep the original score.
---
Reply to Comment 1.1.1:
Title: Response to aBfS response
Comment: ### Overview
We deeply appreciate the reviewer taking the time and effort to review our additional results, and we thank this reviewer for noting the meaningful addition these results contribute to the work. **We believe we have addressed each of this reviewer's stated concerns**, including with three new experiments (SAM2, noisy TV, and additional ablations) and additional data (example saliency maps, example segmentations). Furthermore, we believe these *new and expanded results address most of the concerns of all reviewers* and hope they will be taken into account when determining scores. Thank you again for your effort and for your thoughtful and constructive comments.
### Analytical complexity
In terms of the response to 1AxW, we are happy to discuss the analytical properties of PSP. There are four major components of PSP that affect performance. In order of additional compute: (1) Policy gradient based weighting, (2) image segmentation (e.g. with SAM) (3) action adversarial head, and (4) segmentation-based aggregation of the gradient weighting.
*Policy gradient based weighting*: For each latent state produced by the encoder RSSM in each step of a rollout, we take the gradient of the policy with respect to the image pixel inputs. This autodifferentiation yields a complexity of O((E + R + P)\*S^2\*W\*H), where E is the number of encoder parameters, R is the number of RSSM parameters, P is the number of policy parameters, S is the number of steps in a rollout, W is the input image width, and H is the image height.
There are a couple of major opportunities for optimization in future implementations. First, for debugging and experimental reasons, we have been computing the gradient of the policy with respect to all rollout steps instead of just the current step. This reveals an opportunity to reduce the computational burden by a factor of S, where S is 64 in our implementation; we note that reducing this contribution could likely yield a radical speedup in future implementations. Second, we take the gradient of the policy with respect to the image during rollouts only to enable visualization for debugging and figures. This is, strictly speaking, unnecessary overhead and could be eliminated for a gain during training.
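To make the gradient-weighting idea concrete, here is a toy sketch of our own (plain NumPy with a hand-written chain rule, not the authors' JAX implementation): the importance map is the absolute gradient of the scalar policy output with respect to each input pixel.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
img = rng.normal(size=(H, W))

# Toy one-hidden-layer policy: action = w2 . tanh(W1 @ pixels)
W1 = rng.normal(size=(16, H * W)) / np.sqrt(H * W)
w2 = rng.normal(size=16) / 4.0

def policy(x):
    return w2 @ np.tanh(W1 @ x.ravel())

# Analytic gradient of the scalar action w.r.t. each pixel (chain rule
# through tanh), reshaped into a per-pixel importance map.
h = np.tanh(W1 @ img.ravel())
grad = (w2 * (1.0 - h**2)) @ W1
saliency = np.abs(grad).reshape(H, W)
```

In the actual method this gradient would be taken through the learned encoder and recurrent state, so this is only a structural illustration of the saliency computation.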
*Image segmentation*: The details of the additional complexity depend on the specific algorithm used. With our addition of results using SAM2 tiny, we now demonstrate the viability of using different segmentation algorithms. Notably, the segmentation process is entirely parallelizable separately from the training process: images are segmented as they are collected and stored in the replay buffer. We find that segmentation is *not* a bottleneck in training speed. Moreover we find that advances in segmentation algorithms (e.g. SAM2) reduce the computational resource requirements for this process.
*Adversarial head*: For each training step, we take the gradient of the loss of the action prediction head with respect to the parameters of the encoder. Therefore, the cost is O((E + R + A)\*S), where A is the number of parameters of the action prediction head. These gradients are added to the regular gradients during a train step, so the adversarial head does not introduce additional iterations during training.
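The gradient-addition step can be sketched minimally as follows (a hypothetical illustration; the parameter names and the `scale` coefficient are not from our implementation). The point is that the adversarial gradients are summed element-wise with the regular world-model gradients for the shared encoder parameters before a single optimizer step, so no extra iterations are introduced.

```python
import numpy as np

def combine_gradients(world_model_grads, adversarial_grads, scale=1.0):
    """Sum adversarial-head gradients into the world-model gradients.

    Both args are dicts mapping parameter name -> gradient array; parameters
    not touched by the adversarial head pass through unchanged.
    """
    return {name: g + scale * adversarial_grads.get(name, 0.0)
            for name, g in world_model_grads.items()}

g_wm = {"encoder/w": np.array([1.0, 2.0])}
g_adv = {"encoder/w": np.array([0.5, -0.5])}
combined = combine_gradients(g_wm, g_adv)
```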
*Segmentation aggregation of gradients*: We take the mean value of the gradient within each mask by multiplying each mask by the gradient weighting and then taking the Hadamard product of the resulting image (matrix) with the inverse of the sum of mask elements that equal 1 (i.e., dividing by the mask's pixel count). This results in an additional cost of O(M\*W\*H), where M is the number of masks.
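As a small self-contained sketch of this aggregation (names illustrative): each pixel's gradient weight is replaced with the mean weight over the segment it belongs to, so the weighting becomes piecewise-constant per segment.

```python
import numpy as np

def aggregate_by_masks(weighting, masks):
    """Mask-wise mean aggregation of a gradient weighting.

    weighting: (H, W) array of gradient magnitudes.
    masks: (M, H, W) boolean arrays, one per segment.
    Returns an (H, W) array where each pixel holds the mean weight of its mask.
    """
    out = np.zeros_like(weighting, dtype=float)
    for mask in masks:
        n = mask.sum()
        if n > 0:
            # Masked sum divided by the mask's pixel count = per-segment mean.
            out[mask] = (weighting * mask).sum() / n
    return out

w = np.array([[1.0, 3.0],
              [0.0, 4.0]])
masks = np.array([[[True, True], [False, False]],
                  [[False, False], [True, True]]])
print(aggregate_by_masks(w, masks))
# [[2. 2.]
#  [2. 2.]]
```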
While these analytical properties can guide further improvements to the algorithm, we ultimately note that the set of hardware and software used is a very important component of actual wall-clock performance. Many elements of complexity are parallelizable on a GPU, and thus may not actually contribute greatly to wall-clock time. Additionally, JAX and cuDNN make choices about the exact algorithms that implement the high-level graph, which in our experience can impact not only wall-clock time but even numerical stability. Therefore, we believe that any assessment of performance must include concrete code and empirical measurements, which is why we focused on presenting empirical comparisons. Thank you for the opportunity to clarify. | Summary: This paper introduces Policy-Shaped Prediction (PSP), a method in model-based reinforcement learning (MBRL) designed to focus on significant aspects of an environment by reducing the influence of distracting information. PSP incorporates a pre-trained segmentation model, a task-aware reconstruction loss, and adversarial learning to enhance the robustness and focus of MBRL agents. The method is evaluated using a benchmark tailored to assess its efficacy against intricate and predictable distractions that do not aid in decision-making.
Strengths: - The introduction of a segmentation model to MBRL is innovative and appears effective based on the results presented.
- The paper demonstrates improvements over existing approaches in terms of focusing learning on relevant environmental features and maintaining performance in distraction-filled settings.
- Methodology extends well-known concepts with new mechanisms, potentially setting a foundation for further exploration in distraction-resistant MBRL.
Weaknesses: - PSP exhibits high variance in performance, particularly noted in tasks like Hopper Stand, indicating that the method may not consistently achieve the best results across all scenarios.
- The implementation of PSP, especially with the use of segmentation models like SAM and adversarial learning, is resource-intensive, which might limit its practical applicability in environments with limited computational resources.
- The added complexity of PSP, including policy-gradient weighting and adversarial components, might pose challenges for implementation and stability during training.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Given the resource-intensive nature of PSP, how feasible is it to apply this method in real-world applications, such as robotics or autonomous driving, where computational resources and real-time performance are critical?
- How does PSP perform in dynamic environments where the reward structure or salient features change over time? Is the method adaptable to such scenarios?
- How sensitive is PSP to the quality of the segmentation model used? Would a less accurate segmentation model significantly impact the performance of PSP?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the PSP shows impressive performance in the introduced benchmark, its dependency on a high-quality segmentation model could limit its deployment in varied or less-controlled environments. Furthermore, the paper lacks a detailed discussion on the potential computational overhead introduced by PSP components.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments.
**High variance in Hopper Stand:** Thank you for this observation. Hopper Stand is a challenging environment with fairly sparse reward. Indeed none of the other MBRL methods were ever able to achieve success on this task in the Reafferent environment (the maximum score of any method across this grand total of 12 runs was 8.7), whereas PSP achieved a score as high as 377.5. Thus, while all baseline methods consistently exhibited low performance on this task, across all runs in our experiments, PSP was uniquely able to reach a higher score.
Additionally, we note that we actually had a fourth run of PSP on Hopper Stand with the Reafferent background that was quite promising, achieving a score of 389 by step ~900K, but it crashed (irrecoverably) before completing 1M steps and was therefore not included in our tally. Thus, 2/4 runs with PSP yielded an agent that figured out how to successfully stand, whereas 0/12 MBRL baseline runs did (Fig. R1J).
**Applicability in environments with limited computational resources:**
Thank you for this great point. We highlight that the added burden of PSP only occurs during training, and not policy inference, which is unchanged. Additionally, the adversarial learning, on its own, does not introduce significant additional computational burden, as the action head is much smaller than the image encoder. We see that the ablation in which only this head is added requires only 11% additional training time (Fig. R1L).
Regarding the expense of applying SAM, we acknowledge the burden this model introduces. We initially chose SAM because it was the most likely to have good segmentation quality across environments. However, this model has also spawned a number of cheaper segmentation approaches that could make our method far less resource-intensive. In particular, the recently released SAM2 can achieve >6x inference speed relative to SAM (Table 6 of Ravi et al. 2024) and thus can reduce the required resources. We expect metrics like this to further improve as models are distilled, optimized, and improved. As highlighted before, we have now included results with the 'tiny' variant of SAM2, which exhibits similar PSP performance to SAM (Fig. R1A).
**Complexity adding challenges for implementation and training stability**:
Thank you for raising these points. Empirically, we observe that PSP is stable on Cheetah Run. It is less stable on Hopper Stand, but this is a challenging environment in which the baselines are substantially less performant, and PSP does not appear to be less stable than the baselines.
In terms of implementation complexity, we introduce several important changes, which our ablations show to be critical. Our implementation is therefore admittedly more complex than the baseline. However, the changes primarily consist of adding a few extra terms to the loss function and incorporating a segmentation module, and the reference implementation we provide demonstrates that it is fairly straightforward to make these additions to an algorithm such as DreamerV3.
**Feasibility for application to real-world**:
We appreciate the reviewer's interest in real-world applications, and we are excited about future work investigating such applications. We wish to highlight that the inference cost of a model trained with PSP is identical to its inference cost without PSP training; only the training cost is impacted. Therefore, any model-based RL method that is feasible for such applications already would still be feasible with the addition of PSP training techniques. We moreover point out that in general it is not trivial to apply Dreamer-type algorithms in real-world settings, but that there is encouraging precedent (e.g. DayDreamer; Wu, Escontrela, Hafner et al. 2022).
**Dynamic environments with changing reward structure or salient features**:
We expect that PSP is likely capable of adapting to a changing reward structure, especially if the salient features remain the same. To test this, we trained a PSP agent on Walker Run for 1M steps, and then switched to a Walker Stand reward. We found that PSP was able to quickly adapt to the new reward function (Fig. R1D). Changes in salient features represent a more difficult challenge, but we think that PSP would still be able to adapt, in part because of the flexibility afforded by the mask interpolation of Eq. 3 (see Fig. R1C). It is likely that PSP will adapt more slowly than an algorithm that does not selectively prioritize parts of the scene (though such algorithms will not be robust to distractions). Methods such as Curious Replay (Kauvar et al 2023) may help mitigate these adaptation challenges.
**Sensitivity to quality of segmentation model**:
We investigated this important question in two ways. First, we implemented PSP with the tiny variant of SAM2, and observe similar results as with SAM (Fig. R1A). Second, we investigated the segmentation masks directly, and we observe evidence that precise details of the segmentation may not matter much. In particular, as shown in Fig. R1B, the SAM segmentation does *not* segment the cheetah perfectly. In some cases, the cheetah is split into 2 segments, while in other cases it is split into 3 or 4 segments. Thus, PSP appears to be somewhat insensitive to the exact details of the segmentation.
In terms of deployment to varied or less-controlled environments, we again highlight that the segmentation is only used during the training step (which could be offline), and is not used during inference/deployment of the trained model/policy.
**Computational overhead**:
Thank you for this suggestion. We have now detailed a breakdown of the computational overhead of the PSP components (Fig. R1L).
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 1AxW
Comment: Thank you for your detailed and thoughtful rebuttal. I have carefully reviewed your responses, including the additional experiments and clarifications provided in response to Reviewer aBfS's comments. I appreciate the analytical breakdown of PSP's complexity and the steps taken to mitigate computational overhead, especially regarding the segmentation process with SAM2. However, I concur with Reviewer aBfS that further analytical consideration of the empirical complexity is needed to better understand its impact on broader adoption, particularly in resource-limited environments. While your response and contributions are valuable, I will keep my original score, as addressing these complexity concerns more thoroughly would strengthen the paper’s contribution.
---
Reply to Comment 1.1.1:
Title: Clarification request
Comment: We sincerely appreciate the reviewer's time and engagement with our responses. Thank you. We welcome further discussion and would like to seek clarification on a specific point. Could the reviewer please elaborate on what is meant by an "analytical consideration of the empirical complexity"?
Our experiments demonstrated significantly improved performance over the baseline DreamerV3 in distracting environments while maintaining performance in less-distracting settings. This considerable increase in performance came at the cost of reducing unoptimized wall-clock speed by less than a factor of four. While optimization efforts could likely improve this speed, we believe the novel ideas and performance improvements introduced in our work are the primary focus.
Furthermore, we showed that ongoing innovations in segmentation algorithms yield equivalent performance on PSP while significantly reducing computational resource requirements.
We would greatly appreciate if the reviewer could specify what additional information they are seeking beyond what we provided, including in our response to aBfS. Your insights are very valuable to us, and we look forward to addressing any remaining concerns or questions. | Summary: This paper presents a novel model-based reinforcement learning (MBRL) method that focuses on important parts of image-based environments with distractions, aiming to improve policy learning. The proposed method introduces gradient-based weighting, segmentation-based aggregation, and adversarial action prediction to world models. Experiments are conducted using the DeepMind Control Suite environment with varying levels of distractions.
Strengths: 1. The paper introduces a new method that utilizes policy-gradient weighting and object-based weighting for image reconstruction.
2. It devises a Reafferent DeepMind Control environment where the distracting background deterministically depends on the agent’s previous action and the elapsed time within an episode.
Weaknesses: 1. In Figure 1, is the arrow between the $z_i$ and the image decoder reversed? The notation of $i$ is confusing, as it is unclear whether it represents a time step or an image pixel (Line 86). The arrows between $z_i$ and $v_i$, $a_i$ seem to suggest that a value model and a policy model are learned in the world model learning phase. However, in my opinion, these two models should be learned in the downstream RL phase.
2. I wonder if the policy used to learn policy-gradient weight is the downstream policy that aims to maximize value. If I understand correctly, the policy model is used, but its weights are not updated in the world model learning phase. Additionally, how the connection between the latent state $s$ and the image pixel in Eq (1) is established, considering that the input of the policy model is based on $s$.
3. I have some doubts about the Segment Anything Model (SAM). Was it fine-tuned in the experiment? It would be helpful if the authors could provide more details on how SAM is used in this work and what the output of $SEG(x_i)$ is. SAM outputs multiple masks for different objects based on prompts, so how do you determine which mask to use?
4. In Eq (4), inferring an action from the state of a single time step seems unreasonable. In general, actions are inferred from successive states. Moreover, I do not agree with the statement that the encoder extracting information about the agent's action is wasteful. A more reasonable explanation is needed.
5. It is strongly encouraged for the authors to provide visual results of $W$ and segmentation masks in the experiments to facilitate clearer explanations and analysis.
6. The experimental environment and tasks are too limited and simplistic to demonstrate the effectiveness of the proposed method. It would be beneficial to use a more realistic environment, such as CARLA, which also includes various distractions related and unrelated to the agent's actions.
7. The performance of the proposed method appears relatively poor based on Figure 6, A1, and Tables 1 and 2. Specifically, it is challenging to conclude from Table 1 that the proposed components improve the model's performance.
8. In Figure 5, the comparison images are not from the same point in time or the same episode, which does not support the stated conclusion. Additionally, for a more accurate performance comparison, it would be more appropriate to provide an MSE result that only computes the agent region.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the weaknesses section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and helpful comments.
**1.** We are grateful to the reviewer for catching this typo. In Figure 1, we should be using *t* as the subscript, not *i*. We will update the figure and text accordingly. We will additionally make clear that while v_t and a_t are learned in a downstream RL phase, they depend on the latent state z_t.
**2.** Thank you for the opportunity to clarify. The policy used to weight the reconstruction loss is the same policy trained downstream to maximize value. Model training in the Dreamer architecture consists of alternating phases of world-model updating (based on action sequences in the replay buffer) and policy/value learning (using imagined action sequences that leverage the world model). The reviewer's understanding is correct that neither the policy nor the value model weights are changed during the world-model learning phase. However, the policy depends on the latent state, and the latent state depends on the image pixels, so the gradient between the policy and the image pixels is established via standard auto-differentiation (with gradients that flow through the latent state). We will update the text to clarify, thank you.
**3.** Thank you for the opportunity to clarify details about our use of SAM. In our experiments, SAM is not fine-tuned. For our implementation, in order to determine our segmentation masks, SAM is prompted with a grid of 256 points. The resulting masks are filtered via metrics computed by the SAM algorithm, specifically a predicted IoU (intersection over union) threshold and a "stability score", followed by non-maximum suppression (NMS). Finally, pixels that have not been assigned to a segmented object are grouped into a single extra object, to ensure that they are not ignored entirely. The outcome of this procedure is a segment assignment for each pixel. We will update the text to clarify this procedure.
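The final assignment step above can be sketched as follows (an illustrative reconstruction, not our exact code; the SAM prompting, IoU/stability filtering, and NMS are handled by the SAM library itself). Given the filtered masks, we build a per-pixel label map and group any uncovered pixel into one extra "leftover" segment.

```python
import numpy as np

def assign_segments(masks, shape):
    """Assign every pixel to a segment.

    masks: list of (H, W) boolean arrays (post-filtering/NMS).
    Returns an (H, W) int label map; pixels covered by no mask all receive
    one extra label so that no pixel is ignored entirely.
    """
    labels = np.full(shape, -1, dtype=int)
    for i, mask in enumerate(masks):
        labels[mask & (labels == -1)] = i   # first mask wins on overlaps
    labels[labels == -1] = len(masks)       # leftover pixels -> extra segment
    return labels

masks = [np.array([[True, False], [False, False]]),
         np.array([[False, True], [False, False]])]
print(assign_segments(masks, (2, 2)))
# [[0 1]
#  [2 2]]
```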
**4.** Thank you for providing us the opportunity to further clarify the role of the action-prediction head specified in Eq. 4. The intuition is that the current state *may* contain information about the preceding action. You are completely correct that in general, a past action cannot be inferred from just a single timestep. However, in *some* cases it can. Our assertion is that maintaining this information in the image embedding is redundant and a waste of capacity, *since the previous action is already provided as input to the world model*. This can become a particularly problematic source of wasted capacity when extensive or detailed aspects of the image are related to the previous action, and thus are redundant. The action prediction head allows us to remove this redundant information from the image embedding step.
We found that this intuition was supported by our empirical results. Notably, we realized we had failed to include results from a key ablation that we had run: policy-gradient weighting + segmentation but without the action head. We have now added it (Fig. R1I, see the second row), and it makes clear the performance improvement from including the action head (the first row), in both the unaltered and Reafferent environments.
**5.** Thank you for this great suggestion; we will provide additional examples of segmentations and weighting masks, such as Figs. R1B, F, and G.
**6.** We appreciate the reviewer's interest in application to more complex settings. We chose the current settings because they allowed for direct comparison across a wide variety of techniques that had been tuned for Deepmind Control Suite and related environments, and provided a good foundation for our new Reafferent environment. Extensions to environments such as CARLA are a great direction for future work that we are excited to pursue.
**7.** Our claim based on the experimental results is that PSP yields: unambiguously better performance on the most challenging distractors (e.g. Reafferent), a more consistently good result on the Distracting Control Suite (it is in second place on both tasks, while the first-place method is different in each case), and adequate performance that is the same or better than baseline DreamerV3 on the unmodified Control Suite. The data in Figures 3, 6, A1 and Table 1 support these claims. A similar conclusion comes from Table 2, in which the combination of methods yields good performance on both the unaltered and Reafferent environments. While it is possible to achieve even better performance on the Reafferent environment, this can come at the cost of dramatically reduced performance in non-distracting settings (as is the case with value-gradient or policy-gradient weighting alone, where there is a stark tradeoff between the distracting and non-distracting settings); the full PSP model therefore performs better than the ablations across settings.
**8.** Thank you for this suggestion. We have now included a comparison image at the time point in the same episode that clearly exhibits the superior model performance of the PSP agent on the Reafferent environment (Figure R1H).
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I have adjusted the score to 4 due to lingering doubts regarding the effectiveness of the proposed method.
---
Reply to Comment 1.1.1:
Title: Reply
Comment: We are very grateful for the reviewer's time and engagement with our responses. Thank you. | Summary: ### Review Summary
This paper presents a novel approach to improving model-based reinforcement learning (MBRL) by identifying that detailed but irrelevant aspects of the world can exhaust the capacity of the world model, thus hindering the learning of important environment dynamics. The proposed method, Policy-Shaped Prediction (PSP), uses a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning to focus the world model's capacity on relevant environment features. The paper demonstrates that PSP outperforms other approaches, including DreamerV3 and DreamerPro, in environments with intricate but irrelevant background distractions.
### Major Concerns
1. **Effectiveness of Policy Gradient Information**: The utility of the policy gradient information when the policy is not sufficiently good is questionable. This raises a chicken-and-egg problem: if the policy is poor, the gradient information might not effectively guide the model's learning, but if the policy is already good, further gradient information might not be necessary. Clarifying this point is crucial to support the core contribution of the paper. I have the following guess given the positive results: either the gradient information provided early on is sufficiently informative or the policy and model improve together, even if initial gradients are inaccurate. The authors should provide further explanation on this aspect.
2. **Motivation Comparison with Related Work**: The motivation emphasized in the paper—to distinguish between decision-relevant foreground and irrelevant background during model learning—has been similarly addressed in previous works on Block MDPs and invariant representations (e.g., [1], [2]). These works, although employing different technical approaches, share a similar motivation. The authors should compare their approach with these works, providing theoretical or experimental analysis to highlight the differences and advantages.
3. **Dependence on Pretrained Segmentation Model**: Given that the segmentation model is pretrained, what is its range of applicability? How much do the final results depend on the effectiveness of the segmentation model? The authors only conducted experiments in two environments and did not explore this dependency, raising questions about the generalizability of the approach to new environments.
4. **Experimental Section Clarity**: The experimental section starts with good questions and appears to be structured to answer them. However, the current presentation makes it difficult for readers to find the answers and understand which experiments address which questions. Revising the presentation for clarity would help make the experimental logic more apparent.
5. **Applicability to Different Scenarios**: The three methods proposed (policy gradient with respect to state, segmentation, adversarial training loss) seem tailored to specific problems (e.g., decision depend on object segmentation, clear object boundaries). If the problem scenario changes slightly, will these methods still work? Even considering the current scenario, how many common RL problems can this approach cover? The experiments are primarily conducted on two tasks, which weakens the persuasiveness of the results.
### References
[1] Learning Invariant Representations for Reinforcement Learning without Reconstruction
[2] Invariant Causal Prediction for Block MDPs
Strengths: See above
Weaknesses: See above
Technical Quality: 2
Clarity: 2
Questions for Authors: See above
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments.
**Effectiveness of Policy Gradient Information**: We do see some evidence of the chicken-and-egg problem. Importantly, though, our interpolated weighting term (see Eq. 3 in the manuscript) provides a route for escaping this problem by allowing the model and policy to always have access to cues from the entire state and to improve together. Thus, even if the initial policy-weighting is bad the model still has the chance to globally improve its predictions and recover from the initially bad weighting. As might be expected, following your great intuition, we find that the interpolation specified by Eq. 3 was essential to achieving consistently good performance.
To demonstrate the value of this interpolation step, we ran an experiment comparing agents with and without the interpolation, on Cheetah Run with Reafferent distraction. As shown in Fig. R1C, without interpolation (green line), the agent is stuck with poor performance for nearly 150K steps, until it eventually does manage to escape the chicken-and-egg problem. However, when including interpolation (blue line) we see that the agent is able to quickly achieve nonzero performance and begin steadily improving.
Additionally, in Fig. R1F, we demonstrate how the salience map improves throughout training, suggesting that the policy and world model can improve together with training.
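The interpolation described above can be sketched as follows. This is a hypothetical form (the exact expression is Eq. 3 in the paper, and the mixing coefficient `alpha` here is illustrative): blending the normalized policy-gradient weighting with a uniform weighting guarantees every pixel retains nonzero reconstruction weight, so the world model can still improve globally even when the early policy-gradient signal is poor.

```python
import numpy as np

def interpolated_weighting(policy_weight, alpha=0.5):
    """Convex combination of a policy-gradient weighting and a uniform one.

    Hypothetical sketch of the escape route from the chicken-and-egg problem:
    even pixels with zero policy-gradient weight keep a uniform floor weight.
    """
    uniform = np.ones_like(policy_weight) / policy_weight.size
    normed = policy_weight / policy_weight.sum()
    return alpha * normed + (1.0 - alpha) * uniform

w = interpolated_weighting(np.array([[0.0, 0.0], [0.0, 1.0]]))
print(w)
# [[0.125 0.125]
#  [0.125 0.625]]
```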
**Motivation Comparison with Related Work**: Thank you for bringing up the interesting connection to invariant representations. Indeed these methods are quite relevant, and it is valuable to consider methods for identifying state abstractions that leverage bisimulation [1] or the closely related concept of invariant causal feature prediction [2]. The DBC algorithm from [1] is most relevant to our context, as it is designed explicitly for high dimensional RL applications with distractions. On the other hand, the MISA algorithm from [2] requires multiple distinctly labeled experimental settings, and while they demonstrate imitation learning from high dimensional images, extension to high dimensional RL applications is not necessarily straightforward and could be a novel contribution in its own right.
However, we do not think that it is apt to experimentally test DBC as a baseline in our setting for two key reasons: 1) The Denoised MDP work [Wang 2023] compares directly against DBC and demonstrates substantially better performance than DBC (see Table 1 in Wang 2023). In turn, our experiments compared directly against Denoised MDP and we observed that PSP exhibited substantially better performance than Denoised MDP. 2) Our focus is on model-based RL, but only a model-free version of DBC was published and it is not straightforward to consider how it would be applied in the context of Dreamer-type architectures. For these reasons, we believe that our current experimental baselines are adequate to encompass these approaches. However, we think the connection is important and we will add discussion of invariant representations to the manuscript. Thank you!
**Dependence on Pretrained Segmentation Model**:
We acknowledge that the use of a pretrained segmentation model has the potential to limit the range of applicability; however, we also believe that foundational segmentation models such as SAM have grown robust enough to be considered for these applications. Moreover, future work will improve these models, with cheaper inference and higher-quality, more generalizable segmentation, as evidenced by the recent release of SAM2.
We have now included experiments with an *additional* segmentation model, the 'tiny' variant of SAM2, and observe similarly good results as with SAM (Fig. R1A).
Additionally, we note the segmentation model does *not* have to be perfect. The SAM segmentation often splits apart what should be considered a single object. In Fig. R1B, for example, the cheetah is split into two, three, or four segments throughout a successful run. This suggests a degree of insensitivity to exact details of the segmentation.
Notably, the Deepmind Control Suite setting is likely out of the domain of the data on which SAM or SAM2 were trained, and yet segmentations from these algorithms are still adequate to achieve good PSP performance.
**Experimental Section Clarity**: Thank you. We will clarify the results section to explicitly indicate how we answer each of our listed experimental questions (e.g. by adding references to Q1, Q2, etc. in the appropriate locations). Specifically: Q1 is answered in lines 199-228, Figure 3, and Table 1. Q2 is addressed by Figure 5 (and additional supplemental images of salience maps that we will include). Q3 is addressed by Figure 6 and Table 1 at line 233. Q4 is addressed by line 230, Figure A1, and Table 1. Q5 is addressed at lines 240-255, and Table 2.
**Applicability to Different Scenarios**: We selected Deepmind Control Suite as the base environment to allow for direct comparison against a number of existing baselines, in the settings that they were demonstrated. Distracting Control Suite was the only environment shared across the DreamerPro, DenoisedMDPs, and Task Informed Abstractions baselines. Furthermore, we additionally incorporate the Reafferent environment to extend beyond Distracting Control Suite into more challenging scenarios. We note that the segmentation-based approach of PSP does assume object-based settings, however this assumption is relevant to a wide variety of problems. We acknowledge that there may be limitations of our approach in certain other scenarios, although it is not clear a priori what those limitations may be, and we look forward to pursuing future work to investigate this.
---
Rebuttal Comment 1.1:
Comment: Thank you to the author for the detailed response. Overall speaking, the author has addressed some of my concerns, so I have raised my score to 5. My remaining major concern is about the generalizability of the proposed method, which involves two aspects:
1. If the key to the decision-making problem is not largely determined by object segmentation, is the proposed method still effective?
2. The generalizability of the Pretrained Segmentation Model—specifically, in what range can it still work if applied to non-natural images?
---
Reply to Comment 1.1.1:
Title: Reply
Comment: We are very appreciative of the reviewer's time and engagement with our responses. Thank you.
While it is true that this may not apply broadly to decision-making scenarios that do not depend on object segmentation, there are many important scenarios that do (especially in the visual domain). Moreover, it is conceivable that our approach could still apply in those settings, with the segmentation simply serving as a reasonable method for reducing noise in the gradient-based weighting. We hope to investigate this in future work.
In terms of the generalizability of the segmentation model, we think that our existing evidence is actually quite corroborative of good generalization to non-natural images. SAM and SAM2, it turns out, are quite effective (i.e. effective enough to work well with PSP) in the very non-natural Deepmind Control Suite and our even less-natural distracting variants. It is possible other settings may present more of a challenge, though we also expect these segmentation models to improve as the datasets used to train them continue to grow. Thank you again for your comments and insight, we really appreciate the discussion. | Rebuttal 1:
Rebuttal: ### Overview for all reviewers
We thank the reviewers for their thoughtful and constructive comments. We agree with the reviewers' assessment that Policy-Shaped Prediction (PSP) is a novel and effective method to address a well-motivated and relevant problem: reducing the influence of distracting information in model-based RL. We will address each reviewer with individual responses.
We want to highlight that we have now extended PSP to use an **additional segmentation algorithm** and demonstrated ***similar performance but with substantially reduced resource consumption***. All four reviewers raised questions regarding the segmentation algorithm that we used (SAM), including the generalizability, required accuracy, and resource requirements of SAM. We believe we have addressed these questions with our new experiments.
Our initial selection of SAM was guided by the belief that it represented just the first of many segmentation algorithms that are sure to follow with improved generalizability, performance, and resource consumption. This belief was corroborated by the recent release of SAM2 (on July 29), which, according to the associated manuscript (Ravi et al., 2024), exhibits both improved performance *and* as little as 17% of the required resources (e.g. 6x the segmentation speed). We updated PSP to also be able to use SAM2, using the 'tiny' model, the smallest and lowest-accuracy of the provided model sizes. Our basic implementation immediately reduced resource consumption by 2x, while yielding a score on Reafferent Cheetah of 360.2, within the range of our original results with SAM of 383.1 ± 23.8 (note: we only had time/resources available to run one seed, but we would include additional seeds in a final manuscript). Optimizations, including using SAM2's video segmentation capabilities and better utilization of the GPU, would likely further improve this resource consumption. These results also suggest that our implementation of PSP is *not* highly sensitive to the segmentation algorithm.
In the attached pdf (Figure R1), we have also included additional ablations that highlight the importance of the mask interpolation step (Eq. 3) and the action prediction head (Eq. 4), and more visualizations of segmentation masks and salience maps.
**Together, these results bolster the claim that PSP is an effective method for reducing the impact of challenging distractors**, and that it also has the potential to benefit from gains in related realms such as generalizable segmentation.
Pdf: /pdf/d52647cff5b18781d988027a2766898c51907194.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Happy: A Debiased Learning Framework for Continual Generalized Category Discovery | Accept (poster) | Summary: The paper presents a novel approach to the task of Continual Generalized Category Discovery (C-GCD). The proposed framework, named Happy, aims to address the challenges of continuously discovering new classes from unlabeled data while preventing the forgetting of previously learned classes. The authors identify two primary issues in C-GCD: prediction bias and hardness bias. To mitigate these, they introduce clustering-guided initialization, soft entropy regularization, and hardness-aware prototype sampling. Experimental results show that the Happy framework significantly improves performance across various datasets, including notable gains on ImageNet-100.
Strengths: Clear Contributions: The paper clearly outlines its contributions, making it easier for readers to understand the novelty and significance of the work.
Experimental Validation: The framework is validated through extensive experiments on multiple datasets, demonstrating significant improvements over state-of-the-art methods. The 7.5% gain on ImageNet-100 is particularly impressive.
Addressing Biases: The identification and mitigation of prediction and hardness biases are well-articulated and experimentally supported. This shows a deep understanding of the underlying issues in continual learning.
Weaknesses: Complexity of the Framework: While the combination of multiple techniques (clustering-guided initialization, soft entropy regularization, and hardness-aware prototype sampling) is innovative, it also adds complexity to the framework. This could make it challenging to implement and tune in practice.
Scalability Concerns: The paper does not extensively discuss the scalability of the proposed framework. As the number of classes and stages increases, the computational and memory requirements could become prohibitive.
Evaluation Metrics: The paper primarily focuses on accuracy improvements but does not delve deeply into other potential evaluation metrics, such as computational efficiency, memory usage, or robustness to different types of data distributions.
Technical Quality: 3
Clarity: 2
Questions for Authors: Generalizability to Other Domains: While the framework is validated on multiple vision datasets, it is unclear how well it would generalize to other domains, such as natural language processing or other types of sequential data.
Adapt to the SoTA methods: Is it possible to adapt the method to several significant recent works in the field, such as SPTNet, TIDA, and InfoSieve?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful advice and valuable questions, we will respond to your concerns point by point.
> W1: Complexity of the Framework.
* **Effect of each component**. Considering that C-GCD is a challenging task, each component is essential and addresses specific issues. (1) Learning new classes: cluster-guided initialization provides a robust initialization for new class heads, while entropy regularization mitigates prediction bias and allocates the necessary probability mass for discovering new classes. (2) Preventing forgetting of old classes: hardness-aware prototype sampling effectively controls the forgetting of difficult old classes.
* **Implementation**. (1) Cluster-guided initialization runs only once at each continual stage. (2) Soft entropy regularization and (3) hardness modeling can be computed on the fly. All of them consume less than 1% of the total memory and time; see W3 for more details.
* **Hyper-parameter tuning**.
* We fix the loss weights of self-training $L_{self-train}$ and hardness-aware prototype sampling $L_{hap}$ to 1, considering they are the main losses for new and old classes. Our method then has three weight parameters $\lambda_1,\lambda_2,\lambda_3$ for $L_{entropy-reg}$, $L_{kd}$ and $L_{con}^u$, respectively. Please see **General Response to Common Concern 1** for more details and a sensitivity analysis.
* Overall, the optimal values for each hyperparameter are close to 1. In our experiments, **we simply set all weights to 1, which shows remarkable results across all datasets.** Thus, our method does not require complex tuning of parameters and exhibits strong generalization capabilities.
> W2: Scalability of the proposed method.
* The time consumption increases linearly with the number of classes and stages, which is the **same for all continual learning methods**.
* Disk memory consumption. To mitigate forgetting of old classes, we save one prototype feature per class (768-dim), using about 3KB, whereas previous methods store multiple labeled images, with one 224×224 image requiring 588KB and ten images requiring over 5000KB. Thus, our method consumes **less than 1/1000** of the memory of previous replay-based methods.
> W3: Evaluation Metrics: computational efficiency, memory usage, or robustness to different distributions.
* **Computational efficiency**. The proposed three components are computationally efficient. We run experiments on a single 4090 GPU. For CIFAR100, we train 5 continual stages with 30 epochs for each, it takes $\sim$2.5h in total. (1) Cluster-guided initialization is run only once at the start of each stage, it takes $\sim$15s, only 0.83% of total time. (2) Hardness-aware sampling and (3) entropy regularization occupy less than 1% of the training time. While KD loss takes relatively the most time, around 30% of the time, however, it is necessary and widely used in continual learning.
* **Disk memory consumption**. As in W2, we only save features for old classes, at less than 1/1000 of the memory cost of replay-based continual learning (3KB vs. 5000KB); our method also alleviates privacy issues.
* **GPU memory consumption**. Training costs around 14192MB of GPU memory in total. Within it, hardness-aware sampling and entropy regularization occupy less than 0.1%, and the KD loss costs 5672MB; its main objective is to mitigate forgetting, and it consumes even less memory than replay-based methods ($\sim$8000MB) with the same batch size.
* **Robustness to different distributions**. We have evaluated various methods on several corrupted distributions, see **General Response to Common Concern 2**.
> Q1: Generalizability to Other Domains.
* Good question. This paper primarily explores visual category discovery. The underlying spirit could also be adapted to NLP data; we leave this for future work.
* For other distributions, we conducted experiments on CIFAR100-C with various corrupted distributions, see **General Response to Common Concern 2**.
> Q2: Adapt to the SoTA methods.
* Great suggestion. We implemented SPTNet [R1] and adapted it to the C-GCD (by incorporating alternate prompt learning), results on CIFAR100 are as follows:
| CIFAR100 | All | Old | New |
| :----------: | :-------: | :-------: | :-------: |
| SPTNet | 59.54 | 67.84 | 8.56 |
| SPTNet+LwF | 63.89 | 70.33 | 24.31 |
| Happy | 69.00 | 71.82 | 51.36 |
| SPTNet+Happy | **70.81** | **73.58** | **54.18** |
We also adapted InfoSieve [R2] (by incorporating self-supervised code extraction into contrastive learning) as you suggested; results on CUB are shown below:
| CUB | All | Old | New |
| :--------------: | :-------: | :-------: | :-------: |
| InfoSieve | 57.23 | 65.28 | 7.75 |
| InfoSieve+LwF | 63.18 | 69.30 | 25.60 |
| Happy | 68.88 | 71.29 | 53.13 |
| InfoSieve +Happy | **69.80** | **72.32** | **54.30** |
Results indicate that adapting SoTA GCD methods to C-GCD with Happy could further enhance the performance.
* We will cite related papers and add them to the revised manuscript.
**References**:
[R1]. SPTNet: An Efficient Alternative Framework for Generalized Category Discovery with Spatial Prompt Tuning. ICLR 2024.
[R2]. Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: I read all the comments of the other reviewers. I agree with all weaknesses of the other Reviewers. I hold the same opinions.
---
Reply to Comment 1.1.1:
Title: Further Response to Reviewer cVL6
Comment: Dear Reviewer cVL6,
Thanks for your feedback. As Reviewer TLvc and jUDg mentioned, our responses have addressed their concerns. Additionally, we have further explained some of your concerns in our previous responses. If you still have some specific questions, please let us know. We are willing to discuss further.
Thank you for your participation.
---
Rebuttal 2:
Title: Response to Reviewer cVL6
Comment: Dear Reviewer cVL6,
Thanks for your feedback. Considering the "weaknesses of the other Reviewers", we have given responses point by point in the rebuttal. Here, we summarize our responses to the weaknesses pointed out by all reviewers:
* **Hyper-parameters tuning.** Please refer to **General Response to Common Concern 1** and **response to you in W1**. Our method requires three hyper-parameters, and **we only need to set all weights to 1**, which is generalizable and could perform well across all datasets. Therefore, the parameter tuning is simple and easy to implement.
* **Complexity of the method.**
* Firstly, we have demonstrated the necessity of each module, as shown in Table 5 of the main manuscript.
* Secondly, we have provided the computational overhead for our proposed modules (see **responses to you in W2 and W3**). The cluster-guided initialization, entropy regularization, and hardness modeling that we propose consume less than 1% of time and memory resources, which means that they are very efficient and effective.
* **Scalability of the method.** All continual learning methods experience a linear increase in computational demand with the number of classes and stages. Compared to previous replay-based methods, our approach saves disk storage by a factor of 1/1000, as detailed in the **response to you in W2**.
* **Generalization abilities of the method.**
* We have conducted evaluations on 15 corrupted and shifted distributions of CIFAR100-C, as in the **General Response to Common Concern 1** and **Rebuttal PDF**.
* Furthermore, previously, we have also conducted experiments on two more fine-grained datasets: **Stanford Cars** and **FGVC Aircraft** datasets, with the results as follows:
| Stanford Cars | All | Old | New |
| :------------: | :-------: | :-------: | :-------: |
| GCD | 47.00 | 47.73 | 42.61 |
| MetaGCD | 54.67 | 55.28 | 50.95 |
| Happy (Ours) | **62.79** | **63.68** | **57.34** |
| FGVC Aircraft | All | Old | New |
| :-----------: | :-------: | :-------: | :-------: |
| GCD | 42.95 | 44.35 | 33.38 |
| MetaGCD | 47.16 | 48.61 | 38.23 |
| Happy (Ours) | **53.10** | **53.81** | **48.71** |
* To conclude, our method achieves remarkable performance compared to the previous sota on 6 datasets and 15 unseen distributions, showcasing the strong generalization abilities of Happy.
* **Adapting to SoTA methods.** We have adapted our method to two sota methods: SPTNet and InfoSieve. Please refer to **response to you in Q2** for more details. We will add all results of adapting Happy to all three methods you listed in the final version. **We will also add citations of the three methods: SPTNet TIDA, and InfoSieve in our reference list.**
Thank you again for your feedback. We hope our responses and extensive experiments have addressed your concerns. If you have any further questions, we welcome continued discussion. Thank you once again for your review.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer cVL6,
Thanks for your feedback. As Reviewer TLvc and jUDg mentioned, our responses have addressed their concerns.
We were wondering if our latest responses have addressed your concerns. If you have further specific questions on the weaknesses of the method, feel free to raise them and we are willing to discuss them with you. We look forward to and appreciate your feedback.
Thank you once again. | Summary: The paper proposes a novel method for the Continual Generalized Category Discovery task, addressing the challenges of discovering new classes and preventing forgetting. The approach introduces Clustering-guided Initialization and Group-wise Soft Entropy Regularization for class discovery, as well as Hardness-aware Prototype Sampling for mitigating forgetting. Rigorous experimentation across several datasets demonstrates a significant improvement in performance.
Strengths: 1. The organization, presentation, and writing of the paper are very clear, and the figures are attractive and easy to understand.
2. The paper analyzes and proposes solutions for several important issues in Continual Generalized Category Discovery (C-GCD), demonstrating strong innovation.
3. The experimental analysis is thorough, with excellent results. The sufficient ablation studies confirm the contribution of each proposed innovation.
Weaknesses: 1. In the C-GCD setting described in the paper, during the continuously discovering stage, the samples of old classes are available. Why not use these old class data directly instead of the Hardness-aware Prototype Sampling method? Some explanation or experimental validation should be added.
2. How is the margin probability computed in line 173 if all the labels are not available?
3. The proposed method uses DINO pretrained ViT-B/16 as the backbone, which, as I understand, is pretrained on the ImageNet dataset. This results in information leakage for CIFAR-100, ImageNet-100, and TinyImageNet datasets in the continual learning process, as all classes are already known in the pretrained model.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see the Weaknesses
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful advice and valuable questions, we will respond to your concerns point by point.
> W1: Why not use these old class data directly instead of hardness-aware prototype sampling?
* Good question. At each continual stage of C-GCD, all training data are unlabeled, i.e., **unlabeled** samples of old and new classes are mixed together, and the model does not know which samples belong to old categories, thus they cannot be directly utilized.
* We employ hardness-aware prototype sampling to prevent forgetting, without the need to store class-wise labeled samples of old classes. By contrast, previous methods store class-wise labeled data, which could cause privacy issues.
> W2: How to compute margin prob in line 173 without labels?
* For the marginal probability, we compute the average of model **predictive** probabilities across a batch, namely $\overline p=\frac{1}{|B|}\sum_{i\in B}p_i\in\mathbb{R}^{K^t}$, which does not require any labels and **only uses model predictions**.
> W3: DINO pretrained ViT-B/16 backbone results in information leakage for CIFAR100 and TinyImageNet.
* Very good question. We will address this point in detail and incorporate them into the revision.
* Firstly, DINO is a **self-supervised (unsupervised)** pre-training scheme without any labels, so there is no label leakage in DINO. The main objective of DINO is to learn good, general initial representations for the visual encoder; DINO does not directly learn classification heads for downstream tasks.
* Second, since using a DINO-pretrained ViT has become a standard practice in the literature of GCD [R1, R2, R3], we just followed this setup to ensure a fair comparison.
* Third, we acknowledge your concerns. To thoroughly eliminate the influence of information leakage, we adopted the settings in [R4, R5] and conducted experiments using a DeiT pretrained on 611 ImageNet categories (**explicitly excluding those from CIFAR and TinyImageNet**). The average accuracy over 5 continual stages is shown below:
| | | CIFAR100 | | | TinyImageNet | |
| :---------: | :-------: | :-------: | :-------: | :-------: | :----------: | :-------: |
| | All | Old | New | All | Old | New |
| VanillaGCD | 42.50 | 44.57 | 28.16 | 29.28 | 30.04 | 24.02 |
| MetaGCD | 46.64 | 48.63 | 32.96 | 35.24 | 36.48 | 26.80 |
| Happy(Ours) | **69.34** | **71.37** | **57.44** | **53.50** | **55.67** | **39.40** |
As it shows, in scenarios without any information leakage, our method still consistently outperforms previous methods by a large margin. In contrast, methods like MetaGCD, which rely on non-parametric clustering, suffer significant performance declines due to unstable clustering with fewer pretrained classes.
**References**:
[R1]. Generalized Category Discovery. CVPR 2022.
[R2]. Parametric Classification for Generalized Category Discovery: A Baseline Study. ICCV 2023.
[R3]. MetaGCD: Learning to Continually Learn in Generalized Category Discovery. ICCV 2023.
[R4]. Learnability and algorithm for continual learning. ICML 2023.
[R5]. Class Incremental Learning via Likelihood Ratio Based Task Prediction. ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Some of my concerns have been addressed. However, regarding W2, the author may have misunderstood my question. What I want to know is how to determine whether a sample belongs to an old class or a new class within a session when all labels are unavailable. The marginal probability for old and new classes can only be computed individually if there is a way to accurately separate old class samples from new class samples in each session in an unsupervised manner.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer jUDg
Comment: Dear Reviewer jUDg,
Thank you very much for your feedback. Here, we provide a detailed explanation of W2: how to compute the marginal probability:
* In this paper, marginal probability for old $\overline p_{old}$ and new classes $\overline p_{new}$ refer to the sum of $\overline{\boldsymbol{p}}\in R^{K}$ (marginal probability of all samples in a batch) at the indices of new and old classes, which is calculated **along the class dimension**, **rather than along the sample dimension**.
* Here both $\overline p_{old}$ and $\overline p_{new}$ are **scalars**, while $\overline{\boldsymbol{p}}\in R^{K}$ is a K-dim **vector**, $K=K_{old}+K_{new}$ denotes the number of total classes of the current stage.
* As a result, **we do not need to separate old class samples from new class samples when we compute $\overline p_{old}$ and $\overline p_{new}$**.
* Specifically, for a batch of samples, let $\boldsymbol p_i$ denotes the prediction of the i-th sample. We first average the model predictions for **all samples** along the batch dimension, **without distinguishing between new and old classes samples**, namely $\overline{\boldsymbol{p}}=\frac{1}{|B|}\sum_{i\in B}\boldsymbol p_i$. Here, both $\overline{\boldsymbol{p}}$ and $\boldsymbol p_i$ are $K$ dim vectors. Then we calculate the marginal probabilities for old and new classes **along the class dimension**, i.e., $\overline p_{old}=\sum_{c=1}^{K_{old}}\overline{\boldsymbol{p}}[c]$ and $\overline p_{new}=\sum_{c=K_{old}+1}^{K}\overline{\boldsymbol{p}}[c]$ (so both $\overline p_{old}$ and $\overline p_{new}$ are scalars and $\overline p_{old}+\overline p_{new}=1$). Note that $\overline{\boldsymbol{p}}[c]$ represents the c-th index of the vector $\overline{\boldsymbol{p}}$.
* **Here is one example**. On the first stage of CIFAR100, there are 50 old classes and 10 new classes, so the dimension of model predictions is $K=60$. We first compute the average of model predictions for all samples in a batch and obtain the **vector** $\overline{\boldsymbol{p}}$. Then we sum the first 50 dimensions of $\overline{\boldsymbol{p}}$ to obtain the **scalar** $\overline p_{old}$ and sum the last 10 dimensions to obtain the **scalar** $\overline p_{new}$.
* To conclude, $\overline p_{old}$ and $\overline p_{new}$ are marginal probabilities of **all** samples **along the dimension of old/new classes**, rather than the marginal probability of **old/new samples**. So we do not need to determine whether a sample belongs to an old class or a new class.
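The class-dimension computation described above can be sketched in a few lines (an illustrative NumPy sketch; the function name, shapes, and example values are our assumptions, not the authors' code):

```python
import numpy as np

def marginal_probs(p, k_old):
    """Marginal probabilities over old vs. new classes (illustrative sketch).

    p: (batch, K) softmax predictions for all samples, K = K_old + K_new.
    Returns the scalars (p_old, p_new); no per-sample old/new split is needed.
    """
    p_bar = p.mean(axis=0)        # average predictions over the batch -> (K,)
    p_old = p_bar[:k_old].sum()   # sum over the old-class indices
    p_new = p_bar[k_old:].sum()   # sum over the new-class indices
    return p_old, p_new

# Example mirroring the CIFAR100 stage-1 setup: K = 50 old + 10 new = 60
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 60))
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
p_old, p_new = marginal_probs(p, k_old=50)
```

Since each row of `p` sums to 1, the two scalars always satisfy `p_old + p_new == 1`, as stated above.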
We really appreciate your feedback. If you have any additional questions, please let us know. We greatly respect your insights and enjoy the discussions with you. Thanks again! | Summary: The paper points out two-bias issues in CGCD: prediction bias in probability space and hardness bias in feature space. To tackle those two issues, they propose cluster-guided initialization and soft entropy regularization to mitigate prediction bias, and they propose hardness-aware prototype sampling to mitigate hardness bias and forgetting.
Strengths: * The paper is easy to follow. I really appreciate this writing idea: explain the problem and solve the problem.
* The experiments show great improvement over existing methods.
Weaknesses: 1. Overclaim. The paper argues that they extend CGCD to realistic scenarios. But from the experimental setup, I am not convinced that the scenarios they consider are more realistic than existing work [16,18]. The stages and new classes are still limited.
2. Too many hyper-parameters and loss terms. There are too many hyper-parameters to adjust different loss weights, hindering its generalization. In addition, there are too many loss terms, hindering the understanding of the method. For example, what is the effect of the proposed Eq 4?
3. In Table 5, why does KD improve the New class by a large margin? What does the new class mean? The total novel classes or the new classes in the last task?
4. As the paper points out two issues: prediction bias and hardness bias, and proposes methods to mitigate the two issues, they should provide evidence to show that two issues are mitigated.
Minor:
1. The rightmost one in Figure 2 is (d) instead of (b).
2. $L_{hap}$ in eq 10 and 11 should be consistent.
3. citation 23 is a little strange. It seems that it was published in ICCV2023. The citation should be corrected.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses, especially 2 and 4. I will raise my score if the 2 and 4 are well resolved.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful advice and valuable questions, we will respond to your concerns point by point.
> W1: Overclaim the CGCD setup.
* **Differences from [16,18]**. We consider (1) more continual stages (5>3 and 10>3 in Table 4) with more novel classes to discover (50%>30% of total classes are new) and (2) a rehearsal-free setting, i.e., we do not save class-wise labeled samples for old classes, which alleviates privacy and memory issues.
* Thanks for pointing out the issue. We acknowledge that the settings of our study still have some gaps compared to real-world scenarios and it is a basic exploration towards real-world applications.
* We will revise the statement as you suggested.
> W2: Too many hyper-parameters and loss terms.
* **Explain of each loss function**. Overall, our method can be divided into three parts: $L_{Happy}=L_{new}+L_{old}+\lambda_3L_{con}^u$. Each of them is important, see table 5 for ablation studies.
* $L_{new}$: discover new classes (cluster). It has two parts (1) self-distillation $L_{self-train}$ with sharpen targets for self-training. (2) soft entropy regularization $L_{entropy-reg}$ to alleviate prediction bias, wherein eq (4) is regularization between old and new classes as a whole.
* $L_{old}$: mitigating forgetting of old classes. It also has two parts (1) hardness-aware prototype sampling $L_{hap}$ eq (10) and (2) KD loss $L_{kd}$.
* $L^u_{con}$: unsupervised contrastive learning (i.e., SimCLR) for basic representation.
* The effect of eq (4) is to reserve the necessary probability for learning new classes and mitigate prediction bias towards old classes.
* **About hyper-parameters**.
* In our method, we fix the weight of $L_{self-train}$ and $L_{hap}$ as 1, considering they are related to the core part of learning new and old classes. (just like cross-entropy loss in standard deep learning)
* We only need to tune the weights $\lambda_1,\lambda_2,\lambda_3$ for $L_{entropy-reg}$, $L_{kd}$ and $L_{con}^u$, respectively. For the detailed sensitivity analysis, see the table in **General Response to Common Concern 1**. The optimal weight for the three loss are all near to 1.
* In practice, we set all the loss weights to 1 for all datasets without fussy tuning, which works fine and generalizes well to several datasets. As a result, we do not need an extensive search: **just set all the loss weights to 1 and it works fine**.
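To summarize the structure described above, here is a minimal sketch of how the loss terms combine when all weights are set to 1 (the function and argument names are our own illustration; the individual loss values are placeholders, not the paper's implementation):

```python
# Hedged sketch of the overall objective L_Happy = L_new + L_old + lam3 * L_con^u,
# with every weight defaulting to 1 as described in the rebuttal.
def total_loss(l_self_train, l_entropy_reg, l_hap, l_kd, l_con_u,
               lam1=1.0, lam2=1.0, lam3=1.0):
    l_new = l_self_train + lam1 * l_entropy_reg   # discover new classes
    l_old = l_hap + lam2 * l_kd                   # mitigate forgetting of old classes
    return l_new + l_old + lam3 * l_con_u         # + contrastive representation term
```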
> W3: In table 5, what does new class mean? Why KD improve new class by a large margin?
* Good question. In Table 5, new class means the average accuracy of new classes over the 5 stages $C_{new}^1,\cdots, C_{new}^5$, i.e., we report the average new-class accuracy across the five continual stages.
* In C-GCD, the classification performance is determined by both the feature space and the classification head.
* Without KD loss, the feature space may shift and become mismatched with the head, causing the model to misclassify some new classes as old, which significantly reduces the accuracy of new classes.
* Additionally, since the evaluation of C-GCD relies on Hungarian matching across all classes to achieve the highest overall accuracy (see line 224), there are generally more old classes, which tends to stabilize the old classes while new classes experience greater fluctuation.
> W4: Provide evidence to show that two issues are mitigated.
* Very good suggestion. We will append the following results to the revised manuscript.
* **Prediction bias**. We provide two metrics: (1) $\Delta p=\overline p_{old}-\overline{p}_{new}$: the difference in marginal probabilities between old and new classes. (2) the proportion of new classes' samples misclassified as old classes (new$\to$old). The results are as follows (after stage-1):
| (in %) | $\Delta p \downarrow$ (on C100) | new$\to$old $\downarrow$ (on C100) | $\Delta p \downarrow$ (on CUB) | new$\to$old $\downarrow$ (on CUB) |
| :------------------------: | :-----------------------------: | :--------------------------------: | :----------------------------: | :-------------------------------: |
| Ours w/o $L_{entropy-reg}$ | 81.50 | 63.25 | 83.20 | 65.80 |
| Ours w/ $L_{entropy-reg}$ | **5.76** | **10.20** | **10.25** | **11.05** |
The results from the two datasets demonstrate that $L_{entropy-reg}$ effectively reduces prediction bias, with a significantly lower marginal probability gap and fewer new-class samples misclassified as old.
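The two metrics above can be sketched as follows (an illustrative NumPy sketch; names and tensor shapes are our assumptions, and ground-truth labels are used only at evaluation time):

```python
import numpy as np

def prediction_bias_metrics(p, labels, k_old):
    """Evaluation-time bias metrics (illustrative, not the authors' code).

    p: (N, K) predicted probabilities; labels: (N,) ground-truth class ids,
    where classes [0, k_old) are old and [k_old, K) are new.
    """
    p_bar = p.mean(axis=0)                               # marginal over the batch
    delta_p = p_bar[:k_old].sum() - p_bar[k_old:].sum()  # old-vs-new probability gap
    preds = p.argmax(axis=1)
    new_mask = labels >= k_old
    new_to_old = (preds[new_mask] < k_old).mean()        # new->old error rate
    return delta_p, new_to_old

# Toy example: 1 old class, 1 new class, both samples classified correctly
p = np.array([[0.9, 0.1], [0.2, 0.8]])
labels = np.array([0, 1])
delta_p, new_to_old = prediction_bias_metrics(p, labels, k_old=1)
```

A well-debiased model should drive both `delta_p` (given balanced old/new data) and `new_to_old` toward small values, matching the trend in the table above.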
* **Hardness bias**. We also present two metrics: (1) $Var_0$: variance in accuracy of the initial labeled classes $C_{init}^0$. (2) hardest Acc: accuracy of the hardest classes in $C_{init}^0$. Results are as follows (after training on 5 stages):
| (in %) | $Var_0 \downarrow$ (on C100) | hardest Acc $\uparrow$ (on C100) | $Var_0 \downarrow$ (on CUB) | hardest Acc $\uparrow$ (on CUB) |
| :------------------------: | :--------------------------: | :------------------------------: | :-------------------------: | :-----------------------------: |
| Ours w/o hardness sampling | 23.04 | 65.10 | 21.77 | 62.65 |
| Ours w/ hardness sampling | **10.33** | **70.23** | **9.28** | **68.40** |
Results show that hardness-aware sampling effectively reduces hardness bias, with lower accuracy variance and higher hardest Acc.
> Minor typos.
* Thanks for your detailed review. We will correct all the typos carefully.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer TLvc,
Thanks very much for your time and valuable comments.
In the rebuttal period, we have provided detailed responses to all your comments and questions point-by-point for the unclear presentations.
Any comments and discussions are welcome!
Thanks for your attention and best regards,
Authors of Submission 5010.
---
Rebuttal Comment 1.2:
Comment: I appreciate the authors' feedback. My concerns are resolved. I keep my original score.
---
Reply to Comment 1.2.1:
Title: Further Response to Reviewer TLvc
Comment: Dear Reviewer TLvc,
We are glad that our responses have resolved your concerns. Regarding the issues **W2 and W4**, which you are particularly concerned about, we have further organized our responses, summarized as follows:
* W2: Too many hyper-parameters and loss terms.
* **Understanding of the method.** Our method consists of three parts: $L_{Happy}=L_{new}+L_{old}+\lambda L_{con}^u$.
1. Cluster new classes: $L_{new}$, with self-training and mitigation of prediction bias for new class.
2. Mitigate forgetting old classes: $L_{old}$, with hardness-aware prototype sampling to alleviate hardness bias.
3. Contrastive learning: $L_{con}^u$, to ensure good general feature representations.
* **Hyper-parameter tuning.** For $L_{new}$ and $L_{old}$, our results show that a 1:1 weight ratio is optimal, because both old and new classes matter and we need to maintain a balanced focus on them as a whole (other weight ratios of $L_{new}$ and $L_{old}$ result in a 5% to 10% performance degradation). We therefore keep the weight ratio of $L_{new}$ and $L_{old}$ at 1, and the only parameter to tune is $\lambda$ for $L_{con}^u$. We found that $\lambda$ in the range 0.7$\sim$1 works well. Therefore, choosing hyper-parameters for our method is simple: just tune $\lambda$ for $L_{con}^u$, which generalizes well across all datasets.
* **Overall.** The three objectives are complementary and non-conflicting, where each is essential, and they collectively improve classification across all classes.
* W4: Provide evidence to show that two issues are mitigated.
* Here, we provide more intuitive examples.
* **Prediction bias** refers to the bias that the model tends to predict new class samples to the old ones. **Metric:** We compute the proportion of new classes' samples misclassified as old classes. **Example:** In our method, the ratio of new samples misclassified to old ones decreases from 63.25% to 10.20%, which means that Happy could largely mitigate prediction bias, with better separation between old and new classes.
* **Hardness bias** refers to the bias that models show weaker classification and more severe forgetting on harder old classes. **Metric:** We compute the accuracy of the hardest class and the accuracy variance among all old classes. **Example:** On CIFAR100, our method improves the hardest-class accuracy from 65.10% to 70.23%, and the variance decreases from 23.04% to 10.33%, which means our method greatly mitigates the forgetting of hard old classes and alleviates the imbalance caused by hardness bias.
We hope our further responses regarding **W2 and W4** have clearly resolved your concerns. We were wondering whether our paper could be re-evaluated considering these further explanations on **W2 and W4**.
If you have any other questions, we would be happy to discuss them further with you. We really appreciate your feedback. | Summary: The article presents a method for Continual Generalized Category Discovery (C-GCD). A de-biased learning framework is designed for the C-GCD task to address the challenge of continuously discovering new concepts in an ever-changing environment while maintaining recognition of known categories. Traditional C-GCD studies have some limitations, such as storage and privacy issues caused by storing samples of past classes, and only considering limited incremental stages or assuming a proportion of known samples, which are not in line with practical application scenarios. Therefore, the study focuses on a more realistic C-GCD setup that includes more learning phases and more new categories, and in which data from previous phases is not accessible after each phase.
Strengths: Originality: In this paper, the authors propose a novel de-biased-learning framework, "Happy," specifically for the Continual Generalized Category Discovery (C-GCD) task, which is a relatively underexplored research area. The paper designs a unique set of methods to handle the challenge of incrementally discovering new categories in unlabeled data while preserving the ability to recognize old categories, a challenge that traditional machine learning and deep learning approaches have overlooked. In particular, the proposed cluster-guided initialization, soft entropy regularization, and hardness-aware prototype sampling strategies are innovative solutions to the unique problems of C-GCD.
Quality: The quality of the paper is reflected in its detailed theoretical analysis and experimental verification. The authors not only elaborate on the design principles and motivation behind the method, but also conduct extensive experiments on multiple datasets to prove the effectiveness of the "Happy" framework, which achieves significant performance improvements, showing strong generalization ability and practicality. In addition, the paper reflects on the limitations and assumptions of the method, indicating that the authors have deeply considered the comprehensiveness and rigor of the study.
Clarity: The structure of the paper is clear and the logic is coherent. From the introduction, which clearly explains the research background and motivation, to the method section, which analyzes the components of the framework and their working principles in detail, to the presentation and analysis of the experimental results, every step is well organized. In addition, the authors provide detailed implementation details and the algorithm flow in the appendix, which increases the readability and reproducibility of the paper.
Weaknesses: 1. Although the authors acknowledge and briefly discuss the social impact of technology, the paper does not detail specific negative impacts that a "Happy" framework could bring, such as potential bias transmission, fairness issues, privacy violations, or security risks. For an algorithm intended for application in the open world, the lack of a comprehensive social impact analysis may limit its acceptance and ethical application in practice.
2. It is clearly stated that this study does not include theoretical results, which means that a complete hypothesis set and theoretical proofs are not provided to support the validity of the proposed method. While empirical research has shown the effectiveness of "Happy," the lack of a solid theoretical foundation may diminish its persuasive power in academia.
3. The authors acknowledge that there are some assumptions and limitations to the study, such as model overfitting, performance in noisy environments, and testing only on specific datasets. These factors may limit the method's general applicability and generalization ability, especially under different data distributions or in more complex real-world environments.
4. It is pointed out that even after the introduction of the debias mechanism, the recognition accuracy of the new class still fluctuates at different incremental stages, which indicates that the model may have stability problems when processing data of different class difficulty. This volatility can affect model reliability and user trust in real-world deployments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the experimental part, how do you make sure that in the unsupervised incremental phase the model not only finds new categories but also accurately distinguishes them without confusing them with old ones? Are there specific metrics or experimental settings to measure this ability to differentiate?
2. How does the soft entropy regularization realize the reasonable distribution of the probability of the new class? Can you explain this process in detail and how it is combined with cluster-guided initialization to improve the clustering performance of new categories?
3. In the hardness-aware prototype sampling strategy, how do you define and quantify the "hardness" of a class, and how do you sample effectively against this hardness to mitigate forgetting? Are there concrete examples of how this strategy helps the model remember old categories that are difficult to classify?
4. The paper mentions that in preliminary experiments it was found that the model tends to misclassify new categories into old ones, and that the features of the old categories are disturbed when learning the new ones. In addition to the proposed solutions, have other technologies, such as meta-learning or memory enhancement networks, been considered to further improve these problems?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations of the work in the paper, including the confidence calibration problem and the scope of the study mainly focused on the classification task.
Confidence Calibration problem: In the Continuous Generalized Category Discovery (C-GCD) task, the confidence of the model is not calibrated due to an imbalance in the labeling conditions between the initial phase and the sustained phase. This leads to a clear gap between the old category and the new category, and even degrades performance when incorporating prior information.
The scope of application is limited to classification tasks: Although this paper mainly discusses the C-GCD learning paradigm under classification tasks, the application of this method has not been extended to other fields, such as object detection and image segmentation.
The authors note that future work should consider how confidence calibration techniques can be incorporated into C-GCD to further reduce potential bias. At the same time, they encourage the extension of the C-GCD learning framework to more types of visual tasks in order to expand its applicability and impact. These discussions reflect a clear understanding of the limitations of the current findings and suggest possible directions for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful advice and valuable questions, we will respond to your concerns point by point.
> W1: Lack of a comprehensive social impact analysis.
* We summarize the negative impacts: (1) **Bias/error accumulation**. Given the stringent unlabeled conditions of C-GCD, the initial learning bias may continuously accumulate, leading to a greater bias in later stages. (2) **Spurious correlations**. Category discovery relies on knowledge learned from labeled classes. If incorrect correlations are learned, e.g., over-reliance on the background, these may interfere with the learning of new ones.
* We will discuss them in detail in the final version.
> W2: Lack of a theoretical foundation.
* Happy is essentially self-learning on unlabeled data, and the theoretical foundation is conditional entropy maximization.
* Specifically, the objective can be expressed as $\max I(Y, Z) = H(Y) - H(Y|Z)$, where $Y$ represents the labels and $Z$ the features, with $I$ and $H$ denoting mutual information and entropy, respectively. The first term is equivalent to the proposed marginal entropy regularization $L_{entropy-reg}$, while the second term, $H(Y|Z)$, can be computed from the posterior probability $p(y|z)$ and is minimized through self-distillation $L_{self-train}$.
* We will include this theoretical analysis.
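The decomposition above can be sketched numerically. This is a minimal plug-in estimate for illustration only; the function name and the batch-averaged form are our assumptions, not the paper's implementation:

```python
import numpy as np

def infomax_terms(p_y_given_z):
    """Plug-in estimate of the two terms in I(Y, Z) = H(Y) - H(Y|Z) from a
    batch of predictive distributions p(y|z) (rows: samples, cols: classes).
    Maximizing H(Y) spreads the marginal prediction across classes;
    minimizing H(Y|Z) sharpens each per-sample prediction."""
    eps = 1e-8
    p_y = p_y_given_z.mean(axis=0)                    # marginal p(y) over the batch
    h_y = -(p_y * np.log(p_y + eps)).sum()            # H(Y)
    h_y_given_z = -(p_y_given_z * np.log(p_y_given_z + eps)).sum(axis=1).mean()  # H(Y|Z)
    return h_y, h_y_given_z
```

Sharp, diverse predictions maximize $H(Y) - H(Y|Z)$, while uniform predictions drive the estimate toward zero.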
> W3: There are assumptions that may limit the general applicability especially in different distributions.
* Our method achieves state-of-the-art results not only on four datasets (Table 2) but also on the distribution-shifted CIFAR100-C (**see General Common Concern 2 for more details**). The results demonstrate that our method has strong generalization ability.
> W4: The accuracy of the new class still fluctuates at different incremental stages with the debias methods.
* The fluctuation in new categories is an inherent issue in C-GCD. However, our method exhibits significantly less variability compared to others, e.g., on CIFAR100, the maximum new accuracy difference across stages is $\Delta_{new}=6.3$ (56.1-49.8), which is less than the 17.3 (48.9-31.6) of MetaGCD. Similar patterns can be observed in other datasets as well.
> Q1: How to make sure the model not only finds new categories but also avoids confusing them with the old ones? Are there specific metrics?
* We follow SimGCD and employ a unified classification head for both old and new classes in a joint prediction space. Thus our method classifies both new and old categories together, which not only implicitly differentiates between new and old classes but also discovers new ones.
* **Specific metrics**. We calculate the proportion of new class samples misclassified as old classes (new$\to$old) and old classes misclassified as new (old$\to$new) after stage-1 training. Results are shown below:
| misclassify ratio | new$\to$old (C100) | old$\to$new (C100) | new$\to$old (CUB) | old$\to$new (CUB) |
| :---------------: | :----------------: | :----------------: | :---------------: | :---------------: |
| SimGCD | 85.75 | 6.78 | 84.18 | 6.65 |
| MetaGCD | 30.65 | 6.64 | 32.36 | 6.50 |
| Happy | **10.20** | **4.35** | **11.05** | **4.95** |
Results show that Happy more effectively distinguishes between new and old classes, with a lower misclassification ratio, and achieves higher accuracy on new classes (e.g., 51.3 > 31.6 at stage-5 in Table 2).
> Q2: Explain soft entropy regularization and cluster-guided initialization.
* Soft entropy regularization. As described in Sec 3.2, line 128, the model tends to bias its predictions toward old classes, so we impose an explicit constraint that ensures sufficient predictive probability for new classes to facilitate their learning. Since the prior ratio of old to new classes is inaccessible at each stage, we employ a soft regularization that assumes a rough balance between the probabilities of new and old classes ($p_{old} = p_{new} = 0.5$), achieved via marginal entropy maximization.
* Cluster-guided initialization ensures a good **initialization** for new class heads, while entropy regularization improves **training** of new classes with less prediction bias.
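The soft regularization above can be sketched as follows. This is an illustrative sketch assuming a known old/new class split; the function name and the exact loss form are our assumptions, not the paper's code:

```python
import numpy as np

def entropy_reg_loss(probs, num_old):
    """Illustrative soft entropy regularization: average the predictive
    probabilities over the batch, collapse them into an old-vs-new marginal,
    and minimize its negative entropy so that neither group monopolizes the
    predictions (balanced at p_old = p_new = 0.5)."""
    marg_old = probs[:, :num_old].sum(axis=1).mean()  # marginal mass on old classes
    marg = np.array([marg_old, 1.0 - marg_old])       # old/new marginal distribution
    return float((marg * np.log(marg + 1e-8)).sum())  # negative entropy of the marginal
```

A balanced batch attains the minimum loss of $-\ln 2 \approx -0.693$, while predictions biased toward old classes are penalized with a higher loss.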
> Q3: How to define "hardness" and how to sample against this hardness? Concrete examples of how it helps remember difficult old classes?
* As in Sec 3.4 Eq (9), we define class-wise **hardness** $h_c$ as the average cosine similarity between the class center $\mu_c$ and the other class centers $\mu_j$, where $\mu_j$ is computed in Eq (8). Intuitively, the greater a class's similarity to other classes, the more likely its feature space is to be confused with theirs, and thus the harder the class.
* To sample against hardness, considering that more difficult categories tend to be forgotten more readily, we sample harder classes more frequently. We therefore treat $h_c$ as logits and model the hardness distribution with a softmax function in Eq (9).
* **Concrete examples**. After learning 5 stages, we evaluate the model on the initial labeled classes $C_{init}^0$ and report the accuracy of the hardest class (lowest accuracy):
| hardest Acc | C100 | CUB |
| :---------: | :-------: | :-------: |
| SimGCD | 62.85 | 59.68 |
| MetaGCD | 65.31 | 61.26 |
| Happy | **70.23** | **68.40** |
Results show that Happy remembers difficult classes better, achieving higher accuracy on the hardest class.
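The hardness-aware sampling described in this answer can be sketched as follows. This is an illustrative sketch; the function and normalization details are our assumptions, not Eq. (9) verbatim:

```python
import numpy as np

def hardness_sampling_probs(centers):
    """Illustrative hardness-aware sampling: class-wise hardness h_c is the
    average cosine similarity between a class center and all other centers;
    treating h_c as logits, a softmax yields the sampling distribution, so
    harder (more confusable) classes are sampled more often."""
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    sim = c @ c.T                              # pairwise cosine similarities
    n = len(centers)
    h = (sim.sum(axis=1) - 1.0) / (n - 1)      # average over others (diag = 1 excluded)
    e = np.exp(h - h.max())                    # numerically stable softmax
    return e / e.sum()
```

For example, two nearly overlapping class centers both receive a higher sampling probability than a center that is nearly orthogonal to the rest.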
> Q4: Have other technologies been considered to further improve the task?
* MetaGCD actually utilizes meta-learning, but its results are weaker than ours (see Table 2), because MetaGCD only simulates the C-GCD process without any debiasing mechanism.
* In essence, meta-learning and memory enhancement can be integrated with our debiasing mechanism to further enhance the results. We leave this aspect to future work.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer TDQJ,
Thanks very much for your time and valuable comments.
In the rebuttal period, we have provided detailed responses to all your comments and questions point-by-point for the unclear presentations.
Any comments and discussions are welcome!
Thanks for your attention and best regards,
Authors of Submission 5010.
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for answering my questions, but the response failed to resolve my doubts about the weaknesses of the method.
---
Reply to Comment 1.2.1:
Title: Further Response to Reviewer TDQJ
Comment: Dear Reviewer TDQJ,
Thank you very much for your feedback. Regarding the weaknesses you've highlighted, we have further organized our responses and summarized them as follows:
* W1: Lack of a comprehensive social impact analysis.
* Here, we have included a more detailed discussion of societal impacts, supplemented with specific examples.
* **Medical risks:** Our method could be applied in the medical field, where biases in learned categories might affect the discovery or diagnosis of new diseases. Over time, these errors could increase, potentially delaying critical medical interventions.
* **Fairness issues:** Regarding different genders and populations, our model may learn unfair biases from the labeled data, which could then be transferred to new domains. This could perpetuate inequalities within the model's predictions.
* W2: Lack of a theoretical foundation.
* The theoretical foundation of our method is the **InfoMax principle** [R1], namely **maximizing the mutual information (MI) between the inputs and outputs of a system**, where the inputs and outputs denote the features $Z$ and the model predictions $Y$, respectively.
* For the self-training problem of learning new classes in C-GCD, we employ mutual information (MI) maximization. Specifically, the objective can be expressed as $\max I(Y, Z) = H(Y) - H(Y|Z)$, where $Y$ represents the labels and $Z$ the features, with $I$ and $H$ denoting mutual information and entropy, respectively.
* The proposed entropy regularization on the model predictions $Y$ corresponds to the negative of the first term, $-H(Y)$, while the self-distillation objective encourages the model to predict more confidently, which lowers the prediction entropy, i.e., minimizes the second term $H(Y|Z)$ (equivalently, maximizes $-H(Y|Z)$).
* Overall, our method follows the Infomax principle and aims to maximize mutual information, to ensure the quality of self-learning.
* W3: There are assumptions that may limit the general applicability, especially in different distributions.
* We need to clarify here that our method does not rely on explicit assumptions or predefined ratios of new to old categories in previous methods, thereby enhancing its generalizability. This has been demonstrated in our experiments, including the distribution shift experiments detailed in the Rebuttal PDF.
* W4: The accuracy of the new class still fluctuates, which can affect model reliability and user trust in real-world deployments.
* Our method significantly mitigates these fluctuations. For example, on CIFAR100, the maximum gap in new-class accuracy across stages is reduced from 17.3% to 6.3%. This supports the reliability and stability of our approach in real-world deployments.
**References:**
[R1]. Self-organization in a perceptual network. Ralph Linsker. 1988.
Hope these further responses address your concerns about the weaknesses part. If you have further questions, please let us know. We are willing to have more discussions with you. Thanks for your participation. | Rebuttal 1:
Rebuttal: We thank all reviewers for their dedication and insightful comments, and we believe these comments are significant for improving the overall quality of this paper.
We are pleased that the reviewers appreciate our paper from various aspects, including the novelty of the method [TDQJ, jUDg, cVL6], clear writing [TLvc, jUDg, cVL6], and remarkable performance [TDQJ, TLvc, jUDg, cVL6].
In this paper, we explore the task of Continual Generalized Category Discovery (C-GCD). At each continual stage, data from new and old classes are mixed together without labels. Compared to previous settings, ours is more realistic: (1) it encompasses more stages and new classes to discover, and (2) it does not preserve class-wise labeled samples of old classes for replay.
Here, we respond to some common concerns:
> Common Concern 1: The method is complex with many hyper-parameters.
* Our approach can be divided into three parts: (1) $L_{new}$: employs self-distillation $L_{self-train}$ and $L_{entropy-reg}$ to reduce prediction bias to learn new classes. (2) $L_{old}$: hardness-aware prototype sampling $L_{hap}$ and $L_{kd}$ to further mitigate hardness bias and forgetting of old classes. (3) $L_{con}^u$: contrastive learning to ensure basic representations. Each of them is important.
* **Hyper-parameters**. We fix the weight of $L_{self-train}$ and $L_{hap}$ as 1, considering they are the main objectives for new and old classes. As a result, our method mainly contains three loss weights $\lambda_1,\lambda_2,\lambda_3$ for $L_{entropy-reg}$, $L_{kd}$ and $L_{con}^u$ respectively. Here, we give sensitivity analysis on CIFAR100 as follows: (average All Acc over 5 stages)
| $\lambda_1$ ($L_{entropy-reg}$) | 0 | 0.5 | 1.0 | 3.0 | 5.0 |
| :---------------------------------------: | :---: | :-------: | :---: | :---: | :---: |
| All Acc | 60.60 | **69.04** | 69.00 | 63.98 | 59.23 |
| $\lambda_2$ ($L_{kd}$) | 0 | 0.5 | 1.0 | 3.0 | 5.0 |
| :------------------------------: | :---: | :---: | :---: | :-------: | :---: |
| All Acc | 65.31 | 66.98 | 69.00 | **69.30** | 68.90 |
| $\lambda_3$($L_{con}^u$) | 0 | 0.5 | 0.7 | 1.0 | 3.0 |
| :--------------------------------: | :---: | :---: | :-------: | :---: | :---: |
| All Acc | 68.74 | 68.94 | **69.16** | 69.00 | 68.92 |
As shown above, the model is relatively insensitive to $\lambda_3$, whereas $\lambda_1$ and $\lambda_2$ have a more significant impact. Overall, the optimal values for each hyperparameter are close to 1.
In our experiments, **we simply set all weights to 1 which shows remarkable results across all datasets.** Thus, our method does not require complex tuning of parameters and exhibits strong generalization capabilities.
* **Computational costs**. The proposed three components, cluster-guided initialization, hardness-aware sampling, and entropy regularization, introduce very little computational overhead; the consumption (both time and GPU memory) of each part is **less than 1%**. More details are in **Response to Reviewer [cVL6]**.
> Common Concern 2: Generalization to different distributions.
* We conduct experiments on the distribution-shift dataset CIFAR100-C [R1] with severity=2, and test the model on all 100 classes after 5 stages of training. The results are shown below:
| Distribution | Original | +gauss noise | +snow | +fog | +motion blur | +pixelate |
| :----------: | :-------: | :----------: | :-------: | :-------: | :----------: | :-------: |
| VanillaGCD | 51.36 | 23.36 | 44.38 | 51.12 | 48.52 | 46.77 |
| MetaGCD | 55.78 | 23.53 | 44.80 | 53.63 | 48.69 | 48.72 |
| Happy (Ours) | **59.99** | **36.51** | **52.61** | **56.77** | **51.60** | **52.13** |
Our method consistently outperforms others across several unseen distributions, showcasing its strong robustness and generalization ability.
* **See the Rebuttal PDF for all 15 distributions of CIFAR100-C**.
> Common Concern 3: Evidence to show prediction bias and hardness bias are mitigated.
* **Prediction bias**. We present two metrics (after stage-1): (1) $\Delta p=\overline p_{old}-\overline{p}_{new}$: the gap in marginal predictive probability between old and new classes, (2) the proportion of new-class samples misclassified as old ones. Both metrics are low in our method, which shows that the bias is mitigated. **See response to Reviewer [TLvc] for more details**.
* **Hardness bias**. We also show two metrics (after stage-5): (1) $Var_0$: the variance of accuracy among the initially labeled classes $C_{init}^0$, (2) $acc_{hard}$: accuracy of the hardest class. The former decreases and the latter increases with our method, which means that the bias is mitigated. **See response to Reviewer [TDQJ, TLvc] for more details**.
For other specific questions, we respond to each reviewer point by point as below. We will carefully revise all comments from the four reviewers and incorporate them into the revised paper. Thanks again to all reviewers for their valuable suggestions!
**References**:
[R1]. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. ICLR 2019.
Pdf: /pdf/94af117989b287bd171959df6837dec379d8cd97.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LORA-MOO: Learning Ordinal Relations and Angles for Expensive Many-Objective Optimization | Reject | Summary: This paper proposes the LORA-MOO framework, a surrogate-assisted MOO algorithm that learns surrogates from spherical coordinates. This includes an ordinal-regression-based surrogate for convergence and $M-1$ regression-based surrogates for diversity.
Strengths: The considered problem is pretty important.
Weaknesses: Major:
1. Line 113-116, a bit too repetitive.
2. Line 121, an initial dataset size of 11D-1, too specific.
3. Lines 121-134: the algorithms are described too specifically. It is more suitable for EC journals rather than NeurIPS.
4. Selection Criteria, too simple.
5. What does LORA-MOO means?
6. Some HV-based MOBO methods are not compared as they are failed to solve many objectives. This argument is not accurate; please consider running "https://github.com/xzhang2523/libmoon/blob/main/libmoon/solver/mobo/run_dirhvego.py", which supports problems with more than ten objectives.
Minor:
There are too many grammar errors in this paper.
(1) Line 249, they are failed -> they failed. .
(2) Line 270, HV use -> HV uses .
(3) Line 175, consider using \max.
(4) line 21, are widely exist -> widely exist.
(5) line 64, an non-pa .. -> a non-pa..
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness 1: Line 113-116, a bit too repetitive.
Response: Thanks for your comment.
We explain the connection between SAEAs and BO in lines 113-116 because some less experienced readers may not be familiar with it. In fact, we have encountered many reviewers who did not know the connection between SAEAs and BO.
---
Weakness 2: Line 121, an initial dataset size of 11D-1, too specific.
Response: Thanks for your comment.
11D - 1 is a widely used, conventional setting for the initial dataset size in the literature (such as the study recommended by the reviewer in Weakness 6 below; see line 11 of the linked code page). We use this setting to ensure a fair comparison and to keep our work consistent with existing studies.
---
Weakness 3: Lines 121-134: the algorithms are described too specifically. It is more suitable for EC journals rather than NeurIPS.
Response: Thanks for your comment.
We are not sure we grasp the point of this comment. As far as we know, many papers with similarly specific algorithm descriptions have been published at NeurIPS.
---
Weakness 4: Selection Criteria, too simple.
Response: Thanks for your comment.
1. We argue that the simplicity of a method is not necessarily a weakness. Some famous algorithms are extremely simple yet effective, e.g., ReLU.
2. In addition, we would like to explain our selection criteria here.
- For the convergence criterion, the selection criterion is simple because the modeling of the dominance relation is relatively complex. Much of the work is done during the modeling procedure, which makes the selection criterion quite intuitive and simple.
- For the diversity criterion, we do not think it is simple; we actually provide a separate pseudo-code (see Alg. 4 in Appendix C.3) to describe its details.
---
Weakness 5: What does LORA-MOO means?
Response: Thanks for your comment.
LORA-MOO is the abbreviation of our algorithm, Learning Ordinal Relations and Angles for expensive Many-Objective Optimization (The title of this submission).
---
Weakness 6: Some HV-based MOBO methods are not compared as they are failed to solve many objectives. This argument is not accurate; please consider running (we hid the link given by the reviewer because external links are not allowed to appear in our rebuttals), which supports problems with more than ten objectives.
Response: Thanks for your comment.
1. We would like to revise the inaccurate statement in our paper, "Some HV-based MOBO methods are not compared as they failed to solve many objectives," to "Some HV-based MOBO methods are not compared due to their long runtimes when solving many-objective problems."
2. We have attempted to run many HV-based MOBO methods for comparison, but it turned out that they were unable to complete a single run of our 10-objective optimization test within one day.
3. For the recommended link, we have looked into this link and get two questions:
- We have read the paper associated with this link (Hypervolume Maximization: A Geometric View of Pareto Set Learning, NeurIPS 2023); the paper was proposed for multi-objective optimization and did not claim effectiveness on many-objective optimization. In addition, its experiment section conducted only multi-objective optimization experiments.
- We have checked the library associated with this link (LibMOON) and found that it is a Python library for multi-objective optimization. We downloaded LibMOON and configured it, but when we attempted to run the reviewer-recommended code (with a new setting of n\_obj = 10), the program reported errors. We identified that the errors occurred because the optimization problem files (e.g., ZDT, DTLZ) were coded only for multi-objective optimization. In short, LibMOON does not currently support many-objective optimization.
We hope our answer meets your expectations. If you have any further suggestions on how to run the recommended code on many-objective optimization problems, please let us know; we are genuinely interested in this.
---
Weakness Minor:
Response: Thanks for your comment.
We have looked through our paper and corrected all grammar errors.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your response. I have no questions for this paper. | Summary: This paper proposes a surrogate-assisted evolutionary many-objective optimization algorithm, named LORA-MOO. LORA-MOO is composed of a surrogate for ordinal modeling, which focuses on convergence, and m-1 surrogates for distribution modeling, which focus on diversity. Empirical study demonstrates the effectiveness of the proposed algorithm.
Strengths: This paper is well-written and easy to follow. Although the proposed algorithm seems somewhat complicated, all the technical details are clearly presented, and the motivations behind them are explained. The empirical study is generally solid, with all the major parameters included in the ablation study, and most of the commonly used test instances are covered. The results show that LORA-MOO outperforms the baselines.
Weaknesses: Although LORA-MOO obtains better indicator values than the baselines on synthetic benchmarks, it is difficult for me to find any fundamental differences between LORA-MOO and previous MOAs, or to see what new insights this paper provides for solving expensive MOPs. LORA-MOO models the convergence of solutions with a surrogate of the domination level, which is a common idea in MOO adopted by many past methods such as NSGA-II. Such a surrogate is intuitive, but it is unclear to me why it would work better than existing surrogates such as pairwise relations or function values, and why such a surrogate can be successfully modeled by a Gaussian process. LORA-MOO uses m-1 surrogates to predict the spherical coordinates, but this seems identical to predicting function values. LORA-MOO also contains many other components, such as EA, PSO, non-dominated sorting, various clustering methods, and some subset selection mechanisms. These components have long been widely adopted by many MOAs, and many alternatives are available as well. I agree that such components usually make an algorithm perform better, but this paper does not adequately demonstrate the necessity of this particular combination or any connections between these components. The ablation study appears to be a parameter-tuning experiment, presenting results under different parameters; there is no ablation of the many components in the algorithm, so it is unclear what contributions these components actually make.
Expensive optimization problems are closely connected to real-world applications, and many real-world MOPs are indeed expensive problems. Therefore, I believe this is a very valuable research direction. However, the empirical study in this paper is mainly conducted on synthetic problems. DTLZ and WFG have undoubtedly driven the development of the MOO field, but they have also caused a significant number of researchers to focus narrowly on these synthetic test sets. As a result, there are now many algorithms that perform excellently on synthetic test sets but struggle to adapt to real-world problems. The authors have also conducted tests on NASBench, which is crucial for comprehensively demonstrating the algorithm's capabilities, but the results do not seem to be sufficiently convincing.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Regarding typesetting, I recommend using the Roman font for the operators instead of the italic font. For instance, use $\max$, $\exp$, $\cos$ instead of $max$, $exp$, and $cos$.
2. P2, L45. One of the difficulties of expensive optimization problems is the small data size. Pairwise relations can increase the data size, making it possible to train some DL models. Therefore, this is an advantage of pairwise relations. Additionally, why do the authors believe that the amount of data will increase exponentially? At most, there will only be $O(N^2)$ pairs, so the increase is at most quadratic. Considering that DL models can usually be trained in parallel, I don't think efficiency could be a major drawback. The authors repeatedly claim efficiency advantages (L45, L59), but the experimental results do not demonstrate any efficiency advantage over the baseline.
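The reviewer's quadratic bound is easy to check directly; a tiny sketch (the sample sizes are arbitrary):

```python
from itertools import combinations

def n_pairs(N):
    """Number of distinct solution pairs available as pairwise training data."""
    return sum(1 for _ in combinations(range(N), 2))

for N in (10, 100, 1000):
    print(N, n_pairs(N))  # grows as N*(N-1)/2, i.e. quadratically, not exponentially
```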
3. P7, Fig. 1. The caption "Evaluations" is clipped.
4. P7, Line 268. IGD/IGD+ uses a subset sampled from the true PF.
5. Page 9, Fig. 4. Why present the normalized runtime instead of the real time?
6. P17, On the reference point for HV. It is inappropriate to set the reference point for HV to (1,1, ...,1). Firstly, the reference point can only be set near 1 if all solutions have already converged well. Additionally, to better measure diversity, the reference point for HV needs to be larger than the nadir point, so it is generally set to at least $1.1$. For expensive multi-objective optimization problems, the reference point needs to be set to an even larger value because most solutions may not have converged well. For example, in Table 10 and Table 14, the HV values are all 0, indicating that the reference point is set too low.
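To illustrate the reviewer's point, here is a minimal 2-D hypervolume sketch for minimisation (the points are made up): under the reference point $(1,1)$, a solution that has not converged below 1 in some objective contributes nothing, and a set with no solution dominating the reference point yields HV = 0.

```python
def hv_2d(points, ref):
    """Hypervolume of a 2-D point set under minimisation: area dominated by the
    points and bounded by the reference point; points not dominating ref are clipped."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(0.2, 1.05), (0.6, 0.4), (0.9, 0.1)]   # (0.2, 1.05) has not converged in f2
print(hv_2d(front, (1.0, 1.0)))                 # ~0.27: the unconverged point contributes nothing
print(hv_2d(front, (1.1, 1.1)))                 # ~0.43: all three points now contribute
print(hv_2d([(0.5, 1.2), (1.2, 0.5)], (1.0, 1.0)))  # 0.0: reference point set too low
```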
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper does not summarize its limitations. I suggest the authors reconsider the limitations of this work and fully present them in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness:
Response: Thanks for your comments.
As listed in our contributions (Section 1), the main differences are: 1) our ordinal-regression-based model, which is trained on dominance relations and artificial ordinal relations; and 2) the idea of modeling surrogates in spherical coordinates and using these surrogates for diversity maintenance.
As far as we know, our ordinal-regression-based model is completely different from almost all surrogates in the literature. It shows that ordinal relations can be approximated with a regression model, and it demonstrates their effectiveness. In addition, the modeling with spherical coordinates may inspire readers to solve expensive MOPs in different coordinate systems.
We would like to clarify that the idea of using domination levels is common in MOO, but the idea of modeling ordinal dominance relations with regression models is rare in expensive MOO.
Additionally, NSGA-II is not a model-based optimization algorithm and is not designed for expensive MOO, so we are unsure why the reviewer mentions NSGA-II when talking about ''modeling dominance levels''.
Our surrogate is specifically designed to approximate ordinal relations; it describes the ordinal landscape of the objective space and provides the direction of optimization. The details of modeling with a GP are presented in Section 3.2. We first quantify ordinal relations as numerical values using dominance relations and some artificial relations; then we use a GP to approximate the quantified values.
In fact, spherical coordinates are numerical values, just as function values are.
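For readers unfamiliar with the transformation, a minimal sketch of one standard hyperspherical conversion (the paper's exact convention may differ) shows that the $m-1$ angles are ordinary numerical regression targets, just like function values:

```python
import math

def to_spherical(f):
    """Convert an m-dimensional non-negative objective vector into a radius and
    m-1 angles (a generic hyperspherical convention, used here for illustration)."""
    r = math.sqrt(sum(v * v for v in f))
    angles = []
    for i in range(len(f) - 1):
        tail = math.sqrt(sum(v * v for v in f[i:]))
        angles.append(math.acos(f[i] / tail) if tail > 0 else 0.0)
    return r, angles

# The radius reflects convergence; the angles locate the solution's direction,
# which is what the m-1 diversity surrogates would learn as numerical labels.
r, ang = to_spherical([1.0, 1.0, 1.0])  # r = sqrt(3), second angle = pi/4
```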
We would like to clarify that our ablation studies have demonstrated the contributions of our algorithm components. For example:
- The ablation study reported in Appendix F.2 has demonstrated the contribution of our $\lambda$-dominance (described in Section 3.2.1). When $\lambda$ = 0, the component of $\lambda$-dominance is actually removed from our algorithm. In Table 4, by comparing the results of $\lambda$ = 0 and $\lambda$ > 0, we can observe the contribution of this component.
- Similarly, our ablation study in Appendix F.3 shows the contribution of our artificial relations (described in Section 3.2.2) since $rp_{ratio}$ = 1 indicates this component is removed.
- Our ablation study in Appendix F.4 shows the contribution of our clustering-based initialization (see Section 3.3.1) since $n_c$ = 1 indicates this component is removed.
In summary, we can observe the contributions of our algorithm components directly from our ablation studies.
We are unsure why the reviewer thinks our NAS results are not sufficiently convincing.
Our NAS experiment shows that LORA-MOO outperforms the comparison algorithms. While the comparison algorithms appear to have converged, LORA-MOO is still able to reach greater HV values, close to the maximal HV value on this problem. We would appreciate it if the reviewer could provide some reasons for this comment.
---
Question 1:
Response: Thanks for your comment. We have revised our font as recommended.
---
Question 2:
Response: Thanks for your comment.
Increasing the data size is detrimental to Gaussian Process models and many other modeling techniques, so we would not regard it as an advantage of pairwise relations.
Although a larger data size makes it possible to train DL models, it also increases the time cost of model training. Therefore, it is hard to say that increasing the data size is an advantage.
In addition, pairwise relations are used to train classification surrogates (DL models); however, our experiments show that classification-based optimization algorithms (e.g., CSEA, REMO) are inferior to regression-based optimization algorithms (e.g., KRVEA, KTA2) in optimization performance.
Regarding the word ''exponentially'', we have revised it to ''quadratically'' to improve the accuracy of our statement.
Although DL models can be trained in parallel, the total computational cost still increases; we acknowledge, however, that parallel computation alleviates the burden caused by the increased data size.
The efficiency advantage is shown in Fig. 4. It can be seen that, as the number of objectives increases, LORA-MOO's runtime grows at a slower rate than that of KRVEA and KTA2. The remaining comparison algorithms train fewer surrogates than LORA-MOO, but their optimization performance is significantly less competitive.
---
Question 3:
Response: Thanks for your comment. We have plotted a new figure to show x-axis clearly.
---
Question 4:
Response: Thanks for your comment.
We are confused by this comment.
The true PF is a set consisting of infinitely many solutions; how could we compute IGD/IGD+ without using a uniformly sampled subset of the true PF as reference points? As far as we know, all studies in the literature use a subset of the true PF to compute IGD/IGD+, as we described in Appendix D.
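For concreteness, this is how IGD is computed in practice: the reference set is a finite, uniformly sampled stand-in for the infinite true PF (the points below are illustrative):

```python
import math

def igd(reference_set, approx_set):
    """Inverted Generational Distance: mean Euclidean distance from each sampled
    reference point on the true PF to its nearest obtained solution (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, s) for s in approx_set) for r in reference_set) / len(reference_set)

pf_sample = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]  # finite sample of the infinite PF
obtained = [(0.1, 0.9), (0.9, 0.1)]
score = igd(pf_sample, obtained)  # penalises the uncovered middle of the front
```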
---
Question 5:
Response: Thanks for your comment.
The comparison algorithms are implemented in different programming languages (e.g., Matlab, Python), so it is impossible to compare the real runtime directly. In addition, for many-objective optimization, it is crucial to investigate the relation between runtime and the number of objectives. If the runtime of an algorithm increases rapidly as the number of objectives increases, then the algorithm would be unsuitable for problems with many objectives. After normalizing the runtime, one can directly observe how the runtime varies with the number of objectives.
---
Question 6:
Response: Thanks for your comment. We have re-calculated HV values as suggested.
---
Limitation:
Response:
Currently, our algorithm picks two solutions (one for convergence and one for diversity) for expensive evaluations in each iteration. A dynamic selection strategy could be developed to select a varying number of solutions for evaluation to improve evaluation efficiency.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, your rebuttal does not satisfactorily address my concern about the novelty of the proposed method. Therefore, I maintain my score. The authors seem confused about my comments, so I provide some clarifications.
> We are confused that why the reviewer mention NSGA-II when talking about ''modeling dominance level''
NSGA-II uses a dominance level to directly model convergence. I know LORA-MOO is model-based while NSGA-II is not. LORA-MOO just used a regression-based model to model such a dominance level, so it does not seem to provide new insights about how to model convergence.
> Question 4: We are confused on this comment.
In Line 268, you say "IGD/IGD+ use a set of truth Pareto fronts". What does "a set of truth Pareto fronts" mean? It is more accurate to express as "IGD/IGD+ uses a subset sampled from the true PF."
---
Reply to Comment 1.1.1:
Comment: Thanks for your comment.
We would like to clarify that:
1. In many-objective optimization, it is impossible to model the dominance level as in NSGA-II.
- For a many-objective optimization problem, all solutions in the initial training dataset could be non-dominated, meaning all solutions would fall into the same dominance level. Modeling such a dominance level is useless; it is like training a classification model with all solutions in the same class.
2. The modeling of ordinal relations in LORA-MOO solves the difficulty mentioned above; it is completely different from the dominance level in NSGA-II and overcomes its drawback.
- As described in Section 3.2, the ordinal relations LORA-MOO learns from the training dataset are a mixture of $\lambda$-dominance relations and clustering-based artificial relations. Both relations are novel concepts proposed in this work.
- In addition, there is an ordinal relation mapping before the relations are used to train models, which makes our model stable.
3. The novel ordinal regression model is only a part of our contributions.
- We also proposed a spherical-coordinate-based modeling method to maintain the diversity of non-dominated solutions.
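The claim in point 1, that dominance levels collapse in many-objective settings, is easy to verify numerically. A sketch with uniformly random objective vectors (population size and seed are arbitrary):

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (minimisation): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(m, n=200, seed=0):
    """Fraction of a random population sitting in the first non-domination level."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(m)] for _ in range(n)]
    nd = sum(1 for p in pop if not any(dominates(q, p) for q in pop if q is not p))
    return nd / n

# With 2 objectives only a small share of random points is non-dominated; with
# 10 objectives most of the population collapses into the same dominance level.
```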
---
Rebuttal 2:
Comment: Thank you for your response. I know that the strict dominance relation is not suitable for many-objective scenarios. However, there are many existing relaxed dominance relations, such as $\epsilon$-dominance [1], grid-dominance [2], and $k$-optimality [3], designed for many-objective optimization. It is not clear why the proposed $\lambda$-dominance would be more effective than these existing dominance relations.
In short, LORA-MOO seems to be a recombination of some existing methods or similar ideas. The motivation and necessity for such a combination are not clearly presented. There is no theoretical analysis or ablation study for these components. Therefore, I maintain a negative rating for this paper.
[1] Combining convergence and diversity in evolutionary multiobjective optimization, Evolutionary Computation, 2002.
[2] A grid-based fitness strategy for evolutionary many-objective optimization, GECCO 2010.
[3] A fuzzy definition of ‘optimality’ for many criteria optimization problems, TCYB.
---
Rebuttal 3:
Comment: Thanks for your comment.
We would like to clarify the following issues:
$\textbf{About ablation studies and theoretical analysis}$:
- We do have ablation studies in Section 4.2 and Appendix F. More importantly, our ablation studies have already demonstrated the contributions of these algorithm components (Detailed explanations about this are available in our first rebuttal to weaknesses).
- We have provided a theoretical runtime analysis in the response to another reviewer.
---
$\textbf{About contributions}$:
- We would like to highlight that our modeling method for convergence is called $\textbf{ordinal-regression model}$, $\textbf{NOT dominance-regression model}$. Although the modeling process includes some dominance related relations, it also includes a lot of artificial ordinal relations which are NOT dominance level related relations (described in Section 3.2.2).
- The proposed ${\lambda}$-dominance is $\textbf{only a minor contribution (about 10\%) of our work}$. We have larger contributions in our clustering-based artificial ordinal relations, the mapping of ordinal relations, our spherical-coordinate-based diversity maintenance strategy, our global search strategy ... We are unsure why the reviewer ignores our main contributions while focusing so heavily on the similarity between this minor contribution and existing dominance-related works.
- LORA-MOO is not a combination or recombination of existing methods. Many components in LORA-MOO are not available in any existing studies, such as the clustering-based artificial relations. Again, we hope the reviewer can focus on our main contributions mentioned above.
- There is no motivation for combination since our work is not a combination of existing works.
$\quad$
Thanks for your reading.
---
Rebuttal Comment 3.1:
Comment: Thank you for your reply.
I said, "There is no ablation study **for these components**". The ablation study in Section 4.2 and Appendix F only discussed the parameters, not the contribution of the components. For example, you should replace one component with another to demonstrate that this component is necessary.
I read your "runtime analysis". It is only a time complexity analysis, not a runtime analysis. I think the authors do not know what "runtime analysis" means. You can search "runtime analysis" on Google Scholar to get some good papers to learn.
I keep a negative rating for this paper.
---
Reply to Comment 3.1.1:
Comment: Thanks for your comment.
As we explained in our initial rebuttal, we do have ablation studies for $\textbf{these components}$:
The ablation study reported in Appendix F.2 has demonstrated the contribution of our $\lambda$-dominance (described in Section 3.2.1).
When $\lambda$= 0, the component of $\lambda$-dominance is actually $\textbf{removed}$ from our algorithm.
In Table 4, by comparing the results of $\lambda$ = 0 (performance with normal dominance relations) and $\lambda$ > 0 (performance with this component, $\lambda$-dominance relations), we can observe the contribution of this component and why it is necessary to our algorithm (lines 598-600 in our manuscript).
Similarly, our ablation study in Appendix F.3 shows the contribution of our artificial relations (described in Section 3.2.2) since $rp_{ratio}$ = 1 indicates this component is $\textbf{removed}$. We have provided explanations about the contribution of this component in lines 618-627.
Our ablation study in Appendix F.4 shows the contribution of our clustering-based initialization (see Section 3.3.1) since
$n_c$ = 1 indicates this component is $\textbf{removed}$ and replaced by a random initialization. The performance of $n_c$ = 1 and that of $n_c$ > 1 are compared and explained (lines 641-646).
In summary, we can observe the contributions of our algorithm components directly from our ablation studies.
$\quad$
---
As for the theoretical analysis, we admit it was inappropriate to call it a runtime analysis, but a time complexity analysis is provided.
$\quad$
---
Finally, we would like to know whether we have addressed your concerns about our contributions.
---
Rebuttal 4:
Comment: Thank you for your response. I increased my rating. However, it is still not clear to me what new insights this paper could provide in solving expensive optimization problems. I suggest the authors submit this manuscript to TEC rather than NeurIPS. Maybe the audience there would be more interested in this paper. | Summary: This paper proposes a novel surrogate-assisted evolutionary algorithm named LORA-MOO. Its core contributions are the introduction of an ordinal-regression-based model and a spherical-coordinate approximation to SAEAs, and LORA-MOO can find a good trade-off between optimization efficiency and optimization results.
Strengths: This paper provides a novel perspective for modeling surrogates with high efficiency.
The experimental results seem good.
Weaknesses: Motivation and contributions are limited. To the best of my knowledge, the main contribution of LORA-MOO is introducing an ordinal-regression-based model for convergence and spherical coordinates for diversity. However, why do this, and what are the connections between them? Besides, the manuscript includes a lot of informal expressions. The proposed method is complex and its effectiveness is limited.
Technical Quality: 2
Clarity: 1
Questions for Authors: a. Why are many-objective optimization problems called MOOPs for short? What's the abbreviation of multi-objective optimization problems? I think the author needs to consult more relevant literature to make the expression more formal.
b. Considering that the author mentioned MOBO but provided no comparison in the experimental stage, it would be interesting to provide a comparison with PSL-MOBO [1], qNEHVI [2], DAPSL [3], and so on. In addition, more SOTA methods should be introduced to demonstrate the superiority of LORA-MOO, since almost all the compared methods are out of date.
[1] Lin, Xi, et al. "Pareto set learning for expensive multi-objective optimization." Advances in Neural Information Processing Systems 35 (2022): 19231-19247.
[2] Daulton, S.; Balandat, M.; and Bakshy, E. 2021. Parallel bayesian optimization of multiple noisy objectives with expected hypervolume improvement. Advances in Neural Information Processing Systems, 34: 2187–2200.
[3] Lu, Yongfan, Bingdong Li, and Aimin Zhou. "Are You Concerned about Limited Function Evaluations: Data-Augmented Pareto Set Learning for Expensive Multi-Objective Optimization." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 13. 2024.
c. In all the tables, the author doesn't provide any highlights, which prevents readers from knowing what LORA-MOO is good at or not. Moreover, most tables in the Appendix miss the comparison symbols (=, -, +) for each individual problem.
d. In the experiments section, I only saw some results presented, but I didn't see any in-depth analysis.
e. Lack of Real-World Application Depth: Although a real-world network architecture search problem is mentioned, the paper does not delve deeply into real-world applications. More case studies or industrial applications should be considered [4].
f. Lack of Theoretical Analysis: The paper focuses heavily on empirical results but lacks a rigorous theoretical analysis to support the empirical findings. Theoretical results, such as convergence proofs or time complexity analysis, would strengthen the paper's contributions.
g.The idea of the introduce of spherical coordinates has been discussed in [5]. What is the main difference between [5] and this paper?
[5] Zhang, Xiaoyuan, et al. "Hypervolume maximization: A geometric view of pareto set learning." Advances in Neural Information Processing Systems 36 (2024).
h. The motivation is not enough. It seems that the main target of LORA-MOO is to enhance efficiency via training only a single model. However, I can't find an obvious advantage in terms of the runtime comparison. There is not enough motivation to support the introduction of spherical coordinates.
i. Why PSO is introduced to conduct offspring generation?
j. Why use S_A as the input of the Kriging model (Algorithm 1)? In my understanding, S_o and S_a are enough for construction of Kriging model.
k. In each iteration, only two solutions (one for convergence and one for diversity) are evaluated. So, why not design a dynamic strategy to determine the number of evaluated solutions, considering that the number of promising solutions varies as the population evolves?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: See questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness:
Response: Thanks for your comment.
We are unsure why the reviewer thought our contributions were limited. Our work proposes a novel model and a novel optimization method. In addition, we noticed that the reviewer was the only one who thought our presentation was not good.
The reason for introducing the ordinal-regression-based model is to assist convergence and the optimization search via a single model. If most of the workload of model-based optimization is completed by a single model, the efficiency of the optimization algorithm is enhanced. The spherical coordinates are employed to maintain diversity, but they are used only once, at the end of the optimization search in each iteration. These spherical coordinate models are used with low frequency to improve computational efficiency without compromising solution diversity.
Our experiments show that our algorithm outperforms all the comparison algorithms; we would appreciate it if the reviewer could provide some reasons for the comment ''effectiveness is limited''.
---
Questions a:
Response: Thanks for your comment.
Many-objective optimization problems are called MaOOPs for short to distinguish them from multi-objective optimization problems. However, the abbreviations for many-objective optimization and multi-objective optimization did not appear in our paper simultaneously, so it was not necessary to make such a distinction between the abbreviations.
Nevertheless, we appreciate the reviewer's attention to formal expressions and have revised our abbreviations.
---
Questions b:
Response: Thanks for your comment.
The topic of our work is many-objective optimization rather than multi-objective optimization. As far as we can see, the references mentioned above are all designed for multi-objective optimization.
We actually attempted to run some multi-objective BO methods for comparison purposes. However, recent MOBO methods are mainly hypervolume-based, and the computation of hypervolume is very time-consuming for many-objective problems. Our attempts failed, as it cost more than one day to complete a single run of the MOBO methods on 10-objective problems. We mentioned this in Section 4.1, line 250.
---
Questions c:
Response: Thanks for your comment.
- For the table in the main paper and the last 8 tables in the Appendix, we have presented the statistical test results between LORA-MOO and every compared algorithm, denoted by symbols +, -, and $\approx$. Readers can understand LORA-MOO performance via these symbols.
- For 4 tables about ablation studies, we do not have statistical result symbols for each row since the statistical tests are conducted between all LORA-MOO variants. For example, each variant in Table 3 needs to be compared with all other 4 variants. It is impossible to put 4 statistical test results in 1 cell. Therefore, we put the summary of statistical test results at the end of these 4 ablation studies' tables.
We have added highlights to ablation studies' tables for improving clarity.
---
Questions d:
Response: Thanks for your comment.
Due to the page limitation, we can only report important results in the main paper, but readers can find our ablation studies and in-depth analysis in Appendix F.
---
Questions e:
Response: Thanks for your comment.
We are conducting more real-world experiments and will report them soon.
---
Questions f:
Response: Thanks for your comment.
We admit that some theoretical analysis would strengthen our paper. This is a limitation of our work.
---
Questions g:
Response: Thanks for your comment.
We have looked into the referred paper, but we find very limited similarity between it and our work. The referred paper presents a hypervolume-based algorithm, and it does not approximate spherical coordinates. On the contrary, its model takes a spherical coordinate as input and outputs a solution, whereas our models take solutions as input and output spherical coordinates. We use spherical coordinates in a completely different way; the only similarity is that both works mention spherical coordinates.
---
Questions h:
Response: Thanks for your comment.
Efficiency is one motivation for regression-based SAEAs. Classification-based SAEAs and previous ordinal-regression-based SAEAs train very few surrogates and are thus efficient, but their limited number of surrogates is unable to provide enough information for diversity maintenance. Therefore, these SAEAs perform poorly when the number of objectives is large.
Based on the above two motivations, we made a trade-off and proposed LORA-MOO, which is efficient and effective in diversity maintenance (contributions 1 and 2 listed in Section 1).
From Fig. 4, it can be observed that our runtime is shorter than that of regression-based SAEAs such as KRVEA and KTA2, while our optimization performance is better than that of other algorithms such as REMO and CSEA.
---
Questions i:
Response: Thanks for your comment.
PSO is only used as an optimizer for our algorithm. We did not make any modification to PSO or claim any contribution on PSO, it is not a novelty of our work. PSO can be replaced with any evolutionary optimizer for our algorithm.
---
Questions j:
Response: Thanks for your comment.
$S\_o$ contains only ordinal values and $S_a$ contains only angular coordinates; from the perspective of model training, they are both numerical labels. We need $S_A$ to provide the corresponding decision variables $\textbf{x}$.
---
Questions k:
Response: Thanks for your comment.
Actually, a dynamic strategy is part of our future work. One of our limitations is the static selection strategy of LORA-MOO.
---
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concerns have been partially addressed. Since the lack of additional experiments and theoretical analysis, I have decided to maintain my score.
---
Rebuttal 2:
Comment: Thanks for your comment.
1. $\textbf{About Experiment}$:
We have conducted a new real-world NAS experiment with a different network architecture and 8 objectives (error, parameters, FLOPs, edge GPU latency, edge GPU energy, Eyeriss latency, Eyeriss energy, and Eyeriss arithmetic intensity). The comparison results are consistent with those of the real-world problem reported in our manuscript. LORA-MOO outperforms the comparison algorithms and has reached a mean HV value of 0.5776 over 30 independent runs. We will add the corresponding figure and descriptions to our manuscript (as we are unable to add figures in comments now).
---
2. $\textbf{Theoretical Analysis}$:
We attempt to add the following time complexity analysis:
Notations:
- n: the number of training samples.
- N: the number of test samples.
- m: the number of objectives.
- g: the number of generations for reproducing candidate solutions.
- p: the population size for a generation.
$\quad $
The model used in LORA-MOO is Gaussian Process, the training time complexity is analyzed as follows:
- Time complexity of covariance matrix computation is $O(n^2)$.
- Time complexity of Cholesky decomposition and computation of likelihood: $O(n^3)$.
The prediction time complexity is analyzed as follows:
- Time complexity of computing the covariance between test sample and training samples: $O(n*N)$.
- The time complexity of predicting the mean: $O(n*N)$.
- The time complexity of predicting the variance: $O(n^2*N)$.
In summary, the overall training complexity is $O(n^3)$, and the overall prediction complexity is $O(n^2*N)$.
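The complexity terms above can be read off a minimal GP regression implementation (RBF kernel with fixed hyperparameters; a generic sketch, not the exact model configuration used in LORA-MOO):

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ls=1.0, noise=1e-6):
    """Minimal GP regression with an RBF kernel. The Cholesky factorisation is
    the O(n^3) training step; the variance is the O(n^2) per-test-point step."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d / ls ** 2)

    K = k(X, X) + noise * np.eye(len(X))      # O(n^2) covariance matrix
    L = np.linalg.cholesky(K)                 # O(n^3) training cost
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)                             # O(n * N) cross-covariance
    mean = Ks @ alpha                         # O(n * N) predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - (v ** 2).sum(axis=0)          # O(n^2 * N) predictive variance
    return mean, var

X = np.array([[0.0], [1.0]])
y = np.array([0.0, 1.0])
mean, var = gp_fit_predict(X, y, X)  # near-interpolation at the training inputs
```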
$\quad $
Now we analyze the time complexity of model-based optimization algorithms, for each iteration, the number of test samples is $p * g$, so the total number of test samples is approximately $N = n * p * g$.
1. For LORA-MOO, for a $m$-objective optimization problem:
- The time complexity of training an ordinal model and $m-1$ angular models is $O(n^3* m)$.
- The time complexity of prediction in the ordinal model is $O(n^3 * g * p)$.
- The time complexity of prediction in $m-1$ angular models: $O(n^3 * p * (m-1))$.
- The overall time complexity in models for LORA-MOO:
$O(n^3 * (m + g * p + p * m - p)) \approx O(n^3 * (p * g + p * m)) \approx O(n^3 * p * (m + g))$
2. In comparison, for other optimization algorithms with $m$ surrogate models:
- The time complexity of training $m$ models: $O(n^3 * m)$
- The time complexity of prediction: $O(n^3 * g * p * m)$.
- The overall time complexity in $m$ models: $O(n^3 * (m * (1 + g * p))) \approx O(n^3 * p * m * g)$
3. For other optimization algorithms with only one surrogate model:
- The overall time complexity: $O(n^3 * (1 + g * p)) \approx O(n^3 * p * g)$
$\quad $
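Plugging illustrative values into the three derived expressions makes the scaling with $m$ visible (the constants $n$, $p$, $g$ below are arbitrary; only the relative growth matters):

```python
# Illustrative constants; big-O constants are ignored throughout.
n, p, g = 100, 50, 20  # training samples, population size, generations

def cost_lora(m):       # O(n^3 * p * (m + g))
    return n ** 3 * p * (m + g)

def cost_m_models(m):   # O(n^3 * p * m * g)
    return n ** 3 * p * m * g

def cost_one_model(m):  # O(n^3 * p * g)
    return n ** 3 * p * g

for m in (3, 10, 20):
    print(m, cost_lora(m) / cost_one_model(m), cost_m_models(m) / cost_one_model(m))
```

Relative to a single-surrogate algorithm, LORA-MOO's cost grows as $(m + g)/g$, whereas an algorithm with $m$ surrogate models grows as $m$.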
Therefore, increasing the number of objectives $m$ has limited impact on the time cost of LORA-MOO ($O(n^3 * p * (m + g))$), but for the comparison algorithms with $m$ surrogate models, the time cost increases rapidly ($O(n^3 * p * m * g)$).
Although LORA-MOO has $m$ surrogate models in total, its time complexity is not significantly larger than that of optimization algorithms with only one surrogate model ($O(n^3 * p * g)$). | Summary: This paper introduces a surrogate-assisted method for multi-objective optimization. The approach learns a surrogate function with ordinal values as the regression labels. The ordinal values are generated using an iterative algorithm, with the most dominated solutions having the highest ordinal values. The ordinal values are used to train a Kriging model, which is used to select a point for observation via a convergence criterion. Another point is selected via a diversity criterion using a Kriging model trained on spherical coordinates. The approach has several parameters, which are tuned via experimentation on real and benchmark datasets.
Strengths: - The approach provides an innovative way of optimizing multi-objective functions by separating out the two objectives, thus simplifying the problem.
1) The convergence objective designed to select the best solution wrt the ordinal values
2) The diversity objective designed to improve the diversity of the Pareto optimal solutions
- It is experimentally shown that the method improves the IGD metric on several benchmark and real MOO problems.
- The ideas presented in the paper are well motivated and well presented.
Weaknesses: - The approach presented in the paper is not sufficiently novel. Ordinal regression for multi-objective optimization has been studied before [1]. The differences with related prior work have not been discussed in detail.
- The proposed algorithm has many tunable parameters, and it is unclear how the parameters affect performance on real world problems when they have only been tuned on benchmark problems.
- The real world experiment on NAS shows improved regret eventually, but converges slower than other existing approaches. It is difficult to judge on the effectiveness of this approach based on a single experiment. Experiments on more real world optimization problems are necessary to make a conclusion.
- The paper is missing several notable MOO approaches from the Bayesian optimization community [2,3,4,5].
[1] Yu, Xunzhao, et al. "Domination-based ordinal regression for expensive multi-objective optimization." 2019 IEEE symposium series on computational intelligence (SSCI). IEEE, 2019.
[2] Tu, Ben, et al. "Joint entropy search for multi-objective bayesian optimization." Advances in Neural Information Processing Systems 35 (2022): 9922-9938.
[3] Zhang, Richard, and Daniel Golovin. "Random hypervolume scalarizations for provable multi-objective black box optimization." International conference on machine learning. PMLR, 2020.
[4] Paria, Biswajit, Kirthevasan Kandasamy, and Barnabás Póczos. "A flexible framework for multi-objective bayesian optimization using random scalarizations." Uncertainty in Artificial Intelligence. PMLR, 2020.
[5] Abdolshah, Majid, et al. "Multi-objective Bayesian optimisation with preferences over objectives." Advances in neural information processing systems 32 (2019).
Technical Quality: 3
Clarity: 4
Questions for Authors: What parameters were used for the NAS problem and how were they tuned? Do parameters tuned on benchmark problems generalize to real world problems?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness 1: The approach presented in the paper is not sufficiently novel. Ordinal regression for multi-objective optimization has been studied before [1]. The differences with related prior work have not been discussed in detail.
Response: Thanks for your comment.
We would like to add the following explanations to Section 2 to emphasize our novelties:
A specific drawback of [1] is the lack of information regarding solution distribution, which results in poor optimization performance when the number of objectives is large. In other words, the method proposed in [1] lacks an efficient diversity maintenance strategy, making it unsuitable for many-objective optimization.
Our work is designed for many-objective optimization. We have the following novelties when compared with [1]:
- We introduced $\lambda$-dominance to simplify the quantification of ordinal relations.
- We added artificial relations to alleviate the imbalance of training sets caused by the increasing number of objectives.
- We developed a spherical coordinate based diversity maintenance strategy to improve the diversity of obtained non-dominated solutions.
- The reproduction and selection methods are quite different. Our LORA-MOO uses global search, while [1] contains local search.
---
Weakness 2: The proposed algorithm has many tunable parameters, and it is unclear how the parameters affect performance on real world problems when they have only been tuned on benchmark problems.
Response: Thanks for your comment.
Our benchmark problems have covered diverse features of optimization problems (such as unimodal, multimodal, scaled functions, degenerated Pareto front, shifted Pareto front, and disconnected Pareto front) and we have conducted comprehensive ablation studies on them. When looking at a benchmark problem with specific features, we can observe and conclude how tunable parameters affect the optimization performance on problems with these features.
Therefore, if we obtain any prior knowledge about what features a real-world problem has, we may be able to tune parameters accordingly (based on the experience gained from benchmark problems).
In addition, we argue that it is not appropriate to tune parameters on real-world problems: for expensive real-world optimization problems, tuning parameters before solving them is unrealistic, and the cost of tuning parameters on real expensive problems is unaffordable.
---
Weakness 3: The real world experiment on NAS shows improved regret eventually, but converges slower than other existing approaches. It is difficult to judge on the effectiveness of this approach based on a single experiment. Experiments on more real world optimization problems are necessary to make a conclusion.
Response: Thanks for your comment.
The compared algorithms converge quickly at the early stage due to their local search strategies (e.g., KTA2 uses only optimal evaluated solutions to reproduce new candidate solutions). However, both are adversely affected by the side effect of local search: when the number of evaluations reaches 200, they tend to be trapped in local optima.
In comparison, our LORA-MOO uses a global search strategy, so it is relatively slow at the beginning but converges continuously during the optimization. It can be observed that the convergence speed of LORA-MOO does not slow down when the number of evaluations reaches 300, whereas the compared algorithms have low convergence speeds at that point.
---
Weakness 4: The paper is missing several notable MOO approaches from the Bayesian optimization community [2,3,4,5].
Response: Thanks for your comment.
We have added these references to our Section 2.2.
---
Question: What parameters were used for the NAS problem and how were they tuned? Do parameters tuned on benchmark problems generalize to real world problems?
Response: Thanks for your comment.
We used the parameters we tuned on benchmark problems on the NAS problem directly. The detailed parameter settings are available in Section 4.1 and Appendix F.
It should be noted that a given parameter setting could lead to better results on some optimization problems but also could lead to worse results on other optimization problems. Just like the No Free Lunch rule, it is impossible to find a parameter setting that is optimal to all optimization problems.
Considering our benchmark problems have covered diverse features of optimization problems and we have conducted comprehensive ablation studies on them, we think our parameter setting is optimal for most optimization problems. Therefore, we did not tune our parameters for the NAS problem.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am keeping my current score as
- We cannot draw a concrete conclusion based on results from a single real world experiment.
- While there are several improvements proposed in this paper over prior work, the improvements are relatively minor.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments.
---
About real-world experiment:
We have conducted a new real-world NAS experiment with a different network architecture and 8 objectives (error, parameters, FLOPs, edge GPU latency, edge GPU energy, Eyeriss latency, Eyeriss energy, and Eyeriss arithmetic intensity). The comparison results are consistent with those on the real-world problem reported in our manuscript. Our LORA-MOO outperforms the comparison algorithms and has reached a mean HV value of 0.5776 over 30 independent runs. We will add the corresponding figure and descriptions to our manuscript.
---
About contributions:
We would like to clarify that:
- Of the differences between our LORA-MOO and OREA listed above, only the first is an improvement over prior work, and it is a very minor contribution in our work.
- The remaining three differences (1. clustering-based artificial relations, 2. modeling angles in the spherical coordinate system for diversity maintenance, and 3. a novel global search method with novel initialization and reproduction strategies) are our main contributions. These three contributions are completely new ideas we developed to solve many-objective optimization problems; they do not exist in prior studies and are not improvements of other works.
- The development of our clustering-based artificial relations defines a novel way to learn ordinal-regression models. It distinguishes our ordinal model from prior work.
- The development of our angle modeling method provides a novel way to maintain diversity, which is different from existing studies.
- OREA was not designed for many-objective optimization, it can be observed from our experiments that OREA only works well on multi-objective problems, while our LORA-MOO outperforms OREA significantly on many-objective optimization problems.
Although we used an ordinal regression model in our work, we hope the reviewer can take a look at our other three main contributions. Just as there are many different classification models across diverse studies, ordinal regression models can also be different and diverse.
Thanks. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Long-range Meta-path Search on Large-scale Heterogeneous Graphs | Accept (poster) | Summary: This paper proposes an efficient meta-path search method on large-scale heterogeneous graphs. The proposed progressive sampling strategy and sampling evaluation strategy are effective for reducing the memory and time overhead, especially when the maximum hop is large. Experimental results show the effectiveness and efficiency of the proposed method.
Strengths: 1. The paper is well-written and easy to understand.
2. The experiments are comprehensive and the experimental results reveal the effectiveness of the proposed method.
3. The motivation sounds reasonable.
Weaknesses: 1. The novelty of this paper is limited.
2. The paper lacks a theoretical analysis about why the sampling strategy in the search stage is valid.
3. Experimental results on more large-scale graph datasets should be conducted.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The differentiable search strategy is widely used in many automated graph learning tasks, such as DiffMG. There is no significant contribution in the search strategy other than the progressive sampling method. Moreover, the proposed sampling method is simply based on path strength. Similarly, the proposed sampling evaluation, which uses multiple rounds of sampling to select the top-$M$ meta-paths, is also straightforward. Overall, the major concern about this paper is the limited novelty, although the experimental results seem good.
2. Why the simple progressive sampling method is effective? The authors should add a theoretical analysis about the effectiveness. For example, is it possible to filter out the promising meta-paths especially in the early stage of the search process?
3. The paper aims to achieve the meta-path search on large-scale datasets. However, only one large-scale dataset (i.e., OGBN-MAG) is employed. It is suggested to add more large-scale datasets to evaluate the effectiveness of the proposed method.
4. More experimental analysis should focus on large-scale datasets, such as the performance and efficiency analysis in Figure 3.
5. The interpretability of the meta-paths searched by LMSPS should be explained in detail. Many searched meta-paths seem strange and unreasonable, for example, PPPPPP in ACM and PPPAIAI in OGBN-MAG. The semantics of these meta-paths are hard to understand.
6. It is suggested to analyze all searched meta-paths in-depth and give more insights about how to choose suitable meta-paths manually.
7. Many hyperparameters need to be tuned on each dataset, such as the search space size $C$, the number of sampled meta-paths $M$, and the maximum hops.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. In the following, we respond to your concerns point by point.
---
### **W1: This paper has limited novelty because the proposed method is simple and straightforward.**
**R1:** Although the other three reviewers appreciated the novelty, we are grateful that the reviewer gave us the opportunity to highlight it. We would like to clarify that the simplicity of the proposed LMSPS does not mean the novelty is limited. The novelty is closely related to the contribution. **If a method is effective, a simple and straightforward design is better than a complex design**.
The contributions of LMSPS have been summarized in the global response. To achieve the goals listed in the contributions, we propose a progressive sampling algorithm and a sampling evaluation strategy to overcome the efficiency and effectiveness challenges, respectively. **The high efficiency and strong generalization of LMSPS come precisely from the simplicity** of the proposed method.
In addition, compared to other differentiable search methods such as DiffMG, both the progressive sampling algorithm and sampling evaluation strategy are novel. In Table 3, LMSPS is compared with them and shows obvious advantages. We will clarify the novelty more clearly in the revised manuscript. Thank you very much!
---
### **W2: Why is the simple progressive sampling method effective?**
**R2:** Thank you for the insightful question. As you mentioned, it is hard to filter out promising meta-paths in the early stage. So, as described in Lines 609-610 of Appendix B.4, LMSPS **warms up the parameters** for 20 epochs without dropping any meta-paths.
Because **the effectiveness of the algorithm is determined entirely by the final performance**, we cannot provide a theoretical analysis that demonstrates effectiveness without experimental results. However, we have conducted adequate experiments to validate its effectiveness.
* In Table 3, LMSPS is compared with six methods and shows obvious advantages.
* In Table 6, the ablation study shows that both the progressive sampling method and sampling evaluation strategy can improve performance significantly.
* In Table 11, the results show that the search stage of LMSPS can converge well on all five datasets.
We believe these results have validated the effectiveness. In addition, we have also provided an intuitive analysis of the effectiveness in Lines 634-639. Thanks to your question, we will highlight the effectiveness of the algorithm more clearly.
---
### **W3: It is suggested that more large-scale heterogeneous datasets be added.**
**R3:** Thank you for the suggestion. Different from homogeneous graph fields, there are few academic large-scale datasets in heterogeneous graph fields due to heterogeneity. OGBN-MAG is the only large-scale heterogeneous graph dataset with a leaderboard and comparable baselines.
To evaluate the effectiveness of LMSPS in more large-scale heterogeneous graphs, in Table 5, we also conduct experiments on four constructed datasets based on OGBN-MAG to demonstrate that the advantages of utilizing long-range dependencies are more obvious for sparser heterogeneous graphs. Based on your suggestion, if you could kindly provide some suitable large-scale heterogeneous datasets, we would love to evaluate LMSPS on them. Thank you!
---
### **W4: More analysis, such as performance and efficiency analysis, should focus on large-scale datasets.**
**R4:** Thank you for the suggestion. We would like to clarify that Table 2 and Lines 292-299 have analyzed the performance and efficiency of LMSPS on large-scale datasets under different maximum hops. In addition to Table 2, we also conducted a large number of experiments on large-scale datasets in Tables 1, 5, 6, 10, and 11. For most other experiments, we can only conduct reasonable comparisons on small and medium datasets **because most baselines run out of memory on large-scale heterogeneous graphs**. We will highlight the experiments and analysis of large-scale datasets in the revised manuscript. Thank you!
---
### **W5&W6: The interpretability of searched meta-paths should be explained. It is suggested to give more insights about how to choose meta-paths manually.**
**R5&R6:** Thank you for the suggestion. One of the important features of meta-paths is their hop count. The 5-hop meta-path "PPPPPP" means aggregating the node features from 5-hop "P" neighbors through the paths "PPPPPP". We also provided a detailed example in the global response for understanding the searched meta-paths.
The manual meta-paths rely on intense expert knowledge, which is both laborious and data/task-dependent. Considering it is impossible to understand the meta-paths of all real-world heterogeneous graphs, our automatic meta-path search shows great advantage by freeing researchers from the understand-then-apply paradigm.
In Lines 674-706, we have focused on explaining and giving some insights about choosing meta-paths in OGBN-MAG. We summarize them as follows: **the importance of information from P (Papers), F (Fields), and I (Institutions) to A (Authors) gradually decreases**.
The insights of the other datasets are shown in the global response. Due to space limitations, we can not interpret hundreds of meta-paths one by one here. We will add the explanations and insights in the revised manuscript. Thank you!
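To make the hop-based reading of a searched meta-path concrete, below is a minimal sketch (not the authors' implementation; the toy graph, one-hot features, and row-normalized averaging are our own illustrative assumptions) of pre-computing features along a paper-to-paper meta-path such as "PPPPPP" by repeated neighbor aggregation:

```python
import numpy as np

# Toy citation graph with 4 papers; A[i, j] = 1 if paper i cites paper j.
# (Hypothetical data for illustration only.)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot paper features

# Row-normalize so each propagation step averages over neighbors.
A_hat = A / np.clip(A.sum(axis=1, keepdims=True), 1, None)

# "PPPPPP" = aggregate features along 5 hops of paper->paper edges.
H = X
for _ in range(5):
    H = A_hat @ H

print(H.shape)  # one aggregated feature vector per target paper
```

One such matrix can be pre-computed per meta-path, which is why only the selected top-$M$ meta-paths need to be materialized.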
---
### **W7: Many hyperparameters need to be tuned on each dataset, such as C, M, and the maximum hops.**
**R7:** Thank you for the question. We would like to clarify that **both C and M are not tuned on each dataset**. In Line 603, we have described that the number of selected meta-paths M is 30 for all datasets. Similarly, C is not tuned on each dataset. Based on Eq. (3), C is not a hyper-parameter but a dynamic value progressively decreasing with the search epochs. We will clarify them in the revised manuscript.
Thank you once again for your insightful comments.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Dear authors,
Thanks for your rebuttal. My further concerns are as follows:
1. I recognize the statement that a simple and straightforward design is better than a complex design. But I want to emphasize that the novelty is rather incremental, although the experimental results seem promising. Differentiable search methods are widely used in many applications inspired by NAS (Neural Architecture Search). Applying a differentiable search method to meta-path search is not a novel idea. Moreover, some similar ideas for selecting meta-paths and reducing reliance on expert experience have also been proposed. In other words, searching for meta-paths for heterogeneous graphs is not a new topic, and applying a commonly used search method to it is not very exciting. Although the authors state that the progressive sampling algorithm and sampling evaluation strategy are novel, from my view, these two tricks are too engineering-oriented and lack any theoretical analysis. And the main body of this paper is still the differentiable search method. **Overall, the novelty of the paper is not yet up to the level of NeurIPS**.
2. The paper focuses on meta-path search on large-scale graphs. However, only one large-scale graph is used. Moreover, it is suggested that the authors choose some large graphs from other fields, not limited to academic graphs. **Many large graphs in OGB are heterogeneous and can be used for evaluation**.
3. The concern that many searched meta-paths seem strange and unreasonable still remains. Although the authors select some searched meta-paths and explain them, most searched meta-paths lack interpretability. Interpreting the searched meta-paths one by one is infeasible. If the authors can find some common characteristics among the searched meta-paths, this will have greater significance for guiding the design of meta-paths.
Overall, due to the limited novelty, the lack of more large-scale graph datasets from different fields, and the lack of the interpretability and the insight about the searched meta-paths, I maintain my ratings unchanged.
---
Reply to Comment 1.1.1:
Title: Theoretical analysis about the reasonableness of sampling search
Comment: **Zero-order condition**: Consider two high-dimensional random variables $\mathbf{y} = f(\mathbf{x}) \in \mathbb{R}^{m \times d_1}$ and $\mathbf{z} = g(\mathbf{x}) \in \mathbb{R}^{m \times d_1}$. We say that $\mathbf{y}$ and $\mathbf{z}$ satisfy the zero-order condition if, for any valid sample $\mathbf{x} \in \mathbb{R}^{n \times d}$, the inequality $\|\mathbf{y} - \mathbf{z}\|_2 \leq \epsilon$ holds, where $\epsilon$ is a very small positive number.
**Lemma 1**: Let $M$ represent the maximum number of activable paths, with each pair of operations satisfying the zero-order condition. Using $M$ distinct expectations and variances, it is possible to approximate all combinations (i.e., $2^M$).
Lemma 1 guarantees that we can track the combination containing $i$ meta-paths with at least $M$ iterations. **Given that the number of iterations significantly exceeds $M$, the relative importance of the meta-paths can be learned during the search stage.**
Below is the proof of Lemma 1:
Let $\mathbf{y} = f(\mathbf{x})$ with $\mathbf{y} \sim p_{\mathbf{y}}(\mathbf{y})$, $\mathbf{z} = g(\mathbf{x})$ with $\mathbf{z} \sim p_{\mathbf{z}}(\mathbf{z})$, and $\mathbf{x} \sim p_{\mathbf{x}}(\mathbf{x})$.
For the case $M=1$, the expectations of $\mathbf{y}$ and $\mathbf{z}$ can be expressed as follows:
$$
\mathbb{E}[\mathbf{y}] = \mathbb{E}[f(\mathbf{x})] = \int p_{\mathbf{x}}(\mathbf{x})\, f(\mathbf{x})\, \mathrm{d}\mathbf{x}
$$
$$
\mathbb{E}[\mathbf{z}] = \mathbb{E}[g(\mathbf{x})] = \int p_{\mathbf{x}}(\mathbf{x})\, g(\mathbf{x})\, \mathrm{d}\mathbf{x}
$$
According to the zero-order condition, $f(\mathbf{x}) \approx g(\mathbf{x})$. Since $p(\mathbf{x})$ is the same for both $\mathbf{y}$ and $\mathbf{z}$, it follows that $\mathbb{E}[\mathbf{y}] \approx \mathbb{E}[\mathbf{z}]$.
Next, we prove that $Var[\mathbf{y}] \approx Var[\mathbf{z}]$. Note that $Var[\mathbf{y}] = \mathbb{E}\left[\mathbf{y}^{2}\right] - \left(\mathbb{E}[\mathbf{y}]\right)^{2}$ and $Var[\mathbf{z}] = \mathbb{E}\left[\mathbf{z}^{2}\right] - \left(\mathbb{E}[\mathbf{z}]\right)^{2}$. Thus, it suffices to prove that $\mathbb{E}\left[\mathbf{y}^{2}\right] \approx \mathbb{E}\left[\mathbf{z}^{2}\right]$. This can be similarly demonstrated as follows:
$$
\mathbb{E}\left[\mathbf{y}^{2}\right] = \int p_{\mathbf{y}}(\mathbf{y})\, \mathbf{y}^{2}\, \mathrm{d}\mathbf{y} = \int p_{\mathbf{x}}(\mathbf{x})\, f^{2}(\mathbf{x})\, \mathrm{d}\mathbf{x}
$$
$$
\mathbb{E}\left[\mathbf{z}^{2}\right] = \int p_{\mathbf{z}}(\mathbf{z})\, \mathbf{z}^{2}\, \mathrm{d}\mathbf{z} = \int p_{\mathbf{x}}(\mathbf{x})\, g^{2}(\mathbf{x})\, \mathrm{d}\mathbf{x}
$$
According to the zero-order condition, we have $Var[\mathbf{y}] \approx Var[\mathbf{z}]$.
For the case of $M=2$, when both paths are selected, the output becomes $\mathbf{y} + \mathbf{z}$, and its expectation can be written as:
$$
\mathbb{E}[\mathbf{y} + \mathbf{z}] = \mathbb{E}[\mathbf{y}] + \mathbb{E}[\mathbf{z}] \approx 2\,\mathbb{E}[\mathbf{y}]
$$
The variance of $\mathbf{y} + \mathbf{z}$ is:
$$
Var[\mathbf{y} + \mathbf{z}] \approx Var[2\mathbf{y}] = 4\, Var[\mathbf{y}]
$$
Thus, there are two types of expectations and variances: $\mathbb{E}[\mathbf{y}]$ and $Var[\mathbf{y}]$ for ${\mathbf{y}, \mathbf{z}}$, and $2\mathbb{E}[\mathbf{y}]$ and $4Var[\mathbf{y}]$ for ${\mathbf{y} + \mathbf{z}}$. Similarly, for the case where $M \in [1, K]$, there will be $M$ types of expectations and variances.
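As a quick numerical sanity check of the $M=2$ case (our own illustration, not part of the original proof; the choices of $f$, $g$, and $\epsilon$ are arbitrary), a Monte Carlo simulation confirms $\mathbb{E}[\mathbf{y}+\mathbf{z}] \approx 2\,\mathbb{E}[\mathbf{y}]$ and $Var[\mathbf{y}+\mathbf{z}] \approx 4\,Var[\mathbf{y}]$ when $f$ and $g$ satisfy the zero-order condition:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

# f and g satisfy the zero-order condition: |f(x) - g(x)| <= eps everywhere.
eps = 1e-6
def f(x): return np.tanh(x)
def g(x): return np.tanh(x) + eps

y, z = f(x), g(x)
s = y + z  # output when both paths are selected (the M = 2 case)

# E[y + z] ~= 2 E[y] and Var[y + z] ~= 4 Var[y], as in the proof.
assert abs(s.mean() - 2 * y.mean()) < 1e-4
assert abs(s.var() - 4 * y.var()) < 1e-4
print("zero-order condition checks passed")
```

Note that the factor of 4 (rather than 2) arises because $\mathbf{y}$ and $\mathbf{z}$ are nearly identical, so their covariance approximately equals $Var[\mathbf{y}]$.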
---
Rebuttal 2:
Title: Thank you for the detailed response!
Comment: Thank you very much for the detailed response. In the following, we respond to your concerns point by point.
---
### **W1: Searching for meta-paths for heterogeneous graphs is not a new topic. Applying a commonly used search method for a not new topic is not very exciting.**
**R1:** We appreciate that the reviewer gave us the opportunity to highlight our novelty again. We agree that searching for meta-paths for heterogeneous graphs is not a new topic. However, we would like to clarify that:
1. LMSPS is the first HGNN that makes it possible to achieve automated meta-path search for large-scale heterogeneous graph node property prediction.
2. LMSPS is the first HGNN to utilize long-range dependency in large-scale heterogeneous graphs.
3. The searched meta-paths of LMSPS can be generalized to other HGNNs to boost their performance, which has not been achieved by existing works.
**All the above contributions are new topics.** In addition, as the reviewer notes, differentiable search methods are widely used in many applications inspired by NAS, and dozens of such papers have been accepted by top conferences. In our paper, we have cited the related papers a dozen times, highlighted the differences between LMSPS and them, and compared LMSPS with the representative methods. **Overall, we can conclude that our search method is entirely new.** The theoretical analysis of the reasonableness of the sampling search is provided below. Thank you very much!
---
### **W2: Many large graphs in OGB are heterogeneous and can be used for evaluation**.
**R2:** Thank you for the question. **We would like to clarify that many excellent works [1-4] also highlight their advantage on large-scale heterogeneous node property prediction based only on results on OGBN-MAG.** In addition, OGB contains only six datasets for node property prediction: ogbn-products, ogbn-proteins, ogbn-arxiv, ogbn-papers100M, ogbn-mag, and MAG240M. Except for ogbn-mag and MAG240M, the other four datasets are not heterogeneous. Because MAG240M contains over 240,000,000 nodes, it has hardly been tested by related works. The following tables show the statistics of ogbn-mag and MAG240M. Although we have tried to run MAG240M, our hardware cannot support the training even in the preprocessing stage. Thank you very much!
| Dataset | Num paper nodes | Num author nodes | Num institution nodes | Num field nodes | Total |
| -------- | --------------- | ---------------- | --------------------- | --------------- | ------------------- |
| ogbn-mag | 736,389 | 1,134,649 | 8,740 | 59,965 | 1,939,743 |
| MAG240M | 121,751,666 | 122,383,112 | 25,721 | - | 244,160,652 (126x) |
| Dataset | Num paper-paper edges | Num author-paper edges | Num author-institution edges | Num paper-field edges | Total |
| -------- | --------------------- | ---------------------- | ---------------------------- | --------------------- | -------------------- |
| ogbn-mag | 5,416,271 | 7,145,660 | 1,043,998 | 7,505,078 | 21,111,007 |
| MAG240M | 1,297,748,926 | 386,022,720 | 44,592,586 | - | 1,728,364,232 (82x) |
---
### **W3: The authors should find some common insights from the searched meta-path.**
**R3:** We appreciate that the reviewer gave us the opportunity to highlight our insights again. We have shown insights into DBLP, IMDB, and ACM in the global response. We show them again as follows.
* In DBLP with target node type Author, the information from P (Paper) and A (Author) is slightly more important than that from T (Term) and V (Venue).
* In IMDB with target node type Movie, the importance of information of K (Keyword), M (Movie), A (Actor) and D (Director) gradually decreases.
* In ACM with target node type Paper, the importance of information of P (Paper), A (Author) and C (Conference) gradually decreases.
* **For all related datasets, the importance of node type is highly related to the target node type**.
---
Thank you once more for your efforts. Please kindly let us know if our response has addressed your concerns. We are happy to answer your remaining concerns and questions if you have any.
---
[1] Open Graph Benchmark: Datasets for Machine Learning on Graphs. NeurIPS, 2020.
[2] Graph Attention Multi-Layer Perceptron. KDD, 2022.
[3] Simple and Efficient Heterogeneous Graph Neural Network. AAAI, 2023.
[4] An Efficient Subgraph-Inferring Framework for Large-Scale Heterogeneous Graphs. AAAI, 2024. | Summary: The paper proposes a novel framework LMSPS, aimed at efficiently utilizing long-range dependencies in large-scale HIN. The framework addresses two primary challenges: reducing computational costs while maximizing information utilization and overcoming the over-smoothing problem common in GNNs. LMSPS employs a progressive sampling algorithm to dynamically reduce the search space for meta-paths, thus identifying a subset of effective meta-paths tailored to the specific dataset and task.
Strengths: 1. The paper is well-written and easy to follow. The technical designs are clearly described.
2. The idea of using a progressive sampling algorithm to narrow the search space for meta-paths is novel and well motivated.
3. The framework is designed to handle large-scale graphs efficiently, maintaining stable performance and resource usage even as the complexity of meta-paths increases.
4. The proposed method consistently outperforms SOTA baselines across multiple datasets, including large-scale datasets like OGBN-MAG, demonstrating its robustness and effectiveness.
Weaknesses: 1. I still have some doubts about the necessity of modeling long-range dependencies in heterogeneous graphs. For example, in an academic network, the label of a paper can be well predicted by relying on some very close nodes. Could the authors provide an example to illustrate specific situations where long-range dependency is crucial?
2. Although the paper compares LMSPS with various baselines, more detailed ablation studies focusing on the individual components of LMSPS would strengthen the validation of its effectiveness.
Technical Quality: 3
Clarity: 4
Questions for Authors: See W1.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments that greatly encourage us. In the following, we respond to your concerns point by point.
---
### **W1: I still have some doubts about the necessity of modeling long-range dependencies in heterogeneous graphs. Could the authors provide an example to illustrate specific situations where long-range dependency is crucial?**
**R1:** Thank you so much for the insightful question. As shown in Table 2 of Section 6.3, SeHGNN cannot utilize meta-paths longer than three hops on OGBN-MAG, and its best performance is 52.44%, while the performance of LMSPS increases from 52.72% to 54.83% as the maximum hop grows from 3 to 6. We can see that long meta-paths can increase performance significantly, though the performance of short meta-paths is also not bad. Utilizing long meta-paths means freely combining effective information from long and short meta-paths, which is the core advantage of LMSPS.
Understanding long meta-paths is difficult. However, automatic meta-path search actually frees researchers from the understand-then-apply paradigm. Based on LMSPS, we can search long and effective meta-paths without prior knowledge for various datasets. It is much more convenient than defining manual meta-paths based on expert knowledge, which is both laborious and data/task-dependent.
Thanks to your suggestion, we take the meta-path MDMDMK (M←D←M←D←M←K) from IMDB as an example. IMDB includes four entity types: Movies (M), Directors (D), Keywords (K), and Actors (A). The task is to predict the category of the target movies. MDMDMK is a 5-hop meta-path that is hard for experts to understand and then apply. However, for many movies without keywords, the meta-path M←D←M←D←M←K is important because the target movies can aggregate keyword information from the movies of co-directors. This example shows **the ability of long-range dependencies to supply missing information that cannot be obtained from close nodes**.
We hope the example meets your requirements. We will clarify the necessity of modeling long-range dependencies more clearly in the revised manuscript. Thank you!
---
### **W2: More detailed ablation studies focusing on the individual components of LMSPS would strengthen the validation of its effectiveness.**
**R2:** Thank you for the kind suggestion. In Table 6 of Section 6.6, we have conducted ablation studies to analyze the effects of individual components in LMSPS. Specifically, we separately remove the progressive sampling method, sampling evaluation strategy, and concatenation operation to observe the performance of LMSPS. As shown in Table 6, the performance of LMSPS significantly decreases when removing progressive sampling or sampling evaluation strategy and slightly decreases after replacing the concatenation operation with the transformer block. We will highlight the ablation study in the revised manuscript. Thank you! | Summary: The paper proposes a new framework called Long-range Meta-path Search through Progressive Sampling (LMSPS), which differs from traditional meta-path-based GNN training methods on heterogeneous graphs. LMSPS introduces a strategy for building a search space that includes all meta-paths related to the target node type and employs a sampling evaluation strategy to conduct specialized and effective meta-path selection.
The authors studied two new observations: a) A small number of meta-paths dominate the performance, b) Certain meta-paths can have a negative impact on performance.
They designed the LMSPS framework with a super-net in the search stage and a target-net in the training stage to mitigate costs and the over-smoothing problem that occurs in recent Heterogeneous Graph Neural Networks.
In conclusion, the paper presents a new framework that could help us better understand the challenges of leveraging long-range dependencies in large-scale heterogeneous graphs.
Strengths: * The paper in general presents a nice framework. The paper is well-written, has good coherence, and is well-structured.
* The two observations are interesting, i.e., a few meta-paths dominate the performance, and certain meta-paths can have a negative impact on performance.
* The paper is very clear with thorough experiments and analysis. The originality of the experiment is strong.
* The experiments have demonstrated their limitations and possibilities to improve the effectiveness across datasets.
Weaknesses: * Even though the observations are interesting, I think the paper could do more to explore their implications for robust generalization. What does robust generalization to other classes imply or reveal about the process of long-range meta-path GNN training? The reduction of meta-path samples is effective. The authors clarify that the optimal maximum hop depends on the dataset and task, as mentioned in Appendix G, and cannot be determined automatically. Thus, will the sampling search impact the model's robustness?
* The evaluation results observed in Figure 1(a) and (b) cannot demonstrate the improvement in performance upon the removal of certain meta-paths. This raises the question of whether the observation that "certain meta-paths can have a negative impact on performance" is generally applicable. The second observation, from Figure 1(c), is derived from the ACM dataset. Will this observation be valid for other heterogeneous graphs?
* What is the cost of the pre-processing stage? Can it be reduced by sampling to decrease the neighbor aggregation costs?
* Line 172 clarifies that MLP requires less human intervention compared to Transformer. Are there specific experiments demonstrating the superiority of MLP?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments that greatly encourage us. In the following, we respond to your concerns point by point.
---
### **W1: What does robust generalization to other classes imply or reveal about the process of long-range meta-path GNN training? Will the sampling search impact the model's robustness on searched meta-paths?**
**R1:** Thank you for the insightful question. As described in Lines 170-171, to discover meta-paths with high generalization, the search results should not be affected by specific modules. So, the more complex the model, the harder it is for the discovered meta-paths to generalize well, which can be viewed as a kind of overfitting to architectures. Based on this consideration, and in contrast to previous works [1-3] that do not demonstrate such generalization, the architecture of our model is very simple, with pure MLPs as the parametric modules.
The sampling strategy is very important for our meta-paths search. Its function can be summarized as follows.
* Similar to dropout, the sampling strategy keeps the parametric modules changing in the search stage, which is important for preventing the search meta-paths from being affected by specific modules. **So, the sampling search can increase the generalization ability.**
* When the maximum hop is large, the search stage will run out of memory without the sampling strategy.
* The sampling strategy can overcome the deep coupling issue [4] of differentiable search by introducing randomness in each iteration.
As the search stage aims to determine the relative importance of meta-paths rather than achieve robust accuracy, the sampling search has little impact on the robustness of the searched meta-paths. Specifically, because the search stage has many iterations, the architecture parameter of each meta-path will be updated multiple times and the relative importance can be learned during training even with the sampling strategy. It is supported by Table 11 in Appendix E.3, which shows that the search stage of LMSPS can converge well on all five datasets. Thanks to the question, we will clarify the corresponding part more clearly in the revised manuscript.
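To make the dropout-like role of the sampling strategy concrete, here is a minimal sketch (all names, the loss function, and the placeholder update rule are our own illustration, not the paper's code): each search iteration evaluates only a random subset of meta-paths and updates only their architecture parameters, so memory stays bounded and every path's parameter is still updated many times across iterations.

```python
import random

def sampled_search_step(all_paths, arch_params, forward_loss, k, lr=0.1):
    """One search iteration: sample k of the candidate meta-paths, run the
    super-net on that subset only, and update just the sampled paths'
    architecture parameters (a placeholder update standing in for a real
    gradient step). Over many iterations every path is sampled repeatedly,
    so relative importance can still be learned."""
    batch = random.sample(all_paths, k)
    loss = forward_loss(batch, [arch_params[p] for p in batch])
    for p in batch:
        arch_params[p] -= lr * loss
    return batch, loss
```

With hundreds of candidate paths under a large maximum hop, only `k` of them are materialized per iteration, which is what keeps the search stage from running out of memory.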
---
### **W2: Will the second observation, "certain meta-paths can have a negative impact for heterogeneous graphs", be valid for other heterogeneous graphs?**
**R2:** Thank you for the insightful question. Beyond ACM, in Lines 142-143 we have described an important fact: various recent HGNNs [5-7] have removed some edge types in the Freebase dataset to exclude the corresponding heterogeneous information during pre-processing, based on substantial domain expertise or empirical observations. The basic logic behind this behavior is that the meta-paths related to those edge types have a negative impact on the task. In addition, in Figure 5 of Appendix E.4, we can see that the performance of LMSPS does not always increase with the number of utilized meta-paths on DBLP, IMDB, and ACM, which also supports the second observation to some extent. Thanks to the question, we will clarify the corresponding part more clearly.
---
### **W3: What is the cost of the pre-processing stage? Can it be reduced by sampling to decrease the neighbor aggregation costs?**
**R3:** Thank you for the questions. Following SeHGNN [7], the pre-processing executes the simplified neighbor aggregation only once without any parameter updating. Specifically, we use the multiplication of adjacency matrices to calculate the final contribution weight of each node to targets without calculating nodes in the middle of paths, which is much more efficient than the pre-processing step in HAN [3]. On small datasets, the pre-processing stage takes 1~4 seconds. On OGBN-MAG, the pre-processing stage takes about 130 seconds, which is about 1.2% of the whole training stage. We believe the neighbor aggregation costs can be reduced by sampling. However, it is not very necessary because the current pre-processing cost is much smaller than the training cost. We will explore this in future work and clarify the corresponding part more clearly in the revised manuscript. Thank you!
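As an illustrative sketch of this kind of pre-processing (NumPy, with invented names and a toy two-movie graph; not the authors' code), the contribution weight of each endpoint node to each target can be obtained once, before training, by chaining normalized adjacency-matrix products along the meta-path, without materializing the intermediate nodes:

```python
import numpy as np

def metapath_features(adjs, feats):
    """Aggregate endpoint features along one meta-path in a single pass.
    adjs: adjacency matrices along the path, ordered from the target node
    type outward (adjs[0] maps targets to their 1-hop neighbors, etc.);
    feats: raw features of the endpoint node type."""
    out = feats
    for A in reversed(adjs):
        deg = A.sum(axis=1, keepdims=True)
        out = (A / np.maximum(deg, 1)) @ out  # mean aggregation per hop
    return out

# toy meta-path M<-D<-M (movie <- director <- movie)
A_md = np.array([[1, 0], [1, 1]])        # 2 movies x 2 directors
A_dm = A_md.T                            # directors back to movies
X_m = np.eye(2)                          # raw movie features
H = metapath_features([A_md, A_dm], X_m)
```

Because no learnable parameters are involved, this runs once per meta-path; training then consumes the pre-computed `H` matrices directly.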
---
### **W4: Line 172 clarifies that MLP requires less human intervention compared to Transformer. Are there specific experiments demonstrating the superiority of MLP to Transformer?**
**R4:** Thank you for the question. Although the Transformer is a widely used and powerful model, in line 172 we highlight that the Transformer involves more inductive bias, i.e., human intervention, than MLPs, which has been supported by many works [8-10].
In LMSPS, the difference of importance between the searched effective meta-paths is much smaller than that between the full meta-path set, making the attention mechanism seem unnecessary. For higher efficiency and generalization, we use pure MLPs instead of Transformer. In Table 6 of the ablation study, we employ the Transformer for semantic attention on all meta paths. The Transformer version performs slightly worse than LMSPS even if it uses many more meta-paths and is out-of-memory on Freebase and OGBN-MAG. We will clarify the corresponding part more clearly in the revised manuscript. Thank you very much!
---
**References:**
[1] Graph Transformer Networks. NeurIPS, 2019.
[2] Heterogeneous Graph Transformer. WWW, 2020.
[3] Heterogeneous Graph Attention Network. WWW, 2019.
[4] Single Path One-Shot Neural Architecture Search with Uniform Sampling. ECCV, 2020.
[5] DiffMG: Differentiable Meta Graph Search for Heterogeneous Graph Neural Networks. KDD, 2021.
[6] Differentiable Meta Multigraph Search with Partial Message Propagation on Heterogeneous Information Networks. AAAI, 2023.
[7] Simple and Efficient Heterogeneous Graph Neural Network. AAAI, 2023.
[8] MLP-Mixer: An all-MLP Architecture for Vision. NeurIPS, 2021.
[9] A Generalization of ViT/MLP-Mixer to Graphs. ICML, 2023.
[10] Scaling MLPs: A Tale of Inductive Bias. NeurIPS, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. These have addressed the majority of my questions. I appreciate the effort and insights the authors put into the paper. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for appreciating the effort and insights! We're glad to hear that you're satisfied. | Summary: This paper presents an empirical study demonstrating that not all meta-paths are useful; some even negatively impact performance. Selecting the most meaningful meta-paths is crucial. The authors propose LMSPS, a super-net-based method to select beneficial meta-paths effectively.
Strengths: S1. The presentation is excellent and easy to follow.
S2. This paper is the first attempt to combine super-net and heterogeneous graph learning.
S3. Experimental results show that their model achieves state-of-the-art (SOTA) performance.
Weaknesses: W1. The paper's title is misleading. According to the title, the work seems to search for meaningful long-range meta-paths only to improve the performance of HIN representation learning. However, I think the major idea is to select effective meta-paths efficiently to overcome the issue of the exponential increase in the number of meta-paths. As shown in Table 9, some short meta-paths are still important. This work is actually a meta-path selection task in my point of view.
W2. The motivation of SeHGNN is that "models with a single-layer structure and long meta-paths outperform those with multi-layers and short meta-paths". I think it does not mean that long-range meta-paths are more important than short paths. I agree that different meta-paths have different importance. But is the length of the paths the main reason for this? As analyzed in the Limitation section, although the maximum hop is set to 12, the best performance is achieved at 6. Some early studies claimed that long paths can introduce noise and less relevant connections between nodes, leading to less accurate or meaningful representations.
W3. For most datasets, the improvement is marginal compared to the second-best performance. The enhancement brought by the selected meta-paths is not significant enough according to the experimental results.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Please refer to the aforementioned weaknesses.
2. In OGB, I can see that LMSPS achieved the 3rd place. Have you tried to compare with the first two methods?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations are sufficiently stated in the appendix. To my knowledge, there is no negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. In the following, we respond to your concerns point by point.
---
### **W1: The paper's title is misleading. I think the major idea is to select effective meta-paths efficiently to overcome the issue of the exponential increase in meta-paths.**
**R1:** Thank you for the careful reading and insightful comments. In the homogeneous graph field, there have been many outstanding works [1-4] that highlight long-range dependencies in their titles but still employ short-range dependencies. We think the logic behind this is that **the importance of short-range dependencies is widely accepted and utilizing them is not challenging**, so these works highlight long-range dependencies in their titles. Similarly, although our work can search meaningful short meta-paths, the most significant contribution differentiating this work from others is the ability to search effective long-range meta-paths in heterogeneous graphs.
Moreover, though the major idea is exactly as you described, the issue of the exponential increase in the number of meta-paths comes from our attempt to search long-range meta-paths. So, utilizing long-range dependency in heterogeneous graphs is the purpose, and overcoming the exponential issue is the specific process.
If the reviewer could kindly provide a more suitable title, we would love to use it. Thank you very much!
---
### **W2: I think SeHGNN does not mean that long-range meta-paths are more important than short paths. Long paths can introduce noise and less relevant connections between nodes, leading to less accuracy.**
**R2:** Thank you. This is a very insightful comment. Although we also do not think long-range meta-paths are more important than short meta-paths, we appreciate that the reviewer gave us the opportunity to explain our key idea of utilizing long meta-paths. Compared to existing work utilizing only short meta-paths, the key advantage of LMSPS is freely combining the effective information from long and short meta-paths.
**Although most long meta-paths can introduce noise or redundant information, some effective long meta-paths can bring extra valuable information beyond short meta-paths**. An exact example can be found in the global response. Searching for effective long-range meta-paths is exactly one of our key contributions. As shown in Table 2 of Section 6.3, the performance of SeHGNN [5] and LMSPS keeps increasing as longer meta-paths are gradually introduced. However, SeHGNN cannot utilize meta-paths longer than three hops on OGBN-MAG, where its best performance is 52.44%, while the performance of LMSPS increases from 52.72% to 54.83% as the maximum hop grows from 3 to 6. So, utilizing effective long meta-paths can improve performance instead of reducing accuracy. We will clarify it more clearly in the revised manuscript. Thank you very much!
---
### **W3: The enhancement is not significant enough according to the experimental results.**
**R3:** Although the other three reviewers highlighted the enhancement of experimental results in Strengths, we appreciate that the reviewer allowed us to explain our results more clearly. Based on Table 1, most of the second-best results come from SlotGAT [6]. LMSPS achieves an average of 1.00% absolute improvement over SlotGAT on small and medium datasets. Considering that these datasets are widely used and the existing scores are already high, a 1.00% average improvement is respectable. Moreover, SlotGAT cannot run on OGBN-MAG due to the out-of-memory issue, highlighting the advantage of LMSPS.
The second-best result on OGBN-MAG is 51.45%, which is outperformed by LMSPS by a large margin of 3.38%. Considering that OGBN-MAG is a large-scale dataset that is much more challenging than the other datasets and LMSPS is designed for large-scale heterogeneous graphs, the improvements can validate its effectiveness. In addition, based on Table 5, LMSPS outperforms the second-best method by a large margin of 4.78% on the sparser large-scale dataset, which is also significant. We will explain our results more clearly in the revised manuscript. Thank you very much!
---
### **W4: LMSPS achieves the 3rd place in ogbn-mag leaderboard. Have you tried to compare with the first two methods?**
**R4:** Thank you for the question. We have tried to compare LMSPS with the first two methods, which use curriculum learning to change the input sequence of the data. However, we notice their results have been widely questioned due to a test label leakage problem (the code for the first place is completely based on that for the second place). The OGB team also noticed this problem and asked the authors to investigate the results within two weeks. However, the authors have not finished the investigation and have instead asked the OGB team to remove the leaderboard submission. Although we cannot provide the external link due to the rebuttal policy, the discussion can be easily found on GitHub. In addition, neither of the above works has a complete paper, making their methods less trustworthy.
In summary, **LMSPS still ranks 1st on the ogbn-mag with trustworthy results**.
Thank you once again for your insightful comments.
---
**References:**
[1] Representing Long-Range Context for Graph Neural Networks with Global Attention. NeurIPS, 2021.
[2] Graph-based high-order relation modeling for long-term action recognition. CVPR, 2021.
[3] Hope: High-order graph ode for modeling interacting dynamics. ICML, 2023.
[4] High-order pooling for graph neural networks with tensor decomposition. NeurIPS, 2022.
[5] Simple and Efficient Heterogeneous Graph Neural Network. AAAI, 2023.
[6] SlotGAT: Slot-based Message Passing for Heterogeneous Graphs. ICML, 2023.
---
Rebuttal 2:
Title: Looking forward to your response
Comment: Dear Reviewer #dJZo,
We sincerely appreciate your thorough review and insightful comments on our manuscript. We have taken the time to carefully address all the points you raised, including
- explaining the misleading title
- clarifying the importance of long-range meta-paths and their role
- defending the improvement, especially in the ogbn-mag leaderboard
- explaining the leaderboard issues
Please refer to the **rebuttal content** for details.
As the discussion period is limited to 7 days and only 1 day remains, we kindly request your prompt feedback on our responses. Your expertise is crucial to us, and we welcome any additional thoughts you may have. Thank you once again for your time and attention.
Best regards,
Authors of #4444
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response. I am sorry for providing my feedback late.
W3 and W4 have been addressed satisfactorily. Below are more comments regarding W1 and W2.
W1: My further concern is that there is no analysis of how the discovered long-range meta-paths benefit performance. It would be interesting to see some discussions about the impact of the discovered long-range meta-paths on performance improvement. Additionally, the methodology seems to lack specific techniques for searching meaningful "long-range" meta-paths. It mainly focuses on utilizing a super-net to discover "useful" meta-paths. The novelty of this part seems marginal.
W2: I think SeHGNN is primarily designed for efficient heterogeneous graph learning rather than discovering useful meta-paths. It cannot handle long-hop meta-paths due to the setting of enumerating all possible meta-paths. If this setting is changed to a limited number of meta-paths, SeHGNN could be efficient due to its simplified attention mechanism. Again, since the motivation of this work is to efficiently select the most effective meta-paths, the experiments should give more discussions about the quality of the discovered meta-paths.
---
Rebuttal 3:
Title: Thank you for the detailed response!
Comment: Thank you very much for the detailed response. In the following, we respond to your concerns point by point.
### W1: How do the discovered long-range meta-paths benefit performance? The methodology seems to lack specific techniques for searching meaningful "long-range" meta-paths.
**R1:** Thank you for the insightful question. In the global rebuttal "For the importance of long-range meta-paths," we have provided a detailed example to show the ability of long-range dependencies to complete the missing information that cannot be obtained from close nodes. Also, as shown in Table 2 in Section 6.3 (also shown below for a quick check), the performance of LMSPS keeps increasing as the maximum hop grows, i.e., as longer meta-paths are gradually added. This indicates that LMSPS can overcome the issues caused by utilizing long-range dependency, e.g., over-smoothing and noise. Moreover, as shown in Table 4 of Section 6.4, the discovered meta-paths can also benefit SeHGNN.
Searching for long-range meta-paths has two main challenges: the exponentially increasing issue and the noise issue. To overcome both issues, as described in Lines 157-169, we propose a progressive sampling algorithm and a sampling evaluation strategy to overcome the two challenges, respectively. Specifically, the high-efficiency progressive sampling algorithm ensures LMSPS can search effective short and long-range meta-paths under a large maximum hop. As different meta-paths could be noisy or redundant to each other, top-M meta-paths are not necessarily the optimal solution when their importance is calculated independently. The sampling evaluation strategy evaluates the overall performance of each meta-path set. So, it can overcome the noise issue.
| Max hop | Num path | SeHGNN (Time / Test Acc (%)) | LMSPS (Time / Test Acc(%)) |
| :-----: | :------: | :------------------------: | :----------------------------: |
| 1 | 4 | 4.35 / 47.18 | 3.98 / 46.88 |
| 2 | 10 | 6.44 / 51.79 | 5.63 / 51.91 |
| 3 | 23 | 11.28 / 52.44 | 10.02 / 52.72 |
| 4 | 50 | OOM | 14.34 / 53.43 |
| 5 | 107 | OOM | 14.77 / 53.90 |
| 6 | 226 | OOM | 14.71 / **54.83** |
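The two components described above can be sketched as follows (a toy, hedged implementation with invented names and scoring, not the authors' code): progressive sampling repeatedly shrinks the candidate set so that a large maximum hop stays tractable, and the sampling evaluation then scores whole size-M subsets, since top-M paths chosen independently may be mutually redundant or noisy.

```python
import random

def progressive_search(paths, importance, eval_fn, M, pool=None,
                       shrink=0.9, trials=100):
    """Stage 1: progressively drop the least important candidates until a
    small pool remains. Stage 2: choose the best size-M subset from the
    pool by evaluating sampled subsets as a whole."""
    pool = pool or 2 * M
    candidates = list(paths)
    while len(candidates) > pool:
        target = max(pool, int(len(candidates) * shrink))
        candidates.sort(key=lambda p: importance[p], reverse=True)
        candidates = candidates[:target]
    best, best_score = None, float("-inf")
    for _ in range(trials):
        subset = tuple(sorted(random.sample(candidates, M)))
        score = eval_fn(subset)  # overall quality of the whole subset
        if score > best_score:
            best, best_score = subset, score
    return best
```

In this sketch `importance` stands in for the learned architecture parameters and `eval_fn` for the validation score of a candidate meta-path set.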
---
### **W2.1: If this setting is changed to a limited number of meta-paths, SeHGNN could be efficient due to its simplified attention mechanism.**
**R2.1:** Thank you for the insightful comment. Based on our second observation, i.e., that certain meta-paths can have a negative impact on heterogeneous graphs, the attention mechanism has limitations in dealing with negative meta-paths. As described in Lines 142-144, the second observation is supported by the fact that various recent HGNNs have removed some edge types to exclude the corresponding heterogeneous information during the pre-processing stage. For example, SeHGNN removes all edge types related to node type F (Field) in the ACM dataset. If simplified attention could handle negative meta-paths, this step would be unnecessary. By contrast, through meta-path search, LMSPS can easily drop negative meta-paths. Table 2 supports this conclusion: when the maximum hop is 3, LMSPS outperforms SeHGNN by 0.28%, even though the latter uses more meta-paths.
---
### **W2.2: The experiments should give more discussions about the quality of the discovered meta-paths.**
**R2.2:** As described in Lines 310-319, to demonstrate the high quality of the searched meta-paths, on the one hand, the meta-paths should be effective in the proposed model; on the other hand, since effective meta-paths mainly depend on the dataset rather than the architecture, the meta-paths should remain effective after being transferred to other HGNNs. Based on the results in Tables 1, 2, 3, and 5, using the searched meta-paths, LMSPS outperforms the other baselines under almost all conditions, sometimes significantly, which validates the high quality of the discovered meta-paths in the proposed model.
Because finding meta-paths that work effectively across various HGNNs is a tough task, it has not been achieved by previous works. However, based on Table 4 (also shown below for a quick check), after simply replacing the original meta-path set with our searched meta-paths and keeping other settings unchanged, the performance of both HAN and SeHGNN improves, demonstrating the effectiveness of our searched meta-paths. Thank you very much!
| Method | DBLP | IMDB | ACM | Freebase |
| ------------ | -------------- | -------------- | -------------- | -------------- |
| HAN | 92.05 | 64.63 | 90.79 | 54.77 |
| HAN-LMSPS | 93.54 | 65.89 | 92.28 | 57.13 |
| SeHGNN | 95.24 | 68.21 | 93.87 | 63.41 |
| SeHGNN-LMSPS | 95.57 | 68.59 | 94.46 | 65.37 |
---
Thank you once more for your efforts. Please kindly let us know if our response has addressed your concerns. | Rebuttal 1:
Rebuttal: We are very grateful to the reviewers for carefully reviewing our paper and providing constructive comments and suggestions that have helped improve our submission. We especially thank the reviews for recognizing that our paper has:
1. **good originality** in method (Reviewers dJZo and 71Wo) and experiments (Reviewer maas),
2. **outstanding experiment results** (All Reviewers),
3. **nice presentation** (All Reviewers).
The main concerns include the limited novelty (r1gQ), the third-place ranking in OGB (dJZo), the importance of long-range meta-paths (dJZo, 71Wo), and the insights from searched meta-paths (r1gQ). We briefly introduce the responses to these concerns in this general response and provide concrete details in the response to each reviewer.
**For the limited novelty**, since novelty is closely tied to the contributions, we summarize our contributions as follows.
- Large-scale dataset. LMSPS is the first HGNN that makes it possible to achieve automated meta-path selection for large-scale heterogeneous graph node property prediction.
- Long-range dependency. LMSPS is the first HGNN to utilize long-range dependency in large-scale heterogeneous graphs. To achieve the above two goals, LMSPS has addressed two key challenges: (1) Alleviating costs while striving to effectively utilize information in exponentially increased receptive fields and (2) overcoming the well-known over-smoothing issue.
- High generalization. As shown in Table 6, the searched meta-paths of LMSPS can be generalized to other HGNNs to boost their performance, which has not been achieved by existing works. To accomplish this objective, LMSPS uses an MLP-based architecture instead of a transformer-based architecture for meta-path search because the former involves fewer inductive biases, i.e., human interventions.
**For the third place ranking in OGB**, we have provided the details that the first two methods were questioned by the OGB team due to the test label leakage problem. The authors have asked the OGB team to remove the leaderboard submission.
**For the importance of long-range meta-paths**, we have provided a detailed example to show the ability of long-range dependencies to complete the missing information that cannot be obtained from close nodes. Take the meta-path MDMDMK (M←D←M←D←M←K) from IMDB as an example. IMDB includes four different entity types: Movies (M), Directors (D), Keywords (K), and Actors (A). The task is to predict the category of the target movies. MDMDMK is a 5-hop meta-path that is hard for experts to understand and then apply. However, for many movies without keywords, the meta-path M←D←M←D←M←K is important because the target movies can aggregate the keyword information from the movies of co-directors.
**For the insights from searched meta-paths**, we have added the missing insights from the searched meta-paths of DBLP, IMDB, and ACM. In DBLP, with target node type Author, the information from P (Paper) and A (Author) is slightly more important than that from T (Term) and V (Venue). In IMDB, with target node type Movie, the importance of information from K (Keyword), M (Movie), A (Actor), and D (Director) gradually decreases. In ACM, with target node type Paper, the importance of information from P (Paper), A (Author), and C (Conference) gradually decreases. In addition, the importance of a node type is highly related to the target node type.
**We hope that our response has addressed your concerns. In case you still have some concerns or we missed anything, please let us know.**
**Best regards** | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Piecewise-Stationary Bandits with Knapsacks | Accept (poster) | Summary: The paper studies Bandits with Knapsacks (BwK) under a piecewise-stationary environment. For the online matching problem, where the true reward is fully known in each time period, the paper obtains an $\Omega(1/\ln(\eta_{\max}/\eta_{\min}))$ competitive ratio, where $\eta_{\min}$ and $\eta_{\max}$ are such that all rewards and resource consumptions lie in $[\eta_{\min}, \eta_{\max}]$. The paper also studies the online learning problem and gives theoretical results.
Strengths: 1. The writing is good, and mostly easy-to-follow.
2. The work gives theoretical guarantees to both the online matching and the online learning problem.
Weaknesses: I appreciate all the technical proofs in the paper. However, it seems the paper lacks enough technical novelty as well as strong theoretical results. For details, please see Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The ratio of $\ln(\eta_{\max}/\eta_{\min})$ seems to be weak. Imagine if $r_t=r$ and $c_t=c$ for all $t$, but $r$ and $c$ are dramatically different. It is reasonable to claim that the ratio approaches $1$ when the budget is not too low. Is it possible to refine the definition of $\eta$ so that it only concerns $r/c$ (similar to that in Zhou et al. (2008))?
2. In your contribution, you claim your performance guarantee is w.r.t. a dynamic benchmark, while existing adversarial BwK literature focus on the stationary benchmark. I think in many online matching papers the benchmark is also a dynamic one (see. e.g., Zhou et al. (2008)). Can you elaborate on this?
3. In the online matching part, the algorithm design is not entirely novel. From my perspective, the inventory reserving idea is a deterministic version of the initial randomized policy in Immorlica et al. (2019). It is also similar to the booking limit or nested booking limit in the online matching/booking problem (see, e.g, [1]). Another issue with regard to the inventory reserving design is that it is not adaptive enough and may waste some resources because the reservation policy is fixed at the beginning. Can you elaborate on the technical novelty in your policy design?
4. In the online learning problem, it seems you are essentially decreasing the competitive ratio in exchange for $\sqrt{T}$ regret. Is this necessary? And also how do you set $\alpha$?
5. There are very limited numerical experiments for the planning problem and there is no experiment for the learning problem. I think only comparing to Immorlica et al. (2019) is not convincing. The authors may also consider comparing their algorithms with that in Zhou et al. (2008),
[1] Ball, Michael O., and Maurice Queyranne. "Toward robust revenue management: Competitive analysis of online booking." Operations Research 57.4 (2009): 950-963.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's time and effort in evaluating our manuscript. We highly value your feedback and would like to address your concerns and questions point by point.
1. “The ratio of $\ln (\eta_{\max}/\eta_{\min})$ seems to be weak. Imagine if $r_t=r$ and $c_t=c$ for all $t$, but $r$ and $c$ are dramatically different. It is reasonable to claim that the ratio approaches $1$ when the budget is not too low. Is it possible to refine the definition of $\eta$ so that it only concerns $r/c$ (similar to that in Zhou et al. (2008))?”
Response: We are grateful for your keen observation. Indeed, we can refine the definition of $\eta$ so that it only concerns $r/c$. We will explain the change based on IRES in Section 3, and the same follows for IRES-CM in Section 4.
By our original definition, in line 5 of Algorithm 1, we solve $\text{LP}(r^{(l)}, c^{(l)},\eta_{\min} \cdot \alpha^q)$ $\forall q \in \{0,1, \ldots, M\}$. In this step, each $\eta_{\min} \cdot \alpha^q$ is a guess of $B_l^{\star}/(t_l - t_{l-1})$. This is the only step where we require $c^{(l)} \in [\eta_{\min}, \eta_{\max}]$, in order to show that a “correct” guess of $B_l^{\star}/(t_l - t_{l-1})$ leads to a “correct” guess of $m^{\star}_l$ where $\text{Ratio}^{(l)\star} \in (\alpha^{m_l^{\star}}, \alpha^{m_l^{\star}+1}]$. All other steps depend only on the reward-consumption ratio. However, line 5 essentially only needs to output a guess of an arbitrary decision $x_l$ such that
\begin{align}
\frac{\sum_{a \in \mathcal{K}} r^{(l)}(a) x_l(a)}{\sum_{a \in \mathcal{K}} c^{(l)}(a) x_l(a)} \in (\alpha^{m^{\star}_l}, \alpha^{m^{\star}_l+1}]
\end{align}
and $\sum_{a \in \mathcal{K}} c^{(l)}(a) x_l(a) \geq B_l^{\star}/(t_l - t_{l-1})$.
In our revision, we re-define $\eta_{\min} = \min_{t,a} \frac{r_t(a)}{c_t(a)}, \eta_{\max} = \max_{t,a} \frac{r_t(a)}{c_t(a)}$, similar to $L$ and $U$ in Zhou et al. (2008) but considering multiple arms. Then $\text{Ratio}^{(l)\star} \in [\eta_{\min},\eta_{\max}]$. We define $M=\lceil\log_{\alpha}(\eta_{\max}/\eta_{\min})\rceil$ and partition $[\eta_{\min}, \eta_{\max}]$ into $M$ intervals $(\eta_{\min}\cdot \alpha^{m-1}, \eta_{\min} \cdot \alpha^m]$ where $m=1,\ldots,M$. For each stationary piece $l \in \mathcal{L}$, we denote $m_l^{\star} \in \{0,\ldots, M-1\}$ as the interval such that $\text{Ratio}^{(l)\star} \in (\eta_{\min} \cdot \alpha^{m_l^{\star}}, \eta_{\min} \cdot \alpha^{m_l^{\star}+1}]$. Most of our paper remains the same, with only the $M$ intervals and $m_l^{\star}$ replaced by their new definitions.
The only major change happens in line 5 of Algorithm 1. Specifically, we change it into solving the following LPs for all $q \in \{0, \ldots, M-1\}$ for the optimal solution $x_l^{(q)\star}$:
\begin{align}
\max \sum_{a \in \mathcal{K}} c^{(l)}(a) x_l^{(q)}(a)
\end{align}
\begin{align}
\text{s.t.} \sum_{a \in \mathcal{K}} r^{(l)}(a) x_l^{(q)}(a) \geq \alpha^{q} \cdot \sum_{a \in \mathcal{K}} c^{(l)}(a) x_l^{(q)}(a)
\end{align}
\begin{align}
\sum_{a \in \mathcal{K}} r^{(l)}(a) x_l^{(q)}(a) \leq \alpha^{q+1} \cdot \sum_{a \in \mathcal{K}} c^{(l)}(a) x_l^{(q)}(a)
\end{align}
\begin{align}
\sum_{a \in \mathcal{K}} x_l^{(q)}(a)\leq 1
\end{align}
\begin{align}
x_l^{(q)}(a) \geq 0 \qquad \forall a \in \mathcal{K}.
\end{align}
It can be seen that the modified line 5 of Algorithm 1 realizes its essential function. The rest of the proofs remain valid (with the updated definitions). We maintain a CR of $O(\log(\eta_{\max}/\eta_{\min}))$, with $\eta$ depending only on $r/c$. We have modified the manuscript accordingly.
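For concreteness, the modified line-5 LP above can be sketched as follows. This is an illustrative SciPy implementation under the notation of this rebuttal, not the code used in the paper; `solve_ratio_lp` and the example inputs are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def solve_ratio_lp(r, c, ratio_lo, ratio_hi):
    """Maximize sum_a c(a) x(a) subject to
       ratio_lo * sum_a c(a) x(a) <= sum_a r(a) x(a) <= ratio_hi * sum_a c(a) x(a),
       sum_a x(a) <= 1, and x >= 0.  Returns x, or None if infeasible."""
    r, c = np.asarray(r, float), np.asarray(c, float)
    K = len(r)
    # linprog minimizes, so negate the objective to maximize consumption.
    A_ub = np.vstack([ratio_lo * c - r,   # sum r x >= ratio_lo * sum c x
                      r - ratio_hi * c,   # sum r x <= ratio_hi * sum c x
                      np.ones(K)])        # sum x <= 1
    b_ub = np.array([0.0, 0.0, 1.0])
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * K)
    return res.x if res.status == 0 else None

# Two-arm example: per-arm ratios r/c are 3 and 1/3; the mixture is forced
# to have a reward-consumption ratio inside [0.5, 2.0].
x = solve_ratio_lp(r=[0.9, 0.3], c=[0.3, 0.9], ratio_lo=0.5, ratio_hi=2.0)
```

The two ratio constraints are rewritten as homogeneous linear inequalities in $x$, which is what makes the interval restriction expressible in a single LP.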
---
Rebuttal 2:
Comment: 2. “In your contribution, you claim your performance guarantee is w.r.t. a dynamic benchmark, while existing adversarial BwK literature focus on the stationary benchmark. I think in many online matching papers the benchmark is also a dynamic one (see. e.g., Zhou et al. (2008)). Can you elaborate on this?”
Response: When we mention “while existing adversarial BwK literature focus on the stationary benchmark,” we are focusing solely on research works that consider the **bandit feedback** setting. That is, the DM only observes the outcomes $(R_t(A_t), C_t(A_t))$ **after** pulling arm $A_t$, which is studied in our Section 4. The bandit feedback setting is more stringent than the **full feedback setting** in online matching, which requires observing the actual values of $(R_t(a), C_t(a))$ for all $a \in \mathcal{K}$ **before** choosing $A_t$.
Our intuitive discussion in Section 3 uses the same full feedback setting as online matching papers (which certainly resonates with your comment), for example [Karp et al. (1990), Mehta et al. (2007), Zhou et al. (2008)], which crucially require knowing $(R_t(a), C_t(a))$ when crafting their potential functions [Mehta et al. (2007), Zhou et al. (2008)] or algorithmic gadget [Karp et al. (1990)], and these cannot be readily generalized to the bandit setting. By contrast, we take a different approach to algorithm design in Section 3, which circumvents the difficulty in transitioning from the full feedback to the bandit feedback setting. The natural generalization from full to bandit feedback shown in Section 4 is one of our core contributions. To this end, we emphasize that bandit feedback does hinder optimization, as illustrated by how the regret term in the bandit setting (Theorem 4.2) degrades with an increasing $L$, the number of change points.
Regarding the benchmark considered by Zhou et al. (2008), it is indeed dynamic. However, it is a **best single arm** benchmark allowing pulling a single arm in each round, while our opt(FA) is a **best distribution over arms** benchmark. In fact, in the bandit-feedback stationary outcome setting, Badanidiyuru et al. (2018) consider a best-distribution-over-arms benchmark, which is similar to our opt(FA). They show, in their Appendix A, that optimizing over the set of probability distributions on the $K$ arms is required to get the optimal regret, while pulling a best single arm yields significantly worse regret (which could noticeably affect the achievable CR). It is not hard to show that this is also the case in the piecewise-stationary setting. Thus, our benchmark is **strictly stronger** than that of Zhou et al. (2008).
To conclude, our benchmark is strictly stronger than Zhou et al. (2008) and we consider a more difficult setting than online matching in Section 4. We appreciate the reviewer’s comment and will cite and compare with the online matching literature in our revised manuscript.
Reference
[1] Karp, Vazirani, Vazirani, An Optimal Algorithm for On-line Bipartite Matching, STOC 1990
[2] Mehta, Saberi, Vazirani, Vazirani, AdWords and Generalized On-line Matching, JACM 2007
[3] Badanidiyuru, A., Kleinberg, R., & Slivkins, A. (2018). Bandits with knapsacks. Journal of the ACM (JACM), 65(3), 1-55.
---
Rebuttal 3:
Comment: 3. “In the online matching part, the algorithm design is not entirely novel.... Another issue with regard to the inventory reserving design is that it is not adaptive enough and may waste some resources because the reservation policy is fixed at the beginning. Can you elaborate on the technical novelty in your policy design?”
Response: We appreciate the reviewer’s comment, but we argue that it is not accurate to say our inventory reserving idea is a deterministic version of other works such as Immorlica et al. (2019). While some papers consider different inventory reservation strategies, ours offers a novel perspective on the problem, and its simplicity and intuitiveness enhance our contribution. Our approach splits the reward into two terms: the reward-consumption ratio multiplied by the resource consumption on each stationary piece $l$ (see Section 2.3). By reserving a certain amount of inventory for each ratio interval, our task simplifies to guessing the optimal reward-consumption ratio interval in each round. Our idea and corresponding strategy are crucially different from the existing literature. Moreover, the technical details of our proofs are not at all similar to those of any existing paper.
Immorlica et al. (2019)’s inventory reservation strategy depends solely on the cumulative reward (see the definition of their $\hat{g}$ in Algorithm 3), without considering the reward-consumption ratio, and uses traditional gradient-descent-based algorithms. Our experiments demonstrate that their inventory reservation strategy is significantly more conservative than ours in the piecewise-stationary setting. Ball and Queyranne (2009) study a rather restrictive setting with two customer classes, prior knowledge of each class's fare, and fixed resource consumption of $1$. This is a very simple setting, which allows them to consider a strategy depending only on the ratio between the fare classes. If our strategy is perceived as a similar version of existing methods, then by that logic, all inventory reservation strategies could be viewed as fundamentally similar.
We do agree that our algorithm is not as adaptive as some best-of-both-worlds algorithms, but it does provide a near-optimal competitive ratio in our non-stationary setting. Although the process of guessing ratio intervals via LPs in a round-robin manner may seem primitive, it leaves room for further refinement, which may lead to less resource waste. Our experiments show that although our algorithm wastes some resources, it is less conservative in inventory reservation than Immorlica et al. (2019)’s adversarial algorithm, leading to better performance in our piecewise-stationary setting. We also believe that our piecewise-stationary setting is more realistic in practice than the stationary, adversarial, and bounded-global-variation non-stationary settings, and is thus worth studying.
To sum up, given that we introduce a novel and intuitive perspective on piecewise-stationary BwK, develop new inventory reservation algorithms and proof techniques, and achieve a near-optimal performance guarantee (compared to the best-distribution-over-arms dynamic benchmark), we believe our contribution is solid.
---
Rebuttal 4:
Comment: 4. "In the online learning problem, it seems you are essentially decreasing the competitive ratio in exchange for $\sqrt{T}$ regret. Is this necessary? And also how do you set $\alpha$?"
Response: Thank you for the insightful concern. We do not yet know what is the minimal decrease in competitive ratio in exchange for a sublinear-in-$T$ regret. We choose a simpler way to explore so that we do not need to detect the exact change points. There could potentially be more refined exploration approaches that lead to better coefficients in the competitive ratio, while maintaining a worse regret. We will keep exploring in our future research. Regarding $\alpha$, it can be set as any constant $> 1$, such as $2$ or $e$.
5. "There are very limited numerical experiments for the planning problem and there is no experiment for the learning problem. I think only comparing to Immorlica et al. (2019) is not convincing. The authors may also consider comparing their algorithms with that in Zhou et al. (2008)."
Response: Thanks for the reviewer’s comment! Given that our paper is primarily theoretical, we conduct only demonstrative experiments on smaller datasets. We clarify that our experiments focus on a bandit feedback setting, which indeed involves a learning problem. The underlying rewards and resource consumption are set to be deterministic for convenience, but the algorithm does not know they are deterministic. We apologize for the misleading typo in line 632, where "known" should have been "unknown"; we have corrected it. Hence we believe comparing our proposed IRES-CM with the existing benchmark of Immorlica et al. (2019) is appropriate. We will provide a study on the full feedback setting and compare with Zhou et al. (2008), which only works with full feedback. This supplements our main study with bandit feedback.
Finally, we sincerely thank the reviewer for the careful inspection and insightful concerns! Your suggestions have significantly helped us improve the quality of our paper. We genuinely hope you could re-evaluate our contributions given our clarifications. We are happy and open to any further discussion.
---
Rebuttal Comment 4.1:
Comment: I would like to thank the authors for the very detailed response. My concerns have been moderately addressed and thus I have raised my score. While I very much appreciate authors' efforts and clarifications, I think the paper may require a significant re-structure by adding more results (theoretical & numerical) and discussions in the next iteration.
---
Reply to Comment 4.1.1:
Comment: Thank you for the feedback! We will surely add more theoretical results to support our revision, and we will also add more discussions distinguishing our work from existing literature that you kindly mentioned. If granted an additional page in the main text, we plan to incorporate the numerical results currently in the appendix, and add further numerical experiments and discussions in the next iteration. At present, we have maximized our page limit to effectively present our results and novel ideas, so we put the numerical results in the appendix.
---
Rebuttal 5:
Title: Numerical experiments regarding Zhou et al.'s algorithm and the bandit learning setting
Comment: Dear reviewer, pardon our late supplementation. We would like to provide some numerical results regarding Zhou et al.'s algorithm and the bandit setting that you mentioned.
We present a two-piece stationary illustrative case here. We let the rewards and resource consumption in all rounds be uniformly distributed within a $[-0.2,+0.2]$ range around their mean values. We let $B=9360$ and $T=20000$ as in our original manuscript. We remark that Zhou et al. require observing $(R_t, C_t)$ in each round before making decisions, which is too strong a requirement. Hence, we assume that Zhou et al. observe $(r_t, c_t)$ instead of $(R_t, C_t)$ before making decisions in each round, which is still a stronger assumption than the bandit setting considered by us and Immorlica et al.
In the following tables, we take the average rewards over 10 experiments for each algorithm and present the cumulative rewards at multiples of 1000 rounds.
We first consider the $(r_t, c_t)$ values in our original manuscript, where we let $r^{(1)}(1)=r^{(1)}(2)=0.5, c^{(1)}(1)=c^{(1)}(2)=1$, $r^{(2)}(1)=1, r^{(2)}(2)=0.5, c^{(2)}(1)=0.5, c^{(2)}(2)=1$. In this case, the optimal solution of the benchmark FA chooses a single arm on each stationary piece, and Zhou et al. outperform both Immorlica et al. and our IRES-CM.
| t | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 | 7000 | 8000 | 9000 | 10000 | 11000 | 12000 | 13000 | 14000 | 15000 | 16000 | 17000 | 18000 | 19000 | 20000 |
|------------------|--------|--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Immorlica et al. | 409.42 | 668.89 | 868.46 | 959.76 | 1004.28 | 1047.26 | 1081.50 | 1338.21 | 1379.11 | 1415.21 | 1486.26 | 1556.22 | 1611.23 | 1680.19 | 1743.14 | 1810.03 | 1873.71 | 1938.27 | 2001.66 | 2066.87 |
| Zhou et al. | 498.39 | 998.42 | 1499.79 | 1998.63 | 2100.68 | 2100.68 | 2100.68 | 2100.68 | 2100.68 | 2100.68 | 3100.24 | 4099.27 | 5098.96 | 6096.99 | 7099.31 | 8099.78 | 9025.09 | 9025.09 | 9025.09 | 9025.09 |
| IRES-CM | 468.43 | 866.61 | 1294.79 | 1680.58 | 2060.31 | 2471.52 | 2863.14 | 3252.99 | 3544.64 | 3647.43 | 4580.54 | 5579.57 | 6433.62 | 7148.54 | 7148.54 | 7148.54 | 7148.54 | 7148.54 | 7148.54 | 7148.54 |
We next consider a case with a slight change to $(r_t, c_t)$ on the second stationary piece. Specifically, we let $r^{(1)}(1)=r^{(1)}(2)=0.5, c^{(1)}(1)=c^{(1)}(2)=1$, $r^{(2)}(1)=0.5, r^{(2)}(2)=1, c^{(2)}(1)=0.5, c^{(2)}(2)=1$. In this case, the optimal solution of the benchmark FA chooses a distribution over arms on the second stationary piece, where $x^*_{2}(1)=0.128, x^*_{2}(2)=0.872$. As shown, in this case our IRES-CM outperforms both Immorlica et al. and Zhou et al. This is consistent with the theoretical result that Zhou et al. achieve sub-optimal rewards compared with a **best distribution over arms** benchmark.
| t | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 | 7000 | 8000 | 9000 | 10000 | 11000 | 12000 | 13000 | 14000 | 15000 | 16000 | 17000 | 18000 | 19000 | 20000 |
|------------------|--------|--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Immorlica et al. | 434.69 | 656.97 | 909.24 | 943.53 | 988.01 | 1030.39 | 1064.42 | 1101.87 | 1144.65 | 1181.05 | 1248.60 | 1314.08 | 1372.59 | 1437.17 | 1508.36 | 1572.78 | 1631.64 | 1691.74 | 1751.38 | 1826.37 |
| Zhou et al. | 494.79 | 988.91 | 1488.38 | 1990.28 | 2444.41 | 2444.41 | 2444.41 | 2444.41 | 2444.41 | 2444.41 | 3442.66 | 4442.90 | 4465.25 | 4465.25 | 4465.25 | 4465.25 | 4465.25 | 4465.25 | 4465.25 | 4465.25 |
| IRES-CM | 442.14 | 850.82 | 1273.24 | 1721.91 | 2186.27 | 2636.42 | 3049.17 | 3467.15 | 3884.67 | 4339.88 | 4772.74 | 5021.60 | 5021.60 | 5021.60 | 5021.60 | 5021.60 | 5021.60 | 5021.60 | 5021.60 | 5021.60 |
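For reference, the second two-piece environment described above can be sketched as follows. This is an illustrative reconstruction, not the authors' experiment code; in particular, the change-point location $T/2$ is our assumption, since the rebuttal does not state where the two pieces split.

```python
import random

# Mean rewards/consumptions per stationary piece, from the description above.
R_MEAN = [[0.5, 0.5], [0.5, 1.0]]   # r^{(l)}(a) for pieces l = 1, 2
C_MEAN = [[1.0, 1.0], [0.5, 1.0]]   # c^{(l)}(a)

def draw_outcome(t, a, T=20000):
    """Sample (R_t(a), C_t(a)): uniform +/-0.2 noise around the piece means.
    The change point at T // 2 is an assumption for this sketch."""
    l = 0 if t <= T // 2 else 1
    r = R_MEAN[l][a] + random.uniform(-0.2, 0.2)
    c = C_MEAN[l][a] + random.uniform(-0.2, 0.2)
    return r, c
```

Running any of the three compared algorithms against `draw_outcome` would reproduce the qualitative shape of the tables, though not the exact averages.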
We believe the above results further validate the strength of our algorithm. We will supplement figure-format results of the above case and more large-scale cases in our revision. We hope this supplementation further addresses your concern, and thank you for your valuable comments!
---
Summary: The paper studies the bandits with knapsacks problem in a piecewise-stationary environment. The paper proposes an algorithm guaranteeing a near-optimal competitive ratio for the problem. The guarantees hold w.r.t. a dynamic benchmark, which is stronger than the standard stationary benchmark employed in adversarial BwK settings.
Strengths: The setup is interesting and well motivated, and it fits nicely within the line of works trying to bridge between fully stochastic and fully adversarial environments. Moreover, the proposed algorithm provides some interesting insights on the problem and a new perspective with respect to the standard LagrangeBwK approach of Immorlica et al.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 4
Questions for Authors: Is the overhead of solving the LP at each iteration significant in your experiments? For example, how does it compare to "lighter" per-iteration updates, such as the primal-dual approach by Immorlica et al.?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We are genuinely grateful to the reviewer for dedicating time and effort to evaluating our paper. We are glad to address your question.
In fact, the computational load of our algorithm is noticeably lighter than that of Immorlica et al. (2019). We solve $\text{LP}(r^{(l)}, c^{(l)},\eta_{\min} \cdot \alpha^q)$ for each $q \in \{0,1, \ldots, M\}$ in each iteration, which leads to no more than $(M+1)T = (\lceil \log_{\alpha}(\eta_{\max}/\eta_{\min}) \rceil + 1)T$ LPs solved over the planning horizon. Immorlica et al. (2019) need to solve $\hat{g}(t) = \max_{\tau \in [t]} \tau \cdot \text{opt}(\text{LP}(\bar{M}^{\text{ips}}_{\tau}, B, \tau))$ in each iteration (see their Algorithm 3 on page 20), which involves solving $t$ LPs in each round $t$. Thus, their primal-dual algorithm requires solving $T(T+1)/2$ LPs over the planning horizon. Since our $M$ is a logarithmic factor of the value range of $r_t(a), c_t(a) \in [0,1]$ for all $t,a$, we typically have $M \ll T$.
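As a quick sanity check on these LP counts, the following sketch compares the two totals; the $\eta_{\min}, \eta_{\max}$ values here are assumed for illustration and are not taken from the paper.

```python
import math

# Illustrative LP-count comparison; eta values are assumed for the example.
T, alpha = 20000, 2
eta_min, eta_max = 0.05, 1.0

M = math.ceil(math.log(eta_max / eta_min, alpha))  # number of ratio intervals
ours = (M + 1) * T          # LPs solved by the proposed algorithm overall
theirs = T * (T + 1) // 2   # LPs inside Immorlica et al.'s hat{g} computation
```

With these assumed values, `ours` grows linearly in $T$ while `theirs` grows quadratically, so the gap widens rapidly with the horizon.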
Our numerical experiments consistently exhibit computational time aligned with the theoretical complexity, with our algorithm being around 5-6 times faster than that of Immorlica et al. (2019). However, given that our paper is primarily theoretical, we conducted demonstrative experiments on smaller datasets where the overhead of solving LPs is not prominent.
We do agree that solving $(M+1)T$ LPs could still be expensive, and there could potentially be lighter approaches to per-iteration updates. Concerning non-stationary BwK, the following papers do not require solving more than one LP per iteration: Fikioris and Tardos (2023) study an approximately stationary setting; Castiglioni et al. (2022a) study a large-budget setting; Liu et al. (2022) focus on a bounded global variation setting. Their algorithms are lighter than ours in terms of computational burden, but their settings are significantly different from ours. Therefore, these works are not directly comparable with ours.
Finally, we thank the reviewer for the valuable point on computational complexity. We will continue to work on reducing the computational load of our algorithm. We are happy and open to discussing any further questions.
References
[1] Fikioris, G., & Tardos, É. (2023, July). Approximately stationary bandits with knapsacks. In The Thirty Sixth Annual Conference on Learning Theory (pp. 3758-3782). PMLR.
[2] Castiglioni, M., Celli, A., & Kroer, C. (2022, June). Online learning with knapsacks: the best of both worlds. In International Conference on Machine Learning (pp. 2767-2783). PMLR.
[3] Liu, S., Jiang, J., & Li, X. (2022). Non-stationary bandits with knapsacks. Advances in Neural Information Processing Systems, 35, 16522-16532.
---
Rebuttal 2:
Title: More on improving the computational overhead
Comment: (We apologize for writing $\hat{r}, \hat{c}$ as $r,c$ in this comment; complicated math notation does not render well.)
Pardon this supplementary discussion. In fact, the computational load of our algorithm can easily be reduced to solving no more than $1$ LP per iteration. Instead of solving LP$(r_t, c_t, \eta_{\min} \cdot \alpha^{q})$ for all $q \in \{0,1, \ldots, M-1\}$ in each round (line 6 of Algorithm 2), we can solve a single LP$(r_t, c_t,\eta_{\min} \cdot \alpha^{q_t})$ after deciding the $q_t$ value in line 16. By doing so, we retain the same performance guarantee as in the original manuscript.
In our original manuscript (and our rebuttal), we didn't write down Algorithm 2 in the most computationally efficient way. Since we mainly focused on illustrating the learning process naturally, we chose to present Algorithm 2 in a way that shows how it naturally generalizes from Algorithm 1. In the deterministic outcome setting (Algorithm 1), we solve $M+1$ LPs for each stationary piece. Then in the bandit setting (Algorithm 2), we largely keep the structure of Algorithm 1 for the purpose of demonstrating the main ideas. However, after running more experiments on problems with larger scales, we find that the aforementioned modification significantly enhanced our algorithm's efficiency. Therefore, we think it's better to implement our algorithm more efficiently. We will revise the manuscript so that Algorithm 2 solves no more than $1$ LP per iteration. We will also supplement more experiments with larger scales in our revised manuscript.
Thank you so much for your valuable advice on computational overhead!
---
Summary: This paper studies the bandits with knapsacks problem in a piecewise-stationary environment and designs an algorithm that achieves a provably near-optimal competitive ratio. Instead of using a static benchmark, the performance guarantee is stated with respect to a dynamic benchmark.
Strengths: - The problem setup, piecewise-stationary BwK, complements the BwK literature.
- This paper is well-structured. The warm-up section 3 on deterministic outcome setting really helps in understanding the algorithm for stochastic outcome setting.
- For each algorithm/theoretical result, there are detailed explanations trying to give the intuition behind it.
- The theoretical results are built on mild assumptions. They are solid and technical.
Weaknesses: - It is hard for readers not having expertise in BwK to get the intuition of why the algorithm is near-optimal. Specifically, why IRES is good by reserving an equal budget for each ratio?
Technical Quality: 3
Clarity: 2
Questions for Authors: - In line 5 of Algorithm 1, why use $\eta_{\min}\cdot\alpha^q$ with $q$ restricted to $\{0,1,\ldots,M-1\}$ to guess $B^*_l/(t_l-t_{l-1})$?
- In Theorem 4.2, is it possible that $\text{opt(FA)}$ is close to the term $\tilde{O}(L\sqrt{|\mathcal{K}|NT})$? In this case, would this bound be close to zero regardless of the preceding product term?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have discussed the limitations as they claimed in the paper checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s time and effort in evaluating our manuscript. Your feedback is highly valued, and we would like to address your concerns and questions point by point.
Regarding "Weaknesses": “It is hard for readers not having expertise in BwK to get the intuition of why the algorithm is near-optimal. Specifically, why IRES is good by reserving an equal budget for each ratio?”
Response: We understand and share the concern about making the intuition behind our algorithm clear. In addition to the warm-up Section 3, we have included a high-level overview and intuition in Section 2.3 (pages 3-4) to help clarify the technical discussions and make our contributions more accessible. Based on the reviewer’s feedback, we recognize the need to introduce more foundational concepts of BwK before presenting our idea and will modify our paper accordingly.
BwK differs from traditional multi-armed bandits (MAB) due to resource constraints. Each action in BwK not only yields a reward but also consumes resources from a limited budget, making it crucial to balance exploration and exploitation while ensuring the resource budget is not exceeded. In piecewise-stationary MAB, the performance of a non-anticipatory algorithm $\pi$ is expressed as $\sum_{t=1}^T R_t(a_t) \geq \sum_{t=1}^T r_t(a_t^{\star}) - \text{Reg}$, where $\sum_{t=1}^T R_t(a_t)$ is the cumulative reward achieved by algorithm $\pi$, $\sum_{t=1}^T r_t(a_t^{\star})$ is the cumulative expected reward obtained by the best arm selections (with prior knowledge of $r_t$ over the entire planning horizon), and $\text{Reg}$ is a sublinear-in-$T$ regret.
In piecewise-stationary BwK, without assuming bounded global variation (Appendix A.2 on page 11), achieving a sublinear-in-$T$ regret is impossible. Therefore, the performance guarantee takes the form $\sum^T_{t=1} R_t(a_t) \geq \frac{1}{\text{CR}}\cdot \text{opt(FA)} - \text{Reg}$, where opt(FA) is the cumulative expected reward obtained by the best dynamic policy (with prior knowledge of $(r_t, c_t)$ over the entire planning horizon, see Section 2.1), CR is the competitive ratio, and Reg is a deducted regret term. In our work, we aim to develop algorithms that minimize CR and ensure that Reg is sublinear in $T$. Specifically, IRES (full-feedback deterministic algorithm) and IRES-CM (bandit-feedback stochastic algorithm) both achieve CR $=O(\log (\eta_{\max}/ \eta_{\min})) = O(M)$ (Theorem 3.2, Theorem 4.2), and we provide a matching CR lower bound (Theorem 4.5).
Recall that we decompose the optimal reward opt(FA) into the reward-consumption ratio multiplied by the amount of resources assigned to each stationary piece $l$ (see equation (3) on page 4; sorry that the math formulation in the rebuttal may not render well).
\begin{equation}
\text{opt(FA)} = \sum_{m=-M}^{M-1} \sum_{l \in \mathcal{L}} \text{Ratio}^{(l)\star} \cdot \mathbf{1}(m^{\star}_l =m) \cdot B^{\star}_l.
\end{equation}
IRES aims to achieve a reward guarantee for each interval $m$ regarding the reward-consumption ratio $\mathbf{1}(m_l^{\star} =m) \cdot \text{Ratio}^{(l)\star}$ and the resource consumption $\sum_{l \in \mathcal{L}} \mathbf{1}(m_l^{\star} =m) \cdot B_l^{\star}$. Regarding why IRES does well by reserving an equal budget for each ratio interval, the high-level explanation is that by doing so, we reserve adequate inventory (compared to $\sum_{l \in \mathcal{L}} \mathbf{1}(m_l^{\star} =m) \cdot B_l^{\star}$) for the correct guess of $\mathbf{1}(m_l^{\star} =m) \cdot \text{Ratio}^{(l)\star}$ for each interval $m$. Specifically, we achieve the guarantee by performing two tasks:
(a) for each $l \in \mathcal{L}$, we guess the value of $m$ such that $\text{Ratio}^{(l) \star} \in (\alpha^m, \alpha^{m+1}]$. By solving a series of LPs in a round-robin manner, we guarantee that for at least a $1/(M+1)$ fraction of requests on each $l$, our guessed ratio interval is close to the correct interval $m^{\star}_l$. By accomplishing task (a), we ensure that for each $l \in \mathcal{L}$, at least $\mathbf{1}(m_l^{\star} =m) \cdot B_l^{\star}/(M+1)$ requested resource units are served by resources reserved for interval $m$, generating reward at a ratio of at least $\alpha^m$.
(b) for each interval $m$, we "reserve" $B/2M$ resource units. That is, we reserve an inventory of $B/2M$ resource units to satisfy requests with a guessed reward-consumption ratio interval $m$. When the inventory reserved for interval $m$ is depleted, the DM rejects (by choosing $a_{\text{null}}$) all future requests with guessed interval $m$. By accomplishing task (b), if the inventory reserved for interval $m$ is not depleted by round $T$, our algorithm earns a reward of at least $\alpha^m \cdot \mathbf{1}(m_l^{\star} =m) \cdot B_l^{\star}/(M+1)$ during stationary piece $l$. Otherwise, if the reserved $B/2M$ resource units for interval $m$ are depleted by round $T$, then the DM earns a reward of at least $\alpha^m \cdot B/(2M) \geq \alpha^m \cdot \sum_{l \in \mathcal{L}} \mathbf{1}(m_l^{\star} =m) \cdot B_l^{\star}/(2M)$ from resources reserved for interval $m$, since $\sum_{l \in \mathcal{L}} \mathbf{1}(m_l^{\star} =m) \cdot B_l^{\star} \leq B$.
Given (a) & (b), a judicious analysis of the relationship between stationary pieces and reward-consumption ratio intervals is required to ensure that a reward of
\begin{align}
\frac{1}{O(M)} \cdot \sum_{l \in \mathcal{L}} \alpha^m \cdot \mathbf{1}(m^{\star}_l =m) \cdot B^{\star}_l
\end{align}
is accrued for each interval $m$. Hence, we achieve CR$=O(M)$.
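A minimal sketch of the reservation bookkeeping in task (b) may help make this concrete. This is our illustrative reading, assuming intervals $m \in \{-M, \ldots, M-1\}$ as in equation (3); it is not the paper's full IRES algorithm, and the class name `IntervalReserve` is hypothetical.

```python
class IntervalReserve:
    """Equal-split inventory reservation: B/(2M) units per ratio interval."""

    def __init__(self, B, M):
        # One reserve per interval m in {-M, ..., M-1}, i.e., 2M intervals.
        self.reserve = {m: B / (2 * M) for m in range(-M, M)}

    def try_serve(self, m, consumption):
        """Serve a request whose guessed ratio interval is m, if its reserved
        inventory suffices; otherwise reject it (play the null arm a_null)."""
        if self.reserve[m] >= consumption:
            self.reserve[m] -= consumption
            return True
        return False
```

Once an interval's reserve is depleted, all later requests guessed into that interval are rejected, which is what underlies the per-interval reward lower bounds in tasks (a) and (b).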
---
Rebuttal 2:
Title: Regarding "Questions"
Comment: (We apologize for the weird section division in this part. Otherwise the math notation do not show correctly.)
1. "In line 5 of algorithm 1, why using $\eta_{\min}\cdot\alpha^q$ with restricting $q \in \{0,1,\ldots,M-1\}$ to guess $B_l ^{\star} /(t_l-t_{l-1})$?"
Response: We are sorry that there is a typo in line 5 of Algorithm 1. It should be “solve $\text{LP}(r^{(l)}, c^{(l)},\eta_{\min} \cdot \alpha^q)$ $\forall q \in \{0,1,\ldots,M\}$” instead of solving for $q \in \{0,1,\ldots,M-1\}$. We will correct this typo. Rest assured that the algorithm description and proofs are all correct regarding the value range of $q$. We next explain why we use $\eta_{\min}\cdot\alpha^q$ with $q$ restricted to $\{0,1,\ldots,M\}$ to guess $B_l ^{\star} /(t_l-t_{l-1})$.
Recall from Section 2.1 that $r^{(l)}(a), c^{(l)}(a) \in [\eta_{\min}, \eta_{\max}]$ for all $l, a$, and recall from Section 2.3 that we define the set $\mathcal{L} = \{l \in \{1, \ldots, L\}: \sum_{a \in \mathcal{K}} x_l^{\star}(a) > 0\}$. Then we have $\sum_{a \in \mathcal{K}} c^{(l)}(a) x_l^{(q)\star}(a) \in [\eta_{\min}, \eta_{\max}]$ since $0<\sum _{a \in \mathcal{K}} x^{(q)\star}_l(a) \leq 1$ for all $l \in \mathcal{L}$.
This further indicates that $B_l ^{\star} /(t_l-t_{l-1}) \in [\eta_{\min}, \eta_{\max}]$. Plugging in $q \in \{0,1,\ldots,M\}$, we essentially use $M+1$ intervals $[\eta_{\min}, \eta_{\min} \cdot \alpha], (\eta_{\min} \cdot \alpha, \eta_{\min} \cdot \alpha^2], \ldots, (\eta_{\min} \cdot \alpha^{M-1}, \eta_{\max}]$ to cover $[\eta_{\min}, \eta_{\max}]$ and guess which interval $B_l ^{\star} /(t_l-t_{l-1})$ falls into.
In Claim 3 (see Appendix B.3 on page 12), we show that the round-robin technique in IRES ensures that, on each stationary piece $l$, $x_l^{(q)\star}$ (the optimal solution to $\text{LP}(r^{(l)}, c^{(l)},\eta_{\min} \cdot \alpha^q)$) for at least one $q \in \{0,1,\ldots,M\}$ is close to $x^{\star}_l$ (the optimal solution to FA) in terms of both the resource consumption and the reward-consumption ratio. Recall that we denote by $m^{\star}_l \in \{-M,\ldots, M-1\}$ the interval such that $\text{Ratio}^{(l) \star} \in (\alpha^{m^{\star}_l}, \alpha^{m^{\star}_l+1}]$, and we denote $m_t \in \{-M, \ldots, M-1\}$ such that $\text{Ratio}_l^{(q_t)} \in (\alpha^{m_t},\alpha^{m_t+1}]$.
Then Claim 3 leads to the important result that for all $l$, in at least a $1/(M+1)$ fraction of rounds $t \in \{t_{l-1}+1, \ldots, t_l\}$, we have $m_t \in \{m^{\star}_l-1,m^{\star}_l\}$.
To conclude, by guessing a $q$ such that $\eta_{\min} \cdot \alpha^q$ is within a factor of $\alpha$ of $B_l^{\star}/(t_l - t_{l-1})$ (which happens in at least a $1/(M+1)$ fraction of rounds), we also have $\text{Ratio}_l ^{(q)}$ within a factor of $\alpha$ of $\text{Ratio}^{(l)\star}$. Therefore, for at least a $1/(M+1)$ fraction of rounds, we have the correct guess of the reward-consumption ratio interval on each stationary piece.
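The interval-guessing step above can be illustrated with a short sketch; the helper `guess_interval` is hypothetical and written under the notation of this response, not taken from the paper.

```python
def guess_interval(v, eta_min, alpha):
    """Smallest q >= 0 with v <= eta_min * alpha**q, i.e., the index of the
    geometric interval covering v within [eta_min, eta_max]."""
    q = 0
    while eta_min * alpha ** q < v:
        q += 1
    return q

# With eta_min = 0.5 and alpha = 2, the intervals are
# [0.5, 1.0], (1.0, 2.0], (2.0, 4.0], ...
```

Because the intervals grow geometrically, any $v \in [\eta_{\min}, \eta_{\max}]$ is covered by one of only $O(\log_{\alpha}(\eta_{\max}/\eta_{\min}))$ indices, which is what keeps the round-robin over guesses affordable.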
2. "In Theorem 4.2, is it possible that opt(FA) is close to the term $\tilde{O}(L\sqrt{|\mathcal{K}|NT})$? In this case, would this bound be close to zero regardless of the preceding product term?"
Response: Yes, the reviewer’s point is correct. If opt(FA) is close to $\tilde{O}(L\sqrt{|\mathcal{K}|NT})$, then the bound in Theorem 4.2 will be close to zero. Unfortunately, this order of loss is unavoidable in bandit problems. In stationary unconstrained MAB, the optimal regret is $\tilde{O}(\sqrt{|\mathcal{K}|T})$; in piecewise-stationary unconstrained MAB, the optimal regret is $\tilde{O}(\sqrt{L|\mathcal{K}|T})$. We emphasize that even if the theoretical regret exceeds the optimal reward, these results, as well as ours, are still meaningful in the following respects: (1) the performance bound serves as a measure of how well our algorithm performs relative to the best possible strategy, and even if the theoretical regret exceeds the optimal reward, the bound indicates that our algorithm's regret grows sub-linearly in $T$; (2) the bound provides a worst-case guarantee, reassuring users that the algorithm won't perform significantly worse than the optimal strategy, even in less favorable conditions; (3) the regret bound provides a theoretical guarantee that the performance will remain within certain limits, offering confidence in the algorithm's consistency and reliability.
Finally, we thank the reviewer for the thoughtful feedback, which has significantly helped us improve our paper. We are happy and open to discussing any further questions.
---
Rebuttal Comment 2.1:
Comment: Thanks for these detailed explanations. As I am unfamiliar with this topic, it is hard for me to evaluate the value of this work and I prefer to keep my score. | Summary: This paper addresses the challenge of piecewise non-stationary stochastic bandits with knapsacks. In bandits with knapsacks, at each round a learner is asked to choose an action and receives both a reward and a budget cost. The goal of the learner is to maximize its cumulative reward while satisfying some cumulative budget constraints. The authors prove a competitive ratio with respect to a non-stationary oracle with $L$ changes of order $O(1/\log(\eta_{\max}/\eta_{\min}))$, under the assumption that budget costs are smaller than $\eta_{\max}$ and rewards are larger than $\eta_{\min}$, and if $L$ is sufficiently small. Earlier results on non-stationary bandits with knapsacks are recent and depend on a global variation measure (glo) that is hardly satisfied in practice.
Strengths: - The considered setting is relevant and of interest for practical applications.
- The authors design an algorithm that achieves an optimal competitive ratio with a matching lower bound.
- The non-stationary assumption is much more realistic than previous work.
- Experiments included in the appendix demonstrate the performance improvements over the only existing baseline. The latter is however designed for the adversarial setting and is thus too conservative in their setting.
Weaknesses: - The main part of the paper is too technical, with many different notations and inline mathematical formulas, and not easy to follow.
- The algorithm needs to know the loss and budget bounds in advance.
- The dependence of the regret in $L$ seems suboptimal since the results only hold for $L \leq o(\sqrt{T\eta_{\min}})$ while we expect $L$ to be possibly as large as $o(T)$.
- I am not convinced by the lower bound of Lemma 2.3 which uses $L = T/B$ and is thus in a setting where the upper-bound does not hold. The result would be stronger for a fixed value of $L$.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I suggest the authors address the limitations I raised above.
- The paper only considers piecewise stationarity, would it be possible to generalize it to smooth variations of losses and budget costs?
Typos:
- p3, l117: $B^2/T$
- p5, l160: "The DM does not *know*"
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your time and effort in reviewing our paper. We address your concerns and questions one by one below:
1. “The main part of the paper is too technical with many different notations, inline mathematical formulas, and not easy to follow.”
Response: We apologize for the dense notation in our paper. Given the highly technical and unconventional nature of our work, detailed notation is necessary to convey our ideas accurately. To help readers, we have included a high-level overview and intuition in Section 2.3 (pages 3-4). We hope this section clarifies the technical discussions and makes our novel contributions more accessible.
2. “The algorithm needs to know the loss and budget bounds in advances.”
Response: We believe there may be a misunderstanding. Our algorithm requires knowing the range within which the rewards and resource consumptions fall, but it does **not** require prior knowledge of budget bounds. In the full-feedback deterministic outcome setting in Section 3, neither our algorithm nor our performance guarantee relies on $L$ (see Theorem 3.2 on page 6). In the bandit-feedback stochastic outcome setting in Section 4, while our performance guarantee (see Theorem 4.2 on page 8) depends on $L$, the algorithm itself does not need to know $L$. That being said, we do provide an improved performance guarantee when $L$ is known (see Remark 4.3 on pages 8-9).
3. “The dependence of the regret in $L$ seems suboptimal since the results only hold for $L = o(\sqrt{T \eta_{\min}})$ while we expect $L$ to be possibly as large as $o(T)$.”
Response: We acknowledge that this is indeed a limitation of our piecewise-stationary setting, as discussed in Section 2.2. In the bandit-feedback stochastic outcome setting, without prior knowledge of $L$, our result is meaningful only when $L = o(\sqrt{T \cdot \eta_{\min}})$. With prior information of $L$, our result is meaningful when $L=o(T \cdot \eta_{\min})$. In the full-feedback deterministic outcome setting, our result is meaningful even when $L=T$.
This limitation arises from our exploration process (lines 9-13, Algorithm 2 on page 8) in the bandit-feedback setting. There could potentially be other change-point monitoring algorithms under which this restriction can be lifted. Nevertheless, our main contribution lies in our novel design of breaking the problem down into guessing the reward-consumption ratio interval and reserving adequate inventory for each interval. This design allows us to achieve a near-optimal competitive ratio without knowing the exact change points, making our contribution meaningful despite this limitation.
4. "I am not convinced by the lower bound of Lemma 2.3 which uses $L=T/B$ and is thus in a setting where the upper-bound does not hold. The result would be stronger for a fixed value of $L$."
Response: We appreciate the reviewer’s careful inspection and insightful concern. The competitive ratio (CR) lower bound in Lemma 2.3 is derived for the full-feedback deterministic outcome setting, where our algorithm has no restriction on $L$. Our deterministic performance guarantee (for $\eta_{\min}>0$) in Theorem 3.2 holds even when $L=T$. Therefore, in the full-feedback deterministic setting, $L=T/B$ falls within the regime where the upper bound holds.
We do agree with the reviewer that in the bandit-feedback stochastic setting, we require $L = o(\sqrt{T \cdot \eta_{\min}})$ (or $L=o(T \cdot \eta_{\min})$ with prior knowledge of $L$), which indeed contradicts Lemma 2.3, where we set $L=T/B$, when $B$ is small. The rationale behind Lemma 2.3 is that we want to provide a bound that surpasses Immorlica et al. (2019), where the CR depends on $T$.
In fact, our Theorem 4.5 on page 9 already provides a valid CR lower bound, where we set $L$ to the fixed value $2 \log(\eta_{\max}/\eta_{\min})$. The lower bound also shows that $\eta_{\min}>0$ is a necessary condition for obtaining a non-trivial CR.
5. “The paper only considers piecewise stationarity, would it be possible to generalize it to smooth variations of losses and budget costs?”
Response: We think it could be possible to generalize our approach to smooth variations, since Algorithm 1 (full-feedback deterministic outcome setting) applies to any non-stationarity (see Theorem 3.2 on page 6). The performance dependence on $L$ in the bandit-feedback stochastic setting arises from our exploration process in Algorithm 2. There could potentially be more refined change point monitoring algorithms that adapt to other types of non-stationarity.
Our key contribution is the new perspective of decomposing the reward into the reward-consumption ratio and the amount of resources assigned to each stationary piece $l$ (see equation (3) on page 4). Then, by reserving a fixed amount of inventory for each ratio interval, our task simplifies to guessing the optimal reward-consumption ratio interval in each round. Since our paper is the first to propose this design, we believe a simpler form of non-stationarity helps us better illustrate our main contribution and makes the paper easier to follow. We will continue to refine our algorithm to adapt to more complex non-stationarity.
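As a toy illustration of this reservation design (this sketch is our own; the interval indices, costs, and ratios below are hypothetical and not from the paper), one can reserve $B/(2M)$ units of budget for each of the $2M$ ratio intervals and only spend from an interval's reserve while it lasts, so no interval can crowd out the others:

```python
def reserve_and_spend(B, M, rounds):
    """Reserve B/(2M) units of budget for each of the 2M ratio intervals
    (indexed -M..M-1, matching the decomposition's summation range) and
    attribute each round's cost to one interval, skipping rounds whose
    interval reserve is depleted.
    `rounds` is a list of (interval, cost, ratio) tuples."""
    reserve = {m: B / (2 * M) for m in range(-M, M)}
    reward = 0.0
    for m, cost, ratio in rounds:
        if reserve[m] >= cost:       # act only while this interval has budget
            reserve[m] -= cost
            reward += ratio * cost   # reward = ratio * consumption
    return reward, sum(reserve.values())

reward, leftover = reserve_and_spend(
    B=4.0, M=2,
    rounds=[(0, 1.0, 0.5), (0, 0.5, 0.5), (1, 1.0, 1.0), (-1, 2.0, 0.25)],
)
```

By construction the total consumption never exceeds $B$, and each interval retains its own $B/(2M)$ reserve regardless of how greedy the other intervals are.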
Finally, we thank the reviewer for the careful inspection and insightful questions, which have indeed helped us improve our paper! We have corrected the typos and hope we have clarified your concerns. We are happy and open to discussing any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. This addresses some of the points I raised.
> We believe there may be a misunderstanding. Our algorithm requires knowing the range where the reward and resource consumption fall in, but it does not require prior knowledge of budget bounds.
This was actually what I meant by my point. Is it possible to not require knowledge of the reward and resource ranges?
> The competitive ratio (CR) lower bound in Lemma 2.3 is derived for a full-feedback deterministic outcome setting, where our algorithm has no restriction on $L$. Our deterministic performance guarantee (for $\eta_{\min}>0$) in Theorem 3.2 holds even when $L=T$.
Thank you. I understand better but this is still unfortunate to have the result for the full-information feedback only. Wouldn't it be possible to have a lower-bound for any fixed value L?
Additionally, after revisiting Theorem 3.2, I realize that the paper is indeed quite challenging to engage with, and the analysis of Thm. 3.2 is hard to follow. It's also unclear to me how the dependence on $L$ is reflected in the result.
---
Reply to Comment 1.1.1:
Title: Demonstrating why knowing $\eta_{\min}$ is necessary
Comment: To show that it is necessary to know $\eta_{\min}$, we let the DM be provided with a lower range $<\eta_{\min}$ (a looser range), and show that it leads to a sub-optimal CR. We first construct a general case with $N+1$ instances when $\eta_{\min} = \beta^{-N}$ for some absolute constant $\beta >1$. We consider $N+1$ instances with two arms $\mathcal{K}= \{1\}$ and $a_{\text{null}}$, where instance $n$ happens with probability $p_n$. All instances have deterministic outcomes, and they share the same reward model $R_t(1) = 1$ for all $t$. Their consumption functions are:
$\text{Instance $1$: }C^{(1)}(1)= \left(\underbrace{1, \ldots, 1}_{B\text{ rounds}}\right)$
$\text{Instance $2$: }C^{(2)}(1)= (\underbrace{1, \ldots, 1},\underbrace{1/\beta, \ldots, 1/\beta}_{B \cdot \beta\text{ rounds}})$
$\text{Instance $3$: }C^{(3)}(1)= \left(\underbrace{1, \ldots, 1}, \underbrace{1/\beta, \ldots, 1/\beta}, \underbrace{1/\beta^2, \ldots, 1/\beta^2}_{B \cdot \beta^2\text{ rounds}}\right)$
$\ldots$
$\text{Instance }N: C^{(N)}(1)= \left(\underbrace{1, \ldots, 1}, \underbrace{1/\beta, \ldots, 1/\beta}, \ldots, \underbrace{1/\beta^{N-1}, \ldots, 1/\beta^{N-1}}_{B \cdot \beta^{N-1}\text{ rounds}}\right)$
$\text{Instance }N+1: C^{(N+1)}(1)= \left(\underbrace{1, \ldots, 1}, \underbrace{1/\beta, \ldots, 1/\beta}, \ldots, \underbrace{1/\beta^{N-1}, \ldots, 1/\beta^{N-1}}, \underbrace{1/\beta^{N}, \ldots, 1/\beta^{N}}_{B \cdot \beta^{N}\text{ rounds}}\right).$
(We are sorry that the subscripts for the underbraces cannot show properly in this bubble. In any instance, the $n$-th stationary piece has a length of $B \cdot \beta^{n-1}$ rounds.)
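For readers who prefer code to underbraces, the construction above can be reproduced with a short Python sketch of our own (the concrete values of $\beta$ and $B$ are hypothetical): instance $n$ concatenates pieces $k = 1, \ldots, n$, where piece $k$ lasts $B \cdot \beta^{k-1}$ rounds with per-round consumption $\beta^{-(k-1)}$, so each stationary piece consumes exactly $B$ units in total:

```python
def consumption_sequence(n, B, beta):
    """Instance n: concatenate pieces k = 1..n, where piece k lasts
    B * beta^(k-1) rounds with per-round consumption 1 / beta^(k-1).
    (B and beta are taken as integers so piece lengths are whole rounds.)"""
    seq = []
    for k in range(1, n + 1):
        seq += [beta ** -(k - 1)] * (B * beta ** (k - 1))
    return seq

# With beta = 2 and B = 4, each stationary piece consumes exactly B = 4
# units in total, so playing arm 1 through any one piece exhausts a budget B.
seq = consumption_sequence(3, B=4, beta=2)
assert len(seq) == 4 + 8 + 16
assert abs(sum(seq) - 3 * 4) < 1e-9
```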
Through an analysis similar to the proof of Lemma 2.3, we can derive a CR lower bound of $N(1-1/\beta)+1/\beta$ using the instance family above. In the following CR expressions, we omit coefficients involving $1/\beta$, as they are constants. For example, we write CR$=N(1-1/\beta)+1/\beta$ simply as CR$=\Theta(N)$. When the DM is provided with the information $\eta_{\min} = \beta^{-N}$ for any $N \in \mathbb{N}$, a CR lower bound is derived based on the $N+1$ instances constructed above.
We suppose the real $\eta_{\min} = \min_{t,a} \frac{r_t(a)}{c_t(a)} = \beta^{-\Lambda}$, $\eta_{\max} = \max_{t,a} \frac{r_t(a)}{c_t(a)} = 1$, but the DM only has weak prior information that
$\tilde{\eta}_{\min}=\beta^{- \kappa \cdot \Lambda}$
($\kappa>1$ can be set to be arbitrarily large) and $\eta_{\max} = 1$. Then from the DM's point of view, the optimal CR she/he could derive is CR$=\Theta(\kappa \cdot \Lambda)$, while from the perspective of who knows the real $\eta_{\min} = \beta^{-\Lambda}$, the optimal CR should be $\Theta(\Lambda)$.
We first show that, given the loose lower range $\beta^{- \kappa \cdot \Lambda}$, the DM will not benefit from tightening the value ranges by blindly guessing a value of $\eta_{\min}$. We suppose the DM tightens the value range to $[\beta^{-(\kappa \cdot \Lambda - d)}, 1]$ for some $d \geq 1$, without knowing the real $\eta_{\min}$. Then she/he derives a CR lower bound with $N +1= \kappa \cdot \Lambda - d+1$ instances based on the above construction, and the DM can expect to achieve a total reward of $\Theta(B \cdot \beta^{\kappa \cdot \Lambda-d}/(\kappa \cdot \Lambda-d))$ (by an analysis similar to the proof of Lemma 2.3). However, since the DM does not know the real $\eta_{\min}$, it is possible that in fact $\eta_{\min} = \beta^{- \kappa \cdot \Lambda}$. In this case, the optimal reward can be as large as $\Omega(B \cdot \beta^{\kappa \cdot \Lambda})$ if $N = \kappa \cdot \Lambda$. Hence, from the DM's perspective, she/he could achieve a sub-optimal CR of $\Theta(\beta^d \cdot (\kappa \cdot \Lambda-d))$ if she/he blindly assumes $\eta_{\min} = \beta^{-(\kappa \cdot \Lambda - d)}$, which is significantly worse than the optimal CR$=\Theta(\kappa \cdot \Lambda)$.
Therefore, the DM must derive a CR on the full range $[\beta^{-\kappa \cdot \Lambda},1]$, which involves $N+1=\kappa \cdot \Lambda+1$ instances as constructed above. The DM then expects a reward of $\Theta(B \cdot \beta^{\kappa \cdot \Lambda}/(\kappa \cdot \Lambda))$ when there are indeed $N+1 =\kappa \cdot \Lambda+1$ instances. However, since in fact there are only $\Lambda+1$ instances, the DM wastes all the resources reserved for instances $\Lambda+2, \ldots, \kappa \cdot \Lambda+1$ and can only achieve a reward of $O(B \cdot \beta^{\Lambda}/(\kappa \cdot \Lambda))$. Compared with the actual optimal reward $\Omega(B \cdot \beta^{\Lambda})$ with $\Lambda+1$ instances, the DM achieves a sub-optimal CR of $\Omega(\kappa \cdot \Lambda)$. Since $\kappa$ can be arbitrarily large, the CR derived without correct knowledge of $\eta_{\min}$ is significantly worse than the optimal CR$=\Theta(\Lambda)$.
---
Reply to Comment 1.1.2:
Title: General response to the comment by Reviewer BBCY
Comment: We appreciate the reviewer’s insightful and pointed concerns. Please allow us to address them further.
Regarding the requirement on the knowledge of the reward and resource ranges: unfortunately, it is necessary. While it is possible to derive a worse bound without $\eta_{\max}$ by setting it to its upper bound of 1, knowing the lower bound is essential for our algorithm's functionality. We have provided a construction with a light proof in the comment above to show that knowing $\eta_{\min}$ is indeed a necessary condition for achieving a near-optimal CR. To our knowledge, existing literature that derives near-optimal performance bounds with respect to $\eta_{\max}/\eta_{\min}$ typically requires knowledge of both $\eta_{\min}$ and $\eta_{\max}$ (Zhou et al., 2008; Im et al., 2021; Zeynali et al., 2021). Thus, we believe this assumption is reasonable.
Regarding the analysis in Section 3.2, our algorithm aims to perform two tasks: (i) ensuring at least one out of every $M+1$ guesses of the reward-consumption ratio is accurate for each stationary time segment; (ii) allocating sufficient resources to each reward-consumption ratio interval (interval for short), to establish a near-optimal CR. Our algorithm IRES guarantees an interval-wise $O(M)$ CR. To attain this, we focus on two scenarios: intervals where the reserved resources are depleted before the end of the horizon (Claim 2, see Line 199) and intervals where the reserved resources are not fully consumed by the end (Claim 1, see Line 197). The interval-wise CR naturally leads to the overall CR result.
For Claim 1, we need the total number of stationary pieces where $t_{l} - t_{l-1} \leq M$ to be at most $o(T)$. Thus we cannot set $L=T/B$ even in the deterministic setting. We have clarified Theorem 3.2’s dependence on $L$ in the revised manuscript and removed Lemma 2.3. Our original motivation for providing Lemma 2.3 was that we found a CR lower bound better than Immorlica et al. (2019) w.r.t. $T$, but we indeed failed to keep it aligned with our assumptions. Thank you very much for pointing this out! We could potentially make Theorem 3.2 stand when $L=O(T)$ by modifying the algorithm to draw $q$ (see line 5 of Algorithm 1) in a randomized manner (instead of a round-robin manner). However, we agree with you that this is only applicable in the full-feedback setting.
Despite the limitation behind Lemma 2.3, we highlight that its proof (see Appendix C.5) still provides a valid CR lower bound of $\Omega(L)$, which applies to all legitimate values of $L$. In the proof of Theorem 4.5 (Appendix C.6), we essentially show that $L$ can be no larger than $\log(\eta_{\max}/\eta_{\min})$ in our constructed instance, leading to a CR lower bound matching our performance guarantee. We will revise these proofs to clearly present the lower bound for all valid values of $L$. We hope this addresses your concern regarding the CR lower bound.
We further remark that demonstrating Claims 1 and 2 involves a judicious analysis of the rewards gained on each stationary piece and the rewards gained for each ratio interval, which depend heavily on the resource consumption status of each interval. Hence, the somewhat complex analysis involving the term $\tilde{\mathcal{T}}^{(m)}$ (defined in Section 3.3) is necessary. We understand and share the concern about making the intuitions clear (as our paper is technical and highly unconventional). Thus, we strive to shed light on our analysis by providing a high-level overview and intuition in Section 2.3 (pages 3-4), making our contributions more accessible. We have also revised Theorem 3.2 to clearly show the dependence on $L$.
To conclude our comments:
1. Knowing $\eta_{\min}, \eta_{\max}$ beforehand is a necessary condition for any non-anticipatory algorithm to achieve a CR of $O(\log(\eta_{\max}/\eta_{\min}))$, but we believe it is a mild assumption since it is made in relevant papers. We will clearly point out this limitation in Section 2.2.
2. Our Lemma 2.3, which sets $L=T/B$, is not aligned with our assumptions, yet we do provide a CR lower bound for all legitimate $L$.
3. While the notation in our analysis is unavoidable, we will ensure that the dependence on $L$ is clearly highlighted in Theorem 3.2.
We hope this addresses all your concerns. Despite the limitations, we believe our paper is novel and makes significant advances over existing works.
References
[1] Im, S., Kumar, R., Montazer Qaem, M., & Purohit, M. (2021). Online knapsack with frequency predictions. Advances in neural information processing systems, 34, 2733-2743.
[2] Zeynali, A., Sun, B., Hajiesmaili, M., & Wierman, A. (2021, May). Data-driven competitive algorithms for online knapsack and set cover. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 12, pp. 10833-10841). | Rebuttal 1:
Rebuttal: Dear Review Team, we are grateful for your careful reading and thoughtful comments. They are highly relevant and very insightful. On top of the point-to-point responses to individual reviewers, we would like to summarize and clarify critical concerns.
1. Are the problem and the algorithm sufficiently meaningful?
Response: We believe our piecewise-stationary setting is meaningful. Theoretically, unconstrained piecewise-stationary bandits have drawn significant attention in recent years, with many interesting results derived. However, no prior paper addresses the bandits with knapsacks (BwK) case. In the presence of resource constraints, merely detecting change points in $\{(r_t, c_t)\}_{t=1}^T$ is insufficient. Even with known change points, not knowing the means $\{r^{(l)}, c^{(l)}\}$ in each stationary piece $l = 1, \ldots, L$ prevents us from determining the optimal resource consumption on each stationary piece. Thus, besides hedging against arbitrary change points, we must address the uncertainty in $\{r^{(l)}, c^{(l)}\}$ and its impact on the "correct" amount of resource consumption. This marks a difference from unconstrained non-stationary bandits. Practically, our setting is more realistic than the existing stationary/purely adversarial/bounded global variation settings.
We believe our algorithm is highly meaningful. Our Sections 3 and 4 concern the **full** and **bandit** feedback settings respectively. In the former, the DM observes $r_t(a), c_t(a)$ before choosing $A_t$. In the latter, the DM only observes $R_t(A_t), C_t(A_t)$ (with means $r_t(A_t), c_t(A_t)$) after choosing $A_t$. In Section 3, we provide a novel perspective on the non-stationary BwK problem, where we decompose the reward into two terms: the reward-consumption ratio multiplied by the resource consumption on each stationary piece. Correspondingly, we propose an intuitive inventory reservation algorithm, reserving a certain amount of inventory for each ratio interval and guessing the optimal reward-consumption ratio interval in each round. In Section 4, we carefully control the estimation procedure on $\{(r_t, c_t)\}_{t=1}^T$, which maintains the competitive ratio while achieving a regret term similar to that in unconstrained piecewise-stationary bandits. We provide the first provably near-optimal performance guarantee in the piecewise-stationary BwK setting, which (a) **compares against the true optimum**, and (b) **allows bandit feedback**. Our ideas and algorithms are completely novel, and the technical details of our proofs are not at all trivial or similar to any existing paper.
2. Requiring $r_t(a), c_t(a) \in [\eta_{\min}, \eta_{\max}]$ for all $t,a$ weakens the result.
Response: We have redefined $\eta$ so that $r_t(a)/c_t(a) \in [\eta_{\min}, \eta_{\max}]$ for all $t,a$. We have revised our algorithm mildly to achieve the same near-optimal competitive ratio, which is a stronger result. More details are provided in the response to question 1 of Reviewer engA.
3. Novelty of algorithm design compared to Immorlica et al. (2019), Zhou et al. (2008) and other existing works.
Response:
$\bullet$ Comparing with online matching papers (e.g. Zhou et al. (2008)):
(i) Requiring less information: Our Section 3 has the same full feedback setting as online matching papers, which crucially require observing the actual values of $(R_t(a), C_t(a))$ for all $a \in \mathcal{K}$ **before** choosing $A_t$. Online matching algorithms cannot be readily generalized to our bandit setting (Section 4), where the DM only observes the outcomes $(R_t(A_t), C_t(A_t))$ **after** pulling arm $A_t$. The natural generalization from full to bandit feedback is one of our core contributions.
(ii) Stronger benchmark: Zhou et al. (2008) measures the performance of their algorithm by the **best single arm** benchmark which requires pulling a single arm in each round; while our benchmark opt(FA) is a **best distribution over arms** benchmark, which is theoretically much stronger than Zhou et al. (2008)'s benchmark.
(iii) More general results: In the full feedback setting, our performance guarantee encompasses Zhou et al. (2008), and in Section 4 we bypass the difficulty in Zhou et al. (2008) with the bandit setting.
$\bullet$ Comparing with inventory reservation papers: Immorlica et al. (2019)’s inventory reservation strategy depends solely on the cumulated reward without considering the reward-consumption ratio, using traditional gradient descent-based algorithms. Our experiments demonstrate that their inventory reservation strategy is significantly more conservative than ours in the piecewise-stationary setting. Other works such as Ball and Queyranne (2009) all focus on different or more specialized settings. Our inventory reservation strategy is highly distinctive from existing methods.
$\bullet$ Comparing with non-stationary BwK papers: Adversarial BwK papers (Immorlica et al. (2019)) consider a more general non-stationary setting than ours, but compare their performance with a stationary benchmark where a fixed optimal arm (or a fixed distribution over arms) is applied in all $T$ rounds. BwK with bounded-variation papers (Liu et al. (2022)) require the total parameter variation to be bounded in terms of a global budget, which is a very strong assumption. Our setting does not require bounded global variation and our performance guarantee is compared with a dynamic benchmark, which makes our result strong both theoretically and practically, and comparatively more realistic.
4. Lower bound of Lemma 2.3 requires $L=T/B$, which does not look convincing.
Response: Theorem 4.5 already provides a valid CR lower bound, where we set $L$ to be a fixed value $2 \log(\eta_{\max}/\eta_{\min})$ and show that $\eta_{\min}>0$ is a necessary condition for obtaining a non-trivial CR.
We thank the review team for the valuable comments, and we are happy and open to discussing any further questions. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper studies Bandits with Knapsacks in a piecewise-stationary environment, where the underlying reward can change over time.
The authors provide a provably near-optimal competitive ratio for this setting, which is measured against a dynamic benchmark and obtains stronger results than existing adversarial BwK works. Specifically, the algorithm proposed in the paper does not rely on prior information about the number of stationary pieces or the time indexes at which changes happen.
Strengths: 1. The contribution of this work is solid. Compared to previous works that use the global variation glo to quantify performance, the algorithm in this paper uses the number of reward changes as a measure. This is a better choice in non-stationary bandits in general.
2. The paper is well written and provides adequate details to understand the flow of the material.
Weaknesses: As I understand, the main idea of this paper is to categorize the rewards into $M$ levels based on the maximum per-unit reward (Ratio*). Each level's maximum per-unit reward is a multiple of the previous one, allowing the algorithm to focus solely on the highest level's maximum per-unit reward. In this setting, the algorithm allocates $\frac{B}{2M}$ resources to each level. In this regard, the algorithm's reward is at least the maximum per-unit reward multiplied by $\frac{B}{2M}$, while the optimal algorithm's profit is at most the maximum unit profit multiplied by $B$. This leads to a competitive ratio $O(M) = O(\log(\eta_{\max}/\eta_{\min}))$.
In this regard, such a design seems somewhat trivial. I can understand that for this problem, perhaps this method is optimal. However, given such an algorithm, I am not sure if the problem itself is sufficiently meaningful.
Typos:
1. Line 160: The DM does not L...
Technical Quality: 2
Clarity: 3
Questions for Authors: In the non-stationary bandit research I have seen before, algorithms typically need to detect whether the reward has changed in every round. When a change is detected, the algorithm usually employs a restart mechanism to forget past information. However, in Algorithm 2 of this paper, there is no such detection-restart mechanism. In this regard, I am curious about how the non-stationary rewards impact the algorithm in this paper.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's time and effort in evaluating our manuscript. We highly value your feedback and would like to address your concerns and questions in what follows.
Regarding "Weaknesses":
1. Clarification on Algorithm Details (Full-Feedback Deterministic Outcome Setting): We would like to point out some missing situations and details in the reviewer's summary, which could potentially resolve some possible misunderstanding. The reviewer’s summary, "the algorithm's reward is at least the maximum per unit reward multiplied by $𝐵/2𝑀$, while the optimal algorithm's profit is at most the maximum unit profit multiplied by $𝐵$," captures only the special case when our reserved $𝐵/2𝑀$ units of inventory for each reward-consumption ratio interval are depleted before the end of the planning horizon. The more general and complex scenario arises when our reserved $𝐵/2𝑀$ units are fully consumed for some ratio intervals $m \in \mathcal{M}_2$ (see Claim 2, Section 3.3, on page 6) and not fully consumed for some ratio intervals $m \in \mathcal{M}_1$ (see Claim 1, Section 3.3, on page 6). In the latter case, our algorithm ensures that on each stationary piece $l$ with the optimal ratio $\text{Ratio}^{(l)\star}$ falling in intervals $m \in \mathcal{M}_1$, at least $B^*_l/M$ units are consumed at the optimal ratio. However, it does not provide any direct guarantee that $1/O(M)$ fraction of the optimal algorithm's profit is obtained for intervals $m \in \mathcal{M}_1$. Establishing our competitive ratio requires judicious analysis to bridge between the piece-wise and interval-wise results, which is not trivial (see Appendices B.4, C.2, C.3).
2. Clarification on Algorithm Details (Bandit-Feedback Stochastic Outcome Setting): Our primary focus is on the bandit-feedback stochastic outcome setting, where existing algorithms for non-stationary unconstrained multi-armed bandits (e.g., sliding window) cannot be naïvely generalized. Our novel design allows for monitoring underlying changes in parameters through sampling rounds rather than change-point detection algorithms, which further simplifies the problem. For more details, please refer to the next bubble, where we address your questions.
3. Comparison with Prior Work: Previous works on online non-stationary optimization/bandits with knapsacks focus on either adversarial settings (Immorlica et al., 2019; Kesselheim and Singla, 2020) or a global variation budget (Jiang et al., 2020; Balseiro et al., 2022; Liu et al., 2022). We believe our piecewise-stationary setting is meaningful both theoretically and practically:
$\bullet$ Theoretically, Immorlica et al. (2019), Kesselheim and Singla (2020) consider a more general non-stationary setting than ours, but compare their performance with a stationary benchmark where a fixed optimal arm (or a fixed optimal distribution over arms) is applied in all $T$ rounds. Jiang et al. (2020), Balseiro et al. (2022), Liu et al. (2022) require the total non-stationarity to be bounded within a global variation budget (see Remark 4.2), which is a very strong assumption. Our setting does not require bounded global variation and our performance guarantee is compared with a dynamic benchmark, which we find to be an interesting and meaningful result.
$\bullet$ Practically, the adversarial setting may be too conservative, and the global variation budget setting can apply to very limited data structures. Our piecewise assumption on non-stationarity is reasonable and observable in many real-life scenarios, such as sales patterns that remain stationary for certain periods before changing during hot seasons/promotions/new trends.
We further argue that the study of piecewise-stationary bandits without constraints has drawn significant research attention. We list some of the relevant papers below:
[1] Auer, P., Gajane, P., & Ortner, R. (2019, June). Adaptively tracking the best bandit arm with an unknown number of distribution changes. In Conference on Learning Theory (pp. 138-158). PMLR.
[2] Cao, Y., Wen, Z., Kveton, B., & Xie, Y. (2019, April). Nearly optimal adaptive procedure with change detection for piecewise-stationary bandit. In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 418-427). PMLR.
[3] Besson, L., Kaufmann, E., Maillard, O. A., & Seznec, J. (2022). Efficient change-point detection for tackling piecewise-stationary bandits. Journal of Machine Learning Research, 23(77), 1-40.
[4] Bhatt, S., Fang, G., & Li, P. (2023, April). Piecewise stationary bandits under risk criteria. In International Conference on Artificial Intelligence and Statistics (pp. 4313-4335). PMLR.
Our work is the first to study piecewise-stationary bandits with knapsacks, which is a new and interesting setting that requires extra care in maintaining the order optimal competitive ratio in Section 4.
4. Simplicity and Contribution: While our algorithm is not complex, its simplicity and intuitiveness enhance our contribution. We offer a novel perspective on non-stationary BwK, distinct from traditional gradient descent-based algorithms. Our approach simplifies the problem by focusing on two elements: guessing the reward-consumption ratio interval and reserving adequate inventory for each interval. Although the process of deciding ratio intervals via LPs in a round-robin manner may seem primitive, it leaves room for further refinement of our competitive ratio's coefficients (we already achieve the optimal competitive ratio’s order w.r.t. $\eta_{\max}/\eta_{\min}$). Nevertheless, given that our paper introduces this novel perspective, we believe our contribution is solid.
---
Rebuttal 2:
Comment: Regarding "Questions": We appreciate your question on how our algorithm monitors changes in rewards and resource consumption without a restarting mechanism. The key is that we decompose the reward into the product of the reward-consumption ratio and the amount of resources assigned for each stationary piece $l$ (see equation (3) on page 4; apologies if the math formatting does not render well in this comment).
\begin{equation}
\text{opt(FA)} = \sum_{m=-M}^{M-1} \sum_{l \in \mathcal{L}} \text{Ratio}^{(l)\star} \cdot \mathbf{1}(m^{\star}_l =m) \cdot B^{\star}_l.
\end{equation}
By reserving a fixed amount of inventory for each ratio interval, our task simplifies to guessing the optimal reward-consumption ratio interval in each round.
In the full-feedback deterministic outcome setting (Algorithm 1 on page 5),
\begin{equation}
\text{Ratio}_l^{(q)} := \frac{\sum_{a \in \mathcal{K}} r^{(l)}(a)\, x_l^{(q)\star}(a)}{\sum_{a \in \mathcal{K}} c^{(l)}(a)\, x_l^{(q)\star}(a)},
\end{equation}
which is a guess of $\text{Ratio}^{(l)\star}$, is observable for each $q$. Then it suffices to check the ratio interval that $\text{Ratio}_{l}^{(q)}$ belongs to.
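The ratio-interval guess above can be sketched in a few lines. This is only an illustration: the geometric grid form, the function name, and the parameters (`base`, `delta`, `M`) are assumptions for exposition, not taken from the paper.

```python
import math

def ratio_interval(r, c, x, base=1.0, delta=0.5, M=4):
    """Illustrative sketch (grid form assumed): compute the reward-consumption
    ratio of an LP solution x, then bucket it into a geometric grid of 2M
    intervals [base*(1+delta)^m, base*(1+delta)^(m+1)) for m in {-M,...,M-1}."""
    ratio = sum(ri * xi for ri, xi in zip(r, x)) / sum(ci * xi for ci, xi in zip(c, x))
    m = math.floor(math.log(ratio / base, 1 + delta))
    # clamp the index into the finite grid
    return max(-M, min(M - 1, m)), ratio
```

For instance, with per-arm rewards `r = [2, 0]`, consumptions `c = [1, 1]`, and an LP solution putting all mass on the first arm, the ratio is 2 and falls in the interval indexed by `m = 1` under the default grid.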
In the bandit-feedback stochastic outcome setting, we only need an additional estimate of $(r_t(a), c_t(a))$ to ensure the estimated $\text{Ratio}^{(l)\star}$ falls within the correct interval. This is achieved through random sampling rounds, termed "exploration rounds" (see Lines 7-13 in Algorithm 2). Specifically, in each round $t$, we conduct sampling with probability $\gamma_t$: we uniformly at random choose an arm $a \in \mathcal{K}$ and pull it $N$ times (see equation (5) on page 7). We then update $(\hat{r}_t(a), \hat{c}_t(a))$, an estimate of $(r_t(a), c_t(a))$, as in equation (5) on page 7, and guess $\text{Ratio}^{(l)\star}$ based on $(\hat{r}_t(a), \hat{c}_t(a))$.
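The exploration-round mechanics just described can be sketched as follows. All names (`explore_step`, `pull`, `estimates`) are hypothetical, and the sketch simplifies the algorithm's equation (5) to a plain sample mean over the $N$ pulls.

```python
import random

def explore_step(arms, gamma_t, N, pull, estimates):
    """One round of an exploration-round scheme (illustrative only):
    with probability gamma_t, pick an arm uniformly at random, pull it
    N times, and refresh the empirical (reward, consumption) estimate."""
    if random.random() < gamma_t:
        a = random.choice(arms)
        samples = [pull(a) for _ in range(N)]  # each sample is (reward, cost)
        r_hat = sum(s[0] for s in samples) / N
        c_hat = sum(s[1] for s in samples) / N
        estimates[a] = (r_hat, c_hat)
        return a
    return None  # exploitation round: keep the current estimates
```

The refreshed estimates would then feed the ratio-interval guess; in the actual algorithm, exploration rounds are charged to the Reg term of the guarantee.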
We highlight that the major performance difference between IRES and IRES-CM is the loss caused by estimating $(r_t,c_t)$, reflected in the following aspects: (i) reward loss caused by exploration rounds; (ii) the most recent exploration rounds contain change points, causing failed estimation of $(r_t, c_t)$; (iii) the most recent exploration rounds do not contain change points, but there is a large discrepancy between $(r_t,c_t)$ and $(\hat{r}_t,\hat{c}_t)$ (due to untimely update).
The performance guarantee of our algorithms is in the form of $\sum^T_{t=1} R_t(a_t)\geq \frac{1}{\text{CR}}\cdot \text{opt}(\text{FA}) - \text{Reg}$. In Appendix B.7 & C.4, we respectively prove that the losses due to (i, ii) are accounted for in Reg, while (iii) is accounted for in the CR, with high probability.
In the full-feedback deterministic outcome setting, our performance guarantee holds as long as $L=o(T)$. In the bandit-feedback stochastic outcome setting, however, $L$ shows up in $\text{Reg}=\tilde{O}(L \sqrt{|\mathcal{K}|NT})$ (or $\text{Reg}=\tilde{O}(\sqrt{L|\mathcal{K}|NT})$ when $L$ is known) (see Theorem 4.2 on page 8). Reg is a sublinear-in-$T$ reward loss, provided $L=o(\sqrt{\eta_{\min}T})$ (or $L=o(\eta_{\min}T)$ when $L$ is known). While our sampling process is effective, we acknowledge that it could potentially be complemented or replaced by other change-point monitoring algorithms.
We finally remark that our algorithm design does not need to detect the exact change points; rather, our sampling (exploration rounds) works primarily because of the piecewise-stationary nature. The intuition is that, given the presence of resource constraints, identifying the change point is not enough, unlike in the unconstrained setting. Indeed, even in the simple case where we know there is only one change point occurring at $t=T/2$, the DM still needs to estimate how many resource units are consumed in the first piece $\{1, \ldots, T/2\}$ and in the second piece $\{T/2 + 1, \ldots, T\}$ in the optimal solution. This means that, during $\{1, \ldots, T/2\}$, the DM needs some form of knowledge of the mean outcome in the second piece. This need is absent in the setting without resource constraints, where the DM can achieve a SOTA regret bound by restarting the standard UCB at $t=T/2$. This explains why we do not use the detect-restart mechanism common in unconstrained cases; instead, we find that our sampling strategy is better suited to the resource-constrained setting.
We thank the reviewer for the valuable comments, and we have corrected the typos. We hope we have clarified all your concerns, and we hope you will consider re-evaluating our contributions after the clarification. We are happy and open to discussing any further questions.
Title: Response to "Questions" | null | null | null | null | null | null |
GraphVis: Boosting LLMs with Visual Knowledge Graph Integration | Accept (poster) | Summary: - This paper proposes an instruction tuning method with a visual knowledge graph to enhance large vision language models with external knowledge and improve performance on QA tasks.
Strengths: - It is a new idea to organize the external knowledge as an image to enhance LVLMs.
Weaknesses: - Only LLaVA-v1.6-Mistral is employed as an LVLM backbone in the experiments. As there are many other LVLMs, I think more experiments should be conducted on diverse LVLMs to demonstrate the effectiveness of GraphVis.
- The visual graph understanding ability is not evaluated separately. Only the final performance on VQA tasks is reported in the experiments. As a two-stage training framework, I think the Visual Graph Comprehension Fine-tuning stage should also be evaluated with suitable metrics.
- How can it be proved with some examples that the Visual Graph Comprehension Fine-tuning really works?
Technical Quality: 3
Clarity: 3
Questions for Authors: I hope authors could consider my comments and give me a response.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is a limitation section in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're grateful for your support and helpful feedback. Please find our response below for the questions raised in the review and additional experiments.
---
**Q1**. More experiments should be conducted on diverse LVLMs to demonstrate the effectiveness of GraphVis.
**A1**. Thank you for your suggestion. In our limitations discussion, we highlighted that scaling up experiments with larger models would be an interesting direction if compute resources permit. In response to the raised point, we have added experiments on the CSQA task using LLaVA-v1.5 (Vicuna-7B). This model has a different LLM backbone and pre-training process compared to the one initially used in our study.
| Method | CSQA |
| :--------: | :--------: |
| Base LVLM | 68.1 |
| GraphVis | 79.9 |
---
**Q2**. Evaluate the model’s visual graph understanding ability separately.
**A2**. Thank you for your suggestion. In Table 1 of the attached PDF, we further evaluated LVLM on these graph comprehension tasks, both before and after training on the synthetic tasks. To ensure a fair comparison, we utilized synthetic images from the test data of CSQA to construct a test set. The accuracy for each individual task is reported. Due to time constraints, we implemented exact matching in determining answer accuracy, which, while strict, provides insight into performance gains and error sources.
We observed that graph comprehension tasks are inherently difficult for the LVLM, as such images and tasks are scarce in its pre-training and fine-tuning data. On tasks such as triple listing, it can barely complete the task. An example output:
"Based on the image provided, the graph appears to represent a network or a system with nodes (blue circles) and edges (black lines) connecting them. To list all the triples in the graph, I'll describe each triple as a sequence of three nodes in the graph, which are connected by edges. Here are the triples in the graph: 1. (node1, node2, node3) 2. (node2, node3, node4)..."
Since these preliminary tasks were intended as a warm start for the model to learn to ground its reasoning on graph images, we only fine-tuned on them for one epoch. Nevertheless, we observed a notable gain across all tasks after just one epoch of fine-tuning.
---
**Q3**. Examples to show the effectiveness of GraphVis.
**A3**. Thank you for your suggestion. In Figure 4 of our [attached one-page pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf), we provided a specific example of the model's generations for the VQA task ScienceQA. The question fundamentally requires the model to traverse the food web from a starting point, following the directed arrows. It must identify the potential nodes connected to the starting point in a certain direction and match the node names with the provided options. The original model failed to complete this task successfully. However, with GraphVis, the model's ability to handle such image data improved significantly, resulting in a correct answer.
---
Rebuttal Comment 1.1:
Title: Rating Change
Comment: Thank you for your response. I have changed my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your prompt response and positive feedback on our rebuttal! | Summary: The paper presents a method to improve large vision language models (LVLMs) by integrating knowledge graphs (KGs) visually. The approach, GraphVis, uses LVLMs to understand KGs through image visualizations, enhancing comprehension and reasoning. It employs a curriculum fine-tuning strategy, starting with simple graph features and moving to complex QA tasks. Experiments show significant performance gains in both textual QA and zero-shot VQA, outperforming existing methods with efficient parameter training. The paper also addresses potential societal impacts and future work.
Strengths: 1. The paper introduces a novel approach, GraphVis, which effectively integrates structured knowledge from knowledge graphs into large visual language models using a visual modality.
2. GraphVis employs a unique, sequential curriculum fine-tuning scheme that progressively trains the model on basic graph features before moving to more complex reasoning tasks.
3. The paper demonstrates that GraphVis not only improves textual question-answering performance but also enhances zero-shot visual question-answering capabilities.
Weaknesses: 1. While the paper demonstrates the effectiveness of GraphVis using ConceptNet, it may not be clear how well the approach generalizes to other knowledge graphs with different domains. Further research would be needed to confirm its effectiveness across various KGs.
2. What is the diversity of question-answer pairs used for visual understanding finetuning? Could training on overly monotonous datasets lead to overfitting, and would this affect downstream tasks?
3. Even if two graphs have identical structures, they can present different visualization results through changes in the positions of nodes, edges, and other elements. Would this affect the training of the model?
4. In Table 1, the comparison seems unfair for baseline methods. It could be better if authors provide the zero-shot performance on CSQA and OBQA by training on other available datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your support and constructive feedback. Please find our detailed response below for the questions raised in the review.
---
**Q1**. Further research would be needed to confirm its effectiveness across various KGs.
**A1**. Thanks for raising this important aspect. As we have also mentioned in our conclusion paragraph on limitations and future work, it is indeed crucial to explore the generalizability of GraphVis across different KGs. While these investigations are beyond the scope of our current study, we will further emphasize this future research direction and outline our plans for future investigations in our revised manuscript.
---
**Q2**. What is the diversity of question-answer pairs used for visual understanding finetuning? Could training on overly monotonous datasets lead to overfitting, and would this affect downstream tasks?
**A2**. Thank you for pointing this out. The synthetic QA data may indeed result in an overly monotonous dataset; we therefore employed different question formats for each synthetic task to increase diversity. Moreover, as discussed in lines 197-201, we incorporated real QA data for fine-tuning, which further mitigates this potential issue.
To evaluate the impact of question format diversity on model performance, we conducted experiments using only one question format for each synthetic task, as compared to the 5 formats used in our paper. For a fair comparison, the single format is one of the 5 formats. The results are summarized in Table 4 of [our attached pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf).
---
**Q3**. Visualization may change even when the structure of the graph is identical. Would this affect the training of the model?
**A3**. Thank you for raising this question. We acknowledge that different visualizations can exist for the same graph structure. In our study, we generated random visualizations of the graphs to capture an average performance across these variations.
To address the potential effects of different visualizations, we conducted an additional experiment using another set of randomly generated images with different visualization colors and shapes for the CSQA dataset (as shown in Table 5 of our attached pdf). The results of this experiment will be included in our revised manuscript to show the robustness of our result.
We further recognize that leveraging the diversity in graph images for the same graph structure is an interesting future direction. Utilizing this diversity could further enhance the model's ability to generalize and improve its robustness to different visual representations. We will add this aspect in the discussion on future research.
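As a small illustration of the randomization discussed above, the same edge set can be rendered with different node positions on each draw while the graph structure stays identical. The layout routine below is hypothetical, not the one used in the paper.

```python
import random

def random_layout(edges, seed=None, width=400, height=400):
    """Assign each node of an edge list a random (x, y) position on a canvas.
    Different seeds give different visualizations of the same structure."""
    rng = random.Random(seed)
    nodes = sorted({n for e in edges for n in e})
    return {n: (rng.uniform(0, width), rng.uniform(0, height)) for n in nodes}
```

Drawing the graph image from two different seeds would then produce two visually distinct renderings of the same subgraph, which is the kind of variation the robustness experiment averages over.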
---
**Q4**. It could be better if authors provide the zero-shot performance on CSQA and OBQA by training on other available datasets.
**A4**. We first clarify that the fine-tuning baselines that we compare with (e.g. GNP) similarly use the training data from CSQA and OBQA. To provide stronger and more comprehensive baselines, we conduct experiments with fine-tuning on the same training data without any additional KG information, and fine-tuning with KAPING prompting. Results are shown in Table 2 of our attached pdf, which continue to demonstrate the effectiveness of our method. We will include these two additional baselines in our Table 1 in our revision.
Regarding the generalizability of our results, we would like to point out that our setting for VQA is indeed zero-shot. With fine-tuning on these synthetic graph images and textual QAs, the LVLM exhibited notable improvement on zero-shot tasks such as ScienceQA and MMBench, which also contain data with graph structures. These results highlight the successful transfer and generalization capabilities of our approach across different datasets and tasks.
---
Thank you again for your helpful comments. We hope that our clarifications and additional experiments address the raised concerns.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: The responses addressed my concerns. I have changed my rating.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for getting back to us and for the positive feedback on our rebuttal!
---
Rebuttal 2:
Comment: Dear reviewer omgG,
Thank you again for your support and valuable feedback. We appreciate your insights and hope that we have adequately addressed your questions. Specifically,
1. Exploring various KGs: We agree this is an important area for future work and will include a discussion in our revision.
2. Additional experiments: We've provided new results on:
- The effect of diverse question formats (Table 4 in the [attached PDF](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf))
- Different visualizations for synthetic graph images (Table 5 in the [attached PDF](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf))
3. Generalizability: We included additional baselines that were similarly fine-tuned and tested on the same data (Table 2 in the [attached PDF](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf)). Specifically regarding generalization, we clarified that our VQA tasks are performed in a zero-shot setting. Further explanation of our contribution regarding VQA is presented in [Global A3](https://openreview.net/forum?id=haVPmN8UGi&noteId=qU5a2KzdFg).
We hope these responses and clarifications have been helpful. If you have any further questions about our rebuttal, we're happy to provide additional information or clarification. We sincerely appreciate the time and effort you've invested in reviewing our work! | Summary: GraphVis introduces a novel method for integrating knowledge graphs (KGs) into large language models (LLMs) by preserving the graph structure through the visual modality. Utilizing Large Vision Language Models (LVLMs) and a curriculum fine-tuning scheme, GraphVis enhances both textual QA and VQA performance, demonstrating significant improvements over existing KG-enhanced LLM methods.
Strengths: 1. The use of visual representations to preserve the intricate structure of KGs is a novel approach, addressing limitations of linearized text triples and improving the expressiveness of structured data integration.
2. The two-phase curriculum fine-tuning, starting with graphical feature recognition and progressing to reasoning tasks, is a technically-sound strategy.
3. The paper provides extensive evaluations across commonsense reasoning QA and VQA benchmarks, showcasing substantial performance gains.
Weaknesses: 1. The paper does not discuss how the properties of the generated graph images, such as size and resolution, affect the model's performance. Understanding these factors is crucial for replicating and optimizing the method.
2. While the paper claims performance improvements, it is important to confirm whether this cross-modal methodology is a novel approach. A comparison with existing methods and a discussion on how GraphVis advances the current state-of-the-art would be beneficial.
3. The proposed method seems to lack significant technical contributions. The approach primarily leverages existing techniques (e.g., curriculum fine-tuning and visual graph generation) without introducing substantial innovations.
Technical Quality: 3
Clarity: 3
Questions for Authors: As mentioned in Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your support and suggestions, for which we have included additional experiments accordingly. We hope our explanations below answer your questions and provide more clarity.
---
**Q1**. How the properties of the generated graph images, such as size and resolution, affect the model’s performance?
**A1**. Thank you for your suggestion. It is generally observed in VQA tasks that images with lower resolution can lead to degraded performance, as these images are considered "corrupted" and often lead to object hallucinations. For the QA tasks considered in our evaluation, we conducted additional experiments using graph images with smaller sizes and consequently lower resolutions (50x50). The results are summarized in Figure 3 of our [attached one-page pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf).
From the figure, we can observe that reducing the size and resolution of the graph images (GraphVis (small)) leads to a decrease in performance compared to the standard GraphVis setup. This indicates that higher resolution graph images are crucial for the model to accurately comprehend and utilize the visual information encoded in the graph images as well.
Lastly, we emphasize that image size and resolution can be considered as hyperparameter choices. This does not affect our main contribution, which is demonstrating that graph images are a more effective means of conveying useful graph information compared to verbalization.
---
**Q2**. While the paper claims performance improvements, it is important to confirm whether this cross-modal methodology is a novel approach.
**A2**. Thank you for pointing out the importance of highlighting the novelty of GraphVis. The contribution of GraphVis is two-fold.
1. **Novel use of visual modality for KG-Enhanced LLMs**: GraphVis is the first to employ visual modality for KG-enhanced LLMs, leveraging graph visualization to bridge the gap between structured KG data and multimodal LLM processing capabilities.
2. **Utilization of KG and textual data in fine-tuning LVLMs**: GraphVis uniquely proposes the use of KG and textual data to fine-tune LVLMs by leveraging synthetic graph images and textual QA datasets.
We recognize there may be some misunderstanding regarding our second contribution. To clarify, the VQA tasks considered in our paper were performed in a zero-shot setting. Our primary contribution lies in demonstrating the potential of utilizing vast training data from the text-only domain and generating synthetic images with graph structures to enhance the LVLM’s understanding of images that have underlying graph structures.
To provide further clarity, current LVLMs (e.g. LLaVA-v1.6) are pre-trained and fine-tuned on a large corpus of vision-language instruction-following data. This corpus includes data curated from various VQA training datasets, human annotations, and GPT-4V generations. Obtaining such training data for LVLMs, however, is considerably expensive as it involves data from different modalities. For instance, generating 6k image descriptions with 1k tokens per output using GPT-4V would cost approximately $200. Therefore, researchers have been exploring ways of generating synthetic data to further improve these LVLMs [1-3].
Our major contribution here is to offer a new perspective on gathering fine-tuning data to enhance LVLMs. Specifically, we propose that pure textual data can be combined with relevant synthetic graph images derived from KGs to improve the LVLM’s capability in image comprehension and reasoning. This approach is particularly beneficial for images with graph structures, which are relatively scarce.
We will include and highlight the above discussion in our revised manuscript to provide more clarity on our contributions.
---
**Q3**. Further discussion on technical contribution.
**A3**. Thank you for your interest in the technical contributions of our work. As we addressed in our response to Q2, there may have been some misunderstanding regarding our contributions. Here, we clarify and elaborate on the technical advancements introduced by GraphVis.
Our method is novel in advancing cross-modal improvements, which, to our knowledge, have not been explored for the three modalities we consider, through two key mechanisms:
1. **Visual modality for KG-enhanced LLMs**: GraphVis is the first approach to integrate visual representations of KGs within the processing framework of KG-enhanced LLMs.
2. **Synthetic graph images for fine-tuning LVLMs**: By combining textual data with synthetic visual graphs, we enhance the LVLM's ability to process and reason about graph-structured information, leading to improved performance on tasks that involve images with underlying graph structures. This approach is particularly valuable given the scarcity of real-world images with graph structures.
We emphasize that efficient and affordable data curation for the fine-tuning of LLMs and LVLMs is one of the most important technical directions for advancing their performance. A significant body of work focuses on synthetic data generation (unimodal [4-6] or multimodal [1-3]) rather than redesigning or modifying the architecture of the large models. Our work contributes to this direction by providing a new method for leveraging data from an unused domain and generating synthetic multimodal data that enhances LVLM capabilities.
---
[1] Aligning modalities in vision large language models via preference fine-tuning.
[2] Enhancing large vision language models with self-training on image comprehension.
[3] Understanding alignment in multimodal LLMs: a comprehensive study.
[4] Self-rewarding language models.
[5] Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models.
[6] Scaling relationship on learning mathematical reasoning with large language models.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thanks for your response. I have updated my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your timely and encouraging feedback on our rebuttal! | Summary: The paper introduces GraphVis, a novel approach that enables Large Vision Language Models (LVLMs) to reason about visual knowledge graphs for QA tasks. Unlike previous methods that either input knowledge graph (KG) triplets directly to LLMs or use graph neural networks to capture structured representations, GraphVis instead represents the graph visually as images of nodes and edges. This inherently introduces an auxiliary OCR task to parse the image of the visual knowledge graph, and the model then needs to take the graph image as input to answer commonsense questions. The knowledge graphs are derived from ConceptNet, and relevant nodes and edges are extracted based on the question and answer choices using an off-the-shelf parser. The authors demonstrate that finetuning LLaVA 1.6 on textual commonsense questions without paired images, but instead with the retrieved visual knowledge graphs, leads to improvements on CSQA, as well as zero-shot performance gains on VQA tasks such as MMBench and ScienceQA.
Strengths: **Originality and Significance:**
While knowledge augmented LLMs have been explored in prior work, GraphVis is the first to leverage the multimodal capabilities of LVLMs to explicitly understand and interpret the structured relationships visually represented in knowledge graphs (KGs). Incorporating tables, graph figures, and other structured representation as visual context has been investigated, but not specifically for knowledge graphs.
The paper demonstrates that training on text only QAs with synthetically generated knowledge graphs improves the zero-shot performance of visual QAs over the base model. This is quite a significant finding, and opens up more interesting applications of aligning text data with synthetic multimodal context for improving multimodal tasks.
Ablation studies thoroughly examine different strategies of training stages and the order of prompts for graph reasoning questions, and show that separating initial image comprehension stages from subsequent reasoning tasks leads to the best result.
**Clarity and Quality:**
The paper clearly shows how a visual graph is constructed and interpreted by LVLMs. It is easy for readers to follow their training stages. Example images of VQA tasks involving graphs help the readers to understand the transferability of comprehending synthetic KGs in a multimodal setup.
Weaknesses: - Missing details and analysis of the synthetic visual graphs. The authors should show statistics of the retrieved subgraphs per question, including the average number of nodes, node degrees, etc. Evaluation of the visual graph comprehension tasks should be included to measure how accurately LVLMs understand the graph structure, and their main sources of error in graph comprehension.
- Unfair comparison of GraphVis to zero-shot approaches. KAPING is a zero-shot approach that involves no model training and only augments the knowledge directly in the input of the LLM. GraphVis instead fine-tunes the model to understand the visual graph as context. A fairer comparison would be to either follow KAPING with finetuning by training **LVLMs with knowledge graph triplets**, or to evaluate the zero-shot performance of LVLMs with the visual graph as input but no finetuning.
- Not enough evidence if the improvement comes from integrating the visual knowledge graph, or finetuning on the QA data. The authors mostly compare GraphVis to the baseline LLaVA model trained on visual instruction tuning data. Since GraphVis additionally finetunes the base model with QA data, it is no surprise that the model outperforms the base model across the QA tasks. More appropriate baseline candidates are finetuning Mistral-7B LLM or LLaVA LVLMs with the QA data.
- The authors should also include all ablation studies for VQA tasks, and not only for CSQA.
- Missing qualitative results of GraphVis vs baseline to show the benefits of visual graphs for VQA tasks.
Technical Quality: 2
Clarity: 3
Questions for Authors: Questions and suggestions are derived from the weaknesses section.
1. Disentangle the benefits of finetuning with visual graphs vs on QA data. From the ablation studies, it is not clear if the model improves by comprehending and integrating the visual graphs, or by simply finetuning more on QA data.
2. Introduce more fair comparison of prior work. Authors should follow the finetuning adaptation of KAPING by train models with KG triplets, instead of visual graphs.
3. What are the main sources of error for graph comprehension tasks? It would be helpful to identify what might be the bottleneck for understanding the graphical structure when KGs are presented as image.
4. Authors should present convincing qualitative results of their GraphVis model, not only on how the dataset is constructed.
5. What are the results if visual graph comprehension stage is omitted, and directly proceeds to KG-Enhanced QA Fine-tuning?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We appreciate your recognition of the originality and significance of our work, and grateful for the positive feedback on the clarity and quality of our paper. Regarding the raised questions, please find our detailed response below with additional experiments and clarifications to potential misunderstanding. We organized the questions according to the order of the weaknesses section.
---
**Q1**. Could the authors provide more details and analysis of the synthetic visual graphs? What are the performance and the main sources of error for graph comprehension tasks?
**A1**. Thank you for raising these important questions. Please find our detailed response to each question in the [global rebuttal](https://openreview.net/forum?id=haVPmN8UGi&noteId=qU5a2KzdFg) under Global A1 (with Figures 1 and 2 in the [attached pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf)) and Global A2 (with Table 1 in the attached pdf).
Concisely, the retrieved subgraphs (avg. 17 nodes, 25 edges) contain substantial information, but some can be overly complex. Graph comprehension tasks are challenging for LVLMs due to the limited graph data in pre-training and fine-tuning. However, fine-tuning for just one epoch showed notable improvements.
---
**Q2**. More discussions on the baselines. Adding KAPING with finetuning would be more comprehensive.
**A2**. We aimed to provide a comprehensive overview and comparison by including the most recent baselines on KG-enhanced LLMs. While KAPING is a prompting method, we also included fine-tuning methods like KSL and GNP, as well as the larger models they selected (FLAN-T5 11B and GPT-3.5). In Table 1 of our paper, we also highlighted the differences between non-fine-tuning and fine-tuning methods.
In response to your suggestion, we included the additional baseline of fine-tuning LVLMs with KG triplets, to more comprehensively compare with KAPING. Following the original zero-shot setting, we maintained the top 10 retrieved triples and appended them to the question as the training data. The accuracy results of this comparison are presented in Table 2 of our attached pdf.
As observed, KAPING with fine-tuning (KAPING w/ FT) shows an improvement over the base LVLM and zero-shot KAPING. Meanwhile, the vision-language model benefits more from a visual graph input than a linearized textual input. We also note that the computation of top-k embeddings among all retrieved triples was very time-consuming, while GraphVis does not require such computations.
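For illustration, the expensive step in KAPING-style retrieval is embedding every verbalized triple and ranking it against the question embedding. Below is a minimal plain-Python sketch of that top-k step; the function name and toy vectors are our own illustrative assumptions, not code from either paper.

```python
import math

def top_k_triples(question_emb, triple_embs, k=10):
    """Rank verbalized KG triples by cosine similarity to the question
    embedding and keep the top-k (index, score) pairs."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    scored = sorted(enumerate(cos(question_emb, t) for t in triple_embs),
                    key=lambda pair: -pair[1])
    return scored[:k]

# Toy 2-d "embeddings" standing in for a real text encoder.
question = [1.0, 0.0]
triples = [[0.9, 0.1], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]]
top2 = top_k_triples(question, triples, k=2)   # triples 0 and 2 rank highest
```

Every candidate triple must be embedded and scored, which is the per-question cost GraphVis avoids by rendering the subgraph as a single image.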
---
**Q3**. Disentangle the benefits of finetuning with visual graphs vs on QA data.
**A3**. As shown in our previous response A2, fine-tuning with the additional visual information yields the best performance, while linearizing the retrieved knowledge subgraph into text retains helpful information but falls short of providing the useful structural information. In Table 1 of our attached pdf, we additionally add the baseline of fine-tuning on the QA training data without any additional KG information.
We will include these two additional baselines in our revision.
---
**Q4**. The authors should also include all ablation studies for VQA tasks.
**A4**. As we do not retrieve KG subgraphs for the VQA tasks, it is not applicable to investigate our second ablation study “comparison with prompting”. However, in Table 3 of our attached pdf, we do include additional results on ScienceQA, one of the VQA tasks, from either curriculum fine-tuning or simple joint fine-tuning on the curated synthetic data. As indicated by the results, curriculum learning transfers to these VQA tasks as well.
Regarding the inapplicability of the second ablation study, we would like to further clarify our setting for VQA. The VQA tasks were done in the zero-shot setting and aimed to show the interesting benefits of fine-tuning LVLMs on the synthetic graph images and leveraging the existing textual QA data. As we highlighted in our paper, while current LVLMs are fine-tuned on human-labeled vision-language instruction data, images of complex graph structures are much more scarce compared to the many natural images, in addition to the scarcity of reasoning tasks designed for graph images. GraphVis highlights the potential to leverage textual data and KG images to improve the LVLM’s capability in reasoning with graph images. Therefore, we do not retrieve KG subgraphs for VQA tasks, but we consider that GraphVis provides a new perspective of data source for LVLMs. Our [Global A3](https://openreview.net/forum?id=haVPmN8UGi&noteId=qU5a2KzdFg) provides a more detailed explanation.
---
**Q5**. Missing qualitative results of GraphVis vs baseline to show the benefits of visual graphs for VQA tasks.
**A5**. Thank you for your suggestion. In Figure 4 of our attached one-page pdf, we provide a specific example of the model generations for the VQA task ScienceQA. The question fundamentally requires the model to traverse through the food web from a starting point, following the directed arrows. It must identify the potential nodes connected to the starting point in a certain direction and match the node names with the provided options. The original model failed to complete this task successfully. However, with GraphVis, the model's ability to handle such image data improved significantly, resulting in a correct answer.
---
**Q6**. What are the results if visual graph comprehension stage is omitted, and directly proceeds to KG-Enhanced QA Fine-tuning?
**A6**. We appreciate the suggestion to evaluate the impact of omitting the visual graph comprehension stage. In response, we conducted additional ablation studies to investigate this scenario. For the CSQA dataset, omitting the visual graph comprehension stage resulted in an accuracy of 77.5%, which underperforms the curriculum training result of 82.8%. We will add the comprehensive ablation study in our revision.
---
Rebuttal 2:
Title: Inquiry for discussion
Comment: Dear reviewer GUVR,
Thank you again for your constructive feedback and questions. We sincerely hope that our responses and clarifications have been helpful in addressing your questions and concerns. Specifically,
1. We provided additional statistics of the synthetic visual graphs ([Global A1](https://openreview.net/forum?id=haVPmN8UGi&noteId=qU5a2KzdFg)) as well as the model’s performance on each task ([Global A2](https://openreview.net/forum?id=haVPmN8UGi&noteId=qU5a2KzdFg)).
2. As suggested, we added more fine-tuning baselines including fine-tuning on textual QA only and fine-tuning with KG triples (Table 2 of [our attached pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf))
3. We extended our ablation study to VQA tasks (Table 3 of [our attached pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf))
4. We provided further explanation on the zero-shot setting of our VQA tasks and our corresponding contribution ([Global A3](https://openreview.net/forum?id=haVPmN8UGi&noteId=qU5a2KzdFg)).
5. We provided a specific example to show how GraphVis improves the LVLM on reasoning with graph images (Figure 4 of [our attached pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf)).
We would like to ask whether there are any remaining questions about our rebuttal; we are happy to provide additional information and further clarifications. We truly appreciate the time and effort you’ve invested in reviewing our work!
---
Rebuttal Comment 2.1:
Comment: Dear reviewer GUVR,
Thank you again for taking the time to review our paper. We appreciate your detailed feedback and acknowledgement of the originality and significance of our work.
In response to the feedback received, we have conducted additional experiments and provided detailed clarifications in our rebuttal. While most reviewers have responded positively to our revisions, we hope our detailed responses adequately address your concerns as well. We would appreciate your attention to our rebuttal and any further feedback you may have, as it will give us the opportunity to provide more details before the author-reviewer discussion session ends and help us continue to improve our work. Thank you for your valuable insights! | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their insightful and encouraging feedback. We are grateful for the recognition of the novelty and significance of our work (Reviewers GUVR, 2Am7, omgG, Tf8h), extensive experiments and superior performance (Reviewers GUVR, 2Am7, omgG), clear writing flow (Reviewer GUVR), etc.
In response to the comments, we have provided additional experiments in the [attached one-page pdf](https://openreview.net/attachment?id=qU5a2KzdFg&name=pdf), including
1. **Distribution of Retrieved Subgraphs (Figures 1 and 2)**: With an average of 17 nodes and 25 edges, the retrieved 2-hop subgraphs contain substantial information. However, some subgraphs are overly complex, indicating room for improvement in pruning methods.
2. **Performance of LVLM on Synthetic Graph Comprehension Tasks (Table 1)**: These tasks are inherently challenging for LVLMs due to the scarcity of similar data in pre-training and fine-tuning stages. With fine-tuning on only one epoch, we observed a notable improvement.
3. **Additional Baselines (Table 2)**: We add fine-tuned LVLMs with textual QA (Base w/ FT) and textual KG triplets (KAPING w/ FT), respectively. The visual graphs provide additional information and performance gains for vision-language models.
4. **Extending Ablation Study to VQAs (Table 3)**: We observed a similar pattern as in our ablation studies on QAs, where curriculum learning outperformed joint fine-tuning.
5. **Lower Resolution Images (Figure 3)**: By corrupting visual graph inputs (resizing), we observed a slight performance decay, consistent with universal observations that corrupted images lead to worse responses and more hallucinations.
6. **Number of Synthetic Question Formats (Table 4)**: Reduced diversity in synthetic question formats resulted in slight performance decay for GraphVis.
7. **Robustness to Different Visualizations of the Same Graph Architecture (Table 5)**: GraphVis demonstrated robustness to different visualizations.
For the most raised questions, please find our detailed response below.
---
**Global A1: statistics of the retrieved subgraphs**
We appreciate reviewers’ suggestions to include more detailed statistics on the synthetic visual graphs. In Figures 1 and 2 of our attached pdf, we provide the distribution of the number of nodes and edges in the retrieved subgraphs in CSQA. Specifically, we have one retrieved subgraph for each question, and the statistics for the retrieved subgraphs are:
- Average node number: 17.36
- Average edge number: 25.48
- Average node max degree: 7.82
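These per-subgraph statistics can be computed directly from each subgraph's edge list; the following is a small illustrative sketch (the edge-list representation and helper function are our assumptions, not the paper's code).

```python
def graph_stats(edges, num_nodes):
    """Node count, edge count, and max node degree for one retrieved
    subgraph, treating each edge as an undirected (u, v) pair."""
    degree = [0] * num_nodes
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return num_nodes, len(edges), max(degree)

# Averaging over all retrieved subgraphs gives statistics like those above.
subgraphs = [([(0, 1), (1, 2), (0, 2)], 3), ([(0, 1)], 2)]  # toy data
stats = [graph_stats(e, n) for e, n in subgraphs]
avg_nodes = sum(s[0] for s in stats) / len(stats)   # 2.5 on the toy data
avg_edges = sum(s[1] for s in stats) / len(stats)   # 2.0 on the toy data
```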
---
**Global A2: evaluation on the synthetic graph comprehension tasks**
In Table 1 of the attached PDF, we further evaluated the LVLM on these graph comprehension tasks, both before and after fine-tuning on the synthetic graphs' tasks. To ensure a fair comparison, we utilized synthetic images from the test data of CSQA to construct a test set. The accuracy for each individual task is reported. Due to time constraints, we report answer accuracy using exact matching, which, while strict, provides insight into performance gains and error sources.
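A strict exact-match scorer of this kind can be sketched as follows; the normalization choices (lowercasing, whitespace collapsing) are our illustrative assumptions, and the paper's exact rule may differ.

```python
def exact_match(prediction, answer):
    """Strict exact-match: normalize whitespace and case, then require
    the two strings to be identical."""
    norm = lambda s: " ".join(s.lower().strip().split())
    return norm(prediction) == norm(answer)

def em_accuracy(preds, golds):
    """Fraction of predictions that exactly match their gold answers."""
    return sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(golds)

# Toy example: 2 of 3 predictions match after normalization.
acc = em_accuracy(["Node A", "node b ", "Node C"],
                  ["node a", "Node B", "Node D"])
```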
We observed that graph comprehension tasks are inherently difficult for the LVLM, as such graph images and tasks are scarce in its pre-training and fine-tuning data. On tasks such as triple listing, it can hardly answer correctly. An example output:
"Based on the image provided, the graph appears to represent a network or a system with nodes (blue circles) and edges (black lines) connecting them. To list all the triples in the graph, I'll describe each triple as a sequence of three nodes in the graph, which are connected by edges. Here are the triples in the graph: 1. (node1, node2, node3) 2. (node2, node3, node4)..."
Since these preliminary tasks were considered a warm start for the model to learn grounding its reasoning on graph images, we only fine-tuned on these tasks for one epoch. Nevertheless, we observed a notable gain ranging from 7.6\% to 11.7\% across all tasks after just one epoch of fine-tuning.
---
**Global A3: clarification on our contribution in VQA tasks**
The VQA tasks were done in the zero-shot setting and aimed to show the interesting benefits of fine-tuning LVLMs on the synthetic graph images and leveraging the existing textual QA data. As we highlighted in our paper, while current LVLMs are fine-tuned on human-labeled vision-language instruction data, images of complex graph structures are much more scarce compared to the many natural images. Also the reasoning tasks designed for complex graph images are very limited. GraphVis highlights the potential to leverage textual data and KG images to improve the LVLM’s capability in reasoning with graph images.
To further clarify, the existing pre-training and fine-tuning corpora include data curated from various VQA training datasets, human annotations, and GPT-4V generations. Obtaining such training data for LVLMs, however, is considerably expensive as it involves data from different modalities. For instance, generating 6k image descriptions with 1k tokens per output using GPT-4V would cost approximately $200. Therefore, researchers have been exploring ways of generating synthetic data to further improve the LVLMs [1-3].
Our major contribution here is to offer a new perspective on gathering fine-tuning data to enhance LVLMs. Specifically, we propose that pure textual data can be combined with relevant synthetic graph images derived from KGs to improve the LVLM’s capability in image comprehension and reasoning. This approach is particularly beneficial for images with graph structures, which are relatively scarce.
[1] Aligning modalities in vision large language models via preference fine-tuning.
[2] Enhancing large vision language models with self-training on image comprehension
[3] Understanding Alignment in Multimodal LLMs: A Comprehensive Study
---
We have also addressed the comments in each individual rebuttal.
Pdf: /pdf/bda6494bc184d8b711a60ca022ce260d4fed02d7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Vision-Language Models are Strong Noisy Label Detectors | Accept (poster) | Summary: This paper proposes a novel method for learning with noisy labels leveraging pre-trained foundation models. The method is motivated by new findings that prompt learning is more robust to noisy labels when fine-tuning CLIP. The paper then designs a simple detector by learning both positive and negative prompts for each class. This paper showcases strong empirical results on multiple datasets, outperforming many existing methods.
Strengths: 1. The proposed method is well-motivated. The effectiveness of CLIP is often overlooked in previous works for learning with noisy labels, and this paper examines it thoroughly and finds a good way to adapt CLIP in downstream tasks even with noisy labels.
2. This paper makes an important contribution by showing that prompt learning is robust to noisy labels, while full fine-tuning is better on clean datasets. Based on this observation, the paper proposes a two-stage method.
3. This paper proposes a novel idea by jointly learning positive and negative prompts for classes, and leverage the negative learning loss to optimize learnable prompts.
4. Extensive experiments on both synthetic and real-world datasets show that the proposed method consistently outperforms existing baselines.
Weaknesses: 1. It is unclear why positive and negative prompts can help detect noisy labels. It is suggested to provide some visualization examples.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can examples detected as noisy labels help improve the performance?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer mmEa,
We sincerely appreciate the reviewer for the thoughtful feedback. We are encouraged by comments like *The proposed method is well-motivated* and *This paper makes an important contribution*. We address your concerns one by one.
> W1. It is unclear why positive and negative prompts can help detect noisy labels.
The primary benefit of utilizing dual textual prompts is their ability to facilitate adaptive and instance-specific selection. In contrast, the conventional Small-loss strategy presents two significant limitations:
1) The Small-loss strategy requires the selection threshold to be manually set. The efficacy of noise detection heavily depends on this threshold, and finding an optimal value typically necessitates prior knowledge, such as the noise ratio in the dataset.
2) The Small-loss strategy assumes that all samples with high loss values are inherently noisy. However, it overlooks the presence of hard but clean samples that also exhibit high losses. By sorting samples based solely on loss and selecting only those with lower losses as clean, the Small-loss strategy risks overlooking these challenging yet valid instances.
By leveraging dual prompts, sample selection can be based on the similarity between each sample and the corresponding two prompts, i.e. $sim(I_i, T_k^+) > sim(I_i, T_k^-)$. This is equivalent to learning an adaptive threshold for each sample in a data-driven manner, thereby circumventing the limitations of the Small-loss strategy.
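As a minimal sketch, the decision rule $sim(I_i, T_k^+) > sim(I_i, T_k^-)$ reduces to comparing two similarities per sample. The 2-d features below are hand-made toys standing in for learned CLIP features (in DeFT the prompts are learned; nothing numeric here comes from the paper).

```python
def select_clean(image_feat, pos_prompts, neg_prompts, label):
    """Mark a sample as clean iff its image feature is more similar to the
    positive prompt of its given class than to the negative prompt -- an
    adaptive, per-sample threshold instead of a fixed global one."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(image_feat, pos_prompts[label]) > dot(image_feat, neg_prompts[label])

# Toy 2-d features, assumed L2-normalized so dot product = cosine similarity.
pos_prompts = [(1.0, 0.0), (0.0, 1.0)]     # one positive prompt per class
neg_prompts = [(0.7071, 0.7071)] * 2       # negative prompts near the boundary
image = (0.9939, 0.1104)                   # clearly a class-0 image

clean_if_labeled_0 = select_clean(image, pos_prompts, neg_prompts, 0)  # clean
clean_if_labeled_1 = select_clean(image, pos_prompts, neg_prompts, 1)  # noisy
```

The same image is kept when its given label is 0 and flagged when the label is 1, without any hand-set loss threshold.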
> Q1. Can examples detected as noisy labels help improve the performance?
We can make use of noisy data for further improvement by treating noisy data as unlabeled data and leveraging semi-supervised techniques. Specifically, by simply incorporating FixMatch [1] in the second stage of DeFT, we improve the test accuracy on noisy CIFAR-100 with 60\% symmetric noise from 85.72\% to 86.13\%.
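A minimal sketch of the FixMatch selection rule applied to the detected-noisy pool: a sample is kept only when the prediction on its weakly augmented view is confident, and the argmax then serves as the target for the strongly augmented view. The threshold and probabilities below are illustrative, not values from our experiments.

```python
def fixmatch_pseudo_labels(probs_weak, tau=0.95):
    """Return (index, pseudo_label) pairs for samples whose weak-augmentation
    prediction confidence exceeds tau; the rest are skipped this step."""
    kept = []
    for i, p in enumerate(probs_weak):
        conf = max(p)
        if conf >= tau:
            kept.append((i, p.index(conf)))
    return kept

probs = [[0.98, 0.01, 0.01],   # confident -> pseudo-label 0
         [0.50, 0.30, 0.20],   # uncertain -> discarded this step
         [0.02, 0.96, 0.02]]   # confident -> pseudo-label 1
pseudo = fixmatch_pseudo_labels(probs, tau=0.95)
```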
[1] Sohn K, Berthelot D, Carlini N, et al. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. NeurIPS 2020. | Summary: This paper focuses on improving the fine-tuning performance of pretrained vision models by removing noisily labeled data. Specifically, the authors propose a two-stage method: the first stage learns a noise detector via prompt learning of the text encoder in CLIP; the second stage fully fine-tunes the pretrained models (including CLIP and other pretrained models). In the experiments, the authors include both synthetic and real datasets and compare their model with 6 different methods. They also use the filtered dataset to fine-tune different pretrained models.
Strengths: 1. The paper is well writen and easy to follow.
2. The analysis of the relationship between noisy data and finetune methods is very insightful (Figure 1)
3. The proposed two-stage method is reasonable and results in good performance compared to other baselines.
4. It is great to show that the filtered data works for both CLIP models and non-CLIP models even though the filtering is done by the model based on CLIP.
Weaknesses: 1. The optimization part is a bit unclear. (See Questions Below)
2. The proposed noisy detector cannot be reusable when working on a different dataset. Each downstream dataset needs a specific trained detector.
3. The noisy label problem is more severe during pretraining than during fine-tuning, but this paper only focuses on fine-tuning, which narrows the research scope.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I am confused about the optimization for the positive prompts. Are positive prompts learnt together with negative prompts and Visual PEFT by using $L_{dp} + L_{sim}$? Or are they learned separately?
2. Can a noisy detector learned on one dataset transferred to another dataset? For example, the noisy detector learned on ImageNet can be used to detect noise in CIFAR100 dataset since there are a great number of class overlapping in these two datasets?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have included the discussion of limitations in the supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer FyAK,
We sincerely appreciate the reviewer for the thoughtful feedback. We are encouraged by comments like *The analysis of the relationship between noisy data and finetune methods is very insightful* and *is well-written and easy to follow*. We address your concerns one by one.
> W1. The optimization part is a bit unclear.
>
> Q1. Are positive prompts learnt together with negative prompts and Visual PEFT by using $L_{dp} + L_{sim}$? Or are they learned separately?
Sorry for the confusion. The positive prompts are **learnt together** with the negative prompts and Visual PEFT by $L_{dp} + L_{sim}$ in the first stage. We summarized the proposed method in **Algorithm 1** in the Appendix of the paper.
> W2. The proposed noisy detector cannot be reusable when working on a different dataset.
>
> Q2. Can a noisy detector learned on one dataset transferred to another dataset?
Though the main focus of this paper is to identify noisy labels in the specific downstream data, the proposed noisy label detector can also generalize to a different dataset. To validate this, we use the noisy Tiny-ImageNet dataset ($64\times64$ resolution) to train the noisy label detector and test it on the ImageNet dataset with 40% symmetric noise. The reason we do not use the noisy CIFAR-100 dataset for training is due to the difficulty in constructing a consistent class name mapping between CIFAR-100 and ImageNet (e.g., "turtle" in CIFAR-100 is referred to as "loggerhead" in ImageNet).
The experimental results show that the F1-score on the noisy ImageNet dataset reaches 95.16%, which is 10.16% higher than the Zero-shot baseline. Besides, we find that the proposed noisy label detector can also generalize to other types of noisy labels. We construct a test dataset comprising 50% instance-dependent label noise to verify the effectiveness of the detector trained with symmetric noise and achieve an F1-score of 97.88%. These results demonstrate the strong generalization ability of the proposed noisy label detector.
> W3. Noisy label problem is a more severe problem during the pretraining than the finetuning. But this paper only focuses on the finetuning, which makes the research scope narrow in this case.
The noisy label problem during fine-tuning is significant because it directly impacts the model's performance on specific downstream tasks. To address this challenge, we propose the DeFT framework, which significantly improves the robustness of models against noisy labels, making a valuable contribution to the field.
While our focus is on fine-tuning, we acknowledge that exploring the noisy label problem during pretraining is an intriguing avenue for future research. We hope our work provides meaningful insights and promotes further studies on this problem.
---
Rebuttal Comment 1.1:
Comment: The authors address all my concerns and questions. Thanks for the effort in making the rebuttal. I changed my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: Dear Reviewer FyAK,
Thank you for your thoughtful suggestions and the positive feedback on our work. | Summary: The paper proposes a method for detecting noisy samples using vision-language models (CLIP). The main idea is to efficiently adapt (via prompt tunning) the clip model on noisy data and use this adapted model to select clean data. In the second stage, the clean data can be used to fully fine-tune a backbone model. The paper proposes the following contributions: a) using two learnable text prompts (positive and negative) to avoid using a threshold when detecting noise b) using negative learning [40] c) using a two-stage approach to adapt to the noisy distribution (first select clean samples then fully fine-tune using them). Experiments are done on multiple datasets with synthetic (CIFAR-100, Tiny-ImageNet, Stanford-Cars, CUB-200-2011) or real noisy labels (CIFAR-100N, Clothing1M, WebVision).
Strengths: * S1. Good direction of efficiently adapting CLIP models for noise detection.
* S2. The general setting is sound, of using efficiently fine-tuned CLIP to select clean data and then fine-tuning using it.
* S3. Good results on multiple noisy datasets.
Weaknesses: * W1. There needs to be more fair comparisons with baselines that are trained in the same settings and use similar models.
* W2. Table 1 seems to compare DEFT which has a fine-tuning stage on the noisy dataset with the initial clip model that is not trained or adapted at all. Is this the case? Is the small-loss baseline using the initial CLIP model, without any adaptation to the current dataset?
* W2.2. More appropriate baselines would be CLIP models that are adapted in standard ways to the current noisy dataset. Thus, one aspect that needs to be ablated is the fine-tuning method (none, fully fine-tuning, efficient fine-tuning, etc). Then, given a CLIP model, the second aspect that needs ablation is the selection method. Here the paper already compares against zero-shot and small-loss selection, but using the initial, pre-trained CLIP. The same should be done using an adapted clip.
* W2.3. Another simple selection method that, like DEFT doesn’t require a threshold would be to select samples with $sim(I_i, T_k) > th$ (where th=0) where $T_k$ is the text features corresponding to the correct class. This selection should be different from the zero-shot selection because the threshold is applied directly on the similarity of the image and correct class features, without applying softmax, thus without taking into account if the correct class is the class predicted by the zero-shot model.
* W3. Table 2 and Table 3 compare against multiple methods for training in noisy settings, starting with the same visual backbone, i.e. the visual part of CLIP. Is this correct? The question arises, is DEFT using additional information since it does the selection using both the visual and textual part of CLIP? Thus, the DEFT has a possible unfair advantage.
* W4. What is the motivation for using two text prompts (positive and negative)? If only one prompt is used (as usual), we can still make a prediction and optimize this prediction to be the correct class and then apply a fixed threshold (>0.5). Why shouldn’t the simple approach work and what benefits does the dual prompt bring?
Technical Quality: 2
Clarity: 3
Questions for Authors: * Is the $L_{sim}$ loss (Eq.8) used for the adaptation in the second stage, or is it used after the selection in phase 1? If the latter, would this mean that this loss updates the PEFT module and the learnable prompts, and then the final model is used in the second stage?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The method is not sufficiently evaluated against fair baselines, see weak points for more details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 6CXy,
We sincerely appreciate the reviewer for the thoughtful feedback. We are encouraged by comments like *The general setting is sound* and *Good results on multiple noisy datasets*. We address your concerns one by one.
> W1~W2.2. There needs to be more fair comparisons with baselines in Table 1.
To improve clarity, we would like to further explain the experimental settings of the two baselines:
1) Zero-shot uses the initial CLIP model to predict labels for the training set. Samples where the given label matches the model's prediction are considered clean. In order to distinguish such an approach from model fine-tuning methods, we rename it as the **"Label-match"** strategy in the following discussion.
2) Small-loss selects a proportion of samples with small loss *during training* as clean samples. Therefore, the Small-loss baseline in Table 1 also utilizes the adapted CLIP model like our method, ensuring a fair comparison.
To make a fair comparison between the Label-match strategy and our method, we conduct an ablation study on noisy CIFAR-100 based on the CLIP model adapted with parameter-efficient fine-tuning (PEFT). The Table below presents the F1-score for noisy label detection by three methods. It can be seen that our method outperforms the other two baselines under varying noise ratios.
| Method | *sym.* 0.2 | *sym.* 0.4 | *sym.* 0.6 |
| ----------------------- | :--------: | :--------: | :--------: |
| **Label-match w/ PEFT** | 95.64 | 94.54 | 93.06 |
| **Small-loss w/ PEFT** | 97.01 | 95.08 | 91.79 |
| **DeFT (ours) w/ PEFT** | **98.63** | **98.33** | **97.15** |
Furthermore, we agree with the suggestion to perform ablations on the fine-tuning methods. Accordingly, we use a CLIP model fully fine-tuned on noisy CIFAR-100 for sample selection and present the F1-score in the table below. Comparing the results of the two tables, we observe that:
1) Regardless of the fine-tuning method used, our method (DeFT) achieves superior results.
2) Parameter-efficient fine-tuning (PEFT) performs better under high noise conditions, which is why we choose PEFT for adapting the CLIP model in the noisy label detection stage.
| Method | *sym.* 0.2 | *sym.* 0.4 | *sym.* 0.6 |
| ---------------------- | :--------: | :--------: | :--------: |
| **Label-match w/ FFT** | 96.17 | 93.11 | 85.10 |
| **Small-loss w/ FFT** | 97.16 | 94.77 | 89.81 |
| **DeFT (ours) w/ FFT** | **98.76** | **97.44** | **90.03** |
> W2.3. Select samples with $sim(I_i, T_k) > 0$.
In practice, selecting samples with $sim(I_i, T_k) > 0$ is not effective. Our experiments on CIFAR-100 with 60% symmetric noise using the initial CLIP model reveal that the minimum $sim(I_i, T_k)$ value among all samples is 0.098. As a result, setting the threshold at $sim(I_i, T_k) > 0$ does not exclude any samples and is thus not useful for filtering out noisy samples. However, when we set the threshold to 0.25, the F1-score for noise detection improved significantly to 87.57%. This indicates that the effectiveness of sample selection based on $sim(I_i, T_k)$ is highly dependent on the choice of threshold.
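This sensitivity is easy to see with a toy sweep: under a fixed-threshold rule, the F1-score of "predict clean when $sim(I_i, T_k) > th$" changes sharply with the threshold. All numbers below are synthetic, chosen only to illustrate the effect, not results from our experiments.

```python
def f1_for_threshold(sims, is_clean, th):
    """F1 of the rule 'predict clean when sim > th' against ground truth."""
    pred = [s > th for s in sims]
    tp = sum(p and c for p, c in zip(pred, is_clean))
    fp = sum(p and not c for p, c in zip(pred, is_clean))
    fn = sum((not p) and c for p, c in zip(pred, is_clean))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy similarities: clean samples tend to score higher, but ranges overlap,
# so the achievable F1 depends strongly on where the threshold sits.
sims = [0.15, 0.22, 0.28, 0.35, 0.08, 0.18, 0.26, 0.31]
is_clean = [False, False, True, True, False, False, False, True]
scores = {th: f1_for_threshold(sims, is_clean, th) for th in (0.0, 0.1, 0.25)}
```

On this toy data the F1 climbs from roughly 0.55 at threshold 0.0 to about 0.86 at 0.25, mirroring the threshold dependence we observe with the real model.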
> W3. Table 2 and Table 3 compare against multiple methods with the same visual backbone. Is DEFT using additional information since it does the selection using both the visual and textual part of CLIP? Thus, the DEFT has a possible unfair advantage.
**All methods in Table 2 and Table 3 use the same backbone**, specifically the visual part of CLIP, and our method indeed leverages additional text information for sample selection. However, we do not view this as a drawback but rather one of the main contributions of this paper. To achieve better noise detection performance than previous unimodal methods, we introduce the text modality within the CLIP model. Through the DeFT framework, we successfully utilize multimodal information to enhance noise detection, achieving superior results compared to prior unimodal approaches. To the best of our knowledge, DeFT is the first method to effectively leverage multimodal information in label-noise learning. Similar explorations have also appeared in other domains, such as multi-label classification [1], semi-supervised learning [2], and out-of-distribution detection [3].
> W4. What is the motivation for using two text prompts (positive and negative)? What benefits does the dual prompt bring?
The key advantage of using dual textual prompts is that they enable adaptive and instance-specific selection. With a single prompt, a fixed threshold must be manually set. As discussed in the response to W2.3, finding an appropriate fixed threshold is challenging and often requires prior knowledge, such as the noise ratio. However, by using dual prompts, sample selection can be based on the similarity between each sample and the corresponding two prompts, i.e. $sim(I_i, T_k^+) > sim(I_i, T_k^-)$. This is equivalent to learning an adaptive threshold for each sample in a data-driven manner, eliminating the need for manual tuning.
> Q1. Is the $L_{sim}$ loss (Eq.8) used for the adaptation in the second stage, or is it used after the selection in phase 1?
Sorry for the confusion. $L_{sim}$ loss is only used in the first stage. With the filtered data, we can adapt any visual backbones using the selected clean data in the second stage. We summarized the proposed method in **Algorithm 1** in the Appendix of the paper.
[1] Abdelfattah R, Guo Q, Li X, Wang X, Wang S. Cdul: Clip-driven unsupervised learning for multi-label image classification. ICCV 2023.
[2] Mo S, Kim M, Lee K, Shin J. S-clip: Semi-supervised vision-language learning using few specialist captions. NeurIPS 2023.
[3] Ming Y, Cai Z, Gu J, Sun Y, Li W, Li Y. Delving into out-of-distribution detection with vision-language representations. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for the response. I appreciate the clarification of the Small-loss baseline and the additional ablations.
In W2.3 you show that the initial, pre-trained CLIP model is sensitive to the threshold, which makes sense. How about the adapted CLIP model, as asked in W4?
Overall I tend to increase my score to 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 6CXy,
We sincerely appreciate your valuable comments and encouraging feedback.
For the adapted CLIP model in W4, the minimum $sim(I_i, T_k)$ is $-0.443$, and the F1-scores for noise detection at different thresholds are presented in the table below. These results indicate that the adapted CLIP model is also sensitive to the choice of threshold. We will add the new baseline in the revised version.
| Threshold | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
| --------- | :---: | :---: | :---: | :---: | :---: | :---: |
| F1-score | 80.77 | 85.72 | 89.89 | 91.93 | 92.75 | 91.81 |
---
Summary: The paper introduces a Denoising Fine-Tuning (DEFT) framework to address the challenge of noisy labels in vision-language models, particularly focusing on models like CLIP. The DEFT framework leverages the robust alignment of textual and visual features pre-trained on extensive image-text pairs to filter out noisy labels. This is achieved by learning class-specific positive and negative textual prompts. Positive prompts highlight distinctive class features, while negative prompts act as thresholds to differentiate between clean and noisy samples. The framework uses parameter-efficient fine-tuning (PEFT) to adapt the visual encoder to align with the textual prompts. Extensive experiments on synthetic and real-world noisy datasets demonstrate that DEFT significantly improves both noisy label detection and image classification performance.
Strengths: 1. This paper proposes combining textual and visual prompts for noisy label detection, which enhances the robustness of vision-language models to label noise.
2. The framework's generalizability to various pre-trained models and its parameter efficiency make it a versatile solution.
3. The experimental validation on multiple datasets, including real-world noisy data, provides strong evidence of the method's effectiveness.
4. The use of PEFT to maintain the generalization ability of pre-trained models while adapting to specific tasks is particularly noteworthy.
Weaknesses: 1. The heavy reliance on pre-trained models may limit the framework's applicability in scenarios where such models are not available or suitable.
2. The discussion on the practical implementation and potential limitations in different real-world settings is relatively limited.
3. The computational overhead associated with maintaining and fine-tuning dual prompts could be a concern in resource-constrained environments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does DEFT perform when applied to pre-trained models with varying degrees of generalizability and domain relevance?
2. Can DEFT be adapted to scenarios with extremely high noise ratios or highly imbalanced datasets, and what modifications would be necessary to maintain its effectiveness?
3. What are the potential trade-offs between the computational overhead of DEFT and its performance gains, and how can the framework be optimized for deployment in resource-constrained environments?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of this paper, and there is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Dear Reviewer g1PL,
We sincerely appreciate the reviewer for the thoughtful feedback. We are encouraged by comments like *a versatile solution* and *strong evidence of the method's effectiveness*. We address your concerns one by one.
> W1. The heavy reliance on pre-trained models may limit the framework's applicability.
>
> Q1. How does DEFT perform when applied to pre-trained models with varying degrees of generalizability and domain relevance?
**[prevalence of pre-trained models]** Pre-trained models have become a cornerstone in many tasks due to their ability to capture generalizable features from large datasets, such as multi-label classification [1], long-tailed learning [2], and out-of-distribution detection [3]. This paper focuses on noisy label detection, where the pre-trained CLIP model is available and suitable for our method in almost all scenarios.
**[various pre-trained models]** In Table 4 of the paper, we compared various pre-trained models in addition to CLIP. The results validate the effectiveness of our method.
**[domain relevance]** In addition, we conduct experiments on the MNIST dataset. MNIST consists of monochrome images of handwritten digits and has been verified to have a low domain relevance with CLIP as there is no data overlap with the CLIP pre-training data [4]. Results (F1-score) in the table below exhibit that our method consistently outperforms the small-loss baseline in sample selection performance.
| Method | *sym.* 0.2 | *sym.* 0.4 | *sym.* 0.6 |
| --------------- | :--------: | :--------: | :--------: |
| **Small-loss** | 97.52 | 96.16 | 94.18 |
| **DeFT (ours)** | **99.63** | **99.30** | **98.54** |
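For reference, the Small-loss baseline in the table can be sketched as follows (a generic version of the standard small-loss trick; the `keep_ratio` heuristic below is our assumption, not necessarily the exact baseline configuration):

```python
import numpy as np

def small_loss_select(losses, keep_ratio):
    """Treat the `keep_ratio` fraction of samples with the smallest
    per-example loss as clean; everything else is flagged as noisy."""
    losses = np.asarray(losses)
    k = int(round(keep_ratio * len(losses)))
    clean_idx = np.argsort(losses)[:k]       # indices of the k smallest losses
    mask = np.zeros(len(losses), dtype=bool)
    mask[clean_idx] = True
    return mask

# Toy example: noisily-labeled samples tend to incur larger training loss.
mask = small_loss_select([0.1, 2.3, 0.2, 1.9, 0.3], keep_ratio=0.6)
print(mask)  # the three smallest-loss samples are marked clean
```

Note that this baseline needs an estimate of the clean fraction (here `keep_ratio`), which is exactly the kind of prior knowledge the dual-prompt rule avoids.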
> W2. The discussion on the practical implementation and potential limitations in different real-world settings is relatively limited.
**[implementation]** We provided the code of this paper in the supplementary material, which contains the practical implementation details of our method on three real-world datasets. We will include more implementation details in the next version of the paper.
**[limitations]** We discussed the limitations in Appendix A.4 of the paper. Specifically, our method may have some potential limitations under different real-world settings. For example, DeFT primarily focuses on the label noise problem in image classification, where the label is a class name. Therefore, it cannot directly handle the noise in image-text pair data, where the label is a text description of the image.
> Q2. Can DEFT be adapted to scenarios with extremely high noise ratios or highly imbalanced datasets?
We reported the noise detection results under severe noise conditions in Appendix A.1. To tackle high noise ratios, two modifications are necessary: 1) the learning rate is adjusted from $3\times10^{-2}$ to $1\times10^{-2}$, and 2) a smaller weight is assigned to the positive loss component in $L_{dp}$ to mitigate the impact of noisy pseudo-labels. For highly imbalanced datasets, we can employ class-balanced loss functions, such as the logit-adjustment loss [5], to replace the standard cross-entropy loss $\ell_{ce}$ in the model adaptation stage.
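A minimal sketch of the logit-adjustment idea mentioned above (toy numbers; the `tau` parameter and the class priors are illustrative, following the general recipe of [5] rather than any exact implementation):

```python
import numpy as np

def logit_adjusted_ce(logits, label, priors, tau=1.0):
    """Cross-entropy after shifting each logit by tau * log(class prior).
    At training time this penalizes mistakes on rare classes more heavily."""
    adjusted = logits + tau * np.log(priors)
    adjusted = adjusted - adjusted.max()                 # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum())  # log-softmax
    return -log_probs[label]

priors = np.array([0.90, 0.09, 0.01])  # toy long-tailed class frequencies
uniform_logits = np.array([1.0, 1.0, 1.0])
loss_rare = logit_adjusted_ce(uniform_logits, label=2, priors=priors)
loss_head = logit_adjusted_ce(uniform_logits, label=0, priors=priors)
print(loss_rare > loss_head)  # True: the rare class gets a larger loss
```

With uniform priors the adjustment cancels and the loss reduces to the standard cross-entropy, which is why it can be dropped in as a replacement for $\ell_{ce}$.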
> W3. The computational overhead could be a concern in resource-constrained environments.
>
> Q3. How can the framework be optimized for deployment in resource-constrained environments?
It is noteworthy that we utilize parameter-efficient fine-tuning (PEFT) techniques to adapt CLIP on downstream datasets, which is both effective and efficient compared to fully fine-tuning, as PEFT is more robust to label noise and requires optimizing much fewer parameters. Even in extremely resource-constrained environments, the proposed framework can still be adjusted in the following ways:
1) **Learning transferable detector on smaller datasets**. In the first stage of DeFT, we can learn a noisy label detector on a small dataset and then transfer it to a larger one for sample selection. For example, the detector trained on Tiny-ImageNet can be used to detect noise in the overlapping classes of the ImageNet dataset.
2) **Adapting model with smaller backbones**. In the second stage of DeFT, the filtered data can be used with various visual backbones, as shown in Table 4 of the paper. Therefore, we can utilize a smaller model and still achieve good performance with the filtered clean data.
[1] Abdelfattah R, Guo Q, Li X, Wang X, Wang S. Cdul: Clip-driven unsupervised learning for multi-label image classification. ICCV 2023.
[2] Shi J, Wei T, Zhou Z, Han X, Shao J, Li Y. Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts. ICML 2024.
[3] Ming Y, Cai Z, Gu J, Sun Y, Li W, Li Y. Delving into out-of-distribution detection with vision-language representations. NeurIPS 2022.
[4] Radford A, Kim J, Hallacy C, et al. Learning transferable visual models from natural language supervision. ICML 2021
[5] Menon A, Jayasumana S, Rawat A, Jain H, Veit A, Kumar S. Long-tail learning via logit adjustment. ICLR2021.
---
Rebuttal Comment 1.1:
Title: Official Comment by the Authors
Comment: Dear Reviewer g1PL,
We appreciate your thorough evaluation and helpful suggestions and comments. In our response, we have provided point-by-point responses to your specific comments. We hope our response addresses all the concerns raised in your review.
Since the author-reviewer discussion ends soon, we are happy to hear your thoughts if you need additional clarifications from the authors. Thank you very much.
Best
---
BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling | Accept (poster)
---
Summary: This paper studies aligning samples from LLMs with human preferences using best-of-$n$ (BoN) sampling methods.
While BoN methods can yield more desirable outputs without changing off-target behavior, they are computationally expensive since they require $n$ samples from the LLM for every prompt.
To address this, the paper proposes BoNBoN alignment, which aims to mimic the sample distribution of BoN without generating $n$ samples.
The authors demonstrate that BoNBoN alignment achieves a high win-rate while minimally changing off-target behavior.
Strengths: ### Clarity and Quality
This paper is generally well-written and easy to understand.
The idea itself sounds intuitive and the technical results I checked seem to be correct although some proofs should be modified.
### Significance
It is worth noting that I am not specialized in either LLMs or generative models.
Therefore, while I think the overall claim sounds reasonable, I will not judge how novel and impactful this work is in the field.
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: (Q1) For optimal policy in (3.4), since $Q\_x$ is a CDF of $r(x, Y_0)$, it is a monotonically increasing function, which implies that
$$
\arg\max_{\pi} \mathbb{E}[Q_x(r(x, y))] = \arg\max_{\pi} \mathbb{E}[r(x, y)].
$$
Therefore, the optimal policy can be written in a very similar formulation to (2.8).
Why do we need to consider $Q\_x$ as in (3.5)?
---
While the authors choose $\alpha$ so that the losses from SFT-BON and IPO-BON contribute approximately equally to the total loss, and claim that this is much easier than choosing $\beta$, this remains unclear to me.
When training a fine-tuning model, one will observe empirical losses from SFT-BON and IPO-BON for given prompts.
The empirical BoNBoN alignment objective can then be computed as an internally dividing point between the objectives of SFT-BON and IPO-BON.
(Q2) How to set $\alpha$ before observing the empirical objectives of SFT-BON and IPO-BON? Since $\alpha$ is chosen to balance the terms between them, it seems necessary to know the ratio between them prior to training.
---
Although BON does not require choosing $\beta$, the choice of $n$ implicitly plays a similar role as $\beta$, as shown in Theorems 1 and 2.
Therefore, a naive question would be:
(Q3) how should one choose $n$?
When $n$ is too large, the possible answers of BON for similar questions would converge to one specific answer, resembling a point mass.
In this scenario, the KL-divergence between BON and the optimal distribution could be large, even though the win-rate increases.
This implies that a large $n$ is somewhat equivalent to small $\beta$.
---
### Major comment
The proof of Theorem 1 from lines 448 to 449 needs modification.
The current arguments using **max** and **min** are mathematically incorrect, as the authors have skipped all coefficients, including negative ones.
The arguments should be written in terms of **argmax** and **argmin**, or all coefficients should be included explicitly.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: ### Minor comments
Although this paper is well-written in general, there are some places missing information.
1. What are $y_w$ and $y_l$ in (2.6)?
2. What is $\sigma$ in (2.7)?
3. What is $r^*$ in line 435?
4. What is $D_{BW}$ in line 472?
---
After rebuttal:
I have increased my score from 4 to 5 as the authors have addressed my concerns.
The reason I am giving a 5 is that I am unable to fully assess the impact of this work within the LLM field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your review!
- (A1): Since the expectation also accounts for $x$, the left side and the right side are not equivalent. For different prompts, the distribution of rewards varies. This is fundamental: some prompts are easy for the base model (e.g., “what is 2+2?”) and the optimal alignment is to do nothing. Some prompts have responses with highly variable rewards, and the optimal alignment is to strongly favor the good responses. Having a prompt-specific transformation is what lets us take advantage of this structure.
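A toy numeric illustration of this point (our own numbers, not from the paper): $Q_x$ rescales each prompt's rewards by that prompt's own empirical CDF, so an "easy" prompt with tightly clustered rewards and a high-variance prompt contribute on the same $[0,1]$ scale — which is exactly the structure that is lost if $Q_x$ is dropped.

```python
import numpy as np

def empirical_Q(rewards):
    """Empirical CDF value Q_x(r) of each reward within its own prompt:
    the rank of the reward divided by the number of samples."""
    rewards = np.asarray(rewards)
    ranks = rewards.argsort().argsort()       # 0-based rank of each reward
    return (ranks + 1) / len(rewards)

easy   = [4.99, 5.00, 5.01]   # "what is 2+2?": rewards barely vary
spread = [0.0, 5.0, 10.0]     # high-variance prompt: rewards vary a lot

print(empirical_Q(easy))      # ranks 1/3, 2/3, 1
print(empirical_Q(spread))    # identical ranks despite very different raw scale
```

Averaged over prompts, $\mathbb{E}[r(x,y)]$ would be dominated by the high-variance prompt, while $\mathbb{E}[Q_x(r(x,y))]$ treats both prompts comparably — so the two argmax problems are no longer equivalent once the expectation over $x$ is included.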
- (A2): Morally, the point here is that choosing $\alpha$ is more akin to choosing the learning rate than to choosing a parameter governing KL vs reward tradeoff. In theory, it should affect the optimization path, but (ignoring finite sample issues) not the final solution. In principle, to find the optimal KL vs reward tradeoff, one would need to fully fit the model at many distinct values. By contrast, all values of $\alpha$ such that the model converges well should yield the same behavior. So, we only need to find one. In practice, we just use a heuristic to set it based on the losses in the first steps of training.
We also note that this is not the main point of the paper. It does seem like a useful, material advantage. But the core point of the paper—the optimality and achievability of best-of-$n$ alignment—stands irrespective.
- (A3): In principle, one could choose $n$ large enough to observe reward hacking type behavior. However, the win rate increases relatively slowly for $n>10$ (at which point the win rate is already 90%) and the KL divergence increases very slowly. This win rate already suffices to beat existing contrastive approaches with very small KL drift. So, in practice, choosing $n$ around 10 seems to be a sweet spot.
It is an interesting question how to adapt the procedure in situations where more extreme model modification is desirable. We suspect that an iterated best-of-$n$ procedure using multiple rounds of BonBon would be effective. However, this is a direction for future work.
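To make the slow growth concrete, the closed-form KL of the best-of-$n$ policy under the continuity assumption, $\log(n) - (n-1)/n$ (the expression quoted in footnote 2 of the paper), can be tabulated directly — a quick sketch:

```python
import math

def bon_kl(n):
    """KL divergence of the best-of-n policy from the base policy
    under the continuity assumption: log(n) - (n-1)/n."""
    return math.log(n) - (n - 1) / n

for n in (2, 4, 8, 10, 16, 32):
    print(n, round(bon_kl(n), 3))
```

The per-step increment $1/n - 1/n^2$ shrinks as $n$ grows, which is the sense in which KL drift increases only slowly past the $n \approx 10$ sweet spot.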
- For major comment: Thanks for pointing this out! We will change the proof in the updated version.
- For minor comment:
- 1-2: $y_w$ and $y_l$ are the win and lose response of the prompt $x$. $\sigma$ is the sigmoid function. Their definitions will be added after the equations in the updated version.
- $r^*$ is $r$ and $D_{BW}$ stands for the (prompt, best response, worst response) dataset. These notations are typos left over from an earlier version. Thanks for pointing them out. We will fix these typos in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your review! Do you have any further questions? If we have resolved your primary concerns, we would greatly appreciate it if you could consider raising the overall rating. If there are any additional improvements we can make to earn a higher score, we would be more than happy to address them.
---
Summary: The paper describes theoretical results about the Best-of-n sampling procedure in LLM inference. To reduce the computational cost of the procedure, the authors develop a novel finetuning method called BoNBoN. Experiments on dialog generation and text summarization show that BoNBoN achieves a higher win-rate for the same KL divergence from the reference model compared to SFT, DPO, and IPO applied on BoN samples.
Strengths: * The paper gives a concise introduction to RLHF and DPO and views the reward-optimal conditional probability as an exponential tilting of the reference model's conditional probability. This motivates the $f_x$-aligned optimal policy.
* The paper analytically solves for the optimal policy at a given KL divergence value from the reference model, and shows that the Best-of-n policy approximates it well.
* To remove the need for sampling n times, the idea of BoNBoN is proposed. Experimental results show that it attains a better win-rate vs KL divergence tradeoff compared to other approaches.
Weaknesses: * It would be interesting to plot the theoretically optimal tradeoff in Theorem 1 on the plot in Figure 3, to better visualize the performance of different methods across a range of different KL values.
* More description about Fig 3 would be helpful. What was the value of n used to obtain the BoN sampling operating point? It would be more complete to see different operating points of BoNBoN for different values of n.
Technical Quality: 4
Clarity: 3
Questions for Authors: * IPO-BoN is presented as making use of more than just the winning sample (specifically, it uses the best and worst samples in a contrastive objective). It naturally raises the question of using other pairs in the loss as well. Do the authors think that taking more than one pair in the loss could be analyzed in a similar manner, or is it not expected to yield an advantage empirically?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your review!
First, thanks for all your suggestions about Figure 3! To make it more informative, we will add the theoretically optimal line to the figure and more detailed information to the caption. Regarding the question in the second point: the $n$ in Fig. 3 is 8 (as mentioned in Section 5.1, Experimental Setup) and we will add this to the caption. We chose 8 because the win rate at $n=8$ is large enough (nearly 90%) to be favorably competitive with existing contrastive methods. This allows us to compare the win-rate vs. KL frontier in the relevant regime. We will also add comparisons for $n=2,\dots,8$ to the camera-ready plots.
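For intuition, under the paper's continuity assumption the win rate of best-of-$n$ against a single base sample is $n/(n+1)$, so $n=8$ gives $8/9 \approx 88.9\%$, consistent with the "nearly 90%" figure above. A quick Monte Carlo sanity check (our own sketch):

```python
import random

def bon_win_rate_mc(n, trials=200_000, seed=0):
    """Estimate P(max of n i.i.d. continuous rewards beats an independent
    fresh draw); the exact value is n / (n + 1)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        best = max(rng.random() for _ in range(n))
        wins += best > rng.random()
    return wins / trials

print(round(bon_win_rate_mc(8), 3))  # close to 8/9 ~ 0.889
```

Uniform rewards suffice for the check because the win rate only depends on ranks, not on the reward scale.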
For your question: we have tried best vs. other samples (not the worst one) as pairs for IPO and found that best-vs-worst pairs give the most satisfying performance. It would be an interesting direction for future work to find a way to augment the training objective to consume multiple distinct pairs; it is quite possible this could further improve performance. However, we think the simplicity of the BonBon approach is desirable for this paper, since it makes clear that the advantage is just about best-of-$n$ rather than, e.g., increasing the effective amount of data used for contrastive alignment.
---
Summary: They claim that best-of-n is an optimal policy with respect to the tradeoff between win rate and the KL divergence. Based on the analysis, they propose a strategy to train a model so that it arrives at a policy similar to the BoN policy.
Strengths: The research question is interesting. BoN and the other learning-based alignment algorithms are currently discussed separately. Understanding the relationship of these algorithms is valuable to the community, if correct.
Weaknesses: Honestly, I couldn’t follow the analysis of the paper, which is probably on my side. Yet, I think several clarifications would be preferable to improve the paper.
- BoNBoN alignment is a combination of existing ideas, which is not in itself a reason for rejection. The shortcoming of the paper is that it is unclear which of the ideas are claimed as original, since the sources of the ideas are not cited. 1. Using the output of BoN as the alignment target is common practice (Pace+ 2024; Liu+ 2023; Gulcehre+ 2023). 2. Mixing the SFT objective with the alignment objective was also proposed in the very first RLHF paper (Ouyang+ 2022; PPO-ptx). 3. The disadvantage that DPO (or IPO) only controls the ratio of the chosen/rejected text is commonly resolved by fine-tuning the model on the chosen responses first and then running preference optimization on the response pairs (Rafailov+ 2023). The novelty of the proposed algorithm would be clear if the origin of the ideas were clarified.
- I failed to follow the analysis of the paper. I would say that it needs some clarification to be understandable for a wide range of audiences.
Below is the comment to the paper assuming that I understand the argument of the paper correctly.
My concern with the paper is that it is making an unrealistic assumption and getting to the wrong conclusions. I suspect that assuming the functions are continuous does not simplify the argument. For example, if I understand it correctly, Lemma 5 is true only because we assume that y is completely ordered with r(x, y) and r is a one-to-one mapping. The assumption is not made for the sake of simplifying the argument. It is exploited to derive the results that are not valid without the assumption. If an assumption is required then it should be stated so. I would need a better explanation of why the assumption can be true.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Why do we assume pi_0 is continuous? It is discrete. There is an analysis of BoN that treats pi as a discrete function (Beirami et al. '24; Theoretical guarantees on the best-of-n alignment policy). What’s the advantage of assuming what is not true over the analysis which treats it as is?
- Eq. 3.1. I failed to understand this equation. My guess is that the right-hand side is ignoring the square of pi_0? If so, it should be explained. Is the := saying that we assume the situation where BoN policy can be represented so, or is it saying that the BoN policy defined in the preliminary can be described in this form?
- Eq. 3.5.: I believe this formula can be translated as Eq. (2) in (Liu et al. '24; Decoding-time Realignment of Language Models)". Is that correct?
- Line 143: “However, given the vast number of possible responses, the assumption is mild in practice.” Why is it true? Even if the domain is infinite, it does not mean that its support is large. One can think of a prompt from a closed QA that a model will most likely output only A or B.
- Theorem 2. and footnote 2: What does this Theorem claim? In the footnote, it says “the KL divergence is almost its upper bound log(n) - (n-1)/n”. But in Theorem 2 it says the KL divergence is *exactly* that value. Is it showing the upper bound or the exact value?
- Line 449: where the second to last equation is because… → I guess this refers to the fourth to the fifth equation? I guess this Z refers to Z^C_r(x)?
- Line 500: The approximation is due to the fact that p_i is small → Why is p_i small? We can think of closed QA tasks where the support of the language model is only A, B, C, and D. In this case square of p_i is not negligible.
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: I couldn't follow the discussion of the paper to the point where I could evaluate the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your review!
- With respect to prior work: could you expand on the set of references you have in mind? E.g., with links or paper titles. It is not clear to us what papers you’re envisioning, or what connections you have in mind.
- For RLHF (and DPO), it is standard procedure to first run SFT and then, on this SFT’d model, run the RLHF (DPO) procedure. Fundamentally, and most importantly, nothing in the standard RLHF (or DPO) pipeline targets the best-of-n distribution, which is the main point of the present paper. Additionally, the sense in which BonBon combines an SFT and contrastive objective is fundamentally different. The point in the present paper is that both of these objectives have the same analytic solution, so that the SFT term can be used *at contrastive alignment time* in the second step.
- With respect to fine tuning on best-of-$n$ samples: a major contribution of the present paper is to show that the best-of-$n$ heuristic in fact has a remarkably crisp justification. As far as we know, this has not been previously clearly understood. We also note that we experiment with simply fine tuning on best-of-$n$ examples as a baseline and, as we explain at length, we find this works very poorly. The fact that it’s possible to design a contrastive procedure to explicitly target the best-of-$n$ distribution, and that this is much more effective than naive SFT, is an important contribution.
- With respect to more minor comments:
- We adopt the continuous assumption for the following two reasons:
1. Although response $y$ exists in a high-dimensional space, making the assumption of continuity for $\pi_0$ appear unrealistic, the statistics we consider—KL divergence and win rate—are in a one-dimensional space. When the number of responses is large and each probability is low, the difference between the continuous and discrete cases becomes negligible. This point is well illustrated in Section B of the appendix.
2. We acknowledge that the continuous assumption may not be realistic for certain prompts, such as in Q&A scenarios. However, many tasks in LLM alignment aim to enhance abstract qualities like helpfulness or harmlessness of responses. In these settings, prompts often generate diverse responses without dominant answers, aligning with the assumptions mentioned in the previous point. Moreover, since the expectation is taken over a set $D$ of prompts $x$, we believe the theory remains relevant if most prompts in the set elicit diverse responses.
Note also that we find strong agreement with the best-of-$n$ theoretical predictions empirically, and that it is extremely rare in practice for two different responses to be assigned the same reward.
- Equation 3.1 is correct as written. The $:=$ denotes a definition, as is standard notation.
- It is not correct to say that equation 3.5 maps to Liu et al. These papers address different problems; they focus on maximizing the expectation of rewards, whereas our focus is on maximizing the win rate. Despite the similarity in the closed forms, they are not the same. More specifically, the difference is due to the fact that the expectation is also taken over $x$. Distributions of rewards $r(x,Y)$, $Y\sim\pi(y\mid x)$ can be different for different prompts.
- Re. Footnote 2. Assuming continuity makes the KL divergence larger than exploiting the discreteness in the actual responses. So, in this sense, the values we use here (derived under a continuity assumption) are upper bounds on the true discrete distribution. The point of this footnote is simply that in the case where the cardinality of response space is very large---as is typical---the gap between this “upper bound” and the analytic value is small. This is really just noting again that the continuity assumption is reasonable.
- Line 449: we believe this is correct as written.
---
Rebuttal Comment 1.1:
Title: Thank you very much for the clarification
Comment: > With respect to prior work: could you expand on the set of references you have in mind? E.g., with links or paper titles. It is not clear to us what papers you’re envisioning, or what connections you have in mind.
Sorry for the inconvenience. Here is the list of papers I mentioned.
Pace+ 2024; West-of-N: Synthetic Preference Generation for Improved Reward Modeling https://arxiv.org/abs/2401.12086
Liu+ 2023; Statistical Rejection Sampling Improves Preference Optimization https://arxiv.org/abs/2309.06657
Gulcehre+ 2023; Reinforced Self-Training (ReST) for Language Modeling https://arxiv.org/abs/2308.08998
Ouyang+ 2022; Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155
Beirami et al. '24; Theoretical guarantees on the best-of-n alignment policy https://arxiv.org/abs/2401.01879
> For RLHF (and DPO), it is standard procedure to first run SFT and then, on this SFT’d model, run the RLHF (DPO) procedure. Fundamentally, and most importantly, nothing in the standard RLHF (or DPO) pipeline targets the best-of-n distribution, which is the main point of the present paper.
> With respect to fine tuning on best-of-n samples: a major contribution of the present paper is to show that the best-of- heuristic in fact has a remarkably crisp justification. As far as we know, this has not been previously clearly understood. We also note that we experiment with simply fine tuning on best-of-n examples as a baseline and, as we explain at length, we find this works very poorly. The fact that it’s possible to design a contrastive procedure to explicitly target the best-of-n distribution, and that this is much more effective than naive SFT, is an important contribution.
I would say that Pace+ 2024; Liu+ 2023; Gulcehre+ 2023 can also be considered as targeting the best-of-n distribution. It would be helpful for the reader if the paper discussed the difference between the proposed method compared to the procedures in these papers as it is not immediate to me.
> We adopt the continuous assumption for the following two reasons
Thank you very much for the explanation. I think the assumption is valid. It would be helpful for the readers if it is clarified in the paper as the applications of LLMs are not constrained to open-ended text generation and they are also used in closed QA kinds of tasks where the possible output is limited (e.g., using it as a preference model to tell whether an answer A or B is preferred).
> Equation 3.1 is correct as written
I thought $\pi^{(n)}_{r}(y | x)$ will be $nQ_x(r(x, y))^{n-1} \pi_0(y | x) + \frac{n(n-1)}{2} Q_x(r(x, y))^{n-2} \pi_0(y | x)^2 + \frac{n(n-1)(n-2)}{6} Q_x(r(x, y))^{n-3} \pi_0(y | x)^3 + ... + \pi_0(y | x)^n$ as y can be sampled multiple times and we still get BoN policy to generate y. What am I missing?
---
Reply to Comment 1.1.1:
Comment: >I would say that Pace+ 2024; Liu+ 2023; Gulcehre+ 2023 can also be considered as targeting the best-of-n distribution. It would be helpful for the reader if the paper discussed the difference between the proposed method compared to the procedures in these papers as it is not immediate to me.
The present paper makes two main contributions:
(1) Theoretically, we established the connection between best-of-n sampling and other alignment methods, and proved that the best-of-n sampling distribution is essentially optimal with respect to win rate versus KL divergence.
(2) Built upon the theoretical understanding, we propose BonBon as an efficient way to train a model to mimic its own best-of-n distribution.
Compared to other works you mentioned:
1. Pace+ 2024 use the best-and-worst samples to further improve the reward model but not the language model. The theoretical results in this paper focus on the reward model as well. The papers are thus disjoint in their motivation and development.
2. Liu+ 2023 apply rejection sampling to get samples from $\pi_r(y\mid x)=\frac{1}{Z(x)}\pi_0(y\mid x)\exp\left(\frac{1}{\beta}r(x,y)\right)$ and then apply some contrastive method to to fine tuning. Their target policy is not the best-of-n policy.
3. Similarly, ReST essentially utilizes the samples from a reward-truncated reference model to do the fine-tuning. The target policy is also not the best-of-n distribution.
4. Ouyang+ 2022 use reinforcement learning for fine-tuning and target a different underlying policy.
5. The main point of Beirami et al. '24 is deriving the KL divergence of the best-of-n policy in the discrete case. We focus on understanding why best-of-n performs well and on its connection to other alignment methods. We also discuss the KL divergence in the discrete case under a more general framework that goes beyond the best-of-n policy.
We emphasize that the referenced papers are wholly distinct from the present paper, both in motivation and results.
>Thank you very much for the explanation. I think the assumption is valid. It would be helpful for the readers if it is clarified in the paper as the applications of LLMs are not constrained to open-ended text generation and they are also used in closed QA kinds of tasks where the possible output is limited (e.g., using it as a preference model to tell whether an answer A or B is preferred).
Thanks for your suggestions! We will expand the discussion where the assumption is introduced.
>I thought $\pi_r^{(n)}(y|x)$ will be $nQ_{x}(r(x,y))^{n-1}\pi_{0}(y|x) + \frac{n(n-1)}{2}Q_{x}(r(x,y))^{n-2}\pi_{0}(y|x)^{2} + \frac{n(n-1)(n-2)}{6}Q_{x}(r(x,y))^{n-3}\pi_{0}(y|x)^{3} + \ldots + \pi_{0}(y|x)^{n}$ as $y$ can be sampled multiple times and we still get the BoN policy to generate $y$. What am I missing?
Since the reward model $r$ is a one-to-one mapping and the sampling is continuous, ties among the $n$ samples occur with probability zero, so the higher-order terms in your expansion vanish; you can obtain the best-of-n distribution by evaluating the integral:
$$\pi_r^{(n)}(y\mid x) = \int n!\pi_0(y_1\mid x)\cdots\pi_0(y_{n-1}\mid x)\pi_0(y\mid x)1_{r(x,y_1)\le\cdots\le r(x,y_{n-1})\le r(x,y)}dy_1\cdots dy_{n-1}.$$
This integral should be straightforward, noticing that $U_i=Q_x(r(x,Y_i))\sim U(0,1)$ with $Y_i\sim\pi_0(y\mid x)$. | Summary: This paper addresses aligning samples from large language models (LLMs) with human preferences using best-of-$n$ sampling, which involves drawing $n$ samples, ranking them, and selecting the best one. It tackles two main problems. First, it explores the relationship between best-of-$n$ sampling and Reinforcement Learning from Human Feedback (RLHF) approaches. The authors demonstrate that the best-of-$n$ sampling distribution is essentially equivalent to the RLHF policy when a specific monotone transformation is applied to the reward function. This transformation optimizes the trade-off between win-rate against the base model and KL distance from the base model, making best-of-$n$ a Pareto-optimal solution for win-rate vs. KL distance. Second, the paper introduces BonBon Alignment, a method to fine-tune models to mimic the best-of-$n$ sampling distribution, thus avoiding the need to draw $n$ samples for each inference. Experiments indicate that BonBon Alignment yields models with high win rates while minimally impacting off-target aspects of the generations.
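The best-of-n density derived in the exchange above can be sanity-checked numerically. The sketch below uses illustrative assumptions of our own (base policy $\pi_0 = \mathcal N(0,1)$ over a scalar $y$, identity reward $r(x,y)=y$): integrating the derived density $nQ_x(r(x,y))^{n-1}\pi_0(y\mid x)$ gives the CDF $\Phi(t)^n$, which should match the empirical CDF of best-of-$n$ samples.

```python
import math
import random

def phi(t):
    """Standard normal CDF, i.e. Q_x(r(x, y)) when pi0 = N(0,1) and r(x, y) = y."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

random.seed(0)
n, trials = 4, 200_000
# Best-of-n sampling: draw n candidates from pi0, keep the highest-reward one.
bon = [max(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)]

# The derived density n * Q^{n-1} * pi0 integrates to the CDF Phi(t)^n.
for t in (-0.5, 0.0, 0.5, 1.0):
    empirical = sum(y <= t for y in bon) / trials
    assert abs(empirical - phi(t) ** n) < 0.01
```

The agreement rests on the probability-integral transform noted in the response: $Q_x(r(x,Y_i))\sim U(0,1)$ when $Y_i\sim\pi_0(y\mid x)$.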
Strengths: 1. This paper is well written. The notations are clear and the literature review is sufficient.
2. By justifying that the best-of-n policy is essentially optimal in terms of win rate versus KL divergence, efficient training of language models via best-of-n fine-tuning is achieved by mimicking the best-of-n sampling distribution.
3. Hyperparameter control is made simpler with a single $\alpha$ that balances the loss components.
Weaknesses: When multiple aspects of human preference exist, the proposed method seems to have limited capacity to handle contradictory preference ratings, e.g., helpfulness and harmfulness, because the trade-off between diverse aspects of preferences is not explicitly addressed.
Technical Quality: 3
Clarity: 4
Questions for Authors: With only a single $\alpha$ controlling the divergence, it would be interesting to understand how the proposed BonBon alignment could reflect multiple aspects of real-world preferences, e.g., the balance between helpfulness and harmlessness in the Anthropic dataset.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
With respect to multiple aspects: we agree this is a fundamental challenge for preference modeling, and a very interesting subject for future research. We note, however, that this problem is common to all post-training procedures. E.g., even explicit reward modeling has to define a way of aggregating multiple distinct kinds of reward. Any aggregation scheme would then induce a preference labeling, and our results would apply to this preference labeling.
That is: _how to rank samples_ and _what to do with the rankings_ are separable questions. The present paper addresses the second problem. Progress on the first is also very interesting, but is out of scope for this paper.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for their candid responses.
I maintain my positive opinion on this paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and constructive suggestions. Where appropriate, we have incorporated these into the main text (details in reviewer-specific replies), and we believe this has strengthened the paper.
The reviewers agree that the paper addresses an important and interesting problem (PJN3, mr5a), is clearly written (Jgyb, ySG3), is theoretically sound (mr5a, ySG3), and reports solid experimental results (Jgyb, mr5a). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A2PO: Towards Effective Offline Reinforcement Learning from an Advantage-aware Perspective | Accept (poster) | Summary: This paper proposes an offline reinforcement learning method called A2PO, which aims to solve the problem of constraint conflicts in mixed-quality datasets collected from multiple behavior policies. A2PO optimizes offline learning by explicitly constructing advantage-aware policy constraints, especially in the presence of data of variable quality. Specifically, A2PO employs a CVAE to disentangle the distribution of actions under different behavior policies, modeling the advantage values of all training data as conditional variables. The agent can then optimize the advantage-aware policies based on these disentangled action distribution constraints to achieve high advantage values. Through extensive experiments on single- and mixed-quality datasets from the D4RL benchmark, A2PO demonstrates its superior performance over other offline RL baselines and advantage-weighted competitors.
Strengths: 1. The authors introduce CVAE to deal with the problem of disentangling behavior policies in mixed-quality datasets, which is novel to me.
2. The paper is well-structured and clearly written.
3. Several analytical experiments are also provided to help understand the method.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Although this paper presents a new approach, the comparison with existing techniques may not be in depth enough and lacks an analysis of why the algorithm works. In addition, could the authors further discuss the conditions under which A2PO may fail?
2. This paper focuses on the analysis and comparison of advantage-weighted offline RL methods, ignoring recent work on policy constraints in the same relative as A2PO, such as PRDC: Policy Regularization with Dataset Constraint for Offline. It would be more convincing if the authors could add research and experimental comparisons of related work to demonstrate the advantages of A2PO among its relative methods (even SOTA methods).
3. Regarding implementation details, how can we ensure a sufficient number of high-quality samples in the dataset to support training when selecting ξ = 1? Additionally, how can we guarantee the accuracy of the introduced advantage information?
In general, this work demonstrates commendable quality and I would be willing to raise my score if my concerns could be addressed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your support of the idea, writing, and experiments. We address your concerns below with explanations and additional experimental results.
**[Q1: Although this paper presents a new approach, the comparison with existing techniques may not be in depth enough and lacks an analysis of why the algorithm works.]**
Sorry for the confusion. In our original manuscript, we have compared our A2PO with other advanced offline RL methods. These techniques include policy constraint methods such as TD3+BC, value regularization methods like CQL, model-based method MOPO, diffusion-based method Diffusion-QL, and advantage-weighted methods like LAPO. Moreover, we have additionally conducted experiments to evaluate the return conditioned methods DT and %BC [1], as well as the data rebalancing techniques ReD [2] and its variation DeReD, as presented in Table R1. To validate the efficacy of the A2PO components, we have also conducted various ablation studies in the original paper. And we have the following key observations:
1. In Figure 1 of the paper, the toy demo showcases the erroneous constraint issue of the AW methods and highlights the precise behavior distribution modeling capability of our A2PO; the subsequent comparison experiments further confirm this point.
2. In comparison to conventional offline RL methods like TD3+BC/EQL and return-conditioned methods such as DT/%BC, the superiority of our A2PO becomes increasingly apparent as the gap between behavior policies widens, from medium and medium-expert to random-expert and random-medium-expert.
3. Although the diffusion-based method Diffusion-QL precisely models the action distribution and achieves comparable performance in several scenarios, our A2PO outperforms Diffusion-QL in most cases, particularly on mixed-quality datasets, while being less time-consuming, as shown in Figure 7 of the paper.
4. The ablation study on the CVAE and the advantage-aware policy constraint (Appendices E, F) demonstrates the effectiveness of advantage-aware policy optimization.
These key observations highlight the effectiveness of A2PO in capturing high-quality interactions within the dataset to form a reasonable advantage-aware policy constraint for policy optimization. Unlike previous AW methods, which focus on redistributing the dataset but inadvertently reduce data diversity, our A2PO precisely disentangles and models the mixed behavior policies by conditioning the CVAE on advantage; the agent can then follow these disentangled action-distribution constraints to optimize the advantage-aware policy towards high advantage values, consistently improving performance.
**[Q2: In addition, could the authors further discuss the conditions under which A2PO may fail?]**
Thank you for the constructive comments. The results of the original paper indicate that our A2PO does not perform well when the offline dataset distribution is narrow, such as halfcheetah-medium. When the dataset distribution is severely restricted, A2PO struggles to efficiently train both the CVAE and the agent model, which can result in task failure.
**[Q3: This paper focuses on the analysis and comparison of advantage-weighted offline RL methods, ignoring recent work on policy constraints in the same relative as A2PO, such as PRDC: Policy Regularization with Dataset Constraint for Offline Reinforcement Learning.]**
Thanks for the insightful suggestion! We have additionally conducted experiments to compare the recently proposed policy-constrained method PRDC [3] with our A2PO. The results are presented in Table R2, which show that our A2PO consistently achieves superior performance on the majority of the gym, maze, antmaze, and adroit tasks. This comparison clearly underscores the superior performance and effectiveness of the advantage-aware policy constraints employed in A2PO for offline policy optimization.
**[Q4: Regarding implementation details, how can we ensure a sufficient number of high-quality samples in the dataset to support training when selecting ξ = 1? ]**
It is challenging to ensure that the dataset comprises an adequate number of high-quality samples. We address this limitation by utilizing the generalization capabilities of the generative model CVAE. By selecting $\xi=1$, the advantage-aware CVAE can generate and infer the optimal actions. As indicated in Table 6 of Appendix E, by directly inputting $\xi=1$, the CVAE-generated action $a\sim p_\psi(\cdot|z\sim\mathcal N(0,I),c)$ can also achieve high performance in various scenarios. Furthermore, we have also investigated the scenario where the dataset contains only a small quantity of high-quality samples, as shown in Appendix H. The results demonstrate that our A2PO algorithm is capable of achieving expert-level performance, even with a limited number of high-quality samples. These findings underscore the generalization ability of the advantage-aware CVAE and the robustness of our A2PO algorithm in deriving optimal policies across diverse structures of offline datasets.
**[ Q5: How can we guarantee the accuracy of the introduced advantage information? ]**
Thanks for the insightful comments. The advantage value is determined by both the critic and the advantage estimation method. It is important to clarify that we do not propose a novel advantage computation method; rather, we adopt the widely accepted offline RL baseline TD3+BC [4] as our framework. We utilize the advantage definition $A(s, a) = Q(s, a) - V(s)$ for advantage estimation and subsequent normalization, which ensures that the agent maintains an effective advantage estimate.
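As a minimal illustration of this estimate (our own sketch, not the authors' code; the min-max normalization to $[0,1]$ is an illustrative choice for mapping advantages to the condition $\xi$):

```python
def advantage_condition(q_values, v_values):
    # A(s, a) = Q(s, a) - V(s), then min-max normalized to [0, 1] so the
    # scalar can serve as the CVAE condition xi. The normalization scheme
    # here is an illustrative assumption, not necessarily the paper's.
    adv = [q - v for q, v in zip(q_values, v_values)]
    lo, hi = min(adv), max(adv)
    return [(a - lo) / (hi - lo + 1e-8) for a in adv]

xi = advantage_condition([1.0, 3.0, 2.0], [0.5, 0.5, 0.5])
assert xi.index(max(xi)) == 1              # highest-advantage sample gets xi near 1
assert all(0.0 <= x <= 1.0 for x in xi)    # conditions stay in [0, 1]
```

With such a mapping, querying the policy at the extreme condition $\xi = 1$ corresponds to asking for the highest-advantage behavior seen in the data.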
[1] Decision Transformer: Reinforcement Learning via Sequence Modeling. NeurIPS 2021
[2] Boosting Offline Reinforcement Learning via Data Rebalancing. arXiv 2022
[3] Policy Regularization with Dataset Constraint for Offline Reinforcement Learning. ICML 2023
[4] A Minimalist Approach to Offline Reinforcement Learning. NeurIPS 2021
---
Rebuttal Comment 1.1:
Comment: Thank you for your review and comments. We hope that our additional evaluations and rebuttal have addressed your primary concerns with our paper. We would really appreciate feedback as to whether there are any (existing or new) points we have not covered, and we would be happy to address/discuss them!
---
Rebuttal 2:
Title: Experiment results for Q1 and Q3
Comment: Table R1. Test returns of our A2PO and the return-conditioned and data rebalancing baselines. **Bold** indicates the best performance among the compared algorithms. *Italic* indicates that the scores are taken directly from the original paper. For the newly constructed dataset, we rerun %BC and DT with the official code from [1]. As ReD [2] has no publicly available source code, we reimplemented it ourselves.
| Env | %BC | DT | CQL+ReD | TD3BC+ReD | IQL+DeReD stage 1 | IQL+DeReD stage 2 | IQL+ReD | A2PO |
| -------------------------------- | ------------- | ------------- | ---------- | ---------- | ----------------- | ----------------- | ------------- | ----------------- |
| halfcheetah-medium | *42.5* | *42.6* | 48.2 | ***48.5*** | *47.5* | *47.6* | *47.6* | 47.1$\pm$0.2 |
| hopper-medium | *56.9* | *67.6* | *69.4* | *59.3* | *65.5* | *65.1* | *66.0* | **80.3**$\pm$4.0 |
| walker2d-medium | *75.0* | *74.0* | *83.5* | *83.7* | *74.5* | *81.9* | *78.6* | **84.9**$\pm$0.2 |
| Halfcheetah-medium-replay | *40.6* | *36.6* | ***46.3*** | *44.7* | *43.9* | *43.4* | *44.3* | 44.8$\pm$0.2 |
| hopper-medium-replay| *75.9*| *82.7* | *98.6* | *77.4* | *92.8* | *100.1* | *101.0* | **101.6**$\pm$1.3 |
| walker2d-medium-replay| *62.5*| *66.6* | ***86.7*** | *82.3* | *72.9* | *77.0* | *79.5* | 82.8$\pm$1.7 |
| Halfcheetah-medium-expert| *92.9* | *86.8* | *81.6* | *93.2* | *87.9*| *91.8*| *92.6*| **95.6**$\pm$0.5 |
| hopper-medium-expert| *110.9* | *107.6* | *95.0* | *106.2* | *89.3*| *104.7*| *106.1*| **113.4**$\pm$0.5 |
| walker2d-medium-expert| *109.0* | *108.1* | *110.0* | *110.0* | *110.1*| *110.5*| *110.5*| **112.1**$\pm$0.2 |
| halfcheetah-random-medium| 40.1$\pm$0.3 | 42.0$\pm$1.3 | -| -| -| -| 42.0$\pm$3.0 | **48.5**$\pm$0.3 |
| hopper-random-medium| 21.0$\pm$5.9 | 3.1$\pm$0.0 | -| -| -| -| 6.7$\pm$2.9 | **62.1**$\pm$2.8 |
| walker2d-random-medium| 32.0$\pm$8.1 | 66.9$\pm$8.4 | -| -| -| -| 60.0$\pm$2.6 | **82.3**$\pm$0.4 |
| halfcheetah-random-expert| 7.7$\pm$2.7 | 10.3$\pm$7.4 | -| -| -| -| 42.4$\pm$26.1 | **90.3**$\pm$1.6 |
| hopper-random-expert| 2.4$\pm$0.1 | 90.2$\pm$8.5 | -| -| -| -| 16.7$\pm$6.4 | **112.5**$\pm$1.3 |
| walker2d-random-expert| 53.4$\pm$14.5 | 103.4$\pm$7.9 | -| -| -| -| 93.2$\pm$29.1 | **109.1**$\pm$1.4 |
| halfcheetah-random-medium-expert | 29.1$\pm$5.2 | 42.6$\pm$0.6 | -| -| -| -| 39.1$\pm$21.6 | **90.6**$\pm$1.6 |
| hopper-random-medium-expert | 62.0$\pm$18.3 | 46.1$\pm$1.9 | -| -| -| -| 31.3$\pm$6.6 | **107.8**$\pm$0.4 |
| walker2d-random-medium-expert | 10.6$\pm$4.1 | 78.8$\pm$3.0 | -| -| -| -| 52.0$\pm$10.9 | **97.7**$\pm$6.7 |
Table R2. Test returns of the recent policy-constraint offline RL method PRDC and our A2PO. **Bold** indicates the best performance; *italic* indicates scores taken directly from the original paper.
| Env| PRDC| A2PO|
| -------------------------------- | ------------------- | ------------------ |
| halfcheetah-medium| ***63.5**$\pm$0.9* | 47.1$\pm$0.2 |
| hopper-medium| ***100.3**$\pm$0.2* | 80.3$\pm$4.0 |
| walker2d-medium| ***85.2**$\pm$0.4* | **84.9**$\pm$0.2 |
| Halfcheetah-medium-replay| ***55.0**$\pm$1.1* | 44.8$\pm$0.2 |
| hopper-medium-replay| *100.1$\pm$1.6* | **101.6**$\pm$1.3 |
| walker2d-medium-replay| ***92.0**$\pm$1.6* | 82.8$\pm$1.7 |
| Halfcheetah-medium-expert| *94.5$\pm$0.5* | **95.6**$\pm$0.5 |
| hopper-medium-expert| *109.2$\pm$4.0* | **113.4**$\pm$0.5 |
| walker2d-medium-expert| *111.2$\pm$0.6* | **112.1**$\pm$0.2 |
| halfcheetah-random-medium| **56.5**$\pm$2.6 | 48.5$\pm$0.3 |
| hopper-random-medium| 5.5$\pm$0.4 | **62.1**$\pm$2.8 |
| walker2d-random-medium| 5.5$\pm$0.8 | **82.3**$\pm$0.4 |
| halfcheetah-random-expert| 1.3$\pm$0.5 | **90.3**$\pm$1.6 |
| hopper-random-expert| 24.8$\pm$14.6 | **112.5**$\pm$1.3 |
| walker2d-random-expert| 1.1$\pm$0.7 | **109.1**$\pm$1.4 |
| halfcheetah-random-medium-expert | 10.5$\pm$2.8 | **90.6**$\pm$1.6 |
| hopper-random-medium-expert| 88.5$\pm$15.2| **107.8**$\pm$0.4 |
| walker2d-random-medium-expert| 4.87$\pm$3.2| **97.7**$\pm$6.7 |
| maze2d-umaze| 127.4$\pm$23.4| **133.3**$\pm$9.6 |
| maze2d-medium| 60.0$\pm$5.3| **114.9**$\pm$12.9 |
| maze2d-large| 151.6$\pm$16.0| **156.4**$\pm$5.8 |
| antmaze-umaze-diverse-v2| *90.0$\pm$6.8* | **93.3**$\pm$4.7 |
| antmaze-medium-diverse-v2| *78.8$\pm$6.9* | **86.7**$\pm$9.4 |
| antmaze-large-diverse-v2| *50.0$\pm$5.4* | **53.3**$\pm$4.7 | | Summary: The authors propose an advantage aware offline RL algorithm for datasets consisting of data from multiple behavior policies. The algorithm consists of two steps, (1) behavior policy disentangling: Wherein a CVAE is trained to output actions conditioned on normalized advantage and state. (2) Policy optimization, where the policy is optimized based on the actions generated by the CVAE. The authors demonstrate the performance of the algorithms on standard benchmarks
Strengths: 1. The paper is well written and easy to follow
2. The experimental section is extensive
Weaknesses: 1. **The Agent policy optimization is unclear**
1. Why is a behavior regularization term (2nd term in (8)) needed when you have a generative model trained on the dataset?
2. Why not condition the 2nd term in Eq. (8) on c*?
2. **Missing some return conditioned baselines**
1. Comparison with return conditioned approaches such as %BC and Decision Transformers are needed to understand the advantage of the proposed approach
2. Comparisons with data rebalancing techniques [1] can help understand methods with mixed datasets
[1] Yue, Yang, et al. "Boosting offline reinforcement learning via data rebalancing." arXiv preprint arXiv:2210.09241 (2022).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments on the completeness of the experiments and the quality of the writing. We address your concerns below.
**[Q1.1: The Agent policy optimization is unclear: Why is a behavior regularization term (2nd term in (8)) needed when you have a generative model trained on the dataset?]**
Sorry for the confusion. First of all, as an offline RL method with policy constraints, A2PO requires the agent policy to closely align with the behavior policy. In order to achieve this, we incorporate a behavior regularization term that constrains the action distribution of the advantage-aware agent based on the computed advantage value.
Meanwhile, although the latent action space inherently imposes a constraint on the action output [1], the regularization term ensures that the actions chosen by the A2PO policy align with the advantage condition $\xi$ determined by the critic, thereby providing a more precise policy constraint. To confirm this perspective, we conducted an ablation study that eliminates the regularization term (Table 7 and Appendix F of the original paper). The results demonstrate that the regularization term enhances A2PO's performance in most cases, providing strong support for its necessity and effectiveness.
**[Q1.2: The agent policy optimization is unclear: Why not condition the 2nd term in Eq. (8) on $c^*$?]**
Thanks for your insightful comment. By directly conditioning the constraint on $c^*$, the agent would become fully constrained to the 'optimal' action generated by the CVAE. However, this approach may introduce additional approximation errors arising from CVAE training. Instead, we propose utilizing the critic-computed $c$ and constraining the agent on the precisely sampled action. This not only avoids such errors but also allows the agent to be aware of the multiple action distributions of the behavior policies within the dataset.
To further verify our statement, we conducted an ablation study on the 'random-medium', 'random-expert' and 'random-medium-expert' datasets by changing the regularization term from the original advantage-aware form $\mathbb E_{\substack{(s,a)\sim \mathcal D, \tilde{z}\sim \pi_\omega(\cdot|c), \\ a_\xi\sim p_\psi(\cdot|\tilde{z}, c)}}\big[(a-a_\xi)^2\big]$ to $\mathbb E_{\substack{(s,a)\sim \mathcal D, \tilde{z}^*\sim \pi_\omega(\cdot|c^*), \\ a_\xi^*\sim p_\psi(\cdot|\tilde{z}^*, c^*),\\ a_\text{cvae}^*\sim p_\psi(\cdot|z, c^*), z\sim \mathcal N(0,I)}}\big[(a_\xi^*-a_\text{cvae}^*)^2\big]$. Therefore, the output of the A2PO optimal policy $a_\xi^*$ is constrained only by the CVAE-inferred best action $a_\text{cvae}^*$. The results are presented in Table R1, which show that our original form outperforms the variation in most cases, supporting our regularization term.
Table R1. Test returns of A2PO and A2PO constrained on $a_\text{cvae}^*\sim p_\psi(\cdot|z=\mathcal N(0,I), c^*)$. **Bold** indicates the best performance among the two algorithms.
| Env | A2PO constrained on $a_\text{cvae}^*$ | A2PO |
| -------------------------------- | ------------------------------------- | ----------------- |
| halfcheetah-random-medium | 41.0$\pm$1.6 | 48.5$\pm$0.3 |
| hopper-random-medium | 40.9$\pm$2.0 | **62.1**$\pm$2.8 |
| walker2d-random-medium | 57.8$\pm$3.4 | **82.3**$\pm$0.4 |
| halfcheetah-random-expert | **93.1**$\pm$6.2 | 90.3$\pm$1.6 |
| hopper-random-expert | 81.8$\pm$0.2 | **112.5**$\pm$1.3 |
| walker2d-random-expert | 96.8$\pm$3.3 | **109.1**$\pm$1.4 |
| halfcheetah-random-medium-expert | 89.0$\pm$4.8 | **90.6**$\pm$1.6 |
| hopper-random-medium-expert | 12.8$\pm$4.0 | **107.8**$\pm$0.4 |
| walker2d-random-medium-expert | 63.0$\pm$4.5 | **97.7**$\pm$6.7 |
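For concreteness, the structural difference between the two regularization targets can be sketched with toy numbers (all action values below are hypothetical; `mse` stands in for the squared-error expectation over the batch):

```python
def mse(u, v):
    """Mean squared error between two action vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

# Toy 2-D actions for a single state (hypothetical values):
a_data = [0.20, -0.10]       # a ~ D: an action from the offline dataset
a_xi = [0.25, -0.05]         # policy action under the critic-computed condition c
a_xi_star = [0.90, 0.80]     # policy action under the optimal condition c* = 1
a_cvae_star = [0.85, 0.75]   # CVAE-decoded action under c* (z ~ N(0, I))

reg_a2po = mse(a_data, a_xi)               # original term: anchored on dataset actions
reg_variant = mse(a_xi_star, a_cvae_star)  # ablated term: anchored on the CVAE's inferred action
assert reg_a2po >= 0.0 and reg_variant >= 0.0
```

The original term pulls the policy toward real dataset actions under the advantage condition observed for them, while the variant pulls only toward the CVAE's own inferred "best" action, inheriting any CVAE approximation error.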
**[Q2: Missing some return conditioned baselines:1. Comparison with return conditioned approaches such as %BC and Decision Transformers are needed to understand the advantage of the proposed approach; 2.Comparisons with data rebalancing techniques can help understand methods with mixed datasets]**
Thanks for your constructive suggestions! We have additionally compared our A2PO with the return-conditioned baselines 10%BC and Decision Transformer [2], as well as offline RL with the data rebalancing technique ReD and its variant DeReD [3]. The results are presented in Table R2. We directly take the scores from the original paper for the 'medium', 'medium-replay' and 'medium-expert' datasets. For the new 'random-medium', 'random-expert' and 'random-medium-expert' datasets, due to the time limit, we only evaluate IQL with ReD. The results show that our A2PO outperforms both the return-conditioned baselines and the data rebalancing techniques in most cases, which further demonstrates the superiority of our advantage-aware method in tackling the constraint conflict issue.
[1] PLAS: Latent Action Space for Offline Reinforcement Learning. CoRL 2021
[2] Decision Transformer: Reinforcement Learning via Sequence Modeling. NeurIPS 2021
[3] Boosting Offline Reinforcement Learning via Data Rebalancing. arXiv 2022
---
Rebuttal 2:
Title: Experiment results for Q2
Comment: Table R2. Test returns of our A2PO and the return-conditioned and data rebalancing baselines. *Italic* indicates that the scores are taken directly from the original paper. For the newly constructed dataset, we rerun %BC and DT with the official code from [2]. As ReD [3] has no publicly available source code, we reimplemented it ourselves.
| Env | %BC | DT | CQL+ReD | TD3BC+ReD | IQL+DeReD stage 1 | IQL+DeReD stage 2 | IQL+ReD | A2PO |
| -------------------------------- | ------------- | ------------- | ---------- | ---------- | ----------------- | ----------------- | ------------- | ----------------- |
| halfcheetah-medium | *42.5* | *42.6* | 48.2 | ***48.5*** | *47.5* | *47.6* | *47.6* | 47.1$\pm$0.2 |
| hopper-medium | *56.9* | *67.6* | *69.4* | *59.3* | *65.5* | *65.1* | *66.0* | **80.3**$\pm$4.0 |
| walker2d-medium | *75.0* | *74.0* | *83.5* | *83.7* | *74.5* | *81.9* | *78.6* | **84.9**$\pm$0.2 |
| Halfcheetah-medium-replay | *40.6* | *36.6* | ***46.3*** | *44.7* | *43.9* | *43.4* | *44.3* | 44.8$\pm$0.2 |
| hopper-medium-replay | *75.9* | *82.7* | *98.6* | *77.4* | *92.8* | *100.1* | *101.0* | **101.6**$\pm$1.3 |
| walker2d-medium-replay | *62.5* | *66.6* | ***86.7*** | *82.3* | *72.9* | *77.0* | *79.5* | 82.8$\pm$1.7 |
| Halfcheetah-medium-expert | *92.9* | *86.8* | *81.6* | *93.2* | *87.9* | *91.8* | *92.6* | **95.6**$\pm$0.5 |
| hopper-medium-expert | *110.9* | *107.6* | *95.0* | *106.2* | *89.3* | *104.7* | *106.1* | **113.4**$\pm$0.5 |
| walker2d-medium-expert | *109.0* | *108.1* | *110.0* | *110.0* | *110.1* | *110.5* | *110.5* | **112.1**$\pm$0.2 |
| halfcheetah-random-medium | 40.1$\pm$0.3 | 42.0$\pm$1.3 | - | - | - | - | 42.0$\pm$3.0 | **48.5**$\pm$0.3 |
| hopper-random-medium | 21.0$\pm$5.9 | 3.1$\pm$0.0 | - | - | - | - | 6.7$\pm$2.9 | **62.1**$\pm$2.8 |
| walker2d-random-medium | 32.0$\pm$8.1 | 66.9$\pm$8.4 | - | - | - | - | 60.0$\pm$2.6 | **82.3**$\pm$0.4 |
| halfcheetah-random-expert | 7.7$\pm$2.7 | 10.3$\pm$7.4 | - | - | - | - | 42.4$\pm$26.1 | **90.3**$\pm$1.6 |
| hopper-random-expert | 2.4$\pm$0.1 | 90.2$\pm$8.5 | - | - | - | - | 16.7$\pm$6.4 | **112.5**$\pm$1.3 |
| walker2d-random-expert | 53.4$\pm$14.5 | 103.4$\pm$7.9 | - | - | - | - | 93.2$\pm$29.1 | **109.1**$\pm$1.4 |
| halfcheetah-random-medium-expert | 29.1$\pm$5.2 | 42.6$\pm$0.6 | - | - | - | - | 39.1$\pm$21.6 | **90.6**$\pm$1.6 |
| hopper-random-medium-expert | 62.0$\pm$18.3 | 46.1$\pm$1.9 | - | - | - | - | 31.3$\pm$6.6 | **107.8**$\pm$0.4 |
| walker2d-random-medium-expert | 10.6$\pm$4.1 | 78.8$\pm$3.0 | - | - | - | - | 52.0$\pm$10.9 | **97.7**$\pm$6.7 |
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for their detailed response. I have updated my score accordingly. I wish the authors the best!
---
Reply to Comment 2.1.1:
Comment: We sincerely appreciate your positive feedback and the time and effort you have put into reviewing our responses and clarifications. It is immensely gratifying to learn that our clarifications have helped address your main concerns.
Thank you once again for your constructive feedback and support. | Summary: The paper proposes using CVAE to train an agent and using a decoder, which conditions on state and advantage value, to generate actions. The Q function and V function are trained simultaneously, and the policy is trained in a TD3-BC style. Experiments show that the method can outperform baselines in most tasks.
Strengths: The paper is easy to follow, and the motivation and intuition for the algorithm design are reasonable. The experiments are comprehensive and justify the advantages of the method claimed by the authors. Conditioning on the advantage value, instead of using it as a coefficient, is interesting, allowing one to set the advantage condition to its extreme value ($\xi=1$) to sample the optimal action.
Weaknesses: I found some results in the experimental section problematic. More details are in the Questions section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In line 169, what prior distribution of $p(z)$ did you choose?
2. When training the Q and V functions (Eq(7)), the authors claimed that training simultaneously can stabilize training. Do the authors have any intuition, theory, or literature to back up such a design? I ask because, in my memory, most methods that need to estimate the V function train Q and V separately, such as IQL[1] and SAC[2]. Simultaneous training may cause instability, similar to training GANs.
3. In Table 1, I found some baseline results inconsistent with the original paper. For example, hopper-medium-v2 is 90.5 for Diffusion-QL in the original paper, but 70.3 in this paper; antmaze-umaze-diverse is 66.2 for Diffusion-QL in the original paper, but 24 in this paper. Most of the results for Diffusion-QL are not consistent with the original paper, while in line 234, the authors claim their results are from the original paper. Can the authors explain the mismatch?
4. In line 463, the authors state they use the v2 version for antmaze tasks, while most other baselines, including IQL and Diffusion-QL, use the v0 version. In my experience, the v2 version dataset can always provide a higher score than the v0 version when using the same algorithm. It is unfair to compare A2PO's v2 results with the v0 results of other baselines.
5. I found that the training curve of hopper random-medium-expert in Figure 4 is unstable. Can the authors provide more training curves for other environments?
[1] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning." arXiv preprint arXiv:2110.06169 (2021).
[2]Haarnoja, Tuomas, et al. "Soft actor-critic algorithms and applications." arXiv preprint arXiv:1812.05905 (2018).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: My main concern is about the empirical experiments. It seems the algorithm doesn't have competitive performance compared to diffusion-based policy algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for affirming the idea's novelty and the experiment's comprehensiveness. In this section, we aim to address your concerns.
**[Q1: In line 169, what prior distribution of $p(z)$ did you choose?]**
Sorry for the confusion. As stated in line 167 of the original paper, the prior $p(z)$ is chosen to be $\mathcal N(0,I)$.
**[Q2: When training the Q and V functions (Eq(7)), the authors claimed that training simultaneously can stabilize training. Do the authors have any intuition, theory, or literature to back up such a design?]**
Thank you for your insightful question. We apologize for the confusion caused by the misdescription in the manuscript. In fact, we did not train the Q and V functions simultaneously; instead, we employed an alternating training approach for policy evaluation following the LAPO method [1]. Specifically, the Q-function is still optimized with the Bellman equation: $L_Q(\theta)=\mathbb E_{(s,a,r,s')\sim \mathcal D}\sum_i [r+\gamma V_{\hat \phi}(s')-Q_{\theta_i}(s,a)]^2$, while the V-function is optimized with the expectation over the Q-value: $L_V(\phi)=\mathbb E_{s\sim \mathcal D}[V_\phi(s)-\mathbb E_{ \tilde z^*\sim \pi_\omega (\cdot|c^*), a^*_\xi\sim p_\psi(\cdot|\tilde z^*, c^*) }Q_{\hat\theta}(s,a_\xi^*)]^2$. Corresponding to the implementation presented in the supplementary material, in lines 230-238 of `latent_a.py`, the V-function update code is:
```python
adv_sample = torch.ones((next_s.size()[0], 1)).to(self.device)        # get \xi^*
latent_a = self.actor_target(torch.cat((s, adv_sample), dim=1))       # get \tilde z^*
hat_a = self.vae.decode(torch.cat((s, adv_sample), dim=1), latent_a)  # get a_\xi^*
temp_q1, temp_q2 = self.critic_target(s, hat_a)                       # get double-Q values
target_v = torch.min(temp_q1, temp_q2) * self.doubleq_min + torch.max(temp_q1, temp_q2) * (1 - self.doubleq_min)
current_v = self.critic.v(s)
loss = ... + F.mse_loss(current_v, target_v)
```
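As a sanity check on the weighted double-Q target in the snippet above, the same computation can be reproduced on toy values in plain Python (illustrative numbers only; `w` stands in for `self.doubleq_min`):

```python
# toy double-Q values for a batch of 4 states
q1 = [1.0, 2.0, 3.0, 4.0]
q2 = [2.0, 1.0, 5.0, 3.0]
w = 0.75  # plays the role of self.doubleq_min above

# conservative value target: weighted blend of the elementwise min and max
target_v = [min(a, b) * w + max(a, b) * (1 - w) for a, b in zip(q1, q2)]
assert target_v == [1.25, 1.25, 3.5, 3.25]
```

A `w` closer to 1 makes the target more pessimistic, i.e. closer to the minimum of the two critics.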
**[Q3: In Table 1, I found some baseline results inconsistent with the original paper.]**
Sorry for the confusion. We have systematically reviewed the score tables for the 'medium', 'medium-replay', and 'medium-expert' datasets, rectifying the recording errors identified. The mismatched scores reported in the original manuscript were obtained by re-running the corresponding source code and were accidentally mixed in with the officially reported scores.
We give a new comparison table between these baselines and A2PO, as illustrated in Table R1, and we will update the paper accordingly.
**[Q4: It is unfair to compare A2PO's v2 results with the v0 results of other baselines.]**
Thanks for your constructive comments. We have additionally tested A2PO using the v0 version for the AntMaze tasks. For BC, BCQ, TD3+BC, CQL, AWAC, IQL, and Diffusion-QL, we have directly referenced the scores from the Diffusion-QL paper [2]. For CQL+AW, we have utilized the scores from the CQL+AW paper [3]. For EQL and LAPO, neither of which provides scores for the v0 version, we have re-evaluated their performance, presenting both the scores and standard deviations. The results are summarized in Table R2, indicating that our A2PO continues to achieve performance comparable to that of advanced baselines such as Diffusion-QL.
**[Q5: I found that the training curve of hopper random-medium-expert in Figure 4 is unstable. Can the authors provide more training curves for other environments?]**
Thanks for the insightful comment. We have provided various A2PO training curves in the supplementary pdf, in which the A2PO is trained with 1M timesteps while performing online evaluation at each 2K timesteps. The results show that the A2PO maintains overall stability during training on most of the tasks and datasets.
**[Q6: My main concern is about the empirical experiments. It seems the algorithm doesn't have competitive performance compared to diffusion-based policy algorithms.]**
We appreciate your valuable comment; however, we respectfully disagree with the assertion that A2PO does not exhibit competitive performance compared to diffusion policy for the following reasons:
1. Even taking into account the comparative data from the Diffusion-QL paper, our A2PO method still outperforms Diffusion-QL in 11 out of 18 gym tasks, 4 out of 6 maze tasks, 2 out of 3 kitchen tasks, and 1 out of 2 adroit tasks. This demonstrates that our algorithm exhibits a comparative advantage over the diffusion policy in most task scenarios.
2. The primary focus of our research is not on benchmarking against diffusion-based methods but on addressing the conflict constraint issue, which current AW methods fail to solve effectively. Our extensive experiments indicate that A2PO significantly outperforms existing AW methods, such as LAPO.
3. Our approach is orthogonal to diffusion-based methods. Specifically, Diffusion-QL utilizes diffusion as the policy to implicitly capture the multimodal action distribution, while it overlooks the intrinsic quality of different actions. In contrast, our A2PO explicitly leverages the advantage value to disentangle the action distributions of interrelated behavior policies, ensuring that the quality of different actions is positively correlated with the advantage value. Therefore, our advantage-aware mechanism can be effectively integrated with diffusion-based methods, enhancing their capacity to identify multimodal data of varying quality.
In conclusion, our core contribution lies not in the model itself but in the introduction of an advantage-aware policy.
[1] Latent-Variable Advantage-Weighted Policy Optimization for Offline RL. NeurIPS 2022
[2] Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning. ICLR 2023
[3] Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting. ICLR 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for your review and comments. We hope that our additional evaluations and rebuttal have effectively addressed your primary concerns regarding our paper. We would greatly appreciate any feedback on whether there are existing or new points that we have not yet covered, and we would be glad to address or discuss them further.
---
Rebuttal 2:
Title: Experiment results for Q3 and Q4
Comment: Table R1. Test returns of A2PO and other baselines in gym tasks. **Bold** indicates the best performance among these algorithms.
| | CQL | EQL | Diffusion-QL | CQL+AW | A2PO |
| ------------------------- | ------- | ------- | ------------ | ------ | ----------------- |
| halfcheetah-medium | *44.0* | *47.2* | *51.1* | *49* | 47.1$\pm$0.2 |
| hopper-medium | *58.5* | *70.6* | ***90.5*** | *71* | 80.3$\pm$4.0 |
| walker2d-medium | *72.5* | *83.2* | ***87.0*** | *83* | 84.9$\pm$0.2 |
| halfcheetah-medium-replay | *45.5* | *44.5* | ***47.8*** | *47* | 44.8$\pm$0.2 |
| hopper-medium-replay | *95.0* | *98.1* | *101.3* | *99* | **101.6**$\pm$1.3 |
| walker2d-medium-replay | *77.2* | *81.6* | ***95.5*** | *87* | 82.8$\pm$1.7 |
| halfcheetah-medium-expert | *91.6* | *94.6* | ***96.8*** | *84* | 95.6$\pm$0.5 |
| hopper-medium-expert | *105.4* | *111.5* | *111.1* | *91* | **113.4**$\pm$0.5 |
| walker2d-medium-expert | *108.8* | *110.2* | *110.1* | *109* | **112.1**$\pm$0.2 |
Table R2. Test returns of A2PO and other baselines in antmaze v0 tasks.
| Env | BC | BCQ | TD3+BC | CQL | EQL | Diffusion-QL | AWAC | IQL | CQL+AW | LAPO | A2PO |
| ------------------------: | ------ | ------ | ------ | ---------- | ------------- | ------------ | ------ | ------ | ------ | ------------ | ---------------- |
| antmaze-umaze-diverse-v0 | *45.6* | *55.0* | *71.4* | ***84.0*** | 50.8$\pm$11.6 | *66.2* | *49.3* | *62.2* | *54* | 0.0$\pm$0.0 | 72.6$\pm$10.2 |
| antmaze-medium-diverse-v0 | *0.0* | *0.0* | *3.0* | *53.7* | 62.2$\pm$6.7 | *78.6* | *0.7* | 70.0 | *24* | 30.2$\pm$9.4 | **80.2**$\pm$4.0 |
| antmaze-large-diverse-v0 | *0.0* | *2.2* | *0.0* | *14.9* | 38.0$\pm$5.5 | ***56.6*** | *1.0* | *47.5* | *40* | 22.3$\pm$4.8 | 52.1$\pm$7.9 | | Summary: This paper presents A2PO, an offline RL method for learning from datasets collected by a diverse set of policies. The aim of the method is to disentangle the data collected by each policy using advantage estimation and then use this information to better learn policies from the dataset. Specifically, they use a VAE conditioned on advantage and state and then train in the latent space. They perform a thorough set of experiments on benchmarks to compare to baseline methods. They also perform an ablation study to find the important qualities of their method.
Strengths: Originality. Average strength. The originality of this work is what I would expect. They have a novel idea of disentangling the policies using advantage estimation, which builds upon the previous idea of advantage estimation. I think this work is of solid novelty.
Quality. Major strength. The empirical analysis in this work is done very well. They compare to many baselines and show their method is clearly superior in many of the domains. They also perform a very good ablation study that shows the quality of their choices. I think this is a high point of the paper.
Clarity. Major strength. The paper is very well written, provides excellent figures and is easy to follow. Great job here.
Significance. Average strength. The work here seems like it takes a good step forward in the domain of offline RL with mixed quality datasets. The significance here is solid due to their impressive results but no more than I would expect from a good paper.
Weaknesses: I don't have any major comments here. I believe this paper is complete, written well, has impressive results and is easy to follow.
Technical Quality: 4
Clarity: 4
Questions for Authors: None.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitation they address is that the method is slower than others, but no slower than some baselines they provide. I believe this is addressed well enough and improving upon the computation speed is clearly out of scope.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive support of our work. In the future, we will make further improvements to A2PO to contribute to the offline RL community. | Rebuttal 1:
Rebuttal: Please refer to the attachment for the figures of all results during rebuttal.
Pdf: /pdf/d414d0661363431c5e577f33debd8e73b81baf26.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces A2PO, a novel approach to offline reinforcement learning that addresses the constraint conflict issue in mixed-quality datasets by using a Conditional Variational Auto-Encoder to disentangle action distributions and optimize policies towards high advantage values. A2PO demonstrates superior performance over existing methods on the D4RL benchmark, showcasing its effectiveness in leveraging diverse offline datasets for more robust policy learning.
Strengths: see questions part
Weaknesses: see questions part
Technical Quality: 3
Clarity: 3
Questions for Authors: This paper focuses on the constraint conflict issue in offline reinforcement learning when dealing with mixed-quality datasets collected from multiple behavior policies. By using a Conditional Variational Auto-Encoder (CVAE), A2PO disentangles the action distributions of different behavior policies present in the mixed-quality dataset. The CVAE models the advantage values as conditional variables, allowing it to generate action distributions that reflect the behavior policies' distinct characteristics.
In general, this paper is interesting and well-written. One question is that since we can directly select the good trajectories according to the rewards, is there any ablation study to show that CVAE can do better than this simple method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see questions part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your supportive review of our method, experiments, and writing skills. We will address your doubts here.
**[Q1. since we can directly select the good trajectories according to the rewards, is there any ablation study to show that CVAE can do better than this simple method?]**
Thanks for your constructive suggestions! We have additionally conducted an ablation study on the CVAE in our A2PO method. To illustrate the effectiveness of our CVAE approach compared to merely selecting high-quality trajectories for training, we eliminate the CVAE module and implement the trajectory selection method proposed in [1,2], which involves selecting only the top 10% of trajectories based on their returns for agent training.
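For clarity, the return-based top-10% baseline can be sketched as a simple filter (hypothetical helper and toy data, not the actual implementation):

```python
def select_top_k(trajectories, returns, k=0.10):
    """Keep the top-k fraction of trajectories ranked by episode return."""
    n_keep = max(1, int(len(trajectories) * k))
    order = sorted(range(len(returns)), key=lambda i: returns[i], reverse=True)
    return [trajectories[i] for i in order[:n_keep]]

# toy dataset: 20 trajectories with returns 0..19
trajs = [f"traj_{i}" for i in range(20)]
rets = list(range(20))
kept = select_top_k(trajs, rets, k=0.10)
assert kept == ["traj_19", "traj_18"]  # only the two highest-return trajectories survive
```

The filter makes the trade-off concrete: a small `k` discards most of the data, while a large `k` re-admits the conflicting low-quality behavior policies.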
The results shown in Table R1 clearly indicate that A2PO without CVAE [top-10%] consistently lags behind the original A2PO in most cases, especially on the mixed-qualities datasets like 'walker2d-random-medium' and 'walker2d-random-expert'. This is mainly due to the challenge of determining the hyperparameter $k$ for selecting the top-$k$ trajectories for training across different tasks and datasets. A smaller $k$ filters out more data, limiting the agent's awareness of the action distribution and task information. On the other hand, a larger $k$ introduces constraint conflicts that hinder agent policy improvement. Instead, our A2PO utilizes CVAE conditioned on advantage value to obtain disentangled behavior constraints. By optimizing the advantage-aware policy based on these disentangled action distribution constraints, we are able to achieve higher returns by directing the agent toward high advantage values.
Table R1. Test returns of A2PO and A2PO w/o CVAE [selecting top-10% trajectory for training]. **Bold** indicates the best performance among the two algorithms.
| Env | A2PO w/o CVAE [top-10%] | A2PO |
| -------------------------------- | ----------------------- | ----------------- |
| halfcheetah-medium | 42.1$\pm$0.1 | **47.1**$\pm$0.2 |
| hopper-medium | 61.2$\pm$9.0 | **80.3**$\pm$4.0 |
| walker2d-medium | 17.9$\pm$1.2 | **84.9**$\pm$0.2 |
| halfcheetah-medium-replay | 35.6$\pm$2.9 | **44.8**$\pm$0.2 |
| hopper-medium-replay | 89.8$\pm$1.1 | **101.6**$\pm$1.3 |
| walker2d-medium-replay | 73.9$\pm$3.4 | **82.8**$\pm$1.7 |
| halfcheetah-medium-expert | 87.2$\pm$0.2 | **95.6**$\pm$0.5 |
| hopper-medium-expert | 103.9$\pm$0.9 | **113.4**$\pm$0.5 |
| walker2d-medium-expert | 111.1$\pm$0.1 | **112.1**$\pm$0.2 |
| halfcheetah-random-medium | 45.9$\pm$0.2 | **48.5**$\pm$0.3 |
| hopper-random-medium | 60.1$\pm$1.5 | **62.1**$\pm$2.8 |
| walker2d-random-medium | 3.2$\pm$1.8 | **82.3**$\pm$0.4 |
| halfcheetah-random-expert | 54.1$\pm$2.1 | **90.3**$\pm$1.6 |
| hopper-random-expert | 105.8$\pm$1.0 | **112.5**$\pm$1.3 |
| walker2d-random-expert | 10.5$\pm$1.2 | **109.1**$\pm$1.4 |
| halfcheetah-random-medium-expert | 50.3$\pm$1.0 | **90.6**$\pm$1.6 |
| hopper-random-medium-expert | 101.3$\pm$3.9 | **107.8**$\pm$0.4 |
| walker2d-random-medium-expert | 15.2$\pm$0.6 | **97.7**$\pm$6.7 |
[1] Decision Transformer: Reinforcement Learning via Sequence Modeling. NeurIPS 2021.
[2] Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply! My concern is solved and I would like to improve my rating to 7. | null | null | null | null | null | null |
Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning | Accept (poster) | Summary: This work studies OOD detection in mathematical reasoning, presenting a new measurement, the TV score, based on the observed pattern-collapse property and early stabilization of GLMs. It appears to be the first discussion of OOD detection in mathematical reasoning.
Strengths: This appears to be the first discussion on OOD detection in mathematical reasoning. The authors clearly explain why the trajectory works in an understandable and empirical manner. The manuscript is well-organized, and the methodology is both simple and effective.
Weaknesses: 1. Additional metrics commonly used for OOD detection should be included for evaluation.
2. The writing requires improvement.
3. Further insight into the potential impact of over-smoothing on the setting of the critical parameter $k$ is necessary.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it true that the embedding dimension should be fixed across layers to compute the embedding difference between neighboring layers?
2. To better support the claim of choosing $k \leq 5$, it would be beneficial to provide a quantitative analysis of over-smoothing occurrence when $k$ is set to a larger value.
3. For testing the OOD performance, could you report the evaluation results in terms of AUPR and F1?
4. Which datasets were used for the analysis in Figure 3?
5. There are some grammatical mistakes that need fixing. Here are a few examples:
(a) "we need ..., then computer ..." in Line 150.
(b) The sentence in Lines 161-162.
(c) "Outliers ..., then lead ..." in Lines 164-165.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately claimed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your constructive comments! We will respond to the weaknesses and questions you raised in the following areas:**
---
> More Metrics (W1 & Q3)
Thank you for suggesting richer evaluation metrics about AUPR and F1. The results are below:
| Method | Llama2-7B | | | | GPT2-XL | | | |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | Far-shift OOD | | Near-shift OOD | | Far-shift OOD | | Near-shift OOD | |
| | AUPR | F1 | AUPR | F1 | AUPR | F1 | AUPR | F1 |
| MS-Prob | 74.22±1.20 | 68.42±1.39 | 53.30±1.65 | 66.86±1.06 | 81.35±0.98 | 71.78±0.98 | 66.26±0.99 | 65.45±0.85 |
| MC-Drop | 57.86±0.34 | 54.87±1.92 | 40.87±0.66 | 59.78±1.29 | 65.13±1.09 | 62.34±1.38 | 63.24±0.87 | 61.12±1.33 |
| PPL | 78.89±0.40 | 72.25±0.87 | 62.73±0.98 | 75.58±1.11 | 71.32±1.21 | 75.60±0.74 | 62.64±0.59 | 60.95±1.16 |
| I-Emb | 79.63±1.17 | 69.08±1.11 | 54.72±1.89 | 61.92±0.99 | 86.69±0.37 | 88.42±0.29 | 78.82±0.62 | 69.29±1.09 |
| O-Emb| 60.58±1.25 | 56.06±1.65 | 39.84±0.81 | 60.05±0.94 | 78.24±0.86 | 85.34±0.41 | 75.39±0.75 | 68.83±1.06 |
| TV score (ours) | **98.81±0.16** | **93.17±0.49** | **90.73±0.42** | **80.43±0.54** | 98.04±0.08 | 98.86±0.09 | **89.91±0.65** | **78.15±0.76** |
| w/ DiSmo (ours) | 96.19±0.06 | 85.79±0.78 | 80.57±0.72 | 78.98±0.65 | **98.28±0.06** | **99.63±0.07** | 84.04±0.70 | 76.03±0.94 |
**On both metrics, our approach still maintains a significant lead.**
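For reference, the threshold-free AUROC metric reported alongside AUPR and F1 reduces to the probability that a random OOD sample scores above a random ID sample; a minimal illustration with toy scores (not the paper's data):

```python
# toy OOD-detection scores: higher should mean "more OOD"
id_scores = [0.1, 0.2, 0.3, 0.4]
ood_scores = [0.35, 0.5, 0.6]

# AUROC = P(score_OOD > score_ID), with ties counted as 1/2
pairs = [(o, i) for o in ood_scores for i in id_scores]
auroc = sum(1.0 if o > i else 0.5 if o == i else 0.0 for o, i in pairs) / len(pairs)
assert abs(auroc - 11 / 12) < 1e-9  # 0.35 loses only to the ID score 0.4
```

A perfectly separated score assigns AUROC 1.0; a random score gives 0.5, which is why values near 50 in the ablation tables indicate failure.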
---
> Discussion about k (W3 & Q2)
Thanks for suggesting a more detailed discussion of $k$. In Section 5 and Appendix G.1, we have conducted ablation studies for cases of $k \leq 5$. To support our claim of choosing $k \leq 5$ and to provide evidence of the over-smoothing phenomenon when $k$ is too large, we report more AUROC results on Llama2-7B and GPT2-XL as we continue to increase $k$:
| $k$ value | Llama2-7B | | GPT2-XL | |
| --- | --- | --- | --- | --- |
| | Far-shift OOD | Near-shift OOD | Far-shift OOD | Near-shift OOD |
| 0 | 98.76 | 92.64 | 93.47 | 94.86 |
| 1 | 94.71 | 87.98 | 95.55 | 94.08 |
| 2 | 94.66 | 85.39 | 96.54 | 94.19 |
| 3 | 89.57 | 76.47 | 95.32 | 93.44 |
| 4 | 82.20 | 58.66 | 95.17 | 92.09 |
| 5 | 79.52 | 49.25 | 94.26 | 92.18 |
| 6 | 57.65 | 47.89 | 93.01 | 82.65 |
| 7 | 55.27 | 47.16 | 90.28 | 76.50 |
| 8 | 58.63 | 48.34 | 82.11 | 76.18 |
| 9 | 54.10 | 51.28 | 73.89 | 68.22 |
| 10 | 52.93 | 49.72 | 59.66 | 57.91 |
When $k>5$, the AUROC value on Llama2-7B drops sharply and stabilizes around 50. The AUROC value on GPT2-XL remains high, but also drops sharply as $k$ approaches 10. This is because GPT2-XL has 1.5 times as many layers as Llama2-7B, so its trajectory information is richer but also relatively noisier.
Overall, as $k$ continues to increase, the useful trajectory-volatility information is gradually blurred even though more noise is erased, causing over-smoothing. Therefore, $k \leq 5$ is a better trade-off range, which is why we choose it.
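This trade-off can be illustrated with a toy moving-average smoother: as the window grows, noise is erased but so is the late-layer volatility signal (synthetic trajectory, not real model data):

```python
import random

random.seed(0)
# synthetic per-layer shift magnitudes: 28 noisy layers, then a late-layer spike
traj = [abs(random.gauss(0, 1)) for _ in range(28)] + \
       [5.0 + abs(random.gauss(0, 1)) for _ in range(4)]

def smooth(x, k):
    """Average each point with its k successors (window size k + 1)."""
    if k == 0:
        return list(x)
    return [sum(x[i:i + k + 1]) / (k + 1) for i in range(len(x) - k)]

def volatility(x):
    mean = sum(x) / len(x)
    return (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5

# heavy smoothing (k = 10) flattens the spike and shrinks the measurable volatility
assert volatility(smooth(traj, 10)) < volatility(smooth(traj, 0))
```

With a large window, the informative late-layer spike is averaged into the noisy baseline, mirroring the AUROC collapse in the table.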
---
> Experimental Setup about Figure 3 (Q4)
The ID data curve is the average of all samples in the MultiArith dataset, and the OOD data curve is the average of all samples in the five domains of the MATH dataset (Algebra, Geometry, Counting and Probability, Number Theory, and Precalculus). We will add the setup in the updated version.
---
> Method Detail: Fixed Embedding Dimension (Q1)
The embedding dimension is arbitrary and does not need to be fixed, but the embedding dimensions of neighboring layers must be equal; otherwise, their difference cannot be computed.
For language models, the output dimension of the hidden layer is the embedding dimension set in the model configuration (e.g., 4096 for Llama2-7B and 1600 for GPT2-XL), so there is no problem of inconsistent dimensionality.
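A minimal sketch of the neighboring-layer subtraction (toy dimensions; in practice the per-layer dimension is the model's hidden size, e.g. 4096 for Llama2-7B):

```python
# toy hidden states: (num_layers + 1) embeddings with equal dimension per layer
num_layers, dim = 4, 3
hidden = [[float(layer * d) for d in range(1, dim + 1)]
          for layer in range(num_layers + 1)]

# per-layer embedding shift: L2 norm of the difference between neighboring layers
shifts = [
    sum((a - b) ** 2 for a, b in zip(hidden[layer + 1], hidden[layer])) ** 0.5
    for layer in range(num_layers)
]
assert len(shifts) == num_layers  # one shift magnitude per layer transition
```

The subtraction in the list comprehension is exactly where equal neighboring dimensions are required.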
---
> Paper Writing (W2 & Q5)
Thank you for pointing out the mistakes in our writing, we will correct them in the updated version.
---
**We expect the above responses to address your concerns, and look forward to your more positive comments.**
---
Rebuttal Comment 1.1:
Comment: Thanks for your efforts to address my concerns. I stay positive for this work.
---
Reply to Comment 1.1.1:
Title: Thanks for recognition
Comment: Thank you for supporting our work! | Summary: This paper presents a trajectory-based method for OOD detection in the mathematical reasoning setting. OOD detection is extensively studied in the text setting and image setting. The main motivation of this work is claimed to that mathematical reasoning poses significant challenges to embedding-based methods due to its high-density feature of output spaces, but this feature causes larger discrepancies in the embedding shift trajectory between different samples in latent spaces. The proposed method uses trajectory volatility for OOD detection in mathematical reasoning. Experiments are conducted to validate the performance of the proposed method.
Strengths: This paper studies OOD detection in the mathematical reasoning setting, which is less studied compared to text and image setting.
This paper uses examples to illustrate the motivation and key idea. This improves the accessibility of this paper.
Weaknesses: The motivation is not strong or convincing. Figure 1 illustrates two challenges with respect to the input space and output space, respectively. It is not convincing, and more evidence or data analysis should be provided to make these two challenges solid. I do not buy that pattern collapse generally holds for all types of mathematical reasoning tasks. The example is just a special case. The output can take any value on the real line, and the collapse probability is not that high.
The idea of the proposed method is not convincing. I am not convinced that early stabilization generally holds for mathematical reasoning problems. More evidence should be provided to justify it.
The writing is also not precise. For example, in Equation (1), the domain of f is not specified. The notation \phi is not defined.
Also, the experiment is not solid. There is no experiment to justify that the performance improvement is due to addressing the challenges in the input space and output space posed in the Introduction.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your constructive comments! We will respond to your concerns one by one.**
---
> W1: The motivation is not strong and convincing. Figure 1 ...
### 1. Input Space
As for the phenomenon that embeddings vary less across different domains of the input space: we discussed this in Section 5 and conducted experiments in Appendix G.2 to demonstrate it.
### 2. Output Space
Please refer to **"General Rebuttal: Existence and Universality of "Pattern Collapse" phenomenon in mathematical reasoning"** for detailed responses and evidences.
We must clarify a key fact: **generative language models (GLMs) model real numbers or mathematical expressions not in a mathematical sense, but as discrete token sequences after tokenization. Thus, the collapse occurs at the token level, not at the level of full mathematical expressions**. Due to the autoregressive generative nature of GLMs, the collapse phenomenon occurs during the prediction of each token.
We agree with your opinion that the number of values on the real line is infinite in a mathematical sense. However, **after tokenization, they contain only 0-9 number tokens and a limited number of special symbols**, such as decimal points, slashes, root signs. This means that two largely different expressions in the mathematical sense may cover many of the same tokens.
We emphasize two key conclusions from the statistic data presented in General Rebuttal:
* Existence: **The average token duplication rate is up to 99% on all math tasks, and even a staggering 99.9% on some simple arithmetic tasks**; in contrast, **the token duplication rate on text generation tasks is only about 60%, with about 2,000 distinct token types** (a number that keeps growing as the total token count increases). *These data and comparisons demonstrate that pattern collapse occurs in mathematical reasoning and not in text generation.*
* Universality: **Token repetition rate exceeded 97% on all seven math tasks of different difficulties and types.** *This demonstrates the universality of "pattern collapse" in various mathematical reasoning tasks.*
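As a toy illustration of how the duplication-rate statistic is computed (hypothetical outputs, with characters standing in for tokenizer tokens):

```python
# hypothetical arithmetic outputs; the real statistics use the model tokenizer
outputs = ["12+7=19", "340/5=68", "19*3=57", "100-42=58", "6*6=36"]
tokens = [ch for s in outputs for ch in s]

# duplication rate = 1 - (#distinct tokens / #total tokens)
dup_rate = 1 - len(set(tokens)) / len(tokens)
assert dup_rate > 0.5  # only digits and a few operators recur across outputs
```

Because the symbol inventory saturates quickly (ten digits plus a handful of operators), the duplication rate climbs toward 1 as more outputs are added, which is the collapse behavior described above.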
---
> W2: I am not convinced that early stabilization is generally held for mathematical reasoning problems. More evidence should be provided to justify it.
### 1. Setup Description of Figure 3
The ID data curve is the average of all samples in the MultiArith dataset, and the OOD data curve is the average of all samples in the five domains of MATH dataset (Algebra, Geometry, Counting and Probability, Number Theory, and Precalculus). These are consistent with our settings for the ID and OOD datasets in the experimental setup (Section 4.1). We will add this setup in the updated version.
### 2. More evidences about "early stabilization is generally held for mathematical reasoning problems"
Please refer to **"General Rebuttal: Universality of "Early Stabilization" phenomenon in mathematical reasoning (Expanded visualization of Figure 3 in the paper)"** for detailed responses and evidences.
---
> W3: The writing is also not precise. ...the domain of f is not specified. The notation \phi is not defined.
Thanks for pointing out some questions about paper writing.
* The domain of $f(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\theta})$ is any value $(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\theta})$ sampled from the joint probability distribution $P_{\mathcal{X} \times \mathcal{Y} \times \Theta}$ defined on the $\mathcal{X} \times \mathcal{Y} \times \Theta$, where $\mathcal{X}, \mathcal{Y}, \Theta$ are the input space, the output space, and the parameter space, respectively, as defined in lines 70-75. We did not specify this because there are no special values that need to be excluded.
* The notation "\phi" does not appear in our paper.
We will improve our presentation in updated version.
---
> W4: The experiment is not solid. There is no experiment to justify that the performance improvement is due to addressing the challenges posed in the Introduction.
### 1. Response to "There is no experiment to justify that performance improvement is due to addressing the challenges ..."
* First, **the two "challenges" are two objectively existing phenomena in mathematical reasoning scenarios. However, they prevent existing embedding-based methods from being applied to mathematical reasoning, so we call them "challenges" for existing methods, not for us**. As stated in lines 38-39, *“However, embedding-based methods encounter challenges under mathematical reasoning scenario”*. These two challenges are simply introduced to explain why existing methods cannot be applied to mathematical reasoning;
* Second, because existing methods are hindered by these two phenomena, we need new methods that circumvent them. Thus, **we aim to circumvent the two challenges faced by existing methods, not to address them**. As stated in lines 47-48, *“Therefore, we transform our perspective from the static embedding representation to the dynamic embedding shift trajectory in the latent space"*; **we are not improving on existing methods; these are two different research directions.**
* Third, these two phenomena are challenges for existing methods, but opportunities for us: we utilize them to design trajectory-based methods (Section 2-3).
### 2. Reasons for performance improvement
A complete explanation of why our method yields performance improvement is given in Section 2 (motivation part).
Refer to **"General Rebuttal: Motivation line from "pattern collapse" to "early stabilization" and TV score"** for details.
### 3. Response to “The experiment is not solid”
We have performed rich dataset experiments, scalable experiments, ablation analyses, and failure analyses in Sections 4-5 and Appendices E-G to demonstrate the solidity of our experiments.
---
**We expect these responses to address your concerns, and look forward to your more positive comments.**
---
Rebuttal Comment 1.1:
Comment: Many thanks for the clarification. After reading the response, most of my concerns still remain, such as consolidating the "Pattern Collapse" phenomenon, the experimental findings, evaluation, etc. This paper presents some interesting ideas, but it needs more work to make them solid. Given this, I would like to maintain my score.
---
Reply to Comment 1.1.1:
Title: Thanks for response & Invite open discussion
Comment: Thanks for your response. Regarding *"consolidate the "Pattern Collapse" phenomenon, experiment finding, evaluation, etc."*, we have already provided a detailed response in the rebuttal stage, including clarification, statistics, visualizations, and quotations. Here we present some key evidence and conclusions again:
---
### 1. "Pattern Collapse" phenomenon (Evidence: Fact Clarification & Statistics)
First, **we have clarified that a language model's modeling of mathematical expressions is based on tokenization**, so "pattern collapse" occurs at the discrete token level, not at the level of mathematical meaning. Understanding "pattern collapse" in terms of the real number line is a misconception that goes against the way language models are modeled.
Second, **we have given statistics** on seven different types of math tasks with different domains and difficulty levels, and have compared them to two classic text generation tasks, translation and summarization.
We mainly focus on the number of token types and the duplication rate, which can reflect how much the model collapses at the token level when predicting. We again present the results as follows:
|Types of Tasks|token number in dataset |token type number in dataset |Token Duplication Rate|Vocab Coverage|
|---|---|---|---|---|
|***Mathematical Reasoning***|||||
|Arithmetic (primary difficulty)|16136|14|99.9%|0.04%|
|Arithmetic (middle-school difficulty)|5663|16|99.7%|0.05%|
|Algebra|5234|107|98.0%|0.33%|
|Geometry|2615|75|97.1%|0.23%|
|Counting and probability|2524|43|98.3%|0.13%|
|number theory|2395|71|97.1%|0.22%|
|precalculus|3388|84|97.5%|0.26%|
|*Average* | *5422* | *58* | *98.9%*| *0.18%*|
|***Text Generation***|||||
|Translation|2500 |1065 | 57.4% | 3.32% |
||5000 | 1832| 63.3% | 5.10% |
||10000 | 2980 | 70.2% | 9.31% |
|| 15000 | 3494 | 76.7% | 10.61% |
|*Average* | *5833* | *1959* | *66.4%*| *6.12%*|
|***Summarization***|||||
||2500 |1265 | 49.4% | 4.01% |
||5000 | 1970 | 60.6% | 6.16% |
||10000 | 3192 | 68.0% | 9.98% |
||15000 | 3876 | 74.1% | 12.11% |
|*Average* | *5833* | *2142* | *63.2%* | *6.69%*|
**The average token duplication rate is up to 99% on all math tasks, and even a staggering 99.9% on some simple arithmetic tasks; the token duplication rate exceeded 97% on all seven math tasks of different difficulties and types. This demonstrates that "pattern collapse" occurs across generally all types of mathematical reasoning tasks.**
---
### 2. "Early Stabilization" finding (Evidence: Visualization)
In our GENERAL REBUTTAL, **we have given detailed visualizations** of trajectories across ten datasets of different domains and different difficulty levels in mathematical reasoning.
We have also given the average volatility statistics for layers 1-31 (full layers), 20-31, and 26-31 on each dataset corresponding to the visualizations as follows:
|Dataset |1-31 layers |20-31 layers | 26-31 layers|
| --- | --- | --- | --- |
| *ID Dataset* | | | |
| MultiArith | 6.53 | 14.84 | 10.89 |
| *Near-shift OOD Dataset* | | | |
| GSM8K | 8.60 | 20.55| 26.43 |
| SVAMP | 8.02 | 18.82 | 24.68 |
| AddSub | 8.72 | 20.54 | 27.24 |
| SingleEq | 8.14 | 19.26 | 22.42 |
| SingleOp | 7.50 | 17.17 | 21.62 |
| *Far-shift OOD Dataset* | | | |
| MATH-Algebra | 8.83 | 21.35 | 31.50 |
| MATH-Geometry | 10.00 | 25.27 | 34.14 |
| MATH-Count_and_Prob | 10.30 | 25.77 | 33.70 |
| MATH-Number_Theory | 9.40 | 23.10 | 33.86 |
**In all ten datasets, the phenomenon of "early stabilization" is significantly present, and it can be sufficiently demonstrated that "early stabilization" is universal in mathematical reasoning.**
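Trajectory volatility is described elsewhere in the rebuttal as the embedding difference between neighboring layers. As a minimal numpy sketch of how the per-range averages in the table above could be computed — the Euclidean norm, variable names, and dimensions are illustrative assumptions, not taken from the released code:

```python
import numpy as np

def layer_volatility(traj):
    # traj: (L+1, d) layer embeddings for one sample; returns the L
    # Euclidean shifts between neighboring layers.
    return np.linalg.norm(np.diff(traj, axis=0), axis=1)

def range_averages(traj):
    v = layer_volatility(traj)   # 31 values for a 32-layer model
    return {"1-31": v.mean(), "20-31": v[19:].mean(), "26-31": v[25:].mean()}

rng = np.random.default_rng(0)
traj = rng.normal(size=(32, 4096))   # e.g. Llama2-7B: 32 layers, d = 4096
stats = range_averages(traj)         # averages over the three layer ranges
```

Averaging these per-sample values over a dataset would give one row of the table above.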
---
### 3. Experiment Justification (Evidence: Quotation)
**We have clarified a possible misunderstanding** about the purpose of our paper by quoting from the original text:
* The two "challenges" are faced by existing embedding-based methods that cannot be applied to mathematical reasoning, not by us. *As stated in lines 38-39, “However, embedding-based methods encounter challenges under mathematical reasoning scenario”.*
* We aim to circumvent the two challenges faced by existing methods, not to address them; ours is a different research idea rather than an improvement on existing methods. *As stated in lines 47-48, “Therefore, we transform our perspective from the static embedding representation to the dynamic embedding shift trajectory in the latent space".*
---
We sincerely respect your concerns about the evidence we provided, and an open discussion always helps us recognize the limits of our considerations. Unfortunately, we have not yet received specific details of your concerns; could you tell us which aspects of our evidence you find unconvincing? Your valuable feedback will greatly help us improve the paper's quality, and we look forward to a more open discussion with you, thanks! | Summary: This paper studies the OOD problem in GLMs under mathematical reasoning and finds the pattern collapse phenomenon in the output space. The trajectory volatility (TV) score is proposed to distinguish ID and OOD samples. A thorough evaluation shows that the proposal can outperform traditional algorithms under offline detection, online detection, and quality estimation.
Strengths: (+) This paper is well structured and well-written.
(+) The authors conduct a thorough evaluation of the proposed methods.
Weaknesses: (-) The input data for the empirical study in Figure 3 is not introduced. Thus, whether it can reflect the scenario of mathematical reasoning is not clear. It lacks theoretical analysis.
(-) The relationship of the pattern collapse (in Figure 1) and the early stabilization (in Figure 3) is not clearly illustrated.
(-) The datasets for experiments are quite limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the internal relationship between the pattern collapse (in Figure 1) and the early stabilization (in Figure 3)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your constructive comments. We will respond to your concerns one by one.**
---
> W1: The input data for the empirical study in Figure 3 is not introduced. Thus, whether it can reflect the scenario of mathematical reasoning is not clear.
Thanks for pointing this out, and sorry for the missing experimental setup (i.e., input data source) for Figure 3.
### 1. Setup Description of Figure 3
The ID data curve is the average of all samples in the MultiArith dataset, and the OOD data curve is the average of all samples in the five domains of the MATH dataset (Algebra, Geometry, Counting and Probability, Number Theory, and Precalculus). These are consistent with our settings for the ID and OOD datasets in the experimental setup (Section 4.1). We will add this setup in the updated version.
### 2. More evidences about "it can reflect the scenario of mathematical reasoning"
Please refer to **"General Rebuttal: Universality of "Early Stabilization" phenomenon in mathematical reasoning (Expanded visualization of Figure 3 in the paper)"** for detailed responses and evidences.
---
> W2 & Q1: The relationship of the pattern collapse (in Figure 1) and the early stabilization (in Figure 3) is not clearly illustrated.
Please refer to **"General Rebuttal: Motivation line from "pattern collapse" to "early stabilization" and TV score"** for detailed responses.
Thanks for pointing out the vague elaboration on the motivation line. The pattern collapse (in Figure 1) and the early stabilization (in Figure 3) are not directly related; they are connected through the theoretical intuition of Section 2.1. In summary, "pattern collapse" leads to more significant trajectory differences across different samples, and the source of the trajectory differences between ID and OOD samples is the "early stabilization" phenomenon.
In detail, the function of each section is:
* In Section 1, we **find the "pattern collapse" (in Figure 1)** in the output space;
* In Section 2.1, we introduce the intuition: **the presence of "pattern collapse"** causes the convergence of the trajectory endpoints of different samples, **leading to significant trajectory differences across samples (As stated in Lines 104 Hypothesis 1)**. We justify this intuition through theoretical modeling and proving.
* We already have the intuition that trajectories can be a good measure, but it is still unknown what kind of difference exists between the trajectories of the ID and OOD samples. Thus, in Section 2.2, we conducted empirical experiments to observe the trajectory volatilities of different scenarios and discover the phenomenon of **"early stabilization" (in Figure 3), which is the root cause of the trajectory differences.**
We will state this motivation line at the start of Section 2 in the updated version.
---
> W3: The datasets for experiments are quite limited.
Thanks for pointing out the limited datasets. We illustrate this in terms of both data type and data size:
### 1. Data Type
As for the data type, **we have collected 10 of the most commonly used datasets** in the LLM mathematical reasoning research for our experiments, and **they cover six types of mathematical tasks and all levels of difficulties from elementary school to college**. Due to space limits, we reported the average results for each setting in the main text, and the results on each dataset have been shown in Appendix E (Tables 8-11). Overall, our dataset types have been as rich as possible.
### 2. Data Size
Data size is a limitation as we mentioned in Section Limitation. However, this is caused by the particular nature of the mathematical reasoning research field, and is not a subjective constraint on us, for the following reasons:
* Compared to traditional text generation tasks such as summarization and translation, mathematical reasoning tasks did not receive much attention prior to the era of LLMs, and thus there are few general-purpose datasets. At the same time, automated construction of mathematical datasets needs to be based on complex rules, resulting in complex mathematical tasks requiring significant and time-consuming human intervention[1,2], making diversity and size severely limited.
* After the advent of the Chain-of-Thought technique[3], mathematical reasoning started to receive more attention from the NLP field, but some of the more complex mathematical tasks are usually built for LLM evaluation[4], such as TheoremQA (600 samples)[5], SAT-Math (220 samples)[6], MMLU-Math (974 samples)[7], making the large-scale training sets not specifically constructed.
All these factors make the dataset size in the field of mathematical reasoning much smaller than traditional generative tasks.
On the other hand, **we are the first to study OOD detection on mathematical reasoning, and the current data type is sufficient to demonstrate the generality of our method.** If new large-scale mathematical reasoning datasets become available, our method can be further applied.
[1] Solving general arithmetic word problems, EMNLP 2015.
[2] Measuring mathematical problem solving with the math dataset, NeurIPS 2021.
[3] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS 2022.
[4] MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning, ICLR 2024.
[5] TheoremQA: A Theorem-driven Question Answering Dataset. ACL 2023.
[6] Agieval: A human-centric benchmark for evaluating foundation models. NAACL 2024.
[7] Measuring massive multitask language understanding. ICLR 2021.
---
**We expect the above responses to address your concerns, and look forward to your more positive comments.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, and my concerns have been mostly addressed. As it is a very important research problem and many illustrations and experiments have been supplemented, I'm happy to increase my score.
---
Reply to Comment 1.1.1:
Title: Thanks for recognition
Comment: Glad to see that we have addressed most of your concerns and thanks for your recognition of our work! | Summary: This work discusses a novel method for out-of-distribution (OOD) detection in generative language models (GLMs), particularly in the context of mathematical reasoning tasks. The key insights are: 1) The high-density output space in mathematical reasoning leads to a "pattern collapse" that causes larger discrepancies in the embedding shift trajectory between different samples in latent spaces, 2) GLMs exhibit early stabilization for in-distribution (ID) samples in mathematical reasoning, while OOD samples do not show this behavior.
Strengths: The authors propose a novel trajectory-based method called "TV score" that leverages the unique characteristics of the high-density output space in mathematical reasoning tasks to effectively detect OOD samples. This approach goes beyond the traditional OOD detection methods focused on uncertainty estimation and embedding distance measurement, which struggle in the challenging mathematical reasoning domain.
Robust OOD detection is crucial for the real-world deployment of generative language models, as these models are susceptible to performance degradation when faced with out-of-distribution inputs. The authors' work addresses a practical and important problem in the field, as mathematical reasoning tasks are increasingly incorporated into language models with high-stakes applications.
The authors' analysis of the unique characteristics of the input and output spaces in mathematical reasoning tasks provides valuable theoretical insights into the challenges posed by this domain for OOD detection.
Weaknesses: The underlying mechanism of the TV score method and its relationship to the observed "pattern collapse" in the output space is not fully explained. Providing a more detailed analysis and visualization of the trajectory dynamics, as well as the intuition behind the choice of trajectory volatility as the detection metric, would enhance the interpretability of the approach.
The computational complexity of the TV score method is not discussed, which is an important consideration for real-world deployment, especially in resource-constrained environments. Investigating the computational efficiency of the method and exploring potential optimizations or approximations would be valuable for improving its practical applicability.
The paper does not address the robustness of the TV score method to adversarial attacks, which is a critical consideration for the security of generative language models. Evaluating the method's performance under different types of adversarial perturbations and developing strategies to improve its robustness would be a valuable extension of this work.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the underlying mechanism of the TV score method, and how does it relate to the observed "pattern collapse" in the output space? How can providing a more detailed analysis and visualization of the trajectory dynamics, as well as the intuition behind the choice of trajectory volatility as the detection metric, enhance the interpretability of the approach?
How does the computational complexity of the TV score method impact its real-world deployment, especially in resource-constrained environments? What steps can be taken to investigate the computational efficiency of the method and explore potential optimizations or approximations to improve its practical applicability?
How robust is the TV score method to adversarial attacks, and what are the critical considerations for the security of generative language models? What steps can be taken to evaluate the method's performance under different types of adversarial perturbations and develop strategies to improve its robustness?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for your constructive comments! We will respond to your concerns one by one.**
---
> W1 & Q1: The mechanism of TV score ... more analysis and visualization of trajectory ... intuition behind the choice of trajectory volatility ...
### 1. Mechanism of TV score and its relationship to "pattern collapse"
The TV score and "pattern collapse" are not directly related; they are connected through the theoretical intuition of Section 2.1. Refer to **"General Rebuttal: Motivation line from "pattern collapse" to "early stabilization" and TV score"** for details.
### 2. Intuition behind the choice of trajectory volatility
The intuition is our Hypothesis 1 (line 104): the "pattern collapse" in mathematical reasoning scenarios leads to more significant differences among different samples' trajectories compared to traditional generative tasks (we have modeled and proved this intuition from a theoretical perspective in Section 2.1).
### 3. Detailed analysis and visualization of trajectory dynamics
Refer to **"General Rebuttal: Universality of "Early Stabilization" phenomenon in mathematical reasoning (Expanded visualization of Figure 3 in the paper)"** for detailed responses.
---
> W2 & Q2: The computational complexity is not discussed ...
Thanks for your consideration. After obtaining all outputs $\boldsymbol{y_l}$ of each layer, there are two main steps to obtain the final scores:
(1) Get ID Data Information: Fitting Gaussian distribution $\mathcal{G}_l$ = $\mathcal{N}(\boldsymbol{\mu}_l, {\boldsymbol \Sigma}_l)$ for $L$ layers
${\boldsymbol \Sigma}_l$ is a diagonal matrix because $d$ embedding dimensions are independent, so we only need to compute the mean and variance of each dimension of all $n$ ID sample embeddings.
* Compute mean ${\boldsymbol{\mu}}_l$: $d(n-1)$ addition operations and $d$ multiplication operations are required;
* Compute variance ${\boldsymbol \Sigma}_l$: $dn+d(n-1)=d(2n-1)$ addition operations and $dn+d=d(n+1)$ multiplication operations are required.
We fit $L$ Gaussian distributions, so the number of addition and multiplication operations are both $\mathcal{O}(Ldn)$.
We report the computation time of fitting Gaussian distribution in the ID dataset MultiArith ($n$=600):
| time (s) |
| --- |
| 1.132 |
*Note: this part is a one-time event and only needs to be performed once.*
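The one-time fitting step above can be sketched with numpy as follows; the array shapes, variable names, and the toy embedding dimension are illustrative assumptions rather than details of the released implementation:

```python
import numpy as np

def fit_layer_gaussians(embeddings):
    """Fit a diagonal Gaussian N(mu_l, Sigma_l) for each layer.

    embeddings: (L, n, d) array -- L layers, n ID samples, d dimensions.
    Since the d embedding dimensions are treated as independent, Sigma_l
    is diagonal and only per-dimension means and variances are needed.
    """
    mu = embeddings.mean(axis=1)   # (L, d) means, O(Ldn) additions
    var = embeddings.var(axis=1)   # (L, d) diagonal of Sigma_l
    return mu, var

rng = np.random.default_rng(0)
emb = rng.normal(size=(32, 600, 64))   # e.g. 32 layers, n=600 (MultiArith), toy d=64
mu, var = fit_layer_gaussians(emb)     # one-time step over the ID data
```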
(2) Get OOD Data Information: Compute TV Score
For $k=0$, we need to compute the Mahalanobis Distance between OOD sample and ID distribution (Eq 6). Since ${\boldsymbol \Sigma}_l$ is a diagonal matrix, it involves only simple vector multiplication and does not require matrix multiplication. Specifically, it requires $d + (d-1) = 2d-1$ addition operations and $2d$ multiplication operations. We have $L$ layers, so the numbers of addition and multiplication operations are both $\mathcal{O}(Ld)$.
For $k>0$, for each increase in $k$ by 1, $L$ Gaussian distribution differences, $L$ embedding differences, and $L$ MD computations need to be performed. The increasing numbers of addition and multiplication are both $\mathcal{O}(Ld)$.
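With a diagonal ${\boldsymbol \Sigma}_l$, the Mahalanobis distance reduces to elementwise vector operations, and each increment of $k$ differences the trajectory and the ID Gaussian parameters once more. A hedged sketch of these two steps — the exact aggregation in Eq. 6 of the paper may differ, and the handling of the differenced variances here is our assumption:

```python
import numpy as np

def mahalanobis_diag(x, mu, var):
    # Diagonal-covariance MD: sqrt(sum_i (x_i - mu_i)^2 / var_i),
    # no matrix inversion or matrix multiplication needed.
    return np.sqrt(np.sum((x - mu) ** 2 / var, axis=-1))

def tv_score_sketch(y, mu, var, k=0):
    # Take k-th order differences of the trajectory and of the ID
    # Gaussian parameters, then one MD per remaining layer pair.
    # Each increment of k adds O(Ld) additions/multiplications.
    dy, dmu, dvar = y, mu, var
    for _ in range(k):
        dy = np.diff(dy, axis=0)
        dmu = np.diff(dmu, axis=0)
        dvar = dvar[1:] + dvar[:-1]   # variance of a difference (independence)
    return mahalanobis_diag(dy, dmu, dvar)

rng = np.random.default_rng(1)
mu = rng.normal(size=(32, 64))              # per-layer ID means
var = rng.uniform(0.5, 1.5, size=(32, 64))  # per-layer ID variances (diagonal)
y = rng.normal(size=(32, 64))               # one sample's layer trajectory
s0 = tv_score_sketch(y, mu, var, k=0)       # per-layer scores, shape (32,)
s1 = tv_score_sketch(y, mu, var, k=1)       # shape (31,)
```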
We sample 1000 cases to compute TV score. Time is as below:
|k|time mean (s)|time std (s)|
|-|:-:|:-:|
|0|0.0016|0.0001|
|1|0.0033|0.0001|
|2|0.0049|0.0002|
|3|0.0066|0.0002|
|4|0.0082|0.0002|
|5|0.0098|0.0003|
**The complexity analysis and timing results above show that our method is efficient and can be flexibly deployed in realistic scenarios.** *This is one of our strengths, especially compared to some probability-based metrics such as perplexity, which require time-consuming softmax exponential computation.*
---
> W3 & Q3: The paper does not address the robustness ...
Thanks for your consideration. However, prior works on OOD detection did not take this into account, so robustness for OOD detection is not yet a well-defined problem. Nevertheless, we have done our best to design experiments to verify this point.
We make the following assumption and goal: For a perturbed ID sample, the sample still belongs to the ID sample in realistic scenarios, but models may misidentify it as an OOD sample. We need to avoid this misidentification.
We refer to [1] for perturbation method to input data in language models:
* Paraphrasing: We generate one paraphrased input by querying ChatGPT using the prompt in [1];
* Dummy Tokens: We randomly select tokens that marginally influence the original meaning and append them to the input. Such tokens could be newline characters, tab spaces, ellipses, or supplementary punctuation marks.
We compare two settings:
* Original: Results in our paper
* Perturbation: We replace ID samples in the test set with the perturbed text. We report results under two perturbations.
||Llama2-7B||||GPT2-XL||||
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||Far-shift OOD||Near-shift OOD||Far-shift OOD||Near-shift OOD||
||AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|
|Original|98.76±0.11|5.21±0.98|92.64±0.39|28.39±1.38|93.47±0.08|24.10±0.95|94.86±0.23|13.82±0.36|
|Perturbation w/ Paraphrasing|97.94±0.12|5.75±1.00|91.88±0.42|29.28±1.41|93.12±0.09 |24.67±0.97|94.10±0.24|15.02±0.39|
|Perturbation w/ Dummy Tokens|98.54±0.11|5.54±0.98|92.43±0.40|29.16±1.41|93.47±0.08|24.10±0.95|95.01±0.22|12.78±0.34|
**Our method can largely defend against some perturbations in realistic scenarios, showing its strong robustness.** *We conjecture that this is because our method considers a large amount of information in the middle layer, whereas probability- or embedding-based methods only consider a single layer in the output/input, which makes our method better resistant to the influence of some random factors in the output layer, such as overconfidence, and therefore more robust.*
As for more explorations, we leave it for future work.
[1] SPUQ: Perturbation-Based Uncertainty Quantification for Large Language Models. EACL, 2024.
---
**We expect these responses to address your concerns, and look forward to your more positive comments.**
---
Rebuttal 2:
Title: Thanks for reviewing & Invite feedback
Comment: Dear Reviewer SAjz,
We want to express our sincere appreciation for your efforts and time spent reviewing our work and the constructive comments.
During the rebuttal period, we have provided a detailed response to address all the concerns mentioned by you about **mechanism, complexity, and robustness**. With the reviewer-author discussion period ending in one day, we kindly invite you to give some feedback on our responses, and we really hope our responses adequately address your concerns.
Best,
Authors
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer SAjz,
With the reviewer-author discussion period coming to an end, we really hope that we can get your feedback, and that our responses adequately address your concerns.
Best,
Authors | Rebuttal 1:
Rebuttal: General Rebuttal: Motivation line from "pattern collapse" to "early stabilization" and TV score
---
To summarize, our motivation line is:
**(Section 1, Figure 1): We find "pattern collapse" in the output space**
**-> (Section 2.1, Theoretical Intuition and Proving): The "pattern collapse" leads to more significant trajectory differences across different samples**
**-> (Section 2.2, Empirical Experiments): The source of the trajectory differences between ID and OOD samples is the "Early Stabilization" phenomenon**
**-> (Section 3): The "Early Stabilization" makes our trajectory-based detection method effective.**
---
General Rebuttal: Existence and Universality of "Pattern Collapse" phenomenon in mathematical reasoning
---
### 1. Why "pattern collapse"? **Tokenization is the key**
**For generative language models (GLMs), they model real numbers or mathematical expressions not in a mathematical sense, but based on discrete token sequences after tokenization. Thus, the collapse occurs at the token level, not at the full mathematical expression level.**
Due to the autoregressive generative nature of GLMs, the collapse phenomenon occurs during the prediction of each token.
### 2. More cases
For mathematical expressions that are very different in the mathematical sense, **after tokenization, they all contain only 0-9 number tokens and a limited number of special symbols**, such as decimal points, slashes, root signs, curly brackets. They make up a very small percentage of the vocab:
* 5517 -> ['▁', '5', '5', '1', '7']
* 21.59 -> ['▁', '2', '1', '.', '5', '9']
* 71/91 -> ['▁', '7', '1', '/', '9', '1']
* -\\sqrt{3255} -> ['▁', '-\\', 'sqrt', '{', '3', '2', '5', '5', '}']
* y^4-2y^3+7y^2+y-5 -> ['▁', 'y', '^', '4', '-', '2', 'y', '^', '3', '+', '7', 'y', '^', '2', '+', 'y', '-', '5']
* x^2/x^5 -> ['▁', 'x', '^', '2', '/', 'x', '^', '5']
### 3. "Pattern collapse" generally holds for all types of mathematical reasoning tasks
After explaining the tokenization behind "pattern collapse", we now demonstrate the universality of "Pattern Collapse" phenomenon in mathematical reasoning.
> Setup
To demonstrate the universality of “pattern collapse” across various tasks of mathematical reasoning, we conduct the following statistical experiment:
We **categorize the mathematical tasks into various types across different domains and difficulties, then count the token type number and token duplication rate corresponding to each category, and the vocab coverage**. We also test translation and summarization tasks by taking samples with the same token size as the mathematical reasoning dataset for a clear comparison.
We use the Llama2 tokenizer (vocab size = 32,000). The metrics are computed as:
* Token Duplication Rate = 1 - token type number / token number
* Vocab Coverage = token type number / Vocab size
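The two metrics above can be computed directly from a tokenized corpus. A minimal Python sketch — the token sequence below is illustrative only, not drawn from the actual datasets:

```python
def duplication_rate(tokens):
    # Token Duplication Rate = 1 - token type number / token number
    return 1 - len(set(tokens)) / len(tokens)

def vocab_coverage(tokens, vocab_size):
    # Vocab Coverage = token type number / vocab size
    return len(set(tokens)) / vocab_size

# Illustrative sequence: digit-level tokens, as produced by tokenizing
# arithmetic text (few distinct types, many repetitions).
tokens = list("5517" "2159" "7191" "3255") * 100   # 1600 tokens, 6 types
dup = duplication_rate(tokens)       # high, close to 1
cov = vocab_coverage(tokens, 32000)  # tiny fraction of the 32k vocab
```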
> Statistics Data
***{The statistics data comparisons are shown in PDF Table 2.}***
> Analysis
From the results, we can conclude that:
* Existence: The **average token duplication rate is up to 99% on all math tasks**, and even **a staggering 99.9% on some simple arithmetic tasks**; In contrast, the **token duplication rate on the text generation task is only about 60%, with about 2000 different types of token**, and still increasing as the total number of tokens increases. These data and comparisons demonstrate that pattern collapse occurs in mathematical reasoning and not in text generation.
* Universality: The token duplication rate **exceeds 97% on all seven math tasks** of different difficulties and types.
> Conclusion
This evidence demonstrates that **"pattern collapse" occurs across generally all types of mathematical reasoning tasks**.
---
General Rebuttal: Universality of "Early Stabilization" phenomenon in mathematical reasoning (Expanded visualization of Figure 3 in the paper)
---
> Setup
**We present detailed visualization of the "early stabilization" phenomenon (detailed version of Figure 3 in the paper), which contains comparisons of trajectory volatility (embedding differences between neighboring layers) curves and sample standard deviation (color shading) between ID and OOD samples.**
The language model is Llama2-7B (32 layers, so 31 neighboring layer-pairs). The ID data is the MultiArith dataset (Arithmetic domain, primary difficulty), and the OOD data consists of 10 datasets from different tasks and difficulties (identical to the setup in Section 4.1):
* Near-shift OOD: GSM8K, SVAMP, AddSub, SingleEq, SingleOp (Arithmetic domain; middle-school difficulty)
* Far-shift OOD: MATH-Algebra/Geometry/Counting_and_Probability/Number_Theory/Precalculus (algebra, geometry, counting-and-probability, number-theory, and precalculus domains; university difficulty)
> Visualization and Corresponding Statistics Data
***{The visualization figures of all datasets are shown in PDF Figure 1.}***
***{The average volatility statistics for layers 1-31 (full layers), 20-31, and 26-31 on each dataset are shown in PDF Table 1.}***
> Analysis
From the detailed visualizations and statistical data, we can conclude that:
* Dataset Level:
* ID samples all show the "Early Stabilization" phenomenon compared to OOD math problems of different types and difficulties, because the average volatility in 26-31 layers of the ID dataset is significantly lower than that of OOD datasets;
* The mid-to-late volatilities on the far-shift OOD dataset are more dramatic due to the greater deviation from the distribution of the ID dataset.
* Sample Level: Trajectory volatility can vary somewhat across samples, but in general, ID samples have less trajectory volatility than OOD samples, and in the mid-to-late layers, ID samples generally complete the main inference, while OOD samples still have large learning volatility overall.
> Conclusion
**These detailed visualizations and analyses can demonstrate the generalizability of the "early stabilization" phenomenon in mathematical reasoning scenarios.**
Pdf: /pdf/b49e65d7aa2e2ecadce270edf3b7d2cd453623d3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Is O(log N) practical? Near-Equivalence Between Delay Robustness and Bounded Regret in Bandits and RL | Accept (poster) | Summary: The paper explores interactive decision-making scenarios encompassing bandits, contextual bandits, and reinforcement learning, focusing on the concept of regret minimization. It highlights the Graves-Lai constant, where its zero value is crucial for achieving bounded regret in interactive decision-making. This condition, however, may be stringent for practical applications, prompting questions about its feasibility. The study extends this analysis to include robustness against unknown reward delays, termed ϵ-robustness, which measures algorithmic resilience to misspecified delay models. The main finding asserts that achieving ϵ-robustness is impossible for consistent algorithms unless the Graves-Lai constant is zero. The paper contrasts this negative result with positive outcomes in linear reward models, demonstrating that a zero Graves-Lai constant is sufficient for achieving bounded regret without knowledge of delay models. This dual perspective underscores the theoretical and practical implications of the Graves-Lai constant in designing robust learning algorithms for interactive decision-making under uncertainty.
Strengths: The paper makes significant contributions by establishing theoretical results that link the Graves-Lai constant to delay robustness in interactive decision-making scenarios. It introduces the concept of ϵ-delay robustness, which quantifies how learning algorithms perform under ϵ-contaminated delay models.
It provides rigorous theoretical foundations, leveraging concepts from robust statistics and decision theory to analyze the impact of delay model misspecification on learning algorithms.
The problem of attributing delayed rewards to decisions (anonymous feedback) is clearly articulated, which is essential for understanding the challenges addressed by the proposed algorithms.
Weaknesses: While the paper provides theoretical analysis and proofs, empirical validation through simulations or real-world experiments could strengthen the practical relevance of the results. Lack of empirical validation might limit the confidence in how well the theoretical findings translate into actual performance improvements in real interactive decision-making scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you provide a more intuitive explanation of how delay robustness affects decision-making in interactive systems?
Are there specific conditions under which the proposed delay-robust algorithms may not perform optimally? What are the limitations of the theoretical framework?
How confident are you that the proposed algorithms will perform well in practical scenarios, given the complexities and uncertainties inherent in real-world applications?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Explicit discussion of the limitations of the proposed approach and avenues for future research could enhance the paper. Addressing these aspects would provide a more balanced view of the scope and potential of the findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Comment: References to the official Rebuttal below:
\[1\]: Foster, Dylan J., et al. "The statistical complexity of interactive decision making." arXiv preprint arXiv:2112.13487 (2021).
\[2\]: Dong, K., & Ma, T. (2023). Asymptotic instance-optimal algorithms for interactive decision making. The Eleventh International Conference on Learning Representations (ICLR)
\[3\]: Wagenmaker, A. J., & Foster, D. J. (2023). Instance-optimality in interactive decision making: Toward a non-asymptotic theory. In The Thirty Sixth Annual Conference on Learning Theory (pp. 1322-1472). PMLR.
\[4\]: Kang, H., & Kumar, P. R. (2023). Recommender system as an exploration coordinator: a bounded O (1) regret algorithm for large platforms. arXiv e-prints, arXiv-2301.
\[5\]: Hao, B., Lattimore, T., & Szepesvari, C. Adaptive exploration in linear contextual bandit. In International Conference on Artificial Intelligence and Statistics (pp. 3536-3545). PMLR. (2020)
---
Rebuttal 2:
Rebuttal: ### **Answer to comments in "Weaknesses"**
**Comment 1.**
> While the paper provides theoretical analysis and proofs,
empirical validation through simulations or real-world experiments
could strengthen the practical relevance of the results.
Lack of empirical validation might limit the confidence in
how well the theoretical findings translate into actual performance
improvements in real interactive decision-making scenarios.
**Author response to Comment 1**:\
Thank you for your comment. It is true that empirical validation or simulation studies are
out of our paper's scope, as in other papers on DMSO. \
_(DMSO is a framework that generalizes many different sequential decision-making problems, so it is quite uncommon for papers on DMSO to include simulation experiments for particular environments. Examples include all the key papers cited in this paper:_
* _The first paper that suggested the concept of DMSO ([Foster, Kakade, Qian, Rakhalin, 2021][1] \[1\])_
* _The paper that characterizes Graves-Lai coefficient for DMSO ([Dong and Ma, 2023 ][2] \[2\])_
* _The paper that proposes an instance-optimal algorithm for DMSO ([Wagenmaker and Foster, 2023 ][3] \[3\])_\
* And so forth, including all other papers on DMSO.
)
However, to help readers understand the real-world applicability of this paper's theoretical results,
the new draft will have a paragraph on how the Graves-Lai constant being 0 works in practice.
* As discussed in our draft, for linear contextual bandits, the necessary & sufficient condition for bounded regret ([Hao, Lattimore, and Szepesvari 2020][5] \[5\]) can be easily satisfied by having a rich enough context space ([Kang and Kumar, 2023][4] \[4\]).
In [Kang and Kumar, 2023][4] \[4\]'s Spotify context, millions of daily users can be considered a rich enough context for exploring the 60,000 new songs uploaded daily. That is, we can apply our algorithm to this Spotify music recommendation and exploration example.
### **Answer to comments in "Questions"**
**Comment 1.**
> Can you provide a more intuitive explanation of how delay robustness affects decision-making in interactive systems?
**Author response to Comment 1**:\
To achieve O(log n) regret, you must pull the best arm far more often than the other arms. This hinders identification of non-optimal arms: even a very small contamination of your knowledge of the optimal arm's delay distribution
prevents you from gathering good enough statistical information to conclude whether a given reward came from a non-optimal arm.
**Comment 2.**
> Are there specific conditions under which the proposed delay-robust algorithms may not perform optimally? What are the limitations of the theoretical framework?
**Author response to Comment 2**:
* Recall that the proposed algorithm is a proof-of-concept algorithm whose purpose is to prove the equivalence of bounded regret and any-level delay robustness.
To this end, the key assumption we make for this algorithm is that the Graves-Lai constant is 0 (= the iff condition for bounded regret ([Hao, Lattimore and Szepesvari 2020][5] \[5\])).
We prove a $poly(n)$ lower bound for the case when this assumption does not hold, but we do not prove any upper bound for the case when the "Graves-Lai constant being 0" condition fails.
* For the non-linear setting, whether our positive result carries over is an open question.
**Comment 3.**
> How confident are you that the proposed algorithms will perform well in practical scenarios, given the complexities and uncertainties inherent in real-world applications?
**Author response to Comment 3**:
* As described above, in real-world online platforms with diverse users, the condition that the Graves-Lai constant is 0 easily holds.
To see how well a bounded-regret algorithm like ours works in practical recommender systems, see [Kang and Kumar, 2023][4] \[4\].
* The Graves-Lai constant being 0 is not likely to hold in smaller systems.
### **Answer to comments in "Limitations"**
**Comment 1.**
>Explicit discussion of the limitations of the proposed approach and avenues for future research could enhance the paper. Addressing these aspects would provide a more balanced view of the scope and potential of the findings.
**Author response to Comment 1**:\
Thank you for this comment. In the discussion section, we will include the following ideas:
* While this paper proves a $poly(n)$ lower bound for the case when "Graves-Lai constant being 0" does not hold,
an upper bound for this case remains an open question.
* The proposed algorithm is a proof-of-concept algorithm whose purpose is to prove the equivalence of bounded regret and any-level delay robustness.
To this end, the key assumption we make for this algorithm is that the Graves-Lai constant is 0 (= the iff condition for bounded regret ([Hao, Lattimore and Szepesvari 2020][5] \[5\])). While large systems such as Spotify satisfy this condition (millions of daily users for exploring 60,000 new songs), smaller systems will not.
* As discussed in Section 4.2.1, cross-informativeness is
the key to achieving the positive result in our paper. For linear cases, we devote Section 5 to showing that cross-informativeness holds when the Graves-Lai constant is 0.
Whether the positive result in our paper extends to other types of models is an open question. Toward this direction, one may want to show that a model's particular structure allows the Graves-Lai constant being 0 to imply cross-informativeness.
[1]: https://arxiv.org/abs/2112.13487
[2]: https://openreview.net/forum?id=oGVu9spZaJJ
[3]: https://proceedings.mlr.press/v195/wagenmaker23a.html
[4]: https://arxiv.org/abs/2301.12571
[5]: https://proceedings.mlr.press/v108/hao20b.html
---
Rebuttal Comment 2.1:
Comment: I have read the authors' rebuttal and the comments from the other reviewers. I would like to keep my score unchanged. | Summary: This paper studies anonymous delay (i.e. when it is unknown which trial the delayed reward came from) in interactive decision making. It gives a strongly negative result that if the reward delay distribution is not exactly known and the “Graves-Lai constant” is non-zero then no algorithm has sub-polynomial regret. It also gives a positive result that, for linear-rewards models, the Graves-Lai constant being zero is sufficient for achieving bounded regret with no knowledge of the delay distribution.
Strengths: I think that the negative result (Theorem 4.1) is quite neat although I am not sure of its importance (the other reviewers’ opinions on this will certainly influence my final score).
Weaknesses: Whilst Theorem 4.1 rules out (unless the Graves-Lai constant is zero) sub-polynomial regret with unknown reward distributions, it does not rule out algorithms with very small polynomial regret. A lower bound on a polynomial exponent would be a much better result in my opinion.
The positive result seems limited. As the authors say in the abstract - “as the condition of the Graves-Lai constant being zero may be a strong requirement for many applications, the practical usefulness of pursuing bounded regret (or in this case unknown delay functions) has been questioned”.
Line 7 of Algorithm 1 suggests that F must be finite. If this is true then this is a serious limitation.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 131: what is an $\epsilon$-probability removal?
Line 135: $\nu$ looks a lot like $v$ - I recommend a different letter
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Answer to comments in "Weaknesses"**
**Comment 1**.
> "Whilst Theorem 4.1 rules out (unless the Graves-Lai constant is zero) sub-polynomial regret with unknown reward distributions, it does not rule out algorithms with very small polynomial regret. A lower bound on a polynomial exponent would be a much better result in my opinion."
**Author response to comment 1**:
* When the Graves-Lai constant is not 0, our first main result shows that no algorithm can achieve sub-polynomial regret. In other words, **"we prove a _poly(n)_ lower bound"**, which resonates with your point that _"a lower bound on a polynomial exponent would be nice"_. We greatly appreciate your pointing this out, as we had not explicitly stated this sentence in the previous draft.
**Comment 2**.
> "The positive result seems limited. As the authors say in the abstract - “as the condition of the Graves-Lai constant being zero may be a strong requirement for many applications, the practical usefulness of pursuing bounded regret (or in this case unknown delay functions) has been questioned"
**Author response to Comment 2**:\
The cost of satisfying the condition of the Graves-Lai constant being zero indeed calls into question the usefulness of bounded-regret algorithm design, but **the cost we must pay is not prohibitive in practice**. As discussed in our draft, for linear contextual bandits, for example, the necessary & sufficient condition for bounded regret ([Hao, Lattimore, and Szepesvari 2020][2] \[2\]) can be easily satisfied by having a rich enough context space ([Kang and Kumar, 2023][3] \[3\]). For example, in [Kang and Kumar, 2023][3] \[3\]'s Spotify context, millions of daily users can be considered a rich enough context for exploring the 60,000 new songs uploaded daily.
**Comment 3**.
> "Line 7 of Algorithm 1 suggests that F must be finite. If this is true then this is a serious limitation."
**Author response to Comment 3**:\
Thank you for your suggestion. We modified our Algorithm 1 pseudocode to remove the error.\
[Link to the new Algorithm 1 pseudocode's image][4]
### **Answer to comments in "Questions"**
**Comment 1.**
> "Line 131 : what is an $\epsilon$-probability removal?"
**Author response to Comment 1**:\
$\epsilon$-probability removal from a distribution is also called "Subtractive Contamination" in robust statistics literature. The definition is as follows:\
_Definition ($\epsilon$-probability removal or Subtractive Contamination)_\
_Given a parameter $0<\epsilon<1$ and a distribution $D$ on inliers, we say that one can sample from $D$ with $\epsilon$-subtractive contamination if the following holds: for some event $R$ with probability $1-\epsilon$, one can obtain independent samples from the distribution of $D$ conditioned on $R$. In other words, with probability $\epsilon$, the event $R^c$ occurs and these samples are removed from the data stream. This allows an adversary to remove an $\epsilon$-fraction of inlier samples._
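To illustrate the definition above, here is a minimal numerical sketch of our own (not from the paper): delays are drawn from an illustrative Exponential(1) inlier distribution $D$, and the removed event $R^c$ is chosen adversarially as "the sample falls in the top $\epsilon$-fraction", so the learner only sees $D$ conditioned on $R$. Even a small $\epsilon$ visibly biases the observed mean delay, which is why methods assuming precise knowledge of the delay mean break under contamination.

```python
import random

def contaminated_samples(n, eps, rng):
    # Inlier delay distribution D: Exponential(1) (illustrative choice).
    xs = [rng.expovariate(1.0) for _ in range(n)]
    # Adversarial subtractive contamination: let the removed event R^c be
    # "the sample is in the top eps-fraction"; the learner only observes
    # samples from D conditioned on the complementary event R.
    xs.sort()
    observed = xs[: int(n * (1 - eps))]
    return xs, observed

rng = random.Random(0)
full, seen = contaminated_samples(100_000, 0.05, rng)
mean_full = sum(full) / len(full)   # close to the true mean 1.0
mean_seen = sum(seen) / len(seen)   # biased low by the 5% removal
print(round(mean_full, 3), round(mean_seen, 3))
```

With $\epsilon = 0.05$ the observed mean drops from roughly 1.0 to roughly 0.84, so any estimator calibrated to the uncontaminated mean is systematically off.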
**Comment 2.**
> Line 135: $\nu$ lots a lot like $v$ - I recommend a different letter.
**Author response to Comment 2**:\
The new draft now uses $\mu$.\
(Please note that we cannot show this change, as NeurIPS does not allow pdf draft replacement in OpenReview)
[1]: https://epubs.siam.org/doi/abs/10.1137/S0363012994275440
[2]: https://proceedings.mlr.press/v108/hao20b.html
[3]: https://arxiv.org/abs/2301.12571
[4]: https://postimg.cc/VSdHXhf7
\[1\]: Graves, T. L., & Lai, T. L. (1997). Asymptotically efficient adaptive choice of control laws in controlled Markov chains. SIAM Journal on Control and Optimization, 35(3), 715-743.
\[2\]: Hao, B., Lattimore, T., & Szepesvari, C. (2020). Adaptive exploration in linear contextual bandit. In International Conference on Artificial Intelligence and Statistics (pp. 3536-3545). PMLR.
\[3\]: Kang, H., & Kumar, P. R. (2023). Recommender system as an exploration coordinator: a bounded O(1) regret algorithm for large platforms. arXiv preprint arXiv:2301.12571.
---
Rebuttal Comment 1.1:
Comment: I do not think that your response to comment 1 is correct. For instance, we could have, for every $\epsilon>0$, an algorithm with a regret of $\mathcal{O}(T^\epsilon)$. This means there is no polynomial lower bound. This does not mean that there exists an algorithm with regret $o(T^\epsilon)$ for every $\epsilon>0$. Hence, proving that there exists no algorithm with sub-polynomial regret is not the same thing as proving that there exists a polynomial lower bound on the regret. In any case, my question is asking if you can give the lower bound on the exponent (if one exists), which I guess would be problem dependent.
I don't understand your response to comment 2 - could you please rephrase?
In regards to comment 3 - in your new algorithm (for infinite F) how computationally hard is it to find such a g?
---
Rebuttal 2:
Comment: **Comment 1.**
> I do not think that your response to comment 1 is correct. For instance, we could have, for every $\epsilon>0$, an algorithm with a regret of $\mathcal{O}\left(T^\epsilon\right)$. This means there is no polynomial lower bound. This does not mean that there exists an algorithm with regret $o\left(T^\epsilon\right)$ for every $\epsilon>0$. Hence, proving that there exists no algorithm with sub-polynomial regret is not the same thing as proving that there exists a polynomial lower bound on the regret. In any case, my question is asking if you can give the lower bound on the exponent (if one exists), which I guess would be problem dependent.
**Answer to comment 1**.\
We see your point. Although this paper's scope does not include an answer to what you are asking for, answering it would be a really nice future research direction. **We will state this point in the discussion section, as it marks a sharp boundary of our contribution**. This paper's focus is indeed on rejecting the popular notion of "consistent" algorithm design, which assures _uniform_ $o(n^p)$ regret for all $p>0$ and for all problem instances. As you pointed out, if we just say that there is no algorithm that satisfies $n^p$ regret for some $p$, we are wrong; we can at most say that there is no algorithm that satisfies $n^p$ regret for all $p$ for all problem instances (i.e., it is a kind of _uniform_ $poly(T)$ lower bound, **which is much weaker than a $poly(T)$ lower bound**). Your point is clearly legitimate, as our result only negates the notion of consistency. Thank you very much for your comment.
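To make the quantifier gap explicit (in our own shorthand, with $R_n(\mathcal{A})$ denoting the regret of algorithm $\mathcal{A}$ after $n$ rounds):

```latex
% What the paper proves: no single algorithm is consistent,
\forall \mathcal{A} \;\; \exists p > 0 : \quad R_n(\mathcal{A}) \neq o(n^p),
% which is strictly weaker than a uniform polynomial lower bound:
\exists p > 0 \;\; \forall \mathcal{A} : \quad R_n(\mathcal{A}) = \Omega(n^p).
```

The reviewer's counterexample (a family of algorithms achieving $O(T^\epsilon)$ for every $\epsilon>0$) defeats the second statement but is compatible with the first.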
**Comment 2.**
>I don't understand your response to comment 2 - could you please rephrase?
**Answer to Comment 2.**\
In linear contextual bandits, for example, the Graves-Lai constant being 0 can be easily satisfied in large enough systems; it only requires some constant times $k \log k$ daily Spotify users (= # of contexts) to satisfy the constraint of the Graves-Lai constant being 0, if the number of new songs (= # of arms) uploaded daily is $k$ ([Kang and Kumar 2023][1]). As there are around 60,000 daily new songs to explore and $60{,}000 \times \log_{10} 60{,}000 \le 300{,}000$ daily users suffice, we can apply any algorithm that requires the Graves-Lai constant being 0 to Spotify (which has more than $100m$ daily users).
[1]: https://arxiv.org/abs/2301.12571
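As a quick numerical sanity check on the figures above (a sketch of our own; it assumes the base-10 logarithm, which is what makes the quoted inequality hold, and absorbs the unstated constant):

```python
import math

k = 60_000                         # new songs (arms) uploaded daily
users_needed = k * math.log10(k)   # ~ k log k contexts, constant omitted
spotify_dau = 100_000_000          # "more than 100m daily users"
# ~287,000 contexts suffice, far below Spotify-scale daily traffic.
print(int(users_needed), users_needed <= 300_000, users_needed <= spotify_dau)
```

So the $k \log k$ context requirement is met with orders of magnitude to spare at Spotify scale.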
**Comment 3.**
>In regards to comment 3 - in your new algorithm (for infinite F) how computationally hard is it to find such a g?
**Answer to Comment 3.**\
We really appreciate your pointing this out. Although $\mathcal{F}_n$ in our new algorithm can be easily computed in some problems like bandits, **in general we may need to assume the existence of an oracle that can find an instance $g\in \mathcal{F}_n$, to abstract away the computational complexity**. Like [Wagenmaker and Foster 2023][2], we will add an assumption on this and state that "We emphasize that the focus of this work is primarily statistical, and leave addressing the computational challenges for specific problems for future work." Thank you for improving this work.
[2]: https://proceedings.mlr.press/v195/wagenmaker23a.html | Summary: This paper studies the relationship between bounded regret and delay robustness in interactive decision-making, which captures bandits, contextual bandits, and reinforcement learning. The authors show that the Graves-Lai constant being zero is necessary for achieving delay model robustness when reward delays are unknown. On the other hand, it is also shown that the Graves-Lai constant being zero is sufficient for achieving bounded regret without delay model knowledge for linear reward models.
Strengths: 1. The paper introduces a novel connection between bounded regret and delay robustness, offering a fresh perspective on interactive decision-making.
2. The paper presents results in both directions, although positive results are limited to linear models.
Weaknesses: Indeed, the requirement for the Graves-Lai constant to be exactly zero is exceedingly strong. In the context of linear bandits, it necessitates that the set of optimal actions spans the entire action space, a condition that is nearly unattainable in practical applications. This consideration could potentially diminish the paper's overall significance.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The positive results are specific to linear reward models. Could you please elaborate on the challenges in extending these results to other types of models?
2. What if the reward delay distribution is precisely known?
3. Has there been any prior research in bandit or reinforcement learning that investigates the connection between regret and delay?
4. Minor typo on Line 253: "indicates" -> "indicate".
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: This theoretical paper may have limited direct societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Comment: References to the official Rebuttal below:
\[1\]: Hao, B., Lattimore, T., & Szepesvari, C. (2020). Adaptive exploration in linear contextual bandit. In International Conference on Artificial Intelligence and Statistics (pp. 3536-3545). PMLR.
\[2\]: Kang, H., & Kumar, P. R. (2023). Recommender system as an exploration coordinator: a bounded O(1) regret algorithm for large platforms. arXiv preprint arXiv:2301.12571.
\[3\]: Pike-Burke, Ciara, et al. "Bandits with delayed, aggregated anonymous feedback." International Conference on Machine Learning. PMLR, 2018.
\[4\]: Thune, Tobias Sommer, Nicolò Cesa-Bianchi, and Yevgeny Seldin. "Nonstochastic multiarmed bandits with unrestricted delays." Advances in Neural Information Processing Systems 32 (2019).
\[5\]: Wu, Han, and Stefan Wager. "Thompson sampling with unrestricted delays." Proceedings of the 23rd ACM Conference on Economics and Computation. 2022.
\[6\]: Jin, Tiancheng, et al. "Near-optimal regret for adversarial mdp with delayed bandit feedback." Advances in Neural Information Processing Systems 35 (2022): 33469-33481.
\[7\]: Masoudian, S., Zimmert, J., & Seldin, Y. (2024). A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays. In ICML 2024 Workshop: Foundations of Reinforcement Learning and Control.
\[8\]: Zimmert, J., & Seldin, Y. (2020, June). An optimal algorithm for adversarial bandits with arbitrary delays. In International Conference on Artificial Intelligence and Statistics (pp. 3285-3294). PMLR.
---
Rebuttal 2:
Rebuttal: ### **Answer to comments in "Weaknesses"**
**Comment 1.**
> Indeed, the requirement for the Graves-Lai constant to be exactly zero is exceedingly strong.
In the context of linear bandits, it necessitates that the set of optimal actions spans the entire action space, a condition that is nearly unattainable in practical applications.
This consideration could potentially diminish the paper's overall significance.
**Author response to Comment 1**:
* **Because the requirement for bounded regret is indeed exceedingly strong, our negative result shines.** We show that the consistent algorithm design regime, one of the most popular algorithm design regimes, may be impractical under anonymous reward delays, since requiring the Graves-Lai constant to be exactly zero is exceedingly strong. **In other words, we prove a reduction of consistent algorithms with delayed rewards to algorithms with bounded regret in DMSO.**
* The cost of satisfying the condition of the Graves-Lai constant being zero indeed calls into question the usefulness of bounded-regret algorithm design, but **the cost we must pay is not prohibitive in practice**. As discussed in our draft, for linear contextual bandits, for example, the necessary & sufficient condition for bounded regret ([Hao, Lattimore, and Szepesvari 2020][1] \[1\]) can be easily satisfied by having a rich enough context space ([Kang and Kumar, 2023][2] \[2\]). For example, in [Kang and Kumar, 2023][2] \[2\]'s Spotify context, millions of daily users can be considered a rich enough context for exploring the 60,000 new songs uploaded daily. That is, we can apply our algorithm to this Spotify music recommendation and exploration example.
### **Answer to comments in "Questions"**
**Comment 1.**
> The positive results are specific to linear reward models.
Could you please elaborate on the challenges in extending these results to other types of models?
**Author response to Comment 1**:\
Thank you for this question. As discussed in Section 4.2.1, cross-informativeness is
the key to achieving such a positive result. For linear cases, we devote Section 5 to showing that cross-informativeness holds when the Graves-Lai constant is 0.
To extend the positive result in our paper to other types of models, one can show that the model's particular structure allows the Graves-Lai constant being 0 to imply cross-informativeness.
We will add this answer to the discussion section.
**Comment 2.**
> What if the reward delay distribution is precisely known?
**Author response to Comment 2**:
* As discussed in Section 1.1 (Related work), we can then apply [Pike-Burke et al., 2018][3] \[3\], which requires the assumption that the mean of the delay distribution is precisely known (an assumption that cannot be met under $\epsilon$-contamination, however small $\epsilon$ is).
* Of course, knowing an _anonymous_ reward's delay distribution precisely is highly unrealistic.
**Comment 3.**
> Has there been any prior research in bandit or reinforcement learning that investigates the connection between regret and delay?
**Author response to Comment 3**:\
Thank you for your question. We believe that we were not clear enough about the fact that this is the first
paper that approaches _anonymous_ delay in rewards through the lens of delay robustness.
For the papers that discuss robustness/unrestricted delay for _non-anonymous_ delays (i.e., the setup where agents can associate each delayed reward to the arm it is from),
we are adding a new separate paragraph with 10-15 papers in the related work section, including:
* [Thune, Cesa-Bianchi, and Seldin (2019)][4] \[4\]
* [Wu, Ha, and Wager (2022)][5] \[5\]
* [Jin, Lancewicki, Luo, Mansour, Rosenberg (2022)][6] \[6\]
* [Masoudian, Zimmert, and Seldin (2024)][7] \[7\]
* [Zimmert and Seldin (2020)][8] \[8\]
* ...
**Comment 4.**
> Minor typo on Line 253: "indicates" -> "indicate".
**Author response to Comment 4**:\
Thank you for pointing this out. We have fixed the typo.
[1]: https://proceedings.mlr.press/v108/hao20b.html
[2]: https://arxiv.org/abs/2301.12571
[3]: https://arxiv.org/abs/1709.06853
[4]: https://proceedings.neurips.cc/paper/2019/hash/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Abstract.html
[5]: https://dl.acm.org/doi/abs/10.1145/3490486.3538376
[6]: https://proceedings.neurips.cc/paper_files/paper/2022/hash/d850b7e0cdc7f1c0820c6ad85405ae94-Abstract-Conference.html
[7]: https://openreview.net/forum?id=aLgJssbizV
[8]: https://proceedings.mlr.press/v108/zimmert20a.html
---
Rebuttal Comment 2.1:
Comment: Thank you for your rebuttal. After considering the rebuttal, I have decided to keep my original rating. | Summary: The paper investigates interactive decision-making in bandits, contextual bandits, and reinforcement learning, focusing on the Graves-Lai constant's role in achieving bounded regret. It establishes that a zero Graves-Lai constant is necessary and sufficient for bounded regret, but questions its practical utility due to its stringent requirements. The study shows that $\epsilon$-robustness in delay model robustness cannot be achieved if the Graves-Lai constant is non-zero, presenting a negative result for consistent algorithms. However, it offers a positive result for linear rewards models, indicating that a zero Graves-Lai constant is sufficient for bounded regret without delay model knowledge, balancing efficiency and robustness.
Strengths: The paper investigated some interesting questions.
Weaknesses: 1. The related work on robustness seems not comprehensive.
2. There is no experiment.
3. The authors didn't give proper citations in definitions or assumptions and so on.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is the definition of "the sub-optimality gap of decision" also used in the previous paper? If yes, give the citation.
2. Is the assumption of "Realizability" generally used in other work? If yes, give the citation.
3. The lower bound is studied by other work. What is the explicit form of the lower bound and can you derive the instance-independent lower bound?
4. What is the application of your algorithm and can you run some experiments?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There is no experiment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Comment: References to the official Rebuttal below:
\[1\]: Foster, Dylan J., et al. "The statistical complexity of interactive decision making." arXiv preprint arXiv:2112.13487 (2021).
\[2\]: Dong, K., & Ma, T. (2023). Asymptotic instance-optimal algorithms for interactive decision making. The Eleventh International Conference on Learning Representations (ICLR)
\[3\]: Wagenmaker, A. J., & Foster, D. J. (2023). Instance-optimality in interactive decision making: Toward a non-asymptotic theory. In The Thirty Sixth Annual Conference on Learning Theory (pp. 1322-1472). PMLR.
\[4\]: Chen, F., Mei, S., & Bai, Y. (2024). Unified Algorithms for RL with Decision-Estimation Coefficients: PAC, Reward-Free, Preference-Based Learning, and Beyond. arXiv preprint arXiv:2209.11745.
\[5\]: Foster, D. J., Golowich, N., & Han, Y. (2023, July). Tight guarantees for interactive decision making with the decision-estimation coefficient. In The Thirty Sixth Annual Conference on Learning Theory (pp. 3969-4043). PMLR.
\[6\]: Foster, D. J., Foster, D. J., Golowich, N., Qian, J., Rakhlin, A., & Sekhari, A. (2023). Model-free reinforcement learning with the decision-estimation coefficient. Advances in Neural Information Processing Systems, 36.
\[7\]: Foster, D. J., Han, Y., Qian, J., & Rakhlin, A. (2024). Online estimation via offline estimation: An information-theoretic framework. arXiv preprint arXiv:2404.10122.
\[8\]: Thune, Tobias Sommer, Nicolò Cesa-Bianchi, and Yevgeny Seldin. "Nonstochastic multiarmed bandits with unrestricted delays." Advances in Neural Information Processing Systems 32 (2019).
\[9\]: Wu, Han, and Stefan Wager. "Thompson sampling with unrestricted delays." Proceedings of the 23rd ACM Conference on Economics and Computation. 2022.
\[10\]: Jin, Tiancheng, et al. "Near-optimal regret for adversarial mdp with delayed bandit feedback." Advances in Neural Information Processing Systems 35 (2022): 33469-33481.
\[11\]: Masoudian, S., Zimmert, J., & Seldin, Y. (2024). A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays. In ICML 2024 Workshop: Foundations of Reinforcement Learning and Control.
\[12\]: Zimmert, J., & Seldin, Y. (2020, June). An optimal algorithm for adversarial bandits with arbitrary delays. In International Conference on Artificial Intelligence and Statistics (pp. 3285-3294). PMLR.
\[13\]: Agarwal, Alekh, et al. "Contextual bandit learning with predictable rewards." Artificial Intelligence and Statistics. PMLR, (2012).
\[14\]: Du, Simon, et al. "Bilinear classes: A structural framework for provable generalization in rl." International Conference on Machine Learning. PMLR, (2021).
\[15\]: Kang, H., & Kumar, P. R. (2023). Recommender system as an exploration coordinator: a bounded O(1) regret algorithm for large platforms. arXiv preprint arXiv:2301.12571.
\[16\]: Hao, B., Lattimore, T., & Szepesvari, C. (2020). Adaptive exploration in linear contextual bandit. In International Conference on Artificial Intelligence and Statistics (pp. 3536-3545). PMLR.
---
Rebuttal 2:
Rebuttal: ### **Answer to comments in "Weaknesses"**
**Comment 1**.
> "There is no experiment."
**Author response to comment 1**:\
As DMSO is a framework that generalizes many different sequential decision-making problems,
such as bandits, contextual bandits, and reinforcement learning, it is quite uncommon for papers on DMSO to include
simulation experiments for particular environments. Examples include all the key papers cited in this paper:
* The first paper that suggested the concept
of DMSO ([Foster, Kakade, Qian, Rakhlin, 2021][1] \[1\])
* The paper that characterizes Graves-Lai coefficient for DMSO ([Dong and Ma, 2023 ][2] \[2\])
* The paper that proposes an instance-optimal algorithm for DMSO ([Wagenmaker and Foster, 2023 ][3] \[3\])
and all other related papers on DMSO:
* [Chen, Mei and Bai (2024)][4] \[4\]
* [Foster, Golowich and Han (2023)][5] \[5\]
* [Foster, Golowich, Qian and Rakhlin (2023)][6] \[6\]
* [Foster, Han, Qian and Rakhlin (2024)][7] \[7\]
**Comment 2**.
> "The related work on robustness seems not comprehensive."
**Author response to Comment 2**:\
Thank you for pointing this out. We believe that we were not clear enough about the fact that this is the first
paper that approaches _anonymous_ delay in rewards through the lens of robustness.
For the papers that discuss robustness/unrestricted delay for _non-anonymous_ delays (i.e., the setup where agents can associate each delayed reward to the arm it is from),
we are adding a new separate paragraph with 10-15 papers in the related work section, including:
* [Thune, Cesa-Bianchi, and Seldin (2019)][8] \[8\]
* [Wu, Ha, and Wager (2022)][9] \[9\]
* [Jin, Lancewicki, Luo, Mansour, Rosenberg (2022)][10] \[10\]
* [Masoudian, Zimmert, and Seldin (2024)][11] \[11\]
* [Zimmert and Seldin (2020)][12] \[12\]
* ...
**Comment 3.**
> The authors didn't give proper citations in definitions or assumptions and so on.
**Author response to Comment 3**:\
Thank you for pointing this out and for leaving detailed questions in the **Questions** section. We answer your questions in the next section.
### **Answer to comments in "Questions"**
**Comment 1.**
>Is the definition of "the sub-optimality gap of decision" also used in the previous paper? If yes, give the citation.
**Author response to Comment 1**:\
This is a kind of folklore definition used in almost every bandit/reinforcement learning
paper that deals with the concept of regret. Specifically, for a decision's sub-optimality gap,
we can cite [Foster, Kakade, Qian, Rakhlin, 2021][1] \[1\], as this paper defines "decision" in the DMSO framework and also the sub-optimality gap of a decision accordingly.
**Comment 2.**
> Is the assumption of "Realizability" generally used in other work? If yes, give the citation.
**Author response to Comment 2**:\
Thank you for pointing this out; we will cite the following three papers.
* [Agarwal, Dudik, Kale, Langford and Schapire, 2012][13] \[13\]
* [Du, Kakade, Lee, Lovett, Mahajan, Sun and Wang, 2021][14] \[14\]
* [Foster, Kakade, Qian, Rakhlin, 2021][1] \[1\]
**Comment 3.**
> The lower bound is studied by other work. What is the explicit form of the lower bound and can you derive the instance-independent lower bound?
**Author response to Comment 3**:
* When the Graves-Lai constant is 0, the lower bound is trivially O(1), as we proved that there is an algorithm
that achieves an upper bound of O(1).
* When the Graves-Lai constant is not 0, our first main result shows that no algorithm can achieve sub-polynomial regret. In other words, "we prove a _poly(n)_ lower bound". We greatly appreciate your pointing this out, as we had not explicitly stated this sentence in the previous draft.
**Comment 4.**
> What is the application of your algorithm and can you run some experiments?
**Author response to Comment 4**:\
While the proposed algorithm is a proof-of-concept algorithm whose purpose is to
prove the equivalence of bounded regret and any-level delay robustness, it can be applied to real-world platforms.
As discussed in our draft, for linear contextual bandits, for example, the necessary & sufficient condition for bounded regret ([Hao, Lattimore, and Szepesvari 2020][16] \[16\]) can be easily satisfied by having a rich enough context space ([Kang and Kumar, 2023][15] \[15\]).
In [Kang and Kumar, 2023][15] \[15\]'s Spotify context, millions of daily users can be considered a rich enough context for exploring the 60,000 new songs uploaded daily. That is, we can apply our algorithm to this Spotify music recommendation and exploration example.
For the author response related to experiments, please see above.
[1]: https://arxiv.org/abs/2112.13487
[2]: https://openreview.net/forum?id=oGVu9spZaJJ
[3]: https://proceedings.mlr.press/v195/wagenmaker23a.html
[4]: https://arxiv.org/abs/2209.11745
[5]: https://proceedings.mlr.press/v195/foster23b.html
[6]: https://proceedings.neurips.cc/paper_files/paper/2023/hash/3fcd0f8747f9217c6dbc45ed138b1fde-Abstract-Conference.html
[7]: https://arxiv.org/abs/2404.10122
[8]: https://proceedings.neurips.cc/paper/2019/hash/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Abstract.html
[9]: https://dl.acm.org/doi/abs/10.1145/3490486.3538376
[10]: https://proceedings.neurips.cc/paper_files/paper/2022/hash/d850b7e0cdc7f1c0820c6ad85405ae94-Abstract-Conference.html
[11]: https://openreview.net/forum?id=aLgJssbizV
[12]: https://proceedings.mlr.press/v108/zimmert20a.html
[13]: https://proceedings.mlr.press/v22/agarwal12
[14]: https://proceedings.mlr.press/v139/du21a.html
[15]: https://arxiv.org/abs/2301.12571
[16]: https://proceedings.mlr.press/v108/hao20b.html
---
Rebuttal Comment 2.1:
Comment: Thank you for the reply. The responses are satisfactory. Please add these works in your next version. I am raising my score by +1. | Rebuttal 1:
Rebuttal: ## 1. Overall comments and thank you response
We first want to thank all reviewers for putting enormous effort into reviewing this paper. \
We are happy to hear that the reviewers found no major issues, and all 5 reviewers noted that this paper
* makes significant contributions by establishing theoretical results and provides rigorous theoretical foundations leveraging concepts from robust statistics and decision theory, while the problem is clearly articulated, which is essential for understanding the challenges addressed by the proposed algorithms. _(Reviewer 3wWt)_
* introduces a novel connection between bounded regret and delay robustness _(Reviewer uxdM)_
* draws interesting connections between two seemingly different learning settings _(Reviewer bj5z)_
* investigated some interesting questions _(Reviewer ykEi)_
* presents a negative result that is quite neat _(Reviewer f9VR)_
### **For each of the reviews, we have submitted a separate author rebuttal below.**
## 2. Summary of the author responses on weaknesses and questions by multiple reviewers
**Point 1.**
> "incomprehensiveness of related works on robustness/unrestricted delay" (by Reviewer ykEi and uxdM)
**Answer to Point 1**: \
We believe that we were not clear enough about the fact that this is the first paper that approaches _anonymous_ delay in rewards through the lens of delay robustness. \
For the papers that discuss robustness/unrestricted delay for _non-anonymous_ delays (i.e., the setup where agents can associate each delayed reward to the arm it is from), we are adding a new separate paragraph with 10-15 papers in the related work section, including:
* [Thune, Cesa-Bianchi, and Seldin (2019)][4]
* [Wu, Ha, and Wager (2022)][5]
* [Jin, Lancewicki, Luo, Mansour, Rosenberg (2022)][6]
* [Masoudian, Zimmert, and Seldin (2024)][7]
* [Zimmert and Seldin (2020)][8]
**Point 2.**
> "There is no experiment" (by Reviewer ykEi and 3wWt)
**Answer to Point 2**: \
As DMSO is a theoretical framework that generalizes many different sequential decision-making problems,
such as bandits, contextual bandits, and reinforcement learning, **it is not a common practice for papers on DMSO to include
simulation experiments for particular environments.** Examples include all the key papers cited in this paper:
* The first paper that suggested the concept
of DMSO ([Foster, Kakade, Qian, and Rakhlin, 2021][1])
* The paper that characterizes Graves-Lai coefficient for DMSO ([Dong and Ma, 2023][2])
* The paper that proposes an instance-optimal algorithm for DMSO ([Wagenmaker and Foster, 2023][3])
and all other papers on DMSO:
* [Chen, Mei and Bai (2024)][9]
* [Foster, Golowich and Han (2023)][10]
* [Foster, Golowich, Qian and Rakhlin (2023)][11]
* [Foster, Han, Qian and Rakhlin (2024)][12]
**Point 3.**
> "The requirement for bounded regret is indeed exceedingly strong, as pointed out by authors. Doesn't it potentially diminish the paper's overall significance? Is it practical?" (Reviewer f9VR, uxdM, bj5z)
**Answer to Point 3**:
* **Because the requirement for bounded regret is indeed exceedingly strong, our negative result shines**; we show that _consistent (i.e., $\log n$ regret for all instances) algorithm design (e.g., [Dong and Ma, 2023][2] and [Wagenmaker and Foster, 2023][3]) may not be practical_ under anonymous delayed rewards, as _we prove a reduction of consistent algorithms with anonymous delayed rewards to algorithms with bounded regret_ in DMSO. That is, we prove a $poly(n)$ lower bound for the case when bounded regret cannot be achieved.
* The algorithm proposed for the positive result is a proof-of-concept algorithm whose purpose is to prove the equivalence of bounded regret and any-level delay robustness. To this end, the key assumption we make for this algorithm is that the Graves-Lai constant is 0 (i.e., the iff condition for bounded regret ([Hao, Lattimore and Szepesvari 2020][14])). **It is not hard for systems with a large enough user pool, such as Spotify, to satisfy this strong requirement** (a million daily users are enough for exploring 60,000 new songs ([Kang and Kumar, 2023][13])). Therefore, we can apply our proposed algorithm in large digital platforms such as Spotify. **However, smaller systems won't satisfy this requirement**.
## 3. Attached pdf
The attached pdf includes the following information:
* Updated Algorithm 1 with improved representation, to fix the issue raised by reviewer f9VR
[1]: https://arxiv.org/abs/2112.13487
[2]: https://openreview.net/forum?id=oGVu9spZaJJ
[3]: https://proceedings.mlr.press/v195/wagenmaker23a.html
[4]: https://proceedings.neurips.cc/paper/2019/hash/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Abstract.html
[5]: https://dl.acm.org/doi/abs/10.1145/3490486.3538376
[6]: https://proceedings.neurips.cc/paper_files/paper/2022/hash/d850b7e0cdc7f1c0820c6ad85405ae94-Abstract-Conference.html
[7]: https://openreview.net/forum?id=aLgJssbizV
[8]: https://proceedings.mlr.press/v108/zimmert20a.html
[9]: https://arxiv.org/abs/2209.11745
[10]: https://proceedings.mlr.press/v195/foster23b.html
[11]: https://proceedings.neurips.cc/paper_files/paper/2023/hash/3fcd0f8747f9217c6dbc45ed138b1fde-Abstract-Conference.html
[12]: https://arxiv.org/abs/2404.10122
[13]: https://arxiv.org/abs/2301.12571
[14]: https://proceedings.mlr.press/v108/hao20b.html
Pdf: /pdf/1b24105ebd9dcb7350a4157c57ad1cf39a05b096.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper considers the problem of regret minimization under delayed rewards. It studies the Decision-Making with Structured Observations (DMSO) setting and asks when the learner can achieve logarithmic regret when the reward signal is delayed and we only have an estimate of the delay distribution up to precision \epsilon within the given model class.
The main results are twofold:
1. They show that logarithmic regret under an \eps-misspecified delay distribution is possible (no matter what \eps is) only if a certain algebraic quantity called the Graves-Lai constant is 0 for all models in the given family. Interestingly, the same condition is necessary for obtaining constant regret bounds in DMSO.
2. They provide an upper bound showing that for learning settings in which the Graves-Lai constant is 0 and an additional condition called cross-informativeness holds, one can obtain an upper bound on the regret under delayed reward feedback (with an \eps-contaminated delay distribution).
Overall I like the paper as it draws interesting connections between two seemingly different learning settings.
Strengths: - Interesting connection between two seemingly unrelated topics
- Applications to linear contextual bandits and linear MDPs given
Weaknesses: Weakness:
I think the overall exposition can be improved a bit. In particular, it was not clear to me for a long time whether the agent also received the time step for which the reward corresponds to, when it received the delayed reward at a later time step. Other minor issues are:
- Theorem 4.11 should provide a regret bound, if possible.
- Definition 5.4 is also called the Linear Bellman Complete setting. It might be worth adding this phrase for completeness.
Minor typo:
- "for of" at the end of page 5 last paragraph
- First paragraph of page 5 says that d_{TV}(D, D^) >= \eps implies that |E[D] - E[D^]| > 0 (should be >=0 I think).
- Section 4.2 title -- remove "when"
- Section 4.2.2 first line has redundancy
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Given the close connection, is there a direction reduction of consistent algorithms with delayed rewards to algorithms with bounded regret in DMSO?
2. The definition of consistency in Defn 2.2 seems to only consider algorithms that have at most logarithmic regret. Can we handle \eps-contamination in reward delay distribution, in the absence of the graves-lai constant being 0, if we are OK with some sort of sublinear regret? It seems to me right now that this notion of consistency is too strong.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Answer to comments in "Weaknesses"**
**Comment 1.**
> It was not clear to me for a long time whether the agent also received the time step for which the reward corresponds to, when it received the delayed reward at a later time step.
**Author response to Comment 1**:\
Thank you for pointing this out. We believe two changes we made will address this:
* Instead of "unknown reward delays", we now use "anonymous delayed rewards" to avoid ambiguity and improve the presentation.
* We added a clarifying sentence in the introduction that _"the agent never observes the time period to which each reward corresponds, even after it receives the delayed reward at a later time step."_ Thank you for your suggestion.
**Comment 2.**
> Theorem 4.11 should provide a regret bound, if possible.
**Author response to Comment 2**:
_(Modified Theorem 4.11)_:
* _Under Assumptions 4.7 and 4.8, the algorithm ST2C, which does not require any knowledge of the delay distribution model, achieves bounded regret. More precisely, the regret is bounded by $\Delta\left(1+5\frac{4 c^4 e^{-2} }{\mathcal{W}\left(2 c^2\right)^2}\right) \frac{\pi^2}{6}$, where $\mathcal{W}$ denotes the principal branch of the Lambert W function (which is an increasing function)._
Here $\Delta$ is the maximum per-period mean reward difference among decisions and $c$ is the constant from Assumption 4.8:
_(Modified Assumption 4.8)_:
* _For all $f, g \in \mathcal{F}, \pi \in \Pi$, and $o \in \mathcal{O}, |\log \frac{f\pi}{g\pi} |<c$ for some $c>0$._
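As an illustrative sanity check (not part of the paper), the bound above can be evaluated numerically. The pure-Python Lambert W helper below is a hypothetical implementation via Newton's method; `regret_bound` transcribes the formula exactly as stated in the modified Theorem 4.11.

```python
import math

def lambert_w(x, iters=60):
    # Principal branch of the Lambert W function for x > 0,
    # found by Newton's method on w * e^w - x = 0.
    w = math.log(1.0 + x)  # reasonable starting point for x > 0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def regret_bound(delta, c):
    # Delta * (1 + 5 * (4 c^4 e^{-2}) / W(2 c^2)^2) * pi^2 / 6,
    # as stated in the modified Theorem 4.11 above.
    return delta * (1.0 + 5.0 * 4.0 * c**4 * math.exp(-2)
                    / lambert_w(2.0 * c**2) ** 2) * math.pi**2 / 6.0

# e.g. a gap of Delta = 1 with c = 1 gives a finite constant bound
b = regret_bound(1.0, 1.0)
```

Note that the bound is a constant in the horizon $n$, i.e., the regret stays bounded.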
**Comment 3.**
> Definition 5.4 is also called the Linear Bellman Complete setting. It might be worth adding this phrase for completeness.
**Author response to Comment 3**:\
Thank you for your suggestion. We updated this part accordingly.
**Comment 4.**
> Fix minor typo.
**Author response to Comment 4**:\
Thank you for pointing them out. We fixed them accordingly.
### **Answer to comments in "Questions"**
**Comment 1.**
> Given the close connection, is there a direct reduction of consistent algorithms with delayed rewards to algorithms with bounded regret in DMSO?
**Author response to Comment 1**:\
The phrase _"reduction of consistent algorithms with delayed rewards to algorithms with bounded regret in DMSO"_ correctly depicts our contribution. We will add it to the conclusion section.
In other words, we prove a $poly(n)$ lower bound for the case when we cannot achieve bounded regret.
**Comment 2.**
>The definition of consistency in Defn 2.2 seems to only consider algorithms that have at most logarithmic regret. Can we handle $\epsilon$-contamination in reward delay distribution, in the absence of the graves-lai constant being 0, if we are OK with some sort of sublinear regret? It seems to me right now that this notion of consistency is too strong.
**Author response to Comment 2**:\
You are absolutely right. While this paper proves a $poly(n)$ lower bound for the case when "the Graves-Lai constant is 0" does not hold,
an upper bound for this case remains an open question.
CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization | Accept (poster) | Summary: The paper introduces CoMERA, a novel training method for large AI models that focuses on optimizing both computing and memory efficiency through rank-adaptive tensor compression. CoMERA aims to reduce training costs and environmental impact by achieving high compression ratios and maintaining accuracy. Key contributions include a multi-objective optimization formulation, performance optimization techniques for tensor-compressed training on GPUs, and empirical validation showing significant speedup and memory savings compared to existing methods.
Strengths: 1. The experimental results demonstrate significant improvements in training speed and memory efficiency, outperforming recent methods like GaLore and LTE.
2. By addressing both computing and environmental costs, the method has practical implications for large-scale AI model training, making tensor models more practically useful in machine learning.
3. In the part of multi-objective optimization, the formulation balances compression ratio and model accuracy, providing a customizable framework for different resource requirements.
Weaknesses: While the method shows impressive results on tested models, scalability to even larger models and diverse architectures remains an area for further exploration.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. For learning TT-ranks, this work imposes diagonal matrices D (shown in Eq. 6) with l_1 norm regularization. This technique for tensor network structure search was also utilized in the recent paper:
Zheng, Yu-Bang, et al. “SVDinsTN: A Tensor Network Paradigm for Efficient Structure Search from Regularized Modeling Perspective.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
2. In line 122, could you provide more explanation of what the linear scalarization means? Does it refer solely to Eq. (7), or is there additional context?
3. Could you provide more intuition regarding Eq. (11), i.e., the achievement scalarization? While the two scalarizations in the paper may be standard methods in multi-objective optimization, a more intuitive explanation would help general readers unfamiliar with this field grasp the main idea quickly.
4. In lines 171-173, the design of the “tensor-index level” is unclear. Please improve the clarity of this part if possible.
5. In the contraction section, how many core tensors are involved in the contraction? If the number of involved tensors in the contraction is not large, why not use the default contraction path searching algorithms integrated into einsum, e.g., ‘dp’ or ‘optimal’? What benefits can be achieved from handcrafting a new contraction path for the current model compression task?
6. Could you provide more explanation in Sec. 4.3? Why can CUDA graphs improve tensor computation? What specific efforts were made in this work? Is using CUDA graphs straightforward, or does it require advanced programming techniques, such as designing a more GPU-suitable contraction order?
7. In the comparison with GaLore, why was a 3090 used instead of a 4090, which was used in the original GaLore paper?
8. Regarding the training time, only the time per epoch is given. Is it possible to provide the overall running time? Are more epochs required if tensor networks are utilized?
9. Will the codes be released once the paper is accepted?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Responses to Weaknesses:
***Weakness 1**: Scalability to larger models.*
We are conducting larger experiments. The preliminary results are in **Figure 1 of the attached PDF**, with details in the **main author rebuttal**. We pre-train **CodeBERT-Large** (357 million parameters) on CodeSearchNet, a 20GB dataset. Compared to uncompressed training, the tensor-compressed model shows a similar convergence curve, reaches a similar training loss, and achieves a **training speedup** on a single 3090 GPU. Compression and speedup results are in the following table.
**Pre-training results of CodeBERT-Large:**
| Metric | Setting | Value |
|-|-|-|
|compression ratio|overall| 4.25$\times$|
||tensorized layers| 9.77$\times$|
|training speedup|sequence length 128| 2.3$\times$|
||sequence length 512| 1.9$\times$|
## Responses to Questions:
***Question 1**: Compare diagonal D with l_1 norm in the recent paper*
We will cite it and discuss the differences. The referenced paper uses sparsity of diagonal matrices to search for a **compact TT representation of given data**, which is very different from our **end-to-end training**. We would like to highlight the differences in the following.
* The referenced paper searches for a TT representation of given data. It is the same as **finding a low-rank approximation for a *given* tensor**. In contrast, our work uses TT to compress weights during **end-to-end training**, and **we do not have any prior information on the tensor** (i.e., the model parameters). The tensor cores and diagonal entries are determined during training.
* Our work formulates the problem as a more generic **multi-objective** problem and uses a two-stage algorithm to solve it. The formulation in that paper is similar to linear scalarization approach in our early stage. Our work further uses the achievement scalarization in the late stage to find a model close to our preferred model performance and size.
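For illustration only, an l1 penalty on the diagonal entries of D induces the standard soft-thresholding (proximal) operator, which zeroes out small entries and thereby prunes TT-ranks. The `shrink_diagonal` helper below is a hypothetical numpy sketch, not the paper's implementation.

```python
import numpy as np

def shrink_diagonal(d, lam, lr=1.0, tol=1e-8):
    """Soft-threshold the diagonal rank mask d (the prox of lr*lam*||d||_1),
    then report the surviving rank (number of non-negligible entries)."""
    d = np.sign(d) * np.maximum(np.abs(d) - lr * lam, 0.0)
    rank = int((np.abs(d) > tol).sum())
    return d, rank

# Diagonal entries after some training steps: the two small entries get pruned,
# so the effective TT-rank drops from 4 to 2.
d, rank = shrink_diagonal(np.array([1.0, 0.4, 0.05, 0.01]), lam=0.1)
```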
***Question 2**: Explanation of linear scalarization*
Yes, the linear scalarization solely refers to the Eq(7). More details are in the next question.
***Question 3**: Intuition for scalarization?*
We will include more intuition. The linear scalarization minimizes a weighted sum of the objectives. Solving it yields a Pareto point, but it is hard to control which point is obtained. The achievement scalarization instead finds a Pareto-optimal solution close to a **given target**. When one objective is too far from its target, we mainly optimize that objective by increasing its weight; this yields the achievement scalarization problem in Eq. (11).
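For intuition only, one standard (Wierzbicki-style) form of achievement scalarization takes the worst weighted deviation from the reference point plus a small augmentation term; the constants and exact form of Eq. (11) in the paper may differ from this sketch.

```python
def achievement(objectives, targets, weights, rho=1e-3):
    # Weighted deviations of each objective from its reference (target) value.
    # The max term pulls the solution toward the target; the small sum term
    # guarantees that the minimizer is Pareto optimal.
    dev = [w * (f - z) for f, z, w in zip(objectives, targets, weights)]
    return max(dev) + rho * sum(dev)

# If accuracy loss is on target but model size overshoots its target,
# the size deviation dominates the scalarized value.
s = achievement(objectives=[0.30, 2.0], targets=[0.30, 1.0], weights=[1.0, 1.0])
```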
***Question 4**: Design of the “tensor-index level”.*
Thanks! We will revise that. The "tensor-index level" optimization aims to reduce redundant computation. Among the tensor indices, different rows may share values: for instance, (2,3,1,3) and (2,3,2,4) share the prefix (2,3), so the (2,3) part only needs to be computed once. In this design, we focus on the unique indices required for the lookup and compute only those unique indices.
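A simplified numpy sketch of the unique-index idea (this version deduplicates full rows; the actual tensor-index-level scheme in the paper may additionally exploit shared prefixes per index level):

```python
import numpy as np

# Lookup rows of tensor indices; rows 0 and 2 are identical duplicates.
idx = np.array([[2, 3, 1, 3],
                [2, 3, 2, 4],
                [2, 3, 1, 3]])
uniq, inverse = np.unique(idx, axis=0, return_inverse=True)
inverse = inverse.ravel()
# Run the expensive per-index computation only on the unique rows ...
values = uniq.sum(axis=1).astype(float)  # stand-in for the real computation
# ... then scatter the results back to all rows without recomputation.
out = values[inverse]
```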
***Question 5**: Why not using contraction path search in einsum?*
For a TT with d tensor cores, each contraction involves d+1 cores, and there are in total d+2 coupled einsums, as in Eqs. (19), (20), and (21). The algorithm in "einsum" only searches for the path of a **single** einsum operation. In contrast, our path optimization minimizes the **total cost of all d+2 coupled contraction paths**, which the search options in "einsum" cannot handle. We also show in Proposition 4.1 that the contraction path for the forward pass is already near-optimal; similar results can be shown for back-propagation.
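For reference, numpy's built-in path search optimizes one expression at a time; the toy chain contraction below (shapes are illustrative, not the paper's) shows the single-expression API being contrasted here.

```python
import numpy as np

# A toy chain contraction of an input against two small cores.
x  = np.random.rand(8, 4, 5)   # (batch, i1, i2)
g1 = np.random.rand(4, 3)      # (i1, r1)
g2 = np.random.rand(3, 5, 6)   # (r1, i2, out)

# einsum_path searches the contraction order for THIS single expression only;
# it cannot jointly minimize the cost of the d+2 coupled expressions that
# arise across the forward and backward passes.
path, report = np.einsum_path('bij,ik,kjl->bl', x, g1, g2, optimize='optimal')
y = np.einsum('bij,ik,kjl->bl', x, g1, g2, optimize=path)
```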
***Question 6**: Details about Cuda Graph*
Thanks! We will include more explanation of CUDA Graphs in Section 4.3. A CUDA Graph eliminates the overhead of launching many kernels sequentially. It is especially well suited to CoMERA, since tensor-compressed training launches many more small kernels than uncompressed training. No dynamic control flow or dynamic shapes are allowed in a standard CUDA Graph, so we specifically revised some code to make it compatible: for instance, the inputs are padded to a fixed length, and the late-stage achievement scalarization in Section 3.2 is split into two graphs.
***Question 7**: Why was a 3090 used instead of a 4090 for GaLore comparison?*
Thanks! We do not have a 4090 GPU in our lab. All comparisons are run on the same system with the 3090 GPU.
***Question 8**: Overall training time*
Thanks! Empirically CoMERA is **2-3X** faster in the whole training than uncompressed training for transformers on a single GPU, but we do not have theoretical guarantees about the number of epochs although they are similar in our experiments. The detailed explanations are in the **main author's rebuttal**. Some key points are summarized below:
* Training NNs is a highly non-convex optimization in compressed and uncompressed formats, making theoretical analysis very complicated. Overall training time depends on (1) the number of epochs and (2) time per epoch. While we observed consistent **2-3X** speedup in terms of (2), point (1) is highly case-dependent for almost all non-convex optimization solvers.
* CoMERA has a **similar empirical convergence behavior** to the uncompressed training on our tested cases. We observe that on both 6-encoder transformer and DLRM, shown in Figure 6 on paper and **Figure 2 in the PDF**.
* On all tested transformers, CoMERA generally takes 2-3X less overall training time: it has convergence curves similar to the uncompressed model, while each epoch of the tensorized model is 2-3X faster than standard training.
Although our method has similar convergence behavior compared with standard training, we think that it could be misleading to draw any formal conclusion now without theoretical proof (which may or may not exist).
***Question 9**: Code release?*
We are trying to figure out some potential IP issues. We hope to release the codes if the IP issue can be cleared.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you very much for your thorough review and fruitful comments! We have carefully read all your feedback and addressed your concerns and questions. We would greatly appreciate it if you could take some time to review our responses.
We will stay online these days and are happy to address any further questions or concerns you may have. We look forward to your continued feedback on our work.
Thank you again for your time and consideration!
---
Rebuttal 2:
Comment: I appreciated the authors’ response and the clarifications provided. However, I have a few additional comments:
1.Regarding the referenced paper:
-- The reference seems to focus on general tensor networks rather than specifically on the TT (Tensor Train) format. While your work indeed targets a different set of tasks, the similarity between the two papers primarily lies in the use of sparse diagonal matrices to determine ranks. However, your response mentioning ‘finding a low-rank approximation for a given tensor’ could lead to confusion. I recommend refining this point to better differentiate your work from the referenced CVPR paper.
2. Regarding the use of GPUs:
-- The response mentioning the lack of a 4090 GPU in your lab is not entirely satisfying. If the choice of using a 3090 GPU is purely due to availability, it’s important to clarify this in your paper. Additionally, please ensure that the claims in your paper are accurately framed, especially when comparing your results with those of GaLore. Otherwise, there could be a risk of over-claiming or causing misunderstandings, which would be unfair to the GaLore work.
3. Regarding the release of code:
-- The uncertainty surrounding the release of your code due to potential IP issues is concerning. Not providing the code could significantly reduce the contribution and impact of your paper within the research community. If possible, please provide more clarity on this matter or consider alternative ways to share your work’s findings with the community.
---
Rebuttal 3:
Comment: Thanks a lot for your further feedback! We hope our explanations successfully address your concerns.
1. ***Compare with SVDinsTN.*** Thanks! The SVDinsTN in the reference paper and our work both use diagonal matrices to control tensor ranks, but these two works are entirely different.
The two works target two completely different problems. SVDinsTN aims to find a compact tensor network representation of a given tensor, whereas our work aims to adaptively determine ranks and achieve real speedup for tensor-compressed models during end-to-end training without any prior information, which is of great importance given the huge energy cost of current large AI models.
It is not surprising that both works (and possibly some other works) use sparse diagonal matrices and L-1 terms to control tensor ranks. Using diagonal matrices to control the ranks of matrices is very common, like SVD. The L-1 norm is also widely used to induce sparsity in various models, like compressed sensing and Lasso regression. It is natural to combine these two techniques to control tensor ranks, regardless of tensor formats. In our work, **we did not claim this as a novel contribution**. Our main contributions are: (1) a **multi-objective optimization** approach to achieve a good balance between model size and performance in end-to-end rank-adaptive training, and (2) **numerical and GPU optimization** to achieve real 2-3X speedup on GPU. This can have a high impact: since large AI models consume too many resources, even a small reduction can save huge money and energy. Besides the results in the paper, we have proved the great promise of this method in pre-training CodeBERT in the above response.
The main differences between the two works are summarized in the following table. We will include the discussions in our paper.
|| CoMERA |SVDinsTN|
|-|-|-|
| target problem | **End-to-end tensor compressed training for memory and computation reduction**|Find a compact tensor network representation of given data|
| problem formulation | a multi-objective problem| a single objective problem with regularizations|
| solver |a two-stage approach with two scalarization methods to solve multi-objective problem to balance model performance and size| a proximal alternating minimization method to solve the single objective problem with l1 regularizations |
| tensor format | focus on Tensor-Train and can be easily applied to all general tensor networks | general tensor networks|
| performance optimization| numerical and GPU optimization methods for real speedup on GPUs| N/A|
2. ***GPU.*** We rented a 4090 GPU to run the experiments. The results are in **Table 1**. Compared to the RTX 3090, training on the RTX 4090 uses similar memory and takes less time, and **CoMERA is still the fastest method and consumes the least memory among all three techniques**. The **memory savings** are **almost the same** as the results reported in Figure 1 of our paper. The **speed-up factors** are **almost identical** for batch sizes 32, 64, and 128. For batch size 1, our method is 1.2X faster than GaLore on the RTX 4090 and 1.7X faster on the RTX 3090. The difference arises because the RTX 4090 significantly accelerates matrix multiplications at batch size 1, while it does not accelerate smaller tensor contractions as much. We tested the times of typical matrix multiplications in CoMERA and GaLore on the RTX 3090 and RTX 4090. The results are in **Table 2**. We find that the r=30 matrix multiplication on the RTX 3090 has a similar speedup for both batch sizes, whereas the same multiplication on the RTX 4090 only has a speedup for batch 32 and none for batch 1. We note that this may be because different GPU platforms have different backend overhead, which becomes more dominant as computation decreases toward batch size 1. We will continue optimizing GPU-level kernels to accelerate small tensor contractions and expect to see a similar speedup. We will replace the results in our paper with the new results on the RTX 4090.
**Table 1. Training comparison of CoMERA with GaLore and LTE on RTX 4090 GPU.**
||| CoMERA |GaLore| LTE |
|-|-|-|-|-|
|batch 1|time (min)| **37.1**|44.8| N/A|
| |memory (MB)|**182**|1674| N/A|
|batch 32|time (min)| **3.4**|6.8| 11.1|
| |memory (MB)| **1780** |3632| 4636|
|batch 64|time (min)| **3.4**| 6.3| 9.1|
| |memory (MB)| **3784**|4682| 6628|
|batch 128|time (min)| **3.4**| 6.0|8.2|
| |memory (MB)| **7002**|8048|10964|
**Table 2. Time comparison of matrix multiplications on RTX 3090 GPU and on RTX 4090 GPU. The multiplication is done between matrices of sizes (batch\*128) $\times$ 768 and 768 $\times$ r. The time is in seconds.**
||| r=30| r=768|
|-|-|-|-|
|batch 1|RTX 3090|0.34 (4.6X)|1.58|
||RTX 4090|0.22 (**1.0X**)|0.22|
|batch 32|RTX 3090|0.55 (5.4X)|2.98|
||RTX 4090|0.27 (4.5X)|1.21|
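To give a flavor of how the Table 2 measurements are set up, here is a CPU-only numpy timing sketch using the same matrix shapes; `time_matmul` is a hypothetical helper, and GPU results will differ substantially, as discussed above.

```python
import time
import numpy as np

def time_matmul(batch, r, reps=50):
    """Time the (batch*128, 768) @ (768, r) product used in Table 2."""
    a = np.random.rand(batch * 128, 768).astype(np.float32)
    b = np.random.rand(768, r).astype(np.float32)
    a @ b                       # warm-up to exclude one-time setup costs
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    return time.perf_counter() - t0

# Compare the low-rank (r=30) and full (r=768) multiplications at batch 1.
t_low, t_full = time_matmul(1, 30), time_matmul(1, 768)
```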
3. ***Code.*** Thanks! We will release a version of codes with confidential IP information removed.
---
Rebuttal 4:
Title: Reminder: discussion window will close in 1 day
Comment: Dear Reviwer Rz1r,
Thanks a lot for your detailed technical comments and your follow-up discussion.We fully understand that you may super busy with many deadlines at this moment. As the discussion window will close in 1 day, I would highly appreciate it if you confirm your comments have been addressed or not.
In our original rebuttal, we have addresed the main weaklness (lack of large-scale experiment) by providing the pre-training result of CodeBERT to show the significant training cost reduction. We also provided details to address your 9 questions (e.g., explanation of various scalarization techniques, difference of our contraction sequence optimization with that in einsum, CuDAGraph optimization, overall training time).
In the recent discussion, we have further provided details regarding the comparison with SVDinsTN, result on RTX 4090 (which does not change much from RTX 3090) and code release issue.
If our responses have well addressed your comments, we would highly appreciate it if you can acknowledge this. If some of our responses need further clarification, please feel free to let us know! We are staying online to provide potential further clarifications in a timely manner.
Thanks again for your detailed technical comments & engagement in the discussion.
Warm regards,
The authors.
---
Rebuttal Comment 4.1:
Comment: Thank you for the detailed response.
“The SVDinsTN in the reference paper and our work both use diagonal matrices to control tensor ranks, but these two works are entirely different.”
-- Yes, it's true. I agree that the two works are different since they target two different tasks, which is why I put this concern in the question part rather than the weakness part. Thank you for the clarification. I mentioned it again in my last reply because the authors said the referenced paper was for TT (tensor-train); this was confusing since I checked that paper and it seems to address general tensor networks rather than the specific TT model.
"We rent a 4090 GPU to run experiments."
-- Thank you. I had this concern because the GaLore paper's main claim is made in the setting of a 4090 rather than a 3090, which naturally raised the question of why the same hardware was not used for comparison. Is there any deeper concern on the authors' side? The first-round response was unexpected, since I did not realize a 4090 would be as difficult to obtain for experiments as other, more expensive GPUs. But the good thing is that you finally managed to get one.
"We will replace results in our paper with new results on RTX 4090."
-- Thanks, but I don't think that's necessary. It is good to see that the proposed tensor-based method works well on a weaker device (3090) than the one used in other works like GaLore (4090). My concern is mainly about a fair and clearer comparison. Putting the new results in the supplementary material would be very appreciated.
"We will release a version of codes with confidential IP information removed."
-- Thank you for the promise. It is very important for reproducibility and potential impact for the community.
---
Reply to Comment 4.1.1:
Title: Happy to know technical issues are fixed
Comment: Dear Reviewer Rz1r,
Thanks a lot for your further follow-up discussion.
I'm glad that we are on the same page: (1) SVDinsTN and our work are entirely different, and (2) the use of an RTX 3090 GPU rather than a 4090 is not a big issue.
Just two tiny additional notes: (1) in our first rebuttal, we indeed pointed out (see our table) that SVDinsTN considered general tensor networks (rather than the tensor train in our work), and (2) in our paper we implemented all methods (including GaLore and LTE) on an RTX 3090 to ensure a fair comparison. We will make these points more explicit in our revised paper to avoid similar misunderstandings in the future. | Summary: The manuscript presents techniques for efficient training from scratch based on tensor decompositions. The authors propose several modifications to the basic training approach to improve accuracy as well as several optimizations for tensor-compressed training, achieving training speedup. In the experiments, the proposed methodology is used to compress a transformer model and a recommender system model.
Strengths: The authors present a new approach for training a model with tensorized weights, which allows adaptation of ranks during training. They also managed to optimize the process and achieve a real-time speed-up, which is not an easy task when working with tensorized models. The described methods are useful and have the potential to contribute significantly to further advancements in efficient operations on tensor decomposition formats.
Weaknesses: 1) There are several places where a comparison with the baseline time (e.g., uncompressed) would be appropriate, such as in Figures 5 and 8.
2) Experiments with transformers are limited. Training a model on MNLI from scratch may not accurately reflect the behaviour of the method for language models, as MNLI is a relatively simple classification dataset that might not require a large number of parameters. It would be beneficial if the paper included an experiment or a simulation of some kind in a pre-training setting for language models, such as pretraining RoBERTa on C4.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) I am a little confused about how to adapt the rank to achieve further compression. It is not clear from the text what the procedure is and when it is applied during training. Specifically, it is not explained how the ranks are chosen and at what point of time the procedure is applied. I would recommend incorporating a new paragraph into the paper that provides a more detailed explanation of the procedure, as well as some practical advice on how to apply it in various scenarios. This could help readers better understand the potential uses and benefits of this approach.
2) As for the transformer model, I am also interested in the model's upper compression limit that still maintains a training speedup. I would imagine maintaining good performance is only possible at lower compression ratios (say 2-5x).
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Responses to Weaknesses:
***Weakness 1**: Baseline time in Figures 5 and 8.*
Response: Thanks a lot! We will include the baseline time in the figures. For your convenience, we attach the results in the following **Table 1 and Table 2**. **Table 1** shows the time and memory cost of embedding-lookup forward- and back-propagation. The uncompressed embedding with sparse gradients is faster than our approach, since our TTM embedding table requires extra computation; the embedding without sparse gradients is slower than ours because it updates massive gradients. The proposed TTM embedding table uses **much less memory**: **7X less** than the embedding with sparse gradients and **15X less** than the embedding without sparse gradients. **Table 2** shows the time and memory cost of training DLRM using CoMERA and standard training. Standard training is faster than CoMERA since DLRM is an embedding-intensive model, and the computation in the embedding table is a lookup rather than matrix multiplications. However, CoMERA uses **much less memory**, saving **6.9X, 4.8X, and 3.1X** memory for batch sizes 10000, 20000, and 40000, respectively.
**Table 1. Time and memory for embedding lookups**
|| | CoMERA embedding | uncompressed w/ sparse gradients | uncompressed w/o sparse gradients |
|-|----------|-|----------------|--------------|
| batch 10000|time (s) | 0.48 | 0.06 | 2.43 |
| |memory (MB) | **670** | 5279 | 10558 |
| batch 20000|time (s) | 0.82 | 0.11 | 2.48 |
| |memory (MB) | **799** | 5284 | 10569 |
| batch 40000|time (s) | 1.42 | 0.22 | 2.47 |
| |memory (MB) | **896** | 5294 | 10592 |
**Table 2. Time and memory for DLRM training**
|| |CoMERA w/ optimization|CoMERA w/o optimization|uncompressed |
|-|-|-|-|-|
|batch size 10000|time (s)|807|1344|420|
||memory (MB)|**2612**|9259|18261|
|batch size 20000|time (s)|794|1182|423|
||memory (MB)|**3947**|18385|19005|
|batch size 40000|time (s)|791|N/A|424|
||memory (MB)|**6629**| N/A| 20459|
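As a rough illustration of why a TT-matrix (TTM) embedding table saves memory during lookups, here is a minimal NumPy sketch with two TTM cores. All shapes, the index factorization, and the function name are hypothetical, not taken from the paper's implementation:

```python
import numpy as np

# Hypothetical sizes: vocab = 1000 * 1000, embedding dim = 16 * 16, TT rank 8.
v1, v2, d1, d2, r = 1000, 1000, 16, 16, 8
rng = np.random.default_rng(0)
G1 = rng.standard_normal((v1, d1, r)) * 0.1  # TTM core 1
G2 = rng.standard_normal((r, v2, d2)) * 0.1  # TTM core 2

def ttm_embedding_lookup(ids):
    """Gather rows from the cores and contract per example, never forming
    the full (v1*v2, d1*d2) table (256M entries vs ~0.26M core entries)."""
    i1, i2 = ids // v2, ids % v2           # factorized row indices
    A = G1[i1]                             # (batch, d1, r) -- a gather, not a matmul
    B = G2[:, i2].transpose(1, 0, 2)       # (batch, r, d2)
    out = np.einsum('bdr,bre->bde', A, B)  # contract the shared rank index
    return out.reshape(len(ids), d1 * d2)

emb = ttm_embedding_lookup(np.array([0, 123456, 999999]))
```

The gather-then-contract order is what keeps the working set small; contracting the two cores first would materialize the full table.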
***Weakness 2**: Lack of pre-training results on larger models*
Response: Thank you! Scaling up our approach to larger models and datasets is an important future work. We are conducting larger experiments and the preliminary result is shown in the following **table** and **Figure 1 in the attached PDF**. The details can be found in **main author rebuttal**. Here are some key results.
* We pre-train the **CodeBERT-Large** model. It has 24 encoder blocks and in total 357 million parameters, whose architecture is similar to BERT Large. The pre-training is done on the CodeSearchNet, a 20GB dataset. Compared to uncompressed training, the tensor-compressed model shows a similar convergence curve and reaches a similar training loss, while compressing the whole model **4.25 times** and linear layers **9.77 times**. The tensor-compressed training is about **2.3X** faster in sequence length 128 and **1.9X** faster in sequence length 512 than uncompressed training on a single RTX 3090 GPU. The results demonstrate that our approach can scale up to larger models. We will investigate more in the future and are optimistic about the results.
||Pre-training results of CodeBERT-large||
|-|---|-|
|compression ratio|overall| 4.25$\times$|
| |tensorized layers| 9.77$\times$|
|training speedup|sequence length 128| 2.3$\times$|
||sequence length 512| 1.9$\times$|
## Responses to Questions:
***Question 1**: How to adapt the rank to achieve further compression.*
Response: Thank you! We will add more details about how to adapt the rank to achieve further compression. Our rank-adaptive approach uses the **multi-objective** formulation and consists of two stages:
* In the early stage, we solve the multi-objective problem using linear scalarization. The early stage starts from the beginning of the training process and prunes tensor ranks gently without hurting the model performance.
* After the early stage converges, we may continue training the model in the optional late stage. During the late stage, the multi-objective optimization problem is solved by the achievement scalarization approach to find a model close to our preferred performance and size for specific deployment requirements (e.g., on an FPGA).
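As a toy sketch of the early-stage linear scalarization described above (the gate names, threshold, and regularization weight are illustrative only, not the paper's actual formulation):

```python
import numpy as np

def linear_scalarized_loss(task_loss, rank_gates, lam=1e-3):
    """Early stage: scalarize the (performance, size) objectives into one.
    rank_gates holds one 1-D gating vector per TT bond; the l1 term
    gently drives entries toward zero without hurting performance."""
    reg = sum(np.abs(g).sum() for g in rank_gates)
    return task_loss + lam * reg

def prune_ranks(rank_gates, tol=1e-2):
    """Drop near-zero gate entries, shrinking the TT rank of each bond."""
    return [g[np.abs(g) > tol] for g in rank_gates]
```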
***Question 2**: Model upper compression limit that still maintains training speedup?*
Response: In short, speedup can be achieved by CoMERA even when the compression ratio is close to 1.
We consider an m-by-n linear layer and an input of size b-by-m, with the layer represented by TT cores of internal TT rank r. Then the computation costs of the standard linear layer and the TT linear layer are about O(bmn) and O(b(m+n)r), respectively, when the batch size b is large. TT compression roughly reduces computation when $r \le \min(m,n)/2$. However, the compression ratio is close to 1 for such large ranks. We validate this analysis by measuring the training time of the six-encoder transformer on MNLI. The following **Table 3** shows the per-epoch training time of CoMERA on the MNLI dataset for different compression ratios. The acceleration is more pronounced at larger compression ratios. Whenever the compression ratio is greater than 1, CoMERA has a speedup; as the compression ratio approaches 1, CoMERA's time approaches that of uncompressed training. We will include these results and discussions in our paper.
**Table 3. Per epoch training time on MNLI for various compression ratios**
|| | rank 30 | rank 120 | rank 240 | rank 360 | rank 480 | uncompressed |
|-|----|-|--|--|--|-|--|
| compression ratio| | 50$\times$ | 4.9$\times$ | 2.2$\times$ |1.5$\times$|1.1$\times$|N/A|
| time (min)| batch 32 | 7.2 | 7.79 | 11.16 | 13.94|17.46|18.5|
| | batch 64 | 6.4 | 6.73 | 9.7 |12.44 | 15.53|16.6|
| | batch 128 | 5.5 | 6.21 | 9.07 | 11.56 | 14.35|16.4|
In our pre-training of CodeBERT-large, we still get **2.3X** speedup when the overall compression ratio is 4.25.
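The rough cost model above can be checked numerically. A minimal sketch using two TT cores and a square layer; the per-contraction FLOP counts are the standard 2x(product of dimensions) estimates, not measured numbers:

```python
def dense_flops(b, m, n):
    """Multiply-adds for a dense (b, m) x (m, n) product."""
    return 2 * b * m * n

def tt_two_core_flops(b, m, n, r):
    """Same product through two TT cores of internal rank r:
    contract the (m, r) side first, then the (r, n) side."""
    return 2 * b * m * r + 2 * b * r * n  # = 2 * b * (m + n) * r

# For m == n the break-even point is r == m / 2, matching the
# r <= min(m, n) / 2 rule of thumb above.
for r in (30, 240, 480):
    ratio = dense_flops(128, 1024, 1024) / tt_two_core_flops(128, 1024, 1024, r)
    print(f"rank {r}: {ratio:.1f}x fewer FLOPs than dense")
```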
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you very much for your thorough review and fruitful comments! We have carefully read all your feedback and addressed your concerns and questions. We would greatly appreciate it if you could take some time to review our responses.
We will stay online these days and are happy to address any further questions or concerns you may have. We look forward to your continued feedback on our work.
Thank you again for your time and consideration!
---
Rebuttal 2:
Comment: Dear Reviewer hvAM,
We sincerely appreciate your valuable comments and suggestions for our paper CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization.
We have carefully addressed all your comments in our response above:
* For the baseline times, the results are provided in the above response and will be included in our revision.
* Regarding the scalability of our approach, we have demonstrated the significant potential of this method in **pre-training CodeBERT, a model with 357 million parameters**, as detailed in the above response and main author rebuttal.
* We have added more explanations in our responses regarding our rank-adaptive training for further compression and will include additional details in our revised manuscript.
* We conducted experiments to illustrate the relationship between compression ratios and speedup. The results, along with some mathematical analysis, are presented in our responses.
As the discussion period is drawing to a close in a few days, we would be very grateful if you could take a moment to review our responses. If our replies have satisfactorily addressed your concerns, we greatly appreciate it if you could acknowledge this in the discussion thread. If there are any remaining questions or concerns, please do not hesitate to reach out to us. We will stay online these days and are ready to respond promptly.
Thank you again for your time and consideration. We look forward to your feedback.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. Based on your rebuttal I increased my score from 5 to 6. | Summary: This work proposes using low-rank tensor train decomposition to accelerate deep learning model training and save memory usage. In the algorithm, both embedding tables in recommendation systems and large linear weights are written as a tensor train, and rank-adaptive optimization is used to adaptively reduce the rank without sacrificing the model accuracy. Multiple techniques, including TT embedding lookup, contraction path optimization, and CUDA graphs are combined to improve the efficiency. Experimental results show good memory saving and speedup without sacrificing model accuracy.
Strengths: originality: the idea to adaptively change the TT rank during training looks new. In addition to the training algorithm, multiple performance optimization techniques make the paper solid. In particular, the TT embedding lookup algorithm that does sampling and contraction in an interleaved way is also useful.
Weaknesses: presentation: section 4.2 is hard to understand. For contraction path optimization, it would be good to visualize the process using tensor diagrams.
limitation of the proposed algorithm: it would be good to discuss the limitations of the algorithm in detail. In particular, I believe low-rank weights can help only on relatively small datasets/tasks, where the low-rankness is a good regularizer. In a large-dataset/foundation-model setting, I doubt the work can beat the baseline algorithms. The algorithm is also different from previous algorithms that use low rank: in previous works, low-rank approximation is applied only to gradients rather than weights. To claim that the work can help foundation model training, we need larger datasets.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. line 42: replies -> relies
2. How to choose a reasonable initial TT rank?
3. how does accuracy compare between CoMERA, GaLore, and LTE?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Responses to weaknesses:
***Weakness 1**: presentation: section 4.2 is hard to understand. For contraction path optimization, it would be good to visualize the process using tensor diagrams.*
Response: Thanks a lot for the suggestion! We prepared the tensor diagrams to visualize the contraction paths, but removed them from the paper because of the page limits. **We have included the tensor diagrams in the attached PDF**, see **Figure 3**. We will include this diagram in the revision.
***Weakness 2**: limitation of the proposed algorithm: it would be good to discuss the limitation of the algorithm in detail. In particular, I believe low-rank weights can help only under relatively small datasets/tasks, under which the low-rankness is a good regularization. Under large dataset/foundation model setting, I suspect the work can beat the baseline algorithms. In particular, the algorithm is also different from previous algorithms that use low-rank. In previous works, low-rank approximation is only applied on gradients rather than weights. To claim that the work can help foundation model training, we need larger datasets.*
Response: Thank you for the important questions!
* The tensor-compression approach can achieve a higher compression ratio and better speedup on relatively small datasets. On larger datasets and models, a higher rank may be required to maintain the model performance. We are conducting larger experiments and the preliminary result is shown in the following **table** and **Figure 1 in the attached PDF file**. We pre-train the **CodeBERT-Large** model, released by Microsoft. It has 24 encoder blocks and in total **357 million parameters**, whose architecture is similar to BERT Large. The pre-training is done on the CodeSearchNet, a 20GB dataset widely used for pre-training LLM for automatic code generation. All linear layers in encoders are compressed into TT format. The embedding table and final linear layers are not compressed because CodeBERT enforces them to be the same, but CoMERA uses different tensor formats for TTM and for linear layers. Compared to uncompressed training, the tensor-compressed model shows a similar convergence curve and reaches a similar training loss, while compressing the whole model **4.25 times** and linear layers **9.77 times**. The tensor-compressed training is about **2.3X** faster in sequence length 128 and **1.9X** faster in sequence length 512 than uncompressed training on a single RTX 3090 GPU. The results demonstrate that our approach can scale up to larger models and tasks. We will investigate more in the future and are optimistic about the results on large models and datasets.
|| Pre-training results of CodeBERT-large | |
|-|----------|-|
|compression ratio|overall| 4.25$\times$|
| |tensorized layers| 9.77$\times$|
|training speedup|sequence length 128| 2.3$\times$|
| |sequence length 512| 1.9$\times$|
* Low-rank gradient approximation work, like GaLore, represents gradients by low-rank matrices, reducing the memory costs of the optimizer's first and second moments. However, GaLore still uses the full model and applies full back-propagation for the gradients. Finding the projector and projecting gradients into the compact space also brings extra computation overhead. In contrast, our approach directly compresses the weights, and the resulting gradients automatically have a compact low-rank tensorized form. Hence, our tensor-compression approach **does not have this computation overhead** and can reach **better memory savings and speedup**. A comparison is shown in Section 5.3 and Figure 1 in the paper. We will include more details comparing our method with low-rank gradient approximation methods.
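To illustrate the memory argument (compressing the weights shrinks the gradients and Adam moments automatically, with no separate projection step), here is a back-of-the-envelope sketch. The layer shape, TT factorization, and rank are hypothetical:

```python
def adam_training_bytes(n_params, bytes_per=4):
    """Weights + gradients + Adam first and second moments,
    all with the same number of entries."""
    return 4 * n_params * bytes_per

# Dense 1024 x 1024 layer.
dense = adam_training_bytes(1024 * 1024)

# Same layer as a two-core TT: (32*32) x (32*32) with rank 16.
# The trainable parameters ARE the cores, so gradients and both
# Adam moments shrink along with them.
tt_params = 32 * 32 * 16 + 16 * 32 * 32
tt = adam_training_bytes(tt_params)
print(f"dense: {dense} B, TT: {tt} B ({dense // tt}x smaller)")
```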
## Responses to questions:
***Question 1**: line 42: replies -> relies*
Response: Thanks! We will fix the typo.
***Question 2**: How to choose a reasonable initial TT rank?*
Response: Thank you for the interesting question! Our work **adaptively** determines the TT ranks during training. In general, it is a very challenging problem to choose, prior to training, good initial TT ranks that maximize compression without sacrificing model performance. In our approach, we usually start with relatively large ranks and let our method gradually prune the ranks during training. This choice gives good experimental results, as shown in Section 5.
***Question 3**: how does accuracy compare between CoMERA, GaLore, and LTE?*
Response: CoMERA and GaLore achieve almost the same validation accuracy, 64%, on the MNLI dataset. However, the LTE approach does not converge on this task with its default settings.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you very much for your thorough review and fruitful comments! We have carefully read all your feedback and addressed your concerns and questions. We would greatly appreciate it if you could take some time to review our responses.
We will stay online these days and are happy to address any further questions or concerns you may have. We look forward to your continued feedback on our work.
Thank you again for your time and consideration!
---
Rebuttal Comment 1.2:
Comment: Dear Reviewer 6ExQ,
We sincerely appreciate your valuable comments and suggestions for our paper CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization.
We have carefully addressed all your comments in our response above. For the tensor diagrams for the contraction paths, we have attached our previously prepared **tensor diagrams in the attached PDF** and will include them in our revision to improve readability. Regarding the scalability of our approach, we have demonstrated the significant potential of this method in **pre-training CodeBERT, a model with 357 million parameters**, as detailed in the above response and main author rebuttal. Additionally, we have discussed the differences between our approach and previous low-rank methods, such as GaLore, and answered other questions in the above response.
As the discussion period is drawing to a close in a few days, we would be very grateful if you could take a moment to review our responses. If our replies have satisfactorily addressed your concerns, we greatly appreciate it if you could acknowledge this in the discussion thread. If there are any remaining questions or concerns, please do not hesitate to reach out to us. We will stay online these days and are ready to respond promptly.
Thank you again for your time and consideration. We look forward to your feedback.
---
Rebuttal 2:
Title: Reminder: discussion window will close in 1 day
Comment: Dear Reviewer 6ExQ,
Thanks a lot for your constructive comments about our submission CoMERA.
We fully understand that you may be busy with many deadlines at this moment. As the discussion window will close in 1 day, we would highly appreciate it if you could read our rebuttal.
In summary, we have (1) provided a detailed tensor network diagram in the attached PDF file, (2) shown the promising compression and 2.5X speedup in pre-training CodeBERT-large, and (3) addressed other minor issues such as typos, initial rank setting, and the accuracy of GaLore and LTE.
If our responses have addressed your comments well, we would highly appreciate it if you could acknowledge this. If any of our responses need further clarification, please feel free to let us know! We are staying online and are happy to provide further clarification in a timely manner.
---
Rebuttal Comment 2.1:
Title: Reply to comments
Comment: I would like to thank authors for the detailed feedback. I've decided to raise my score to 6.
---
Reply to Comment 2.1.1:
Title: Thanks for your discussion
Comment: Dear Reviewer 6ExQ,
Thanks for your participation in the discussion and for recognizing our work. Your review feedback helped a lot to improve the quality of our work. | Summary: This paper leverages the tensor decomposition concept in model training and tries to address two questions: the first is how to enable rank-adaptive, and the second one is how to obtain real efficiency. This paper achieves a 2 − 3× speedup per training epoch compared with standard training with on-par accuracy.
Strengths: * An easy paper to follow
* Although tensor decomposition is not new, they successfully enable real speed-up with several performance optimization techniques.
* Real speed up and on-par accuracy on small models.
Weaknesses: * The problem formulation in 3.1 and how to solve the l0 norm is not quite new.
* Contraction Path Optimization is also not a new topic. The authors should cite previous work, e.g. [1] and discuss differences, and highlight the uniqueness.
* Does this training take more time in terms of convergence? And could the author discuss more about the training difficulty of tensorized layers and normal layers.
[1] Gu, Jiaqi, et al. "Heat: Hardware-efficient automatic tensor decomposition for transformer compression." arXiv preprint arXiv:2211.16749 (2022).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness before.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Weakness 1**: “The problem formulation in 3.1 and how to solve the l0 norm is not quite new.”*
Response: Thanks a lot for the comment!
* Our novelty is to formulate the problem of balancing performance and size as a **multi-objective** problem. **Linear scalarization**, which gently prunes ranks, and achievement scalarization, which targets deployment requirements, are applied in the early and late stages, respectively, to solve this problem. **To our knowledge, this is an early attempt to apply such a multi-objective optimization approach** to satisfy deployment requirements. While the formulation involves l0 and l1 regularization as in single-objective optimization, **the objectives differ from those in the existing literature, and the mathematical analysis is entirely different.**
* Using the l1 relaxation alone can cause both theoretical and practical issues:
* First, the l1 relaxation cannot effectively control the tensor ranks because magnitudes of TT factors can keep growing. Hence, an extra l2 norm regularization term for the tensor cores is added. Mathematically, the new problem with the l2 norm regularization is equivalent to a new constrained multi-objective optimization problem (10) in the paper whose tensor cores are bounded. The equivalence and related theoretical analysis are shown in Proposition 3.1 and its proof.
* Second, the l1 relaxation does not correctly reflect the model size in the achievement scalarization problem of the late stage. Thus, we propose to use the l0 norm for the comparison between the performance gap and the size gap. This **l0-based measure is used as a switching condition** *between* Eq. (13) and Eq. (14), and the **l1 norm is then used in the numerical implementation** *inside* the solvers of Eqs. (13) and (14). This is completely different from an l1-relaxed single-objective optimization, which no longer requires the l0 norm and does not require switching.
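A schematic of this l0-based switching rule (the function name, threshold, and gap comparison below are illustrative only; the actual conditions are Eqs. (13)-(14) in the paper):

```python
import numpy as np

def choose_subproblem(perf_gap, rank_gates, size_target, eps=1e-3):
    """Measure the size gap with an l0 count of still-active rank-gate
    entries, then decide which scalarized subproblem to step on next;
    the smooth l1 surrogate is used only inside that subproblem."""
    active = sum(int((np.abs(g) > eps).sum()) for g in rank_gates)
    size_gap = active - size_target
    return 'optimize_performance' if perf_gap >= size_gap else 'optimize_size'
```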
***Weakness 2**: Difference from Contraction Path Optimization in [1] HEAT paper.*
Response: Thanks! The contraction optimization in [1] is very different from ours. We will cite it and discuss differences. The key differences are summarized in the following table and described below:
* **Post-training compression VS compression in training.** The paper [1] considered **compression after training**, where trained model parameters were already given. Ours considers **end-to-end tensor-compressed training**, where no model parameters are known prior to training. It can reach computation savings and training acceleration, whereas post-training compression cannot.
* **Different tensor formats.** The paper [1] uses **CP, Tucker, and Tensor-Train-Matrix** formats to compress linear layers. In contrast, we use the **Tensor-Train** format for linear layers, and TT matrix for embedding tables.
* **One contraction path in forward VS d+2 coupled contraction paths in forward and backward.** The paper [1] only discussed **single-path optimization for forward propagation in CP format**. We have jointly optimized **d+2 contraction paths in both forward- and back-propagation in TT format**. Since these contractions can be coupled, we have also minimized the overall computation cost by reusing intermediate results. Finally, we have provided a **theoretical analysis**, Proposition 4.1, demonstrating that the proposed path is near-optimal for large batch sizes.
| | CoMERA | HEAT |
|----------|-|----------------|
|Training|**end-to-end compressed training** |post-training compression|
|Tensor format| TT for linear layers and TT matrix for embedding |CP, Tucker, and TT matrix for linear layers|
|Contraction path|jointly optimize **d+2 coupled paths** for **forward and back** in TT |one path for forward-propagation in CP|
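Contraction-path search for a single TT-format forward pass can be experimented with using NumPy's built-in `einsum_path` (this explores only one path, not the joint optimization of d+2 coupled forward/backward paths described above; the shapes are hypothetical):

```python
import numpy as np

b, m1, m2, n1, n2, r = 256, 32, 32, 32, 32, 16
x  = np.random.rand(b, m1, m2)   # input reshaped onto the TT index grid
G1 = np.random.rand(m1, n1, r)   # TT core 1 (boundary ranks of 1 dropped)
G2 = np.random.rand(r, m2, n2)   # TT core 2

# y[b, n1, n2] = sum over m1, m2, r of x * G1 * G2.
# einsum_path searches the pairwise contraction order up front.
path, info = np.einsum_path('bij,iar,rjc->bac', x, G1, G2, optimize='optimal')
y = np.einsum('bij,iar,rjc->bac', x, G1, G2, optimize=path)
```

Precomputing the path once and reusing it across iterations avoids re-running the search every training step, in the same spirit as fixing an optimized path ahead of training.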
***Weakness 3.1**: Does this training take more time in terms of convergence?*
Response: Thanks! In short, empirically our method is **2-3X** faster in the whole training process than uncompressed training when training transformers on a single GPU, but we do not have theoretical guarantees about the number of epochs although we observed that they used a similar number of epochs. We have provided detailed explanations in the **main author's rebuttal**. Some key points are summarized below:
* Training neural networks is a highly non-convex optimization problem in both compressed and uncompressed formats, making theoretical convergence analysis very complicated. The overall training time depends on (1) the number of epochs and (2) time per epoch. While we observed consistent **2-3X** speedup in terms of (2), point (1) is highly case-dependent for almost all non-convex optimization solvers.
* Our CoMERA has a **similar empirical convergence behavior** to the uncompressed training on our tested cases. We observe that on both 6-encoder transformer and DLRM, shown in Figure 6 on paper and **Figure 2 in the attached PDF file**.
* On all tested transformers, CoMERA generally takes **2-3X less training time** because it has convergence curves similar to the uncompressed model, and each epoch of the tensorized model is 2-3X faster than standard training.
Although our method has similar (and sometimes better) convergence behavior compared with standard training, we think that it could be misleading to draw any formal conclusion at this moment without a theoretical proof (which may or may not exist).
***Weakness 3.2**: And could the author discuss more about the training difficulty of tensorized layers and normal layers.*
Response: From our experiments, we did not observe particular difficulties in training tensorized layers. Adam is used to train the tensor-compressed model. The training hyperparameters, such as learning rate and weight decay, differ from standard training because of the different trainable parameters and loss landscapes, but the convergence behavior is similar. We will provide more details on training tensor-compressed models in the revised paper and conduct more analysis in the future.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you very much for your thorough review and fruitful comments! We have carefully read all your feedback and addressed your concerns and questions. We would greatly appreciate it if you could take some time to review our responses.
We will stay online these days and are happy to address any further questions or concerns you may have. We look forward to your continued feedback on our work.
Thank you again for your time and consideration!
---
Rebuttal Comment 1.2:
Comment: Dear Reviewer 1f6L,
We sincerely appreciate your fruitful comments and suggestions for our paper **CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization**.
We have carefully addressed all your comments in our response above. In particular, we have clarified the **novelty of our work**, provided a **comparison with HEAT**, and discussed the **convergence and overall training time** of the proposed algorithm. Additionally, beyond the results mentioned in the paper, we have demonstrated the significant potential of this method in **pre-training CodeBERT, a model with 357 million parameters**, as detailed in the main author rebuttal.
As the discussion period is drawing to a close in a few days, we would be very grateful if you could take a moment to review our responses. If our replies have satisfactorily addressed your concerns, we greatly appreciate it if you could acknowledge this in the discussion thread. If there are any remaining questions or concerns, please do not hesitate to reach out to us. We will stay online these days and are ready to respond promptly.
Thank you again for your time and consideration. We look forward to your feedback.
---
Rebuttal 2:
Title: follow-up clarification about CodeBERT-large pretraining accuracy and tensor rank/architecture search.
Comment: Dear Reviewer 1f6L,
Thanks a lot for your discussion and detailed technical discussion.
We agree that there is plenty of work on searching tensor ranks and architectures in the machine learning community. Almost all of it concerns tensor data compression and post-training model compression. We would like to remark that **searching the architecture and ranks during training is a much more challenging and rarely investigated task**: the model parameters that we need to compress are not given in advance, and we cannot afford a full training run for each architecture/rank setting due to the huge training cost. This is completely different from tensor data compression and post-training compression, where one can easily evaluate the quality of a rank/architecture setting with a cheap forward propagation. This is also why compressed training (even in the simpler low-rank matrix format) is a much more challenging and more important task.
Regarding the compression ratio and accuracy in tensor-compressed pre-training, we also would like to remark **three key different features in the LLM domain**:
(1) It is natural that compressed models have larger training loss, since its optimization feasible set is smaller than that of an uncompressed training. However, **a slightly larger training loss does not mean worse testing accuracy, because tensorized models have shown smaller generalization gaps (i.e., the difference between training accuracy and testing accuracy) than uncompressed models in many cases**. As an example, let us consider BERT-large pre-training using the same model architecture on the WikiText dataset. Standard pre-training produced a training loss of 1.26, our CoMERA produced a trainining loss of 1.45 (which is worse by 0.19). However, **the compressed model produced by CoMERA outperformed standard BERT-Large in 2 of 3 downstream testing tasks** due to its smaller generalization gaps, as shown below:
**Testing accuracy of standard BERT and tensorized BERT on various downstream tasks.**
|Models| Accuracy on SST2 |Accuracy on MRPC| Accuracy on SQuAD|
|-|-|-|-|
|Standard BERT-Large |91.74%| 86.00%|**90.68%**|
| Tensorized BERT-Large (ours)|**92.10%**|**86.82%**|88.76%|
Since the downstream testing datasets of CodeBERT were not released to the public by its developers, we could not perform the same downstream testing at this moment (we are trying to generate a few downstream testing datasets for CodeBERT ourselves, but it takes time). However, we can make a prediction based on the available training results. Right now, the training loss of CodeBERT using standard pre-training is $0.24$, and the training loss produced by our CoMERA is $0.36$. The difference is only **0.12**, which is much better than the loss difference (**0.19**) on the WikiText dataset. Considering the smaller generalization gap of tensorized models, it is reasonable to predict that our compressed CodeBERT-large model will have **better (or at least similar) testing accuracy on downstream code generation tasks**, once those testing datasets are available.
(2) Different from regular neural network training, which often uses a small dataset so that the network is over-parameterized and can admit a large compression ratio, LLMs (even small-size LLMs like BERT) are pre-trained on a super large dataset and the network is **under-parameterized**. Meanwhile, when those LLMs are released by their developers (often giant tech companies), many tricks have been tried to make sure that these LLMs cannot be dramatically compressed, otherwise customers would quickly switch to much smaller LLMs. As a result, **even a $4.25\times$ overall compression ratio is very large for LLM pre-training**. Note that (i) we only compressed part of the layers (with a $\sim 10\times$ compression ratio on these layers); the embedding tables and output linear layers use a transposed architecture and thus we did not compress them; (ii) the LLM compression ratio can be further boosted if we combine CoMERA with quantization techniques.
(3) **The importance of the $2.5\times$ speedup and the $4.25\times$ overall compression ratio on CodeBERT-large should not be underestimated**. This means that we can reduce GPU-hours by more than $2.5\times$. **A $2.5\times$ GPU-hour reduction can make a huge impact in the LLM community**, since current LLM pre-training costs millions of US dollars per training run. As a comparison, the recently released and very popular GaLore (an oral paper at ICML 2024) has much less memory saving than CoMERA (as shown in our paper) and actually slows down the training.
Again, we would like to thank Reviewer 1f6L for his/her technical feedback, which will greatly improve the quality of our work. We hope that the above technical details and potential impact in the LLM community can be considered.
---
Rebuttal Comment 2.1:
Comment: Thank you for your quick response.
First, the evaluated test accuracy on standard BERT already shows that CoMERA may lead to an accuracy loss on SQuAD (90.68% → 88.76%). The results cannot convince me that CoMERA can yield on-par accuracy with the claimed speedup on large models. I am not saying that the claimed speedup is insignificant. My concern is whether sacrificing accuracy in the pre-training stage is meaningful and whether your claimed compression ratio may not hold for large models.
Second, your comment on GaLore is neither correct nor fairly assessed. GaLore theoretically guarantees the same performance as full-rank training, but CoMERA cannot.
---
Rebuttal 3:
Title: Thanks for your follow-up, and further discussion about pre-training performance.
Comment: Dear Reviewer 1f6L,
Thanks for your quick follow-up and insightful discussion.
Your comment on GaLore is insightful. Indeed, the comparison should not focus only on compression ratio and GPU-hours. We completely agree that GaLore has a better convergence guarantee. Specifically, it will keep the same accuracy as standard training, under the assumption that the SVD compression of the gradients does not lose much gradient information. Meanwhile, we would also like to gently point out that GaLore does not guarantee convergence and may face divergence issues when it has the same high compression ratio as CoMERA, since the gradient information loss is huge.
It is also worth noting that practical LLM pre-training iterations are normally **bounded by a given limited budget (e.g., dollar amount or GPU-hours)**. As a result, many large-size LLMs are pre-trained for only 1 or 2 epochs in practice. **With the same limited budget of GPU-hours, CoMERA can train for $2.5\times$ more iterations than standard pre-training and GaLore, achieving better accuracy**.
Regarding the testing accuracy of CoMERA: the testing accuracy of LLMs is evaluated on **multiple downstream tasks**, which are quite different from the pre-training dataset. As a result, a model (even one with a better training loss) often behaves very differently across downstream tasks: it may have better testing accuracy on some tasks but worse testing accuracy on others. This is very normal, and it is exactly the case for our tensorized BERT-large model: it outperformed the baseline BERT-large on two downstream tasks (SST2 and MRPC) and under-performed baseline BERT-large on one downstream task (SQuAD). Currently our model beats standard BERT-large on **two out of three** downstream tasks (**at the cost of $2.5\times$ fewer GPU-hours** in pre-training), and we look forward to its future performance as our pre-training methods keep evolving.
Thanks again for your timely discussion. Your participation in the discussion is highly appreciated: it will enable a fair and thorough evaluation of our work. Please feel free to let us know if you have further questions.
Best regards,
The authors.
---
Rebuttal 4:
Comment: I am still doubtful about your statement about your fine-tuning accuracy. Based on my understanding, SQuAD is a relatively harder task than SST2 and MRPC (https://delvify.ai/glue/). For easier tasks, the compressed model may perform well at testing, while for harder tasks, the compressed model shows reduced accuracy (90.68 → 88.76, as in your case). So, my major concern remains that your claimed compression ratio and speedup are only feasible for simple tasks and may not hold for harder tasks/pre-training, making your claim that you can train LLMs from scratch (as you compare with GaLore) weaker. The accuracy number further makes me concerned about this point.
Also, I am not very confident that LLM weight matrices show low-rank characteristics, and enforcing the model's weight matrices into a low-rank format from the very start of training may not be a good choice.
My score is between 5 and 6, and I'd like to see the author report the claimed speedup with a comparable loss with the baseline in the pre-training stage of larger models instead of the speedup on smaller models/under considerable loss in the future version.
---
Rebuttal 5:
Title: Thanks, and further information about pre-training (including LlaMA models)
Comment: Dear Reviewer 1f6L,
Thanks a lot for your further follow-up discussion: we highly appreciate it, especially considering that you may also have many deadlines at this critical moment.
Your concern is very valid. Meanwhile, we would like to share more details from our own experience with the evaluation and pre-training of LLMs.
Regarding SQuAD: you are VERY right, this is a more challenging downstream task. But the performance difference across downstream tasks is not caused only by the difficulty of a specific task like SQuAD. For instance, at a different compression ratio, we find that the accuracy on SQuAD can be higher than the baseline, while the accuracy on the easier task MRPC may drop a little. As a result, we think that the main reasons are as follows:
--(1) **We tested the pre-trained model on downstream tasks without any fine-tuning**. The performance on the various downstream tasks can definitely be further improved if fine-tuning is performed on each downstream dataset.
--(2) Since we did not conduct task-specific fine-tuning, we are actually evaluating a pre-trained model directly on multiple very different downstream tasks. Mathematically, this is equivalent to solving an optimization problem with objective function $f_0(x)$ (in pre-training), then evaluating its solution quality on three different objective functions $f_1(x)$, $f_2(x)$ and $f_3(x)$ (on downstream tasks). Without fine-tuning (i.e., optimizing $f_1(x)$, $f_2(x)$ and $f_3(x)$ **individually**), it is very normal that some tasks show superior performance while others show slightly degraded performance.
Regarding pre-training **larger LLMs** like LLaMA-1B (perhaps you mean LLaMA-7B?): it is indeed our goal to test our method on large models like LLaMA. However, as an academic group, we are constrained by the computing budget. Here is one budget data point: pre-training GPT-2 (with 762 million parameters) would need 100,000 A100 GPU-hours (equivalent to 250,000 USD at market cloud-computing prices). Based on the scaling law released by Google DeepMind (Hoffmann, et al., "Training compute-optimal large language models," arXiv:2203.15556, 2022), we estimate that the pre-training budget of LLaMA-1B would be around **430,000 USD**, which is far beyond the financial capability of most (if not all) academic groups. Pre-training LLaMA-7B would cost **millions of USD**.
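For readers curious how such an estimate can be reproduced, here is a back-of-the-envelope sketch (our own illustration, not the exact calculation above), assuming Chinchilla-style compute-optimal scaling where cost grows roughly as the square of the parameter count:

```python
def pretraining_cost_usd(params, ref_params=762e6, ref_cost_usd=250_000):
    """Rough Chinchilla-style extrapolation: compute ~ params * tokens, and
    compute-optimal tokens scale ~ params, so cost ~ params**2.
    Reference point: GPT-2 (762M parameters) at ~250,000 USD."""
    return ref_cost_usd * (params / ref_params) ** 2

# A 1B-parameter model lands near the ~430,000 USD figure quoted above.
print(f"LLaMA-1B: ~{pretraining_cost_usd(1e9):,.0f} USD")
```

The quadratic growth follows from compute ≈ 6 · N · D with the compute-optimal token count D proportional to N; the reference cost per GPU-hour is the only other assumption.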
We are trying very hard to get more computing resources to validate our ideas on LLaMA-scale models, which is super challenging for an academic group. To be honest, starting a company may be a better option to achieve this goal (that is why we mentioned some IP concerns when replying to a reviewer's question about a full code release). We will be very happy to share our results with the community in the future, once we obtain testing results on LLaMA-scale models.
Thanks again for your insightful discussions. Your active participation in the discussion is highly appreciated. | Rebuttal 1:
Rebuttal: ## Common Concerns
We would like to thank the reviewers for their fruitful suggestions and comments. **We have addressed ALL review comments** (see the response to each reviewer).
Here we summarize some concerns raised during the review process.
## **Scalability of tensor-compressed training to larger models**
A few reviewers are concerned about whether tensor-compressed training scales up to larger models and tasks. This is also a problem we are actively investigating. We are conducting larger experiments, and the preliminary results are shown in the following **table** and **Figure 1 in the attached PDF file**. We pre-train the **CodeBERT-Large** model released by Microsoft. It has 24 encoder blocks and **357 million parameters** in total, and its architecture is similar to BERT-Large. The pre-training is done on CodeSearchNet, a 20GB dataset widely used for training LLMs for automatic code generation. All linear layers in the encoders are compressed into TT format. The embedding table and final linear layers are not compressed, because CodeBERT enforces them to be the same while CoMERA uses different tensor formats for TTM and for linear layers. Compared to uncompressed training, the tensor-compressed model shows a similar convergence curve and reaches a similar training loss, while compressing the whole model **4.25$\times$** and the linear layers **9.77$\times$**. Tensor-compressed training is about **2.3$\times$** faster at sequence length 128 and **1.9$\times$** faster at sequence length 512 than uncompressed training on a single RTX 3090 GPU. These results demonstrate that our approach can scale up to larger models and tasks. We will investigate further in the future and are optimistic about the results on large models and datasets.
|| Pre-training results of CodeBERT | |
|-|----------|-|
|compression ratio|overall| 4.25$\times$|
| |tensorized layers| 9.77$\times$|
|training speedup|sequence length 128| 2.3$\times$|
| |sequence length 512| 1.9$\times$|
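As a sanity check on the table above, the overall and per-layer compression ratios together pin down what fraction of the model's parameters must sit in the tensorized layers. The helper below is our own illustration (`compressed_fraction` is a hypothetical name), using only the figures quoted in this response:

```python
def compressed_fraction(total_params, overall_ratio, layer_ratio):
    """Solve for c, the parameter count of the tensorized layers, from
    (total - c) + c / layer_ratio = total / overall_ratio."""
    c = (total_params - total_params / overall_ratio) / (1.0 - 1.0 / layer_ratio)
    return c / total_params

# CodeBERT-Large: 357M parameters, 4.25x overall, 9.77x on tensorized layers
frac = compressed_fraction(357e6, 4.25, 9.77)
print(f"~{frac:.0%} of parameters are in the tensorized layers")  # ~85%
```

In other words, the 4.25$\times$ overall figure is consistent with roughly 85% of the parameters being compressed at 9.77$\times$ while the embedding table and output layers stay uncompressed.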
## **Overall training time and total epochs.**
The paper mainly compares the per-epoch time of CoMERA and uncompressed training. The reviewers also wonder about the overall training time and the number of epochs. We address the common concern here.
In short, empirically our method is **2-3X** faster than uncompressed training when training transformers on a single GPU, but we do not have theoretical guarantees about the number of epochs, although we observed that both methods used a similar number of epochs. Detailed explanations are provided below:
* Training neural networks is a highly non-convex optimization problem in both compressed and uncompressed formats, making the theoretical convergence analysis very complicated. Even existing theoretical analysis of uncompressed training is done on simplified cases, such as the two-layer neural networks with infinite width. The overall training time depends on (1) the number of epochs and (2) time per epoch. While we observed 2-3X speedup in terms of (2), point (1) is highly case-dependent for almost all non-convex optimization solvers.
* Our CoMERA has a **similar empirical convergence behavior** to the uncompressed training in our tested cases. For the 6-encoder transformer, CoMERA converges a little slower than standard training at the beginning, as shown in Figure 6 in the paper, but finally CoMERA achieves a higher validation accuracy. For the DLRM task, CoMERA needs fewer iterations than standard training, as shown in **Figure 2 in the attached PDF file**. We will also add that figure to our revised manuscript.
* On all tested transformers, CoMERA generally takes 2-3X less training time because it has similar convergence curves to the uncompressed model, while each epoch of the tensorized model is 2-3X faster than standard training.
Although our method has similar (and sometimes better) convergence behavior compared with standard training, we think it could be misleading to draw any conclusion at this moment without a theoretical proof (which may or may not exist).
## **Figures in PDF file** (attached)
**Figure 1.** Training loss of CodeBERT-large.
* Figure 1 shows the empirical convergence curve of tensor-compressed training on the **CodeBERT-Large** model, which has 24 encoder blocks and **357 million** parameters in total before compression.
**Figure 2.** The validation normalized cross-entropy loss of training DLRM model.
* It presents the validation loss of CoMERA and uncompressed training during training DLRM. CoMERA converges slightly faster in terms of iterations than the uncompressed training on this task.
**Figure 3.** Tensor diagrams for the contraction paths of TT forward and backward propagation.
* The figure visualizes the proposed contraction paths for TT forward and backward propagation, as detailed in Section 4.2.
Pdf: /pdf/b55e57d5c464efe082a21e22d5fd6556d6014954.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Entropy-regularized Diffusion Policy with Q-Ensembles for Offline Reinforcement Learning | Accept (poster) | Summary: This work introduces a new (fast) SDE-based sampling technique to derive actions from a diffusion based policy.
Strengths: - Compares favorably against most relevant benchmark (Diff-QL)
- Good ablation studies; this was helpful to understand the importance of each component.
- Strong connection to standard tools in SDEs from diffusion literature
Weaknesses: I'm not sure how big a weakness this is, but there appears to be some disconnect in the theory vs. experiments for discounting, that is infinite-time horizons. Why do you introduce a discount factor but in all equations use a finite time horizon?
- High memory requirement: What is the memory cost (i.e. in VRAM) for an ensemble of 64 models?
- Inconsistent use of "max Q trick" from [27]. Can you explain why this is used only in a limited set of environments? What happens if it is removed? A performance comparison would be useful.
**Experiments**
- Given that this is a (mostly) experimental paper, it would be good to have more experiments, in particular some studies on the sensitivity of $\beta$ (is it OK to be equal for all envs? it seems like the magnitude of Q-value variance for the LCB should be MDP-dependent.)
- Use of $T=5$ steps: can you show any (simple) experiments illustrating the use of various $T$ values? It seems implicitly dependent on the environment since it somehow controls the expressivity of the policy (please correct me if wrong)
- L153, choice of $\eta$ (see above comments)
- In figure 3, can the training time be extended? Especially regarding the decrease in performance for antmaze-med-play; if there is always a decrease over time in training this is important to note. (on the flip side, it can be noted as a useful limitation for future work to improve on)
- Using non-scalar comparisons would be beneficial to gain a better idea of the statistical comparison of all algorithms. (cf. https://github.com/google-research/rliable in case data is still available, it should be straightforward to generate the plots)
Technical Quality: 4
Clarity: 3
Questions for Authors: - it's a bit hard to see variance of green line in Fig 1, Perhaps extracting it / using IQM plots with error bars would be helpful. (I know it is a toy example, but it does a good job illustrating the utility)
- may just be me, but I believe using $t$ index in RL terms is more standard than $i$, similarly for $T$ vs. $L$. Perhaps consider swapping this notation?
- on a similar notation front, the use of $a$ seems like a poor choice since starting at L76 it is very general (not yet discussing actions). Maybe switch to $x$?
- I'm curious why different gradient norms were required; do you have any insight here? Was there parameter/grad divergence in some cases?
- Missing reference to SUNRISE? (I know it is UCB instead of LCB, but perhaps still useful to comment on the relationship): https://arxiv.org/pdf/2007.04938
- In algorithm 1, can you please include Q^LCB_psi rather than Q_psi alone (if it is correct)? Otherwise it is confusing where the previously discussed value estimate comes in.
- Can you explain a bit more about the sense of "optimality" regarding the reverse time SDE?
- Use of Mish activation function seems a bit non-standard. How important is it?
- L356: can you give a rough time comparison of the diffusion policy vs. standard approaches?
- How does entropy regularization interact with the SDE sampling? Is MaxEnt required for diffusion to be useful?
- Can you increase linewidth in Fig 1a/c and Fig 3? Adding symbol/markers would be helpful too for accessibility
minor typographical:
- L10: "is tractable and that **it** can..."
- L27: rephrase "work introduce"
- L95-100, maybe point to Fig 1 to see multi-modality of proposed approach
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Beyond the weaknesses mentioned above, I believe the limitations were discussed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback and the opportunity to improve our work. Your insights are invaluable, and we look forward to incorporating these revisions to strengthen our submission.
**Weakness**
1. *Why do you introduce a discount factor but in all equations use a finite time horizon?*
Note that all considered environments are continuous tasks, and the return is computed using an infinite time horizon. Hence, a discount factor is used. On the other hand, a finite time horizon is used for the SDE steps. Note that the superscript $t$ of $a_i^t$ is the SDE step, and the subscript $i$ is the RL step.
2. *Memory Requirement of 64 Q-ensembles.*
For the Antmaze task, the model requires ~3GB and ~5.5GB of GPU memory for $M=4$ and $M=64$, respectively. We will report more details in the revised manuscript.
3. *The sensitivity of $\beta$.*
The additional experiments of $\beta=1,2,4$ are provided in **General Response Figure 5 and Table 1**. These results show that $\beta=4$ performs the best in both *antmaze-medium-diverse* and *antmaze-medium-play* environments. The value of $\beta$ determines the amount of pessimism, and of course, it can be tuned for different datasets.
4. *Use of "max Q trick".*
We follow the Diffusion-QL to only use max Q-backup in Antmaze tasks. The ablation study of max Q-backup on selected Antmaze tasks is provided in **General Response Table 2**. One can see that the max Q-backup trick improves the Antmaze results.
5. *Experiments on various $T$ values.*
The experiment of $T=3,5,10$ is shown in **General Response Table 3**. The step $T=5$ gives the best overall performance. Moreover, as shown in **General Response Figure 1**, when $T=2$, the diffusion model already achieves a good approximation of the original data distribution.
6. *Choice of $\eta$.*
We selected the value of $\eta$ by checking the most commonly used value for each task group. Compared to Diffusion-QL, which carefully tunes parameters for each environment, our settings are more general. Here, $\eta$ mainly balances the two losses, and we did not explore different values in the paper.
7. *Extend training time.*
We found that extending the training time decreases performance on the Antmaze task for most offline RL methods, such as Diffusion-QL. The Q-ensemble technique can alleviate this problem, but it remains unstable and challenging. We are happy to note this and list it as future work.
8. *Non-scalar comparisons.*
We appreciate the suggestion to use non-scalar comparisons for a more comprehensive statistical analysis. We have starred the project and will be happy to illustrate our non-scalar comparisons in the future.
**Questions**
1. *Figures improvement.*
Thank you for the suggestions on figure improvements. Due to the page limitation of the rebuttal PDF, we will try to update the figures (e.g., use IQM plots with error bars, increase the linewidth, and add symbols/markers) directly in the revised manuscript.
2. *Notation problems e.g. "$i$" vs. "$t$" and "$a$" vs. "$x$".*
We will consider swapping the notations to align more closely with standard RL terminology, making the presentation clearer.
3. *Why different gradient norms.*
Compared to Diffusion-QL, which carefully tunes hyperparameters (e.g., the norm value) for each environment, our settings are more general: the values are taken close to the average values used in Diffusion-QL, and we use the same gradient norm value for the same type of tasks.
4. *Missing Reference to SUNRISE.*
Thank you for the suggestion. We will include a discussion on SUNRISE and its relationship to our method, particularly how it compares to using LCB.
5. *$Q^\text{LCB}_\psi$ in Algorithm 1.*
Thanks for pointing this out, we will change it to "Update policy $\pi_\phi$ by (15) using $Q^\text{LCB}_\psi$".
6. *The sense of "optimality" regarding the reverse time SDE?*
The path of the reverse-time SDE can be diverse and tortuous due to the stochasticity of the diffusion term $\mathrm{d}w$, which requires many timesteps to reach the real data distribution. Our optimal sampling utilizes the conditional posterior trick (see Proposition 3.1) and can generate new samples in very few steps.
7. *Mish activation function.*
In this work, we didn't focus on architecture tuning and our network simply follows the Diffusion-QL with three dense layers and the Mish activation function.
8. *Time comparison of diffusion policy vs standard approaches.*
We have conducted additional experiments to compare the training and evaluation times of our method and a Gaussian-policy variant under different settings. The results are shown in **General Response Table 4**. While our diffusion policy requires more computational resources during training than standard Gaussian policies, the performance gains justify the additional time investment. The evaluation times remain comparable, ensuring practical applicability in real-world scenarios. These results provide a clearer understanding of the computational trade-offs and reinforce the robustness and effectiveness of our proposed method.
9. *How does entropy regularization interact with the SDE sampling?*
During training, we first sample actions from the SDE and then learn to maximize the approximation of the entropy to increase exploration of the environment in offline settings. In this stage, MaxEnt is helpful for policy learning. Then, at inference, we directly sample actions from the SDE, and the entropy does not affect the sampling. | Summary: The paper presents Entropy-Regularized Diffusion Policy with Q-Ensembles for offline reinforcement learning. This method addresses Q-value overestimation on out-of-distribution (OOD) data by using a mean-reverting stochastic differential equation (SDE) to transform action distributions into a Gaussian form, combined with entropy regularization for enhanced exploration and Q-ensembles for pessimistic Q-value estimation. The approach achieves state-of-the-art performance across D4RL benchmarks, especially on AntMaze.
Strengths: The paper introduces a method that combines entropy regularization with Q-ensembles within the framework of diffusion policies for offline RL, offering a novel solution to tackle Q-value overestimation and limited exploration of OOD samples. The use of a mean-reverting SDE to model action distributions is theoretically robust and aligns well with the goal of transforming actions into a tractable Gaussian form for efficient sampling. The authors provide detailed theoretical contributions, including the tractability of entropy regularization in diffusion policies and the benefits of using Q-ensembles for robust value estimation.
Weaknesses: 1. Computational Cost: High computational resources are required, including extensive training epochs on high-capacity GPUs, which limits accessibility and applicability in real-world scenarios with constrained resources.
2. Hyperparameter Sensitivity: Although some experiments with different hyperparameters are provided, a more thorough exploration of hyperparameters like entropy temperature and Q-ensemble size is needed to further validate the method's robustness.
3. Generality: The evaluation is robust but focused on D4RL benchmarks. More experiments on diverse or real-world offline RL tasks would strengthen the claims about the method's generalizability.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you please give a more thorough explanation on why the choice of hyperparameters, especially the entropy temperature plays a critical role in the performance. As we know that the adroit and kitchen tasks are more complex than Gym and AntMaze, I suspect on the explanation about narrowness of human demonstrations.
2. What are the potential challenges or limitations in extending this method to real-world offline RL tasks outside the D4RL benchmarks?
3. Could the authors provide more intuitive explanations or visualizations to illustrate the workings of the mean-reverting SDE and its impact on action distribution sampling?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have figured out the limitation of the proposed method on high computational cost and large inference time.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for recognizing the strengths of our work. We appreciate your positive feedback on the novelty and theoretical robustness of our proposed method.
1. *Computational cost.*
We acknowledge that diffusion policies require longer training and inference times due to the multi-step sampling process. However, the increase is manageable and justified by the significant performance gains. A detailed comparison is provided in **General Response Table 4**. Moreover, techniques like SDE/ODE solvers and diffusion distillation have proven useful for accelerating sampling. We regard this as future work and will try to apply the faster version to real-world scenarios.
2. *Hyperparameter sensitivity...exploration of hyperparameters like entropy temperature and Q-ensemble size is needed.*
Please let us clarify that the ablation studies on entropy temperature and Q-ensemble size are already provided in Table 3 and Table 4 of the paper. Moreover, we further add two ablation experiments on LCB coefficient $\beta$ and diffusion steps $T$ in **General Response Table 1 and Table 3**.
3. *Challenges and limitations in real-world applications.*
As discussed in the Conclusion section, the main challenges and limitations of our work are the action sampling time and the computation cost. The network is also too simple (only 3 dense layers) for complex real-world applications. Our future work will investigate real-time policy distillation under time and compute constraints and explore more efficient network architectures to address real-world datasets and scenarios.
4. *Could the authors provide more intuitive explanations or visualizations to illustrate the workings of the mean-reverting SDE and its impact on action distribution sampling.*
Intuitively, the mean-reverting SDE has a forward process (action to noise) and a reverse process (noise to action). The forward process is only used in training to provide an analytically tractable solution for score network learning. In inference, the reverse process (based on the trained score network) is used as the policy to generate actions conditioned on RL states. A rough visualization of the workings of the mean-reverting SDE is provided in **General Response Figure 4**. This will also be clarified in the revised manuscript.
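To complement this description, the forward (action-to-noise) direction of a mean-reverting SDE can be simulated with a plain Euler-Maruyama discretization. This is an illustrative sketch only; the drift strength, noise level, and step count below are placeholders, not the paper's settings:

```python
import numpy as np

def mr_sde_forward(a0, theta=2.0, sigma=1.0, steps=5, dt=0.2, seed=0):
    """Euler-Maruyama simulation of da = -theta * a * dt + sigma * dW,
    which pulls the action toward the mean (zero here) while injecting
    Gaussian noise, so actions end up approximately Gaussian."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a0, dtype=float).copy()
    for _ in range(steps):
        a = a - theta * a * dt + sigma * np.sqrt(dt) * rng.normal(size=a.shape)
    return a

# With sigma = 0 the process simply contracts toward the mean:
print(mr_sde_forward([1.0], sigma=0.0))  # ~0.6**5, i.e. about 0.078
```

The reverse direction, conditioned on the RL state and driven by the trained score network, would run this process backward to turn Gaussian noise back into actions.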
---
Rebuttal 2:
Title: Reponse to rebuttal
Comment: Thank you for your rebuttal. However, I believe there may have been a misunderstanding regarding my first question. In Table 1, your method underperforms compared to others (e.g., Diff-QL) in the Adroit and Kitchen environments. I question your explanation attributing this to the narrowness of human demonstrations. I think it would be more convincing to support your statement by generating additional data for retraining the policy rather than focusing on tuning $\alpha$. Adroit and Kitchen environments are generally considered more challenging than Gym and AntMaze, which raises further questions about this approach.
Additionally, Table 3 suggests that the auto-selection of entropy temperature is more critical for performance in Adroit and Kitchen than that in Gym and AntMaze. Could you elaborate on why this is the case?
---
Rebuttal Comment 2.1:
Comment: We sincerely appreciate your timely feedback and follow-up questions.
1. *Why can't we add additional training data?*
In offline RL, we typically avoid modifying datasets to ensure fair comparisons across different methods. While adding additional training data is indeed a potential solution for improving performance, in this study, we focused on making algorithmic adjustments to ensure consistency and fairness in evaluation. This approach allows us to directly compare the effectiveness of our method against others without introducing external variables. However, we acknowledge that augmenting data could be beneficial in real-world applications and plan to explore this in future work.
2. *The results in Table 1.*
In Table 1, our method underperforms in the Adroit and Kitchen environments compared to Diffusion-QL, mainly due to the fixed entropy temperature $\alpha$ set at 0.01. This fixed $\alpha$ leads the agent to continuously explore the action space throughout the entire training process, even when encountering unseen states. While exploration is generally advantageous, it can be detrimental in environments with limited data variability like Adroit and Kitchen. With sufficient data, the actor is encouraged to explore guided by accurate Q-values estimates; however, in the case of unseen state-action pairs, such exploration may harm performance. Additionally, unlike in antmaze tasks, random actions are more likely to negatively impact performance in more complex environments like Kitchen.
Overall, excessive exploration prevents the agent from effectively leveraging learned strategies from human demonstrations, and random actions are more detrimental in Adroit and Kitchen, where precise control is essential. These factors contribute to the lower performance observed compared to Diffusion-QL in these tasks.
3. *Why tuning $\alpha$?*
Auto-tuning $\alpha$ is useful because it dynamically adjusts the balance between exploration and exploitation based on the data. Initially, $\alpha$ is set to a non-zero value to encourage exploration of the action space. As training progresses, especially in environments like Adroit and Kitchen, where precise control is crucial, the auto-tuning mechanism reduces $\alpha$ to near zero. Also, with more accurate Q-functions later in training, this shift towards exploitation helps the agent focus on optimal actions, improving performance. | Summary: The paper proposes to use reverse-time SDE as the policy in an actor-critic algorithm. To make it work, entropy regularization is added, for which an entropy approximation scheme is suggested. Furthermore, to improve stability, an ensemble of Q-networks is employed, and the pessimistic lower-confidence bound (LCB) is taken as the value, i.e., $\mathbb{E}[Q] - \beta \sqrt{\mathbb{V}[Q]}$.
Evaluations on D4RL show improved performance compared to baselines.
Strengths: Originality: good. The paper proposes a few novel ideas to make diffusion policy work in offline RL.
Quality: good. Overall presentation and evaluation are good. Tested only on D4RL, but this is common in offline RL.
Clarity: excellent.
Significance: excellent. Potentially a new baseline on D4RL, especially AntMaze.
Weaknesses: 1) Influence of multi-modality of the policy not sufficiently explored. Is this in the end what makes it better compared to using a Gaussian? Or is it rather the LCB and Q-ensemble? How would SAC with Q-ensemble and LCB-value perform (i.e., not using the diffusion policy)?
2) No comparison to other methods that use multi-modal policies, e.g., Gaussian mixture model policy ([Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies](https://proceedings.neurips.cc/paper_files/paper/2023/hash/429b5216a4d08850c586fbf809e17877-Abstract-Conference.html)) or energy-based policies ([Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow](https://arxiv.org/abs/2405.13629))
Technical Quality: 3
Clarity: 4
Questions for Authors: In the abstract, the authors write "we show that the entropy of such a policy is tractable". This sounds like it is analytically tractable. But in fact an approximation is used (Sec. 3.2). I would suggest reformulating the statement in the abstract to avoid such misunderstanding.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Computational limitations are briefly mentioned in the conclusion. It would be nice to have a more extended discussion of limitations and to provide some timing numbers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for recognizing the novelty and potential impact of our work. We appreciate your positive feedback and are happy to provide our point-to-point response below:
1. *Influence of multi-modality of the policy...SAC with Q-ensemble and LCB*
In offline RL, the pre-collected datasets are typically imbalanced and induce a complex multi-modal policy rather than a Gaussian. We provided a comparison in Table 2 of the paper, which shows that our model outperforms MSG (a Gaussian policy with Q-ensembles) on all AntMaze tasks, indicating that a multi-modal policy is important and improves over a Gaussian in offline RL. Also, the ablation study in Table 4 of the paper shows that increasing the ensemble size further improves performance. Unfortunately, we cannot directly compare our method with SAC on offline RL tasks, since SAC and its variants are online algorithms that need to interact with the environment.
2. *Comparison to Other Multi-Modal Policies*
Please allow us to clarify that we have already compared against behavior cloning (BC), the decision transformer (DT), and Diffusion-QL, all of which are non-Gaussian multi-modal policies. We appreciate the reviewer's suggestion and will be happy to evaluate the mentioned Gaussian-mixture and energy-based policies in future work.
3. *Reformulation of the Abstract*
Thank you for pointing out the unclear formulation in the abstract. We will revise the abstract to clarify that the entropy of the policy can be approximated in a tractable way.
4. *Computational Limitations and Timings*
We acknowledge that the computational time was only briefly mentioned. We have added experiments comparing the training and evaluation times across different policy types, diffusion steps, and numbers of critics. The results are shown in **General Response Table 4** and indicate that while the diffusion policy requires longer training times than the Gaussian policy, the increase is manageable and justified by the significant performance gains. The evaluation times remain comparable, suggesting that practical deployment of the trained models is feasible.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledged
Comment: I thank the authors for their responses. They answer all of my questions. | Summary: The paper proposes that adding entropy regularization to offline RL is beneficial, and using pessimistic Q-value estimation through ensemble methods can provide a better estimate of the Q-value. Figure 1 explicitly shows the benefit of the ensemble Q method. The methods show impressive performance in D4RL.
Strengths: The paper is well-organized and easy to follow. I found the pessimistic ensemble Q trick, which increases estimation accuracy (Figure 1), to be interesting. It is intriguing to see that adding entropy to the agent can also benefit offline RL in some tasks.
Weaknesses: I have some trouble overcoming some gaps in the mathematical proofs and found that some mathematical equations may not hold.
The empirical performance in Table 1 is not consistent with the training curve in Figure 3.
More detailed questions about these two aspects are in the questions section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. A typo in Eq(5). There is an unnecessary right parenthesis.
2. I found that the Mean-Reverting SDE is a mathematical rewrite of the VP SDE, see [42]. The VP SDE has the form
\begin{align*}
dx=-\frac{1}{2}\beta(t)xdt+\sqrt{\beta(t)}dw
\end{align*}
The Mean-Reverting SDE has the form
\begin{align*}
dx=-\theta_txdt+\sqrt{2\theta_t}dw
\end{align*}
3. These two SDEs are the same in my mind. Then why is it called Mean-Reverting SDE instead of VP SDE in [42], and why is [42] not cited in this context? Can the authors point out any differences between them?
4. Optimal Sampling seems to use the same trick as DDIM[41], using $p(x_{t-1}|x_t,x_0)$ for reverse sampling instead of $p(x_{t-1}|x_t)$ as in DDPM. If that's the case, I believe the authors should point out and properly cite DDIM in the relevant context.
5. It is amazing to see that in Figure 2, optimal sampling can have such great performance when $N=5$. Can the authors also provide its performance in this toy task when $N=1$ and $N=2$? I am curious about their performance with fewer steps.
6. It is hard for me to prove Eq(14). Can the authors elaborate on more details about its proof?
7. Can the authors elaborate more on how Eq(15) is derived from Eq(12) and Eq(13)? I guess the last term in Eq(15) should be $\log p(\hat a_i^1|s_i)$, not $\log p(\hat a_i^1|a_i^T,s_i)$. (There is a typo in Eq(15). It should be $a_i^T$ not $a_t^T$).
8. The paper proposes using automatic entropy tuning as in SAC in line 216. In line 551, the authors mention that the entropy target is the dimension of the action space. I guess it should be the negative dimension of the action space (as SAC does)? For the alpha loss in Eq (45), $\alpha$ depends on $s_i$, while SAC doesn't. Why is the alpha loss designed this way? Is there any difference in making $\alpha$ depend on $s_i$?
9. I found your method's performance in Antmaze-medium-play-v0, Antmaze-medium-diverse-v0, Antmaze-large-play-v0, and Antmaze-large-diverse-v0 in Table 1 to be inconsistent with the training curve in Figure 3. For example, the score of Antmaze-medium-diverse-v0 in the table is 91.6, while the training curve in Figure 2 shows the training curve score is around 40. Why do they differ so much? The same phenomenon occurs in the other three Antmaze environments. Can the authors explain the mismatch? Can the authors also provide training curves for other environments?
10. In Table 1, how is the score recorded? Is it the final round score, online model selection, offline model selection, or moving average?
11. In Eq (34) and (35), should $t$, $t-1$, and $0$ be subscript or superscript? Why is $t-1$ subscript and $t$ superscript?
12. Can the authors elaborate on why Eq(43) holds?
13. In Table 5, the loss type is Likelihood or Noise. What are their mathematical forms?
[41] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
[42] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has some gaps in the mathematical proofs. Additionally, more details of the experiments need to be added. I am more than willing to increase my score if my questions are well addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and insightful feedback. Your comments have been invaluable in guiding our efforts to refine and clarify our work. Below, we address your main concerns in detail:
1. *Mean-reverting SDE and VP SDE.*
Our mean-reverting SDE is derived from the famous Ornstein-Uhlenbeck (OU) process [1] which has the following form:
$$
\mathrm{d} x = \theta (\mu - x) \mathrm{d} t + \sigma \mathrm{d}w.
$$
As $t \to \infty$, its marginal distribution $p_t(x)$ converges to a stationary Gaussian centered at the mean value $\mu$, which gives the informative name "mean-reverting". We assume there is no prior knowledge of the actions and thus set $\mu = 0$ to generate actions from standard Gaussian noise. With $\mu = 0$, the mean-reverting SDE then has the same form as the VP SDE. However, [42] gives no solution of the continuous-time SDE: its authors start from perturbing data with multiple noise scales and generalize this idea to an infinite number of noise scales, which makes the perturbed data distributions evolve according to an SDE. They keep using the solution of DDPM, while we use Itô's formula to solve the continuous SDE. Compared to the original VP SDE, our mean-reverting SDE is analytically tractable (Eq.(2)) and thus its score $\nabla_{x} \log p_t(x)$ is easier to learn. More importantly, the solution of the mean-reverting SDE can be used for entropy approximation. We will clarify these points and cite [42] appropriately in the revised manuscript.
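As a small numerical illustration (ours, not the authors' code), the $\mu = 0$ mean-reverting/OU marginal $x_t \sim \mathcal{N}\big(x_0 e^{-\theta t},\ \tfrac{\sigma^2}{2\theta}(1 - e^{-2\theta t})\big)$ can be checked by composing exact OU transition steps; $\theta$, $\sigma$, and the step sizes below are arbitrary choices:

```python
import numpy as np

def ou_exact_step(x, theta, sigma, dt, rng):
    """One exact-discretization step of dx = -theta*x dt + sigma dw (mu = 0)."""
    decay = np.exp(-theta * dt)
    std = np.sqrt(sigma**2 / (2 * theta) * (1 - decay**2))
    return x * decay + std * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
theta, sigma, dt, n_steps = 1.5, 1.0, 0.01, 200
x = np.full(100_000, 2.0)            # start all particles at x0 = 2
for _ in range(n_steps):
    x = ou_exact_step(x, theta, sigma, dt, rng)

t = n_steps * dt                      # total elapsed time
mean_true = 2.0 * np.exp(-theta * t)  # analytic marginal mean
var_true = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * t))
```

The empirical mean and variance of `x` should match `mean_true` and `var_true`, with the marginal already close to the stationary $\mathcal{N}(0, \sigma^2/2\theta)$ at $t = 2$.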
2. *Optimal Sampling and DDIM.*
Yes, our optimal sampling uses the same conditional posterior distribution $p(x_{t-1} \mid x_t, x_0)$ as DDIM for reverse sampling, and we will cite it in Section 3.1 of the revised manuscript.
3. *Performance of Optimal Sampling with $N=1$ and $N=2$.*
We provide new samples generated with fewer steps of the toy task in **General Response Figure 1**.
4. *Proof for Eq.(14).*
Eq.(14) is Bayes' rule with multiple conditions, given by
$$
p_\phi(a_i^1 \mid a_i^T, a_i^0, s_i) = \frac{ p_\phi(a_i^T \mid a_i^1, s_i) \; p_\phi(a_i^1 \mid a_i^0, s_i)}{p_\phi(a_i^T \mid a_i^0, s_i)}.
$$
Here we use $p_\phi(a_i^1 \mid a_i^T, s_i)$ instead of $p_\phi(a_i^1 \mid a_i^T, a_i^0, s_i)$ since both $a_i^0$ and $a_i^1$ are computed from the diffusion sampling process and the conditional posterior from $a_i^1$ and $a_i^0$ is a certain Gaussian (with known mean and variance, see Appendix A.4). We will clarify it in the revised manuscript to make the notation and proof clearer.
5. *Clarification on Eq.(15).*
Substituting Eq.(13) into Eq.(12) gives the new policy objective, whose last term should be $\log (p(\hat{a}_i^1 \mid s_i))$. However, we assume that the terminal state $a_i^T$ must be standard Gaussian and thus add the condition $a_i^T$, giving $\log (p(\hat{a}_i^1 \mid a_i^T, s_i))$, which can be obtained from Eq.(14); the results are not affected. (And thank you for pointing out the typo.)
6. *Entropy target should be the negative dimension of the action space.*
Yes, our code uses the negative dimension of the action space as the entropy target, the same as SAC. We will correct it in the paper.
7. *The $\alpha$ depends on $s_i$ in Eq.(45) while SAC doesn't. Why is the alpha loss designed this way?*
SAC is an online algorithm that can interact with the environment and collect new samples with various state actions. However, the offline dataset is pre-collected and may be *imbalanced* across different states. Thus our idea for alpha loss is to assign different entropy values to each state based on available data.
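To make the state-dependent temperature idea concrete, here is a hypothetical sketch (ours, not the authors' implementation, which uses a small MLP for $\alpha(s)$): we keep one log-temperature per discrete state and apply the SAC-style loss $J(\alpha) = \mathbb{E}[-\alpha\,(\log \pi + \bar{\mathcal{H}})]$, so states whose policy entropy exceeds the target see their $\alpha$ shrink while the others see it grow:

```python
import numpy as np

# Hypothetical per-state temperatures; the paper parameterizes alpha(s)
# with a neural network instead of a lookup table.
target_entropy = -2.0        # the negative action dimension, as in SAC
log_alpha = np.zeros(2)      # two states; alpha(s) = exp(log_alpha[s])
lr = 0.1

def alpha_update(state, log_pi):
    # dJ/dlog_alpha = -alpha * (log_pi + target_entropy); plain gradient descent
    alpha = np.exp(log_alpha[state])
    grad = -alpha * (log_pi + target_entropy)
    log_alpha[state] -= lr * grad

# State 0: entropy (-log_pi = 3) is above the target, so alpha shrinks.
# State 1: entropy (-log_pi = -3) is below the target, so alpha grows.
for _ in range(5):
    alpha_update(0, log_pi=-3.0)
    alpha_update(1, log_pi=3.0)
```

After a few updates the two states end up with different temperatures, which is exactly the behavior a single global $\alpha$ cannot express on an imbalanced offline dataset.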
8. *Inconsistency between Table 1 and Figure 3.*
We made a mistake in Figure 3, Antmaze-medium-diverse-v0, where we used the wrong data source for the curve, resulting in a major inconsistency. The updated figure is provided in **General Response Figure 2**. The reason for other minor inconsistencies is that we report the mean of the *best* results in tables while showing the average values of each step in figures. Moreover, the error bar is a default value in *seaborn.lineplot* which takes 95% confidence.
9. *Training curves for other environments?*
We show some training curves in **General Response Figure 3** and will also add other curves to the appendix in the revised manuscript.
10. *How is the score recorded?*
Considering that the behavior-cloning (BC) loss is not a suitable criterion when the policy explores OOD samples, we chose online evaluation to select the model. It is worth noting that Diffusion-QL has shown that the online-selected model performs similarly to the offline-selected model. This will be clarified in the revised manuscript.
11. *Subscript or superscript "$t$" in Eq.(34) and Eq.(35).*
All actions should use superscript sample steps (i.e., $t$, $t-1$, and $0$). We have now fixed them in the manuscript.
12. *Why Eq.(43) holds?*
Eq.(43) holds when action states $a_i^1$ and $a_i^0$ are sequentially sampled from a diffusion process. The term $\pi(a_i^0 \mid a_i^1, \hat{a}_i^0)$ is the conditional posterior sampling from Proposition 3.1. We will elaborate more details in the revised manuscript.
13. *Likelihood loss and Noise loss in Table 5.*
The "Noise" loss is similar to the simplified loss in DDPM as shown in Eq.(5). In contrast, the "Likelihood" loss is proposed in IR-SDE [2], which forces the model to learn optimal reverse paths from $x_t$ to $x_{t-1}$. Generally, the Likelihood loss is more stable while the Noise loss tends to generate more stochastic samples. This will be clarified in the revised manuscript.
14. *Typos*
Thank you for pointing out these typos and we will fix them in the revised manuscript.
*[1] Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral. Physical review E, 1996.*
*[2] Image restoration with mean-reverting stochastic differential equations. ICML, 2023.*
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses to my questions. I appreciate the effort put into addressing most of my theoretical concerns. It appears that the diffusion training and sampling schedule is derived from IR-SDE. I would suggest including more theoretical components in the revised manuscript to enhance its completeness.
However, a few questions regarding the experimental results remain unresolved:
1. The authors mention the use of online model selection to record performance, while the DQL results presented in the main table are based on offline selection. This discrepancy might lead to a mismatch in comparisons. It would be fairer to compare performance using offline model selection. I understand that the BC loss might not function optimally with this method, but do the authors have any results for offline selection performance in Antmaze, Adroit, or Kitchen based on other criteria?
2. Regarding the training curve provided in the rebuttal, specifically Figure 2 in the global response, I noticed that the DQL performance in antmaze-medium-diverse appears low. While it is expected to be around 78 (offline) or 82 (online), the training curve consistently remains below 40. This may not be a significant issue, but I would appreciate it if the authors could provide an explanation for this observation.
3. Lastly, a minor question: Is the parameter $\alpha(s)$ implemented as a neural network?
Since my theoretical questions have been resolved, I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and your continued engagement with our work. We appreciate the time and effort you have invested. Below, we address your remaining concerns:
1. *Online vs. Offline Model Selection*
We acknowledge that a comparison using offline model selection is essential for ensuring fairness. The table below presents a comparison of our method with Diffusion-QL, including both online and offline results. Additionally, we include our method's performance based on offline selection using the BC Loss criterion, selecting the step where the difference between consecutive steps was less than 4e-3. We will conduct further experiments to identify better offline model selection criteria and to evaluate performance across the Antmaze, Adroit, and Kitchen environments. These results will be incorporated into the revised manuscript.
| **AntMaze Tasks** | **Diffusion-QL (Offline)** | **Diffusion-QL (Online)** | **Ours (Offline)** | **Ours (Online)** |
| ------------------------- | -------------------------- | ------------------------- | ------------------ | ----------------- |
| antmaze-umaze-v0 | 93.4 | 96.0 | 99.0 | **100.0** |
| antmaze-umaze-diverse-v0 | 66.2 | **84.0** | 67.5 | 79.8 |
| antmaze-medium-play-v0 | 77.6 | 79.8 | 84.0 | **91.4** |
| antmaze-medium-diverse-v0 | 78.6 | 82.0 | 85.4 | **91.6** |
| antmaze-large-play-v0 | 46.6 | 49.0 | 72.6 | **81.2** |
| antmaze-large-diverse-v0 | 56.6 | 61.7 | 65.9 | **76.4** |
| **Average** | 69.6 | 75.4 | 79.2 | **86.7** |
2. *Training Curve in Antmaze-Medium-Diverse*
We obtained the results using the [official GitHub code](https://github.com/Zhendong-Wang/Diffusion-Policies-for-Offline-RL), where the normalized score reflects the online evaluation at each step. The error bars in the graph represent the standard error. We apologize for the minor error that remains in the updated graph; although we corrected the data source, we inadvertently used the 95\% confidence interval for our method's results. As noted in the paper, the training of Diffusion-QL is inherently unstable, with performance occasionally dropping to zero. In addition, the best performance for each run occurs at different training steps, and in 2 out of 5 runs the score remained at zero from the beginning to the end of training. These factors make the mean and standard error appear significantly worse than they would under more stable conditions. Using our online model selection approach, the average result we obtained for Diffusion-QL is 39.8 $\pm$ 43.3; the average over the 3 valid runs (excluding the two that remained at zero) is 66.3 $\pm$ 33.4.
3. *Implementation of $\alpha(s)$ as a Neural Network*
Yes, $\alpha(s)$ is implemented as a neural network with a single hidden layer consisting of 32 units. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their detailed reviews and constructive comments. We have conducted additional experiments to address the raised concerns and further validate our approach. All figures can be found in the attached PDF file. Below, we summarize the key results and discussions related to these experiments.
**Figure 1 (for Reviewer jdmN):** *The proposed optimal sampling with different sample steps.*
We added the figures of data generation with fewer steps ($T=1$ and $T=2$) for the toy task in Section 3.1. The results show that the optimal sampling strategy significantly outperforms the reverse-time SDE at every step count, further demonstrating the efficiency and effectiveness of our method.
**Figure 2 (for Reviewer jdmN):** *Corrected learning curves of the Diffusion-QL and our method on selected Antmaze tasks.*
We updated the learning curves of our method and Diffusion-QL on selected AntMaze tasks, showing that our method consistently outperforms Diffusion-QL across various environments (the subfigure for ant-medium-diverse-v0 env in the draft uses the wrong data but now it is fixed).
**Figure 3 (for Reviewer jdmN):** *Additional learning curves of our method on different environments over 5 random seeds.*
We validate the robustness of our method with additional learning curves. However, due to the page limitation, we only provide the learning curves of Gym-medium-replay environments, and we will add the rest directly to the appendix of the revised manuscript.
**Figure 4 (for Reviewer Jzch):** *Visualization of the workings of the mean-reverting SDE for action prediction.*
For a more intuitive explanation of our approach, Figure 4 outlines the forward and reverse processes of the mean-reverting SDE used for action prediction.
**Figure 5 and Table 1 (for Reviewer T2t7):** *Ablation experiments of our method with different LCB coefficients.*
To explore the impact of the LCB coefficient $\beta$, we ran our method with $\beta$ values of 1, 2, and 4 on the AntMaze-medium environments. Figure 5 demonstrates that adjusting the LCB coefficient improves performance, particularly for higher values, which helps manage the exploration-exploitation trade-off effectively. The numerical results are provided in Table 1 below.
### Table 1: Experiment on LCB coefficients $\beta$
| LCB Coefficient $\beta$ | 1 | 2 | 4 |
| ------------------------- | ---------- | ---------- | -------------- |
| Antmaze-medium-play-v0 | 82.4 ± 4.9 | 88.6 ± 1.5 | **91.6 ± 2.3** |
| Antmaze-medium-diverse-v0 | 74.6 ± 3.7 | 84.0 ± 7.8 | **91.4 ± 1.5** |
| **Average** | 78.5 | 86.3 | **91.5** |
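For intuition (a toy sketch, not the authors' code), the pessimistic value $\mathbb{E}[Q] - \beta \sqrt{\mathbb{V}[Q]}$ from the paper summary is a one-liner over the critic ensemble; the ensemble values below are made up, and raising $\beta$ penalizes most strongly the state-action pairs where the critics disagree:

```python
import numpy as np

def lcb_value(q_ensemble, beta):
    """Pessimistic lower-confidence bound over an ensemble of Q estimates:
    mean(Q) - beta * std(Q), taken across the ensemble axis."""
    q = np.asarray(q_ensemble)
    return q.mean(axis=0) - beta * q.std(axis=0)

# Toy ensemble of M = 4 critics evaluating 3 state-action pairs; the third
# column has the largest critic disagreement and is penalized the most.
q = np.array([[1.0, 0.5, 2.0],
              [1.2, 0.7, 1.0],
              [0.8, 0.6, 3.0],
              [1.0, 0.4, 0.0]])

v1 = lcb_value(q, beta=1.0)
v4 = lcb_value(q, beta=4.0)
```

A larger $\beta$ can only lower the value estimate, which matches the trend in Table 1 above where stronger pessimism helps on the AntMaze-medium tasks.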
**Table 2 (for Reviewer T2t7):** *Ablation experiments with max Q-backup trick on AntMaze tasks.*
We conducted experiments with and without max Q-backup on AntMaze tasks in Table 2. The inclusion of max Q-backup significantly enhances performance, particularly in more complex environments (e.g., Antmaze-large).
### Table 2: Experiment on "Max Q trick"
| Max Q-backup | True | False |
| ------------------------- | -------------- | ---------- |
| Antmaze-medium-play-v0 | **91.6 ± 2.3** | 89.2 ± 2.9 |
| Antmaze-medium-diverse-v0 | **91.4 ± 1.5** | 87.6 ± 1.8 |
| Antmaze-large-play-v0 | **81.2 ± 3.0** | 22.3 ± 7.1 |
| Antmaze-large-diverse-v0 | **76.4 ± 2.1** | 26.5 ± 6.1 |
**Table 3 (for Reviewer T2t7):** *Ablation experiments with different diffusion steps on selected AntMaze tasks.*
We evaluated the impact of varying the number of diffusion steps on a range of tasks, including AntMaze, Gym, and Kitchen in **Table 3**. Our findings indicate that while increasing the number of steps generally improves performance, five steps provide the best balance across different tasks and between performance and computational time. Thus we choose $T=5$ for all tasks in the paper.
### Table 3: Experiment on diffusion step $T$
| Diffusion Step $T$ | 3 | 5 | 10 |
| ---------------------------- | -------- | --------- | -------- |
| Halfcheetah-medium-replay-v2 | 43.4 | **57.0** | 49.5 |
| Hopper-medium-replay-v2 | 39.4 | **102.7** | 101.7 |
| Walker2d-medium-replay-v2 | 51.2 | 94.2 | **98.1** |
| Antmaze-medium-play-v0 | **96.6** | 91.6 | 90.2 |
| Antmaze-medium-diverse-v0 | **95.8** | 91.4 | 83.8 |
| Antmaze-large-play-v0 | 67.6 | **81.2** | 63.2 |
| Antmaze-large-diverse-v0 | **81.0** | 76.4 | 70.0 |
| pen-human-v1 | 65.4 | 67.2 | **70.0** |
| pen-cloned-v1 | 67.3 | 66.3 | **68.4** |
| Kitchen-complete-v0 | 7.5 | 82.3 | **92.7** |
| Kitchen-partial-v0 | 10.9 | 60.3 | 66.3 |
| Kitchen-mixed-v0 | 4.8 | 60.2 | 68.0 |
| **Average** | 52.6 | **77.6** | 76.8 |
**Table 4 (for all Reviewers):** *Computational time comparison.*
We included a detailed comparison of training and evaluation times for Gaussian/Diffusion policies and Q-ensembles in Table 4 below.
Increasing $M$ from 2 to 64 barely influences the evaluation time.
The diffusion step $T$ has more impact on both training and evaluation time but it's a common problem in diffusion models. We regard the sample acceleration as our future work and will try to address it with reliable SDE/ODE solvers or diffusion distillation techniques.
### Table 4: Computational time comparison with different settings on Antmaze-medium-play-v0
| Policy | Diffusion Step $T$ | # Critics $M$ | Training Time (1 Epoch) | Eval Time (1k steps) |
| --------- | -------------------- | --------------- | ----------------------- | -------------------- |
| Gaussian | 1 | 2 | 5m 35s | 1s 450ms |
| Gaussian | 1 | 64 | 7m 20s | 1s 450ms |
| Diffusion | 5 | 2 | 9m 30s | 4s 800ms |
| Diffusion | 5 | 64 | 11m | 4s 800ms |
| Diffusion | 10 | 2 | 12m 23s | 8s |
| Diffusion | 10 | 64 | 13m 55s | 8s |
Pdf: /pdf/73d01ec108c3fb622cd3b8afa447bb8f9b9541dd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |